Presented at The Developing Group 3 Dec 2011
The Research Project
We have been approached to take part in a research project which will make use of Symbolic Modelling and Clean Language as a research methodology. The project in part seeks to understand how coachees evaluate their experience of coaching. Specifically, the research aims to gather information through interviews about how coachees evaluate an experience of being coached with one particular type of coaching.
Since we had not used Symbolic Modelling and Clean Language (SyM/CL) as a methodology for facilitating interviewees to evaluate their experience, we recruited 21 members of the Developing Group and ran some trial interviews to investigate the similarities and differences with ‘standard’ clean research interviewing.
Others, notably Nancy Doyle and Caitlin Walker, have conducted clean interviews into the effectiveness of their work in organisations. Their evaluative interviews (discussed in Appendix A) are of a different kind to those to be undertaken in our project.
Below are some guidelines for anyone wanting to undertake a ‘clean evaluative interview’ (CEI).
Evaluative interviewing can be distinguished from other types of interview by the content it aims to identify. In an evaluative interview we want to know the value the interviewee assigns to a particular experience, i.e. how they assess, judge, gauge, rate, mark, measure or appraise what happened. While quantitative evaluations are useful, people often use more naturalistic, qualitative ways of assessing. We want to understand how people arrive at this kind of evaluation.
We propose that Symbolic Modelling and Clean Language offer the academic and commercial research community new ways to identify and explore this kind of highly subjective phenomenon.
A clean evaluative interview is one that cleanly facilitates an interviewee to evaluate an experience they have had – and optionally to describe the process by which they arrived at that evaluation.
In a clean interview, the interviewer aims to supply as little content as possible, to make minimal assumptions about the interviewee’s experience, and to allow them maximum opportunity to express themselves in their own words. In particular, clean interviewers are keen that their questions do not presuppose ways of understanding the world. While many interview methods claim to do this, it is clear from transcripts that interviewers often unwittingly introduce content and presuppose ways of answering. Since they don’t know they are doing it, many interviewers are blind to the effect they are having. See Paul Tosey’s paper for excellent examples.
There are two distinctions that set apart a clean interview from other clean methods where change is the purpose: (a) the interviewer/modeller decides the topic or frame for the interview, usually in advance; and (b) the information gathered is used for a purpose unconnected, or loosely connected, to the interviewee.
In a CEI it may not be possible to only use classic clean questions. At times a few bespoke or ‘contextually clean’ questions – ones based on the interviewee’s particular logic and the context within which the interview is conducted – will be needed.
While it is not the aim of the interview, it is expected that the interviewee will benefit by greater self-understanding of their internal behaviour. That in turn may give them more flexibility with their external behaviour. In the work-life balance research, for example, without it being suggested, several of the managers interviewed reported that they had made adjustments to bring their lives more into balance as a result of the interview.
The use of Symbolic Modelling and Clean Language as an interview / research / modelling methodology has been growing over the last ten years. The table below lists a number of ways a clean approach has been used in interviewing. While the method of interviewing cleanly needs to be adjusted to suit each context, the core process remains the same.
| Type of Interview | Purpose of Interview |
| --- | --- |
| Critical incident interviewing | To gain a full description from people who observed or were involved in an event such as an accident or crime, e.g. Caitlin Walker’s training of police officers to interview vulnerable witnesses [trainingattention.co.uk link no longer available] |
| Evaluative interviewing | To evaluate an experience or to investigate the effectiveness of an intervention. And sometimes to describe the process of evaluating. |
| Expert modelling | To identify how an expert does what they do so expertly, e.g. our ‘Modelling Robert Dilts Modelling’: cleanlanguage.co.uk/articles/articles/266/ |
| Medical interviewing | For patients to describe their symptoms and/or their well-being in their own words and metaphors, e.g. our training of specialist Multiple Sclerosis nurses. Also used for diagnosis or social health planning. |
| Journalistic interviewing | Information gathering for articles. |
| Market research interviewing | To seek the opinions and views of a person or group on a product or service, e.g. Wendy Sullivan and Margaret Meyers’ work: cleanchange.co.uk/cleanlanguage/2011/03/16/clean-case-studies-and-research/ |
| Modelling Shared Reality | To identify connecting themes, ‘red threads’, across a disparate group or groups. (The process was developed by Stefan Ouboter and refined by others.) |
| Phenomenological research interviewing | For individuals to describe their first-person perspective of an experience, e.g. Work-Life Balance research conducted by University of Surrey and Clean Change Company. See note and download: Clean_Language_WLB_final_report_October_2010.pdf |
| Recruitment interviewing | To interview candidates for a position or for executive search. |
| Specification interviewing | To produce a specification of a role, process or competency. Used in benchmarking, needs analysis, customer requirements, etc. |
When using Symbolic Modelling the interviewee’s metaphors and internal process form a major part of the interview. The rationale for this emphasis is based on the Cognitive Linguistics hypothesis that autogenic metaphors not only describe what people experience, they reveal how that experience is structured. Furthermore, it proposes that metaphor informs and mediates all of our significant concepts.
A symbolic modeller facilitates the interviewee to self-model and to describe the results of that self-exploration. The interviewee is not led by being given criteria against which to evaluate. In a traditional interview the coachee-interviewee might be asked: “How do you rate the coaching in terms of ‘rapport’ (‘insights’, ‘challenge’, etc.)?” or “on a scale of 1 to 10?”. In a CEI the interviewee defines their own evaluative criteria through the process of self-modelling. This requires the interviewer to have the skill to pick out the criteria identified by the interviewee during the ongoing flow of description, explanation and narrative – and to facilitate an elaboration of the interviewee’s internal process.
To evaluate anything requires comparing two things: an experience is set against a ‘standard’, ‘yardstick’, ‘ideal’, ‘criterion’ or some other measure. What gets compared with what can vary enormously from one person to another. For example, a coachee could evaluate a coaching session by comparing their feelings at the beginning and the end of a session; another by whether their expectations of the session were met; and yet another by whether they achieved their desired outcome after the session.
People will likely evaluate their experience as it happens. And they will often form an overall assessment afterwards. Daniel Kahneman has shown that our overall assessments are far from a simple summation of our moment-by-moment assessments. For example, if you randomly ask people to assess how happy they are at that moment, they often give significantly different answers from when they are asked for a general assessment of their happiness. Given that it would be impractical to keep interrupting a coaching session to question the coachee, we are left with identifying the coachee’s overall assessment sometime after the session.
Below are some examples of what coachees have used to measure their experience against:
| Example statement | Measured against |
| --- | --- |
| “The most powerful coaching I’ve had.” | Other coaching sessions |
| “I felt a big shift at the time.” | Their experience in the session |
| “I can now happily present to a room full of people.” | The outcome of the coaching |
| “I hoped I’d get unstuck and I have.” | Their expectations of the coaching |
| “I was disappointed, everyone said they were so brilliant.” | Their expectations of the coaching |
| “Overcoming the fear means I’m a different person.” | The consequences of the coaching |
Evaluating almost inevitably makes use of scales and scaling. Rarely is an evaluation of a complex experience such as coaching a digital or binary comparison. Most people have more subtle ways of evaluating than whether something is either good or bad, useful or a waste of time, etc. Instead, evaluation normally involves assessing ‘how much?’. How much of a good feeling? How much of an expectation was met? How much was achieved? To understand more about how people scale their experience we recommend our article, ‘Big Fish in a Small Pond: The importance of scale’, NLP News, Number 7, March 2004. cleanlanguage.com/big-fish-in-a-small-pond/
People evaluate without giving much thought to how they evaluate. They may not consciously distinguish between their evaluations and other aspects of their experience, and they may have little idea of the process they went through to arrive at an evaluation – until they are facilitated to consider that aspect of their experience.
To make sure the interview accomplishes its aim, the interviewer needs to vigilantly hold to the frame of the interview and keep a number of distinctions in the forefront of their mind. Below we describe the distinctions which we have found are key to gaining the maximum from a clean evaluative interview.
Types of information
People provide different types of information during the interview. They will describe what happened during the event being evaluated, the effects of that event, and their evaluation of what happened. We call these the three E’s and hypothesise that in most cases it is the effects that are evaluated, rather than the event itself.
An interviewee may need to describe the event and its effects before they can recognise or focus on their evaluation.
The interviewee’s language provides clues to the type of information being described:
Event:
- The coach listened to me.
- This happened, and that happened.
- I experienced x, y and z.

Effect:
- The questions got me thinking in a new way.
- I took a step forward in my development.
- I realised I needed more sessions.

Evaluation:
- I didn’t find it very useful.
- Afterwards I see I got a lot out of it.
- Wow, what a session!
Moreover, interviewees will often switch back and forth between event, effect and evaluation. The interviewer therefore has to be alert to the kind of information being presented to ensure that when the opportunity arises they direct the interviewee’s attention to evaluative information.
The Semantic Differential
In the 1950s Charles Osgood developed the semantic differential as a way to measure how people differentiate the meaning of a concept. Osgood and others used a large number of polar-opposite scales and found three recurring factors of affective meaning: evaluation, potency and activity. Further studies showed that these three dimensions were also influential in dozens of other cultures. Examples of these scales are:
| Factor | Example scales |
| --- | --- |
| Evaluation | nice-awful, good-bad, sweet-sour, helpful-unhelpful |
| Potency | big-little, powerful-powerless, strong-weak, deep-shallow |
| Activity | fast-slow, alive-dead, noisy-quiet, young-old |
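As a rough illustration of how semantic-differential data are commonly scored (a generic sketch, not Osgood’s exact procedure; the ratings and scale names below are invented for the example): each concept is rated on a set of 7-point bipolar scales, and the ratings are averaged within each factor to give one score per factor.

```python
# Hypothetical ratings of a concept (say, "coaching session") on 7-point
# bipolar scales (1 = left pole, 7 = right pole), grouped by Osgood's
# three factors. The numbers here are purely illustrative.
ratings = {
    "evaluation": {"awful-nice": 6, "bad-good": 7, "unhelpful-helpful": 6},
    "potency":    {"little-big": 5, "weak-strong": 6},
    "activity":   {"slow-fast": 4, "quiet-noisy": 3},
}

def factor_scores(ratings):
    """Average the ratings within each factor: one score per factor."""
    return {factor: sum(scales.values()) / len(scales)
            for factor, scales in ratings.items()}

# One averaged score per factor, e.g. potency = (5 + 6) / 2 = 5.5
scores = factor_scores(ratings)
print(scores)
```

Note the contrast with a clean evaluative interview: here the scales and the concept are chosen in advance by the researcher, whereas in a CEI the interviewee supplies both.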
Osgood was studying ‘affective meaning’ which is a more general concept than our area of interest. However, how much a person is ‘affected’ will involve some kind of evaluation. So what of Osgood’s work can we apply to our study?
Not surprisingly, Osgood’s ‘evaluation scale’ is a vital sub-set of the evaluation process we want to model. Because we are already using ‘evaluation’ to refer to the whole process, we will borrow a term from chemistry – valence – to refer to this scale. Since no experience is intrinsically positive/negative or helpful/unhelpful, etc. our aim is to discover how the interviewee evaluates it, what they call it, and whether the evaluation involves degrees or graduations of valence.
‘Potency’ involves scales of amount and is therefore directly relevant to our study. For our purposes it is not clear that Osgood’s ‘activity’ scales are fundamentally different from potency scales since they both enable a person to grade or measure the amount of a quality or attribute. We propose combining them into a single ‘grading’ scale.
Note that using a ‘semantic differential’ involves the pre-selection of the word/concept to be evaluated; whereas, in a clean evaluative interview only the context is defined (in this case a coaching session). The interviewee selects whatever content they consider significant. We call the bits of content they select the criteria.
To summarise, based on Osgood’s semantic differential and having studied how people express their evaluations in everyday language we propose that an evaluation involves four elements:
| Element | Description |
| --- | --- |
| Criterion | the quality used to make the assessment |
| Grading | a means of ranking or measuring the relative amount of the criterion |
| Valence | a judgement of the degree to which an experience is considered favourable/positive/good/valuable or unfavourable/negative/bad/valueless by the interviewee. (Note: sometimes the entire scale is regarded as positive or negative.) |
| Rating | the position on a grading or valence scale allocated to a particular experience of the criterion (X and Y on the diagram below) |
Evaluation can thus be depicted as:
The importance of ‘valence’ is illustrated by the coachee who said “I’d say my anxiety was 50% down.” We know the criterion (anxiety), the grading scale (percentage down), the rating (50), but without the valence we cannot know whether the coachee thought this was a good or a poor result (and it is vital to not make an assumption).
In fact the coachee said “and that’s a fantastic success.” This suggests a valence scale of amount-of-success and a rating of “fantastic”.
The following examples show how three interviewees’ language indicates the criterion, grading and valence scales, and rating thereof they used to make their evaluation:
- I valued the deep rapport.
- The coach wasn’t much help.
- I got the lasting effect I wanted.
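The four elements can be sketched as a simple data structure. This is purely an illustration of how the elements fit together; the field names follow the terms above and are not part of the methodology.

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    criterion: str      # the quality being assessed, e.g. "anxiety"
    grading_scale: str  # how relative amounts of the criterion are measured
    rating: str         # the position allocated on the grading scale
    valence: str        # judgement of how good/bad that position is

# The coachee example above: "my anxiety was 50% down ...
# and that's a fantastic success."
coachee = Evaluation(
    criterion="anxiety",
    grading_scale="percentage down",
    rating="50",
    valence="fantastic success",
)

# Without the valence we could not tell whether "50% down"
# was considered a good or a poor result.
print(coachee.valence)
```

The point the structure makes explicit is that the first three fields can be complete and yet, without the fourth, the evaluation remains ambiguous.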
Content versus Process
A key distinction that is embedded in the above is the difference between the result of the interviewee’s evaluation and the way they arrived at that evaluation. This is the difference between the what and the how, the product and the process, the rating and the scale.
A coachee can make their evaluation during the session, immediately after, or much later. Coachees often report that their evaluations change over time and the length of time between the coaching session and the interview may be a factor. Therefore it may be important to find out when the evaluation was made and if it has changed. To complicate matters, during an evaluative interview the interviewee will inevitably reflect on their evaluation and they may change their evaluation as they self-model.
It’s important to note that the above distinctions are to help the interviewer familiarise themselves with the common processes of evaluation. It’s vital in a CEI that the interviewee is modelled from their perspective. These distinctions cannot be introduced into a clean interview since an interviewee may not see the world this way. Our aim is to discover how each particular interviewee evaluates – regardless of how idiosyncratic that might be.
Overview of a Clean Evaluative Interview
The frame and what will happen to the output should be clearly stated at the outset of the interview.
Start with something very open, e.g. ‘How did it go?’ or ‘How was that [context]?’
Use ‘And is there anything else about …?’, ‘What kind of … ?’ and ‘How do you know?’ questions to invite the interviewee to describe their experience in more detail.
Direct the interviewee’s attention to their evaluative words. Listen for words like:
- good / bad
- useful / useless
- big / little impact
- poor / rich
- over (e.g. overwhelm)
- I got lots out of it.
- better / worse
- valuable / no value
To help you develop an ear for evaluative words Appendix B shows examples from 40 people’s evaluation of their previous therapy/counselling.
- Develop the metaphor for the scale (or scales) the interviewee is using. Facilitate them to identify the attributes of the scale, e.g. length, top, bottom, threshold, graduations, linear/nonlinear (without introducing any of these words).
- Pay attention to scaling indicated by the interviewee’s gestures. As soon as the interviewee starts to physicalise a scale with their gestures, use that description as a reference point. It makes it easier for the interviewee to explain and easier for the interviewer to understand.
- If it doesn’t happen spontaneously, ask location questions about where the scale is within the interviewee’s perception and where the rating is on the scale.
- Focus the interviewee’s attention on their evaluation of a specific event. Find out how they rate it, i.e. where they put their assessment of the event on their scale.
- If they use several scales, it is important to know the relative weighting. Spend more time on the one that is most important for the interviewee.
- As new information emerges find out how it relates to the scale(s) already developed. The aim is to make the evaluation and the process of evaluating central to the whole interview.
- Once enough of the interviewee’s model has been elicited, recap it and check for congruence, e.g. for verbal and nonverbal “Yeses”.
- If time permits, check your modelling with another sample event.
- The following analogy may help interviewers understand their role. The interviewee is like a film/theatre reviewer. The interviewer is like a journalist who has to write an account of the reviewer’s assessment of a film, including the process by which the reviewer came to that evaluation. While the criteria a coaching client uses to evaluate their experience will be different to the criteria used by a film critic, both go through an internal evaluation process to arrive at their conclusion.
- It is useful to know your own criteria for evaluating before you start doing this kind of research. It can reduce assumption, confirmation bias and the halo effect (unconsciously thinking that aspects of a person’s behaviour or way of thinking that are like yours have extra value).
- Consider using two interviewers. Both can track what has and hasn’t been covered and both can ensure the interview remains focussed and within the specified frame. Since both interviewers are modelling from the interviewee’s perspective it should seem like a single seamless interview. Hence once a Symbolic Modelling interview has started the interviewee is rarely bothered by an extra interviewer in the room.
- Given that the interview will be recorded, think in advance about how to take notes. The main purpose of note taking will be to remind you of the exact words the interviewee used to describe their evaluation, and as a check that you have covered the range of potential material available.
- Keep your nonverbals clean and use lots of location questions.
- Do not assume, ask! One person’s “4 star” rating may not be the same as another’s: “And 4 stars out of how many?”.
- Be careful of asking questions like, “What would you have liked instead?”. While this kind of question can help fill in gaps it also risks turning the interview into a change-work session; and that is definitely inappropriate in a CEI. Also, people’s evaluations are not always based on what they like, e.g. “I like things to be easy but when it wasn’t that’s when I got my breakthrough.”
- Bringing your experiences into the session would not be clean. If you want to offer the interviewee a comparison so they make distinctions, make sure you use something you know they have experienced e.g. “How was this session compared to (other coaching sessions you’ve had)?”
- Occasionally some people will evaluate on a digital or binary scale. For instance, evaluating a session as being ‘boring’ or ‘engaging’ could be like an on/off switch with no options in between. If so, what happens in the sequence of switching on and off is of great interest. Having said that, it is common for an apparent binary evaluation after a little consideration to reveal itself as an analogue scale.
- Although ‘And what’s that like?’ is often mistakenly used by novice symbolic modellers, in a CEI it may be useful. (David Grove’s original clean question, ‘And that’s […] like what?’, is used to invite a person to translate their conceptual or sensory description into a metaphor.) ‘And what was that like?’ is a clean question in that it does not introduce content-leading information. It can be useful because it frequently invokes an evaluation, judgement or comparison, e.g.
Interviewee: The coach sat there quietly, blank-faced, he didn’t say much at all.
Interviewer: And what was that like?
Interviewee: It was really bad, well it’s horrible isn’t it, you know, like, I would say it was really hellish.
- An important evaluative word to be aware of is ‘important’ and other such words:
significant, major, critical, decisive, crucial, essential, valuable, vital, fundamental
- If something is ‘important’ it means it is ‘of consequence’ or ‘of value’ and is, or is close to being, a universal way people evaluate. As such it can be regarded as a pretty clean concept to use. The word is not overtly metaphorical either, so while it is not classically clean (David Grove did not use it) it could be regarded as contextually clean in an evaluative interview. Asking ‘What is important about that?’ is much cleaner than asking ‘What was useful/enjoyable/worked well?’, since ‘important’ doesn’t presuppose that what happened was viewed as positive or negative for the interviewee (i.e. it does not assume a valence).
- Don’t rush to ask questions like “How do you evaluate?” (even if the interviewee has used the word). It is a highly complex question that many people will not be able to answer except with generalities. Instead listen and watch for clues to the interviewee’s process. Use these to facilitate them to reveal bit by bit what and how they evaluate. Allow the distinctions of the interviewee’s evaluation process to emerge naturally in their own time.
Sample contextually clean questions
The following sample questions are included to give a flavour of the variety of questions you may find yourself needing to craft in order to ask as cleanly as possible about what the interviewee has said. Some of the questions may seem odd when read out of context; however during a SyM/CL interview they will likely make complete sense to the particular interviewee.
How do you know [their evaluation]?
What lets you know [their criterion] is happening?
How [their criterion] was it?
To what extent/degree did you [their criterion]?
How much [their evaluation] was it?
And the effect of that was?
What was important about that?
Is there anything else about [the session] that was important to you?
Is there anything else about [their evaluation] in relation to […]?
What determines where [their rating] is on [their scale]?
When [criterion or event identified as important] what happens to [evaluation]?
How many [name for graduations on their scale] are there?
Is there anything between X and Y on [their scale]?
When [one place on their scale], where is [higher/lower rating]? (e.g. And when ‘deep’ is there, where is ‘deeper’ than you were used to going?)
When this is [one end of the their scale] and this is [other end of their scale] where is [their evaluation of the session]?
Does that [gesture to nonverbal that marks out the scale] have a name when that’s [gesture to one end of the scale], and that’s [gesture to the other end]?
Does that [experience] relate to [gesture to an already explored scale] or not?
How was that session in relation to [their means of comparison]?
When it wasn’t [their evaluation] what kind of session was it?
When did you know it was [their evaluation]?
When did you [decide] it was [gesture to relevant place on their scale]?
Whenabouts did you notice that it was [their evaluation]?
1 The Developing Group provided a test-bed for establishing a methodology for clean evaluative interviewing. We set up interviews that resembled as closely as possible what will take place in the research project. We gathered feedback from the interviewee, the interviewer and an observer on how well the process worked – or not. We sought suggestions for improvements and things for interviewers to take into account. These notes have incorporated that learning. They also draw on the excellent handout Wendy Sullivan produced after the day.
2 In a video entitled ‘Individual Identity’, Baroness Professor Susan Greenfield describes how evaluative processes develop in children: youtube.com/watch?v=rcSSPTfEcJM
“You are most adaptable when you are very young, when you are unconditionally open to everything because you don’t have anything with which to evaluate the world. So you start off as a one-way street where you just evaluate it in terms of the senses and you unconditionally accept whatever comes in. …But the more experiences you have the more you will interpret what comes in in terms of what you’ve experienced already. So you shift from a sensory input of how sweet, how fast, how cold or how bright, to evaluating ‘What does this mean to me?’. And so we shift from noticing something not because they are noisy or bright or fast, but simply because they are your mother. And so you switch from senses to cognition. …Then we start to live in a world in which we are evaluating in much more subtle ways, much more personal ways, much less obvious ways, more cognitive ways, which gives you your unique take on the world. And that is constantly updating and changing.”
3 Tosey, Paul (2011) ‘Symbolic Modelling as an innovative phenomenological method in HRD research: the work-life balance project’, paper presented at the 12th International HRD Conference, University of Gloucestershire, 25–27 May 2011.
4 For more on the distinctions between interviewing and change-work see our article, What is Therapeutic Modelling?, ReSource Magazine Issue 8, April 2006.
5 Some of these are described in our article, Using Symbolic Modelling as a Research Interview Tool.
6 Evans, V., Green M., 2006. Cognitive Linguistics: An Introduction. Edinburgh University Press.
Kövecses, Z. 2002. Metaphor: A Practical Introduction. Oxford University Press.
Lakoff, G., Johnson, M., 1980/2003. Metaphors we live by. University of Chicago Press.
Lakoff, G., Johnson, M., 1999. Philosophy in the flesh: the embodied mind and its challenge to Western thought. Basic Books.
7 Kahneman, Daniel and Jason Riis, ‘Living, and Thinking about It: Two Perspectives on Life’ in Huppert, F.A., Baylis, N. & Keverne, B., The Science of Well-Being, 2007, pp. 284-304.
8 These examples are taken from Chapter 14, ‘The Semantic Differential and Attitude Research’ by David R. Heise, in Attitude Measurement, ed. Gene F. Summers, Chicago: Rand McNally, 1970, pp. 235-253.
9 When it comes to analysing a transcript of an interview our Perceiver-Perceived-Relationship-Context (PPRC) model may be useful for classifying the source of the evaluation. For example:
| Source of evaluation | Example |
| --- | --- |
| Perceiver | “I felt good” |
| Perceived | “The coach was quite cold” |
| Relationship | “Safety was an issue” |
| Context | “The room was very relaxing” |
10 There is a subtle link to a recent Developing Group topic – calibrating. Calibrating, as we use the term, involves the facilitator assessing whether what is happening for a client is working, or not. It is a moment-by-moment judgement: Is it likely that the client is getting something valuable, or potentially valuable, from what’s happening? While interviewees may benefit from a research interview, this is not the aim. As long as the interviewee is ok with what is happening (and if there is any doubt it is best to check), interviewers have to use a different calibration to facilitators of change. They have to assess moment by moment: ‘Is the interview on topic?’ and ‘Am I getting the kind of information required?’.
11 People often imbue the unexpected with import. Therefore listen out for clues such as: surprising, sudden, it just hit me, out of the blue, I wasn’t expecting that. These may indicate areas to explore further.