Archive for the ‘My PhD and related things’ Category


“It’s in our DNA…”

This is an expression that is much in circulation these days. So much so, in fact, that Private Eye magazine now has a regular feature called ‘DNA Testing’ with plenty of examples culled from journalism. I figured it would surely follow that the people who manage organisations and (far more dangerously) the people who write theory about how organisations should be run would be tempted to follow suit and use the same idea as a logical form of explanation.

And sure enough, in Blackwell’s, a fine bookstore in Oxford, I found evidence* of just such a trend in Rhea Duttagupta’s 2012 book “Leadership: It’s in your DNA” (Bloomsbury Press, available – evidently – in many fine bookstores). By way of mini review, the book appears harmless enough at first glance, and is written in a reflexive, folksy style. I’m sure it is well-intentioned in its central assertion that Leadership can be defined in 10 key ingredients. Be warned, though: the list consists of concepts drawn from a rag-bag of the usual suspects in central casting, such as “Self”, “emotion”, “fear”, “dark side” and “intuition”… you get the idea. There is nothing new here.

One first sees that this list is built around an assertion that these elements are innate traits. This is the long-standing pop psychology mantra of “you have all the ingredients for success as a leader inside you”, which is a well-worn path to an individualist and reductionist notion of the person. Second, there is an equally well-worn path to a behaviourist tradition in the realisation of the self in management practice. It is within the paradigm of these grand antecedents that the logic of the metaphor ‘these 10 ingredients = the DNA for Leadership’ is selected. This feels like a worryingly literal, not to say absurd, suggestion. It’s a shame, really, because an abductive form of inference could have been a really good way to try to understand this phenomenon we call leadership. The problem is that there are no ‘things’, no nouns, no ‘instinct’, no ‘self’ etc. in our DNA, despite many of us finding this a useful way of picturing what DNA really does. DNA must operate, if it can be said to operate in an isolated way at all, in a system of relationships. It functions relationally, in dynamic and complex arrangements of contexts, boundaries and thresholds, and not in terms of coded properties embedded as traits. It is tempting, but incorrect, to say that DNA contains ‘information’, because information is always a matter of relationship and ratio. A trait-view of genetics, however, fits nicely with a trait-view of human beings. And this, despite the humanism evident in the choice of the 10 ingredients, is what I think Rhea’s book is claiming.

Doubtless anyone using this phrase will be aware that they are employing it as metaphor, but I suspect that, paradoxically, it is a message of the book that the metaphor be understood literally. It would follow that all the incredible technical advances in neuroscience and in our understanding of the biological functioning of the brain are also the explanation of how we think and act. The basis for this claim is flimsy, but not because the examples Rhea uses in the book aren’t any good, or aren’t interesting, or because she lacks conviction. All three of those things are there. The real problem is that this is just, to borrow a phrase from Bateson, ‘shoddy epistemology’. In other words, when the way we think we know things is not in line with the way we actually know them, the results will be catastrophic, because our ability to use technology and abuse our intelligence in pursuit of short-term domination of our situation is always unsustainable.


I found my thinking got a bit knotted in writing this, and I’m not sure the main point comes across. So, I’ll re-state what I think it is I’m trying to say:

1. It is a trap to take metaphor literally.

2. Metaphor is the key to understanding how the world actually is (it is just a shame to say it).

3. To confuse the properties of the referents of a metaphor with the metaphor itself is to make a category error in thinking.

*A quick review of Amazon books later showed me that the use of this DNA metaphor is spreading… see also Judith Glaser’s “The DNA of Leadership: Leverage Your Instincts To: Communicate-Differentiate-Innovate” (Platinum Press, 2007), or Thomas Harrison’s “Instinct: Tapping Your Entrepreneurial DNA to Achieve Your Business Goals” (Business Plus 2005), or Silverman and Honold’s “Organizational DNA: Diagnosing Your Organization for Increased Effectiveness” (Davies-Black Publishing, 2003)…



I’ve been in Nottingham attending my first academic conference since completing my PhD. The conference in question is the annual three-day get-together of Critical Realists, which was a slightly surreal experience (a note on this at the foot of this post).

But the conference itself was preceded by a two-day workshop, a ‘discussion’ on Critical Realism. I put the word discussion in quote marks because there was actually very little discussion and very much note-taking; in short, it was mostly an exercise in listening to Roy Bhaskar, the philosopher and founder of Critical Realism (in its current incarnation as a philosophical perspective on research and science), while trying to write down what he was saying. There is something about his presence and delivery that seems to make you do this. Even audience members who had seen and heard him many times reached for their pens whenever he said something like “I’ll just outline for you the four types of laminated system in the dialectic…”

Bhaskar is quite an intellect. Speaking without slides and without many notes, he took the audience step by step through the three main stages of development of his philosophy. You may never need to know what these are, but for the record it goes:

1. Original Critical Realism (OCR)

OCR results from an immanent critique of prevalent positivist/empiricist and constructivist/postmodern views of natural and social science. The shortcoming CR addresses in both is their conflation of epistemology (what we can know) with ontology (what there is), known as the epistemic fallacy. Basic CR starts by vindicating ontology. In other words, not only is it acceptable to hold that there must be a world ‘out there’, aspects of which are not necessarily accessible to us in our experience of events; such realism is inescapable. Without it, all epistemology would be impossible. To take this one step further, Original Critical Realism presents a stratified (actually, nested) ontology of three levels:

The Empirical, the level of sense data and information, arguably also the level of meaning, which is emergent from….

The Actual, the level of events, which may or may not be experienced by us in the Empirical, emergent from…

The Real, the level of ‘generative mechanisms’ or ‘forces’, ‘fundamental laws’, or tendencies etc. that might, or might not, produce events in the Actual.

You can see that the idea of emergence is quite central to CR and to its aim, which – because the world is an open and not a closed system – is explanatory rather than predictive. OCR chooses to make the inferential jump from the Empirical to the Real. In fact, it insists on this move, because the smaller, deductive jump from Empirical to Actual will always fall short of providing fundamental explanatory principles, while the inductive jump from Actual to Empirical is scientifically a very poor way of explaining or predicting.

A couple of other ideas are important here, namely:

a) in CR there are two dimensions of knowledge – transitive and intransitive. The transitive dimension is knowledge that is socially produced, and comprises all the things that would not exist if we were not here to know them. The intransitive dimension comprises any entities that exist independently of our knowledge of them. Both types can be causal, but only the intransitive tells us anything about the nature of the Real. You will recall that this is the realm of the fundamental laws of nature, which is what science is trying to reveal (keep up!). Social science is full of heuristics that are transitive (e.g. ‘Ego’, ‘profit’ or ‘leadership’), but these are simply constructions and not explanations (transitive ideas dressed up as intransitive facts).

b) CR is ontologically realist but epistemically relativist. What this means is that we can accept that there can be a variety of views about the nature of knowledge, and it also means that knowledge is always fallible.

The above is where a lot of researchers get to and then for various reasons stop. But OCR serves to pave the way to another step in this philosophy that builds on those basic precepts by introducing the dialectic.

2. Dialectical Critical Realism (DCR)

Having brought a realist ontology (inference about being) back into service to clear away the erroneous or rubbish ideas littering science, the second phase of Critical Realism was a connection to epistemology (knowing) that challenges the idea, first put forward by the Scottish philosopher David Hume, that science is unable to make the jump from “is” to “ought”.

DCR makes this move by first seeing ontic ‘Being’, established in OCR, as generating a process of epistemic ‘Becoming’. The whole process is dialectic, i.e. it exists only by virtue of inherent contradiction, or difference. Examples of basic dialectic relationships include up/down, beginning/end, on/off, profit/loss etc. (any concept has meaning by virtue of that which it is not…).

The most basic such dialectic is absence. Believe me when I say: this is huge.

DCR continues the philosophic trail by arguing from dialectic parts to the necessity of an emergent whole, or totality, or synthesis (if you prefer), and finally to the idea that human beings can act on the world so as to transform it. Here a whole section of the CR world branches off into the complex relationship between ‘agency’, ‘structure’ and ‘culture’ in social science. Indeed I suspect that is what gets a lot of Critical Realists up in the morning to debate endlessly with each other.

A lot of researchers have followed CR and Bhaskar quite happily to this point, especially if they come to this looking for a framework sympathetic to certain social science traditions and philosophies that also feature the idea of the dialectic.

But Bhaskar wishes, I think, to do two other (related) things with the Philosophy of Science. The first is fostering the whole idea of science as interdisciplinary and integrative. The second is demonstrating that life is intrinsically meaningful. These are themes of the third part, which has been labelled by some as the spiritual turn, though Bhaskar himself sees this as a secular project.

3. The Philosophy of MetaReality

There is as yet, unfortunately, no accompanying text written to offer a layperson’s guide to Bhaskar’s MetaReality, so one is required to refer to the source texts, which are not easy reads, and which have not been picked up or developed by many researchers. But, very crudely, he continues the sequence of developing his ontology by seeing ‘Being’ as:

a) inward, reflexive and natural (or spiritual, if you prefer)
b) re-enchanted (in contrast to the disenchanted view that says Being has no intrinsic value)
c) a matter of awakening

Sounds very hocus-pocus? Don’t worry, a lot of other people think so, too. Bhaskar readily admits that this isn’t easy – we live day to day with our world of dualities, and these undeniably do have a real effect on our lives. MetaReality exposes these, however, as only “demi-realities”.

Bhaskar’s end point, I would say, is to try and get us to understand something which almost every other ancient (and several modern) philosophy also wants us to see, namely that once we have dealt with and transcended all the dualistic illusions of the self (embodied by us in ideas such as the ego) for what they are, what remains is a non-separation; a nonduality. There are all sorts of ways that this “ground state” presents itself, albeit merely in the briefest of glimpses.

Better that I stop there for if I haven’t already lost you, I am liable to lose myself, since my own reading in this area is still very limited, and in fact Bhaskar is still working on it.

Anyway, the conference is continuing another day, but without me. My own Demi-realities have re-asserted themselves and tomorrow I must return to the office. As far as I could tell, there were three types of person at the conference. The first group, fairly large but the most silent, were people like me – new to academia or new to Critical Realism, probably following only a portion of what was being said by the second group, also quite large, consisting of established academics. But this more experienced and assured group was dominated by people who seemed to me to be going round and round (and then round again) in intellectual circles, making smaller and smaller amounts of common sense. And, to make matters worse, they were doing so to project or protect their egos (ironic, considering CR’s take on that matter). Finally, and thankfully, there was a small group of inspirational and very clever people who had the courage or the vision to move the whole conversation forward. I enjoyed talking and getting to know the first group, found myself with little in common with the second, and learnt some valuable lessons from the last (a section that includes Bhaskar himself).



With just a few days to go, I’ve been making some notes on the argument. Here is one of them, about the process of getting from data to theory…

Reflection as an entity and our perception of it are both “occasions of experience”*. All occasions of experience have a temporal and historical duration and so are portions, not wholes. The entity of reflection is, therefore, a part; a fragment in a much bigger picture. Because that big picture is a unified whole, it cannot be reported in an analysis of a part.

In many examples of social science research it is only occasions of experience that are considered suitable as units of analysis, but this invites conclusions from fragmented description and it creates – as a minimum – a division between the observed and the observer. This might be unavoidable, or avoidable only with considerable artistry, but the researcher’s decisions about where the boundaries lie and where the description of an entity starts and stops are always arbitrary. Reading too much into our analyses is highly risky, since that sort of understanding is inevitably limited by, and to, our capacity to observe. The occasion of experience, in all its subtlety and complexity, is never fully capturable in an epistemic model built to analyse the parts. In themselves, these occasions of experience aren’t ‘things’ but patterns of inseparable relationships.

It is essentially my thesis, however, that something of the nature of these patterns of relationships, and of the wider story they belong to, can be inferred.

(*after C H Waddington)


Now that there is nothing material that I can do to the thesis – it just has to do its own work for a bit – I have been noticing how my thoughts explore areas of more practical application of some of the concepts, ideas and conclusions it contains.

One of these is the question as to why reflection in Personal Development should be more, not less, important as we get older.

If it is true, as Jung believed, that the purpose of the second half of life is to make sense of the first, then that could be one explanation. And then it occurred to me that the difference between decisions made lower in an organisation (typically) and those made much higher up is connected to reversibility. If organisations follow the same organismic logic as complex living systems, biologies, ecologies and so forth (that is, systems with complex circuits of flows of information), then this would make sense. At the bottom, it may be more than inadvisable to make decisions that result in irreversible changes; it may be impossible (i.e. the structure of the organisation will forbid it). At the top, reversible changes may be possible, but would be redundant and energy-inefficient (or would be indistinguishable from lower-level, adaptive changes).

In Batesonian terms this lower-level decision-making is analogous to somatic change, which is like, for example, the body’s ability to regulate and adapt skin tone in reaction to sunlight, or its breathing in acclimatisation to altitude. These are changes, but not changes of a parameter at a higher level. At higher, or senior, levels, permanent change is much more difficult and much riskier because it could represent a loss of flexibility at a lower level. Like a lot of these meandering thoughts, I am sure it needs development, but it does feel like a significant idea.


Handing the PhD in


That’s the instruction I usually have to give to others at the end of their Henley MBA exam, but today it’s something I have to tell myself (at least for a while) as I have just handed in my PhD Thesis to the Registrar at the University of Lancaster. Done. Dusted.

And what an odd feeling it is.

I am proud of the achievement, and thankful that I had time to make the thousands of small edits and still meet my own personal deadline of the end of February. Now I have to focus on being ready to defend my thesis to a panel of examiners in a viva examination in a few months. The fact of the viva is both petrifying and galvanising – something to occupy the mind, certainly. However, not feeling the need to sit in front of a screen for hours and hours a day with notes, papers and books, trying to draft and craft a text, is, well, weird.

I might even read a book for the fun of it (I brought two with me up to Lancaster – a Penguin paperback of science fiction short stories, and R G Collingwood’s autobiography. The latter title is cheating a bit, of course.)

Oh, but, you know, this feels good!!!


The title of this post is explained at the end, so read on to find out – or skip to the bottom.

After a sprint through several Personal Development workshops in January, both at Henley and in several European countries, it’s perhaps time for a breather to see what needs to be noticed. A while ago I might have just said “time to reflect”, as a lot of us do, as if the act of reflection were somehow predicated on a deliberate switch from one mode or model of thinking to another. I’m now no longer sure this is a helpful way to look at it, and even less sure that it’s truly accurate. I’m using some of this post as a space for ideas to work themselves around each other, and so want to ask whether there is anything to do with reflection about which one can be sure. The short list below covers some of what has been occurring to me lately:

1. It struck me the other day that the senses are not five in number but actually one, in sum. Our demarcation of one sense from another in perception is artificial. This makes perfect sense to me, though I think the idea would need expansion to convince anyone else. It means that reflection, like all perception, is actually a systemic process, not a systematic one. Unless we understand how systems work, we will never understand the function that reflection has in our learning. I think that the ideas of many of the seminal originators of reflection, in their own ways, acknowledge this. But those complex ideas tend to become worn smooth over time by constant reproduction, reinterpretation and simplification by others.

2. What we call reflection is just our punctuation of what is actually a constant flow of experience. We can’t easily prevent ourselves doing this since we hold very dearly to the idea that conscious purpose is, to borrow Sellar and Yeatman’s memorable phrase, “a good thing”. The need to know “to what end?” drives many different varieties of and purposes for reflection, but in every case the process we use is much the same. While helpful in the short-term and therefore essential in formal learning among adults, ultimately our attachment to and affection for conscious purpose in reflection may be counter-productive and in error (right now, this is just a hunch!).

3. Two common denominators seem to anchor everyone’s experience of reflection. The first is that it involves some form of noticing a difference, and the second is that the difference noticed will relate in some way to “unfinished business”. I hope I will be able to expand on this (even explain it…) in future blog postings.

So, that’s my current bedside thinking and my rehearsal of big ideas. The workshops this month have been really fantastic to run. They have, I think, really hit the spot with their place in the curriculum, and are in tune with the collective experience of the intakes at that point. I think this makes all the difference. There are just some things that would be pointless to say at the start (unless one was planning to dump an “I told you so” on people later) but which are liberating to play with later on. For example, I’m glad we don’t start the MBA with lots of goal setting, but with a challenge to how people behave, think and see themselves. If you don’t get that bit right, then the planning would probably resemble the shape of the past, not the future. Also, talking about what “career” means doesn’t make much sense too early in the MBA. Generally, people who are in mid-career don’t need to make any decisions about career steps and goals until they have a certain vocabulary, fluency and confidence which is attained through hard work by about the mid-point. That is actually when career things tend to happen anyway. So I’m glad that the thoughtful approach seems to be paying off. Still, there are always ways in which this could be better, and I’m aware that there is more that is needed in order for the MBA experience to be something remarkable.

This month I was also able to start playing with the application of ideas and thinking from the PhD for the first time, with a group that was less restricted than an MBA cohort – a good challenge, because that particular audience was not a captive one (the venue, Gam3 in Copenhagen, was unusual too, and it’s worth checking out their web site to see why).

It went pretty much as I had hoped, though I talked more than I let them talk. I was left also wondering whether I could do such a thing without having PowerPoint blazing away in the background. I do try to use it as a graphic guide or creative prompt, and not as just a horribly magnified set of speaker notes, but even so. The best speakers on TED seem to be the ones who just, well, speak, and who hold the audience with the power of imagination and the eloquence of their choice of words. Have I become so entrenched in believing that “it is done this way” (and the PD workshops are no different – the tyranny of the slide pack is also part of the expectation of the group) that I may be missing something here…?

So, the title of this blog is my understanding of the Laws of Jante (10 rules set out originally in the 1930s in a novel by Aksel Sandemose), which amount to a cultural explanation of the collective attitude in Danish society toward the delicate relationship between individual success and group identity. “You’re not to think you are anything special” is the first of these rules, and they are deliberately written with a rather negative overtone. I don’t think this is the same as the English sentiment of not “acting above your station”, because that’s an affirmation of a society with rigid class divides and appropriate behaviours at each level. The Danes are very protective, it seems, of everyone’s right to object to the idea of anyone else telling them what to think or behaving as if they were better than anyone else. I’m not sure if this means they like to “cut people down to size” who are “too big for their boots” (see how metaphor gets us from one idea to another without passing Go…).

Anyway, I quite liked the atmosphere in Copenhagen, so they must be doing something right.


Here’s a challenge. Isn’t “sustainable growth”, one of those ideas bandied about by economic specialists and mission statement writers alike, an oxymoron? In a finite world where ever-increasing growth consumes ever-decreasing non-renewable resources, how can growth be a sustainable concept? It makes no sense on a global scale, and little sense (other than in the very short and greedy term) at the level of the firm. And yet the government would have Higher Education adopt this sort of thinking as its own.

I do wonder what definition of sustainable, for example, the UK government has in mind when it comes out with policies that are meant to influence what goes on in Higher Education. One example comes via the Department for Business, Innovation and Skills (see here for an example connected with Higher Education, but there are others). I’m guessing that they mean sustainable simply as a qualifier for “growth”. This must seem logical to the legislators, but it also means that their policy is – logically – doomed. Or am I missing something?

Perhaps the problem lies with our way of thinking.

When people are figuring things out in a learning space, it has become generally accepted over the last 100 years that there are three methods that may be used, whether the learners know it or not, to reach explanation. The three methods are induction, deduction and abduction (retroduction).

Learning without knowledge of the form of inference being presupposed (and one must assume that this is very commonly the case, even among humans) is probably only possible with either induction or abduction. But even if the learners are made aware of the logic of their thinking mechanisms, is there any guarantee that it would change the outcome?

It’s an interesting side question whether we can learn something without being aware that we are learning it, but without much doubt I’d say that when we are aware that we are learning, it is in the deductive form of thinking that we spend most of our time. This shouldn’t be a surprise. Deductive thinking has been at the heart of the scientific method for two hundred years, and has served as a useful short-cut in the natural sciences to a system of ‘eternal verities’ or laws of physics, mathematics etc., and it is therefore deduction that we now consider to be the higher form of sense-making. A deduction is a logical prediction or inference, based on the necessary truth of a general covering rule, about a specific case in point. It relies for its utility, usually, on there being enough general agreement about the covering rule for us to take its premises for granted (otherwise it would be a rather tedious process of inductive trial and error to establish the general rule each time – which, in any case, we could never do, since induction proves nothing about future cases).
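The distinction between the three forms can be set out schematically using Peirce’s classic ‘bean’ illustration – a standard textbook example of rule, case and result, not drawn from the post itself:

```latex
% Peirce's bean example: the three forms of inference as rule / case / result.
% Deduction applies a rule to a case; induction generalises a rule from
% cases and results; abduction guesses the case from a rule and a result.
\begin{tabular}{lll}
Deduction: & Rule + Case $\vdash$ Result & (all beans in this bag are white; these beans \\
           &                             & are from this bag; so these beans are white) \\
Induction: & Case + Result $\vdash$ Rule & (these beans are from this bag and are white; \\
           &                             & so all beans in this bag are white) \\
Abduction: & Rule + Result $\vdash$ Case & (all beans in this bag are white; these beans \\
           &                             & are white; so perhaps they are from this bag) \\
\end{tabular}
```

Only deduction is truth-preserving; induction and abduction can both fail even when their premises hold, which is the sense in which deduction on a bad covering rule ‘computes’ without being any use.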

Deduction rules the roost, and has been adopted just as rigorously in Social Science (unless you subscribe to an extreme form of inductive method, such as Grounded Theory). But… deduction begins to come apart as a useful way of explaining things if either the grounds for the covering rule or the case in question have not been established in accordance with reality. The logic of deduction will operate and compute in either case – we’re just no better off than we were before. In fact, we may be much worse off, since it may hurt…

The other day I was in a medium-size Tesco, located near a ring-road of a medium-size English city. I know that the company has invested a lot in its image as an environmentally concerned business, eager to cut the impact of its (ever-increasing) business activities. Their web site has several clearly worded statements about this sort of thing, and I must leave aside for a moment whether the drive for perpetual growth must eventually destroy the environment since, for all I know, they may well be genuine in this desire to be able to compete in the “green business” space. That green space has a whole set of rules of its own, and none of the players in that space are either completely independent or completely aware of what those rules are.

However, what struck me walking around the store was the emphasis Tesco had placed, in just about all their choices, on offers for consumer products that either encouraged waste (i.e. buying more than you would need because, well, you’d be stupid not to at those prices) or targeted foodstuffs that represented comparatively poor nutritional choices, the effects of which our health service will end up paying for years down the line. It seems to me that the logic of “All tactics that encourage profit-making and growth are positive and ethical”, followed by “All other things being equal, consumers will tend to buy more foodstuffs that are convenient to consume, high in sugar, or high in salt”, will compute perfectly well as a deduction – it is the premises that are the problem.

Well, I’m not sure how I got from one topic to another in this posting, but sometimes it’s healthy to rant. Somewhere in here is a suspicion of whatever logic it is we are using to justify the unquestioning approach to size in business. If you can find it.


With so much written about systems thinking in management and leadership over the last twenty years or so, people may feel that this principle is bordering on the cliché. “The whole is greater than the sum of its parts” is now almost a truism, and certainly the language of systemic thinking has been increasingly and uncontroversially used in discussions of Organisational Development, in certain views on Leadership, and in one guise or another in operations and production management for at least two decades, if not more.

I would argue, though, that this apparent application has been more one of vocabulary than of fundamental principles. What’s more, the appetite for the topic of complexity has frequently been faddish, second-hand and poorly thought-through; a handy bandwagon for those with a book to sell, a seminar to fill or a paper to publish. This is not a rant; it has always been so and probably (sadly) always will be. So let me explain how I think that a systemic view is essential to the nature of Reflection, Personal Development and for management practice, and in doing so argue that this is still a fairly radical, exciting idea.  

Thus, the fourth PD principle, and one which (I trust) follows logically from the first three, is “Practice Awareness of the whole, not the parts.”

Since the 1980s, the predominant interpretation of reflective learning in management has been via an analytic approach – what many would call ‘the scientific method’ of measuring cause and effect, just redressed in the clothes of humanism. This is not new, nor is it always a useless thing to do. It is, in fact, the defining pattern of thought from Renaissance times to the present day, a process characterised by Russell Ackoff as a three-step process of analysis:

1) take it apart,

2) try to understand what the parts do, and

3) assemble understanding of the parts into an understanding of the whole.

In modern business education it is the same – management is broken down into its parts because the assumption is that knowledge of the parts taken separately allows integration into an understanding of the whole.  Analysis permeates corporations, which are divided into parts, which are then aggregated into the running of the whole – an analytical process.  Business Schools also have curricula separated into parts, which vie with one another in silos of analysis, which occasionally leads to academics vigorously defending the grounds for their view, their models and their theories entirely in relation to the views, models and theories of other competing domains. People, too, are units for and of analysis. Their personalities, traits and characteristics can be measured, their roles assessed and their actions studied in isolation to see how they work.

By contrast, in systems thinking every system is contained in and defined by its function in a larger system. Explanations always lie outside the system, never inside it.  Where analysis takes you inside the system, synthetic thinking reverses the three analytical steps by:

1) asking “what is this a part of?”,

2) then explaining  the behaviour of the containing whole, and finally

3) disaggregating understanding of the containing whole by explaining the role or function of what I’m trying to explain.

We tend to think of ourselves as individuals, more or less free agents operating more or less effectively, making conscious choices alongside others who are (more or less) in a similar situation of individual free-will and choice. In Personal Development, a systemic approach means setting aside, at least temporarily, certain parts of our training, thinking, or education. Where problems just seem to be repeating themselves, or a more piecemeal approach to change doesn’t resolve things, or the issue just isn’t clear, seeing PD from a systemic point of view can be very liberating, with surprisingly rapid insights and results.

Elsewhere in this blog I have posted about systemic coaching, and I have come to the conclusion that the basic principles underlying this approach work equally well when applied to Personal and Professional systems. This is easy to say but difficult to talk about, since the dynamics at work within a system are best understood when experienced (phenomenologically) for yourself.  The invisible ordering forces of a system or whole listed below (with their descriptors) are taken from John Whittington’s excellent new book on Systemic Coaching & Constellations:

Acknowledgement (this is the first principle of PD in my list, and here refers to “standing in the truth of the current situation”)

Time (“what comes first has a natural precedence over what follows”)

Place (“everyone, and everything, has a right to a different but unique and respected place in the system”)

Exchange (“a dynamic balance of giving and receiving is required in systems”)

Seeing the order from the outside…?

Read Full Post »

A couple in dialogue with nature in a rainforest pool in northern Queensland.

I’m fond of telling anyone who’ll listen not only that reflection is at the heart of Personal Development,  but also that “introspection is necessary but not sufficient” for reflection. This second assertion is prompted by observation and supported by deduction.

The observation is of the shyness exhibited by most MBAs when it comes to sharing thoughts and feelings with others in a learning context. Hardened managers who would not hesitate to chip (or butt) in with their views when it comes to business decisions turn deafeningly silent when it comes to surfacing assumptions about themselves in a collective setting. This silent tendency is even more pronounced, if that’s the right word, when the sharing requires those thoughts to be expressed in writing. This is despite an intellectual acceptance of three ideas: that telling others helps reveal our thinking to ourselves, that listening to others somehow provides a boundary and shape for our own thoughts, and that the process of writing (especially for publication to an audience) is a distillation and perhaps a transformation of our thoughts (when we speak we do not use exactly the same language structure as when we write). Anecdotally, where trust has been established between managers who are all committed to learning, the efficacy of dialogue for PD is very often apparent, with rapid results.

Nevertheless, these observations cannot easily explain why dialogue is a principle of PD. That explanation comes from a deduction, itself following on from the second principle (which spoke of the concept of difference), of what must necessarily be going on in dialogue, intrinsic to reflection and therefore part of the Personal Development process.

Whenever a second view or reference point is made available, and difference created, a new level is not just a possibility but a logical necessity. Gregory Bateson used the example of binocular vision to illustrate this. On its own, each of our eyes is sensitive to information or sense data. But a single eye cannot see distance; this facility is a property of the information processed from both eyes. However, the fact that we can perceive depth in three dimensions is not simply a matter of addition. Binocular vision is at a logical level hierarchically above the levels represented by what each eye “sees” on its own. As Bateson pointed out, this is a sort of multiplication: “[in] principle, extra “depth” in some metaphoric sense is to be expected whenever the information for the two descriptions is differently collected or differently coded.” (Bateson, 1979: 70).
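Bateson’s binocular point can be made concrete with a toy calculation. The sketch below (in Python, with purely illustrative numbers for focal length and baseline – neither figure comes from Bateson or from any real optical system) uses the standard pinhole-stereo relation Z = f·B/d: a single view yields no distance at all, while the difference (the disparity) between two views yields depth.

```python
# Toy illustration of Bateson's "double description": depth emerges only
# from the *difference* (disparity) between two views, never from either alone.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        # A single description (or two identical ones) cannot see distance.
        raise ValueError("No disparity, no depth.")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only (a roughly eye-like baseline of 6.5 cm).
left_x, right_x = 420.0, 385.0   # same point as 'seen' by each eye, in pixels
disparity = left_x - right_x     # the difference between the two descriptions
print(depth_from_disparity(focal_px=700.0, baseline_m=0.065, disparity_px=disparity))
```

The point of the sketch is that the depth value is computed entirely from the disparity – a quantity that belongs to neither image on its own, only to the relationship between them.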

So it may be said that dialogue in reflection results in a depth not present in either person’s thoughts on their own. A ‘conversation’ is an idea one level removed from the individual sets of utterances that make it up. A dialogue is, then, a double description: the relationship between components (remember that a relationship or difference between things is not a property of those things and has zero dimensions), and when we engage in a dialogue what results is a viewpoint that we could not have reached from our introspection alone. At least, deductively, this is what ought to be so, and what we may then investigate.

The 1st principle

The 2nd principle

Read Full Post »

The first principle of Personal Development that was outlined in an earlier post was “acknowledge, without judgement, things as they are.”

Although not in itself sufficient for PD, this is certainly a necessary pre-requisite mental attitude; a stance of genuine curiosity about (and a conceptual uncoupling from) things. It turns out that we are already in trouble when we assert that there are such things as “things”, but until now we have had little choice because the English language tends to be steadfastly material in its assumptions about the world. The world of difference, however, is notable for being immaterial.

So the second principle of Personal Development is an invitation to understand and then actively look for ‘difference’. This idea is perhaps the most elemental in Gregory Bateson’s relational view of the world, and one that I have blogged about several times over the years. As a very brief résumé: in a world of almost limitless potential bits of information which our senses detect, a difference is that bit of information that makes a difference. In other words, it is an “elemental idea” whereby we become aware of the boundaries between one thing and another. In noting difference we must make some sort of comparison, but our comparison literally carries no weight, occupies no space, and is non-dimensional. A difference, in short, is a no-thing. Crucially, it is also not a property of any of the things we are comparing. Bateson went on to note that differences travel in recursive circuits of cause and effect in systems, that they are transformed successively over time, and that they are at the heart of what makes living systems different, so to speak, from non-living ones.

But what does this have to do with PD and what does it mean in practice?

1. Without the relationship between ‘that which is’ and ‘that which is not’ it would be impossible to have any notion of “things as they are”, the first PD principle. 

2. Meaning is achieved by the ever-present question “compared to what?” (a question that is almost always an implicit or unconscious one).

3. Every notion implies its opposite, its negation.

4. Development implies learning, learning implies change of one sort or another, and change implies some sort of novelty which would be impossible if the world were a closed system.

An example, perhaps. I recently found out that I have had a development paper accepted to a management conference in September. The paper’s purpose is partly to stimulate discussion, in contribution to a given subject area (in this case ‘knowledge and learning’), and partly to give me some developmental feedback through peer review. The acceptance process involved some blind peer reviews, which I got to see. Two of the reviews were largely positive and quite supportive, but the third was a lot more critical. My first reaction was to accept the compliments and look for comforting support in the gentle suggestions for improvement. I dismissed the less complimentary review as irrelevant, its author too far from my position to be of any use to me. On reflection, it may be the reviewer I didn’t agree with who will help me understand my own thinking for what it is, as it exposes my argument to its antithesis.  My job is first to note that this is the situation (acknowledge it), note too how I feel about it, and then get curious about how such a different view clarifies my thinking. To do that, I’d need to understand that alternative argument.

In summary, in their daily working lives managers constantly (if unknowingly) make sense of what’s going on by embracing or ignoring the concept of difference; and the world is an open system which operates according to an underlying pattern (or law?), regardless of our awareness that this is the case. Incidentally, because it is a property of the relationship between things and not of the things themselves, the nature of “difference” is a very curious one to explore. In short, the difference between one thing and another is at a higher logical level than either of the things themselves. Bateson spent much of his life playing with the consequences of this, i.e. that ideas operate in a pattern, a hierarchy of logical levels which are immanent in social structures and systems.

Read Full Post »

