In Part 1 (Talent Management: Part 1 – Role Design and Performance and their Convergence with Uncertainty and Risk) I explored the implications of role design for role performance. My key thesis was that role design establishes a powerful context for both role and personal performance. This means the way we design a role (say, a sales role), with its surrounding supports, establishes the limits and even the characteristics of individual performance within that role. Individual/personal performance is, of course, the principal purview of most HR-related PM systems.
I often lament that HR seems oblivious to the function and significance of role design and performance.
Why are these PM distinctions and demarcations so important? Poor or flawed role design sets up a higher potential for performance failures and even catastrophes such as Air France's tragic Flight 447. Jonas's article suggests that the way pilot roles have been designed for modern commercial aircraft made this tragedy harder to avoid. While the plane's automatic systems were functioning, nothing untoward developed. It is when the systems fail that the role of the pilot becomes critically operative. Unfortunately, the pilot was unable to fulfill his role in this tragedy. Jonas's article explores how the roles of modern aircraft pilots can contribute to such failures. Who "drove" how these roles were designed? People in roles (bureaucrats and lawyers) who did not understand the unintended consequences of their regulations and rules. I explored in an earlier post (Organizational Development Perspective on Uncertainty & Risk: How we bring it into our lives) how we introduce U&R into our lives through our decisions and actions.
One could argue that it was a "personal performance failure" that led to Flight 447's tragedy. Sure, but this is a cockpit copout (sorry for the humour). Flight safety is a paramount issue, even to the ill-informed lawyers and bureaucrats. So we should treat a scenario where the pilot is unavailable and the systems fail as a grey-swan-like uncertainty event. If it is foreseeable, then some form of "compensating" strategy could be identified and established. The pilot was out of the cockpit for reasons of flight safety (fatigue and the need for rest).
The George Jonas article also touches on the issue of uncertainty and risk (U&R) more generally:
- The article notes that because aircraft are complex systems of mechanical, electrical, and electronic subsystems, there is always some U&R arising from this complexity. On the specific Airbus aircraft there was a particular speed sensor known to be more susceptible to icing than other designs. There was a program underway to replace these vulnerable components; Flight 447's craft had not been retrofitted. To ground every craft with a known "weak point" until it is fixed is often uneconomical and unnecessary (assuming the pilot is able to do their role if the "event" takes place in flight). This is analogous to a driver steering a car to safety when a tire goes flat at speed on a highway.
- The article also highlights the distinction between qualifications and actual competence. This is where role design can "confuse" the former as being equivalent to the latter. The U&R here, of course, is that if we are confused about the distinction, we may end up putting a "know-it-all" in a role where it is critical to be a "know-how-it-all". The copilots were all certified, but they lacked sufficient competence to react properly when there was a system failure.
We deal with U&R in many situations by putting in place redundant control systems. Airplanes, control centres, and many other parallel situations incorporate this safety-engineering principle. The key thing to remember is that each redundant system introduces its own U&R in its own right (e.g., in some circumstances it may indicate false positives/negatives). Often, though, the larger concern is the interrelationship redundant systems have with other connected systems (which can become so complex that we lose the intellectual capacity to understand the underlying U&R), and they can contribute to a sense of complacency and unthinking reliance. This last observation is where I would put the "rules and regulations". My rule of thumb is: "Every U&R mitigation strategy establishes its own doorway for tertiary, tangential, adjacent U&R circumstances to become connected to our primary area of concern!"
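The trade-off in redundancy can be illustrated with a toy probability calculation. This is a minimal sketch with purely illustrative failure rates (the numbers are my own assumptions, not from the article): adding independent redundant sensors shrinks the chance that a real fault goes undetected, but simultaneously raises the chance of a spurious alarm.

```python
# Toy model of redundant, independent sensors.
# p_miss:  chance a single sensor misses a real fault (false negative)
# p_false: chance a single sensor raises a spurious alarm (false positive)
# Both rates are hypothetical, chosen only to show the trade-off.

def redundancy_tradeoff(n, p_miss=0.01, p_false=0.02):
    # A real fault goes undetected only if every one of the n sensors misses it.
    all_miss = p_miss ** n
    # A spurious alarm occurs if at least one of the n sensors fires falsely.
    any_false_alarm = 1 - (1 - p_false) ** n
    return all_miss, any_false_alarm

if __name__ == "__main__":
    for n in (1, 2, 3):
        miss, alarm = redundancy_tradeoff(n)
        print(f"sensors={n}  P(missed fault)={miss:.6f}  P(false alarm)={alarm:.4f}")
```

With each added sensor the missed-fault probability drops geometrically while the false-alarm probability climbs, and this model still assumes the sensors fail independently; in a real aircraft, shared causes (such as icing affecting all pitot tubes at once) and the interactions with downstream systems add exactly the kind of U&R the rule of thumb above warns about.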
I have explored this notion of how our protection schemes can contribute to our U&R by masking adjacent sources of U&R (OD Concept of Uncertainty and Risk – the limits of human understandability). We at the very least understand this complicating concern. This is why we often want a "skilled human" nearby: we recognize that no automated system can be safe against all U&R circumstances.
Now we cycle back to the performance issue: if we want a skilled human to be our last line of defence, what do we need to do to ensure they are maximally effective when called upon? This is a role design issue first and foremost. It would be insane to have the best-equipped person figuratively knee-capped by a poor set of adjacent circumstances (i.e., the role design features) that distract, undercut, or delay their ability to act at their very best. Once the role design is right, we obviously want to put fully capable people into it.
We now come to a key talent management notion: having great talent means nothing if we are unable to capitalize on its potential. Performance is what matters in the end. And I suggest that performance is a two-level inquiry: role and personal.
Successful PM is the result of being crystal clear about the desired outcomes (flight safety when systems fail). Results/outputs, throughputs, and inputs are all set up and aligned to ensure we obtain this outcome. This is a process of laser-like judging: "Is doing this, this way, the surest way we know of achieving our desired outcome?" I suspect a lot of rules, regulations, procedures, support systems, etc. would not fare well under this inquiry. So why do we do them? The justification cannot be that we/someone want(s) to ensure the best outcome possible.
There is an ethical question too: "Does what I propose introduce unintended U&R consequences that, if they materialize, will belie/mock my espoused best intentions?" I like to think of the Hippocratic Oath: "Do/introduce no harm." Making a choice in a U&R-like universe means nothing is "foolproof", so we have to act even when we realize we don't control everything. This leads me to conclude that if we want to live the spirit of the Hippocratic Oath, we must be prepared to understand and accept that we will have to live with, and deal with, unintended consequences.