
The Project as a Social System: Asia-Pacific Perspectives on Project Management

Introduction

Keynote address

Rethinking information systems project risk: Implications for research

Chris Sauer
University of Oxford, UK

Andrew Gemino
Simon Fraser University, Canada

Blaize Horner Reich
Simon Fraser University, Canada

Introduction

This is a research conference so I’m going to talk about what seems to me a very significant issue in research into projects and project management. I want to discuss risk – ‘discuss’ is code for manhandling it, roughing it up, kicking its legs from under it and seeing whether it can still stand up. And I shall draw out what I think should be some very discussable – code for possibly contentious – implications for research.

Risk is at the heart of the study and practice of project management. Some even equate project management and risk management though I do not go that far. So, the centrality of risk makes it important that we are confident that our research is robust.

One of the seductive features of research into project risk is that it seems to offer an empirically objective domain for research. That is, we can attach numbers! Combine this with the central place of risk in our field’s thinking, and it is easy to slide into the assumption that project management can be studied relatively easily by empirical methods to secure an objective understanding.

This objectivist view has been put on notice by, among others, Mark Winter and those colleagues who contributed to the Rethinking Project Management project in the mid-2000s (Winter et al. 2006). Their stance, that project management is a social process and therefore subject to all the arguments that militate in favour of interpretivist approaches in other fields, offers an important corrective to what I would term the traditional or conventional wisdom of project management. According to that ‘wisdom’, project management is a technical field with mechanistic relationships governing cause and effect. Witness Rodney Turner’s quixotic attempt to formulate an axiomatic theory of project management (Turner 2006a; 2006b; 2006c; 2006d), a position I have argued against (Sauer & Reich 2007). The conference theme and many of the papers in this conference indicate that there is no need for sermonising on this point.

Notwithstanding this corrective, it remains relatively easy to think that risk is independently objective: project size can be measured in dollars, person-years and so on; project performance can be measured as percentage variance against objectively stated targets. Risk, therefore, can be understood independently of all those problematic, sophisticated explanatory accounts you find in interpretivist case analyses.

So behind the critique that I’m going to offer of our thinking about risk is a more philosophical concern about the very nature of the field of project management. I’ll return to this at the end.

Before I launch into risk, let me make a couple of qualifications, an admission of guilt and an acknowledgement.

First, my research domain is principally that of information technology (IT) projects by which I mean IT-enabled business process change projects. Don’t ask me to define it precisely. I think we all know what we mean. Nothing in my argument hinges on it. Indeed, I think that while IT projects are thought to be intrinsically more risky than other types of project, the arguments I shall advance should be more generally applicable.

Second, this is very much work in progress. I have been thinking about it and bouncing it around with Andrew Gemino and Blaize Reich for a couple of years and we have one conference paper that was our initial stake in the ground prior to this (Sauer et al. 2008). The arguments here are intended to be indicative not exhaustive.

Third, my admission of guilt. I shall criticise what I take to be our normal understandings of and research into risk. In doing this, it may look as though I’m setting up a straw man. If it does, it’s because I am or have been that straw man. I am one of the guilty parties (Sauer et al. 2007). I think that the Standish Group stands accused alongside me – and implicitly many of those who have cited Standish Group findings in their papers.

Maybe some researchers have always had more sophisticated views than I have but I suspect there are plenty of others in the same camp as me. In this connection, perhaps it’s worth saying that although many of my early publications related to information system (IS) project failure (e.g. Sauer 1993; Sauer et al. 1997), it is only in the last six years that I have published papers that explicitly examine risk. It has been in writing these that I have become increasingly uneasy about the concept.

Now the acknowledgement – I have recently happened upon a paper by Paul Bannerman of NICTA and UNSW (Bannerman 2008). In it, he starts to critique risk. His conclusion is that project risk researchers have not paid adequate attention to the needs of practitioners and that practical tools have taken too little heed of research. While there is some overlap between his critique and mine, mine is more extreme and more extensive in its implications.

So, I’m going to start by examining the concept of risk, the research construct and its constituents. Then I shall turn to the problem of data to support our understanding of risk. Then I want to ask about the analysis of data and its interpretation. Finally, I shall turn to the implications in relation to four avenues for research. And for good measure, I’ll sign off with a philosophical speculation to give us something to talk about over morning tea.

The concept of risk

Starting with the concept of risk, the traditional definition looks something like:

Project risk = probability of outcome × impact

So, for example, a 30 per cent probability that the project will incur a loss of $1m would imply that the project risk is $300,000.

Does anybody know what that means in language I can understand?

It seems to mean something like: on average, I should expect to lose $300,000 on a project with this set of characteristics (risk factors). Except, of course, that on seven occasions in ten I lose nothing and on three occasions I lose $1m. So, I worry that the idea of risk as the product of an outcome’s impact and its probability has no necessary correlate in reality.

I also worry that the outcome/impact in this case is rather specific and does not reflect the range of other outcomes that might occur. For example, there are non-budgetary considerations such as job satisfaction, image etc. But there may be many other budgetary outcome possibilities. For example, there may be a 70 per cent probability of losing $100,000 alongside the 30 per cent probability of losing $1m. And many other possibilities besides.
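To make the arithmetic concrete, here is a minimal sketch of the expected-loss calculation implied by that definition, using the purely hypothetical figures above; the expected_loss helper is my own illustrative construction, not a tool drawn from practice.

```python
# A minimal sketch of risk as probability x impact, summed over a
# distribution of loss outcomes. All figures are hypothetical.

def expected_loss(outcomes):
    """Sum of probability x impact over (probability, loss) pairs."""
    return sum(p * loss for p, loss in outcomes)

# The single-outcome case from the text: a 30 per cent chance of losing $1m.
print(expected_loss([(0.3, 1_000_000)]))                   # 300000.0

# A richer distribution: a 70 per cent chance of losing $100,000 alongside
# the 30 per cent chance of losing $1m.
print(expected_loss([(0.7, 100_000), (0.3, 1_000_000)]))   # 370000.0
```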

So let’s try to be more helpful to risk.

I’m at the capital projects funding committee with my wish list of 10 projects and their business cases including net present values (NPVs). The business case allows us to factor in both costs and benefits. For ease of argument, let’s pretend that all costs and benefits can be translated into monetary values.

Project 1 has a business case with NPV of $4m.

Project 2 has a business case with NPV of $5m.

Project 3 has a business case with NPV of $3m.

Etc.

Therefore,

Project 1 has risk to the business case of 30 per cent.

Project 2 has risk to the business case of 45 per cent.

Project 3 has risk to the business case of 20 per cent.

So, if we adjust the NPVs for the risk weighting, Project 1’s risk-adjusted NPV is $4m × (100% – 30%) = $2.8m.

That is, if we did enough of these projects, on average we’d end up with $2.8m per project rather than the projected $4m.

Project 2’s risk-adjusted NPV is $5m × (100% – 45%) = $2.75m.

Project 3’s risk-adjusted NPV is $3m × (100% – 20%) = $2.4m.

Etc, etc …
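Written out as a short sketch, the comparison looks like this; the project names, NPVs and risk weightings are the illustrative figures from the example above, not empirical benchmarks.

```python
# Risk-adjusted NPV = NPV x (100% - risk to the business case).
# All figures are the hypothetical ones used in the example.

projects = {
    "Project 1": {"npv": 4_000_000, "risk_to_case": 0.30},
    "Project 2": {"npv": 5_000_000, "risk_to_case": 0.45},
    "Project 3": {"npv": 3_000_000, "risk_to_case": 0.20},
}

for name, p in projects.items():
    adjusted = p["npv"] * (1 - p["risk_to_case"])
    print(f"{name}: risk-adjusted NPV ${adjusted:,.0f}")

# Project 1: risk-adjusted NPV $2,800,000
# Project 2: risk-adjusted NPV $2,750,000
# Project 3: risk-adjusted NPV $2,400,000
```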

Note that as I have constructed the concept of risk I have surreptitiously conflated the frequency with which downside outcomes occur with their impact. Even in the context of project portfolio investment decisions, we would want to know whether my 30 per cent risk to the business case was derived from a data set of 100 projects in which all 100 missed their business case by 30 per cent, or from one in which 90 per cent hit their business case target but 10 per cent had a negative NPV of $8m. Both distributions would generate a 30 per cent risk to the business case but would probably have different implications for decision makers. I’m going to set this complication aside. In principle, we could accommodate it.
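To show the conflation at work, here is a small illustration under stated assumptions: the two portfolios below are my own stand-ins for the two data sets just described, constructed so that both average a 30 per cent miss against the business case while having very different shapes.

```python
# Two hypothetical portfolios of 100 projects each. 'Shortfall' is the
# fraction of the business case missed; the numbers are illustrative only.

from statistics import mean, pstdev

uniform_miss = [0.30] * 100                 # every project misses by 30%
skewed_miss = [0.00] * 90 + [3.00] * 10     # 90% hit; 10% miss by 300%

for label, misses in [("uniform", uniform_miss), ("skewed", skewed_miss)]:
    print(f"{label}: mean shortfall {mean(misses):.0%}, "
          f"spread (std dev) {pstdev(misses):.0%}")

# uniform: mean shortfall 30%, spread (std dev) 0%
# skewed: mean shortfall 30%, spread (std dev) 90%
```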

So what risk gives us is the ability to compare the likely outcomes of different projects and, if we were economically rational, to choose from among them. I’m gambling, but at least I know the odds!

But, this assumes many things including:

  • I have confidence in the business cases.
  • I have confidence in the risk benchmarks.
  • The projects are totally independent of each other.

Actually, I only refer to confidence rhetorically, because we all know that in practice business cases are dodgy dossiers. We also know that in practice most risk assessment falls short of being rigorous (one of Paul Bannerman’s key points). What I’m interested in is not so much how things may fall short in practice as whether we could ever expect to develop sufficiently accurate measures of risk that it would be rational to have confidence in them.

Let’s take stock. So far I have implied three things. One, it is not easy to be clear what risk means, especially in the context of a single project. Two, it seems more useful to apply risk at the portfolio level – it is a probabilistic concept, so not surprisingly it is hard to relate it to the reality of a single project. Three, it is all too easy to be slack in our use of the concept.

Risk data

Let me turn now to the data that might support the discovery of sufficiently accurate measures of risk – what I shall refer to as risk benchmarks.

So the data question is something like – what data could we use to justify our saying that – ‘a project with this set of characteristics (risk factors) would on average miss its business case by such and such a percentage’?

First, I’m going to need sufficient projects with similar characteristics to be able to draw statistically significant conclusions. What exactly are similar characteristics? How similar in size is a project costing $1m to another costing $2m and another costing $10m? We’d probably say it depends. On what? The duration of the project might be one factor, the number of team members another, the labour-capital ratio a third – that is, is the project manpower intensive, or does a lot of the cost lie in the purchase of hardware and software? So a particular $1m project might be more like another $10m project than like the $2m one.

Apparently similar objective attributes do not necessarily support the presumption that they are similar projects.

What about things like complexity and uncertainty of requirements? These are far more problematic in that we do not have objective measures. Researchers at best use subjective measures, typically Likert scales, often as undiscriminating as 1–5.

Second, I need to be able to calculate levels of risk for projects with a given set of characteristics. How do I do that? So for each project that fits the criteria I need to be able to identify the degree to which the project outcome delivered the business case. Two problems worry me: Problem A and Problem B.

Problem A is the problem of measuring the achievement of the business case. This has several sub-problems such as how to identify which performance effects in the organisation result from the project and which result from parallel initiatives; how to treat non-financial outcomes; when to assess the performance outcomes, e.g. one month after implementation, five years after implementation.

Problem B is the problem of using the business case as a baseline. What if the business case changes during the project – for example, the project runs into trouble and as the result of a gateway review the objectives are scaled back and a revised business case is developed? Well, you might say you should stick with the original business case. But what if the objectives are scaled back because the organisation decides that it wants to redirect some of its investment to a more urgent need? Surely I don’t stick with the original business case? As yet nobody has developed (or even tried to develop) a satisfactory decision rule for identifying the appropriate business case as a baseline.

As a digression I would mention that my colleague at Oxford, Bent Flyvbjerg, argues that the original business case is the gold standard for estimating risk (Flyvbjerg et al. 2003). But he is interested in showing how business cases are fictions dreamt up by the incurably optimistic and the unscrupulously manipulative. While I accept that his observation of what he calls strategic optimism reflects a common project experience, I do not think it follows that changes to a business case during the course of a project cannot sometimes be perfectly legitimate and reasonable. So to properly understand and measure risk we would need to account for such cases.

A sub-problem here relates to the question of when the original business case was created. Across organisations this can vary from being at a very early stage before any groundwork has been done through to the point when a project team has been established and an initial prototype created.

That’s probably a soluble problem. But there are further complications. If multiple business cases have been developed (certainly possible in an environment with a strict gateway discipline) and you ask project managers about the original business case, which one are they thinking of? My colleague, Andrew Gemino, has asked about this. He found that project managers vary quite considerably in what they are thinking about when asked about original targets. For some it was the first ever estimate; for others it was the most recent or ‘live’ targets (see Figure A).

Figure A. Relative frequency of initial estimate definitions

So, there are plenty of problems associated with the task of collecting data that would allow us with a large enough sample to calculate risk benchmarks. I do not say that they are all insuperable. My point is to paint a picture of the distance we would have to travel in our research to get me, as a rigorous researcher, to say that I have confidence in the numbers we hawk around as levels of risk.

Analysing data to generate risk benchmarks

My next step, as promised, is to examine what we would need to do to analyse the data to generate accurate risk benchmarks. Let’s start by assuming that the problems about the data themselves can all be overcome. We have a set of risk factors that we can measure accurately and are unambiguous in what they mean.

The obvious approach is to correlate risk factors with project performance outcomes – achievement of the business case. For example, we examine projects of a certain size (100–200 person months). We find that on average 45 per cent of them achieve their business case.

So what? Examine another 200 projects: would it be reasonable to expect that approximately 90 of those would achieve their business case? This would only be so if the original 45 per cent had already been shown to model some aspect of reality – for example, a functional relationship between size and business case achievement, such that as size increases there is a corresponding reduction in the frequency with which business cases are achieved. But I have to tell you that with two samples of 400 and 200 projects we have found it very hard to detect a definite size-performance relationship. So if you did sample another 200, my guess is that you’d get quite a different number the second time.
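To illustrate the sampling worry rather than model it, here is a toy simulation; the distribution of business-case achievement it assumes is entirely invented, and it is meant only to show how successive samples of 200 projects can return noticeably different averages.

```python
# A toy simulation: business-case achievement drawn from a hypothetical,
# highly dispersed population, then averaged over samples of 200 projects.

import random

random.seed(1)

def sample_mean_achievement(n=200, mu=0.45, sigma=0.40):
    draws = [random.gauss(mu, sigma) for _ in range(n)]
    return sum(draws) / n

for i in range(5):
    print(f"sample {i + 1}: mean achievement {sample_mean_achievement():.1%}")
```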

The problem is not merely that our data might not have been sufficiently well-defined and rigorous. Two other considerations apply. First, risk factors are rarely solo artists. They usually perform in bands. They also appear to be interdependent: size and complexity are often associated, as are volatility and requirements uncertainty. In principle we could perhaps identify project types bearing similar combinations of risks at similar levels of measurement. Then we might have a better basis for expecting the second 200 to have the same success rate as the first. Except …!

Except that none of this takes account of project management. If we believe that project managers do something useful and can make a difference then we surely believe something more like: risk factors are managed/moderated by project management to achieve outcomes (see Figure B).

Figure B. The intermediate role of project management

Moreover, I think we believe that the quality of project management varies from project to project. So, on this model, the best we could ever say is that projects with such and such attributes managed to a given level have X per cent risk of not achieving their business case. This requires us to be able to predict the level or quality of project management. This in turn requires a predictive model. And that model would need to be developed and validated against performance outcomes such as achievement of the business case and in the presence of combinations of risk.
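A sketch of what that modelling might look like, under loudly flagged assumptions: the data are synthetic, the variable names (size, pm_quality, achievement) are mine, and the fitted interaction term merely stands in for the moderating role of project management shown in Figure B.

```python
# Synthetic illustration of outcome ~ risk factor x project management
# quality. The data-generating process below is an assumption, not a finding.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

df = pd.DataFrame({
    "size": rng.uniform(10, 200, n),        # project size in person-months
    "pm_quality": rng.uniform(0, 1, n),     # 0 = weak practice, 1 = strong
})
# Assumed effect: size depresses achievement, less so when PM quality is high.
df["achievement"] = (
    0.9
    - 0.003 * df["size"]
    + 0.002 * df["size"] * df["pm_quality"]
    + rng.normal(0, 0.15, n)
)

model = smf.ols("achievement ~ size * pm_quality", data=df).fit()
print(model.params)  # the 'size:pm_quality' term captures the moderation
```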

You can probably sense that I’m worrying about the possibility of a circular argument. I think we can avoid it. If we can show that, for a given type of project and level of risk, differing qualities of project management lead to consistently different levels of project performance, then we could specify those differing levels of project management and, if all goes well, identify antecedents of those levels that would then enable us to predict the level or quality of project management.

I’m making an assumption here – that project management is uniformly effective. That is, that in all projects certain management practices are effective. Put crudely, if methodological discipline is good, more methodological discipline is better in all cases.

But what if project management is contingent? Certain project management practices work well in certain contexts; other practices work well in other contexts. This has been Barki, Rivard and Talbot’s claim (2001). And whether or not you are persuaded by their analysis, it’s hard not to feel a sneaking sympathy with the idea that, for example, in a hotel building project you need tight formal control, whereas in a high-tech new product development project you might achieve a better outcome with looser formal control.

So if you wanted to predict the level of project management to be applied under given conditions of risk in a world where the impact of project management is contingent, you would also have to predict whether it would be applied contingently.

My point here is that if we are to develop rigorous assessments of project risk, some heavy duty modelling and theorising about project management is required.

By this stage you’re probably dropping with exhaustion at this catalogue of difficulties. Bad luck, I’m going to kick you while you’re down! Two further points:

One, suppose we did all of this research perfectly. Could we be confident of its validity across time? Would data collected in 1990 or 2000 be a robust basis for applying risk benchmarks in 2010? Would it translate from IT to construction to new product development to policy-making etc? I don’t know the answer but we’d need to have one.

Two, what if project management turns out to be much more contingent than any of us imagines? That is, we can’t model the contingencies and responses on a two by two matrix! Or even worse, what if there are no clear relationships between risk factors, project management practice and performance outcomes? I’ll come back to this.

Review and conclusions

Let’s take stock again. I have presented a series of reasons for doubting that we shall have rigorously based risk benchmarks in my lifetime or probably ever. There is too much work to do requiring degrees of resource, rigour and research excellence that are not readily available. It seems a thesis of despair.

What are the implications for us as researchers?

I draw four conclusions. One relates to the pursuit of scientifically rigorous project risk benchmarks. The second concerns what we might do pragmatically that would be helpful to practitioners. The third suggests reducing our ambitions. The fourth argues for the value of qualitative research.

Finally, I want to return to those philosophical roots I referred to earlier and speculate a little about the nature of project management as revealed by today’s discussion and connect that to the question of ‘what can we ever hope to know about project management?’

Let me start with the pursuit of scientifically rigorous project risk benchmarks. I have suggested that there are issues as to what exactly we are talking about when we talk about risk and what it refers to in reality. I have queried the measures and the data that we’d expect to use. I’ve raised questions about the lack of underlying theory to justify the logic by which risk benchmarks might be calculated. If I were starting a research career, I would not view this as a promising avenue on which to build a reputation.

Let us move on to a different point. What would be helpful to practitioners? I don’t know. I have my prejudices but little evidence on which to base them. Why? To the best of my knowledge, there has been little or no work carried out within the project management field examining how executives use risk in making project investment decisions, or how project managers think about risk and apply their knowledge, or how sponsors, gateway reviewers and other stakeholders use risk benchmarks. Do they use whatever information they have rationally? Do executives in practice want all of the possible information about risk? Are there some risk metrics that are used more than others? Are they the most appropriate metrics for making effective decisions? There is an ample literature on the limits of human cognitive capacity in decision-making (Simon 1955; 1977) to suggest that only a limited set of metrics is likely to be employed in most information system project investment decisions. The interesting question for researchers is to discover what could be useful short of our scientifically rigorous benchmarks. Now, if I were starting a research career, I’d view this area as much more promising.

I guess that this connects to my third conclusion – about limiting our ambitions as researchers. The fundamental point is that the complex relationships between risk factors at the start of a project and final outcomes make it practically impossible to develop scientifically rigorous benchmarks. There’s too big a gap between independent and dependent variables. This is partly why I see a focus on smaller slices or segments of projects, such as the practice of investment decision-making, as a better prospect for research – at least for quantitatively based research.

Finally, we are left with the qualitative avenue for research. Might it be the case that the best we can do is to demonstrate, through longitudinal studies, the causal webs that link the starting conditions of a project, the events and actions that make up the project, and its final outcomes? We might choose to acknowledge the riskiness of projects – that is, that there is an indeterminate probability that the organisation will not get what it wants or expects – without attempting to put numbers on the risks. Instead, we show how projects are risky without committing to any underlying beliefs about objectively detectable patterns of measurable conditions. In effect, I’m suggesting that we might bring into the foreground, in all their horrific majesty, the reality of the devils that lie in the detail.

All of which leads me to that speculation about the nature of projects and project management. I have argued that the promise of scientifically rigorous approaches to IT project risk research is not encouraging. Some of the reasons behind this conclusion are based on our recognition of the distance in time and activity between independent and dependent variables, and on the complexity of the setting, which includes all kinds of moderating or mediating activity – viz. project management and the events and actions to which it responds. In short, we all know that most projects other than the most trivial are highly complex. We also know that to call something a project is to apply a social construction. It is to project a human interpretation on to social and organisational phenomena. It is not the same as identifying physical phenomena as being of a particular natural kind. So we cannot take for granted that projects will exhibit much by way of regular patterns of behaviour. While calling something a project may be handy for the purposes of organising and allocating resources, it does not follow that it is therefore handy from the point of view of research. While I would not recommend trying to build a research career on addressing this question, it does seem a salutary thought with which to leave this conference on research into project management.

References

Bannerman, P. 2008. ‘Risk and risk assessment in software projects: A reassessment’. Journal of Systems and Software 81: 2118–2133.

Barki, H; Rivard, S; Talbot, J. 2001. ‘An integrative contingency model of software project risk management’. Journal of MIS 17 (4): 37–69.

Flyvbjerg, B; Bruzelius, N; Rothengatter, W. 2003. Megaprojects and Risk: An Anatomy of Ambition. Cambridge: Cambridge University Press.

Sauer, C. 1993. Why Information Systems Fail: a Case Study Approach. Henley-on-Thames: Alfred Waller.

Sauer, C; Southon, G; Dampney, CNG. 1997. ‘Fit, failure, and the house of horrors: Towards a configurational theory of IS project failure’. In Proceedings of the 18th International Conference on Information Systems, edited by Kumar, K; de Gross, J I. Atlanta, GA: Association for Information Systems, December: 349–366.

Sauer, C; Gemino, A; Reich, B. 2007. ‘Managing projects for success: The impact of size and volatility on IT project performance’. Communications of the ACM 50 (11): 79–84.

Sauer, C; Gemino, A; Reich, BH. 2008. ‘Of what use is research on IS project risk? A proposal to make IS risk research fit for practice’. Paper presented at Administrative Sciences Association of Canada Conference, Halifax, Nova Scotia, 24–27 May.

Sauer, C; Reich, BH. 2007. ‘What do we want from a theory of project management? A response to Rodney Turner’. International Journal of Project Management 25 (1): 1–2.

Simon, H A. 1955. ‘A behavioral model of rational choice’. The Quarterly Journal of Economics 69 (1) (Feb): 99–118.

Simon, H A. 1977. The New Science of Management Decision. Englewood Cliffs, New Jersey: Prentice-Hall.

Turner, J R. 2006a. ‘Towards a theory of project management: The nature of the project’. International Journal of Project Management 24 (1): 1–3.

Turner, J R. 2006b. ‘Towards a theory of project management: The nature of the project governance and project management’. International Journal of Project Management 24 (2): 93–95.

Turner, J R. 2006c. ‘Towards a theory of project management: The functions of project management’. International Journal of Project Management 24 (3): 187–189.

Turner, J R. 2006d. ‘Towards a theory of project management: The nature of the functions of project management’. International Journal of Project Management 24 (4): 277–279.

Winter, M; Smith, C; Morris, P; Cicmil, S. 2006. ‘Directions for future research in project management: The main findings of a UK government-funded research network’. International Journal of Project Management 24 (8): 638–649.
