Monday, April 23, 2007

Operational Risk Markets?

One of my favorite reads from several years ago is Surowiecki's "The Wisdom of Crowds," in which the collective decision-making ability of uneducated participants consistently exceeds that of the brilliant, enlightened few. Coming from a significantly Bohemian ancestry inclined to distrust nobility and entitlement, I find Surowiecki's writings encouraging, and they hit the nail on the head of a phenomenon risk managers continually encounter: diverse, unrelated and uncorrelated viewpoints simply do a superior job of correctly assessing a problem than a few privileged geniuses do.
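The statistical intuition behind the crowd's advantage can be sketched in a few lines: many independent, unbiased but noisy guesses average out to a far better estimate than any typical individual produces. The scenario and all numbers below (a jar-of-jellybeans guessing game) are entirely invented for illustration.

```python
# Sketch: the "wisdom of crowds" effect with simulated guesses.
# TRUE_VALUE, N_GUESSERS and the noise level are illustrative assumptions.
import random

random.seed(1)

TRUE_VALUE = 1000          # e.g., jellybeans in a jar
N_GUESSERS = 500

# Each guesser is individually noisy: unbiased but wildly imprecise.
guesses = [random.gauss(TRUE_VALUE, 400) for _ in range(N_GUESSERS)]

crowd_estimate = sum(guesses) / len(guesses)
typical_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"crowd estimate:           {crowd_estimate:8.1f}")
print(f"crowd error:              {abs(crowd_estimate - TRUE_VALUE):8.1f}")
print(f"typical individual error: {typical_individual_error:8.1f}")
```

The crowd's error shrinks roughly with the square root of the number of independent guessers, which is why diversity and independence matter more than individual brilliance.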

Group decision-making can be extrapolated to market behavior as we're essentially talking about an environment which facilitates autonomous decision-making. In this environment, the actor participating in the system has autonomy to make decisions (as outlandish as they may be), and derives benefit or consequence from this contribution.

One of the more notable aspects of market/group behavior, from the perspective of a risk manager, is that the behavior of a market or group tends to approximate a normal distribution. Contrast that distribution with the excessively leptokurtic distributions found in command economies, and a risk manager finds market dynamics to be his primary friend in keeping risk manageable. History is rife with examples: Khrushchev's infamous corn program (which originated in my home state of Iowa), Hitler's selection of highly speculative weaponry (including missiles, jets and heavy-water nuclear weapons) at the expense of the more traditional and demonstrably successful conventional weapons advocated by his generals, Ireland's consolidation on a single variety of potato, and countless other cases all illustrate the cost of command-economy decisions.

Why is it that command economies so often facilitate the perfect storm? As my 14-year-old son can recall from a few too many History Channel episodes his dad forced him to watch, it takes more than one mistake to create a catastrophe. Command economies suffer at least three "mistake coefficients":

  • Poor information: Command decision-makers are at a serious disadvantage to a market in obtaining and assessing information. Personal bias alone screens out considerable useful data and distorts the process. Political dynamics (including a preference for information from favorite subordinates over reports from difficult, annoying and antagonizing employees who just don't know how to put the right polish on things, like a valued counterpart of mine), prospect theory dynamics, irrational risk aversion and other factors all come into play to distort even the most objective manager's ability to correctly assess a situation.

  • Homogeneous solutions: Command decisions often focus on a single approach to a problem, such as the decision to unilaterally grow corn, to ban handguns owned by law-abiding citizens (one can't ban possession by those who circumvent the system), to grow one variety of potato, or to select a particular information technology vendor as the exclusive solution provider (e.g. Microsoft). When a threat breaks out, it becomes horribly difficult to control, as the natural resistance that should slow the fire's spread has been removed by the homogeneous structure. If you doubt the significance of this effect, spend a weekend in greenhouse school and learn about micro-cultures as my son and I did. Then try applying your "natural" organic philosophy in this terribly unnatural environment and discover how viruses, bacteria and pests quickly dominate your micro-culture.

  • Inflexible response: In spite of their great enthusiasm and confidence, command-economy leaders suffer an inevitable response lag compared to market participants. While someone in the normal distribution of the market has inevitably conjectured the real risk (thanks to paranoid outlier behavior that for once in their life pays off), our command-economy decision-maker is a natural laggard. Confident in the original assessment, he or she is among the last to abandon the false premise, inviting liquidity risk in to do its damage. By the time the brilliant manager realizes the fallacy of his or her assumptions, markets are illiquid, corn crops are frozen and campus students are dead.

So what's the point from an operational risk perspective? Given the structure of corporate decision-making, it is probable that many of us work in environments little different from the flawed command-economy models so closely associated with catastrophic loss and failure. Decisions are made lacking information, typically select homogeneous outcomes due to vendor preference or a simple fear of complexity, and inevitably suffer inflexibility in response.

An appropriate solution, given the premise, may be a market-oriented risk management model, something I've been working on for some time but which certainly qualifies as more conjecture than solution at this point. In the meantime, we all need to work on the three factors identified above and reduce their severity as much as possible.

Thoughts? Feedback is certainly welcome.


Wednesday, April 18, 2007

Leptokurtic operations: Shunning Certain Risk

With the recent Virginia Tech shooting, I was reminded this week of the difficulties policy makers face in comprehending risk. In particular, executives and politicians alike tend to fear certain risk and end up structuring policies that trade off manageable "expected risk" for the mostly unmanageable, statistically difficult world of outlier uncertainty events. This choice unfortunately distorts the probability model of the risk environment, trading otherwise normal distributions for what are known as leptokurtic models.

Consider VT's recent change in policy that banned the legal concealed carrying of handguns by individuals confirmed by the State of Virginia to be free from felony convictions, mental disability, and other disqualifying conditions. The VT administration held a mostly irrational fear (when measured against national statistics on crimes committed by legal concealed-carry handgun owners, a rate conclusively near zero) that a permitted concealed-carry individual would commit a violent crime in an act of uncontrolled rage. By enacting policy that prohibited this improbable event, the VT administration narrowed the center of the risk distribution, at the unfortunate expense of significantly fattening its tails. With a mostly undefended campus (and certainly a lack of deterrence), they made a highly catastrophic outlier event much more certain. A campus lacking any capacity to defend itself in real time is an environment ripe for out-of-control outlier events.

Such policy behavior is most unfortunate and exceptionally irresponsible. While catastrophic risk is nearly impossible to quantify with precision, expected risk usually yields to the risk management techniques we possess. VT's experience is a lesson corporate executives and risk managers could certainly learn from as well.

From a policy perspective, it's valuable to examine operational policies and procedures for indications that they're inducing leptokurtosis. Is certain risk (which can and should be managed through insurance, reduction, transfer, etc.) instead being avoided or shunned, causing participants in the process to distort information or engage in activities that transfer the expected risk to outlier categories?

[Figure: leptokurtic model. Orange = standard (normal) distribution; yellow = leptokurtic distribution.]
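The contrast in the figure can be reproduced numerically. In the sketch below, Student's t stands in for the fat-tailed "yellow" curve; the degrees of freedom and sample size are my own illustrative choices, not anything specified in the post.

```python
# Sketch: a normal distribution versus a leptokurtic (fat-tailed) one.
# Student's t with 6 degrees of freedom stands in for the fat-tailed
# curve; df and sample size are illustrative assumptions.
import random
import statistics

random.seed(7)
N = 100_000

normal_draws = [random.gauss(0, 1) for _ in range(N)]
# t-distributed draw: Z / sqrt(chi2_df / df), built from gaussians.
fat_draws = [
    random.gauss(0, 1)
    / (sum(random.gauss(0, 1) ** 2 for _ in range(6)) / 6) ** 0.5
    for _ in range(N)
]

def excess_kurtosis(xs):
    """Sample excess kurtosis: near 0 for a normal, positive for fat tails."""
    m = statistics.fmean(xs)
    s2 = statistics.fmean([(x - m) ** 2 for x in xs])
    m4 = statistics.fmean([(x - m) ** 4 for x in xs])
    return m4 / s2 ** 2 - 3.0

def tail_share(xs, k=4.0):
    """Fraction of draws falling more than k units from the center."""
    return sum(1 for x in xs if abs(x) > k) / len(xs)

norm_k, fat_k = excess_kurtosis(normal_draws), excess_kurtosis(fat_draws)
norm_tail, fat_tail = tail_share(normal_draws), tail_share(fat_draws)

print(f"excess kurtosis: normal {norm_k:.2f}, fat-tailed {fat_k:.2f}")
print(f"P(|x| > 4):      normal {norm_tail:.5f}, fat-tailed {fat_tail:.5f}")
```

The fat-tailed series concentrates more mass near the center while placing far more in the extreme tails, which is exactly the trade described above: everyday outcomes look calmer while catastrophic outcomes become more likely.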
A good example of this dynamic is the handling of capital technology assets past the conclusion of their underlying depreciation schedule and/or capital lease interval. Normally, the forecasted financial life of the asset should provide a reasonable indication of the low-risk lifetime. Not only is such equipment new enough to operate within its engineered MTBF (mean time between failures) and usually supported by insurance or vendor replacement provisions, but it is recent enough to be covered by current internal expertise and documentation. A loss of the equipment during this period is well protected by risk mitigation measures in place at various levels.

Once an asset has been fully depreciated, management experiences an increase in profitability from its operation, at the expense of increased catastrophic/outlier risk. This is where our normal distribution gets narrower and taller at the center while developing alarming fat tails at the outliers. Replacement parts become rare, personnel familiar with the configuration and operation of the system depart, and documentation disappears. Normal vendor support for the antiquated technology asset disappears as well. While the center of the risk model rises with the gains realized from lower-cost operation of the asset, the leptokurtic fat tails grow. When the inevitable system failure occurs, recovery is nearly impossible: configurations are lost, internal and vendor knowledge is absent, and companies are often left with very few options for recovery. Occasionally, some firms don't survive the experience.
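A toy simulation makes the lifecycle trade concrete. All costs and failure probabilities below are invented for illustration: running the asset past depreciation lowers the average annual cost while making the worst years catastrophically more expensive.

```python
# Sketch: running an asset past depreciation lowers average cost but
# fattens the loss tail. All dollar figures and probabilities are
# invented assumptions, not data from the post.
import random

random.seed(3)

N_YEARS = 50_000  # simulated asset-years per policy

def year_cost(past_depreciation: bool) -> float:
    if not past_depreciation:
        # In-life: pay depreciation/support, failures are cheap to fix.
        cost = 20_000.0                      # annual capital + support
        if random.random() < 0.05:           # occasional covered failure
            cost += 5_000.0
        return cost
    # Past-life: nearly free to run, but failures are rare and ruinous
    # (no parts, no vendor support, no internal expertise).
    cost = 2_000.0
    if random.random() < 0.015:
        cost += 1_000_000.0
    return cost

in_life = sorted(year_cost(False) for _ in range(N_YEARS))
past_life = sorted(year_cost(True) for _ in range(N_YEARS))

def mean(xs):
    return sum(xs) / len(xs)

def pctile(xs, p):
    return xs[int(p * (len(xs) - 1))]  # xs must already be sorted

print(f"mean cost  in-life: {mean(in_life):12,.0f}  past-life: {mean(past_life):12,.0f}")
print(f"99th pct   in-life: {pctile(in_life, 0.99):12,.0f}  past-life: {pctile(past_life, 0.99):12,.0f}")
```

The average year favors the fully depreciated asset, which is precisely why management keeps it running; the 99th-percentile year shows who eventually pays for that choice.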

The best solution for leptokurtic operational risk may be no different than practices accepted in credit and market risk: incur certain risk daily to keep risk models normal. Just as the certain expense of an insurance policy represents, in a sense, a real, certain loss every month for the policyholder, operational risk environments should seek similar policy counterparts. Operate only current technology assets under depreciation and schedule their replacement at the conclusion of depreciation. Specify certain lifespans for applications and kill legacy systems with regularity. Force the company to bear the expense of keeping the risk distribution mostly normal so that traditional risk management controls can be effective. Absent this recurring culling effort and its known expense, catastrophic risk is certain to develop.

Sunday, April 01, 2007

Operational Risk Faultlines

One of the more interesting phenomena in operational risk is why managers experience such abnormally high levels of catastrophic risk compared to credit and market risk environments. A potential answer may lie in the choice of risk outcomes managers make in the design and operation of their programs. Experience from credit and market risk suggests that when we refuse to recognize everyday risk, and thereby prevent it from being reconciled daily, we force it into hiding, where it emerges only once it reaches catastrophic levels.

In credit risk, accountants and auditors would quickly jump on firms that extend commercial credit but fail to establish, forecast and maintain a reserve allowance for bad debt. No level of rationalization about the firm's exceptional credit screening process would convince a prudent accountant to ignore reserve requirements. Instead, managers use historical or probability models in accounting for expected risk, allowing the firm to experience the daily feedback from current lending practices. Through this process, risk is forced into the budgeted "expected loss" category, materially driving down the alternate and default "unexpected loss" category where catastrophic demons reside. Feedback is daily and risk appetites appropriately adjusted.
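The reserve mechanics described above boil down to the textbook expected-loss identity, EL = PD × LGD × EAD (probability of default × loss given default × exposure at default), applied account by account and summed into the bad-debt allowance. A minimal sketch, with an invented portfolio:

```python
# Sketch: a textbook bad-debt reserve via EL = PD x LGD x EAD.
# The portfolio below is invented for illustration.
portfolio = [
    {"account": "A", "ead": 100_000.0, "pd": 0.02, "lgd": 0.60},
    {"account": "B", "ead": 250_000.0, "pd": 0.01, "lgd": 0.45},
    {"account": "C", "ead": 40_000.0,  "pd": 0.10, "lgd": 0.80},
]

def expected_loss(acct: dict) -> float:
    """Expected loss for one account: exposure x default prob x severity."""
    return acct["ead"] * acct["pd"] * acct["lgd"]

reserve = sum(expected_loss(a) for a in portfolio)
for a in portfolio:
    print(f"{a['account']}: expected loss {expected_loss(a):10,.2f}")
print(f"bad-debt allowance: {reserve:,.2f}")
```

Recomputing this reserve as the portfolio changes is the daily feedback loop: expected losses are budgeted and recognized continuously rather than discovered all at once.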

Likewise, market risk managers have equally mature practices in forecasting recurring risk. Value-at-Risk or Capital-at-Risk models, portfolio loss reserves and other methods anticipate a certain level of loss from expected risk. Derivatives markets, intentionally structured to administer risk, often use daily settlement methods so that a day's gains or losses are immediately felt. Instead of deferring the realization of gains and losses to the end of a contract term, which attracts liquidity and default risk into the equation, winners and losers in these markets settle daily and risk levels are moderate but balanced. Risk is rarely allowed to stray too far from the expected loss category, and the demons of unexpected loss are again kept at bay. Again, feedback is provided on a near-daily basis, allowing the organization to appropriately adjust its appetite for risk.
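Both market-risk practices are easy to sketch: a parametric Value-at-Risk figure forecasts the recurring loss level, and daily settlement realizes each day's gain or loss before it can compound. The position size, volatility and return series below are invented for illustration.

```python
# Sketch: one-day parametric Value-at-Risk plus daily settlement.
# Position size, volatility, confidence and returns are assumptions.
from statistics import NormalDist

position = 10_000_000.0   # portfolio value (illustrative)
daily_vol = 0.012         # 1.2% daily return volatility (assumed)
confidence = 0.99

# Parametric (variance-covariance) VaR: the loss that daily P&L
# should exceed only (1 - confidence) of the time.
z = NormalDist().inv_cdf(confidence)
var_1d = position * daily_vol * z
print(f"1-day 99% VaR: {var_1d:,.0f}")

# Daily settlement: each day's gain or loss is realized immediately,
# so exposure never accumulates to contract-end size.
daily_returns = [0.004, -0.009, 0.002, -0.015, 0.006]
for day, r in enumerate(daily_returns, 1):
    pnl = position * r
    breach = "  <-- exceeds VaR" if -pnl > var_1d else ""
    print(f"day {day}: settled P&L {pnl:+12,.0f}{breach}")
```

The VaR number plays the role of the budgeted expected-loss band; the settlement loop is what keeps any single day's position from quietly drifting into the unexpected-loss territory the post warns about.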

Operational risk, on the other hand, is rarely permitted to reconcile daily. Rather than maintaining an expectation of budgeted loss, managers set unrealistic "zero fault" expectations. Systems are pronounced to have "five nines" (99.999%) uptime, and change processes apply punitive, often professionally terminal consequences to technical administrators who recognize a risky condition. This creates an atmosphere where participants in the system are encouraged to ignore risk or, worse yet, cover up minor risk manifestations. Game theorists would find these outcomes unsurprising, yet many operations environments are littered with these risk landmines.

The result is that normal risk is shuffled aside and hidden. Feedback is prevented or, worse yet, prohibited by organizational policy. In this state, catastrophic cousins are invited in, as the risk demon demands settlement in full, with obscenely compounded interest for his absence. Worse still, operational environments tend to be rich with risk collinearity opportunities, as system failures tend to be amplified by homogeneous operating systems, hardware platforms and administrative personnel. A failure of risk mitigation on one system, such as an e-commerce webserver, bypasses most prevention-oriented defenses and permits a complete compromise of the environment.

What's the solution to reducing operational risk conditions favorable to catastrophic consequence? John Milton, the well-known author of "Paradise Lost" and other works, wrote the significant essay "Areopagitica" addressing the importance of a free press, in which an open "grappling of truth and falsehood" is encouraged. Applied to risk management treatments, we see Milton's prescription present in credit risk management, where individual accounts are periodically evaluated and forecasted for default treatment. Derivatives markets (in most cases) are forced to grapple daily, with gains and losses applied before one's position becomes too extreme.

Operational risk requires this same treatment in order to reduce the opportunity for catastrophic loss, but that treatment can only occur in environments where the organization encourages the forecasting of expected loss. Loss must be recognized as a probabilistic reality, rather than a culturally prohibited outcome. Absent recognition as a recurring expected loss, risk's demons will find their expression in unexpected catastrophic outcomes.

As the FuzzyNumbers blog evolves, I'll share ideas about the practical treatment of operational risk that converts risk from unexpected to forecasted (and moderated) states.
