A 25-standard-deviation event - gz
By: GZ on Wednesday 15 August 2007 12:41
In August 1998, the computerised fixed-income trading model of a single mega hedge fund, Long Term Capital, created a crisis all by itself, because it had not foreseen that Russia might default on its bonds.
The result was that the fund lost about -10% on its positions as such, but with leverage of 20 to 1 that became a -200% loss and it was wiped out. Central banks cut rates in October to stop the banks that had lent it money from going under and to prop up the markets. The Nobel laureates and mathematicians who worked there said that "an event that in theory happens once every 10,000 years" had occurred.
Yesterday Goldman Sachs, which has a fund that lost -30% in a week (with world stock markets still up +12%), wrote to investors:
"..“We are seeing things that were 25-standard deviation events, several days in a row,” said David Viniar, Goldman’s chief financial officer. “There have been some issues [before] in some of the other quantitative spaces, but nothing like what we saw last week.” a “25-standard deviation event” – something that only happens once every 100,000 years or more. ...
("...a 25-standard-deviation event, something that happens once every 100,000 years according to the computer model...")
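The "25-standard deviation" figure can be checked with a few lines of arithmetic. Under the Gaussian assumption these risk models typically rest on, the tail probability of a 25-sigma daily move is so small that even "once every 100,000 years" wildly understates it. A minimal sketch in Python (252 trading days per year is the usual market convention, an assumption not taken from the text):

```python
import math

# One-sided Gaussian tail probability of a 25-sigma move:
# P(Z > 25) = erfc(25 / sqrt(2)) / 2
p = math.erfc(25 / math.sqrt(2)) / 2
print(p)  # on the order of 1e-138

# If one such daily draw is made per trading day (~252 per year),
# the expected waiting time between events, in years:
years = 1 / (p * 252)
print(f"{years:.2e} years")  # on the order of 1e135 years
```

In other words, if a Gaussian model labels a move "25 sigma", the realistic conclusion is not that the market did something absurdly rare, but that the model's assumed distribution has far too thin tails.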
Today you have the computerised trading models for fixed income, equities, derivatives and mortgage bonds at dozens of mega hedge funds, from those of Goldman Sachs, Barclays, Lehman, AQR, Renaissance and D.E. Shaw (and many others you have never heard of, because they sit in the Virgin Islands and do not advertise), creating a crisis because they had not foreseen that derivatives on home mortgages could collapse, in some cases by -60%.
The problem is that the physics and maths PhDs who work for the banks and build these computerised models use algorithms fitted to data from a period when liquidity was plentiful, and then extrapolate. Then, because the first five funds using these models are successful, you get fifty, then five hundred, and in the end everyone holds the same positions; when they liquidate, they massacre each other and drag the markets down.
But their models do not contemplate the case in which liquidity, the bid and the offer, dries up, nor the case in which so many funds use the same model that they alter the market all by themselves...
Limitations of computer models
By Gillian Tett and Anuj Gangahar
August 14 2007 19:28
In recent years, Goldman Sachs has become renowned as one of the savviest players on Wall Street. This week, however, the mighty US bank was forced into an embarrassing admission.
In a rare unplanned investor call, the bank revealed that a flagship global equity fund had lost over 30 per cent of its value in a week because of problems with its trading strategies created by computer models. In particular, the computers had failed to foresee recent market movements to such a degree that they labelled them a “25-standard deviation event” – something that only happens once every 100,000 years or more.
“We are seeing things that were 25-standard deviation events, several days in a row,” said David Viniar, Goldman’s chief financial officer. “There have been some issues [before] in some of the other quantitative spaces, but nothing like what we saw last week.”
By any standards, it is a striking admission, given that these losses at the Goldman fund could top $1.5bn (£750m, €1.1bn). But what is more startling still is that Goldman Sachs is not alone in seeing its models go haywire. On the contrary, in recent days a host of other funds have experienced similar difficulties, including highly renowned funds at Renaissance Technologies.
James Simons, founder of Renaissance and one of the most respected quantitative fund managers, last week wrote a letter to investors saying losses were about 9 per cent in the first few days of August (the funds have since recovered at least some of the losses). He also tellingly wrote that “we cannot predict the duration of the current environment,” highlighting the fact that even a group such as Renaissance – whose flagship fund, Medallion, has had an annual return of 30 per cent since 1988 – is suffering badly from recent movements.
Other big-name funds that have been hit include Highbridge Capital (controlled by JPMorgan), DE Shaw, AQR Capital and Barclays Global Investors – as well as funds run by groups such as Lehman Brothers. “Models (ours including) are behaving in the opposite way we would predict and have seen and tested for over very long time periods,” said Lehman Brothers last week.
A glance at recent financial history shows that this type of “rare” event is not so unusual at all. Back in 1998, for example, a key reason for the near-implosion of Long Term Capital Management was that the fund’s economic whizzkids – who included some Nobel prize-winning economists – had devised model-based trading strategies that turned sour when markets moved in unforeseen ways. Similarly, two years ago the financial industry received a shock when General Motors, the US car group, was downgraded – a move that left the price of financial assets gyrating in relation to each other in ways computers had not predicted.
The question now being asked by some bankers – and regulators – is whether this week’s events show that the modern financial industry is foolish to be placing so much faith in these complex computer-driven models.
“People say these are one-in-a-100,000-years events but they seem to happen every year,” says Satyajit Das, a consultant to hedge funds and investment banks. “This episode should make people ask questions about models – I think it could lead to a real reassessment.”
Any such reassessment could have far-reaching consequences. The spread of financial models is at the heart of the growth of modern banking. Indeed, were it not for modern computing power, this decade’s remarkable explosion in finance would not have occurred at all.
The roots of this revolution go back to the 1970s, when computers became small and flexible enough to be easily used by bankers – and bright minds in the world of economics started to move into finance. Initially, their techniques were mostly used to help asset managers decide which equities to buy. But in the 1980s, bankers started to use these tools to analyse complex debt securities, a development that later enabled them to create, price and trade instruments such as derivatives.
This decade, the use of models has moved on to a whole new plane. As computing capabilities increased and global markets became more closely integrated, asset managers started relying on models to track asset prices and detect tiny anomalies that a human eye might struggle to see. Initially, people then traded on these anomalies; but soon they started using computers not just to spot anomalies but to execute trades too. Computers are thus now using models to make trades – and often trading with other computers – with barely any human intervention.
This shift has delivered many powerful benefits for finance. Trading by computer is cheaper than using humans and can be quickly expanded in scale. It tends to be more consistent, since machines – unlike people – never get tired. More important still, computers can trade faster than humans, which is crucial when investment groups are racing one another to exploit tiny price differentials.
As a result, computer-driven trading has proliferated, particularly in markets such as equities that tend to be readily accessed and highly liquid. In many cases, these strategies have delivered excellent investor results, as highlighted in Mr Simons’ letter.
But while computers are often able to operate better than humans in “normal” markets, this month’s events demonstrate that during times of stress they have some crucial flaws. One problem is that models typically predict the future on the basis of past data. This can lead to distortions, given the speed at which the financial industry is currently evolving. Indeed, many of the instruments at the heart of the current credit storm barely existed before this decade – which means that computers can only model these markets based on the benign conditions of the past few years.
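The calibration problem described above can be sketched in a few lines: a volatility estimated from a benign window makes a stress-day move look impossibly extreme. The numbers below are invented for illustration, not market data:

```python
import statistics

# Hypothetical daily returns from a calm, liquid period (illustrative
# values of a few tenths of a percent, not real market data).
calm_returns = [0.002, -0.001, 0.003, -0.002, 0.001,
                -0.003, 0.002, 0.000, -0.001, 0.002]

# Volatility as estimated from the benign window alone.
sigma = statistics.pstdev(calm_returns)

# A stress-day move of -6%, measured against that calibration:
crash = -0.06
z = abs(crash) / sigma
print(f"estimated sigma: {sigma:.4%}, crash day = {z:.1f} sigmas")
```

Against a sigma fitted on the calm sample, the -6% day registers as a move of more than 25 standard deviations; the event is "rare" only relative to a calibration window that never contained stress.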
Another big problem is that computer models do not always take account of the way that their own behaviour is affecting markets. The essential danger, as Donald Mackenzie, a British finance professor, points out, is a tendency to view models as “cameras”, snapping pictures of market movements. However, models are now so widely used that they often drive markets as well, Mr Mackenzie says, which means they are probably better viewed as an “engine”. “The emergence of modern economic theories of finance [have] affected markets in fundamental ways . . . models are not simply external analyses but intrinsic parts of economic processes,” he notes.
In practical terms, this means that when models evaluate markets, they often fail to recognise how their own behaviour is distorting prices. Take the case of Amaranth, the hedge fund that imploded with $6bn of losses last year. Before this collapse, Amaranth was so dominant in the natural gas market that when it bought it tended to push up prices. These prices were then used in models that calculated Amaranth’s trading risk.
But when Amaranth was forced to sell, gas prices collapsed much faster than any model might have predicted. Although Amaranth itself was not trading on the basis of models, this pattern of events can be doubly dangerous for asset managers using computer-driven programmes, for these computers have a nasty habit of all using similar strategies – partly because they are created by humans who have studied at the same institutions. Thus they can all dash for the exits at the same time.
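The crowding dynamic can be illustrated with a toy simulation: many funds running nearly identical stop-loss rules sell into a market with a simple linear price-impact term, so each forced sale deepens the drawdown that triggers the next one. All parameters here are invented for illustration, not calibrated to any real market:

```python
# 50 funds with similar, slightly staggered stop-loss levels.
N = 50
stops = [0.05 + 0.001 * i for i in range(N)]  # stops from 5.0% to 9.9%
IMPACT = 0.002  # assumed fractional price drop caused by each forced seller

price = 100.0
start = price
price *= 1 - 0.051  # a small initial shock: -5.1%

sold = [False] * N
changed = True
while changed:
    changed = False
    drawdown = 1 - price / start
    for i in range(N):
        if not sold[i] and drawdown >= stops[i]:
            sold[i] = True
            price *= 1 - IMPACT  # each forced seller pushes the price lower
            changed = True

print(f"funds forced out: {sum(sold)}, "
      f"final drawdown: {1 - price / start:.1%}")
```

In this sketch the -5.1% shock cascades until every fund is forced out and the drawdown reaches double digits; if the stop levels were not clustered together, most of that selling would never trigger.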
The issue of computer “herding” appears to be a key factor behind this month’s problems at the Goldman Sachs funds and others. Although aspects of this saga are still unknown, it appears to have started a few weeks ago when some large investment managers suffered losses on subprime securities. This prompted investment banks to demand that hedge funds post more cash against their trades – which in turn forced these funds to sell assets.
However, since subprime securities were hard to trade, the forced sales occurred in other, more liquid markets such as equities. The consequence was a wave of triggered price movements that seemed utterly “irrational”, according to models. Last week, for example, the stock price of some highly valued companies suffered in relation to lowly-rated stocks such as US homebuilders. This appears to have been particularly devastating for the computer strategies used by Goldman’s fund, since such programmes typically assume that low-rated stocks will perform badly in a credit crunch.
Since then, many of these extreme market swings have corrected themselves. Consequently, many of the so called “quants” (experts in quantitative models) who work in the financial industry insist that it is premature to criticise all these strategies. After all, they point out, the vast majority of models that are used in the markets work perfectly well. Moreover, efforts are under way to address problems such as the “feedback loop”, or danger of computer herding. One key focus of some banks, for example, is the search for ways to apply research in the field of artificial intelligence, or neural networking, to financial models. This, they hope, will enable them to “learn” from mistakes and bouts of irrationality – and thus perform better at times of market stress.
“Academic research has been shifting to some degree from a focus on ‘efficient market’ theories to focus more on ‘inefficient market’ theories [and] there is an increased recognition of inefficient market trading strategies,” says Colm Fitzgerald, head of quantitative trading at the Bank of Ireland. “Investors in funds with strategies based on the latter models are not likely to be currently facing any trouble.”
Nevertheless, whether these new “super-intelligent” models will do better remains to be seen. “Bankers talk about self-learning models, with neural networks and things, but a lot of that is hogwash,” says Mr Das.