Today’s post is by Markus Schuller of Panthera Solutions, a participant in the recent OECD Financial Roundtable.
On October 17, delegations from the OECD’s 34 member countries, central bankers and financial market actors met at the semi-annual OECD Financial Roundtable to discuss financial fragmentation (geographical and sectoral segmenting). It was an interesting discussion of how serious fragmentation is and how much financial integration we need, but the focus turned quickly to financial regulation, with the lobbyists talking their book (they could have sent a recording and saved themselves the time and expense of coming to Paris).
In the Eurozone for example, financial fragmentation means that a company in the peripheral countries will face higher financing costs than it would in one of the core countries of the currency union, for the same level of idiosyncratic risks and opportunities. This prompts the question of whether financial fragmentation hampers economic convergence and contributes to the painfully slow recovery.
Fragmentation isn’t necessarily negative: it can for instance allow more accurate pricing of the different, separable components of a system. But since the early 1980s, the increasingly global exchange of goods and services has relied on a more globally-integrated banking system and financial markets to allocate real economy resources. As a consequence, inter-connectivity increased, separability decreased.
Nationally-based regulation was clearly not the best framework to deal with the emerging financial system, but instead of a multilateral approach, governments listened to the banks’ “self-regulation” mantra, in harmony with the 1980s deregulation zeitgeist advocated by Reaganomics and Thatcherism.
It didn’t work. The dysfunctionality of self-regulation is not a specific characteristic of the banking industry. In numerous industries we got cartelization, price rigging and other forms of wheeling and dealing. In the banking industry, the combination of globalization, deregulation and digitization led to balance sheet inflation, followed by a fast-increasing gap between real economy utility and financial markets profiteering. We also got what Nassim Taleb and Warren Buffett called participants with “no skin in the game” – not playing with their own money. The people making the decisions didn’t stand to lose anything personally if they got it wrong. Investors and taxpayers took the hits for “Too-Big-To-Fail” institutions.
The blame game between market participants is still hard fought, especially between governments and the big banks most affected by the current re-regulation wave. The banks complain about insufficient international coordination of the new set of rules. Governments blame the banks for socializing losses and hold them responsible for the current regulatory countermovement.
Ladies and gentlemen, you both screwed up. But a couple of examples allow us to see that it didn’t happen overnight.
First, the repeal of the 1933 Glass-Steagall Act (GSA), which prevented US investment banks from operating as universal banks, in contrast to their European peers. In the 1980s, European institutions aggressively entered the US underwriting market with competitive offers, thanks to significantly larger balance sheets and higher gearing (debt to equity), and started taking IPO business away from US banks. In 1999, insistent lobbying by US banks bore fruit and Glass-Steagall was repealed.
Second, the end of the investment banking partnership. For over 100 years, US investment banks were organized as private partnerships. Partners clearly understood what ultimate responsibility meant, since they had to pay for mistakes with their private wealth. Merrill Lynch went public in 1971, Bear Stearns followed in 1985, Morgan Stanley in 1986 and Lehman Brothers in 1994. Goldman Sachs waited until the GSA was repealed and got listed in 1999. These changes in status can be seen as a sign that the banks had less “skin in the game”. Or as the last stage of a rocket blasting them into irresponsibility.
Governments thought they were doing the market – or at least their largest party donors – a favor by loosening regulatory norms. Banks took this as an invitation to unethical behavior and to profit maximization that was unproductive as far as the real economy is concerned. Big central banks had managed to position themselves as independent guardians of a stable monetary system. Now taxpayers (that is, you and I), almost innocent bystanders, have to pick up the bill for both blame-game parties, and the central banks pay with their independence.
The re-regulation wave symbolizes the attempts of governments and regulators to level the playing field with financial market participants. These attempts lack proper international coordination, for two reasons I explained in my contribution to the Roundtable.
Governments and regulators need to decide whether they want to use their systemically relevant banks as a tool for geostrategic power or for establishing an efficient market. Officially, the latter is the goal. In off-the-record conversations though, some officials express concern at the fast rise of banks from emerging economies, especially Asian banks, in terms of balance sheet and market capitalization, allowing their governments to use them as strategic options in the global power game. Given that, the argument goes, developed market representatives should not be too restrictive regarding their own systemically relevant banks.
The second reason is more fundamental: economics is a social science, and the same goes for finance as a subset of economics. We are dealing with human behavior and social interactions that cannot be modeled using the deterministic approach of natural sciences, where for instance gravity always pulls the apple down from the tree. In systems theory, we would call that a trivial machine, but we’d have to call humans and human society “non-trivial”. With us, the same inputs can lead to different outputs. Pity the poor regulators trying to tame a moving, non-trivial social construct while being in motion themselves. Even when assuming high technical competence, high ethical standards and a strong will for implementation, this is an extremely complex task. Consequently, a lasting, single-shot version of re-regulation cannot be expected, and all financial market participants are currently paying the price for a culture of irresponsibility.
While developed market governments fail to state explicitly whether they are going for the geostrategic power option or for efficient markets, an implicit preference for the power option remains, and the foundation for further social and economic collateral damage is laid.
Finanzielle Fragmentierung (Financial Fragmentation): Markus Schuller’s contribution, in German
Today’s post is from Gert Wehinger of the OECD’s Directorate for Financial and Enterprise Affairs
Investors’ behaviour on stock markets has been likened to the irrationality described in Charles Mackay’s 1841 classic Extraordinary Popular Delusions and the Madness of Crowds. But there is also a more positive view of what crowds can achieve. In his 2005 book The Wisdom of Crowds, James Surowiecki argued that “diversity and independence are important because the best collective decisions are the product of disagreement and contest, not consensus or compromise.” Diversity and disagreement certainly characterise this year’s Nobel prize for economics, even if Eugene F. Fama, Lars Peter Hansen and Robert J. Shiller shared the award “for their empirical analysis of asset prices”.
Fama’s work is based on the idea that asset returns should be impossible to predict if asset prices reflect all relevant information. He tested the efficient markets hypothesis (EMH) empirically, developing new methods such as event studies to do so, and he and his followers found ample evidence for it. That is, in the very short run, like a day or a week, if all available information is incorporated in share prices, you cannot beat the market. While he accepts that there are factors like information or transaction costs that weaken the pure EMH, any anomalies – like the difference between value and growth stocks he analysed in a 1998 paper – are explained within the same rational investor framework, with risk factors accounting for the differences. However, such anomalies (or “market imperfections”) may open up short-run arbitrage opportunities (that Fama himself exploits in his fund-management firm). Hedge funds and algorithmic traders in particular thrive on these imperfections.
Thus, Fama’s EMH is perhaps a great insight for theorists and algo-traders, but it is, in principle, tautological and often becomes useless in practice (yes, algo traders can also go bust) where market anomalies can go beyond rationality. While these anomalies still tend to leave markets unpredictable, the reasons underlying such unpredictability may be quite different from the agnostic view of market rationality, which almost by definition deprives the researcher of better insights into the “true” functioning of markets and a better understanding of longer-term price movements, including bubbles.
Shiller thought outside the EMH box by exploring departures from Fama’s efficient market rationality using insights from behavioural finance (many of the ideas were developed by the 2002 Laureates Daniel Kahneman and Amos Tversky). Such departures, if they can be identified in asset prices, may open up opportunities for rational investors to take advantage of the misperceptions of irrational investors. While rational arbitrage trading would push prices back toward the levels predicted by non-behavioral theories, this is still not the world as described by Fama, where rational information is processed immediately. In Shiller’s framework, bubbles can not only exist, but there is also a possibility that they can be identified.
Hansen has tested many of these theories in his generalised method of moments (GMM) framework. If you cannot forecast stock prices, maybe you can find patterns in their volatility or other statistical moments that can be exploited. Hansen found, for example, that asset prices fluctuate too much for a rational expectations-based model to hold. This work has been carried forward in several ways, for example improving measures of risk and attitudes towards risk that may change depending on the economic situation. This is just one example of how this research can generate new insights about human behaviour more broadly.
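The moment-condition idea behind GMM can be illustrated with a toy example (this is not Hansen’s actual estimator, and the data are simulated, not real): choose parameter estimates so that the sample analogues of the population moment conditions are driven to zero.

```python
import numpy as np

# Simulated daily returns stand in for real data (illustrative only).
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, 2500)

def moment_conditions(theta, r):
    """Sample analogues of E[r - mu] = 0 and E[(r - mu)^2 - var] = 0."""
    mu, var = theta
    return np.array([(r - mu).mean(), ((r - mu) ** 2 - var).mean()])

# With exactly as many moment conditions as parameters, GMM reduces to
# the classical method of moments and the solution is analytic:
mu_hat = returns.mean()
var_hat = ((returns - mu_hat) ** 2).mean()
print(moment_conditions((mu_hat, var_hat), returns))  # both entries ~ 0
```

The power of the general framework comes from over-identified cases, where there are more moment conditions than parameters and the conditions cannot all be set exactly to zero; how far they deviate then becomes a test of the model, which is how excess-volatility findings of the kind described above are formalised.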
Shiller showed the importance of social psychology for finance and economics using evidence from investor surveys. In the 2005 edition of his book Irrational Exuberance, Shiller extended his analysis to real estate, arguing that the real estate market was irrationally overvalued, and he predicted large problems for financial institutions with the eventual burst of the real estate market “bubble”. But he was also aware of the problems a bursting stock market bubble would have on retirement income from pension plans that rely on equity investments, and he wondered about the “curious lack of public concern about this risk.” More generally, he also pointed out that the “tendency for speculative bubbles to grow and then contract can make for very uneven distribution of wealth” that may even make us “question the very viability of our capitalist and free market institutions”. He saw an important role for policy to address these issues.
The first line of policy defence against asset price bubbles is monetary policy, but bubbles are hard to identify and policy makers are reluctant to ‘prick’ a bubble. Robert Lucas, the 1995 Nobel Prize winner, noted that the main lesson from the efficient market hypothesis “for policymaking purposes is the futility of trying to deal with crises and recessions by finding central bankers and regulators who can identify and puncture bubbles. If these people exist, we will not be able to afford them.” (The Economist, Aug 6th 2009)
The Prize Committee in its decision seemed to want to reconcile a short-run and a long-run theory of asset prices, with Fama finding that stock prices are unpredictable in the short run and Shiller showing that there is some predictability in the long run (Fama would not dispute that idea in principle, as his own research found that stock returns become more predictable the longer the period considered).
But knowing that Fama, as recently as January 2010, defiantly denied the existence of asset price bubbles because they cannot be identified and predicted, and that Shiller (along with others) recognises bubbles and calls the EMH argument that stock prices reflect fundamentals “one of the most remarkable errors in the history of economic thought”, it will be interesting to see how the three laureates interact on stage at the award ceremony in December in Stockholm.
Perhaps Hansen will have to play the role of a mediator with a humble remark like the one he made shortly after hearing about his award: “We are making a little bit of progress, but there’s a lot more to be done.”
OECD Journal Financial Market Trends
Today we publish the second of a summer series in which Kimberley Botwright of the OECD Public Affairs and Communications Directorate looks at OECD work through a Shakespearean lens.
Sixteenth century Venice was a global centre of merchant capitalism, and The Merchant of Venice offers an excellent examination of human behaviour and its effects on financial markets. The point of this article is not to dwell on the appalling anti-Semitism of the period, but rather on the story of the hapless eponymous character and his reckless friend.
With the majority of his wealth at sea, Antonio uses credit to leverage capital to lend to his friend Bassanio (“Try what my credit can in Venice do”). Bassanio requires funding to seduce the wealthy heiress Portia. On Bassanio’s behalf, Antonio borrows 3,000 ducats for a three-month period from Shylock, who offers a 0% interest rate but takes the promise of one pound (around half a kilo) of Antonio’s flesh as collateral.
By Act 3, the audience discovers that Antonio’s ships have sunk, leading to a catastrophic devaluation of his net worth. Unable to repay, he must pay the gruesome corporeal price under the terms of a notarized contract:
“Hath all his ventures failed? What, not one hit? / From Tripolis, from Mexico and England, / From Lisbon, Barbary and India? / And not one vessel scape the dreadful touch / Of merchant-marring rocks?”
Antonio is significantly over-leveraged and he overconfidently manages risk, based on an uncritical acceptance of the present. If only he’d read the OECD’s Future Global Shocks: Improving Risk Governance! He would have learned that disruptive events, such as a cargo ship sinking, can destabilise critical supply systems and have far-reaching economic effects.
He might also have learnt something about financial crises: “Arguably, financial crises both occur more frequently and produce more severe monetary damage than other types of risks described. There is a concern that the tools for risk analysis have not worked as well.” It goes on to emphasise that financial crises involve human, non-malicious choices and their re-occurrence should encourage us to search for new approaches to economic challenges and models “that use data on how agents actually behave.”
Bassanio provides an illustration of the erratic behaviour of individuals in financial markets. His justification for borrowing money from Antonio is based on the logic that if one shoots and loses an arrow, one should promptly shoot another in the same direction, in order to find out where the first went – not the most rational of approaches, seeing as it is very likely your second arrow will go the same way as the first. In short, Bassanio throws good money after bad.
Since the financial crisis, traditional economic models have become increasingly criticised for being blind to herd behaviour, network effects or information asymmetries and irrational action. Agent-based models (ABM) provide an alternative modelling approach. They focus on possible interactions between agents according to certain behaviour rules, running millions of simulations to approximate the millions of potential interactions between actors, gaining a better insight into possible outcomes of the complex system. In complex systems such as debt markets or financial institutions, shocks can be caused by external pressures (ships sinking) or internal (erratic individuals). It is therefore important to understand these systems at both the macro and micro-level.
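A minimal sketch of the idea can make it concrete (a toy model, not any ABM from the literature: the herding parameter, the private-signal rule and the linear price-impact rule are all invented for illustration):

```python
import random

def simulate(n_agents=200, steps=100, herd=0.7, seed=1):
    """Toy agent-based market: each step, every agent either imitates
    the previous step's majority (with probability `herd`) or follows
    a private random signal; net order flow moves the price."""
    random.seed(seed)
    price, prices = 100.0, [100.0]
    majority = 1                       # last observed crowd direction
    for _ in range(steps):
        orders = [majority if random.random() < herd
                  else random.choice([-1, 1])
                  for _ in range(n_agents)]
        net = sum(orders)
        majority = 1 if net >= 0 else -1
        price *= 1 + 0.001 * net / n_agents   # simple linear price impact
        prices.append(price)
    return prices

# Stronger herding produces larger, more persistent price swings:
calm = simulate(herd=0.1)
frenzied = simulate(herd=0.9)
print(max(calm) - min(calm), max(frenzied) - min(frenzied))
```

Even this crude rule set reproduces the qualitative point of the paragraph above: macro-level instability emerges from micro-level imitation, with no external shock required.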
Another important human aspect of financial systems is trust and expectations. Towards the end of the play, Antonio is dragged to court, with Shylock demanding his pound of flesh. While the presiding Duke of Venice initially proposes that Shylock might assume certain losses and forgive part of Antonio’s debt, “Forgive a moiety of the principal, / Glancing an eye of pity on his losses”, this raises deep concerns:
“It must not be; there is no power in Venice
Can alter a decree established.
‘Twill be recorded for a precedent,
And many an error by the same example
Will rush into the state. It cannot be.”
A major fall-out of the financial crisis was the possible creation of “moral hazard”: the expectation, or guarantee, that public authorities will bail out uninsured and unsecured creditors of systemically important banks. When such guarantees are perceived, behaviour incentives may be distorted.
As two OECD papers on implicit guarantees and banking in a challenging environment make clear, solutions for our modern day financial dilemmas lie in internationally coordinated responses. For example, the first paper suggests that an effective cross-border EU bank failure resolution network would lower the value (and danger) of implicit sovereign guarantees. The second notes that as banks deleverage and assets become renationalised, a European Banking Union would sever the link between weak sovereigns and weak banks.
But knowing what to do and doing it are two different things, as the quick-witted heiress Portia reminds us; “If to do were as easy as to know what were good to do…”
Today’s post is from Adrian Blundell-Wignall, Special Advisor to the OECD Secretary-General on Financial Markets. The view expressed here is his own and does not necessarily reflect that of any OECD government.
This week, the German Parliament’s Finance Committee invited Paul Atkinson and me to comment on a draft bank-separation law. The draft is strongly influenced by the 2012 Final Report of the High-level Expert Group on reforming the structure of the EU banking sector chaired by Erkki Liikanen.
Liikanen proposes assigning trading and available-for-sale securities above a threshold of 15-25% of total assets, and all activities related to market making, to a separate, well-capitalized subsidiary. This would maintain the advantages of the universal bank model in a holding company structure, but insured deposits could not be used to subsidize the trading activities. The OECD has long proposed a non-operating holding company structure for separation, ring-fencing and separately capitalizing the different activities without restricting a bank from offering a complete range of services to customers. In general terms Liikanen is similar to this. It also has the advantage of promoting a level playing field for bank competition with stand-alone securities trading firms.
However, the proposal involving a €100bn “separation trigger” for total assets held for trading and available for sale is not sensible. No bank under €400bn in total assets, with any amount of derivatives, would ever be considered as long as it kept its trading assets below 25% of the balance sheet: 24.9% of €400bn is still just short of €100bn. And if a bank with around €100bn of trading assets were subject to a plus or minus 10% volatility cycle in asset values, it could be separated and reunited as prices rose and fell.
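The arithmetic can be made explicit. Under one reading of the trigger (an assumed simplification: separation is considered when trading and available-for-sale assets breach either the €100bn absolute threshold or a 25% share of total assets), a bank sitting just below both thresholds flips in and out of scope with a 10% swing in asset values:

```python
def needs_separation(trading_bn, total_bn, abs_bn=100.0, share=0.25):
    """Assumed form of the Liikanen trigger: breach of either the
    absolute (EUR bn) or the relative (share of total assets) threshold."""
    return trading_bn > abs_bn or trading_bn / total_bn > share

# A bank with EUR 390bn of total assets keeping 24.9% in trading assets
# stays below both thresholds (24.9% of 390 is about EUR 97bn):
print(needs_separation(97.0, 390.0))          # False

# A +10% swing in the value of the same trading book trips the trigger
# (total assets held fixed here for simplicity):
print(needs_separation(97.0 * 1.10, 390.0))   # True
```

This is the flip-flop problem in miniature: a threshold defined on a volatile market-value aggregate makes the regulatory status of the bank itself cyclical.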
The Liikanen report urges the Basel Committee and the EU to deal with the shortcomings of model-based risk weighting approach in the capital rules, so that the trading subsidiary (in particular) is well capitalized and not subject to error. The Expert Group also stresses that the directives on resolution and bail-in are an essential complement to its separation proposal. Clarification and pre-notification of instruments that are not guaranteed and will be subject to bail-in should be transparent to promote trust. As far as possible, such instruments should be marketed to non-banks.
The Liikanen group also recommended strengthening boards, promoting risk management and disclosure, tackling incentive schemes, and sanctions to ensure compliance.
While the OECD has supported much of this over the years, we disagree with Liikanen on two important points, concerning separation and minimum capital standards.
The first major problem is that, while the idea of a threshold for separation is good, the Liikanen group has not chosen the right variable on which to base it. Recent OECD work has sought to explore empirically the factors that make a bank more or less risky, i.e. that take it towards or away from the default point. This research was necessary because policy after the crisis had to be made ‘on the run’, without enough detailed empirical evidence on which to base reform. Most of the research referred to in the Liikanen report pre-dates the crisis or relates to recent policy developments; none of it relates business model features to banks’ distance to default. Yet it is crucial to know which mechanisms are and are not supported by the data.
With respect to the business model features of bank risk, the OECD study shows that liquid trading assets, properly separated from the gross market value of (mainly over-the-counter, illiquid) derivatives, help to increase a bank’s distance from default and make it less (not more) risky. Derivatives, on the other hand, are overwhelmingly the business model feature that gives rise to interconnectedness risk and to default paths arising from illiquidity in crisis conditions (for example the massive margin calls of 2008-2009).
This makes intuitive sense too. Most derivatives are not standardized and trade over-the-counter, i.e. directly between the two parties without being supervised by an exchange. An institution can find itself in a position where it cannot operate because it doesn’t have the liquidity to meet immediate calls for payments on derivatives markets. Dexia is a recent example of such a failure, and if AIG’s derivative commitments had not been met from official sources, bank collapses through interdependence channels would have been difficult to contain. Liquid trading securities, on the other hand, can be sold precisely to meet margin and collateral calls —a very good thing.
This is a fairly major problem for the Liikanen report—they are not looking at the right threshold variable. The idea of a threshold makes sense, but it must be based on the key variable if banks are to be safer. In the OECD view this is derivatives: any bank with a gross market value of derivatives above 10%-15% should be considered for separation.
Putting to one side the empirical evidence for a moment, consider intuitively the case of Wells Fargo (appropriately converted to International Financial Reporting Standards – IFRS – derivatives concepts) and Deutsche Bank. Wells Fargo offers most essential services to its customers, has very low leverage, had no issues in the crisis and is one of the most profitable banks in the world. Wells Fargo received no payments from the US government in settlement of the AIG counterparty positions. Yet Wells Fargo would be considered for separation under Liikanen’s ‘percent-of-assets’ threshold test. Its trading and available-for-sale assets were around 21% in mid-2012, but its derivatives were only 6.5% of its adjusted balance sheet – a safe business model for interconnectedness risk according to the OECD research.
Deutsche Bank, on the other hand, with 40% of its balance sheet in derivatives and only 14% of liquid trading and available for sale securities would not be considered for separation on this rule. Deutsche Bank received a payment from the US government in settlement of its AIG positions equal to 37% of equity less goodwill. Enough said.
The second main issue after separation, minimum capital requirements, is dealt with extensively in the Liikanen Report, with calls for “…more robust risk weights…more consistent treatment of risk in internal models [and that] the treatment of real estate lending…should be reconsidered,…”
The Expert Group should have had the courage of its convictions. The core problem is the risk weighting system as proposed in the so-called Basel III international regulatory framework for banks. This introduces an illusory “risk sensitivity” that relates minimum capital requirements to “risk-weighted assets (RWA)”, instead of to actual balance sheets. This has evolved into a system of extreme complexity that invites institutions to look for regulatory ways to reduce RWA relative to total assets (including negotiating with supervisory authorities) rather than ensuring they really have enough capital, defeating the entire purpose of capital adequacy rules.
So long as capital requirements are based on RWA, whose relationship to the actual balance sheet is effectively a management tool, many banks (and the system as a whole) are likely to be under-capitalized. The best solution would be to scrap the risk-weight system at both global and European levels in favor of something vastly simpler and more effective. Failing that, the equivalent can be achieved by strengthening the role of the (non-risk-weighted) leverage ratio to the point where it overrides the risk-weight system.
Today’s article is from John Hulls, of the Cambiant Project at the Dominican University of California that uses a fluid dynamics modeling concept he developed to simulate economic performance. John is also an affiliate at Lawrence Berkeley National Laboratory, working principally in the area of environmental applications of the LBL Phylochip microarray technology.
A basic capitalist tenet is that the market represents the most efficient way to allocate capital. How well is it working?
We are rapidly evolving a fast-moving, increasingly cybernetically interlinked capital marketplace that, as Lord May observes in the Santa Fe Institute Journal, has become intertwined in ever-more complex interdependent patterns. He goes on to ask how much we, societally, are paying the financial sector to allocate capital. More importantly, is the sector allocating capital to further societal goals, or merely enriching itself and a narrow segment of the world’s population? Human nature is powerful. John Stuart Mill said, in Social Freedom: “Men do not merely desire to be rich, but richer than other men”.
Benjamin Friedman holds, in The Moral Consequences of Economic Growth, that “greater opportunity, tolerance of diversity, social mobility, commitment to fairness and dedication to democracy” derive directly from economic growth. He shows that even during stagnation – let alone recession and depression – those values can vanish easily. Brad DeLong observes, in reviewing Friedman, that if the majority of the people do not see an improving future, these values are at risk even in countries where absolute material prosperity remains high. Given rising political intransigence and loss of common social purpose in the US, and the rise of nationalistic political sentiments in Europe, the effects of increasing stagnation and inequality are becoming more evident, despite the financial sector’s phenomenal growth.
In a 2006 speech on the growing integration of the financial sector and the broader economy, Rodrigo de Rato, Managing Director of the IMF, noted its supposed general stability and growth: from 1990 to 2005 the estimated sum of equity-market capitalization, outstanding total bond issues (sovereign and corporate) and global bank assets rose from 81% to 137% of GDP, while over-the-counter derivatives markets tripled in the last five years of that period to $285 trillion – six times global GDP and 50 times the US public debt. So if the financial sector has worked, we should see a proportional acceleration of growth plus improved consequences for all of society.
This is not happening, as Cornia and Court report in Inequality, Growth and Poverty in the Era of Liberalization and Globalization. Global poverty reduction has stalled for 30-40 years, despite an approach to growth based on “…a neo-liberal policy package, [including] stringent focus on macroeconomic stability, liberalization of domestic markets, privatization, market solutions to the provision of public goods, and rapid external trade and financial liberalization.” They reveal that inequality has grown faster during the same period in the majority of countries for which data are available. The paper also shows that increased inequality greatly encumbers the climb from poverty and that excessively low or high levels of inequality impede growth, provoking various ills, including crime, social conflict and uncertain property rights. In the US, for example, bank employees were found to be signing thousands of foreclosure documents without checking the information in them, so-called robo-signings that rendered the documents illegal.
All the data seem to affirm Friedman’s assertion that all societal strata should participate if the moral benefits of economic growth are to be maximized. Further support can be found in Court’s conjecture about an optimum range of equality. This is confirmed by modeling work at Dominican University, discussed in a previous OECD Insights post, which shows that there is indeed an optimum level of equality for a given economic structure, usable for policy planning to ensure that capital is allocated to economic growth for public purposes. That returns us to Lord May’s point that we must know how much economies are ‘paying’ the financial sector to allocate capital, including payments to banks, sovereign funds, hedge funds, private equity, and the managers, often in major international banks, of the estimated $21-32 trillion of largely secret “offshore” financial assets.
The financial crisis and subsequent euro problems show that we are paying vast sums for a system that, as Joseph Stiglitz, former chief economist of the World Bank, points out, doesn’t allocate capital where it is needed, and causes capital flows that are pro-cyclical, exacerbating the peaks and troughs of business cycles. What efficient capital-distribution function is served by the approximately $1.5 trillion of daily flows sloshing about in the casino of OTC foreign exchange activities, or by the nearly 70% of all US market trades conducted algorithmically, without human intervention?
Keynes may have lost the 1944 Bretton Woods battle for a solution that transcended national financial self-interest, but his plans for an international clearing agency are prophetic, especially considering how the combined financial sector dominates national and international policy for its own ends. As Keynes said: “…no country can . . . safely allow the flight of funds for political reasons or to evade domestic taxation or in anticipation of the owner turning refugee. Equally, there is no country that can safely receive fugitive funds, which constitute an unwanted import of capital, yet cannot safely be used for fixed investment.” Right again, Lord Keynes.
This month marks the centennial of the birth of mathematician Alan Turing, the “father” of modern computing and artificial intelligence. To celebrate the occasion, we’ll be publishing a series of articles on modelling and economics. Today’s article is from David Leinweber, head of the Lawrence Berkeley National Laboratory’s Center for Innovative Financial Technology and author of “Nerds on Wall Street: Math, Machines and Wired Markets“
The Flash Crash wiped one trillion dollars off US stocks in 20 minutes on May 6, 2010, with most of the damage being done in only five minutes. But it took the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) nearly five months to produce a report on those five minutes. If it takes so long to reconstruct and analyze an event that has already happened, imagine the difficulties in trying to regulate and prevent such incidents in markets where 1500 trades are made in the time it takes you to blink and where dozens of globally interconnected exchanges and trading facilities have replaced a small number of centralized stock markets.
The Lawrence Berkeley National Laboratory (LBNL) is actually a Department of Energy national laboratory, but we work in a number of data-intensive scientific areas where detecting and predicting particular events is crucial, ranging from cosmology to climate change. In 2010, Horst Simon, then director of LBNL’s Computational Research Division (now deputy director of LBNL) and I co-founded LBNL’s Center for Innovative Financial Technology (CIFT) to help build a bridge between the computational science and financial markets communities.
At present, a basic tool in regulating financial markets is the “circuit breaker” that stops trading, and after the Flash Crash new circuit breakers were instituted that stop the trading of individual stocks if their price variations exceed a prescribed threshold. However, as different markets and venues become more interdependent, sudden halts in one market segment can ripple into others and cause new problems.
What’s needed is a system to detect and predict hazardous conditions in real-time to allow the regulatory agencies to slow down rather than stop markets. Energy networks do this with brownouts to prevent blackouts, but we can also seek inspiration in NASCAR racing, where, faced with a growing number of increasingly gruesome crashes as the cars got too fast for the tire technology of the day, officials introduced the yellow flag to slow the races down when things got too dangerous.
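The contrast between halting and slowing can be sketched as a pair of toy decision rules. The thresholds and the “SLOW” action below are illustrative assumptions for exposition, not actual regulatory parameters:

```python
# Stylized comparison of an on/off circuit breaker with a graduated
# "yellow flag" throttle. Thresholds are hypothetical, not the SEC's rules.

def circuit_breaker(price_move, halt_threshold=0.10):
    """Binary rule: halt trading if the price move exceeds the threshold."""
    return "HALT" if abs(price_move) >= halt_threshold else "TRADE"

def yellow_flag(price_move, warn_threshold=0.05, halt_threshold=0.10):
    """Graduated rule: slow the market down before conditions force a halt."""
    move = abs(price_move)
    if move >= halt_threshold:
        return "HALT"
    if move >= warn_threshold:
        return "SLOW"   # e.g. widen quote increments, lengthen match intervals
    return "TRADE"
```

The point of the second rule is that a market flagged “SLOW” still functions and can self-correct, whereas a halted market can only restart cold.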
Racetrack officials (like air traffic controllers or weather forecasters) can see trouble looming and intervene to prevent disaster. We are exploring the possibility of using supercomputers to survey markets in real time and turn on a “warning light” to advise regulators to slow things down when anomalies start to appear. Anomaly is in fact a rather bland term for some of the weirdness seen during the Flash Crash. For instance, at one point you could buy Accenture shares for one cent or for more than $30 within the same second.
Based on recommendations from traders, regulators, and academics, we have implemented two sets of indicators that have “early warning” properties when applied to the data for the period preceding the Flash Crash. The Volume Synchronized Probability of Informed Trading (VPIN) measures the balance between buy and sell activities using volume intervals rather than time intervals. A variant of the Herfindahl-Hirschman Index (HHI) of market fragmentation measures how concentrated trading is across exchanges, since fragmentation is considered a source of market instability.
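As a rough illustration of the two indicators, here is a simplified sketch, not the production implementations described in our paper. HHI is computed from exchange volume shares, and VPIN from the buy/sell imbalance across equal-volume buckets; the classification of bucket volume into buys and sells is taken as given here, whereas in practice it comes from a trade-classification rule:

```python
def hhi(exchange_volumes):
    """Herfindahl-Hirschman Index of trading concentration: the sum of
    squared volume shares across exchanges (1.0 = a single venue)."""
    total = sum(exchange_volumes)
    return sum((v / total) ** 2 for v in exchange_volumes)

def vpin(buckets, bucket_volume):
    """Simplified VPIN: average absolute buy/sell imbalance per
    equal-volume bucket, as a fraction of the bucket volume."""
    imbalance = sum(abs(buy - sell) for buy, sell in buckets)
    return imbalance / (len(buckets) * bucket_volume)
```

A single dominant exchange gives an HHI of 1.0 and four equal venues give 0.25, while a run of heavily one-sided buckets pushes VPIN towards 1 – the kind of toxic order flow observed before the Flash Crash.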
Because of the computational demands, computing indicators like VPIN and HHI in real-time will require high performance computing (HPC) resources. It will also need reliable data. For example, we discovered that different sources disagree on how many trades there were of Apple Inc at $100,000 per share on May 6, 2010.
Is real-time high frequency monitoring needed? The SEC and CFTC have announced their intention to direct many billions of dollars from the financial industry to this effort, which critics have dismissed as overkill. We disagree. It is worth spending money on ways to improve on regulatory approaches based on circuit breakers. Stopping trading is a very blunt instrument that does not allow the market to self-correct and stabilize, and can easily make a bad situation worse.
Our tests show that VPIN, HHI and similar indicators could provide early warning signals for a gradual slow-down mechanism to replace on/off circuit breakers, an opinion our high-frequency trading and academic collaborators strongly share. Furthermore, we believe the same approach, likely with additional computation, is applicable to financial market cyber-security, which is widely acknowledged as important but largely ignored in the regulatory debate.
For a detailed account of the work summarized above, see Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing.
On September 5th 1661, Louis XIV ordered D’Artagnan and his musketeers to arrest Nicolas Fouquet, the “Surintendant des finances”, for the capital offences of embezzlement and crimes against the state (or Louis XIV as it was known in those days). Fouquet was accused of ruining the king through exorbitant interest rates on sovereign debt as well as diverting some of the financial flows from lenders to the royal coffers into his own pocket.
Fouquet defended himself well though, and after a three-year trial was exiled rather than executed. However today he’s mostly remembered for parties that even the Sun King found a tad extravagant (although Louis did build the Palace of Versailles as a bigger, blingier version of Fouquet’s château at Vaux-le-Vicomte).
There’s a lot of truth in the popular image, but it doesn’t tell the whole story. Fouquet tackled problems that would be familiar to any European finance minister today, using means that are still part of the policy response to the current crisis such as cutting public spending, rescheduling debt and raising taxes and improving their collection. (He also used a few that aren’t so common or so blatant anymore such as selling public offices to his cronies.)
Fouquet understood something that is key to the present crisis: the need to restore confidence and get the economy moving. He did this using a number of instruments, including reassigning sovereign debt that had, in today’s terms, become junk bonds to solvent funds, and even providing collateral himself for sovereign borrowing.
So, restore trust, fix the financial system, stimulate growth. Three and a half centuries later, you can read a similar argument in a paper by Adrian Blundell-Wignall, Special Advisor to the OECD Secretary-General on Financial Markets. In Solving the Financial and Sovereign Debt Crisis in Europe, Blundell-Wignall looks at why the crisis is worse in Europe than elsewhere and what can be done.
Monetary union means that euro members can’t devalue their currencies to help exports, and pressures on international competitiveness are transmitted directly to the labour market, leading to increased unemployment. Some governments responded by allowing their deficits to grow, and debt with it. Moreover, monetary union has resulted in high levels of debt in the household and corporate sectors in many of the countries that are in the worst competitive positions, leaving little hope that savings can be spent to stimulate growth.
The crisis and recession have increased indebtedness, contributing to underlying financial instability. One of the main reasons the situation is worse in Europe is the nature of its banking system. European banks mix traditional business such as loans to firms and households with activities in capital markets. Countries with large capital-markets banks are heavily exposed to the sovereign debt of larger EU countries like Spain and Italy. Traditionally, holding this dull but dependable debt was a safe form of collateral for other activities, but the sharp price fluctuations now typical of sovereign debt trading affect the true value of this collateral and the price at which shares in this debt could be sold at any given time.
Deregulation and innovation in financial markets are to blame too. Apart from capital-markets banking, “re-hypothecation” – the practice of reusing the same collateral repeatedly – has grown massively. This increases risk, since the value of this collateral can drop suddenly, and the banks are doing deals for themselves using collateral originally pledged by clients. As the number of deals using the same collateral multiplies, so does counterparty risk, the risk that one of the parties involved can’t meet their obligations.
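How re-hypothecation multiplies gross exposure on a fixed pool of collateral can be shown with a stylized chain. The 90% reuse rate and four-link chain below are hypothetical assumptions for illustration, not figures from any actual market:

```python
def chain_exposure(collateral, reuse_rate, links):
    """Total gross exposure built on one piece of collateral when each
    party in the chain re-pledges a fraction of what it received."""
    exposure = 0.0
    pledged = collateral
    for _ in range(links):
        exposure += pledged      # a new deal is backed by this collateral
        pledged *= reuse_rate    # and a fraction is re-pledged onward
    return exposure
```

In this sketch, $100 of client collateral re-used at 90% through a four-link chain backs $343.90 of gross deals (100 + 90 + 81 + 72.90), so a sudden drop in the collateral’s value hits every counterparty in the chain at once.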
Blundell-Wignall argues that underpricing of risk is the core cause of the financial crisis and that excessive risk in banking can always be traced to two basic causes: too much leverage, and for a given leverage, increased dealing in high risk products. Far from acting to contain the risk of the proliferation of these products, such as derivatives, regulators cleared the way for them, for example by removing barriers to mixing different types of banking business such as those in the Glass-Steagall Act in the US.
At one time, derivatives were used for practical day-to-day business operations, such as an airline hedging against a big rise in fuel prices, but they rose from 2.5 times world GDP in 1998 (already a staggering figure) to 12 times world GDP before the crisis.
Derivatives trading needs collateral, and the price shifts we mentioned above can result in calls for collateral the banks can’t meet. This provokes a liquidity crisis, and since the banks don’t have time to recapitalise through earnings, they stop lending to businesses, especially small and medium-sized enterprises, adding a further twist to the downward economic spiral.
It’s easy to feel helpless in the face of such arcane and seemingly uncontrollable forces, but solutions exist. Fracturing the eurozone would be one, but while this may lead to a short-term improvement for certain countries, it would weaken the status of the euro as a global currency, increase pressure on countries that stayed in the euro, and create legal uncertainty about financial contracts in euros.
A more coherent approach would include solving the Greek crisis via a 50% or bigger “haircut” on its sovereign debt (reduction in its stated value) and granting the European Financial Stability Facility a bank license. The European Central Bank should continue to support economic growth and investor confidence via funding for banks and putting a lid on sovereign bond rates in key countries. Private banking should be reformed too, with investment banking separated from traditional retail and commercial banking.
That said, sending musketeers to arrest the financiers would appeal to many people.