The New Science of Finance

Don M Chance & Pamela P Peterson. American Scientist. Volume 87, Issue 3. May/Jun 1999.

Money fascinates most people. And whether endowed with a little or a lot, we all tend to want more. This common desire has produced a seemingly insatiable demand for popular books on money and investing, which shows that people with limited financial training are in awe of those who know, or profess to know, a great deal about the subject. Yet finance (the study of how money is acquired and invested) is a relatively young field, having emerged out of the shadows of economics after World War II. Since then, finance has evolved into a critically important pursuit, as evidenced by the influence it has had on so many people and institutions.

Although finance has only recently become a recognized academic discipline, its roots go back centuries. From the days when bankers were called moneychangers, financiers have had to perform tricky computations, borrowing frequently from higher mathematics. In recent years, they have also begun to embrace some elements of physics. To many scientists this marriage of quantitative fields has brought lucrative new jobs. For bankers and investors, it has spawned opportunities to expand the products they offer and to carry out previously intractable calculations.

The scientific character of finance arises largely from its preoccupation with risk. Since the time when sailing the high seas meant subjecting one’s life and fortune to great dangers and uncertainties, people have sought to analyze, understand and control the various hazards they face. The development of probability theory, which began in earnest during the 16th and 17th centuries with the works of Italian scholar Girolamo Cardano and the French mathematicians Blaise Pascal and Pierre de Fermat, has allowed risk to be studied, understood and if not reduced, at least faced with greater confidence and awareness.

Over the past half century, economists studying finance have taken the body of knowledge about how human beings behave when faced with uncertainty and translated it into mathematical descriptions of the way people obtain and invest funds. With the advances in computers and the development of increasingly powerful statistical techniques, finance has become a truly empirical science, demanding that its various experiments be as objective, accurate and repeatable as those in particle physics or microbiology. This evolution has not been confined to the hallowed halls of academe: Financial theories and tests are now as likely to be formulated at major financial institutions as at universities.

The importance of such research has not gone unnoticed. Since 1985 Nobel prizes in economics have gone to Robert Merton, Myron Scholes, William Sharpe, Franco Modigliani, Merton Miller and Harry Markowitz for their influential research on how investors set prices. Indeed, making such determinations, a process called valuation, is one of the central tasks of finance.

The Price is Right

Valuation is the science, and sometimes the art, of estimating what something in the future is worth today. The underlying principles involved are not new. For example, Alfred Marshall, a professor of political economy at the University of Cambridge, published a textbook in 1890 in which he discussed how the present value of an anticipated future benefit could be ascertained. His reasoning began with something that every schoolchild learns: Money in the bank grows considerably faster when the interest is compounded. More precisely, it multiplies by a factor of (1 + r)^t, where r represents the interest earned during one period and t is the number of elapsed periods. Translating future values to the present day requires only that (1 + r)^t be placed in the denominator to give the correct multiplicative factor.
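These two operations, compounding forward and discounting back to the present, can be sketched in a few lines of code (the figures here are illustrative, not drawn from the article):

```python
def future_value(pv, r, t):
    """Grow a present sum at interest rate r per period for t periods."""
    return pv * (1 + r) ** t

def present_value(fv, r, t):
    """Discount a future sum to today by placing (1 + r)^t in the denominator."""
    return fv / (1 + r) ** t

# $100 deposited at 5 percent per year, compounded annually for 10 years
fv = future_value(100, 0.05, 10)   # ≈ 162.89
pv = present_value(fv, 0.05, 10)   # back to 100.00
```

Discounting is simply compounding run in reverse: the two functions are exact inverses of one another.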

Similar calculations apply to the price of stocks. As long ago as 1938, the economist John Burr Williams of Harvard University argued that the appropriate price for a stock is the present value of all future dividends paid to its owner. In 1959, Myron Gordon (then at the University of Rochester) took this notion further, assuming that dividends increase gradually at a constant rate, which is indeed the pattern for many mature companies. Some elementary mathematical manipulations of infinite series will show that this assumption implies that the price of a stock today (denoted by P) equals the ratio of the next period's dividend (call it D₁) to the difference between the rate of return investors demand from the company ("the cost of capital" or "discount rate," k) and the expected growth rate of dividends (g). That is, P = D₁/(k – g). This formulation, sometimes called the Gordon model, can be widely applied because g is almost always smaller than k.
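Once D₁, k and g have been estimated, the Gordon model is a one-line calculation. A minimal sketch with hypothetical figures:

```python
def gordon_price(d1, k, g):
    """Gordon model: price = next dividend / (cost of capital - dividend growth)."""
    if g >= k:
        raise ValueError("the model requires g < k")
    return d1 / (k - g)

# next year's dividend $2.00, cost of capital 10 percent, dividend growth 4 percent
price = gordon_price(2.00, 0.10, 0.04)   # ≈ $33.33
```

Note how sensitive the answer is to the denominator: narrowing the gap between k and g from six percentage points to three would double the computed price.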

Financial analysts still use the Gordon model or one of its many variants to determine the value of certain stocks, although it is clearly not appropriate for those many publicly traded companies that have not yet paid any dividends. (In theory, investors buying stock in those firms are expecting to receive substantial dividends sometime in the future.)

Formulations like the Gordon model use the discount rate to account both for the time value of money and for the fact that the returns earned on securities also depend on the amount of risk involved, a connection that holds for many types of investments. For example, owning stock in small, upstart companies typically subjects investors to more variation from year to year (higher volatility) than owning stock in large, stable corporations, but the small companies generally provide higher returns.

The analysis of how the price of various assets reflects volatility has occupied many economists since Gordon first introduced his model. William Sharpe (then at the University of Washington) and John Lintner (then at Harvard University) independently developed the first formulation specifically to address this problem during the 1960s. Called the capital asset pricing model, their theory describes the returns on securities as comprising compensation for the time value of money and for the risk associated with overall movements of the stock market. In 1976, Stephen Ross (then at the University of Pennsylvania) published a more general method, which he named the arbitrage pricing theory. His approach is based on the idea that if the prices of equivalent assets become unbalanced, there will always be clever investors ready to take advantage of the misalignment.

Both the capital asset pricing model and arbitrage pricing theory are difficult to test definitively, so the theoretical foundation for establishing the price of stocks is far from solid. But the valuation of options (agreements that grant the purchaser the right to buy or sell something at a fixed price for a predetermined period of time) is the area where the science of finance is better equipped to handle risk.

An Optional Aside

Most of the progress in calculating the price of options has been quite recent, even though the need has long been present. Indeed, options have been traded for centuries. They served, in essence, as insurance policies for firms that needed to assure themselves that the future price of a commodity they bought or sold regularly would not rise or fall too dramatically. Yet it was not until the turn of the last century that Louis Bachelier, a student at the Sorbonne studying under the renowned mathematician Henri Poincaré, first addressed the problem of assigning prices to options. Poincaré was not impressed with such an applied dissertation topic, and, though approving the research, he gave Bachelier a mark of less than the highest distinction, thereby condemning the young scholar to teach at one of France's lesser-known institutions, where little was heard from him for the rest of his career.

Bachelier’s model was unrealistic (his theory allowed the fluctuating cost of commodities sometimes to become negative), and so his methods for determining the value of options were not reliable. Still, Bachelier’s mathematical description of erratically shifting commodity prices holds the distinction of having anticipated the formulation Einstein used in 1905 to describe Brownian motion, the jittering of small particles suspended in liquid that the Scottish botanist Robert Brown observed in 1827. Norbert Wiener subsequently refined the mathematics of Brownian motion at the Massachusetts Institute of Technology during the 1920s.

The equation of Brownian motion that plays so important a role in option pricing relates the shift in price (ΔP) that occurs over a small interval of time (Δt) to the expected rate of rise or fall of the price (symbolized by μ) and its inherent volatility (σ). The equation also involves the variable ΔZ, a random process characterized by a normal, bell-shaped distribution with a mean of zero and a variance proportional to Δt. Specifically, the equation states that ΔP = μPΔt + σPΔZ, where μ and σ can be functions of both time and price.
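The price process this equation describes is easy to simulate. The sketch below steps it forward in discrete time, drawing each random increment from a normal distribution with mean zero and standard deviation equal to the square root of the time step; the parameter values are illustrative, not taken from the article:

```python
import random

def simulate_price_path(p0, mu, sigma, dt, steps, seed=1):
    """Step dP = mu*P*dt + sigma*P*dZ forward through `steps` intervals of length dt."""
    rng = random.Random(seed)
    p = p0
    path = [p]
    for _ in range(steps):
        dz = rng.gauss(0.0, dt ** 0.5)   # normal increment with variance dt
        p += mu * p * dt + sigma * p * dz
        path.append(p)
    return path

# one simulated year of daily prices: 8 percent drift, 20 percent volatility
path = simulate_price_path(p0=100.0, mu=0.08, sigma=0.20, dt=1 / 252, steps=252)
```

Because the drift and volatility terms multiply the price itself, a path started at a positive price stays positive for any realistically small time step, which is exactly the feature Bachelier's original model lacked.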

Physicists recognize the variable P as describing generalized Brownian motion, a formulation that is widely applicable to a variety of physical and financial phenomena. This mathematical description of Brownian motion belongs to the family of stochastic differential equations, which are characterized by extremely rapid oscillations that decrease in magnitude as the time interval, Δt, shrinks. Ordinary derivatives and their corresponding integrals do not exist for these functions, necessitating the invention of something called stochastic calculus.

This new branch of mathematics blossomed in the period immediately following World War II from the pioneering work of the Japanese mathematician Kiyosi Itô. Probably his most influential contribution was the development of an equation that describes the evolution of a random variable driven by Brownian motion. Itô's Lemma, as mathematicians now call it, is a series expansion of a stochastic function giving the total differential. Just as Taylor's theorem leads to the total differential of ordinary calculus and is considered "the fundamental theorem of calculus," Itô's Lemma has become known as the fundamental theorem of stochastic calculus.

Although these mathematical abstractions may seem far removed from the nitty-gritty world of finance, they are in fact intimately linked. And it was M. F. M. Osborne, a physicist in the U.S. Navy, who first realized in 1959 that financial market prices followed the equations of Brownian motion that Einstein and Wiener had forged decades earlier.

Soon after Osborne’s observation, mathematical techniques for analyzing Brownian motion reached business schools and economics departments, where scholars applied them to one of the perennial problems of finance: the valuation of options. Economists then created various new pricing schemes, but all of these prescriptions either required knowledge of expected changes in the price of the underlying asset or other measures that were equally hard to quantify, such as how investors react to uncertainty.

A tractable solution emerged only after Fischer Black, then an employee of the consulting company Arthur D. Little, and Myron Scholes, a young professor of finance at MIT, applied the concept of arbitrage to the problem. Arbitrage relies on the fact that two financial instruments (or combinations of financial instruments) that produce the same return in every situation should, logically, sell for the same price-and clever investors will quickly take advantage of cases where this premise is violated. The arbitrageur purchases the security or portfolio where the price is low, sells it where the price is high and profits at no risk. If financial markets are working properly, such opportunities should not exist.

Black and Scholes applied this principle to the strategy of purchasing some stock and simultaneously selling someone else an option to buy that stock at a fixed price. They showed that the risk in owning the stock can be eliminated by continually revising the ratio of options sold to stock owned. The resulting combination should therefore earn the equivalent of the return offered by buying risk-free bonds. From there Black and Scholes backed out the option price from a parabolic partial-differential equation based on the premise that stock prices exhibit Brownian motion. Black held a bachelor’s in physics and a doctorate in applied mathematics, but he was not a specialist in differential equations and only later learned that the equation he solved could be transformed into the heat-diffusion equation of thermodynamics, for which the solution was already known.

The result is now called the Black-Scholes model, but Robert C. Merton, a young economist also at MIT (working independently but in contact with Black and Scholes), derived the same solution at around the same time. Merton graciously delayed publication of his article in deference to Black and Scholes, whom he felt deserved primary recognition. After some difficulty convincing journal editors of its merits, their article appeared in The Journal of Political Economy in the spring of 1973, coinciding, as it happened, with the opening of the Chicago Board Options Exchange, the first organized facility for trading options in the U.S.
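The published solution can be coded in a few lines. The sketch below prices a European call in the standard notation (stock price s, strike price x, risk-free rate r, volatility sigma, time to expiration t in years); the specific inputs are illustrative:

```python
from math import erf, exp, log, sqrt

def norm_cdf(z):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def black_scholes_call(s, x, r, sigma, t):
    """Black-Scholes value of a European call option."""
    d1 = (log(s / x) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - x * exp(-r * t) * norm_cdf(d2)

# at-the-money call: stock at 100, strike 100, 5 percent rate,
# 20 percent volatility, one year to expiration
price = black_scholes_call(s=100, x=100, r=0.05, sigma=0.20, t=1.0)   # ≈ 10.45
```

Notably, the expected return of the stock (the μ of the Brownian-motion equation) appears nowhere in the formula; eliminating it was precisely the contribution of the arbitrage argument.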

The Black-Scholes model found its way into finance textbooks over the next decade, and by the middle of the 1980s, Black was busy putting his theories to work at the New York investment firm of Goldman Sachs. Many other economists followed Black from academe to investment banking and brokerage houses. And when the next generation of business school graduates reached the working world filled with knowledge of the Black-Scholes model, a revolution ensued. Corporations and investment fund managers were barraged with new products, such as swaps, structured notes, asset-backed securities and exotic kinds of options, which integrated well with the array of forward contracts, futures contracts and the standard options that had been trading for many years. These instruments collectively came to be known as derivatives, their values being derived from the values of stocks, bonds, currencies or commodities.

Derivatives have become increasingly elaborate, owing not just to the ingenious scheming of high-tech financiers but also to a recognition that risk is both complicated and pervasive. The new instruments allow the buying and selling of financial uncertainty in its many incarnations, so that firms needing to reduce risk can transfer it to firms willing to bear it. The theories of Black and Scholes laid the groundwork for this complex edifice, which came to be called financial engineering. And in 1997 the Nobel committee awarded its prize in economics to Merton and Scholes. Black had died two years earlier and thus could not be named for the prize that he, too, clearly deserved.

For many scientists, the burgeoning market in derivatives and the quantitative nature of their pricing has brought a bonanza. Mathematicians, physicists, systems engineers and computer scientists ("quants" in the parlance of the trade) are now widely employed at brokerage firms and large corporations.

Wanna Swap?

Much of the quantitative work in finance seeks to establish the value of new and oftentimes complex derivatives. One of us (Chance), for example, has recently studied equity swaps, transactions between two parties in which the first agrees to make a series of payments to the second and the second agrees to make a series of payments to the first, where at least one of the two sets of payments depends on the course of a stock price or stock index.

People make equity swaps to lessen the financial hazards associated with the ups and downs of the stock market. Some executives, for example, use these swaps to rid themselves of the risk of being heavily invested in their own companies while maintaining voting rights on the shares they, technically, still hold. An executive owning 30 percent of the shares of her firm, which might be worth many millions, could enter into an agreement to pay a bank on a quarterly basis the return on her stock plus dividends. The bank in turn pays the executive a fixed interest rate. How can the bank decide what rate to offer without knowing the future returns to expect from the stock?

Working with Donald Rich of Northeastern University, Chance realized that no definitive system for pricing these instruments had been published and, after some study, concluded that what at first glance seems a thorny problem is really no problem at all. The solution is merely an extension of the brilliant insight that Merton, Black and Scholes brought to the problem of pricing options a quarter century ago: Risk premiums do not need to be explicitly predicted if one applies the principle of arbitrage.

This conclusion is most obvious for a swap that involves only one payment (which, technically, would be considered a forward contract). For example, suppose the firm of Goldman Sachs agrees to pay an executive of Microsoft a fixed rate of interest, r, and the executive agrees to pay the firm the return on Microsoft stock, which (because this stock pays no dividends) is simply the percentage change in the stock price. The two parties agree to make the transaction at the end of a year, with payments computed as though $100,000 were invested. Assume that the rate of interest on a one-year loan from a bank is also exactly r.

If the return on the Microsoft stock over the year turns out to be x, the net cash flow to the executive will be $100,000(r – x), which could be positive or negative. For the firm, it is the same amount with an opposite sign, $100,000(x – r). An alternative way to accomplish the same thing would be for Goldman Sachs to borrow $100,000 from a bank and purchase that much worth of Microsoft stock. After a year, the firm can sell the Microsoft stock and pay off the loan. Goldman Sachs earns $100,000(x) from the stock but loses $100,000(r) from paying interest on the loan. In total it earns $100,000(x – r)—generating the same cash flow as a swap for which the rate of fixed payment is equal to the going rate of interest at the bank.
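The equivalence of the two strategies can be verified mechanically. A minimal sketch using the article's $100,000 example (the 6 percent rate and the sample returns are hypothetical); the two payoff functions agree for every stock return x:

```python
def swap_payoff_to_firm(notional, x, r):
    """Swap: the firm receives the stock return x and pays the fixed rate r."""
    return notional * (x - r)

def replication_payoff(notional, x, r):
    """Replication: borrow the notional at r, buy the stock, sell a year later."""
    stock_gain = notional * x   # proceeds from the stock, net of its purchase cost
    interest = notional * r     # interest owed on the bank loan
    return stock_gain - interest

# identical cash flows whether the stock rises 25 percent, stays flat
# or falls 10 percent over the year
for x in (0.25, 0.0, -0.10):
    assert abs(swap_payoff_to_firm(100_000, x, 0.06)
               - replication_payoff(100_000, x, 0.06)) < 1e-9
```

The cost of setting up the replicating portfolio therefore pins down the fair fixed rate of the swap, with no forecast of the stock's return required.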

For realistic equity swaps, which involve a series of payments spaced over time, the algebra becomes considerably more involved. But this simple example nevertheless illustrates the principle behind the calculation: The price of an equivalent portfolio determines the price of the derivative.

One might ask whether the underlying assumption (that the market operates so well that opportunities for arbitrage are few and far between) is valid. The answer, in short, is that modern financial markets do even out prices rapidly when they get out of kilter. In fact, real markets act in ways that fit closely (though not exactly) with the ideals of economic theory.

A Well-Oiled Machine

Many economists have mounted studies of market efficiency and found that the prices of the assets being traded quickly reflect the news available to all investors. This line of research has roots that go back several decades, but it began to mature with the work of Eugene Fama, who published an influential paper on market efficiency in 1970 while he was at the University of Chicago.

Fama classified efficiency according to how traders react to various types of information. If the values of the assets being bought and sold take into account all past prices and other market-generated figures (such as the number of transactions), the market would be, in Fama's terminology, weak-form efficient. If values also mirror all publicly available knowledge, the market would be semi-strong-form efficient. In that case, trading on the basis of information available to all investors does not help any of them earn abnormal returns, profits in excess of what one expects given returns on the market and the inherent volatility of the stock being traded. (That is, performing financial analyses of companies will not help you beat the market; other investors have already done so, and the prices of the stocks have adjusted immediately in response.) If the values already reflect not only all publicly available information but also all private or "insider" knowledge, Fama classified the market as strong-form efficient.

The bulk of the evidence indicates that U.S. stock markets are at least weak-form efficient. But as with much of the analytical work in economics and finance, conclusions sometimes differ. For example, some controversial studies purport to show regular annual patterns (for example, a rising “January effect”) in the prices of some stocks.

One of us (Peterson) has recently examined whether the stock market exhibits Fama’s semi-strong form of market efficiency. The investigation involved how the U.S. stock markets respond to announcements of earnings that are higher or lower than expected. If the profits of a company are better than forecast, the price of its stock generally rises; if earnings are worse, the stock price usually plummets. In either case, the stock should quickly reach a new equilibrium if the market is efficient.

Several studies conducted over the past two decades have indicated that after the public declaration of earnings that are higher than expected, savvy investors may be able to obtain abnormal returns for as long as 40 days before the news filters through the market and the rising stock price for the prospering company reaches a plateau. But such analyses are questionable. Because investors receive countless reports about the economy, financial markets and individual companies, stock prices bounce up and down constantly in response to the continual bombardment of information. It thus proves difficult to judge just when an upward jog in price after a report of surprisingly robust earnings is truly a reaction to that statement. And even if the cause-and-effect relation is obvious, it may be hard to determine exactly when the stock attains the new level of equilibrium.

One solution is to look at many companies, aligning the records of returns on their stock by the date of their earnings announcements and averaging over the complete set. The results of that exercise demonstrate semi-strong market efficiency, with stock prices typically responding to new information within about 15 minutes. Interestingly, buying stocks within one or two days following the announcement of earnings that are better than expected sometimes provides abnormal profits-although it is not clear whether the modest gains that can be garnered in this way would compensate for the various fees brokers charge for making the transaction.
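The alignment-and-averaging procedure itself is simple; the subtlety lies in the data. A minimal sketch with hypothetical abnormal returns (each inner list is one firm's daily abnormal returns, already aligned so that the same index marks the announcement day):

```python
def average_abnormal_returns(firm_returns):
    """Average abnormal returns across firms, day by day relative to the announcement."""
    n = len(firm_returns)
    days = len(firm_returns[0])
    return [sum(firm[d] for firm in firm_returns) / n for d in range(days)]

# three firms, five trading days each; index 2 is the announcement day
aligned = [
    [0.001, -0.002, 0.031, 0.004, 0.000],
    [-0.003, 0.001, 0.024, 0.002, 0.001],
    [0.002, 0.000, 0.027, 0.003, -0.001],
]
avg = average_abnormal_returns(aligned)   # the spike at index 2 marks the reaction
```

Averaging across many firms washes out the idiosyncratic jitter in any one stock, leaving the shared response to the announcement visible in the aggregate.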

There is limited formal research on whether markets qualify as strong-form efficient. But the general suspicion-confirmed by many well-publicized prosecutions-is that there are hefty profits to be made if investors trade illegally using insider information.

Expert Attention

Financial markets are indeed vulnerable to the ills of insider trading, but they are for the most part fair to investors. And although many of these people feel compelled to learn the nuances of financial engineering, few truly need to be able to manipulate infinite series or solve parabolic partial-differential equations. Nor do they need to become experts in chaos theory, Benoit Mandelbrot’s now-famous fractal geometry (which, ironically, he had first considered in a financial context before applying it to the natural sciences) or neural network programming-all hot areas of research for the many quantitatively minded professionals now studying finance in academic settings and on Wall Street.