An admittedly flawed history of stock markets and investment strategies (39:56)
Videos are available in one organized Quant 101 Playlist on YouTube.
Welcome. Today's goal is to understand the dynamic and changing direction of stock markets to see where we've been and where we're headed next.
I'm Paul, and if you're like me, then you've heard way too many pitches on investment processes and philosophies. It seems as though every person on the planet has a unique approach, which confuses beginners.
So here we will add some structure with a review of stock market history and conveniently segment approaches onto a grid of four strategies.
This tutorial sits within a series called Quant 101 where we focus on equity portfolio management. This one in particular is more of a historical flyby than our more traditional in-depth tutorials. A transcript of the video can be found at the first link in the video's description and it sits here as a reference should you need a reminder.
Let's talk about our plan here today.
First, we will cover why knowing the timelines of innovation in Finance is important, including themes to look out for during this slice of history relevant to portfolio managers and investment analysts.
Second, we introduce five schools or paths practitioners and scholars often take as a convenient way to organize readings and academic research.
Third, we connect the decades, showing the links between advancements leading us to where we are today.
Fourth, we gather takeaways or themes as we look forward to future directions as you plan your career, which could be to build an asset management firm.
And in our next episode we will pick up where we left off in the last tutorial with a stock return histogram and proceed to calculate four important risk measures.
So why does history matter?
First, performance is everything.
Our focus here is on stock investments at the Institutional level and what differentiates investment managers at this level is performance.
So this historical backdrop of events that helped shape where we are today will help the aspiring portfolio manager gain a broad view unlike what is found elsewhere. Material here is drawn from a variety of popular books, textbooks and academic journal articles, and presented from a practitioner's viewpoint.
In my experience, the best portfolio managers understand the strengths and weaknesses of their competitors and can act accordingly, which is why we will cover multiple viewpoints. Investing is a competitive exercise and it, like the study of economics, is a behavioral science after all so knowing the human aspect is as important as knowing the numbers.
The history of Finance is highly interconnected. Not only that, but history-makers reacted to what was going on at the time. For example, the Great Depression in the 1930s spawned a great deal of new regulation. And later, in the 1950s, the dissemination of computing power ushered in advancements in Portfolio Theory.
The scope, or focus, of this timeline and this series is on publicly-traded liquid securities, and specifically common stocks. Liquid securities are those traded on public exchanges with prices quoted at least daily.
A second focus will be on portfolios. We will not focus on individual bonds, because trading and valuation concepts differ; however, some of the risk modeling concepts covered here are universal and cover any type of risky security.
Much of the history presented here is based on academic advancements and early work published at the University level, at schools such as the University of Chicago, Stanford and Berkeley. These advancements then made their way into the hands of practitioners around the world.
Also, while reviewing this history you will see one theme recur again and again: computers changed everything.
As in so many other industries, the adoption of computing power has been a constant influence over the last 60 years, and asset management is no different.
A key to smartly crafting your skill and your career will be to anticipate these changes and position yourself accordingly.
Next, let's add structure to this chronological timeline.
First, under the heading "News, Structure and Competition" we see the backdrop for what was going on during each timeframe. This includes stock market crashes, wars and the evolution of businesses due to competition.
I have broken out subsequent advancements in five categories and we will see how these relate to four broad investment strategies or philosophies.
In the first category, we will cover advancements in Portfolio Theory, and include the mathematical and scientific aspects of investing in stocks, including studies on efficient markets and portfolio risk, mostly from scholarly articles.
Advancements often arise from competitive tit-for-tat arguments published by scholars with different viewpoints. Here we see people as passionate about their views as they are about religion or politics. While there are a lot of opinions, there are no correct answers. You will see this philosophical tug-of-war recur beyond just this tutorial, and throughout this series.
Fundamental Analysis is about reviewing financial statements, projecting future dividends, or cash flows, and then discounting them back to the present to derive an intrinsic value. The analyst then compares this intrinsic value to the current market price and makes a buy or sell recommendation.
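To make the discounting step concrete, here is a minimal Python sketch of the workflow just described: project dividends, discount them to the present, and compare the result to the market price. This is an illustration only, not any particular analyst's method; the dividends, discount rate and terminal value are hypothetical numbers.

```python
def intrinsic_value(dividends, discount_rate, terminal_value=0.0):
    """Present value of a list of projected annual dividends.

    dividends      -- projected cash payments, one per future year
    discount_rate  -- required annual rate of return (e.g. 0.08 for 8%)
    terminal_value -- assumed value at the end of the projection horizon
    """
    pv = 0.0
    for year, d in enumerate(dividends, start=1):
        pv += d / (1.0 + discount_rate) ** year
    # discount the terminal value from the final year as well
    pv += terminal_value / (1.0 + discount_rate) ** len(dividends)
    return pv

# Hypothetical example: five years of growing dividends, an 8% discount
# rate, and an assumed terminal value of $40 in year five.
value = intrinsic_value([2.00, 2.10, 2.20, 2.31, 2.43],
                        discount_rate=0.08, terminal_value=40.0)

market_price = 35.00
recommendation = "buy" if value > market_price else "sell"
```

With these made-up inputs the intrinsic value works out to roughly $36, slightly above the $35 market price, so the sketch would lean toward a buy.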
Quantitative Analysis is the practice of comparing financial statement ratios, and other factors, across firms to determine which are over-valued and which are under-valued. It was born following the broad dissemination of computing power in the 1950s.
Advancements here are helpful for any portfolio manager to help set weights to stocks in a portfolio and to evaluate risk and performance.
Next, Behavioral Finance, is the study of human psychology as it relates to financial decision-making.
Portfolio Theory, discussed earlier, was born in academia under two assumptions. First, that investors are rational, and second that the distribution of returns is bell-shaped, with thin tails.
We know history refutes these assumptions from our frequent bouts with bubbles and crashes, and it is the Behavioral Finance group that puts some academic muscle behind the argument for irrational behavior. Investors are human after all; humans are flawed and have been known to behave irrationally.
Technical Analysis is the practice of using historical price and volume information to predict future trends. Think of this as trader-speak, and while more academically-focused individuals gloss over Technical Analysis, we won't here.
While managing billions in assets during the 2008 financial crisis, I gained an appreciation for the human element of investing. And nothing captures the human element better than the minute-by-minute supply-and-demand relationship as reflected in a stock's price and captured best by Technical Analysis.
The price is the price after all.
Now how are these organized in a practical way? Meaning, how do practitioners communicate to clients about their investment processes and how do these all come together?
For this we have a convenient grid with three active investment strategies, Fundamental, Technical and Quantitative, plus one Passive strategy, also called Indexing.
Each philosophy has its own strengths and weaknesses.
Those on the left benefit from processes that are:
Those on the right benefit from processes that are:
To be successful in this field we need to understand the scientific side and the behavioral side. Investing is a behavioral science after all.
I should mention that most investment practitioners use a blend of strategies in the end. Someone might say they are purely Passive, investing in a variety of index funds, for example. But they still need to allocate a portfolio among several funds, and that is an active decision. Asset allocations don't just fall from the sky; someone must decide how much to invest in different funds, allocating between stocks, bonds and cash.
Also, many Quantitative processes incorporate Fundamental and Technical signals.
Portfolio managers who employ Fundamental and Technical analysis still need tools from the Quantitative camp, maybe not to analyze stocks, but to build and analyze portfolios. They also need Passive indexes to benchmark and compare how they are doing against other managers. So again, they're all interconnected.
Yes, maybe from a marketing standpoint, investment managers have an incentive to align themselves with one investment philosophy and claim they are "pure", and sitting in the corners, but in practice and behind the scenes, a blend is more realistic.
With that as a backdrop, let's get into the history and don't worry about memorizing all of this. Instead, see it as one person's view of how history shaped where we are today, with an eye on where we are headed in the future.
That's where this review is admittedly flawed: there are far too many industry and academic buzzwords here for a beginner to grasp, but the point is to see the connections and biases offered by the different camps. This will likely lead to frustration, opinions, debate, maybe even anger, and that's okay.
Where this short history can add value is in giving a newcomer a high-level overview of the landscape, offering breadth instead of depth, with context that can't be captured in deep dives into financial modeling tutorials, or from biased asset managers whose objective is to raise money rather than educate newcomers.
That said, let's get started.
Our first timeframe is 1600 to 1899. The trading of securities started in the 1600s, and prior to trading the ownership of public stocks, investors traded rights to real estate, private companies and even tulip bulbs. Interestingly, much of the surviving history of these times revolved around financial euphoria and the madness of investors during boom-and-bust economic cycles.
Here I won't dive deeply into the irrational behavior of investors and the history of financial bubbles, but if the topic interests you, I recently published a video on Bitcoin, titled Is Bitcoin in a bubble?, a relevant topic making news in 2018.
There I lean heavily on the work of John Kenneth Galbraith, who spent much of his professional life researching and writing on financial euphoria. His work explains why, to many people, trading on stock markets seems like one big casino.
The South Sea Company, a joint-stock company based in England, raised money in 1711 to fund voyages to the Americas. Thanks to a government-granted monopoly, its shares rose dramatically until the collapse of 1720, one of the first recorded bubbles in history.
During this same time block, stock exchanges were established. The Amsterdam Stock Exchange is considered the oldest, established in 1602 for trading shares of the Dutch East India Company; other stocks followed later. The London Stock Exchange was established in the late 1600s. Then, in 1790, the first exchange in the United States opened in Philadelphia, followed by the New York Stock Exchange in 1817.
Also during this period, Gustave Le Bon, a French researcher, became known for his writings on crowd psychology, herd behavior and the study of human traits. His 1895 book is regarded as the first broad treatment of crowd behavior, and it was only natural that people began using his research to explain the irrational behaviors exhibited by investors.
Let's move on to the timeframe between 1900 and 1929.
Here we start with another speculative event, captured during the Roaring Twenties. This post-World-War-One period brought financial boom times and equity market values rose materially throughout the decade. Many Americans were moving to cities to benefit from the post-war expansion and advancements brought on by the dissemination of information and entertainment in new forms like film, music and radio.
This euphoria all came to a halt with the Wall Street Crash of 1929. To give a sense of how euphoric the stock market was at the time, investors rode a winning streak that saw the Dow Jones Industrials increase roughly sixfold during the decade. As with the popping of other financial bubbles, confidence bred confidence, and in the late 1920s participants increasingly used leverage in trading accounts.
There were warning signs, as there usually are. There were two material dips in March and May of 1929, followed by a warning from the Federal Reserve. Despite this, stocks recovered and climbed 20% through September, and after a difficult month following a London Stock Exchange crash, selling returned and on October 24, 1929, equities lost 11% at the open only to recover much of that loss when bank executives stepped in to stem the fall.
Over the weekend, the mainstream media covered the events and selling picked up pace the following week. By mid-November, equity values fell by 50%. It took another two and a half years for the market to find a bottom and the index was down almost 90% from its peak. It took 25 years, or until 1954, for the Dow Industrials to regain the value set at the peak of 1929. These events changed the course of financial markets for good.
I mentioned the Dow Jones Industrials. The index was first published in 1896 and remains, to this day, one of the most recognized stock market indexes. So even before the 20th century began, Charles Dow, with Edward Jones, had developed the first equity indexes; he was also one of the founders of the Wall Street Journal. Beyond newspapers and indexes, Dow was influential in Technical Analysis: much of his work, organized as Dow Theory around 1903, is still highly regarded by Technical Analysts today.
Let's move on to the post-crash time period.
The popping of the bubble from the Roaring Twenties brought on the greatest economic crisis the United States has seen. A lot of soul searching and reflection went into the post-crash period and lawmakers were busy creating regulations to ensure that the speculative excesses wouldn't occur again. Many of these rules are still in place today.
The Securities Act of 1933 codified the registration of securities with the US Securities and Exchange Commission (SEC). This shifted securities regulation from state governments to the national government.
In 1940, the Investment Advisers Act brought the activities of investment advisers under the realm of the SEC as well. These Acts are still in place today and govern the practices of fee-based investment advisers, including mutual funds and some financial planners and consultants.
During this period of self-reflection, investors were advancing their knowledge as to how to value stocks. In 1934, Benjamin Graham and David Dodd published a book called Security Analysis which described an orderly review of a company's financial statements. This work was instrumental in the development of Fundamental Analysis.
Then in 1938, John Burr Williams wrote The Theory of Investment Value which further developed the Fundamental Analysis concept of intrinsic value. This established a structure to the technique called discounting, which is broadly in use today.
In the meantime, and also in 1938, a book titled The Wave Principle credited Ralph Nelson Elliott with new principles in Technical Analysis, equating security prices to investor behavior and crowd psychology.
Next, during the period of the 1940s, much of the world was focused on World War II. At that time, company-related information was primarily distributed in the form of charts, drawn by a select few companies that had access to the data and the capability to distribute the charts to traders.
And in 1949, Benjamin Graham detailed what was referred to as a 'cigar butt' approach to valuing companies. In other words, seeking companies that investors gave up on, and tossed to the ground. His work popularized techniques to read and interpret financial statements and added fundamentals as an important alternative to Technical Analysis.
Earlier, in 1944, John von Neumann and Oskar Morgenstern published foundational work on the expected utility of investors, adding academic structure to the Behavioral and Fundamental aspects of investing.
And in 1948, Robert D. Edwards and John Magee published a book titled Technical Analysis of Stock Trends which advanced many concepts in Technical Analysis.
Next, in the decade of the 1950s, with the advent and assimilation of computers, more data could now be included in the analysis of stocks. Computers made their way into many financial applications and ushered in advancements in stock and portfolio risk.
With the help of this computing power, the academic community got more involved. In 1952, Harry Markowitz developed Modern Portfolio Theory: the view that rational investors seek to maximize utility by weighing expected return against expected variance.
This work advanced portfolio theory to the mainstream, and Markowitz was eventually awarded a Nobel Memorial Prize in Economic Science for his work. Thereafter, studies performed in schools of Economics, with their emphasis on equilibrium and normal distributions, helped bring scientific rigor to the analysis of stock prices. This basically offered an alternative to the more subjective forms of active portfolio management used at the time.
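The mean-variance trade-off at the heart of this theory can be sketched in a few lines: a portfolio's expected return is the weighted average of its assets' returns, while its variance also depends on how the assets move together. The two assets and their statistics below are hypothetical.

```python
def portfolio_stats(weights, exp_returns, cov_matrix):
    """Expected return and variance of a portfolio.

    weights     -- fraction of capital in each asset (sums to 1)
    exp_returns -- expected annual return of each asset
    cov_matrix  -- covariance matrix of asset returns
    """
    n = len(weights)
    # expected return: weighted average of asset returns
    exp_ret = sum(w * r for w, r in zip(weights, exp_returns))
    # variance: double sum over the covariance matrix
    var = sum(weights[i] * weights[j] * cov_matrix[i][j]
              for i in range(n) for j in range(n))
    return exp_ret, var

# Two hypothetical assets: a stock (10% expected return, 20% volatility)
# and a bond (4% expected return, 5% volatility), correlation 0.2.
cov = [[0.20 ** 2,           0.2 * 0.20 * 0.05],
       [0.2 * 0.20 * 0.05,   0.05 ** 2]]
ret, var = portfolio_stats([0.6, 0.4], [0.10, 0.04], cov)
```

Note that the portfolio's volatility (the square root of `var`) comes out below the weighted average of the two assets' volatilities; that gap is the diversification benefit Markowitz formalized.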
Further dashing the hopes of active managers, and especially Technical Analysts, was the work of Maurice G. Kendall, who in 1953 wrote The Analysis of Economic Time Series.
Kendall was one of the first researchers to use the term 'random walk' to refer to the pattern of stock prices. This was a comprehensive academic study on determining if patterns existed.
Technical Analysts believe that patterns in security prices do exist. Kendall reasoned that if such a pattern could be found, someone could devise a strategy to pick winning stocks and ride them to guaranteed profits, and the pattern would soon be traded away. He eventually concluded that prices were not moving irrationally; rather, they moved in a random fashion as investors digested information on each specific security.
In 1956 and then again in 1959, Myron J. Gordon and Eli Shapiro advanced many of the concepts from John Burr Williams before them, to give practitioners a set of formulas for projecting future dividends and then to back out growth rates based on current market prices. Their Gordon Growth Model is still given a lot of emphasis by those in the Fundamental Analysis camp, even today.
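The Gordon Growth Model can be shown in a short sketch: price a stock from next year's dividend, a required return and a constant growth rate, or run the formula in reverse to back out the growth rate implied by the market price. All inputs here are hypothetical.

```python
def gordon_price(next_dividend, required_return, growth_rate):
    """Gordon Growth Model price: P = D1 / (r - g); requires r > g."""
    return next_dividend / (required_return - growth_rate)

def implied_growth(next_dividend, required_return, market_price):
    """Solve P = D1 / (r - g) for g: g = r - D1 / P."""
    return required_return - next_dividend / market_price

# A $2.00 dividend next year, a 9% required return, 4% perpetual growth:
price = gordon_price(next_dividend=2.00, required_return=0.09, growth_rate=0.04)

# Or, given a $50 market price, what growth is the market implying?
g = implied_growth(next_dividend=2.00, required_return=0.09, market_price=50.0)
```

With these inputs the model price is $40, and a $50 market price implies 5% growth, which is exactly the kind of back-of-the-envelope check Fundamental Analysts still run.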
At that time, societies of Fundamental Analysts in large financial centers began to spring up, giving practitioners a venue to share ideas in the valuation of stocks.
Then, in 1963, 284 candidates sat for the first Chartered Financial Analyst (CFA) exam. The administering association, later known as the Association for Investment Management and Research, changed its name to the CFA Institute in 2004.
Now there are well over 100 thousand CFA Charterholders in 60 countries. Many of these analysts work for sell-side brokerage firms and participate in quarterly company conference calls, providing feedback to CEOs and CFOs and holding them accountable.
Both chartered and non-chartered members of the sell-side analyst community can be influential in the pricing of stocks, particularly in the near-term. Their reports are disseminated to many Institutional-level managers responsible for great sums of investor capital in the form of mutual funds and pension funds.
Simultaneously, in the Behavioral Finance camp, Ward Edwards wrote The Theory of Decision Making for the Psychological Bulletin, which was a fairly exhaustive effort that bridged the gap between research relating to psychological tests and those performed by economists.
In the meantime, the body of work in Technical Analysis kept moving forward. In the 1950s, William D. Gann created charting methods, known as Gann charts, based on geometry, astronomy and ancient mathematics.
Let's move on to the 1960s. This is where the intellectual groundwork for index funds was laid. With a view of efficient markets developing, such funds would offer clients low fees, market exposure and a benchmark for active managers to attempt to outperform. Most of these advancements fall in the Portfolio Theory camp.
First was the development of CAPM, which stands for Capital Asset Pricing Model. William Sharpe in 1964 was given much of the credit for CAPM, however Jack Treynor, John Lintner and Jan Mossin were working on the theory during the same time frame.
CAPM is an asset pricing theory that establishes an expected return on a stock as the risk-free rate, compensating for time, plus beta, the market price of risk, multiplied by the expected market return in excess of the risk-free rate.
CAPM is so important to modeling that we will spend all of Chapter 6 on the topic.
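As a preview of Chapter 6, the CAPM relationship is a one-line calculation. The beta, risk-free rate and market return below are hypothetical numbers chosen for illustration.

```python
def capm_expected_return(risk_free, beta, market_return):
    """CAPM: E[r] = rf + beta * (E[r_m] - rf)."""
    return risk_free + beta * (market_return - risk_free)

# A stock with beta 1.2, a 2% risk-free rate and an 8% expected
# market return:
er = capm_expected_return(risk_free=0.02, beta=1.2, market_return=0.08)
```

A sanity check on the formula: a beta of 1.0 reproduces the market's expected return, and a beta of 0.0 collapses to the risk-free rate.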
At the end of the decade, Eugene Fama, from the University of Chicago, developed the Efficient Market Hypothesis, and with it, he postulated that there were three forms of market efficiency.
This gave researchers a framework with which to test and publish scholarly articles thereafter.
Those wishing to advance to a job in academia relating to stocks, and a PhD, will spend a lot of time with the Efficient Market Hypothesis as it offers a way to measure whether an active process is valid or not.
Also in the 1960s, Merton Miller and Franco Modigliani aided in the theoretical aspects of valuation techniques. They contributed research to the Fundamental Analysis camp with work on the capital structure of public companies.
Let's move on to the decade of the 1970s.
With the advancements coming out of academia and the use of computers to perform higher-frequency calculations, consulting firms burst onto the scene, and a number of them are still active today: Wilshire in 1972, Callan Associates in '73, Hewitt Associates in '74, Mercer Investment Consulting in '75 and RogersCasey in '76. Russell Investments, established in 1936, also pivoted to the lucrative and in-demand field of pension consulting in the 1970s.
This new breed of consultant, armed with optimizers and risk management tools, educated Institutional investors and large pension sponsors on how to manage and oversee portfolios using new technologies.
Their emphasis on benchmarks helped to fragment the asset management marketplace into specialist active portfolio managers who could be measured against appropriate benchmarks.
Later, firms such as Lipper and Morningstar arrived in the mutual fund space to help Individuals and Professionals do basically the same thing, but with publicly-held mutual funds. Here almost anyone could participate, even an investor with only $100.
APT stands for Arbitrage Pricing Theory, first postulated by Stephen Ross in 1976. This was the first foray into using non-market independent variables to create a more accurately measured and intuitive beta than CAPM provided. In one specific APT implementation, Chen, Roll and Ross built a model using multiple macroeconomic factors.
BARRA was founded in 1975, by Barr Rosenberg, a noted scholar at UC Berkeley who frequently wrote on predicting beta. He was instrumental in the development of the fundamental factor version of APT models.
In this series, we will not walk through the math associated with multi-factor models as the Statistics gets pretty complex. That said keep this in mind, because these models sit in software on the desks of Institutional investors around the world.
BARRA, followed by Northfield and other risk model providers used new computing power to build risk models for portfolio optimization, risk analysis and portfolio attribution. We will build a highly simplified version of a single-factor risk model later in this series.
In 1976, Fisher Black wrote Studies of Stock Price Volatility Changes, which proposed a model to forecast risk. This came after the decomposition of risk into two pieces, systematic and specific, using beta and multi-factor APT models.
However, it wasn't until 1982, when Robert F. Engle wrote a paper about forecasting variance, that risk forecasting was incorporated into third-party risk models. This statistical forecasting method called ARCH, for Autoregressive Conditional Heteroskedasticity, proved to be more accurate than using historical variance alone.
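The core ARCH idea can be sketched in miniature: the next period's variance forecast conditions on the most recent squared return, so volatility clusters instead of staying constant. The omega and alpha parameters below are hypothetical; in practice they are estimated from data.

```python
def arch1_forecast(returns, omega=0.00001, alpha=0.3):
    """One-step-ahead ARCH(1) variance forecast.

    sigma^2_t = omega + alpha * r^2_{t-1}
    returns -- sequence of past returns; only the latest is used here
    """
    return omega + alpha * returns[-1] ** 2

# Forecast after a calm day (a 0.1% move) versus a turbulent day (a 5% move):
calm = arch1_forecast([0.001])
turbulent = arch1_forecast([0.05])
```

The point of the sketch is the asymmetry: after the turbulent day the model forecasts far more variance than after the calm one, which a plain historical-variance estimate reacts to much more slowly.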
Throughout the 1970s, the sell-side analyst community started to flourish. As an agent for Institutional investors and brokerage firms, the analysts performed deep dives into the financial statements and business practices of the largest firms. They presented this information to investors as a service, generally working for commissions and order flow.
This structure is still in place today. Large US companies are typically covered by 20-30 analysts. These analysts participate on quarterly conference calls, query management, build models and set up visits between large Institutional investors and company management.
Most importantly analysts publish their buy or sell opinion on a stock, which to varying degrees is incorporated into stock prices.
In the same decade, Michael Porter took his engineering background, and pursued business and economics at Harvard. He eventually published numerous books on competitive strategy and is best known for the Porter's Five Forces Framework for analyzing industries. This helped the Fundamental Analyst community to subjectively evaluate corporate strategy, which is another way to project future stock returns.
As consultants were segmenting managers and building systems to evaluate risk characteristics, the practice of evaluating companies based on ratios came into focus.
Here metrics like the price-to-earnings and price-to-book ratios were used to evaluate stocks. This was the beginning of the Quantitative Analysis approach which relied heavily on computing power to find mispriced stocks, scanning thousands of stocks at a time.
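An early-style quantitative screen of this kind is easy to sketch: rank a universe of stocks by a valuation ratio and flag the cheapest names. The tickers and price-to-earnings ratios below are made up for illustration.

```python
def rank_by_ratio(universe, n_cheapest=2):
    """Return the n stocks with the lowest price-to-earnings ratio.

    universe -- mapping of ticker -> P/E ratio
    """
    ranked = sorted(universe.items(), key=lambda kv: kv[1])
    return [ticker for ticker, pe in ranked[:n_cheapest]]

# Hypothetical four-stock universe with made-up P/E ratios:
universe = {"AAA": 25.0, "BBB": 8.0, "CCC": 14.0, "DDD": 6.5}
cheapest = rank_by_ratio(universe)
```

Real screens of the era ran the same logic across thousands of stocks and several ratios at once, which is exactly what newly available computing power made practical.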
In 1979, Daniel Kahneman and Amos Tversky published advances in Behavioral Finance which gave psychological explanations for why investors behave irrationally. They pushed forward with influential papers on heuristics, biases and eventually Prospect Theory.
Earlier in the decade, in the Technical camp, another bubble formed. Investing in fifty popular large-cap growth stocks, the so-called Nifty Fifty, had become a fad. Investors were quoted saying these stocks could be, if you can believe it, "purchased at any price". They were also deemed "one-decision stocks", meaning they never needed to be sold.
We all know how those episodes normally end, and this one did too, in 1973.
In the 1980s, hedge funds started sprouting up and gaining attention. These non-traditional, non-public investment pools are typically free to roam away from benchmarks in search of higher returns. Hedge funds also use leverage, options and other non-traditional methods, such as holding less liquid assets.
Then, in 1987, and specifically October 19th, we witnessed a stock market crash amounting to a loss of 23% on the Dow Jones Industrial Average, in one day! This crash was experienced around the globe.
A relatively quick recovery, when compared with the crash of 1929, was possible because underlying economic conditions barely deteriorated as a result of the 1987 crash.
Finger pointing ensued and it was another feather in the cap for those arguing that investors do, in fact, behave irrationally.
Shortly after, in the early 1990s, Eugene Fama and Ken French made academic advancements in asset pricing by using three factors, instead of the single factor used in CAPM.
To this day, Ken French publishes returns on these factors that practitioners use to evaluate fund and manager performance.
Throughout the 1980s, academic researchers using Fama's framework were successful at finding a number of anomalies to the Efficient Market Hypothesis, including calendar effects, size effects and valuation effects.
One of these anomalies was identified in 1985, when Werner F.M. De Bondt and Richard Thaler wrote "Does the Stock Market Overreact?" They found that stock prices do overreact to unexpected and dramatic news events.
To make the connection here, studies of this type often cover thousands of stocks and are conducted over long time periods. Once identified and published in academic journals, the anomalies go into the Quantitative Analyst's arsenal allowing them to profit from the anomaly. Eventually the market adjusts and the opportunity, often called a "free lunch", goes away.
The 1990s ushered in competition for brokerage commissions. Up to that time, full service brokerage firms like EF Hutton, Merrill Lynch and Smith Barney, pitched their research to investors for commissions that by today's standards are very high.
Investments in the range of 5-10 thousand US dollars often came with commissions of $300 per trade. Discount brokers such as Charles Schwab, Etrade and Ameritrade in the United States separated the sale of research and execution. This allowed them to offer commission rates of around $30 for a trade execution instead of $300. A pricing war ensued.
Later, the adoption of the Internet allowed do-it-yourself investors to trade for themselves at a greatly reduced rate. We have the discount brokers to thank for commissions that have dropped to, and leveled off at, about $10 per trade today.
Due to the Internet craze, another bubble started to form at the end of the 1990s. Complicating matters, stock market indexes became highly concentrated due to the growing weight of technology stocks. As with other bubbles, there were warning signs, but they were ignored.
After the bubble burst in the early part of the 2000s, investors were forced to refocus their valuation models to focus on profits hitting the bottom line, instead of clicks on a webpage.
In 1991, researchers Gary Brinson, Randolph Hood and Gilbert Beebower published an updated study of large pension plan investments, often cited as an argument for replacing active managers with index funds. In their most-quoted finding, over 90% of the variance in plan returns was explained by asset allocation.
Financial planners and investment advisors used this research to convince clients to focus on asset allocation. Pitching asset allocation using mutual funds quickly overtook individual stock recommendations as a result.
This allowed financial planners to focus less on the nitty-gritty details of individual stocks, allowing them to concentrate on building client relationships, also known as sales. Asset allocation to mutual funds is likely the most common approach used today by practitioners in a group we call Professionals.
Research on anomalies continued in the 1990s. In 1993, Narasimhan Jegadeesh and Sheridan Titman published research on the intermediate term price momentum anomaly, which covers a period from 3 to 12 months, meaning stocks that outperformed over that period, tended to continue to outperform.
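The momentum signal just described can be sketched simply: rank stocks by their trailing return over roughly the past year, often skipping the most recent month, and favor recent winners. The prices below are hypothetical monthly closes for two made-up stocks.

```python
def trailing_return(prices, lookback=12, skip=1):
    """Return over the window [t - lookback, t - skip].

    prices   -- chronological series of (e.g. monthly) closing prices
    lookback -- how many periods back the window starts
    skip     -- most recent periods excluded (a common convention)
    """
    return prices[-1 - skip] / prices[-1 - lookback] - 1.0

# Thirteen hypothetical monthly closes for a recent winner and loser:
winner = [100, 102, 105, 107, 110, 113, 116, 120, 123, 127, 130, 134, 133]
loser  = [100,  99,  97,  96,  94,  92,  91,  89,  88,  86,  85,  83,  84]

signal = {"winner": trailing_return(winner), "loser": trailing_return(loser)}
```

Jegadeesh and Titman's finding was that, historically, names scoring high on a signal like this tended to keep outperforming over the next 3 to 12 months.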
And in 1996, Richard Sloan wrote on the accruals anomaly. Accruals research compares the cash component of earnings with the accrual, or non-cash, component.
Prior to that time, most investors focused on the net earnings number. Accruals research helped identify companies whose earnings were propped up by non-cash accruals; those companies subsequently earned lower rates of return.
Later, Quantitative Analysts built the accruals anomaly into their models and the mis-pricing "free lunch" went away.
In the Behavioral Finance camp, Robert Shiller, in his book Irrational Exuberance argued that stock prices were too high just before the bursting of the Internet bubble. This became a classic book, because it demonstrated a scholar predicting the end of an equity bubble, in real time.
Also, Andrei Shleifer and Lawrence Summers laid the groundwork for two areas of academic research in Behavioral Finance.
First, studies of investor psychology. Second, the reasons why arbitrageurs may be unable to quickly push prices back to their fundamental values.
Let's move on to the next decade.
Here exchange traded funds, or ETFs, went mainstream. ETFs are mutual funds that price throughout the trading day instead of once, at the close. Since their adoption, assets in ETFs have ballooned, and Professional and Individual investors were given access to new highly liquid, exotic instruments, including some that involve leverage.
Again, as a warning sign of troubled times to come, for 4 days starting on August 6, 2007, firms that followed a Quantitative approach suffered material losses.
A de-risking of portfolios had a cascading effect as many investment firms and bank proprietary trading departments employing Quantitative Analysis held many stocks in common.
Around the same time, several short-term bond funds imploded.
Shortly after that came the second-largest financial crisis in United States history, one that triggered a global economic slowdown. The stock market collapsed after the bursting of a US housing bubble. Economic activity came to a standstill, and the ripple effects have been felt around the globe even to the present day, fully 10 years after the onset of the crisis.
Earlier in the decade, two noted scholars, Richard Grinold and Ronald Kahn, with ties to Barclays Global Investors, now part of BlackRock, put forth much-needed structure to the active management process. Their Fundamental Law of Active Management, a framework for measuring active risk and return, brought the discipline and structure of risk modeling to alpha modeling, where it had yet to be applied.
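As a minimal sketch, the Fundamental Law relates the information ratio (IR) to skill and breadth as IR ≈ IC × √BR, where IC is the information coefficient (skill per bet) and BR is breadth (the number of independent bets per year). The function name and numbers below are my own, for illustration only:

```python
import math

def information_ratio(ic: float, breadth: int) -> float:
    """Fundamental Law of Active Management: IR ≈ IC * sqrt(BR)."""
    return ic * math.sqrt(breadth)

# Modest skill spread across many independent bets can rival
# high skill applied to only a few bets.
print(information_ratio(0.05, 100))  # 0.05 skill over 100 bets -> 0.5
print(information_ratio(0.25, 4))    # 0.25 skill over 4 bets   -> 0.5
```

The comparison shows why the law added discipline: a manager can improve the information ratio either by raising skill or by widening breadth.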
In an attempt to ride the ETF and indexing trend, many investment firms created funds and ETFs that tilted portfolio weights toward financial characteristics rather than tracking capitalization-weighted indexes. The marketplace was open to this because several indexes had become overly concentrated in Tech stocks during the run-up to the Internet bubble.
Smart beta funds are hybrids: passive funds with a tilt toward a specific characteristic, much like a Quantitative approach. They give the investor a known and predictable tilt toward, for example, cheaper stocks, lower-volatility stocks, or dividend payers.
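One simple way to picture the tilt mechanism: scale each stock's capitalization weight by a characteristic score, then renormalize so the weights sum to one. The tickers and scores below are entirely hypothetical:

```python
# Hypothetical capitalization weights and value scores (higher = cheaper).
cap_weights = {"AAA": 0.50, "BBB": 0.30, "CCC": 0.20}
value_score = {"AAA": 0.8, "BBB": 1.0, "CCC": 1.5}

# Tilt: scale each cap weight by its score, then renormalize to sum to 1.
raw = {t: cap_weights[t] * value_score[t] for t in cap_weights}
total = sum(raw.values())
tilted = {t: w / total for t, w in raw.items()}

print(tilted)  # cheap stock CCC gets a larger weight than its cap weight
```

The result stays close to the index but overweights the favored characteristic, which is what makes the tilt known and predictable.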
Along with the adoption of trading on the Internet, advancements in charting and the dissemination of company-specific information on websites like Yahoo Finance gave investors more information directly, sidestepping brokerage firms.
In addition, online trading platforms for active traders, pushed by brokerage firms, commoditized the delivery of charting techniques used for Technical Analysis.
Let's now move on to the next decade.
The financial marketplace is always evolving, and the success of corporate activism by several large hedge fund managers offered a new spin on active management. Fundamental active managers were being left out of the indexing party, and a focus on activism helped this group differentiate their offerings.
Also, as more math and science entered the market, and after the success of several noted hedge funds, high frequency trading started to have a big impact on the markets. Recently, flash crashes and other unusual market behaviors have been attributed to high frequency firms and their hyperactive computer trading programs.
So where does all of this take us, as we look forward to the future?
A new class of investment advisor has evolved, called the robo-advisor. Robo-advisors deliver asset allocation models to clients over the Internet, a service traditionally provided by discount brokerage firms, financial planners and investment advisors. This meant that the new generation of investor could go straight to the Internet for advice, without building a relationship with an advisor, and at a fraction of the cost.
This, in my view, is squeezing the Professionals category, who now need to differentiate their offerings to compete against cheaper online competitors.
The last point I'd like to make is that managers of fund products are increasingly being held accountable for their performance.
As technology and the access to information and knowledge increases, investors will care less about "stories" and relationships. So, the level of accountability for performance is increasing.
That is what we learn in this series called Quant 101.
With that, let's stop, take a deep breath and gather takeaways that will help us put this into context.
Is this history complete? No.
Are important moments from history left out? Certainly.
Is this one person's view of history? Yes.
So what is the point and value in all of this? The objective here was to put in one place a discussion of links that when taken together provide context for the highly integrated, complex and confusing aspects of portfolio management.
The point here was not to go into depth, but to see the interconnected nature of stock markets and stock valuation from different schools of thought.
Some of these concepts will recur later in this series and others won't. But having a historical backdrop to understand why and how we, as humans, reacted and adapted to current events over time, will give you context and help you understand these concepts better.
At least, that's one flawed human's view.
By way of summary, we covered over a century's worth of historical advancements in the field of Finance and Investments from a variety of academic perspectives, offering a glimpse at how money is managed in the real world at Institutional firms around the globe.
If you would like references to the periodicals, please connect with us here as we explore them in more detail in future tutorials.
In the next episode we return to our real live case where we are building financial models, taking the histogram of returns, and moving on to the four statistical measures we will use to evaluate a stock's risk: variance, standard deviation, covariance and correlation.
And if you are new here, you are more than welcome to join in our journey. Have a nice day.
This tutorial in particular is best suited for the video presentation.
I encourage you to check out our YouTube Channel. Subscribe straight from here.