The Big Bang That No One Noticed

The Parallel-Processing Revolution Has Only Just Begun

Trying to Speed Up Video Games, Nvidia Rocked the Tech World

For the next four weeks in The Big Secret on Wall Street (through June 28), we’ll be exploring an extraordinary… massive… big-picture investment story. It’s the next step of the technological revolution… it involves (but goes far beyond) artificial intelligence and machine learning… and right now, no one is talking about it like we are.

Because this month marks Porter & Co.’s second anniversary, we are making this vital four-part series free to all our readers. You’ll get the 10,000-foot view of this tech revolution, see how it affects you, and receive a number of valuable investment ideas along the way. However, detailed recommendations and portfolio updates will be reserved for our paid subscribers.

We call this big story The Parallel-Processing Revolution, and we’re thrilled to share it with you, starting today.

Read Part 1 below.


In October 2012, three obscure academics at the University of Toronto accidentally changed the world.

Earlier that year, in search of little more than bragging rights, PhD candidates Alex Krizhevsky and Ilya Sutskever, and their advisor, Professor Geoffrey Hinton, entered the ImageNet Large Scale Visual Recognition Challenge (“ILSVRC”). This annual competition aimed to advance the field of computer vision – training computers to recognize that a photo of a man or a woman represents a person, and that an image of a poodle or a German shepherd depicts a dog.

The competition, which that year took place in Florence, Italy, centered on the ImageNet dataset, a collection of 14 million individually labeled images of everyday items across thousands of categories.

Participants in the Challenge set out to design algorithms – an algorithm is a set of rules that programmers create – to enable computers to correctly (and autonomously) identify as many of these images as possible. This feat, known as object categorization, was already recognized as one of the most fundamental capabilities of both human and machine vision, and was an early goal of the nascent field of machine learning.

The University of Toronto team’s algorithm – dubbed AlexNet, after its lead developer – won the challenge that year. 

In fact, it trounced the competition, performing far better than any algorithm ever had. AlexNet identified images with an error rate of just 16%, while previous years’ winners had error rates – that is, the share of images their algorithms identified incorrectly – of 25% or more. 

In the field of computer vision, this margin of victory was akin to Roger Bannister running the first sub-four-minute mile in 1954 – a feat runners had chased, and failed to achieve, for decades. It was a huge leap beyond the incremental progress the field had made up to that point, and a major step forward for the industry.

However, as remarkable as the team’s low error rate was, what was particularly noteworthy was how they achieved it.

In short, Krizhevsky and his colleagues had trained their algorithm on graphics processing units (“GPUs”) – specialized computer processors originally designed to speed up graphics rendering in video games – rather than the standard central processing units (“CPUs”) that run traditional computers. 

How the Shift to GPUs Changed Everything

Prior to AlexNet in 2012, ILSVRC teams had trained their algorithms exclusively on CPUs. The CPU is the brain of a computer. It’s fast and powerful, but it has a significant limitation: it largely executes one operation or instruction at a time, one after another – a process known as serial computing.

In serial computing, the speed of a CPU is determined in part by the number and density of the transistors built into its circuits. (Transistors regulate electrical signals and are the basic building blocks of modern computers.) All else equal, the greater a CPU’s transistor density, the higher its “clock speed” – the number of operation cycles it can carry out per second – and the greater its processing power.

In the 1970s, Gordon Moore, a co-founder of Intel, observed that the number of transistors that could be manufactured into a microprocessor doubled every 18 to 24 months with minimal increase in cost. This observation became known as Moore’s Law, and it led the industry to anticipate that CPU performance would improve at roughly the same rate – around 50% per year.
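
(For reference, the 50% figure is simple compounding arithmetic: a doubling every 18 to 24 months implies annual growth of

$$2^{12/24} \approx 1.41 \quad\text{to}\quad 2^{12/18} \approx 1.59,$$

that is, roughly 40% to 60% per year, with about 50% as a convenient midpoint.)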

However, with serial computing, even the most advanced supercomputers weren’t powerful enough to efficiently run neural networks like AlexNet – machine learning programs that process information in a way loosely modeled on the human brain. The number of computational operations required to train just one advanced algorithm can rival the total number of grains of sand on Earth… an unfathomably large number of calculations that even the most powerful CPU would struggle to execute.
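
To put a very rough number on that comparison, here is an illustrative back-of-envelope estimate. A commonly cited rule of thumb for dense neural networks is on the order of six floating-point operations per model parameter per training example; the parameter and dataset sizes below are round numbers chosen purely for illustration, not figures from AlexNet or any specific system:

$$6 \times \underbrace{10^{9}}_{\text{parameters}} \times \underbrace{10^{9}}_{\text{training examples}} \approx 6 \times 10^{18}\ \text{operations}$$

That is in the same range as common estimates of the number of grains of sand on Earth (on the order of $10^{18}$ to $10^{19}$).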

GPUs, by contrast, were originally designed to process graphics – a job that requires computing huge numbers of individual pixels (short for “picture elements,” the smallest units of a digital image or display) at the same time. Modern displays can contain 8 million or more individual pixels. Because of this, GPUs must be able to execute many independent operations simultaneously.

This process is known as parallel computing. A GPU can carry out tens of thousands of operations at once, giving it a total processing capacity that is orders of magnitude greater than a CPU’s for these kinds of workloads. And as shown in the chart below, the relative performance advantage of GPUs versus CPUs continues to increase over time:

(This growing advantage is the result of a significant slowdown in the pace of CPU performance gains – to around 1.1x per year, versus the 1.5x per year that Moore’s Law predicts – as increases in transistor density begin to run into the limits of physics. GPU performance, meanwhile, has continued to improve at roughly that 1.5x-per-year rate.)

A simple analogy can help to explain the differences between CPUs and GPUs: A CPU is like the owner of a burger joint that serves hundreds of customers a day. The owner could potentially make all the burgers himself – a simple but time-intensive task – but it would leave no bandwidth to manage other aspects of the business. Instead, the owner could hire line cooks to make the burgers for him. In this case, a GPU is like a specialized line cook with 10 arms that can make dozens of burgers at the same time.
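
For readers who want to see what this difference looks like in practice, below is a minimal, illustrative sketch in CUDA C++ (Nvidia’s GPU-programming platform, discussed further below). The serial function is what a single CPU thread would do – one addition after another – while the GPU kernel launches one lightweight thread per element so the additions happen in parallel. The function and variable names are our own, and the example is deliberately simplified:

```
#include <cstdio>
#include <cuda_runtime.h>

// Serial version: one CPU thread walks the array element by element.
void add_serial(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i++) {
        out[i] = a[i] + b[i];
    }
}

// Parallel version: each GPU thread handles exactly one element.
__global__ void add_parallel(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n) {
        out[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;  // about one million elements
    size_t bytes = n * sizeof(float);

    // Unified memory is visible to both the CPU and the GPU.
    float *a, *b, *out;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&out, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // CPU: one worker, one element at a time.
    add_serial(a, b, out, n);

    // GPU: launch enough 256-thread blocks to cover all n elements at once.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add_parallel<<<blocks, threads>>>(a, b, out, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("out[0] = %f\n", out[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

On a modern GPU, those million additions are spread across thousands of hardware cores and finished in a handful of passes, while the serial loop grinds through them one by one – the burger-joint owner versus the many-armed line cook.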

By running their algorithm on GPUs rather than CPUs, the AlexNet team was able to dramatically outperform other challengers.

The University of Toronto team’s victory was a big deal in the computer-science world, but at the time it didn’t raise any eyebrows in the broader technology universe. More than a decade later, though, it is remembered as the Big Bang moment for the artificial-intelligence (“AI”) and machine-learning revolution that is sweeping the world today. 

And one company has been leading the way…

How Nvidia Became the King of Parallel Computing

Parallel computing has revolutionized the tech world, and Nvidia (NVDA) stands at the forefront of this transformation.

Founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem, Nvidia set out to create the first 3D graphics cards for consumer PCs. At the time, consumer video graphics were flat (two-dimensional). High-end video graphics required expensive professional workstations that were primarily the domain of the military and big-budget movie studios (Jurassic Park would enthrall movie-goers with its computer-generated imagery of dinosaurs that same year).

The company’s first graphics cards were a big hit with consumers… so much so that it soon faced competition from as many as 90 other companies producing similar cards.

Nvidia’s first major breakthrough came with the RIVA 128 graphics processor in 1997. The company used emulation technology – essentially using software to test its processors in a virtual rather than real-world environment – to speed up the chip’s development. This new process allowed Nvidia to begin bringing new processors to market in just six to nine months, versus the industry standard of 18 to 24 months – a key factor that helped establish Nvidia’s lead in the field and outpace potential rivals.

Building on the success of the RIVA 128, the fast-growing California-based company released the GeForce 256 in 1999 – the first chip Nvidia marketed as a “GPU.” It further differentiated Nvidia by moving transform and lighting calculations onto the graphics chip itself, and it paved the way for the programmable shaders of later GeForce generations – a feature that lets developers create more realistic light, shadow, and color by leveraging parallel processing to the fullest.

In 2006, Nvidia unveiled the CUDA (Compute Unified Device Architecture) platform. CUDA is a proprietary framework for general-purpose GPU computing that, for the first time, opened up the parallel processing power of GPUs to a wide range of applications beyond graphics. Putting all of these advancements together, CEO Jensen Huang envisioned a “full-stack” solution, equipping developers across industries with all the tools they needed to tap into parallel processing power.
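
To make “general-purpose GPU computing” concrete, here is a hedged sketch of the kind of non-graphics workload CUDA opened up. At its core, one layer of a neural network is a matrix-vector multiplication, and CUDA lets each GPU thread compute one output value independently. The names and dimensions below are illustrative – they are not taken from AlexNet or from any Nvidia library:

```
#include <cstdio>
#include <cuda_runtime.h>

// One neural-network layer, reduced to its essence: output = ReLU(weights * input).
// Each GPU thread computes a single output neuron's value, all in parallel.
__global__ void layer_forward(const float* weights, const float* input,
                              float* output, int n_out, int n_in) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;  // which output neuron
    if (row < n_out) {
        float sum = 0.0f;
        for (int j = 0; j < n_in; j++) {
            sum += weights[row * n_in + j] * input[j];
        }
        output[row] = sum > 0.0f ? sum : 0.0f;  // simple ReLU activation
    }
}

int main() {
    const int n_in = 1024, n_out = 4096;
    float *weights, *input, *output;
    cudaMallocManaged(&weights, n_out * n_in * sizeof(float));
    cudaMallocManaged(&input,   n_in * sizeof(float));
    cudaMallocManaged(&output,  n_out * sizeof(float));
    for (int i = 0; i < n_out * n_in; i++) weights[i] = 0.001f;
    for (int j = 0; j < n_in; j++)         input[j]   = 1.0f;

    // All 4,096 output neurons are computed simultaneously across GPU threads.
    int threads = 256;
    int blocks = (n_out + threads - 1) / threads;
    layer_forward<<<blocks, threads>>>(weights, input, output, n_out, n_in);
    cudaDeviceSynchronize();

    printf("output[0] = %f\n", output[0]);  // expect 1024 * 0.001 = 1.024
    cudaFree(weights); cudaFree(input); cudaFree(output);
    return 0;
}
```

In practice, developers rarely hand-write kernels like this one; Nvidia ships optimized libraries (such as cuBLAS for linear algebra and cuDNN for deep learning) that perform these operations far more efficiently – which is precisely the “full-stack” strategy described above.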

CUDA’s impact was profound but not immediate. At the time, there simply wasn’t a market for general-purpose GPU computing. But it was this CUDA platform that enabled the University of Toronto researchers to use Nvidia GPUs for their contest-winning AlexNet algorithm some six years later, igniting the use of GPUs in AI and machine learning. And it helped further distance Nvidia’s offerings from those of its competitors.

In time, this CUDA breakthrough led to widespread adoption by tech giants like Google, Facebook, and Baidu, which operates China’s largest search engine. And over the past decade, strategic acquisitions and product developments have solidified Nvidia’s leadership in parallel computing.

For instance, in 2020, Nvidia acquired networking products company Mellanox, which helped Nvidia enhance data transfer speeds for its high-performance computing (“HPC”) data centers. And in 2022, Nvidia introduced its new Hopper GPU architecture – designed specifically for modern data centers, AI, and HPC use – as well as its Grace CPU, its first-ever data center CPU. (These two products were named after Rear Admiral Grace Hopper, one of the first female computer scientists and a pioneer of computer programming.)

Then, in late 2022, the rise of generative AI models cemented Nvidia’s dominance in HPC. Built on Nvidia’s hardware, OpenAI’s ChatGPT launched in November 2022 – a watershed moment that showcased practical applications of AI to a mass audience.

Some of the real-world use cases for ChatGPT and other generative AI applications include customer service (providing automated and multilingual support), content creation (generating high-quality, human-like text for websites, blogs, social media, and marketing materials), marketing (providing personalized product recommendations to customers), legal and compliance (quickly analyzing legal documents, extracting relevant information, and summarizing them), and real-time language translation.

Nvidia’s innovations in parallel computing were instrumental to these advancements. Its GPUs, with their immense parallel-processing capabilities, have enabled the training of large-scale AI models that were previously unimaginable. Rapid adoption and integration of these new GPUs by companies like Microsoft and Google over the past 18 months has only further strengthened Nvidia’s lead in parallel computing.

Bigger Than AI: How Parallel Computing Will Change the World

Nvidia’s advancements in parallel computing extend far beyond AI. It might sound like an exaggeration, but we believe these advancements could be as transformative to the global economy as the printing press, the Industrial Revolution, or the rise of the internet.

Consider Gutenberg’s printing press, which revolutionized the spread of knowledge by making books widely accessible. Nvidia’s progress in parallel computing has similarly democratized access to high-performance computing capabilities, giving researchers, scientists, and entrepreneurs – as well as normal people – access to computational power that was previously reserved for large supercomputing facilities.

The Industrial Revolution overhauled manufacturing through mechanization, boosting productivity and economic growth. Parallel computing could usher in similar improvements by enabling intelligent systems, autonomous vehicles, advanced simulations, and smart robots that can work 24/7. This could drive unimaginable productivity and efficiency gains across the economy in the coming decades.

And the internet transformed communication, connectivity, and information sharing, leading to profound social and economic changes. Parallel-computing advancements could ultimately enable new forms of human-to-human and human-to-machine communication and connectivity – such as the metaverse or direct brain-computer interfaces – opening up dramatic new possibilities for business, society, and human interaction. 

Finding Ways to Profit From the Parallel Computing Revolution

These advancements in computing could ultimately create trillions of dollars of wealth in the decades ahead. However, profiting from this great leap – via direct investment in Nvidia stock or in a handful of other important ancillary companies – will require discipline and patience.

At its current share price of roughly $1,200, Nvidia commands a market capitalization of nearly $3 trillion. This makes it the second-largest publicly traded company in the world, just behind Microsoft (valued at roughly $3.15 trillion). Despite its gargantuan valuation, Nvidia shares are not outrageously priced, given two key assumptions: 1) demand for its GPUs can continue producing robust revenue growth exceeding 30% annually, and 2) Nvidia can maintain its world-class 55% profit margins. 

If these assumptions hold true, Nvidia should generate roughly $160 billion in revenue next year and around $35 in earnings per share – roughly double what it generated over the last 12 months on both metrics. That puts its forward price-to-earnings ratio at roughly 34x, compared with just over 20x for the S&P 500. Given its dominant market position, growth, and profitability – it is arguably one of the best businesses on earth – that is not an extreme valuation. 
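
As a rough consistency check using only the figures cited above (all of which are forward-looking assumptions, not reported results):

$$\text{Shares outstanding} \approx \frac{\$3\ \text{trillion market cap}}{\$1{,}200\ \text{per share}} = 2.5\ \text{billion shares}$$

$$\text{Net income} \approx 55\% \times \$160\ \text{billion} \approx \$88\ \text{billion} \;\Rightarrow\; \text{EPS} \approx \frac{\$88\ \text{billion}}{2.5\ \text{billion shares}} \approx \$35$$

$$\text{Forward P/E} \approx \frac{\$1{,}200\ \text{share price}}{\$35\ \text{EPS}} \approx 34\text{x}$$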

That said, investors should be wary of the risks embedded in the two assumptions laid out above. On the growth assumption, it’s easy to imagine a scenario where Nvidia’s biggest customers – Microsoft (MSFT), Meta Platforms (META), and Amazon (AMZN) – suffer from a slowdown in their businesses, should the U.S. or the global economy enter a recession. 

Demand for cloud computing and digital advertising, two tech sectors driving economic growth at the moment, will not be immune from a broader slowdown. If that occurs, these companies will likely pull back on their capital spending – which means less money flowing to Nvidia for its GPUs. It’s also worth noting that these same companies each have development programs in the works to produce their own AI chips to compete with Nvidia’s GPUs.

In addition, it’s important to note that Nvidia doesn’t actually manufacture its own chips – it designs them and relies on others to make them. So it’s conceivable that Nvidia’s key suppliers – companies like Taiwan Semiconductor Manufacturing (which we’ll cover in next week’s issue) – could begin charging Nvidia higher prices to manufacture its chips. Last week, news broke that TSM was in talks with Nvidia to do exactly that. Given that TSM controls roughly 90% of global manufacturing capacity for high-end chips like Nvidia’s GPUs, the company holds a powerful negotiating hand and could begin chipping away at Nvidia’s margins. 

Where the Parallel Computing Revolution Is Headed Next

This current scenario reminds us of the work of the great author, technology advocate, and free-market thinker George Gilder – an early prophet of the internet who correctly predicted the rise of many of today’s most successful technology companies. 

The problem with visionaries is that they can see too far too fast – and the market cannot always keep up. Virtually all of the technology stocks Gilder recommended in the late 1990s produced market-beating returns over the following decades – but only after shooting up like flares and falling back to Earth. Investors who bought in when Gilder initially recommended these companies suffered through gut-wrenching drawdowns of 70% to 90% along the way.

Microsoft (MSFT) is a quintessential example. Its share price fell more than 65% as the dot-com boom turned to bust in 2000. Investors who owned MSFT at that time had to wait nearly 17 years to see the stock return to its prior highs, even as the company’s revenues continued growing at double-digit rates for much of that period.

In reality, the best time to invest in MSFT was not in the late 1990s but rather in late 2000, after shares had plunged and most investors had given up.

We believe a similar dynamic is playing out with many of the AI and machine-learning companies utilizing the chips and the parallel-computing trend started by Nvidia. The market is clearly in a bubble phase. However, there is simply no way to know how long it will continue – or what the pin that finally pops it will be.

To return to the dot-com bubble analogy, this year could be equivalent to 1996, which kicked off several years of double-digit gains before the broad market reached its ultimate peak; or 1997-1998, when volatility exploded higher following the Asian Financial Crisis, yet the biggest gains were still to come; or 1999-2000, when many internet stocks were already peaking, and a prolonged bear market was just around the corner.

While nimble investors and traders may be able to capture significant short-term gains in these stocks before the bubble pops, the surest way to profit from this trend is to wait for the current mania to wane (and it will eventually) and for valuations to fall back to earth.

In the meantime, if you currently own these stocks and don’t want to miss out on gains by selling too soon, we urge you to protect yourself by using reasonable position sizes and a trailing stop loss on all positions. (TradeStops is a great tool to help you with this.)

Over the next few weeks, we’ll be digging deeper into the parallel-computing revolution, including laying out an analysis on which companies (in addition to Nvidia) are likely to be the biggest winners, how the macroeconomic concerns we’re following could impact these companies in the near to intermediate term, and – most important – how you can build generational wealth in these stocks as safely as possible.

In fact, in next week’s The Big Secret on Wall Street, we will depart from our usual full report on a single company and continue exploring the parallel-computing revolution by revealing some of the businesses that will both drive it and benefit from it.

Porter & Co.
Stevenson, MD