15th January 2025
2024 was a transformative year for AI and one of the best years on record for companies exposed to AI-related themes, including standout performers like Nvidia (+171.2%) and Broadcom (+110.4%). However, as we enter 2025, it feels as though the market's perspective has shifted, and the risks inherent in AI investments are now grossly underestimated by many market participants.
Current valuations across various sectors of the market often reflect unrealistic future growth expectations. Furthermore, the capital required to develop newer, more advanced generative AI models has grown so exorbitantly large that one must question whether companies like Alphabet, Amazon, and Microsoft will ever recoup their investments.
This is not the only issue these companies face. According to The Wall Street Journal, OpenAI's highly anticipated GPT-5 project, codenamed "Orion," is encountering significant challenges with data availability. Despite over 18 months of development and substantial financial investment, multiple large-scale training runs have yielded disappointing results. GPT-5 has demonstrated limited performance gains despite immense computational costs. Each training run demands months of processing power and potentially costs hundreds of millions of dollars, making it critical to prove that the mountain of capital deployed for GPT-5 does not ultimately become a mountain of malinvestment.
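To see how a single run reaches a nine-figure price tag, consider a back-of-the-envelope sketch in Python. Every input below (cluster size, rental rate, run length) is a hypothetical assumption chosen for illustration, not a reported figure:

# Illustrative only: the inputs are assumptions chosen to show how
# "months of processing power" translates into nine-figure sums.
gpus = 25_000            # hypothetical cluster size
usd_per_gpu_hour = 2.50  # hypothetical accelerator rental rate
days = 100               # hypothetical length of one training run

cost = gpus * usd_per_gpu_hour * 24 * days
print(f"One training run: ~${cost:,.0f}")  # ~$150,000,000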
The primary challenge in GPT-5's development is the scarcity of high-quality training data. While larger models generally exhibit improved capabilities, acquiring enough diverse, reliable data to train a model at GPT-5's scale has proven exceptionally difficult. OpenAI is attempting to address this by creating new training data, for example commissioning human-written code and mathematical solutions along with detailed explanations of the reasoning behind them, as well as generating synthetic data. This approach aims to enrich the model's learning experience and enhance its ability to solve complex problems. However, GPT-5 has frequently exhibited "hallucinations," presenting factually incorrect or misleading information as fact.
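For the non-specialist, it may help to see the shape of the data being described. Below is a minimal sketch of what one such training record could look like; the field names and file format are our own illustrative assumptions, not OpenAI's actual pipeline:

import json

# A hypothetical training record: a problem, its solution, and the
# step-by-step reasoning described above. Purely illustrative.
record = {
    "problem": "Prove that the sum of two even integers is even.",
    "solution": "Let a = 2m and b = 2n. Then a + b = 2(m + n), which is even.",
    "reasoning": [
        "An even integer is, by definition, twice some integer.",
        "Writing both numbers in that form makes the sum 2(m + n).",
        "Since m + n is an integer, the sum is twice an integer, hence even.",
    ],
}

# Such corpora are commonly stored as JSON Lines, one record per line.
with open("synthetic_training_data.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")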
Number of parameters*, by GPT generation

GPT-1    117 million
GPT-2    1.5 billion
GPT-3    175 billion
GPT-4    1.76 trillion†

*Settings that determine how an AI processes information and makes decisions. †Estimate.
Source: OpenAI (GPT-1, -2, -3); SemiAnalysis (GPT-4)
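A quick calculation on the figures above shows what "scale" has meant in practice: each generation has been roughly ten to a hundred times larger than the one before (bearing in mind the GPT-4 number is itself a third-party estimate):

# Generation-over-generation growth in parameter counts, using the
# figures quoted above (the GPT-4 count is an estimate).
params = {
    "GPT-1": 117e6,
    "GPT-2": 1.5e9,
    "GPT-3": 175e9,
    "GPT-4": 1.76e12,
}

names = list(params)
for prev, nxt in zip(names, names[1:]):
    factor = params[nxt] / params[prev]
    print(f"{prev} -> {nxt}: ~{factor:.0f}x more parameters")
# GPT-1 -> GPT-2: ~13x; GPT-2 -> GPT-3: ~117x; GPT-3 -> GPT-4: ~10x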
With AI revenues still virtually non-existent, the stakes for OpenAI and other companies to deliver breakthroughs with next-generation models like GPT-5 couldn't be higher. Yet, the project's current trajectory raises concerns—not only about GPT-5's feasibility and potential impact but also about the viability of many next-generation generative AI models in the short term. While solutions to the data scarcity issue may emerge in the near future, these challenges are not easily overcome. Achieving a level of performance that justifies the immense investment remains a critical hurdle for success in this space. Failure could reverberate throughout the supply chain, likely dampening future capital-raising efforts for companies like OpenAI.
These challenges are further exacerbated by a macroeconomic environment characterized by higher interest rates and tighter liquidity conditions. With a looming wave of corporate debt reissuance in 2026, OpenAI and similar companies may soon find themselves operating in a far less favorable fundraising environment, significantly altering the industry's dynamics. Architecture software—an area where Nvidia has demonstrated leadership—may benefit from such an environment as hyperscalers look to optimize hardware efficiency. However, this market remains highly fragmented, with hyperscaler companies like Meta supporting players such as AMD, which has struggled to keep pace with Nvidia.
We had little to no exposure to the primary beneficiaries of the AI trade during 2024, but the evolving dynamics in the AI space have prompted us to reflect on our biases and consider where new trends and opportunities may emerge. While we remain open to the possibilities AI presents, we approach this space with caution, believing the real winners are likely to emerge in the application software layer, much as they did in the years after the dot-com bust of the early 2000s. Healthcare technology, in particular, is an area where we are cautiously optimistic, given promising early applications of AI, especially in imaging and automation.
2nd February 2020
Within twenty-four hours of the Chinese authorities uploading the genetic code of the coronavirus to the internet, a San Diego-based biotech company, Inovio, had digitally designed a vaccine and produced the first samples in its own lab. They started pre-clinical trials within a week, and their vaccine, INO-4800, should be tested on humans (assuming it's found to be safe) by early summer. Inovio is not the only company working on a vaccine - they are in healthy competition with, amongst others, Johnson & Johnson, Moderna Therapeutics and scientists in Australia.
It's a great example of how exponential growth in computing power is leading to a revolution in drug development. During the SARS outbreak in 2003 it took nearly two years before a vaccine was ready for human trials; for the Zika virus in 2015 this was down to six months; this time it will be a matter of weeks.
Digitally designed molecules to fight pathogens might look like the stuff of sci-fi, but with processing speeds continuing to double every eighteen months, the ability to design and test drugs without ever entering the lab is now commonplace.
I've been following the development of a US-based private company, Schroedinger, which has industrialised molecule design on a grand scale. Whereas traditional approaches to drug discovery might have synthesised 1,000 compounds each year, Schroedinger's platform can evaluate billions of molecules "in silico" every week, with only the most promising reaching the lab – some within the company's own drug development programs. It's not possible for us to buy shares in the company, but their shareholder base is further testament to the convergence of computing power and bioscience – one of the company's early investors was none other than Bill Gates.
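The scale of that shift is easy to underestimate, so here is a rough comparison in Python. Reading "billions per week" conservatively as one billion, the gap is more than seven orders of magnitude:

# Rough throughput comparison based on the figures above, taking
# "billions per week" conservatively as one billion.
traditional_per_year = 1_000                 # compounds synthesised in a lab
in_silico_per_year = 1_000_000_000 * 52      # molecules screened computationally

speedup = in_silico_per_year / traditional_per_year
print(f"~{speedup:,.0f}x more candidates examined per year")  # ~52,000,000x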
The connection between processing speeds and drug development is especially clear in genetic science. The cost of sequencing the human genome has fallen from $100,000,000 in 2001 to a little over $1,000 today. It’s no surprise, therefore, that patent filings for gene-based therapies are growing exponentially.
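That fall is even steeper than raw processing gains would suggest. A short calculation using the two figures above shows sequencing costs have halved roughly every fourteen months, comfortably outpacing an eighteen-month doubling of computing power:

import math

# Quoted figures: ~$100,000,000 per genome in 2001, ~$1,000 today.
cost_2001, cost_today = 100_000_000, 1_000
months = (2020 - 2001) * 12

fold_drop = cost_2001 / cost_today              # ~100,000x cheaper
halvings = math.log2(fold_drop)                 # ~16.6 halvings
print(f"Costs halved every ~{months / halvings:.0f} months")  # ~14 months

# What an eighteen-month halving alone would have delivered over the
# same period: only ~6,500x, not ~100,000x.
print(f"An 18-month halving would give only ~{2 ** (months / 18):,.0f}x")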
Whether a vaccine for the coronavirus will be available in time to stop it becoming a pandemic - or, more likely, before it burns itself out - remains to be seen. The battle between silicon and pathogens, however, is in full swing.
One can speculate on where this might lead. With ubiquitous computing power, smartphone health monitoring, home testing kits and so on, small companies and individuals can now innovate in a way that was previously the preserve of large corporations. And so, if there are any hobbyists out there who fancy their chances, here are the first 1,020 nucleotides out of the 29,904 that make up the RNA of Wuhan-Hu-1. Good luck.