OpenAI currently dominates the generative AI market, with GPT-4 standing out as the industry’s best-performing model to date. However, businesses are increasingly choosing to develop their own smaller AI models tailored to their specific needs.
For instance, Salesforce has initiated trials of two coding AI assistants, namely Einstein for Developers and Einstein for Flow. These assistants are trained on a combination of Salesforce’s proprietary programming data and open-source data, designed as “small” AI models for niche business applications. While these assistants can also write poems, they may not excel at it as they lack training on the broader internet, as highlighted by Patrick Stokes, Salesforce’s executive vice president of products.
Major players like OpenAI, Google, Amazon, and Meta are heavily invested in building ever larger and more sophisticated AI models, and there is a case for companies to watch what capabilities emerge from them. At the same time, the proliferation of smaller AI models specialized for specific tasks could mean people end up interacting with different AI bots for different daily activities. Yoon Kim, an assistant professor at the Massachusetts Institute of Technology, notes that focusing on specific applications may offer companies a less costly path to adopting AI.
“You can’t use ChatGPT out of the box.”
Braden Hancock, chief technology officer of Snorkel AI, a Redwood City, California-based company that refines AI models, says he helps businesses, particularly in the financial sector, build small AI models dedicated to a single task, such as a customer service assistant or a coding assistant.
“There was maybe a moment early at the beginning of the year, right after ChatGPT came out, where people weren’t quite sure—like, oh my gosh, is this game over? Is AI just solved now?” said Hancock. Then, on closer inspection, companies realized that there are few, if any, business applications that ChatGPT could address without modification.
What does this mean for OpenAI?
“If hardware costs come down enough, then there’s a scenario where GPT-4 will do everything for everyone,” said Amin Ahmad, founder and CEO of Vectara, a software company focused on semantic search. AMD has just released a set of chips that could lower the cost of developing AI models.
However, there’s another scenario in which the proliferation of large language models (LLMs) intensifies competition for OpenAI. This could explain why OpenAI has been actively lobbying for more regulation: staying ahead of AI competitors while raising barriers to entry for others.