Byte-sized models will have a huge impact on the democratization of AI
When it comes to using AI systems, many companies are fishing with dynamite.
That’s because the largest and most impressive large language models (LLMs), including OpenAI’s GPT-4, are built with more than a trillion parameters and cost hundreds of thousands of dollars per day to run.
Using such models for everyday tasks of minimal impact or complexity, or for small-scale personal inquiries, is overkill. After all, asking a chatbot to write something like a Valentine’s sonnet does not necessarily require the computational intensity – let alone justify the computational cost – that today’s most advanced AI systems bring to bear.
While large AI models produced by major technology companies have served their purpose of disseminating revolutionary technology across a wide global audience, the future of commercial applications of AI likely lies in smaller models that have fewer parameters but perform well on specialized tasks.
Despite the tech sector’s well-publicized push for ever-larger AI systems that should one day reach superhuman capabilities, creating smaller models that each do one thing very well – rather than a single model that does it all – represents a more cost-effective and scalable future for both the AI ecosystem and the enterprise landscape.
Case in point: a group of researchers at New York University taught an artificial intelligence system to recognize objects using only 250,000 words and their corresponding images. Their paper, published Thursday (Feb. 1), claimed that the smaller neural network classified objects correctly 62% of the time – results comparable to those of a similar, albeit much larger, AI model trained on 400 million image-text pairs.
The research could lead to smaller, more efficient and deployable AI models.
Read also: Who will run the GenAI operating system?
Big AI and little AI will chart two different futures for AI technology
The generative AI boom began with OpenAI’s ChatGPT product, which epitomized the “more parameters = better models” ethos, but even Sam Altman, CEO of OpenAI, has said that new ideas, not bigger models, will drive AI development.
For AI to become truly democratized, its future may be built on smaller, more cost-effective systems.
After all, smaller AI models are exactly what OpenAI hopes to commercialize with its recently launched GPT store, and the AI pioneer isn’t alone in this regard.
Both Google and Apple are developing on-device AI solutions for their mobile products, with Google’s Gemini Nano model powering next-generation features on its Pixel phones, and Apple incorporating smaller, use-case-specific AI models on-device in its iPhones.
As the PYMNTS Intelligence report “Consumer Interest in AI” revealed, consumers interact with about five AI-powered technologies each week on average, including browsing the web, using navigation apps and reviewing online product recommendations. Additionally, nearly two-thirds of Americans want an AI-powered co-pilot to help them do things like book travel.
These narrow applications of AI do not require an AI system that is more intelligent than a human, just a system that can do the task asked of it – something that smaller AI models are more than capable of.
As just one example, Google Maps said Thursday that it has begun rolling out a new AI-powered feature that will help people discover places based on their specific needs.
“AI is a tool, and like any other tool, the potential of a tool lies in the way you use it,” Akli Adjaoute, founder and general partner at venture capital fund Exponion and author of “The Reality and Delusion of AI,” which is scheduled to be published April 30, told PYMNTS in November.
See also: Tailoring AI solutions to industry is key to scalability
Can small AI systems win on cost?
Large-scale AI models offered by big tech companies come with an equally large price tag to build and run, concentrating development among just a handful of the world’s most valuable companies.
But by deploying a broader scope and number of AI models fine-tuned for specific tasks or domains, the AI landscape can be democratized and commercialized – something that is increasingly important as the business world shifts from encountering AI for the first time to integrating it into its workflows.
“In 2024, we will go from a world where trying to use generative AI to become more efficient was dangerous, to a world where there is actually a greater risk of falling behind if you don’t try,” James Clough, CTO and co-founder of Robin AI, told PYMNTS during a conversation for our “AI Effect” series.
Small AI systems are often better suited than large systems in specific scenarios where efficiency, simplicity, or resource constraints play an important role.
Internet of Things (IoT) devices – sensors, wearables and smart home products – benefit from small AI systems that can run on low-power hardware. And in scenarios where real-time processing is necessary, such as video analysis, live-streaming applications and even payments, small AI systems can provide faster responses thanks to their lower computational load.
Additionally, tailoring an AI model to fit a specific use case can lead to better performance and cost-effectiveness. In situations where available training data is limited, small AI models, such as those based on transfer learning, can still provide useful results.
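To make the transfer-learning point concrete, here is a minimal, purely illustrative sketch: a stand-in "pretrained" feature extractor is frozen (here simulated by a fixed random projection – an assumption for illustration, not any real model's weights), and only a small linear head is fine-tuned on a limited labeled dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretrained" feature extractor: a fixed projection simulating
# the frozen layers of a larger model (illustrative assumption only).
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    # Frozen layers: never updated during fine-tuning
    return np.tanh(x @ W_frozen)

# Limited labeled data for the new target task (just 200 examples)
X = rng.normal(size=(200, 64))
true_w = rng.normal(size=16)
y = (extract_features(X) @ true_w > 0).astype(float)

# Fine-tune only the small linear "head" with logistic-regression
# gradient descent; the expensive feature extractor is reused as-is.
feats = extract_features(X)
w = np.zeros(16)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))
    w -= lr * feats.T @ (p - y) / len(y)

acc = ((feats @ w > 0) == (y == 1)).mean()
```

Because only the 16-weight head is trained, the model adapts to the new task with far less data and compute than training a large network from scratch – the same economics that make small, specialized models attractive to budget-constrained businesses.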
Separately, small businesses with budget constraints may find smaller AI systems more accessible and cost-effective to meet specific business needs.