The Inflation of AI: Is More Always Better?

by Benjamin Thurer
7 min read

New ML models emerge hourly but this fast pace comes with drawbacks; hypothesis-driven development can help to mitigate those.

We live in the age of AI! Every day, new AI tools and ML models are created, trained, released, and often heavily advertised. On Hugging Face alone, almost 400,000 models are available today (2023-11-06), compared to roughly 84,000 in November 2022 (see Figure 1): nearly a fivefold increase in a single year. And Hugging Face is not the only ML model platform out there; on top of that, many models are never open-sourced at all. So it is safe to say the actual number of available ML models is much higher.

Is there truly a need for such an overwhelming inflation of models?

The excitement about AI is huge, and that is first and foremost a good thing. AI has the potential to solve, or at least mitigate, some of the most severe global challenges, such as climate change and pandemics. It can also make everyday tasks more efficient and thereby improve our work-life balance. Researching and developing AI, and making ML models available to the community, is therefore the right and necessary step! However, given the current development speed and excitement in the AI community, I wonder: is there truly a need for such an overwhelming inflation of models? And who, ultimately, will benefit from it?

Figure 1: The number of available open-source models on Hugging Face over time (blue). The red line marks the release of ChatGPT in November 2022, and the orange dashed line is a cubic polynomial fit to the data from before the ChatGPT release.
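As a side note, a trend line like the dashed one in Figure 1 is straightforward to reproduce in principle. Below is a minimal sketch of such a cubic polynomial fit with NumPy; the monthly counts are invented placeholders, not the real Hugging Face numbers:

```python
import numpy as np

# Hypothetical monthly model counts in thousands (placeholders, not real data)
months = np.arange(12)  # month 0 = November 2022
counts = np.array([84, 90, 97, 106, 118, 133, 152, 176, 206, 244, 292, 352])

# Fit a cubic polynomial, analogous to the dashed trend line in Figure 1
coeffs = np.polyfit(months, counts, deg=3)
trend = np.poly1d(coeffs)

# Extrapolate one month beyond the observed data
next_month = float(trend(12))
print(f"Projected count for month 12: {next_month:.0f}k models")
```

Comparing such a pre-release trend with the actual post-release counts is one way to quantify how strongly an event like the ChatGPT launch bent the curve.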

Potential Risks of Model Inflation

One common drawback of excitement and hype around a topic is that the resulting motivation and work are not directed at a specific goal but remain superficial and broad. The potential benefits of AI mentioned above do not come from having a lot of superficial models; they come from specialized models that tackle hard problems.

In addition, the current speed of model development, deployment, and advertisement comes with disadvantages most of us have probably already encountered. It is important to address these issues now to ensure the best outcomes in the future. Some potential drawbacks of the current pace of AI development:

  • Quality: at the current speed to market, it is already too hard for the community to keep up and properly review model outcomes and research papers. The result will be a large number of available models and services of low quality, since they have not been rigorously tested and reviewed. Supporting quality metrics, such as confidence intervals, are also mostly dropped in the rush to market.
  • Impact + Safety: a large number of models being developed these days are not human- (or nature-) centric and do not have a genuinely useful target or use case in mind. However, every product development should ultimately aim to make the world a better place: developers should focus on what can have a positive impact, not on building “just another chatbot”. In addition, developers have to rule out potential harm coming from their model and ensure safety (similar to this proposal).
  • Privacy + Copyright: models are rarely documented, and it is hard to trace how privacy and copyright have been addressed. This can have negative consequences for individuals. When working with sensitive data, modeling is risky: even a vector database of embeddings is not privacy-friendly and can be reverse-engineered (as shown by Morris et al. 2023). New regulations like the EU AI Act will also affect these models by enforcing privacy compliance.
  • Investment Loss: even with a fast speed to market, any AI project requires resources (highly skilled engineers, substantial computing costs, product maintenance). The return on investment is not guaranteed if the resulting AI product is of low quality or does not serve a clear user purpose. It is common practice to run a product discovery phase before development to estimate the potential return on investment; at the current speed to market, this practice is often skipped for AI.

In summary, the fast pace of AI development is not only a good thing; it also creates friction and downsides for businesses and individuals.

Figure 2: The left chart illustrates how the exponential growth of available ML models limits human oversight by leaving less and less time per model. The right side shows categories in AI that might be affected by this fast development.

Reduce Model Inflation by Hypothesis-Driven Development

As mentioned earlier, the current excitement around AI is, first and foremost, a good thing. The purpose of this story is not to stop AI development or slow it down; quite the opposite. The intention is to direct the positive excitement toward specific objectives and to create quality rather than quantity. The idea is to encourage every AI engineer and data scientist to take a little more time at the very beginning of each project and ask some fundamental questions, such as: “Who would benefit from this?” and “What do we want to achieve?”

what this story is proposing is not revolutionary at all, it is simply to follow scientific methods.

Instead of an exploratory approach, where one simply starts developing the next LLM without a clear vision and thereby inflates the community, how about starting at the very end and discussing the product use case first? That means coming up with an objective for the project, for instance: “Current foundational LLMs are very complex and can hardly run on-premises; that is what we would like to tackle.” With this, the project becomes meaningful. But meaning is not everything; a clear hypothesis will make the work even more streamlined. For instance:

“It is possible to train a lightweight LLM that can run on-premises and still scores above 70 on the MMLU benchmark.”

Having a clear objective and a derived hypothesis in mind helps streamline the entire development effort. It also helps measure success and makes the contribution to the community meaningful. Combined with a literature and model review, it will immediately reveal whether the proposed project has already been achieved elsewhere and would thus only create redundant work. In other words, what this story proposes is not revolutionary at all; it is simply to follow scientific methods.
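Making the hypothesis measurable can be as simple as encoding its success criterion directly in code. A minimal sketch, where the 70-point threshold comes from the example hypothesis and the model names and scores are illustrative assumptions:

```python
# Minimal sketch: encode the hypothesis's success criterion as a testable check.
# The example model names and scores below are illustrative, not real results.

SUCCESS_THRESHOLD = 70.0  # target benchmark score from the hypothesis


def hypothesis_confirmed(benchmark_score: float) -> bool:
    """Return True if the model meets the pre-defined success criterion."""
    return benchmark_score > SUCCESS_THRESHOLD


# Evaluate hypothetical experiment results against the hypothesis
results = {"baseline-distilled": 64.2, "tuned-distilled": 71.8}
for model, score in results.items():
    verdict = "confirmed" if hypothesis_confirmed(score) else "not confirmed"
    print(f"{model}: score {score} -> hypothesis {verdict}")
```

The point is not the trivial comparison but that the pass/fail criterion is fixed before any experiment runs, so success cannot be redefined after the fact.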

Figure 3: This schema shows the framework of a scientific project. It emphasizes that several steps (hypothesis, literature review, proposal) come before the actual development of an ML model.

The Spark of Innovation

The reason for proposing scientific methods is that the issues arising from the fast pace of AI development are well known to the scientific community. Scientists have to digest an overwhelming number of research papers to make a meaningful contribution, and combined with the output speed of large research labs, it is easy to develop a feeling of “not having enough time to read everything”. This, together with publication pressure, has already contributed to the reproducibility crisis. Scientific methods exist to overcome such issues.

Scientific methods have been developed and refined over centuries and stand at the core of every scientific project. Given that the fast pace of AI development closely resembles the overwhelming volume of scientific literature, it makes sense to adopt those principles.

A lot of scientific breakthroughs did not start in the lab, they started with a thought which was shaped into a hypothesis

As a positive side effect, scientific methods were developed not only to standardize work and experimental outcomes but also to enhance innovation. Taking the time to review existing literature and formulate hypotheses is at the core of innovation: a lot of scientific breakthroughs did not start in the lab, they started with a thought that was shaped into a hypothesis.

The scientific community, for instance, offers the option of preregistration: scientists publish their objectives, hypotheses, and methodology before actually conducting the experiments and analysis. This concept could also be applied to AI development.
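What might preregistration look like in practice? One hedged sketch: a small, machine-readable record committed to the repository before any training run. All field names and values below are invented for illustration, not a proposed standard:

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class Preregistration:
    """A lightweight preregistration record, written before development starts."""

    objective: str
    hypothesis: str
    success_metric: str
    methodology: str


record = Preregistration(
    objective="Train a lightweight LLM that runs on-premises.",
    hypothesis="A small distilled model can score above 70 on the target benchmark.",
    success_metric="benchmark score > 70",
    methodology="Distil an open foundation model; evaluate on a held-out set.",
)

# Serialize the record so it can be published alongside the future model card
print(json.dumps(asdict(record), indent=2))
```

Because the record is plain JSON, a hosting platform could store it next to the model card and display it once the model is released, making it easy to compare the stated hypothesis with the reported results.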

That being said, I highly encourage everyone to outline an objective with hypotheses before starting any AI or ML project! In addition, I hope that Hugging Face and other prominent platforms will someday require engineers and scientists to preregister their objectives and hypotheses before they can start working on a model. I am sure that if a large platform like Hugging Face starts, others will follow.

Summary

The current speed of AI development is exciting and challenging at the same time: exciting because of the benefits these new models bring, but challenging because of the overwhelming number of available models and the open questions about their quality, privacy, safety, and return on investment.

Scientific methods, like hypothesis-driven development, can help overcome those issues and can even foster innovation by ensuring that AI/ML engineers and data scientists develop toward a pre-defined objective and hypothesis.

It is the age of AI, so it is all the more important that we make the best possible future out of it, for all of us.

All images, unless otherwise noted, are by the author.

This article was originally published on Towards Data Science.


Benjamin Thurer
Benjamin Thürer is a Data Scientist and currently in the position of Director of Data Science at Unacast. In his position, he and his team are responsible for building scalable dataset products providing insights on human mobility. Before that, he did his PhD investigating motor learning in the human brain and performed a post-doc in Neuroscience investigating the neural correlates of consciousness.

Ideas In Brief
  • The article challenges the need for the rapid growth of AI and ML models, highlighting risks and advocating for hypothesis-driven development to ensure meaningful contributions, enhance quality, and foster innovation in the AI community.

