Open source in machine learning: experts weigh in on the future

At a recent event hosted by the University of California, Berkeley, focused on “Open Source vs. Closed Source: Will Open Source Win? Challenges & Future in Open Source LLM,” experts in the field of machine learning shared their insights into the role of open-source technology in shaping the future of this dynamic industry.

In the world of AI development, the acronym “LLM” stands for “Large Language Model.” These sophisticated AI models are designed to understand and generate human language.

Through rigorous training on vast amounts of data, they acquire the remarkable ability to tackle diverse tasks such as natural language understanding, text generation, and translation. GPT-3 (Generative Pre-trained Transformer 3) is a well-known example of such a model.
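To make the idea concrete, here is a minimal sketch of generating text with an open-source language model through the Hugging Face transformers library. The model choice (GPT-2) and the prompt are illustrative assumptions, not details from the event.

    # Minimal sketch: text generation with an open-source model via the
    # Hugging Face transformers library (model and prompt are assumptions).
    from transformers import pipeline

    # Build a text-generation pipeline around a small open-source model.
    generator = pipeline("text-generation", model="gpt2")

    # Ask the model to continue a prompt.
    outputs = generator("Open-source machine learning is", max_new_tokens=30)
    print(outputs[0]["generated_text"])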

Recently, the use of AI has become a major topic of discussion around the world. An integral part of that conversation is whether the future of machine learning will be built on open-source models or on closed systems.

Ion Stoica, Professor of Computer Science at the University of California, Berkeley, outlined three key reasons why open-source technology will play a pivotal role in the future of machine learning. 

Firstly, he said that the limited availability of high-quality data for training machine learning models is a challenge to further development.

With open-source systems, however, more quality data becomes accessible and the cost of training models will decrease, making larger models more effective.

Secondly, Stoica pointed out that machine-learning technology is becoming increasingly strategic for many countries.

Unlike search technology, which still requires human intervention, machine learning models can make autonomous decisions, rendering them highly valuable for specific applications.

Thirdly, he added that machine learning models need to be fine-tuned for particular tasks rather than built as general-purpose systems.

He believes that experts should focus on developing models that excel in specific use cases, and an open-source model will make this even easier to implement.
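As a rough illustration of what such task-specific fine-tuning looks like in practice, the sketch below adapts a small open-source model to a single task (binary sentiment classification) using the Hugging Face transformers and datasets libraries. The model and dataset names are assumptions chosen for the example, not anything Stoica specified.

    # Minimal sketch: fine-tuning a small open-source model for one task
    # (binary sentiment classification). Model/dataset are assumed choices.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    model_name = "distilbert-base-uncased"  # small open model, assumed choice
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=2)

    # Tokenize a labeled dataset for the task.
    dataset = load_dataset("imdb")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=256)

    tokenized = dataset.map(tokenize, batched=True)

    # Fine-tune on a small subset of the task-specific data.
    args = TrainingArguments(output_dir="finetuned-sentiment",
                             num_train_epochs=1,
                             per_device_train_batch_size=8)
    trainer = Trainer(model=model, args=args,
                      train_dataset=tokenized["train"]
                          .shuffle(seed=42).select(range(2000)))
    trainer.train()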

Open source will create simpler tools

Another speaker at the event, Nazneen Rajani, Research Lead at Hugging Face, said that open-source technology is essential for crafting smaller, more specialized language models tailored for specific use cases. 

She argued that most companies and consumers do not require large, general intelligence models; instead, they need models that excel at specific tasks.

Rajani also expressed excitement about Meta’s entry into the open-source arena, anticipating increased funding and resources for open-source projects, which would pave the way for further innovation and development.

Expanded usage with government open-source LLMs

Echoing the first two speakers, Tatsunori Hashimoto, an Assistant Professor at Stanford University, proposed that language models could become a public good and serve as a foundational layer for intelligent agents.

He cited initiatives like the UK’s Brit-GPT, a government-run language model meant to be available to everyone, as an example. Once such models are open and accessible, they can form the basis of an open-source innovation ecosystem.

Tatsunori also noted that the future of open source depends on who provides the base layer and how much innovation is generated atop it.
