
Datasaur launches LLM tool for training custom ChatGPT models



Data labeling platform Datasaur today unveiled a new feature that lets users label data and train their own customized ChatGPT model. The tool offers a user-friendly interface through which both technical and non-technical users can evaluate and rank language model responses, feedback that is then transformed into actionable insights.

With OpenAI president Greg Brockman among its early investors, the company said the new offering is a direct response to the growing significance of natural language processing (NLP), and of ChatGPT and large language models (LLMs) in particular.

Datasaur said that professionals across various industries are eager to harness this technology effectively. However, the lack of clarity and standardized approaches to building and training custom models has posed ongoing challenges, and many struggle to fine-tune and improve the performance of the numerous open-source models available.

In response to this evolving landscape, the company aims to provide comprehensive support for users in assembling their training data.



“We aim to provide users with the highest-quality training data and help remove unwanted biases from the resulting model through our new offerings, by inheriting powerful capabilities from the existing Datasaur platform,” Ivan Lee, CEO and founder of Datasaur, told VentureBeat. “Our platform supports all types of NLP, whether those be ‘traditional’ models like entity extraction and text classification or new ones like LLMs. The goal is to ensure all the NLP labeling can occur on a single platform instead of using spreadsheets for one type and open-source tools for another.”

Evaluating quality of LLM responses

Datasaur asserts that its latest additions, Evaluation and Ranking, are the most user-friendly model training tools presently available in the market.

With Evaluation, human annotators can evaluate the quality of the LLM’s outputs and establish whether the responses meet specific quality criteria.

Ranking facilitates the process of reinforcement learning from human feedback (RLHF).
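Datasaur's internal data model is not public, but a hypothetical schema for these two annotation types might look like the following sketch (all field and class names here are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class EvaluationRecord:
    """One annotator's verdict on a single LLM response (illustrative schema)."""
    prompt: str
    response: str
    meets_criteria: bool  # did the output pass the quality bar?
    criteria: str         # e.g. "factually accurate and on-topic"

@dataclass
class RankingRecord:
    """One annotator's best-to-worst ordering of candidate responses."""
    prompt: str
    ranked_responses: list  # index 0 is the preferred response

rec = EvaluationRecord(
    prompt="Summarize the report.",
    response="The report covers Q2 revenue...",
    meets_criteria=True,
    criteria="faithful to the source document",
)
print(rec.meets_criteria)
```

Evaluation records capture binary or criteria-based judgments, while ranking records capture the relative orderings that feed reinforcement learning from human feedback.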

In addition to its new features, the platform introduces a reviewer mode that enables data scientists to assign multiple annotators, thus minimizing subjective biases. This mode facilitates identifying and resolving discrepancies among annotators when it comes to specific questions, allowing data scientists to make the final judgment call.

The platform’s Inter-Annotator Agreement (IAA) feature uses statistical calculations to assess the level of agreement or disagreement among annotators. This tool assists data scientists in identifying annotators who may require additional training and recognizing those who demonstrate a natural aptitude for this type of work.
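The article does not specify which agreement statistic Datasaur computes; for two annotators, Cohen's kappa is a common choice, and a minimal sketch of it looks like this:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Inter-annotator agreement between two annotators (Cohen's kappa)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((counts_a[lbl] / n) * (counts_b[lbl] / n) for lbl in counts_a)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

a = ["good", "good", "bad", "good", "bad"]
b = ["good", "bad", "bad", "good", "bad"]
print(round(cohens_kappa(a, b), 3))  # → 0.615
```

A kappa near 1 indicates strong agreement; values near 0 suggest the annotators agree no more than chance would predict, flagging items or annotators that may need review.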

Additionally, the platform presents the original document from which the LLM sourced the information. This serves two purposes: to prevent any potential misinterpretations, and to provide transparency in demonstrating the process employed by the LLM.

Streamlining broader adoption of large language models

Datasaur’s Lee said that industry professionals may not consider OpenAI’s models as viable options because of factors like compliance, data privacy or strategic considerations. Lee also pointed out that the current focus of LLMs on the English language restricts users worldwide from fully benefiting from these technological advancements.

“NLP has made many advancements in the past decade, and one of our important goals at Datasaur is to help automate as much of the manual work away as possible,” said Lee. “Datasaur’s mission is to democratize access to NLP by enabling users to work with any language, whether French, Korean or Arabic. We want this offering to help everyone more easily train and develop LLMs for their purposes.”

The company asserts that its platform has the potential to reduce the time and expenses associated with data labeling by 30% to 80%.

To automate data labeling, the platform uses a range of techniques. It uses established open-source models like spaCy and NLTK to identify common entities. It also employs the weak supervision method for data programming, enabling engineers to create simple functions that automatically label specific entity types. For instance, if a text contains keywords like “pizza” or “burger,” the platform applies the “food” classification.
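A minimal sketch of that kind of weak-supervision labeling function, using the article's "pizza"/"burger" → "food" example (function names, the keyword set, and the fallback label are illustrative, not Datasaur's actual API):

```python
def label_food(text):
    """Labeling function: tag text containing food keywords as 'food'."""
    keywords = {"pizza", "burger"}  # illustrative keyword set
    return "food" if any(k in text.lower() for k in keywords) else None

def apply_labeling_functions(texts, lfs):
    """Run each labeling function over each text; keep the first vote."""
    labels = []
    for text in texts:
        votes = [v for v in (lf(text) for lf in lfs) if v is not None]
        labels.append(votes[0] if votes else "unknown")
    return labels

print(apply_labeling_functions(
    ["I ordered a pizza last night", "The meeting starts at 9"],
    [label_food],
))  # → ['food', 'unknown']
```

In a full weak-supervision pipeline, conflicting votes from many labeling functions would typically be resolved by a label model rather than a first-vote rule; this sketch keeps only the core idea.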

Moreover, the platform incorporates a built-in OpenAI API integration, allowing customers to have ChatGPT label their documents on their behalf. The company says this approach can achieve high levels of success, depending on the task’s complexity, while also opening new avenues for automation.
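Datasaur's actual integration is not public, but a labeling request to a chat-completion-style API might be assembled along these lines (the prompt wording, label set, and function name are all assumptions; the payload is built but not sent here):

```python
def build_labeling_request(document, labels, model="gpt-3.5-turbo"):
    """Build a chat-completion payload asking a model to classify a document.

    Hypothetical sketch: the prompt template and label set are illustrative.
    """
    prompt = (
        "Classify the following document into exactly one of these labels: "
        + ", ".join(labels)
        + ".\n\nDocument:\n" + document
        + "\n\nAnswer with the label only."
    )
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # deterministic labels are easier to audit
    }

req = build_labeling_request(
    "The quarterly earnings beat expectations.",
    ["finance", "sports", "food"],
)
print(req["model"])  # → gpt-3.5-turbo
```

Constraining the model to answer with the label only, at temperature 0, makes the returned string straightforward to map back onto the platform's label set.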

According to Lee, the platform’s RLHF feature stands as one of the most effective methods for enhancing an LLM’s training capabilities. This approach, he said, enables users to swiftly and effortlessly evaluate a set of model outputs and identify the superior ones, eliminating manual intervention.

“Our platform allows the user to showcase various options and stack-rank them from best to worst. The drag-and-drop interface is easy for a non-technical user to operate, and the resulting output includes every permutation of the ranking preferences (e.g. 1 is better than 2, 1 is better than 3, 2 is better than 3) to make it readily consumable by the technical data scientist and the reward model,” explained Lee.
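The pairwise expansion Lee describes can be sketched in a few lines (the function name is illustrative):

```python
from itertools import combinations

def ranking_to_pairs(ranked_responses):
    """Expand a best-to-worst ranking into all pairwise (better, worse)
    preferences, the comparison format typically fed to an RLHF reward model."""
    return list(combinations(ranked_responses, 2))

pairs = ranking_to_pairs(["response_1", "response_2", "response_3"])
print(pairs)
# → [('response_1', 'response_2'), ('response_1', 'response_3'), ('response_2', 'response_3')]
```

A ranking of n responses yields n·(n−1)/2 such pairs, which is why a single drag-and-drop ordering is far cheaper for annotators than labeling each pair separately.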

A future of opportunities in NLP 

Lee observed that the investment in NLP within the market is thriving, and he anticipates a swift evolution of LLM-based products.

He asserted that in the coming years, there will be a surge in the development of applications that prioritize LLM technology.

“The upcoming interfaces will not be a chatbox; they will be baked right into the applications we use daily, such as Gmail, Word, etc.,” he said. “Just as we have learned how to optimize our Google search queries (e.g. “Starbucks hours Saturday”), the mainstream public will get comfortable interfacing with applications through this natural language interface. Datasaur aims to be ready to empower and support organizations in building such models and data workflows.”


