NVIDIA Launches Large Language Model Cloud Services to Advance AI and Digital Biology

Published: 2022-09-24

The NVIDIA NeMo Large Language Model (LLM) Service helps developers customize large language models; the NVIDIA BioNeMo LLM Service helps researchers generate and predict molecules, proteins, and DNA.


NVIDIA announced two new large language model (LLM) cloud AI services, the NVIDIA NeMo Large Language Model Service and the NVIDIA BioNeMo LLM Service, which let developers easily adapt LLMs and deploy customized AI applications for content generation, text summarization, chatbots, code development, and protein structure and biomolecular property prediction.


The NVIDIA BioNeMo LLM Service is a cloud application programming interface (API) that extends LLM use cases beyond language into scientific applications, accelerating drug development for pharmaceutical and biotech companies.


"Large language models have the potential to transform every industry," said Jen-Hsun Huang, founder and CEO of NVIDIA. "By adapting foundation models, the power of LLMs can be brought to millions of developers, allowing them to create a variety of language services and drive scientific discovery without having to build huge models from scratch."


NeMo LLM Service Improves Accuracy and Speeds Deployment with Prompt Learning

With the NeMo LLM Service, developers can customize foundation models ranging from 3 billion parameters up to Megatron 530B, one of the world's largest LLMs, using their own training data. The process takes minutes to hours, compared with the weeks or months needed to train a model from scratch.


Prompt learning uses a technique called p-tuning to customize models, letting developers adapt a foundation model that was originally trained with billions of data points using only a few hundred examples. The customization process produces task-specific prompt tokens that are combined with the foundation model to deliver higher accuracy and more relevant responses for specific use cases.
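
The general idea behind this kind of prompt tuning can be illustrated with a minimal sketch: trainable "virtual token" embeddings are prepended to each input while the base model's weights stay frozen, so only the small prompt tensor is learned from the task examples. The class, model sizes, and stand-in base model below are illustrative assumptions, not NVIDIA's NeMo API.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Toy sketch of prompt tuning: only the prompt embeddings are trained."""

    def __init__(self, base_model: nn.Module, embed_dim: int, num_prompt_tokens: int = 20):
        super().__init__()
        self.base_model = base_model
        for p in self.base_model.parameters():   # the base LLM stays frozen
            p.requires_grad = False
        # task-specific "virtual tokens", learned from a few hundred labeled examples
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # prepend the learned prompt to every sequence in the batch
        batch_size = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return self.base_model(torch.cat([prompt, input_embeds], dim=1))

# Toy stand-in for a frozen base model that consumes token embeddings.
base = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2
)
tuned = SoftPromptWrapper(base, embed_dim=64)
out = tuned(torch.randn(2, 10, 64))   # batch of 2 sequences, 10 tokens, 64-dim embeddings
print(out.shape)                      # torch.Size([2, 30, 64]): 20 prompt tokens + 10 input tokens
```

Because only `self.prompt` receives gradients, several such prompt sets can be trained against a single frozen base model, one per use case, which is what allows the same model to serve many customizations.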


Developers can customize the same model for multiple use cases and generate many different prompt tokens. A playground feature offers a no-code way to experiment and interact with the models, further improving the effectiveness and accessibility of LLMs for industry-specific use cases.


Once ready for deployment, tuned models can run on cloud instances, on local systems, or through an API.


BioNeMo LLM Service Enables Researchers to Leverage the Power of Large-Scale Models

The BioNeMo LLM service includes two new BioNeMo language models for chemistry and biology applications. The service supports protein, DNA, and biochemical data, helping researchers discover patterns and insights in biological sequences.


BioNeMo enables researchers to expand the scope of their research with models containing billions of parameters. These large models can store more information about protein structures and evolutionary relationships between genes and even generate novel biomolecules for therapeutic use.
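
To make the "language model for biology" idea concrete, the sketch below treats a protein's amino-acid sequence as tokens and runs it through a small transformer encoder to obtain per-residue and whole-protein embeddings. The vocabulary, model size, and pooling are illustrative assumptions; the models behind a service like BioNeMo are vastly larger and trained on real biological data.

```python
import torch
import torch.nn as nn

# The 20 standard amino acids, each treated as one token of the "protein language".
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
VOCAB = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode(sequence: str) -> torch.Tensor:
    """Map an amino-acid string to a batch of token ids."""
    return torch.tensor([[VOCAB[aa] for aa in sequence]])

# Tiny illustrative encoder; production protein LLMs have billions of parameters.
embed = nn.Embedding(len(VOCAB), 64)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2
)

tokens = encode("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")  # a short example sequence
residue_embeddings = encoder(embed(tokens))            # shape (1, seq_len, 64)
protein_embedding = residue_embeddings.mean(dim=1)     # pooled representation of the protein
print(residue_embeddings.shape, protein_embedding.shape)
```

Embeddings like these are the kind of learned representation from which downstream tasks such as property prediction or structure-related inference can be built.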

     

Cloud API Provides Access to Megatron 530B and Other Off-the-Shelf Models

In addition to tuning foundation models, the LLM service offers the option to use off-the-shelf and custom models through a cloud API.
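
In practice, using a hosted LLM through a cloud API usually comes down to a single authenticated HTTP request. The endpoint, field names, and environment variable below are purely hypothetical placeholders for illustration; they are not the documented NeMo LLM API.

```python
import os
import requests

# Hypothetical endpoint and payload -- placeholders, not the actual NeMo LLM API.
API_URL = "https://api.example.com/v1/models/megatron-530b/completions"
headers = {"Authorization": f"Bearer {os.environ['EXAMPLE_API_KEY']}"}

payload = {
    "prompt": "Summarize the following support ticket:\n...",
    "max_tokens": 128,           # length of the generated continuation
    "temperature": 0.2,          # lower values give more deterministic output
    # "prompt_token_id": "...",  # a customized prompt token could be referenced here
}

response = requests.post(API_URL, headers=headers, json=payload, timeout=60)
response.raise_for_status()
print(response.json())
```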

     

This gives developers access to a variety of pre-trained LLMs, including Megatron 530B, as well as T5 and GPT-3 models created with the NVIDIA NeMo Megatron framework, now in open beta, to support a wide range of applications and multilingual service requirements.


Leaders in the automotive, computing, education, healthcare, and telecommunications industries use NeMo Megatron to provide customers with leading-edge services in Chinese, English, Korean, Swedish, and other languages.


Availability

The NeMo LLM and BioNeMo services and their cloud APIs are expected to be available in early access next month; developers can apply now to learn more.


A public beta version of the NeMo Megatron framework is now available from NVIDIA NGC™. It is optimized to run on NVIDIA DGX™ Foundry and NVIDIA DGX SuperPOD™, as well as on accelerated cloud instances from Amazon Web Services, Microsoft Azure, and Oracle Cloud Infrastructure.


Developers who want to experience the NeMo Megatron framework can try NVIDIA LaunchPad Labs for free.

