The science behind training a model with InstructLab
Red Hat

4 months ago

1,442 views

How does InstructLab enhance a large language model using far less human-generated information and far fewer computing resources than are typically used to retrain a model? The answer is the LAB method (short for Large-scale Alignment for chatBots) that was created by members of the MIT-IBM Watson AI Lab and IBM Research. Join Red Hat’s Senior Principal UX Engineer Máirín Duffy to find out how the LAB method makes it possible for upstream contributions to continuously make an AI model better.


00:00 What is the LAB method?
00:17 Tuning a model with a taxonomy based skill and knowledge tree
01:38 What is synthetic data generation?
02:41 Validating the synthetic data
03:40 How is synthetic data generated?
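As a rough illustration of the ideas the chapters above walk through, here is a minimal Python sketch of the taxonomy-driven loop: seed examples sit in a skill-and-knowledge tree, a teacher model (stubbed out here) expands each seed into synthetic Q&A pairs, and a simple validator filters the output before it is used for tuning. The taxonomy layout, the stub teacher, and the filter are all hypothetical stand-ins for illustration, not InstructLab's actual pipeline or API.

```python
# Conceptual sketch only: the taxonomy layout and the stubbed "teacher"
# below are hypothetical illustrations, not InstructLab's real API.

# A tiny taxonomy: branches are categories, leaves hold seed Q&A examples.
taxonomy = {
    "compositional_skills": {
        "writing": {
            "seed_examples": [
                {"question": "Summarize: The cat sat on the mat.",
                 "answer": "A cat sat on a mat."},
            ]
        }
    }
}

def stub_teacher(seed):
    """Stand-in for a teacher LLM: emits variations of a seed example."""
    return [
        {"question": seed["question"] + " (rephrased)",
         "answer": seed["answer"]},
        {"question": seed["question"],
         "answer": ""},  # a low-quality generation, to be filtered out
    ]

def validate(example):
    """Toy quality filter: keep only non-empty, reasonably short answers."""
    return 0 < len(example["answer"]) <= 200

def generate_synthetic(node):
    """Walk the taxonomy, expand every seed example, keep what validates."""
    out = []
    if "seed_examples" in node:
        for seed in node["seed_examples"]:
            out.extend(e for e in stub_teacher(seed) if validate(e))
    else:
        for child in node.values():
            out.extend(generate_synthetic(child))
    return out

data = generate_synthetic(taxonomy)
print(len(data))  # the empty-answer generation is filtered out, leaving 1
```

In the real LAB method the teacher is itself a large model and validation is far more involved, but the shape is the same: contributions to the tree fan out into many training examples with little human-written data.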

Learn how to get involved with the InstructLab community at:
https://github.com/instructlab

Subscribe to Red Hat's YouTube channel: https://www.youtube.com/redhat/?sub_confirmation=1

#RedHat #InstructLab #AI

Tags:

#Red_Hat #AI #InstructLab #large_language_model #LLM #AI_model_training #generative_AI #artificial_intelligence #fine_tuning_AI_models #synthetic_data