LLMOps: EVALUATING AND FINE-TUNING LLM MODELS FOR GENERATIVE AI

Authors

  • Amreth Chandrasehar, Informatica, CA, USA

Keywords:

Generative AI, LLM, MLOps, LLMOps, Cost Optimization, Observability, Model CI

Abstract

As companies adopt Generative AI built on LLMs, many pre-trained and fine-tuned models need to be evaluated for their accuracy. LLMOps is a derivative of MLOps that specializes in training and fine-tuning LLMs. The model evaluation process can be slow and expensive for companies; by using model CI pipelines, frameworks, and automation, LLMOps addresses these drawbacks and helps organizations evaluate models quickly and in a cost-optimized manner. This paper discusses how pipelines, frameworks, and observability metrics collected during training can be used to evaluate LLM models optimally.
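As a rough illustration of the kind of model-CI evaluation gate the abstract describes, below is a minimal sketch in Python. The generate callable, the substring-match accuracy metric, and the 80% pass threshold are illustrative assumptions for this sketch, not details taken from the paper; a real pipeline would call the candidate model's inference endpoint and use the evaluation framework of choice.

    # Minimal sketch of a model-CI evaluation gate (illustrative only).
    # Assumes a generic `generate(prompt) -> str` callable for the
    # candidate LLM and a small eval set of (prompt, reference) pairs.
    from typing import Callable, List, Tuple

    EvalSet = List[Tuple[str, str]]  # (prompt, expected reference substring)

    def exact_match_rate(generate: Callable[[str], str], eval_set: EvalSet) -> float:
        """Fraction of prompts whose output contains the expected reference."""
        hits = sum(1 for prompt, expected in eval_set
                   if expected.lower() in generate(prompt).lower())
        return hits / len(eval_set)

    def ci_gate(generate: Callable[[str], str], eval_set: EvalSet,
                threshold: float = 0.8) -> None:
        """Fail the CI run (non-zero exit) if accuracy falls below threshold."""
        score = exact_match_rate(generate, eval_set)
        print(f"eval accuracy: {score:.2%} (threshold {threshold:.0%})")
        if score < threshold:
            raise SystemExit(1)  # pipeline rejects the candidate model

    if __name__ == "__main__":
        # Stand-in model for demonstration; a real pipeline would query
        # the fine-tuned candidate's serving endpoint here.
        dummy_model = lambda prompt: "Paris is the capital of France."
        ci_gate(dummy_model, [("What is the capital of France?", "Paris")])

Gating on a scored eval set in this way lets a model CI pipeline reject a fine-tuned candidate automatically, which is one way the time and cost drawbacks mentioned above can be contained.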



Published

2023-08-19

How to Cite

Chandrasehar, A. (2023). LLMOps: EVALUATING AND FINE-TUNING LLM MODELS FOR GENERATIVE AI. INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS (IJMLC), 1(1), 25-34. https://lib-index.com/index.php/IJMLC/article/view/IJMLC_01_01_003