Tuning a model by using the Training Operator

To tune a model by using the Kubeflow Training Operator, you configure and run a training job.
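The Training Operator represents training jobs as Kubernetes custom resources, such as PyTorchJob. The following is a minimal sketch of submitting a distributed PyTorchJob from Python with the standard kubernetes client; the namespace, job name, container image, command, and resource values are illustrative placeholders, not values defined in this document.

[source,python]
----
from kubernetes import client, config

# Load credentials from the local kubeconfig; use
# config.load_incluster_config() when running inside a pod.
config.load_kube_config()

# Hypothetical PyTorchJob manifest. The container must be named
# "pytorch" so the Training Operator can inject distributed
# environment variables into it.
pytorch_job = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "PyTorchJob",
    "metadata": {"name": "llm-fine-tune", "namespace": "my-project"},
    "spec": {
        "pytorchReplicaSpecs": {
            "Master": {
                "replicas": 1,
                "restartPolicy": "OnFailure",
                "template": {"spec": {"containers": [{
                    "name": "pytorch",
                    "image": "quay.io/example/llm-trainer:latest",
                    "command": ["torchrun", "train.py"],
                    "resources": {"limits": {"nvidia.com/gpu": "1"}},
                }]}},
            },
            "Worker": {
                "replicas": 2,
                "restartPolicy": "OnFailure",
                "template": {"spec": {"containers": [{
                    "name": "pytorch",
                    "image": "quay.io/example/llm-trainer:latest",
                    "command": ["torchrun", "train.py"],
                    "resources": {"limits": {"nvidia.com/gpu": "1"}},
                }]}},
            },
        }
    },
}

# Submit the job as a Kubernetes custom resource; the Training
# Operator then creates and supervises the master and worker pods.
api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="kubeflow.org",
    version="v1",
    namespace="my-project",
    plural="pytorchjobs",
    body=pytorch_job,
)
----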

Optionally, you can use Low-Rank Adaptation (LoRA) to fine-tune large language models, such as Llama 3, more efficiently. LoRA freezes the base model weights and trains only small low-rank adapter matrices, which reduces compute and memory requirements enough to make fine-tuning feasible on consumer-grade GPUs. Combining LoRA with PyTorch Fully Sharded Data Parallel (FSDP) enables scalable, cost-effective model training and inference in OpenShift environments.
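To illustrate how LoRA reduces the training footprint, the following sketch wraps a base model with LoRA adapters by using the Hugging Face PEFT library; the model ID and hyperparameter values are illustrative assumptions, not settings prescribed by this document.

[source,python]
----
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model; the model ID is an illustrative placeholder.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

# LoRA freezes the base weights and trains small low-rank adapter
# matrices injected into the attention projections.
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling factor for adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)

# Report how few parameters are trainable; with LoRA this is
# typically well under 1% of the total parameter count.
model.print_trainable_parameters()
----

Because only the adapter parameters receive gradients, the optimizer state shrinks accordingly, and FSDP can shard the frozen base weights across GPUs to fit models that would not otherwise train on the available hardware.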