The demand for technical generative AI (GenAI) skills is increasing, and businesses are actively seeking AI engineers who can work with large language models (LLMs). This IBM course is designed to build job-ready skills that can accelerate your AI career.



Generative AI Engineering and Fine-Tuning Transformers
This course is part of multiple programs.



Instructors: Joseph Santarcangelo and 2 others
8,075 already enrolled
Recommended experience
Basic knowledge of Python, PyTorch, and transformer architecture, plus familiarity with machine learning and neural network concepts.
What you'll learn
- Sought-after, job-ready skills businesses need for working with transformer-based LLMs in generative AI engineering
- How to perform parameter-efficient fine-tuning (PEFT) using methods like LoRA and QLoRA to optimize model training
- How to use pretrained transformer models for language tasks and fine-tune them for specific downstream applications
- How to load models, run inference, and train models using the Hugging Face and PyTorch frameworks (see the sketch below)
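To make the last outcome concrete, here is a minimal sketch (not taken from the course; the checkpoint name is only an illustrative public example) of loading a pretrained transformer and running inference with Hugging Face Transformers and PyTorch:

```python
# Minimal sketch: load a pretrained sentiment-classification checkpoint and run
# inference. The model name is an illustrative public checkpoint, not course code.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

inputs = tokenizer("Fine-tuning transformers is a valuable skill.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(dim=-1).item()])  # e.g. "POSITIVE"
```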
Skills you'll gain
- Performance Tuning
- Applied Machine Learning
- Large Language Modeling
- Application Frameworks
- Natural Language Processing
- Prompt Engineering
- PyTorch (Machine Learning Library)
- Generative AI
Details to know

- Add to your LinkedIn profile
- 4 assignments
Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate

There are 2 modules in this course
In the first module, you will be introduced to fine-tuning. You'll get an overview of generative models and compare the Hugging Face and PyTorch frameworks. You'll also gain insights into model quantization and learn to use pretrained transformers and then fine-tune them using Hugging Face and PyTorch.
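As a rough illustration of the workflow this module describes, the sketch below fine-tunes a pretrained transformer on a downstream classification task with the Hugging Face Trainer API and PyTorch. The checkpoint, dataset, and hyperparameters are illustrative assumptions, not the course's lab code.

```python
# Minimal fine-tuning sketch with Hugging Face Transformers + PyTorch.
# Checkpoint, dataset, and hyperparameters are assumptions for illustration only.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")  # example downstream task: binary sentiment

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetuned-bert",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),  # small subset
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```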
What's included
5 videos, 4 readings, 2 assignments, 4 app items
In the second module, you will learn about parameter-efficient fine-tuning (PEFT) and adapters such as LoRA (Low-Rank Adaptation) and QLoRA (Quantized Low-Rank Adaptation). In hands-on labs, you will train a base model and pretrain LLMs with Hugging Face.
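For orientation, here is a minimal sketch of LoRA-based PEFT using the Hugging Face PEFT library; the base model and adapter settings are illustrative assumptions, not the course's lab configuration. QLoRA follows the same adapter pattern but additionally loads the base model in quantized form to reduce memory use.

```python
# Minimal LoRA sketch with the Hugging Face PEFT library.
# Base model and adapter hyperparameters are illustrative assumptions.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling applied to the LoRA updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()    # only the small adapter weights train
# peft_model can now be trained with a standard Trainer or PyTorch loop.
```

Because only the adapter weights are updated, this kind of setup cuts the memory and compute needed to adapt a large base model.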
What's included
4 videos, 5 readings, 2 assignments, 2 app items, 4 plugins
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.
Offered by IBM
Learner reviews
58 reviews
- 5 stars: 78.68%
- 4 stars: 9.83%
- 3 stars: 4.91%
- 2 stars: 4.91%
- 1 star: 1.63%
Showing 3 of 58
Reviewed on Jan 16, 2025
The labs all too often failed on environment issues - packages, version alignment, etc. This should be seamless in your controlled environment.
Reviewed on Jan 1, 2025
The course is good but lacks depth on complex subjects.
Reviewed on Nov 16, 2024
The coding part in the labs provided in this course was very helpful and helped me to stabilize my learning.
Frequently asked questions
How long does it take to complete this course?
It takes about 8 hours to complete this course, so you can have the job-ready skills you need to impress an employer within just one week!
What prerequisites are required for this course?
This course is intermediate level, so to get the most out of your learning, you should have basic knowledge of Python, PyTorch, and transformer architecture. You should also be familiar with machine learning and neural network concepts.
Is this course part of a specialization?
This course is part of the Generative AI Engineering with LLMs specialization. When you complete the specialization, you will have the skills and confidence to take on job roles such as AI engineer, NLP engineer, machine learning engineer, deep learning engineer, data scientist, or software developer seeking to work with LLMs.