Abbie’s AI Tutorials
AI Jupyter Notebook Tutorials
Categories: All (7) · CNNs (1) · LLMs (4) · PEFT (2) · architectures (1) · caching (2) · inferencing (2) · reinforcement-learning (1) · training (3) · vision (2)
Fine Tuning LLaVA with and without LoRA
training · PEFT · vision · LLMs
In this notebook, we dive deep into the architecture of LLaVA, with the goal of fine-tuning it (both with and without LoRA) to adapt it to determining the morphologies of GalaxyZoo2 images.
Aug 6, 2025
Abbie Petulante
PEFT Deep Dive: LoRA
training · PEFT
Mathematical foundations and practical implementation of LoRA, one of the most common parameter-efficient fine-tuning methods.
Aug 5, 2025
Abbie Petulante
RL for LLMs
reinforcement-learning · training · LLMs
An in-depth guide to reinforcement learning with proximal policy optimization (PPO), with a focus on how it applies to modern large language models (and with human feedback!).
Feb 25, 2025
Abbie Petulante
Prompt Caching
caching · LLMs · inferencing
A guide to prompt caching, a modular approach to KV caching. We’ll talk through how and why this modular implementation works, and include a practical example of implementing it for LLaMa 3.2 1B.
Jan 10, 2025
Abbie Petulante
KV Caching
caching · LLMs · inferencing
A guide to how KV caching is implemented for LLMs, including a practical example of implementing it for LLaMa 3.2 1B.
Jan 9, 2025
Abbie Petulante
The U-Net Architecture
architectures · CNNs · vision
A complete guide to the U-Net architecture. Covers all aspects of a layer, from the convolution operation, to pooling, to what makes U-Nets so special.
Jan 31, 2020
Abbie Petulante
Welcome To My Tutorials Blog!
During my Ph.D., I used a U-Net to try to learn the final distribution of dark matter in a computer simulation from its initial conditions. Being in an Astrophysics…
Feb 25, 2025
Abbie Petulante