Understanding Reasoning LLMs

Building LLMs from the Ground Up: A 3-Hour Coding Workshop

Implementing Weight-Decomposed Low-Rank Adaptation (DoRA) from Scratch

Coding Self-Attention, Multi-Head Attention, Cross-Attention, and Causal-Attention

Ten Noteworthy AI Research Papers of 2023

Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation)

AI and Open Source in 2023

Training and Aligning LLMs with RLHF and RLHF Alternatives

Understanding Llama 2 and the New Code Llama LLMs

Why the Original Transformer Figure Is Wrong, and Some Other Tidbits About LLMs

Finetuning Large Language Models

Understanding Large Language Models: A Cross-Section of the Relevant Literature