Qwen-Image-2512 (qwen.ai) · 2
Kimi K2 Thinking: How to Run Locally (unsloth.ai) · 5
Thinking Machines – LoRA Without Regret (thinkingmachines.ai) · 2
Long context GPT-OSS fine-tuning (unsloth.ai) · 1
Show HN: GPT OSS: How to run and fine-tune (unsloth.ai) · 1
Qwen3-30B-A3B-Instruct-2507 (huggingface.co) · 234
Qwen3-Coder: Agentic coding in the world (qwenlm.github.io) · 2
2.71bit DeepSeek-V3-0324 (unsloth.ai) · 5
Gemma 3: Google's new multimodal models (ai.google.dev) · 4
How to Run QwQ-32B effectively (unsloth.ai) · 12
Train your own R1 reasoning model (unsloth.ai) · 6
How to run 1.58bit DeepSeek R1 with Open WebUI (openwebui.com) · 50
Phi-4 Bug Fixes (unsloth.ai) · 4
My take on the Post Pretraining world (twitter.com/danielhanchen) · 2
Dynamic 4bit Quantization (unsloth.ai) · 1
Show HN: Finetune Llama 3.2 Vision in a Colab (colab.research.google.com) · 2
Python 3.11 is 1.25x faster than 3.10 (python.org) · 2
Fixing Gradient Accumulation (huggingface.co) · 3
Unit Economics of LLM APIs (lesswrong.com) · 2
LoRA Learns Less and Forgets Less Updated (openreview.net) · 2
VLLM automatic prefix / prompt caching (vllm.ai) · 2
Higher Temperatures and Min_p Sampling (arxiv.org) · 1
Show HN: Open-source fine-tuning in a Colab notebook (colab.research.google.com) · 4
Sahm rule signals start of recession (stlouisfed.org) · 2
Low Level Technicals of LLMs [video] (youtube.com) · 3
Gemma-2 2B beats GPT3.5 on Chatbot Arena (huggingface.co) · 2
HuggingChat – Chat UI for Llama 3.1 405B (huggingface.co) · 1
Fine-Tune Llama 3.1 Ultra-Efficiently with Unsloth (huggingface.co) · 2
Yield Curve and Predicted GDP Growth (clevelandfed.org) · 3
Cloudflare DNS + Malware Blocking (one.one) · 1
SIMD at Insomniac Games: How We Do the Shuffle (gdcvault.com) · 3
Some Machine Learning Notes (danielhanchen.github.io) · 3
My Analysis of Llama 3.1 (twitter.com/danielhanchen) · 2
Show HN: Finetune Llama-3.1 2x faster in a Colab (colab.research.google.com) · 1
Show HN: Mistral NeMo finetuning fits in Colab (colab.research.google.com) · 2
TextGrad – Backpropagation through text feedback (arxiv.org) · 16
Nemotron-4 340B open weights model (nvidia.com) · 7
Show HN: Finetune Llama-3 2x faster in a Colab notebook (colab.research.google.com) · 5
Try Llama-3 in a Colab Notebook (colab.research.google.com) · 64
Fixing Gemma Bugs (unsloth.ai) · 3
Finetuning Gemma 2.4x Faster (research.google.com) · 1
Show HN: Gemma finetuning 243% faster, 58% less VRAM (unsloth.ai) · 1
CodeLlama-34B 13x faster finetuning (unsloth.ai) · 2
Reducing FLOPs for transformers (unsloth.ai) · 89
Show HN: 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning (github.com/unslothai) · 1