Conversation

@Tawns-lab


Implement a comprehensive training setup using the Unsloth framework for
efficient fine-tuning of 1B-class models (Qwen2.5-0.5B/1.5B). This adds a
flexible, multi-purpose LLM training workflow that runs on consumer hardware.

Features:
- Complete training pipeline for Qwen2.5-0.5B/1.5B models
- Up to 2x faster training with ~70% less VRAM via Unsloth's optimized kernels
- QLoRA, LoRA, and full fine-tuning support (see the training sketch after this list)
- Single-GPU training (6-8 GB VRAM minimum)
- Interactive inference and batch processing tools
- Configurable presets for different use cases
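
The scripts themselves aren't inlined in this PR, but a minimal QLoRA run with Unsloth typically looks like the sketch below, following the trl `SFTTrainer` pattern used in Unsloth's published notebooks. The model id, dataset, prompt format, and hyperparameters are illustrative assumptions, not the defaults from `train_qwen_1b.py` or `config.py`:

```python
# Minimal QLoRA fine-tuning sketch with Unsloth (illustrative only; the
# shipped defaults live in llm-training/train_qwen_1b.py and config.py).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load Qwen2.5-1.5B in 4-bit (QLoRA); this keeps VRAM in the 6-8 GB range.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-1.5B",  # assumed model id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank matrices receive gradients.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Example dataset; any instruction set works once mapped to a "text" column.
def to_text(example):
    return {"text": f"### Instruction:\n{example['instruction']}\n\n"
                    f"### Response:\n{example['output']}"}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,              # short demo run; raise for real training
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```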

Files added:
- llm-training/train_qwen_1b.py: Main training script with comprehensive config
- llm-training/quickstart.py: 5-minute quick start for testing
- llm-training/inference.py: Interactive and batch inference tool (see the sketch after this list)
- llm-training/config.py: Training configuration templates and presets
- llm-training/setup.sh: Automated environment setup script
- llm-training/requirements.txt: Python dependencies
- llm-training/README.md: Complete documentation and guides
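
For context, an interactive loop like the one `inference.py` provides boils down to a few lines of Unsloth. The checkpoint path and generation settings below are placeholders, not the tool's actual interface:

```python
# Hypothetical core of an interactive inference loop (the real tool in
# llm-training/inference.py is not shown in this PR).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="outputs",     # placeholder: path to the fine-tuned checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast decode path

while True:
    prompt = input("> ")
    if not prompt:            # empty line exits the loop
        break
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```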

Supports:
- Multi-task capabilities (text, code, reasoning)
- Custom datasets and fine-tuning workflows
- Export to GGUF and GPTQ formats (export sketch below)
- Experiment tracking (WandB, TensorBoard)
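
Models loaded through `FastLanguageModel` expose Unsloth's `save_pretrained_gguf` helper, so a post-training export might look like this (the output name and quantization preset are illustrative choices):

```python
# Export the fine-tuned model to GGUF for llama.cpp / Ollama.
# Directory name and quantization preset are illustrative, not PR defaults.
model.save_pretrained_gguf(
    "qwen2.5-1.5b-finetuned",
    tokenizer,
    quantization_method="q4_k_m",  # common 4-bit llama.cpp preset
)
```

The resulting .gguf file can then be served by any llama.cpp-compatible runtime.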

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>