
hello-rocm

AMD YES! 🚀

Open Source · Community Driven · Making AMD AI Ecosystem More Accessible

中文 | English

  Since ROCm 7.10.0 (released on December 11, 2025), ROCm supports seamless installation in Python virtual environments, just like CUDA, and officially supports both the Linux and Windows platforms. This marks a major breakthrough for AMD in the AI field: learners and LLM enthusiasts are no longer limited to NVIDIA for hardware, and AMD GPUs are becoming a strong, competitive alternative.

  Dr. Lisa Su announced at the launch event that ROCm will move to a six-week release cadence and pivot fully toward AI. The future looks exciting!

  However, there is currently a global lack of systematic learning tutorials for ROCm LLM inference, deployment, training, fine-tuning, and infrastructure. hello-rocm was created to fill this gap.

  The main content of this project is tutorials, helping more students and future practitioners understand and become familiar with AMD ROCm! Anyone can raise issues or submit PRs to help build and maintain this project together.

  Learning Suggestion: We recommend starting with environment configuration and deployment, then moving on to model fine-tuning, and finally exploring Infra operator optimization. Beginners can start with LM Studio or vLLM deployment.

Project Significance

  What is ROCm?

ROCm (Radeon Open Compute) is an open-source GPU computing platform launched by AMD, designed to provide an open software stack for high-performance computing and machine learning. It supports parallel computing on AMD GPUs and serves as an alternative to CUDA on the AMD platform.

  The battle of LLMs is in full swing, with open-source LLMs emerging one after another. However, most current LLM tutorials and development tools are based on the NVIDIA CUDA ecosystem. For developers who want to use AMD GPUs, the lack of systematic learning resources is a pain point.

  Starting from ROCm 7.10.0 (December 11, 2025), AMD restructured the underlying ROCm architecture through TheRock project, decoupling the compute runtime from the operating system. This allows the same ROCm upper-level interfaces to run on both Linux and Windows, and supports direct installation into Python virtual environments just like CUDA. This means ROCm is no longer just an "engineering tool" for Linux, but has evolved into a truly cross-platform GPU computing platform for AI learners and developers — whether using Windows or Linux, users can now use AMD GPUs for training and inference with a lower barrier to entry. LLM and AI enthusiasts are no longer bound to the single NVIDIA ecosystem for hardware choices. AMD GPUs are gradually becoming an AI computing platform that can be genuinely used by ordinary users.
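  As a sketch of what this new installation flow looks like, the commands below create a virtual environment and pull in a ROCm-enabled PyTorch build from pip. The `rocm6.2` index URL is only an example; check the official PyTorch/ROCm install matrix for the index matching your ROCm release and GPU.

```shell
# Create and activate an isolated virtual environment
python3 -m venv rocm-env
source rocm-env/bin/activate

# Install a ROCm-enabled PyTorch build straight from pip --
# no system-wide ROCm installation required. The index URL is
# an example; pick the one matching your ROCm release.
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.2

# Verify the GPU is visible. PyTorch exposes ROCm devices
# through the same torch.cuda API used for NVIDIA GPUs.
python -c "import torch; print(torch.cuda.is_available(), torch.version.hip)"
```

  Note that on ROCm builds of PyTorch, `torch.version.hip` is set instead of `torch.version.cuda`, which is a quick way to confirm which backend you are actually running.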

  This project aims to provide complete tutorials for LLM deployment, fine-tuning, and training on the AMD ROCm platform based on the experience of core contributors. We hope to gather co-creators to enrich the AMD AI ecosystem together.

  We hope to become a bridge between AMD GPUs and the general public, embracing a broader AI world with the spirit of freedom and equality in open source.

Target Audience

  This project is suitable for the following learners:

  • Want to use AMD GPUs for LLM development but can't find systematic tutorials;
  • Hope to deploy and run LLMs cost-effectively on affordable hardware;
  • Interested in the ROCm ecosystem and want to get hands-on experience;
  • AI learners who want to expand knowledge beyond NVIDIA GPU platforms;
  • Want to build domain-specific private LLMs on the AMD platform;
  • Students of all backgrounds looking for an accessible entry point.

Project Roadmap and Progress

  This project is organized around the full workflow of ROCm LLM applications, including environment configuration, deployment, fine-tuning, and operator optimization.


Project Structure

hello-rocm/
├── 01-Deploy/              # ROCm LLM Deployment Practice
├── 02-Fine-tune/           # ROCm LLM Fine-tuning Practice
├── 03-Infra/               # ROCm Operator Optimization Practice
├── 04-References/          # ROCm Quality Reference Materials
└── 05-AMD-YES/             # AMD Project Case Collection

01. Deploy - ROCm LLM Deployment

🚀 ROCm LLM Deployment Practice
Quick start guide for LLM deployment on AMD GPUs from scratch
📖 Getting Started with ROCm Deploy

• LM Studio LLM Deployment from Scratch
• vLLM LLM Deployment from Scratch
• SGLang LLM Deployment from Scratch
• ATOM LLM Deployment from Scratch
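  To give a taste of the vLLM path, here is a minimal sketch, assuming vLLM is already installed with ROCm support; the model name and port are placeholders:

```shell
# Start an OpenAI-compatible server on the AMD GPU
# (Qwen/Qwen2.5-1.5B-Instruct is just an example model)
vllm serve Qwen/Qwen2.5-1.5B-Instruct --port 8000

# Query it from another shell through the standard
# OpenAI-style chat completions endpoint
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-1.5B-Instruct",
       "messages": [{"role": "user", "content": "Hello!"}]}'
```

  Because the server speaks the OpenAI API, existing CUDA-era client code usually works against an AMD-backed deployment without changes.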

02. Fine-tune - ROCm LLM Fine-tuning

🔧 ROCm LLM Fine-tuning Practice
Efficient model fine-tuning on AMD GPUs
📖 Getting Started with ROCm Fine-tune

• LLM Fine-tuning Tutorial from Scratch
• Single-machine LLM Fine-tuning Scripts
• Multi-node Multi-GPU Fine-tuning Tutorial
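  As a hedged sketch of the launch side: multi-GPU fine-tuning on ROCm reuses the same PyTorch launchers as CUDA, so a `torchrun` invocation like the one below works unchanged on AMD GPUs (`train.py`, the model name, and the GPU counts are placeholders for your own setup):

```shell
# Single machine, 4 AMD GPUs; train.py stands in for your
# fine-tuning script (e.g. a LoRA/PEFT training loop)
torchrun --nproc_per_node=4 train.py --model Qwen/Qwen2.5-7B-Instruct

# Restrict which GPUs are visible -- ROCm honors
# HIP_VISIBLE_DEVICES the way CUDA honors CUDA_VISIBLE_DEVICES
HIP_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 train.py
```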

03. Infra - ROCm Operator Optimization

⚙️ ROCm Operator Optimization Practice
Migration and optimization guide from CUDA to ROCm
📖 Getting Started with ROCm Infra

• HIPify Automated Migration in Practice
• Seamless Switching of BLAS and DNN
• Migration from NCCL to RCCL
• Mapping from Nsight to Rocprof
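  To illustrate the migration workflow, a minimal sketch using the `hipify-perl` tool that ships with ROCm (the file names are placeholders):

```shell
# Translate CUDA API calls to their HIP equivalents
# (cudaMalloc -> hipMalloc, cudaMemcpy -> hipMemcpy, etc.)
hipify-perl vector_add.cu > vector_add.hip.cpp

# Compile the translated source with the HIP compiler
hipcc vector_add.hip.cpp -o vector_add
```

  For most straightforward CUDA kernels the translated source compiles as-is; hand-tuning is typically only needed for code using NVIDIA-specific intrinsics or libraries.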

04. References - ROCm Quality Reference Materials

📚 ROCm Quality Reference Materials
Curated AMD official and community resources
📖 ROCm References

• ROCm Official Documentation
• AMD GitHub
• ROCm Release Notes
• Related News

05. AMD-YES - AMD Project Case Collection

✨ AMD Project Case Collection
Community-driven AMD GPU project practices
📖 Getting Started with ROCm AMD-YES

• AMchat - Advanced Mathematics
• Chat-Huanhuan
• Tianji
• Digital Life
• happy-llm

Contributing

  We welcome all forms of contributions! Whether it's:

  • Improving or adding new tutorials
  • Fixing errors and bugs
  • Sharing your AMD projects
  • Providing suggestions and ideas

  Please refer to CONTRIBUTING.md for details.

  If you would like to get more involved, please contact us and we will add you to the project maintainers.

Acknowledgments

Core Contributors

Note: More contributors are welcome to join!

Others

  • If you have any ideas, please contact us. Issues are also very welcome!
  • Special thanks to everyone who has contributed to the tutorials!
  • Thanks to the AMD University Program for supporting this project!

License

MIT License


Let's build the future of AMD AI together! 💪

Made with ❤️ by the hello-rocm community