ArcForge is an open-source toolkit for fine-tuning large language models (LLMs) using Intel Arc™ Battlemage and other next-generation Intel GPUs. It integrates cutting-edge tools like HuggingFace Transformers, Intel IPEX-LLM, and QLoRA/LoRA methods to deliver scalable, efficient training pipelines optimized for the Intel XPU architecture.
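The snippet below is a minimal sketch of the general approach ArcForge builds on, not its actual implementation: load a base model with Hugging Face Transformers, attach LoRA adapters via `peft`, and move the model to an Intel XPU device. The model name and LoRA hyperparameters are placeholders, and it assumes a PyTorch build with Intel XPU support (e.g. via IPEX) plus the `peft` package.

```python
# Illustrative sketch only -- not ArcForge's actual training code.
# Assumes PyTorch with the Intel XPU backend (e.g. via intel_extension_for_pytorch)
# and the `peft` package; model name and LoRA settings are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # placeholder; see the config table below for supported models
tokenizer = AutoTokenizer.from_pretrained(model_name)  # used to preprocess the dataset
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Wrap the base model with low-rank adapters so only a small set of weights is trained.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# Move to the Intel GPU if the XPU backend is available, otherwise fall back to CPU.
device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"
model.to(device)
```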
```bash
git clone https://github.com/arcforge-tune/bmg-lora.git
conda create -n fine-tune python=3.11.13
conda activate fine-tune
cd bmg-lora
pip install -r requirements.txt
```
Key pinned dependency versions:

- `tokenizers==0.21.2`
- `transformers==4.53.1`
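After installation, you can confirm the pinned versions were actually installed; this quick check uses only the standard library:

```python
# Print the installed versions of the pinned packages for a quick sanity check.
from importlib.metadata import version

for pkg in ("transformers", "tokenizers"):
    print(pkg, version(pkg))
```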
To run the example of fine-tuning LLaMA-2 on the Alpaca dataset:

1. Clone the repository and install the dependencies as shown above.
2. Configure your training parameters in `src/config/`, using one of the config files listed in the table below, then launch training with the commands shown after it.
| File Name | Model ID | Model Name |
|---|---|---|
| gpt2_lora_finetune_config.yaml | gpt2 | GPT-2 |
| llama2_hf_qlora_xpu_config.yaml | meta-llama/Llama-2-7b-hf | Llama 2 (7B) |
| llamma2_chat_hf_qlora_xpu_config.yaml | meta-llama/Llama-2-7b-chat-hf | Llama 2 (7B Chat) |
| mistral-7B-v0.1_xpu_config.yaml | mistralai/Mistral-7B-v0.1 | Mistral (7B) |
| llama3.18B_qlora_config.yaml | meta-llama/Llama-3.1-8B-Instruct | Llama 3.1 (8B Instruct) |
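The exact keys inside these YAML files are defined by the schema in `src/config/`; as a purely hypothetical sketch of loading and inspecting one before training (the key names `model_id` and `lora` here are illustrative, not ArcForge's actual schema):

```python
# Hypothetical sketch of reading a training config; the key names used here
# ("model_id", "lora") are illustrative only -- check the YAML files in
# src/config/ for the real schema.
import yaml

with open("src/config/gpt2_lora_finetune_config.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg.get("model_id"))  # e.g. "gpt2"
print(cfg.get("lora"))      # e.g. rank/alpha/dropout settings, if present
```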
Start training with a chosen config:

```powershell
python src/main.py --config .\src\config\gpt2_lora_finetune_config.yaml
```

Resume training from a saved checkpoint:

```powershell
python src/main.py --config .\src\config\gpt2_lora_finetune_config.yaml --resume .\outputs\lora_llama3_1_8b_instruct_xpu\checkpoint-epoch1-step11\
```

Or use the PowerShell helper script:

```powershell
.\Run-FineTune.ps1 -config .\src\config\gpt2_lora_finetune_config.yaml
```
For GPU support, ensure you have the correct Intel GPU drivers installed.
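Once the drivers are in place, you can confirm the Arc GPU is visible from Python; this assumes a PyTorch build with the XPU backend (e.g. as installed alongside IPEX):

```python
# Verify the Intel GPU is visible to PyTorch's XPU backend.
# Assumes a PyTorch build with XPU support (e.g. installed with IPEX / IPEX-LLM).
import torch

if hasattr(torch, "xpu") and torch.xpu.is_available():
    print("XPU devices:", torch.xpu.device_count())
    print("Device 0:", torch.xpu.get_device_name(0))
else:
    print("No XPU device detected; check your Intel GPU driver installation.")
```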