free-gpu sits on top of llmfit to show what your hardware can handle and when it makes sense to move fine-tuning, inference, or longer runs to free and near-free providers.
pip install free-gpu
{
  "tool": "plan_provider_workflow",
  "arguments": {
    "workload": "finetune-lora",
    "model": "llama-3.1-8b",
    "budget": "under-25",
    "task_hours": 6,
    "min_vram_gb": 16
  }
}
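Over the hosted HTTP transport, a tool call like the one above travels as a JSON-RPC 2.0 `tools/call` request, which is the wire format the MCP spec defines. A minimal sketch of building that envelope (the request `id` and the envelope assembly are illustrative, not specific to free-gpu):

```python
import json

# Wrap the plan_provider_workflow call in a JSON-RPC 2.0 envelope,
# the shape MCP clients send for its "tools/call" method.
request = {
    "jsonrpc": "2.0",
    "id": 1,  # arbitrary request id chosen by the client
    "method": "tools/call",
    "params": {
        "name": "plan_provider_workflow",
        "arguments": {
            "workload": "finetune-lora",
            "model": "llama-3.1-8b",
            "budget": "under-25",
            "task_hours": 6,
            "min_vram_gb": 16,
        },
    },
}

# Serialized payload an MCP client would POST to the server endpoint.
payload = json.dumps(request)
```

An MCP client (Claude Code, Codex, a VS Code extension) builds this envelope for you; it is shown here only to make the transport concrete.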
codex mcp add freeGpu --url https://free-gpu.vercel.app/mcp
codex mcp add free-gpu-local -- free-gpu-mcp
claude mcp add --transport http free-gpu https://free-gpu.vercel.app/mcp
claude mcp add --transport stdio free-gpu -- free-gpu-mcp
{
  "mcpServers": {
    "free-gpu": {
      "url": "https://free-gpu.vercel.app/mcp"
    }
  }
}
{
  "servers": {
    "freeGpu": {
      "type": "http",
      "url": "https://free-gpu.vercel.app/mcp"
    }
  }
}
pip install free-gpu
free-gpu ui
https://free-gpu.vercel.app/mcp
free-gpu-mcp
scratch-train
finetune-lora
inference
batch-eval
agent-loop
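Each of these workload names is a valid value for the planner's `workload` argument. A small sketch of guarding against typos before building a request (the `build_arguments` helper is hypothetical, not part of the package; the real tool takes the arguments dict directly):

```python
# Workload types accepted by plan_provider_workflow, per the list above.
WORKLOADS = {"scratch-train", "finetune-lora", "inference", "batch-eval", "agent-loop"}

def build_arguments(workload: str, model: str, **extra) -> dict:
    """Assemble a plan_provider_workflow arguments dict.

    Hypothetical convenience helper: rejects unknown workload names
    early instead of letting a typo reach the server.
    """
    if workload not in WORKLOADS:
        raise ValueError(f"unknown workload: {workload!r}")
    return {"workload": workload, "model": model, **extra}

args = build_arguments("inference", "llama-3.1-8b", min_vram_gb=16)
```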
Best for quick inference, demos, notebooks, and small agent loops that should not require a long-lived allocation.
This is where credits, starter plans, and API-friendly providers start outperforming pure free-tier options.
For larger VRAM and longer jobs, the planner shifts toward programs, allocations, and grant-style infrastructure.