CanIRun.ai, built by midudev, detects your GPU, CPU, and RAM specs directly in the browser and tells you which open-source LLMs you can run locally.

It reads hardware capabilities (VRAM, CPU cores, system RAM) via browser APIs and cross-references them against a catalog of open-source models ranging from 0.8B to 685B parameters to determine which ones your machine can run, and at what performance level. The catalog covers models from major open-source providers (Meta's Llama 3.1, Alibaba's Qwen, Microsoft's Phi-4, Google's Gemma 3, DeepSeek) and draws compatibility data from popular local inference runtimes: llama.cpp, Ollama, and LM Studio. No account or installation is required; everything runs in the browser.
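The core cross-referencing step can be sketched as a memory-fit check: a model's weight footprint is roughly its parameter count times bits-per-weight, plus overhead for the KV cache and activations. The following is a minimal illustrative sketch of that logic, not CanIRun.ai's actual code; the model list, the 4-bit default, and the 20% overhead factor are assumptions.

```typescript
interface Model {
  name: string;
  params: number; // parameter count in billions
}

// Rough memory need in GB for a quantized model: weights at `bitsPerWeight`
// bits each (1B params at 8 bits ≈ 1 GB), plus ~20% for KV cache/activations.
// The overhead factor is an illustrative assumption.
function estimateMemoryGB(paramsB: number, bitsPerWeight: number): number {
  const weightsGB = (paramsB * bitsPerWeight) / 8;
  return weightsGB * 1.2;
}

// Filter a catalog down to models that fit in the detected memory budget.
function runnableModels(
  models: Model[],
  availableGB: number,
  bitsPerWeight = 4 // common 4-bit quantization as the default assumption
): Model[] {
  return models.filter(
    (m) => estimateMemoryGB(m.params, bitsPerWeight) <= availableGB
  );
}

// Hypothetical mini-catalog for illustration.
const catalog: Model[] = [
  { name: "Llama 3.1 8B", params: 8 },
  { name: "Qwen 2.5 32B", params: 32 },
  { name: "DeepSeek-V3 685B", params: 685 },
];

console.log(runnableModels(catalog, 16).map((m) => m.name));
```

On a 16 GB machine with 4-bit quantization, only the 8B model passes this check (roughly 4.8 GB needed); a real tool would additionally grade expected tokens-per-second rather than give a binary yes/no.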
The tool is read-only and privacy-preserving: hardware data never leaves the browser and nothing is installed. Detection relies on browser APIs, which have acknowledged accuracy limitations, since browsers deliberately coarsen or hide some hardware details.
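To make the accuracy caveat concrete, here is a hedged sketch of the kind of in-browser probing such a tool can do (an assumption about the general approach, not CanIRun.ai's actual code). `navigator.hardwareConcurrency`, `navigator.deviceMemory`, and the `WEBGL_debug_renderer_info` extension are real browser APIs, but `deviceMemory` is capped (typically at 8 GB) and the GPU renderer string may be masked, which is exactly where the inaccuracy comes from.

```typescript
interface HardwareEstimate {
  cpuCores: number | null;   // logical cores; may be clamped by the browser
  ramGB: number | null;      // navigator.deviceMemory: coarse, capped value
  gpuRenderer: string | null; // e.g. "NVIDIA GeForce RTX 3060", or masked
}

function probeHardware(): HardwareEstimate {
  // Accessed via globalThis so the sketch also loads outside a browser.
  const nav = (globalThis as any).navigator;
  const doc = (globalThis as any).document;

  let gpuRenderer: string | null = null;
  if (doc) {
    const gl = doc.createElement("canvas").getContext("webgl");
    const ext = gl?.getExtension("WEBGL_debug_renderer_info");
    if (gl && ext) {
      gpuRenderer = gl.getParameter(ext.UNMASKED_RENDERER_WEBGL);
    }
  }

  return {
    cpuCores: nav?.hardwareConcurrency ?? null,
    ramGB: nav?.deviceMemory ?? null,
    gpuRenderer,
  };
}
```

Note that none of these APIs report VRAM directly; a tool has to infer it from the renderer string against a known-GPU table, which is one reason browser-side detection stays approximate.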
Relevant to ODS agent infrastructure and AI-assisted development workflows: it helps ODS developers determine which local LLMs (Llama 3.1, Qwen, Phi-4, Gemma 3, DeepSeek) can run on their machines or GCP VPS instances, avoiding cloud API costs. Useful for: