FIND-20260323-010

ADHOC MEDIUM 2026-03-23 — via @thismacapital on X

CanIRun.ai — Browser-based local LLM hardware compatibility checker

"Si vous etes dans le rabbit hole des LLMs locaux : voici un site qui vous permet de vous informer sur les modeles et leur vitesse d'inference que vous pouvez run sur votre machine"
@thismacapital — 285 likes, 65,382 views

CanIRun.ai detects GPU, CPU, and RAM specs directly in the browser and tells you which open-source LLMs you can run locally. Built by midudev, it covers models from 0.8B to 685B parameters, including Llama 3.1, Qwen, Phi-4, Gemma 3, and DeepSeek, drawing its data from llama.cpp, Ollama, and LM Studio.
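The core check a tool like this performs is arithmetic: quantized model size versus available memory. The sketch below illustrates that calculation; the bytes-per-weight figures are ballpark values for common llama.cpp quantizations and the 20% headroom factor is an assumption, not CanIRun.ai's actual data or method.

```typescript
// Rough, illustrative fit check: estimate quantized model size and compare it
// to available memory. Bytes-per-weight values are approximations for common
// llama.cpp quantizations; real tools also account for KV cache and context size.
const BYTES_PER_WEIGHT: Record<string, number> = {
  F16: 2.0,     // full half precision
  Q8_0: 1.06,   // ~8.5 bits per weight
  Q4_K_M: 0.56, // ~4.5 bits per weight
};

function estimatedSizeGiB(paramsBillions: number, quant: string): number {
  const bytes = paramsBillions * 1e9 * (BYTES_PER_WEIGHT[quant] ?? 2.0);
  return bytes / 1024 ** 3;
}

function canFit(paramsBillions: number, quant: string, memoryGiB: number): boolean {
  // Leave ~20% headroom for KV cache, activations, and the runtime itself (assumed).
  return estimatedSizeGiB(paramsBillions, quant) * 1.2 <= memoryGiB;
}

// Example: an 8B model at Q4_K_M weighs in around 4.2 GiB, so it fits in 8 GiB.
console.log(canFit(8, "Q4_K_M", 8));   // true
console.log(canFit(70, "Q4_K_M", 8));  // false
```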

local-llm ai hardware developer-tools llama ollama inference

What is CanIRun.ai?

A browser-based tool that reads hardware capabilities (VRAM, CPU cores, RAM bandwidth) via browser APIs and cross-references them against a catalog of open-source LLMs to determine which models you can run locally — and at what performance level. It covers models from major open-source providers (Meta, Alibaba, Microsoft, Google, Mistral) and integrates with popular local inference runtimes including llama.cpp, Ollama, and LM Studio. No account or installation required — runs entirely in-browser.
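For illustration, the following sketch shows how a page can approximate that hardware profile with standard web APIs (navigator.hardwareConcurrency, the Chromium-only navigator.deviceMemory, and WebGPU adapter limits). This is an assumption-laden sketch, not CanIRun.ai's implementation, and the values it returns are coarse by design, which matches the accuracy limitations noted below.

```typescript
// Illustrative browser-side hardware detection (not CanIRun.ai's actual code).
// deviceMemory is Chromium-only and capped at 8 GiB, and WebGPU buffer limits
// are an upper bound rather than a direct VRAM measurement.
interface HardwareProfile {
  logicalCores: number;
  approxRamGiB: number | null;
  gpuVendor: string | null;
  maxGpuBufferGiB: number | null;
}

async function detectHardware(): Promise<HardwareProfile> {
  const nav = navigator as any; // deviceMemory and gpu are not in all TS DOM libs

  // Coarse RAM estimate from the Device Memory API, if available.
  const approxRamGiB: number | null = nav.deviceMemory ?? null;

  let gpuVendor: string | null = null;
  let maxGpuBufferGiB: number | null = null;

  if (nav.gpu) {
    const adapter = await nav.gpu.requestAdapter();
    if (adapter) {
      // GPUAdapterInfo exposes vendor/architecture strings where the browser allows it.
      gpuVendor = adapter.info?.vendor ?? null;
      maxGpuBufferGiB = adapter.limits.maxBufferSize / 1024 ** 3;
    }
  }

  return {
    logicalCores: navigator.hardwareConcurrency,
    approxRamGiB,
    gpuVendor,
    maxGpuBufferGiB,
  };
}

// Usage: feed the profile into a model catalog to filter runnable models.
detectHardware().then((hw) => console.log(hw));
```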

Security Review

Unknown (web tool)
N/A
0
ACTIVE
LOW
SAFE_TO_USE

This is a read-only browser tool — no data is uploaded, no installation required. Hardware detection uses browser APIs with acknowledged limitations in accuracy.

ODS Impact

Relevant to ODS agent infrastructure and AI-assisted development workflows. Helps ODS developers determine which local LLMs (Llama 3.1, Qwen, Phi-4, Gemma 3, DeepSeek) can run on their machines or GCP VPS instances without incurring cloud API costs.

Visit CanIRun.ai →
View original tweet →