Who Is This For?

Two minimum profiles: building this project from scratch vs installing and running it at home.

Part 1 - Why this matters

This project can be approached from two angles: building it from scratch, or installing and operating it. In practice, a third, hybrid profile tends to emerge once you start refining the product.

The key idea is simple: an LLM speeds up execution, but it does not replace debugging discipline, operational hygiene, or product decision-making.

Part 2 - Candidate CV (minimum viable): build from scratch

This is the minimum profile realistically able to build and evolve the app (not just run it), with an advanced LLM assisting.

Mandatory skills (minimum level):

Recommended skills (nice-to-have):

Part 3 - Candidate CV (minimum viable): installer/operator

This is the minimum profile realistically able to install the repo, run the system locally, and keep it working at home using the docs and a modern LLM for assistance.

Mandatory skills (minimum level):

Recommended skills (nice-to-have):

Part 4 - Skill level matrix (creator vs installer)

The table below summarizes the minimum skill levels required for each profile.

Skill                                       Build from scratch   Install & operate
macOS/Linux                                 Advanced             Medium
Terminal / shell                            Advanced             Medium
HTML/CSS (responsive)                       Advanced             Basic
JavaScript (DOM + async)                    Advanced             Basic (read-only)
Python backend                              Medium-Advanced      Basic (read-only)
LLM operations (Ollama, models)             Medium               Basic
Speech pipeline (STT/TTS)                   Medium               Basic
Networking (ports, LAN)                     Advanced             Basic
Reverse proxy (Caddy)                       Medium               Basic
Git                                         Medium-Advanced      Basic-Medium
Dependency management (venv/brew/pip)       Medium-Advanced      Medium
Debugging discipline                        Advanced             Basic-Medium
Security basics                             Medium               Basic
Docs discipline (follow steps accurately)   Advanced             Advanced

This project is local-first: reliable operation depends more on system hygiene and debugging than on cloud infrastructure.
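As a sketch of what "system hygiene" means in practice, the commands below check two local preconditions: whether Ollama is answering and whether there is room on disk for model files. Port 11434 is Ollama's documented default; the 10 GB disk threshold is an arbitrary assumption, not a project requirement.

```shell
# Minimal local hygiene check (a sketch, not part of the project).

# Is Ollama answering on its default port (11434)?
# /api/tags is Ollama's model-listing endpoint.
if curl -sf http://localhost:11434/api/tags >/dev/null; then
  echo "ollama: up"
else
  echo "ollama: down"
fi

# Enough free disk for model files? (10 GB threshold is arbitrary)
free_kb=$(df -k . | awk 'NR==2 {print $4}')
[ "$free_kb" -gt 10485760 ] && echo "disk: ok" || echo "disk: low"
```

Running something like this before filing a bug report rules out the two most common local failure modes first.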

Part 5 - Quick self-check

If you are evaluating whether you can install and run this project at home, these questions are a good litmus test:

  1. Can you install dependencies and read terminal errors without panic?
  2. Do you understand what localhost and a port are?
  3. Can you clone a repo, run ./scripts/run.sh, and verify the app is up?
  4. Can you pull models with Ollama and recognize a missing-model error?
  5. If something fails, can you capture logs and ask a modern LLM with context?
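The checklist above maps onto a handful of terminal commands. Everything below is illustrative: the repo URL, port 8000, and the model name are placeholders, not settings taken from this project.

```shell
# Illustrative walk-through of the self-check steps (all names are
# placeholders, not project settings).

# 3. Clone the repo and start the app:
#   git clone https://example.com/project.git && cd project
#   ./scripts/run.sh

# ...then verify the app answers on its port (8000 is an assumption):
check_port() {
  # curl -sf exits non-zero if nothing answers at localhost on that port
  curl -sf "http://localhost:$1/" >/dev/null && echo "up" || echo "down"
}
check_port 8000

# 4. Pull a model and recognize a missing-model error:
#   ollama pull llama3
#   ollama run no-such-model 2>&1 | grep -i "not found"

# 5. Capture logs with context before asking an LLM for help:
#   ./scripts/run.sh 2>&1 | tee run.log
```

If every step here feels familiar, or at least decipherable, the installer profile applies.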

If the answers are mostly yes, the installer profile should work fine.