The default LLM is Mistral, run locally via Ollama (note the `ollama pull mistral` command below). You'll need to install the Ollama desktop app and run the following commands to give this site access to the locally running model:
$ OLLAMA_ORIGINS=https://webml-demo.vercel.app OLLAMA_HOST=127.0.0.1:11435 ollama serve
Then, in a separate terminal window, pull the model:
$ OLLAMA_HOST=127.0.0.1:11435 ollama pull mistral
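Here, OLLAMA_ORIGINS whitelists the site's origin for cross-origin requests from the browser, and OLLAMA_HOST binds the local server to port 11435. As an optional sanity check (not part of the original setup), you can confirm the model is reachable by querying Ollama's REST API directly:

$ curl http://127.0.0.1:11435/api/generate -d '{"model": "mistral", "prompt": "Hello", "stream": false}'

If the setup is correct, this returns a JSON object whose `response` field contains the model's reply.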