Download the model:
As an example, we will use the OpenChat 3.5 model, which is the model used on the demo instance. There are many other models to choose from.
Navigate to the TheBloke/openchat_3.5-GGUF repository on Hugging Face and download one of the model files, such as openchat_3.5.Q5_K_M.gguf. Place this file inside the ./models directory.
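If you prefer the command line, a download along these lines should also work (this assumes the huggingface-cli tool from the huggingface_hub Python package is installed):
huggingface-cli download TheBloke/openchat_3.5-GGUF openchat_3.5.Q5_K_M.gguf --local-dir ./models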
Build the server:
make llama-server
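Alternatively, if you build with CMake, something like the following should work; note that the binary then lands under build/bin/ rather than the repository root:
cmake -B build
cmake --build build --config Release -t llama-server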
Run the server:
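A minimal invocation might look like the following, assuming the model file downloaded above; -m points at the model file, -c sets the context size, and --port sets the listen port:
./llama-server -m ./models/openchat_3.5.Q5_K_M.gguf -c 4096 --port 8080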
For more information on the server options, read the llama.cpp documentation, or run ./llama-server --help.
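Once the server is running, you can sanity-check it with a quick completion request; the sketch below assumes the default host and port of 127.0.0.1:8080 and posts to the server's /completion endpoint:
curl --request POST \
  --url http://localhost:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 128}'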