Working Locally with Llama.cpp and Llama 2

Great Lancaster AI event on working locally with llama.cpp and its Python wrappers to get Llama 2 running on our laptops! Awesome questions on embeddings, installations, and Burning Man!

The basic steps were pretty easy.

  1. Install the Python wrapper for the appropriate operating system.
  2. Download the GGML model you want from Hugging Face. We used this one for the demo today.
  3. Profit!
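The steps above can be sketched in a few lines of Python, assuming the `llama-cpp-python` wrapper is installed (`pip install llama-cpp-python`) and a GGML model file has been downloaded from Hugging Face. The model filename below is a hypothetical example, not the exact file from the demo:

```python
import os

# Hypothetical path to a GGML model downloaded from Hugging Face (step 2).
MODEL_PATH = "./llama-2-7b-chat.ggmlv3.q4_0.bin"


def build_prompt(question: str) -> str:
    """Format a plain Q&A prompt for the model."""
    return f"Q: {question} A:"


# Only attempt inference if the model file is actually present.
if os.path.exists(MODEL_PATH):
    from llama_cpp import Llama  # the Python wrapper around llama.cpp

    llm = Llama(model_path=MODEL_PATH)
    out = llm(build_prompt("What is llama.cpp?"), max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])
```

On a laptop, the quantized (e.g. `q4_0`) GGML variants are the practical choice, since they trade a little accuracy for a much smaller memory footprint.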

Special thanks to everyone who came out to work through it; we look forward to seeing everyone again soon!
