Petals allows you to run large language models collaboratively — you load a small part of the model, then team up with people serving the other parts.
To access the Llama 2 model, you need to set the environment variable HUGGINGFACE_API_KEY to a Hugging Face API key for an account that has accepted the Llama 2 terms and conditions (in ownAI, select "Connect to external AI providers" in the user menu in the upper right corner, or set the variable globally in your server's .env file).
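ownAI takes care of this step when you load the aifile, but under the hood it corresponds roughly to the following Petals client sketch. The model name is just an example checkpoint, and passing the key via `token=` assumes a recent transformers version that forwards it to the Hugging Face Hub (older versions use `use_auth_token=` instead):

```python
import os

from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Assumption: HUGGINGFACE_API_KEY holds a key for an account that has
# accepted the Llama 2 terms and conditions on Hugging Face.
token = os.environ["HUGGINGFACE_API_KEY"]

model_name = "meta-llama/Llama-2-70b-chat-hf"  # example gated Llama 2 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name, token=token)

# Only a small part of the model is loaded locally; the remaining layers
# are served by other peers in the public Petals swarm.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name, token=token)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```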
Then download the aifile and load it in ownAI (click on the logo in the upper left corner to open the menu, then select "AI Workshop", then "New AI", and finally "Load Aifile").
Your data will be processed by other people in the public swarm. Learn more about privacy here. For sensitive data, you can set up a private swarm among people you trust.
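If you do run a private swarm, the only client-side difference is pointing Petals at your own bootstrap peers instead of the public ones. A minimal sketch, assuming a private swarm is already running; the multiaddress and peer ID below are placeholders, and the `initial_peers` argument is how the Petals client is told which swarm to join:

```python
import os

from petals import AutoDistributedModelForCausalLM

token = os.environ["HUGGINGFACE_API_KEY"]
model_name = "meta-llama/Llama-2-70b-chat-hf"  # example checkpoint

# initial_peers replaces the default public bootstrap peers, so requests
# only reach servers run by people you trust (address is a placeholder).
model = AutoDistributedModelForCausalLM.from_pretrained(
    model_name,
    token=token,
    initial_peers=["/ip4/10.0.0.2/tcp/31337/p2p/QmExamplePeerID"],
)
```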