
Getting Started with PromptPanel

The easiest way to get started with PromptPanel is the docker run command below:

docker run --name promptpanel -p 4000:4000 -v PROMPT_DB:/app/database -v PROMPT_MEDIA:/app/media --pull=always promptpanel/promptpanel:latest

This method is great for getting to know PromptPanel and for using external models like OpenAI, Anthropic, Google Gemini, and Cohere.
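
If you'd prefer to keep the container running in the background, the same command works detached (standard Docker flags, nothing PromptPanel-specific):

# Run PromptPanel detached so it keeps running after you close your terminal
docker run -d --name promptpanel -p 4000:4000 -v PROMPT_DB:/app/database -v PROMPT_MEDIA:/app/media --pull=always promptpanel/promptpanel:latest

# Follow the container logs to confirm the service came up
docker logs -f promptpanel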

If you'd like to run models locally, check out the Docker Compose setup below, which uses our Ollama integration to run models on your own devices.

PromptPanel currently requires Docker.

Getting Started with Docker Compose and Ollama for Local / Offline Inference

To run Ollama for local / offline inference using our Docker Compose definition, you can use:

curl -sSL https://promptpanel.com/content/media/manifest/docker-compose.yml | docker compose -f - up
  • PromptPanel will be used for the interface, chat data model, users / authentication, and processing of your AI prompts.
  • Ollama will be used as the local / offline AI inference engine (either CPU or GPU).

We include a management GUI so you can pull new local / offline models from the Ollama library (https://ollama.com/library), manage loaded models, and use them in your plugins.
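
If you prefer the command line over the GUI, you can also pull and list models with the Ollama CLI inside the running container (llama3 below is just an example model name from the Ollama library):

# Pull an example model into the ollama container (swap llama3 for any model from the library)
docker exec -it ollama ollama pull llama3

# List the models Ollama currently has available
docker exec -it ollama ollama list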

You can download the docker-compose.yml file here.

services:
  promptpanel:
    image: promptpanel/promptpanel:latest
    container_name: promptpanel
    restart: always
    volumes:
      - PROMPT_DB:/app/database
      - PROMPT_MEDIA:/app/media
    ports:
      - 4000:4000
    environment:
      PROMPT_OLLAMA_HOST: http://ollama:11434
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: always
volumes:
  PROMPT_DB:
  PROMPT_MEDIA:

Your container's interface will be available at http://localhost:4000.
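
If you'd rather keep a local copy of the Compose definition instead of piping it from curl, a minimal equivalent workflow (assuming the file is saved as docker-compose.yml in your current directory) looks like this:

# Save the Compose definition locally
curl -sSL -o docker-compose.yml https://promptpanel.com/content/media/manifest/docker-compose.yml

# Start both services in the background
docker compose up -d

# Stop the stack later; the named volumes PROMPT_DB and PROMPT_MEDIA are preserved
docker compose down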

Connecting with a GPU?

If you're looking to set up inference using your own GPU hardware, Ollama has broad support for most vendors.

We advise reading their documentation on GPU compatibility, as well as the details on their official Docker image.

That being said, the following docker-compose services should work for Nvidia CUDA and AMD ROCm support if you replace or modify the Ollama service defined above.

Nvidia (CUDA):

ollama:
  image: ollama/ollama:latest
  container_name: ollama
  restart: always
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: 1
            capabilities: [gpu]
  ports:
    - 11434:11434
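
A GPU reservation like the one above assumes the NVIDIA Container Toolkit is installed and configured on the host. As a quick sanity check, you can confirm the GPU is visible from inside the container:

# Should list your GPU if the NVIDIA Container Toolkit is wired up correctly
docker exec -it ollama nvidia-smi

# Ollama also logs at startup whether it detected a GPU
docker logs ollama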

AMD (ROCm):

ollama:
  image: ollama/ollama:rocm
  container_name: ollama
  restart: always
  devices:
    - /dev/kfd
    - /dev/dri
  ports:
    - 11434:11434
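
As with the CUDA setup, it's worth confirming the ROCm device nodes exist on the host and that Ollama actually picked up the GPU. For example:

# The ROCm device nodes being passed through must exist on the host
ls -l /dev/kfd /dev/dri

# After starting the stack, Ollama's startup logs should indicate whether a GPU was detected
docker logs ollama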


Backing up your container

  • The embedded database is found at /app/database/db.sqlite3 inside of the container.
  • Your media folder (for uploads) is found at /app/media/... inside of the container.

It's important to back up and persist copies of these before the container is destroyed or modified.
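
As one minimal approach (the backup directory and file names below are just examples), you can copy both out of the running container with docker cp:

# Create a local backup directory
mkdir -p backup

# Copy the embedded SQLite database out of the container
docker cp promptpanel:/app/database/db.sqlite3 ./backup/db.sqlite3

# Copy the media folder (uploads) as well
docker cp promptpanel:/app/media ./backup/media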

You can find a guide for backing up your installation here.