On-Premise Setup
Set up a custom, self-hosted ComfyDeploy machine on-premise to run ComfyUI workflows on your own infrastructure.
Overview
ComfyDeploy supports three self-hosted machine types:
- Classic machines: Direct ComfyUI instances with endpoint configuration
- RunPod serverless: Serverless scaling capabilities
- Custom configurations: Docker-based setups with custom nodes
Prerequisites
- Hardware: NVIDIA GPU with CUDA support, 16GB+ RAM, 100GB+ SSD storage
- Software: Docker with GPU support, Python 3.11+, Git, CUDA drivers
- Network: internet access with a public IP or port forwarding on your chosen port
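A quick way to confirm the basics before you continue (a sketch; adjust the commands for your distribution):
# Verify GPU driver, Docker GPU support, Python, and Git
nvidia-smi
docker info | grep -i nvidia   # the NVIDIA runtime should appear if GPU support is configured
python3 --version              # should report 3.11 or newer
git --version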
Machine Creation
Create a self-hosted machine in ComfyDeploy dashboard:
{
  "name": "My On-Premise Machine",
  "type": "classic",
  "endpoint": "http://your-server-ip:8188",
  "auth_token": "your-secure-token"
}

Types: classic (standard ComfyUI) or runpod-serverless (RunPod deployment)
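The auth_token should be a long, random secret that you keep private. One way to generate it (a sketch using openssl; any secure random generator works):
# Generate a 64-character hex token to use as auth_token
openssl rand -hex 32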
ComfyUI Installation
# Clone and install ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI.git /comfyui
cd /comfyui
python -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
python -m pip install xformers -r requirements.txt aioboto3
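# Optional sanity check: confirm the CUDA build of PyTorch can see the GPU
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"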
# Create directories
mkdir -p /private_models/input /comfyui/custom_nodes /comfyui/models

ComfyDeploy Custom Node Setup
# Install ComfyDeploy custom node
cd /comfyui/custom_nodes
git clone https://github.com/BennyKok/comfyui-deploy --recursive
cd comfyui-deploy
git reset --hard 7b734c415aabd51b8bb8fad9fd719032b3b8a36fa
if [ -f requirements.txt ]; then python -m pip install -r requirements.txt; fi
if [ -f install.py ]; then python install.py || echo 'install script failed'; fi
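# Optional: confirm the checkout landed on the pinned commit
git rev-parse HEAD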
# Optional: Install ComfyUI Manager
cd /comfyui/custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git --recursive
cd ComfyUI-Manager && git reset --hard fd2d285af5ae257a4d1f3c1146981ce41ac5adf5
if [ -f requirements.txt ]; then python -m pip install -r requirements.txt; fi
if [ -f install.py ]; then python install.py || echo 'install script failed'; fi

Custom Node Installation
# Install additional custom nodes
cd /comfyui/custom_nodes
git clone <git-url> --recursive
cd <custom-node-name>
git reset --hard <commit-hash>
if [ -f requirements.txt ]; then python -m pip install -r requirements.txt; fi
if [ -f install.py ]; then python install.py || echo "install script failed"; fi

Endpoint Configuration
# Start ComfyUI server
cd /comfyui
python main.py --dont-print-server --enable-cors-header --listen --port 8188 \
  --input-directory /private_models/input --preview-method auto
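# Optional: run ComfyUI in the background and capture output to a log file
# (the log path is an assumption; any writable location works)
nohup python main.py --dont-print-server --enable-cors-header --listen --port 8188 \
  --input-directory /private_models/input --preview-method auto > /comfyui/comfyui.log 2>&1 &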
# Test connectivity
curl http://your-server-ip:8188/system_stats
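# Optional: confirm your GPU appears in the response (assumes jq is installed)
curl -s http://your-server-ip:8188/system_stats | jq '.devices'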
curl -X POST http://your-server-ip:8188/comfyui-deploy/run \
  -H "Content-Type: application/json" \
  -H "Authorization: Basic $(echo -n 'your-auth-token' | base64)" \
  -d '{"cd_token": "test"}'Docker Deployment
FROM nvidia/cuda:12.1.0-devel-ubuntu22.04
RUN apt-get update && apt-get install -y python3 python3-pip git wget curl libgl1-mesa-glx libglib2.0-0 && rm -rf /var/lib/apt/lists/*
RUN python3 -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
RUN python3 -m pip install xformers aioboto3
RUN git clone https://github.com/comfyanonymous/ComfyUI.git /comfyui
WORKDIR /comfyui
RUN python3 -m pip install -r requirements.txt
WORKDIR /comfyui/custom_nodes
RUN git clone https://github.com/BennyKok/comfyui-deploy --recursive
WORKDIR /comfyui/custom_nodes/comfyui-deploy
RUN git reset --hard 7b734c415aabd51b8bb8fad9fd719032b3b8a36fa && \
    if [ -f requirements.txt ]; then python3 -m pip install -r requirements.txt; fi && \
    if [ -f install.py ]; then python3 install.py || echo 'install script failed'; fi
RUN mkdir -p /private_models/input
WORKDIR /comfyui
EXPOSE 8188
CMD ["python3", "main.py", "--dont-print-server", "--enable-cors-header", "--listen", "--port", "8188", "--input-directory", "/private_models/input", "--preview-method", "auto"]# Build and run
docker build -t comfyui-deploy-onpremise .
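# Optional: verify the container can see the GPU before starting the service
# (assumes the NVIDIA Container Toolkit is installed on the host)
docker run --rm --gpus all comfyui-deploy-onpremise nvidia-smi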
docker run -d --name comfyui-deploy --gpus all -p 8188:8188 \
  -v /path/to/models:/comfyui/models -v /path/to/outputs:/comfyui/output comfyui-deploy-onpremise

Authentication Flow: ComfyDeploy → Runner
- Classic machines: ComfyDeploy sends Basic auth requests to your endpoint (for example, an ngrok-protected domain)
- RunPod serverless: ComfyDeploy sends RunPod-specific authentication
# ComfyDeploy sends requests like this:
curl -X POST https://your-domain.ngrok.app/comfyui-deploy/run \
  -H "Authorization: Basic <base64-encoded-token>" \
  -H "Content-Type: application/json" \
  -d '{"workflow": {...}, "inputs": {...}, "cd_token": "your-token"}'Troubleshooting
- Connection issues: check that ComfyUI is listening on port 8188, review firewall settings, and confirm the endpoint is reachable
- Authentication errors: verify the auth_token matches, check header formatting, and test with curl
- Custom node issues: restart ComfyUI, check dependencies, and verify compatibility
- GPU not detected: install nvidia-docker2, verify CUDA drivers, and pass the --gpus all flag
- Performance: monitor GPU memory (nvidia-smi -l 1), use model caching, and prefer faster storage (NVMe SSD)
- Logging: tail -f /comfyui/comfyui.log, monitor response times, and set up alerts (see the sketch below)
- Security: use HTTPS/TLS and strong auth tokens, rotate credentials, and implement rate limiting
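A minimal health-check sketch you can run from cron or a monitoring agent (the endpoint, log path, and threshold are assumptions; adapt them to your alerting stack):
#!/usr/bin/env bash
# Alert if the ComfyUI endpoint stops responding or free GPU memory runs low
ENDPOINT="http://your-server-ip:8188"

# Endpoint reachable?
if ! curl -sf "$ENDPOINT/system_stats" > /dev/null; then
  echo "ALERT: ComfyUI endpoint is not responding" >&2
fi

# Free memory (MiB) on the first GPU
FREE_MIB=$(nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits | head -n 1)
if [ "${FREE_MIB:-0}" -lt 1024 ]; then
  echo "ALERT: less than 1 GiB of free GPU memory" >&2
fi

# Surface recent errors from the ComfyUI log
tail -n 50 /comfyui/comfyui.log 2>/dev/null | grep -i error || true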
Ngrok Integration for Local Machines
Expose your local ComfyUI instance securely without port forwarding:
# Install and configure ngrok
curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null
echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list
sudo apt update && sudo apt install ngrok
ngrok config add-authtoken YOUR_NGROK_AUTHTOKEN
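# The traffic policy below checks for the base64-encoded form of your auth token;
# compute it once and substitute it for YOUR_BASE64_TOKEN in ngrok.yml
echo -n 'your-secure-token' | base64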
# Start ComfyUI and ngrok tunnel
cd /comfyui && python main.py --dont-print-server --enable-cors-header --listen --port 8188 --input-directory /private_models/input --preview-method auto &
ngrok http 8188 --domain=your-domain.ngrok.app

Authentication Protection - Create ngrok.yml:
version: "2"
authtoken: YOUR_NGROK_AUTHTOKEN
tunnels:
  comfyui:
    addr: 8188
    proto: http
    domain: your-domain.ngrok.app
    traffic_policy:
      inbound:
        - name: "ComfyDeploy Auth Protection"
          expressions:
            - "!(hasReqHeader('Authorization') && getReqHeader('Authorization')[0].startsWith('Basic YOUR_BASE64_TOKEN'))"
          actions:
            - type: "deny"
              config:
                status_code: 403

Machine Configuration in ComfyDeploy:
{
  "name": "My Ngrok Machine",
  "type": "classic", 
  "endpoint": "https://your-domain.ngrok.app",
  "auth_token": "your-secure-token"
}

Enterprise Support
Need help with enterprise deployment, custom integrations, or scaling? Book a call with our team.
Our enterprise support includes:
- Custom deployment assistance
- Advanced authentication setups
- Load balancing and scaling guidance
- Priority technical support
Next Steps
- Test workflow execution through ComfyDeploy interface
- Monitor performance and optimize as needed
- Set up monitoring/alerting for production use
- Implement backup and disaster recovery procedures
- Scale horizontally by adding more machines