{% extends "admin/base.html" %}
{% block title %}Ollama Instance Manager{% endblock %}
{% block header_title %}Local Instance Manager{% endblock %}
{% block content %}
{% if is_installed %}Detected{% else %}Not Found{% endif %}
{% if is_vllm_installed %}Detected{% else %}Optional{% endif %}
{% if is_openllm_installed %}Detected{% else %}Optional{% endif %}
Internal llama.cpp/vLLM engines.
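The three "Detected / Optional / Not Found" badges above imply a server-side availability check for each engine. A minimal Python sketch of such a check, assuming the Ollama CLI is found on `PATH` while vLLM and OpenLLM are detected as importable Python packages (the function name `detect_engines` is hypothetical, not part of the project):

```python
import shutil
import importlib.util

def detect_engines() -> dict[str, bool]:
    """Best-effort check for locally available inference engines."""
    return {
        # Ollama ships as a standalone binary, so look for it on PATH.
        "ollama": shutil.which("ollama") is not None,
        # vLLM and OpenLLM are Python packages; find_spec avoids importing them.
        "vllm": importlib.util.find_spec("vllm") is not None,
        "openllm": importlib.util.find_spec("openllm") is not None,
    }
```

The view would pass these booleans to the template as `is_installed`, `is_vllm_installed`, and `is_openllm_installed`.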
Search Hugging Face and download GGUF or Safetensors directly to your server.
The following Ollama instances are running locally but are not yet managed by the Fortress supervisor.
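Discovering instances that are "running locally but not yet managed" can be sketched as a port probe against localhost, filtered by the set of ports the supervisor already owns. This is an illustrative sketch, not the project's actual detection code; the candidate range starting at Ollama's default port 11434 and both function names are assumptions:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def find_unmanaged_instances(managed_ports: set[int],
                             candidate_ports=range(11434, 11444)) -> list[int]:
    """Ports with a live listener that the supervisor does not already manage."""
    return [p for p in candidate_ports
            if p not in managed_ports and is_port_open("127.0.0.1", p)]
```

A real implementation would additionally confirm the listener is Ollama (e.g. by querying its HTTP API) rather than trusting the port number alone.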
| Name | Port | Config | Status | Actions |
|---|---|---|---|---|
| {{ item.config.name }} | {{ item.config.port }} | GPU: {{ item.config.gpu_ids or 'All' }} / Keep-alive: {{ item.config.keep_alive }} | {% if item.state == 'RUNNING' %}MANAGED{% elif item.state == 'SYSTEM' %}SYSTEM SERVICE{% elif item.state == 'CONFLICT' %}PORT CONFLICT{% else %}OFFLINE{% endif %} | {% if item.state == 'SYSTEM' %}Manage via OS{% endif %} |
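The `if`/`elif` chain in the Status column is a plain state-to-label mapping, which could equally live in the view layer. A minimal Python sketch (the function name `status_badge` is hypothetical):

```python
def status_badge(state: str) -> str:
    """Map a supervisor state to the badge text rendered in the Status column."""
    return {
        "RUNNING": "MANAGED",
        "SYSTEM": "SYSTEM SERVICE",
        "CONFLICT": "PORT CONFLICT",
    }.get(state, "OFFLINE")  # any unrecognized state falls back to OFFLINE
```

Keeping the mapping in Python keeps the template thin and makes the badge logic unit-testable.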