Initial upload

Cookbook.md (new file, +221 lines)
# Homelab Installer Cookbook (Full Version)

This document serves as the central knowledge base for developing, extending, and maintaining the homelab installation system. It lets future sessions dive in immediately without rebuilding context.

---

## 1. Project Overview

The goal of the system is the automated setup of homelab servers using reusable, modular installation recipes.

### Requirements

* No external tools beyond Bash plus optional Ansible
* No Git required
* The server loads installation definitions dynamically from a web server
* Fully menu-driven and interactive, no prior knowledge needed
* Repeatable, stable installations

### High-level flow

```
install.sh (runs locally)
 ↓ loads categories + recipes from the API endpoint
 ↓ user picks a category and a recipe
 ↓ the recipe contains either install.sh (shell) or playbook.yml (Ansible)
 ↓ the recipe is executed
```

---

## 2. Web Server Layout

The web server hosts everything the installer needs:

```
public_html/
 ├─ info.php     → serves a JSON index of the available recipes
 └─ recipes/     → contains the recipes (modular blocks)
     ├─ system/    → system-level items (e.g. base-system, docker)
     ├─ services/  → individual services (e.g. ollama, open-webui)
     └─ stacks/    → composed sets of several services
```
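
For reference, a typical `info.php` response that the installer consumes looks like this (a hypothetical example; the recipe names are illustrative):

```
{
  "recipes": {
    "system": ["base-system", "docker"],
    "services": ["ollama", "open-webui"],
    "stacks": ["ai-stack"]
  }
}
```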

---

## 3. Local Layout on the Target System

```
/opt/homelab/playbooks/          → stored playbooks
/srv/docker/                     → target location for all Docker containers
/tmp/homelab-installer/          → temporary downloads
/var/log/homelab-installer.log   → installer log file
```

---

## 4. Recipes

Each recipe may contain **one or both** variants:

| File           | Meaning                                        |
| -------------- | ---------------------------------------------- |
| `install.sh`   | shell installation (procedural)                |
| `playbook.yml` | Ansible installation (declarative, idempotent) |

The installer detects automatically:

* shell script only → shell mode
* playbook only → Ansible mode
* both → the user chooses

### 4.1 Shared Base Functions for Shell Recipes

All shell recipes may rely on the following functions, which are provided by the main installer:

| Function                             | Purpose                                                                              |
| ------------------------------------ | ------------------------------------------------------------------------------------ |
| `ensure_root`                        | Ensures commands run with root privileges (directly or via sudo).                    |
| `detect_pkg_manager`                 | Detects automatically whether apt, dnf, pacman, or apk is in use.                    |
| `pkg_install <packages...>`          | Installs packages regardless of the package manager (incl. `apt update` if needed).  |
| `install_docker`                     | Installs Docker and the Docker Compose plugin if not already present.                |
| `ask_to_install "NAME"`              | Asks the user whether a given component should be installed (Y/n, default Y).        |
| `begin_password_section "NAME"`      | Starts a password block in the central key file.                                     |
| `generate_password "variable_name"`  | Generates a strong password and stores it automatically in keys.txt.                 |
| `end_password_section "NAME"`        | Closes the password block again.                                                     |

Shell recipes must not contain their own root checks, sudo logic, or package-manager detection. That logic lives centrally in the main installer.

---

## 5. Shell Recipe Style Guide (Updated)

Shell recipes should:

* always begin with `#!/usr/bin/env bash`
* use `set -euo pipefail` for robust error handling
* contain **no** direct `apt`, `dnf`, `pacman`, `apk`, `sudo`, or `docker` commands
* use `ensure_root`, `detect_pkg_manager`, `$SUDO`, `pkg_install`, `install_docker`, and `ask_to_install` instead
* generate any required passwords with `generate_password`
* **always store passwords in a named block** so the central keys.txt stays readable later

### Minimal Example

```bash
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager

begin_password_section "OLLAMA"
ADMIN_PASS="$(generate_password "ollama_admin")"
end_password_section "OLLAMA"

pkg_install curl gnupg lsb-release

$SUDO mkdir -p /srv/docker/ollama
cd /srv/docker/ollama

$SUDO tee docker-compose.yml >/dev/null <<EOF
services:
  ollama:
    image: ollama/ollama:latest
EOF

$SUDO docker compose up -d

log "OLLAMA wurde erfolgreich installiert."
```

---

## 6. Ansible Playbook Style Guide

* Every task must be repeatable
* No raw shell commands when a module exists
* Always `become: true`
* Local mode: `ansible-playbook -i localhost, …` (a minimal playbook following these rules is sketched below)
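
A minimal sketch of a `playbook.yml` that follows these rules (the package name is illustrative):

```yaml
---
- hosts: all
  connection: local
  become: true
  tasks:
    - name: Install htop via the package module (idempotent, no raw shell)
      ansible.builtin.package:
        name: htop
        state: present
```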

---

## 7. Stacks

A stack is simply a shell or YAML script that runs several recipes one after another (see the sketch below).
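
A minimal sketch of a stack as a shell script, assuming the recipes are fetched from the same web server the installer uses (the base URL and recipe names are illustrative):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Illustrative base URL; the real installer defines its own BASE_URL.
BASE_URL="https://example.com"

# Run several service recipes in sequence.
for recipe in services/ollama services/open-webui; do
    curl -fsSL "$BASE_URL/recipes/$recipe/install.sh" | bash
done
```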

---

## 8. Adding a New Recipe

1. Pick a category (`system`, `services`, `stacks`)
2. Create a new folder inside it:
   `recipes/<category>/<name>/`
3. Add `install.sh` or `playbook.yml`
4. From now on, recipes **always** use `ensure_root`, `detect_pkg_manager`, `pkg_install` (the resulting layout is shown after this list)
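
The resulting layout for a hypothetical service recipe named `myapp`:

```
recipes/
 └─ services/
     └─ myapp/
         └─ install.sh
```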

### 8.1 Password Handling Standard

When recipes generate credentials, they are stored automatically in a file:

```
keys.txt
(in the directory from which the main installer was run)
```

Format:

```
===== RECIPENAME =====
key_name = value
...
===== ENDE RECIPENAME =====
```

Example for a recipe named `ollama`:

```
===== OLLAMA =====
ollama_admin = Gs92hs7shs8192hsbs8==
===== ENDE OLLAMA =====
```

This makes it possible to run many installations without losing track of credentials later.

---

## 9. Naming Rules

| Element       | Rule                      |
| ------------- | ------------------------- |
| Folder names  | only `a-z0-9-`            |
| Shell scripts | always `install.sh`       |
| Playbooks     | always `playbook.yml`     |
| No spaces     | otherwise the menu breaks |

---

## 10. Troubleshooting

| Problem                               | Solution                                        |
| ------------------------------------- | ----------------------------------------------- |
| "Installer cannot find recipes"       | check `info.php` / web server write permissions |
| Playbook hangs on apt                 | run `dpkg --configure -a`                       |
| Shell script aborts without a message | enable `set -x` debugging (see example below)   |
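
One way to capture trace output from a failing recipe (a debugging sketch; the log path is illustrative):

```bash
bash -x ./install.sh 2>&1 | tee /tmp/installer-debug.log
```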

---

## 11. Roadmap

* Create a Docker playbook
* Build the AI stack (Ollama + Open-WebUI + embeddings)
* Optional identity stack (Authelia + Traefik)

---

## 12. Next Step

Continue with:
**docker ansible bitte**
beispiel-installer.zip (new file, binary, not shown)

index.html (new file, +42 lines)
<!DOCTYPE html>
<html lang="de">
<head>
<meta charset="UTF-8">
<title>Homelab Installer</title>
<style>
  body { background:#111; color:#eee; font-family: monospace; padding:40px; }
  h1 { color:#6cf; }
  pre { background:#222; padding:12px; border-radius:6px; }
  a { color:#6cf; }
</style>
</head>
<body>

<h1>Homelab Installer</h1>
<p>Starte den Installer auf deinem Server:</p>

<h3>Mit <b>curl</b>:</h3>
<pre id="cmd-curl"></pre>

<h3>Mit <b>wget</b>:</h3>
<pre id="cmd-wget"></pre>

<p>API JSON Übersicht:
<a id="api-link" href="#">API</a></p>

<script>
let base = window.location.origin;

document.getElementById("cmd-curl").textContent =
  `curl -s ${base}/install.sh -o install.sh && chmod +x install.sh && ./install.sh`;

document.getElementById("cmd-wget").textContent =
  `wget ${base}/install.sh -O install.sh && chmod +x install.sh && ./install.sh`;

let api = document.getElementById("api-link");
api.href = `${base}/info.php`;
api.textContent = `${base}/info.php`;
</script>

</body>
</html>
info.php (new file, +25 lines)
<?php
header('Content-Type: application/json; charset=utf-8');

$recipesDir = __DIR__ . '/recipes';

if (!is_dir($recipesDir)) {
    echo json_encode(["recipes" => new stdClass()], JSON_PRETTY_PRINT);
    exit;
}

$categories = array_filter(glob($recipesDir . '/*'), 'is_dir');
$output = ["recipes" => []];

foreach ($categories as $categoryPath) {
    $categoryName = basename($categoryPath);
    $items = array_filter(glob($categoryPath . '/*'), 'is_dir');
    $itemNames = array_map('basename', $items);

    if (!empty($itemNames)) {
        $output["recipes"][$categoryName] = array_values($itemNames);
    }
}

echo json_encode($output, JSON_PRETTY_PRINT | JSON_UNESCAPED_UNICODE | JSON_UNESCAPED_SLASHES);
?>
install.sh (new file, +298 lines)
#!/usr/bin/env bash

set -euo pipefail

# --- Root / sudo logic ---
if [[ $EUID -ne 0 ]]; then
    if ! command -v sudo >/dev/null 2>&1; then
        echo -e "\033[0;31mFehler: Dieses Script benötigt sudo, ist aber nicht installiert.\033[0m"
        echo "Bitte installiere sudo zuerst oder führe das Script als root aus."
        exit 1
    fi
    SUDO="sudo"
else
    SUDO=""
fi

BASE_URL="https://install-daten.ploeger-online.de"
API_URL="$BASE_URL/info.php"
LOG_FILE="/tmp/homelab-installer.log"
TMP_DIR="/tmp/homelab-installer"
mkdir -p "$TMP_DIR"

# Colors
GREEN="\033[0;32m"
YELLOW="\033[1;33m"
RED="\033[0;31m"
NC="\033[0m"

log() {
    echo -e "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}

cleanup() {
    echo ""
    log "${RED}⛔ Installation abgebrochen durch Benutzer.${NC}"
    rm -rf "$TMP_DIR" 2>/dev/null || true
    log "🛑 Abbruch abgeschlossen."
    exit 130
}

trap cleanup INT

# --- Global base path for password storage ---
INSTALLER_BASE_DIR="$(pwd)"
PASSWORD_STORE_FILE="$INSTALLER_BASE_DIR/keys.txt"

# --- Root / sudo helper ---
ensure_root() {
    if [[ $EUID -ne 0 ]]; then
        if ! command -v sudo >/dev/null 2>&1; then
            log "${RED}Fehler: Dieses Script benötigt Root oder sudo.${NC}"
            exit 1
        fi
        SUDO="sudo"
    else
        SUDO=""
    fi
}

# --- Package manager detection ---
detect_pkg_manager() {
    if command -v apt >/dev/null 2>&1; then PKG="apt"
    elif command -v dnf >/dev/null 2>&1; then PKG="dnf"
    elif command -v pacman >/dev/null 2>&1; then PKG="pacman"
    elif command -v apk >/dev/null 2>&1; then PKG="apk"
    else
        log "${RED}Kein unterstützter Paketmanager gefunden.${NC}"
        exit 1
    fi
}

pkg_install() {
    case "$PKG" in
        apt) $SUDO apt update && $SUDO apt install -y "$@" ;;
        dnf) $SUDO dnf install -y "$@" ;;
        pacman) $SUDO pacman --noconfirm -Sy "$@" ;;
        apk) $SUDO apk add "$@" ;;
    esac
}
check_internet() {
    ping -c 1 1.1.1.1 &>/dev/null || {
        log "${RED}❗ Kein Internet erkannt.${NC}"
        exit 1
    }
}

log "🔍 Prüfe benötigte Programme..."

MISSING_PKGS=()

# --- Write password sections ---
begin_password_section() {
    local section="$1"
    echo "" >> "$PASSWORD_STORE_FILE"
    echo "===== $section =====" >> "$PASSWORD_STORE_FILE"
}

end_password_section() {
    echo "===== ENDE $1 =====" >> "$PASSWORD_STORE_FILE"
    echo "" >> "$PASSWORD_STORE_FILE"
}

# Password generator (uses the section context)
generate_password() {
    local key_name="$1"
    local password
    password="$(openssl rand -base64 24)"
    echo "$key_name = $password" >> "$PASSWORD_STORE_FILE"
    log "${GREEN}🔐 Passwort erzeugt:${NC} $key_name"
    echo "$password"
}

need_cmd() {
    local c="$1"
    if ! command -v "$c" &>/dev/null; then
        MISSING_PKGS+=("$c")
    else
        log "${GREEN}OK:${NC} $c vorhanden."
    fi
}

need_cmd curl
need_cmd wget
need_cmd jq
need_cmd whiptail

if (( ${#MISSING_PKGS[@]} > 0 )); then
    log "${YELLOW}Fehlende Pakete:${NC} ${MISSING_PKGS[*]}"
    read -rp "Soll ich diese installieren? [Y/n]: " ans
    [[ -z "$ans" || "$ans" =~ ^[YyJj]$ ]] || { log "Abbruch."; exit 1; }
    detect_pkg_manager
    pkg_install "${MISSING_PKGS[@]}"
fi

log "${GREEN}✅ Grundpakete vollständig.${NC}"

# --- Optional: Ansible ---
if ! command -v ansible-playbook &>/dev/null; then
    echo ""
    echo "Ansible wird nur benötigt, wenn du Playbook-basierte Rezepte nutzen möchtest."
    read -rp "Möchtest du Ansible installieren? [Y/n]: " install_ansible
    if [[ -z "$install_ansible" || "$install_ansible" =~ ^[YyJj]$ ]]; then
        echo ""
        echo "Installationsart:"
        echo " 1) apt (einfach, aber ältere Version möglich)"
        echo " 2) pip (empfohlen; ARM & x86 kompatibel; immer aktuell)"
        read -rp "Auswahl [1/2, default 2]: " mode
        mode="${mode:-2}"

        if [[ "$mode" == "1" ]]; then
            $SUDO apt update
            $SUDO apt install -y ansible
        else
            if ! command -v pip3 &>/dev/null; then
                $SUDO apt update
                $SUDO apt install -y python3-pip
            fi
            pip3 install --break-system-packages ansible
        fi
        log "${GREEN}✅ Ansible installiert.${NC}"
    else
        log "${YELLOW}⏭ Ansible wird übersprungen.${NC}"
    fi
else
    log "${GREEN}OK:${NC} ansible-playbook vorhanden."
fi

install_docker() {
    if ! command -v docker &> /dev/null; then
        log "📦 Installiere Docker..."
        pkg_install ca-certificates curl gnupg lsb-release
        $SUDO install -m 0755 -d /etc/apt/keyrings
        curl -fsSL https://download.docker.com/linux/debian/gpg | $SUDO gpg --dearmor -o /etc/apt/keyrings/docker.gpg
        $SUDO chmod a+r /etc/apt/keyrings/docker.gpg
        echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | $SUDO tee /etc/apt/sources.list.d/docker.list > /dev/null
        pkg_install docker-ce docker-ce-cli containerd.io docker-compose-plugin
        log "${GREEN}✅ Docker installiert.${NC}"
    else
        log "${GREEN}OK:${NC} Docker ist bereits installiert."
    fi
}

ask_to_install() {
    local name="$1"
    read -rp "Möchtest du '$name' installieren? [J/n]: " ans
    [[ "$ans" =~ ^[JjYy]$ || -z "$ans" ]]
}

choose_from_list() {
    local title="$1"
    shift
    local items=("$@")

    local ROWS COLS H W
    ROWS=$(tput lines)
    COLS=$(tput cols)
    H=$((ROWS * 80 / 100))
    W=$((COLS * 80 / 100))
    (( H < 15 )) && H=15
    (( W < 40 )) && W=40

    local menu_items=()
    for item in "${items[@]}"; do
        menu_items+=("$item" "")
    done

    local choice
    choice=$(whiptail --title "$title" --menu "Mit ↑ ↓ und ENTER auswählen:" \
        "$H" "$W" 15 \
        "${menu_items[@]}" \
        3>&1 1>&2 2>&3) || echo "back"

    echo "$choice"
}

run_shell_recipe() {
    local category="$1"
    local recipe="$2"
    local url="$BASE_URL/recipes/$category/$recipe/install.sh"
    local script="$TMP_DIR/${category}_${recipe}_$(date +%s).sh"
    log "📥 Lade Shell Installer..."
    curl -fsSL "$url" -o "$script"
    chmod +x "$script"
    log "🚀 Starte Shell Installer..."
    bash "$script"
}

run_ansible_recipe() {
    local category="$1"
    local recipe="$2"
    local url="$BASE_URL/recipes/$category/$recipe/playbook.yml"
    local file="/opt/homelab/playbooks/${category}_${recipe}.yml"
    $SUDO mkdir -p /opt/homelab/playbooks
    log "📥 Lade Ansible Playbook..."
    curl -fsSL "$url" -o "$file"
    log "🔧 Führe Ansible Playbook aus..."
    ansible-playbook -i localhost, "$file"
}

run_recipe() {
    local category="$1"
    local recipe="$2"

    local base="$BASE_URL/recipes/$category/$recipe"
    local has_shell has_playbook
    has_shell=$(curl -s --head "$base/install.sh" | grep -q "200" && echo yes || echo no)
    has_playbook=$(curl -s --head "$base/playbook.yml" | grep -q "200" && echo yes || echo no)

    if [[ "$has_shell" == "yes" && "$has_playbook" == "no" ]]; then run_shell_recipe "$category" "$recipe"; return; fi
    if [[ "$has_playbook" == "yes" && "$has_shell" == "no" ]]; then run_ansible_recipe "$category" "$recipe"; return; fi

    if [[ "$has_shell" == "yes" && "$has_playbook" == "yes" ]]; then
        mode=$(choose_from_list "Installationsmodus wählen" "Shell" "Ansible")
        [[ "$mode" == "Shell" ]] && run_shell_recipe "$category" "$recipe"
        [[ "$mode" == "Ansible" ]] && run_ansible_recipe "$category" "$recipe"
        return
    fi

    whiptail --title "Fehler" --msgbox "Kein Installer gefunden." 10 50
}

open_category() {
    local category="$1"
    mapfile -t recipes < <(jq -r ".recipes.\"$category\"[]" "$TMP_DIR/info.json")

    while true; do
        choice=$(choose_from_list "Rezept wählen ($category)" "${recipes[@]}")
        [[ "$choice" == "back" || -z "$choice" ]] && return
        run_recipe "$category" "$choice"
    done
}

main_menu() {
    mapfile -t categories < <(jq -r '.recipes | keys[]' "$TMP_DIR/info.json")
    while true; do
        choice=$(choose_from_list "Kategorie wählen" "${categories[@]}")
        [[ "$choice" == "back" || -z "$choice" ]] && continue
        open_category "$choice"
    done
}

check_internet
log "📥 Lade Menüstruktur..."
curl -fsSL "$API_URL" -o "$TMP_DIR/info.json"

log "🚀 Starte Homelab Installer"
main_menu
recipes/HOW_TO_USE_AGENTS.md (new file, +15 lines)
# AI Agent User Guide (for your homelab)

## Overview
You have five core agents:

| Agent | What for? | When to use? | Key skills |
|------|--------|----------------|-----------|
| **Strategie-Agent** | planning & structuring | project discussions | roadmaps, tables, UI/UX, requirements |
| **Denker-Agent** | deep thinking & solution finding | complex problems | chain-of-thought, architecture, logic |
| **Gedächtnis-Agent** | knowledge retrieval (RAG) | documents, rules, laws | citing sources, collecting facts |
| **Builder-Agent** | actually implementing code | "build it" | writes code + tests + fixes errors itself |
| **Diagramm-Agent** | flowcharts, UI layouts, network maps | process and structure visualization | Mermaid, UML, wireframes |

...
recipes/ai/agent-config/install.sh (new file, +64 lines)
#!/usr/bin/env bash
set -euo pipefail
ensure_root
detect_pkg_manager
pkg_install curl
if ask_to_install "Agent-Konfiguration"; then
    echo ""
    read -rp "Ollama Router Base-URL (z.B. http://192.168.3.21:11437): " ROUTER_URL
    ROUTER_URL=${ROUTER_URL:-http://localhost:11437}
    BASE="/srv/ai/agents"
    $SUDO mkdir -p "${BASE}"
    $SUDO tee "${BASE}/agents.yml" >/dev/null <<'EOF'
language: de
autonomy: soft
scope: global
agents:
  - name: Strategie-Agent
    purpose: "Lange Planungsdialoge, Roadmaps, Tabellen, UI/UX-Brainstorming."
    default_models:
      primary: "llama3.1:8b-instruct"
      secondary: "mistral-nemo:12b"
      cpu_fallback: "phi3:mini"
    endpoint: "${ROUTER_URL}"
    prompt_preset: |
      Du bist ein strategischer Planer. Arbeite iterativ, strukturiert und deutschsprachig.
      Liefere Tabellen (Markdown), klare Meilensteine, Risiken, Abhängigkeiten.
      Frage NUR nach, wenn kritische Annahmen fehlen; sonst entscheide pragmatisch.
      Modus: soft – Vorschläge machen, aber Details selbstständig ausarbeiten.
  - name: Denker-Agent
    purpose: "Tiefes Reasoning (CoT), Architektur- und Lösungsentwürfe, Mathe/Logik."
    default_models:
      primary: "huihui_ai/deepseek-r1-abliterated:14b"
      secondary: "phi3:medium-128k"
      cpu_fallback: "phi3:mini"
    endpoint: "${ROUTER_URL}"
    prompt_preset: |
      Denke in überprüfbaren Schritten. Erkläre Annahmen, bevor du entscheidest.
      Bevorzuge Beweise, Gegenbeispiele und Tests. Schließe mit TL;DR.
  - name: Gedächtnis-Agent
    purpose: "RAG, Wissensquellen, Zitationen, Abruf & Zusammenführung von Fakten."
    default_models:
      retriever_llm: "phi3:mini"
      embed_model: "mxbai-embed-large"
      cpu_fallback: "gemma2:2b-instruct-q6_K"
    endpoint: "${ROUTER_URL}"
    prompt_preset: |
      Orchestriere Nachschlagen in Wissenssammlungen (RAG). Zitiere Fundstellen (Datei/Seite/Abschnitt).
      Antworte nüchtern, fasse Unsicherheit transparent zusammen.
    sources:
      - name: "Gesetze"
        type: "pdf"
        location: "/srv/ai/corpus/law"
      - name: "Shadowrun-Regeln"
        type: "pdf"
        location: "/srv/ai/corpus/shadowrun"
      - name: "Tech-Docs"
        type: "mixed"
        location: "/srv/ai/corpus/tech"
EOF
    $SUDO sed -i "s|\${ROUTER_URL}|${ROUTER_URL}|g" "${BASE}/agents.yml"
    echo "✅ Agenten-Profile: ${BASE}/agents.yml"
else
    log "${YELLOW}⏭ Agent-Konfiguration übersprungen.${NC}"
fi
recipes/ai/budibase-server/docker-compose.yml (new file, +16 lines)
services:
  budibase:
    image: budibase/budibase:latest
    container_name: budibase
    restart: unless-stopped
    ports:
      - "10000:80"
    environment:
      - JWT_SECRET=changeme
      - MINIO_ACCESS_KEY=budibase
      - MINIO_SECRET_KEY=budibase_secret
    volumes:
      - budibase_data:/data

volumes:
  budibase_data:
recipes/ai/budibase-server/install.sh (new file, +56 lines)
#!/usr/bin/env bash
set -euo pipefail

if ask_to_install "Budibase Server"; then
    echo "=== BUDIBASE INSTALLATION ==="

    ensure_root
    detect_pkg_manager
    install_docker

    echo "[+] Erstelle Verzeichnis: /srv/docker/budibase"
    $SUDO mkdir -p /srv/docker/budibase
    cd /srv/docker/budibase

    # Helper that automatically finds the next free port
    find_free_port() {
        PORT=10000
        while ss -lnt | awk '{print $4}' | grep -q ":$PORT$"; do
            PORT=$((PORT + 1))
        done
        echo "$PORT"
    }

    FREE_PORT=$(find_free_port)
    echo "✅ Freier Port gefunden: $FREE_PORT"

    echo "[+] Schreibe docker-compose.yml"
    $SUDO tee docker-compose.yml >/dev/null <<EOF
services:
  budibase:
    image: budibase/budibase:latest
    container_name: budibase-$FREE_PORT
    restart: unless-stopped
    ports:
      - "$FREE_PORT:80"
    environment:
      - JWT_SECRET=changeme
      - MINIO_ACCESS_KEY=budibase
      - MINIO_SECRET_KEY=budibase_secret
    volumes:
      - budibase_data:/data

volumes:
  budibase_data:
EOF

    echo "[+] Starte Budibase..."
    $SUDO docker compose up -d

    echo ""
    echo "✅ Budibase ist installiert!"
    echo "→ Öffne im Browser: http://<IP>:$FREE_PORT"
else
    log "${YELLOW}⏭ Budibase Server übersprungen.${NC}"
fi
recipes/ai/builder-agent/install.sh (new file, +80 lines)
#!/usr/bin/env bash
set -euo pipefail
ensure_root
detect_pkg_manager
pkg_install curl
pkg_install git || true

if ask_to_install "Builder-Agent"; then
    echo ""
    read -rp "Ollama Router Base-URL (z.B. http://192.168.3.21:11437): " ROUTER_URL
    ROUTER_URL=${ROUTER_URL:-http://localhost:11437}
    echo ""
    read -rp "Projektverzeichnis (leer = auto-detect): " PROJECT_DIR
    if [ -z "${PROJECT_DIR}" ]; then
        if git rev-parse --show-toplevel >/dev/null 2>&1; then
            PROJECT_DIR="$(git rev-parse --show-toplevel)"
        else
            PROJECT_DIR="$(pwd)"
        fi
    fi
    PROJECT_DIR="$(readlink -f "${PROJECT_DIR}")"
    BASE="/srv/ai/builder"
    $SUDO mkdir -p "${BASE}"
    $SUDO tee "${BASE}/builder.yml" >/dev/null <<'EOF'
name: Builder-Agent
language: de
autonomy: soft
endpoint: "${ROUTER_URL}"
models:
  planner: "llama3.1:8b-instruct"
  reasoner: "huihui_ai/deepseek-r1-abliterated:14b"
  coder_primary: "qwen2.5-coder:14b"
  coder_secondary: "deepseek-coder-v2:16b"
  cpu_fallback: "qwen2.5-coder:7b"
workspace:
  project_dir: "${PROJECT_DIR}"
tests:
  enabled: true
  force_languages: []
prompts:
  system: |
    Du bist ein Builder-Agent (soft). Ziel: Probleme lösen mit minimaler Rückfrage.
    Strategie:
    1) Plane kurz (ToDo-Liste), dann implementiere iterativ im Workspace.
    2) Führe nach jedem Schritt Tests/Lints aus (falls verfügbar). Repariere Fehler selbstständig.
    3) Schreibe klare Commits; dokumentiere Änderungen kompakt in CHANGELOG.md.
    4) Nur bei sicherheitsrelevanten/zerstörerischen Aktionen Rückfrage.
    Liefere am Ende: TL;DR + nächste Schritte.
EOF
    $SUDO sed -i "s|\${ROUTER_URL}|${ROUTER_URL}|g; s|\${PROJECT_DIR}|${PROJECT_DIR}|g" "${BASE}/builder.yml"
    $SUDO tee "${BASE}/run_tests.sh" >/dev/null <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# Enable ** so the requirements.txt glob below can match in subdirectories.
shopt -s globstar 2>/dev/null || true
ROOT="${1:-.}"
cd "${ROOT}"
if [ -f "requirements.txt" ] || ls -1 **/requirements.txt >/dev/null 2>&1; then
    command -v pytest >/dev/null 2>&1 && pytest -q || true
fi
if [ -f "package.json" ]; then
    if npm run | grep -q "test"; then npm test --silent || true; fi
    if npm run | grep -q "lint"; then npm run lint --silent || true; fi
    if npm run | grep -q "typecheck"; then npm run typecheck --silent || true; fi
fi
if [ -f "composer.json" ]; then
    if [ -f "vendor/bin/pest" ]; then vendor/bin/pest || true
    elif [ -f "vendor/bin/phpunit" ]; then vendor/bin/phpunit || true
    fi
fi
if [ -f "Dockerfile" ]; then
    docker build -q -t tmp-builder-test . || true
fi
if command -v shellcheck >/dev/null 2>&1; then
    find . -type f -name "*.sh" -print0 | xargs -0 -r shellcheck || true
fi
EOF
    $SUDO chmod +x "${BASE}/run_tests.sh"
    echo "✅ Builder-Agent konfiguriert unter ${BASE} (Workspace: ${PROJECT_DIR})"
else
    log "${YELLOW}⏭ Builder-Agent übersprungen.${NC}"
fi
recipes/ai/diagram-agent/install.sh (new file, +2 lines)
#!/usr/bin/env bash
echo "Diagram-Agent placeholder install script"
recipes/ai/memory/README.md (new file, +12 lines)
# Memory Stack (External Ollama)

## Deploy
```
bash deploy.sh http://<OLLAMA-IP>:<PORT>
```

## Test
```
curl http://localhost:8085/health
```
recipes/ai/memory/compose.yaml (new file, +25 lines)
version: "3.8"
services:
  qdrant:
    image: qdrant/qdrant:latest
    container_name: memory-qdrant
    volumes:
      - /srv/docker/services/memory/qdrant:/qdrant/storage
    ports:
      - "127.0.0.1:6333:6333"
    restart: unless-stopped

  memory-api:
    build:
      context: ./memory-api
    container_name: memory-api
    environment:
      - QDRANT_URL=http://qdrant:6333
      - OLLAMA_API={{OLLAMA_API}}
      - COLLECTION_NAME=chat-memory
    ports:
      - "127.0.0.1:8085:8085"
    depends_on:
      - qdrant
    restart: unless-stopped
recipes/ai/memory/deploy.sh (new file, +32 lines)
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
install_docker

if ask_to_install "RAG Memory Stack (Qdrant + Memory API)"; then
    log "=== RAG Memory Stack Installation ==="

    read -rp "Ollama API URL (z.B. http://127.0.0.1:11434): " OLLAMA_API_URL
    OLLAMA_API_URL=${OLLAMA_API_URL:-http://127.0.0.1:11434}

    BASE="/srv/docker/services/memory"
    $SUDO mkdir -p "$BASE/qdrant"
    $SUDO cp -r "$(dirname "${BASH_SOURCE[0]}")/memory-api" "$BASE/"
    $SUDO cp "$(dirname "${BASH_SOURCE[0]}")/compose.yaml" "$BASE/docker-compose.yml"
    cd "$BASE"

    $SUDO sed -i "s|{{OLLAMA_API}}|$OLLAMA_API_URL|g" docker-compose.yml

    log "🚀 Starte RAG Memory Stack..."
    $SUDO docker compose up -d --build

    log "Attempting to pull embedding model from remote Ollama..."
    $SUDO curl -s -X POST "$OLLAMA_API_URL/api/pull" -H 'Content-Type: application/json' -d '{"name": "nomic-embed-text"}' || log "Notice: Model pull failed (possibly using a gateway). Continuing."

    log "✅ RAG Memory Stack läuft unter: http://<server-ip>:8085"
else
    log "${YELLOW}⏭ RAG Memory Stack übersprungen.${NC}"
fi
recipes/ai/memory/memory-api/Dockerfile (new file, +8 lines)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 8085
CMD ["python", "app.py"]
recipes/ai/memory/memory-api/app.py (new file, +40 lines)
from fastapi import FastAPI
import requests, os
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct, VectorParams, Distance
import hashlib
import uuid

app = FastAPI()

QDRANT_URL = os.getenv("QDRANT_URL")
OLLAMA_API = os.getenv("OLLAMA_API")
COLLECTION_NAME = os.getenv("COLLECTION_NAME", "chat-memory")

client = QdrantClient(url=QDRANT_URL)

# Create the collection on first start; nomic-embed-text returns 768-dimensional vectors.
if not client.collection_exists(COLLECTION_NAME):
    client.create_collection(COLLECTION_NAME, vectors_config=VectorParams(size=768, distance=Distance.COSINE))

def point_id(text: str) -> str:
    # Qdrant point IDs must be unsigned integers or UUIDs, so derive a
    # stable UUID from the SHA-256 hash instead of using the raw hex digest.
    return str(uuid.UUID(hashlib.sha256(text.encode()).hexdigest()[:32]))

@app.get("/health")
def health():
    return {"status": "ok", "qdrant": QDRANT_URL, "ollama": OLLAMA_API}

def embed(text):
    r = requests.post(f"{OLLAMA_API}/api/embeddings", json={"model": "nomic-embed-text", "prompt": text})
    return r.json()["embedding"]

@app.post("/store")
def store(item: dict):
    text = item["text"]
    metadata = item.get("metadata", {})
    vec = embed(text)
    client.upsert(collection_name=COLLECTION_NAME, points=[PointStruct(id=point_id(text), vector=vec, payload={"text": text, **metadata})])
    return {"stored": True}

@app.post("/search")
def search(query: dict):
    q = query["text"]
    top_k = query.get("top_k", 5)
    vec = embed(q)
    result = client.search(collection_name=COLLECTION_NAME, query_vector=vec, limit=top_k)
    return [{"score": r.score, "text": r.payload["text"]} for r in result]

if __name__ == "__main__":
    # The Dockerfile starts the service with "python app.py", so run uvicorn here.
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8085)
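
A quick way to exercise the API once the stack is running (a usage sketch; the example text and metadata are illustrative):

```bash
# store a memory
curl -s -X POST http://localhost:8085/store \
  -H 'Content-Type: application/json' \
  -d '{"text": "The homelab gateway is 192.168.3.1", "metadata": {"source": "notes"}}'

# search for it again
curl -s -X POST http://localhost:8085/search \
  -H 'Content-Type: application/json' \
  -d '{"text": "gateway address", "top_k": 3}'
```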
recipes/ai/memory/memory-api/requirements.txt (new file, +4 lines)
fastapi
uvicorn
requests
qdrant-client
recipes/ai/ollama-router/README.md (new file, +14 lines)
# Ollama Router (new schema)

This package follows the example schema (beispiel.zip). It contains:
- `recipes/services/ollama-router/install.sh` – interactive IP/port prompts (no ENV)
- `recipes/services/ollama-router/docker-compose.yml` – uses the external network `ai`
- `recipes/services/ollama-router/config.yml` – generated by the install script

## Install
```bash
bash recipes/services/ollama-router/install.sh
cd /srv/docker/services/ollama-router
docker compose up -d
```
CPU fallback models are pulled automatically on the CPU node so that the **Strategie/Denker/Gedächtnis agents** always keep running.
recipes/ai/ollama-router/install.sh (new file, +102 lines)
#!/usr/bin/env bash
set -euo pipefail
ensure_root
detect_pkg_manager
pkg_install curl
install_docker

if ask_to_install "Ollama Router"; then
    echo ""
    read -rp "Listen-Port des Router (Default 11437): " ROUTER_PORT
    ROUTER_PORT=${ROUTER_PORT:-11437}
    echo ""
    read -rp "NVIDIA Node IP: " NVIDIA_IP
    read -rp "NVIDIA Node Port (Default 11436): " NVIDIA_PORT
    NVIDIA_PORT=${NVIDIA_PORT:-11436}
    echo ""
    read -rp "AMD (ROCm) Node IP: " AMD_IP
    read -rp "AMD Node Port (Default 11435): " AMD_PORT
    AMD_PORT=${AMD_PORT:-11435}
    echo ""
    read -rp "CPU-only Node IP: " CPU_IP
    read -rp "CPU Node Port (Default 11434): " CPU_PORT
    CPU_PORT=${CPU_PORT:-11434}
    BASE="/srv/docker/services/ollama-router"
    $SUDO mkdir -p "${BASE}"
    cd "${BASE}"
    $SUDO tee config.yml >/dev/null <<'EOF'
routes:
  llama3.1:8b-instruct:
    - url: http://${NVIDIA_IP}:${NVIDIA_PORT}
    - url: http://${AMD_IP}:${AMD_PORT}
    - url: http://${CPU_IP}:${CPU_PORT}
  mistral-nemo:12b:
    - url: http://${AMD_IP}:${AMD_PORT}
    - url: http://${NVIDIA_IP}:${NVIDIA_PORT}
    - url: http://${CPU_IP}:${CPU_PORT}
  huihui_ai/deepseek-r1-abliterated:14b:
    - url: http://${AMD_IP}:${AMD_PORT}
    - url: http://${NVIDIA_IP}:${NVIDIA_PORT}
    - url: http://${CPU_IP}:${CPU_PORT}
  phi3:medium-128k:
    - url: http://${AMD_IP}:${AMD_PORT}
    - url: http://${NVIDIA_IP}:${NVIDIA_PORT}
    - url: http://${CPU_IP}:${CPU_PORT}
  mxbai-embed-large:
    - url: http://${CPU_IP}:${CPU_PORT}
    - url: http://${NVIDIA_IP}:${NVIDIA_PORT}
    - url: http://${AMD_IP}:${AMD_PORT}
  phi3:mini:
    - url: http://${CPU_IP}:${CPU_PORT}
    - url: http://${NVIDIA_IP}:${NVIDIA_PORT}
    - url: http://${AMD_IP}:${AMD_PORT}
  gemma2:2b-instruct-q6_K:
    - url: http://${CPU_IP}:${CPU_PORT}
    - url: http://${NVIDIA_IP}:${NVIDIA_PORT}
    - url: http://${AMD_IP}:${AMD_PORT}
  qwen2.5-coder:14b:
    - url: http://${NVIDIA_IP}:${NVIDIA_PORT}
    - url: http://${AMD_IP}:${AMD_PORT}
    - url: http://${CPU_IP}:${CPU_PORT}
  deepseek-coder-v2:16b:
    - url: http://${AMD_IP}:${AMD_PORT}
    - url: http://${NVIDIA_IP}:${NVIDIA_PORT}
    - url: http://${CPU_IP}:${CPU_PORT}
  qwen2.5-coder:7b:
    - url: http://${CPU_IP}:${CPU_PORT}
    - url: http://${NVIDIA_IP}:${NVIDIA_PORT}
    - url: http://${AMD_IP}:${AMD_PORT}
EOF
    $SUDO sed -i "s|\${NVIDIA_IP}|${NVIDIA_IP}|g; s|\${NVIDIA_PORT}|${NVIDIA_PORT}|g; s|\${AMD_IP}|${AMD_IP}|g; s|\${AMD_PORT}|${AMD_PORT}|g; s|\${CPU_IP}|${CPU_IP}|g; s|\${CPU_PORT}|${CPU_PORT}|g" config.yml
    $SUDO tee docker-compose.yml >/dev/null <<EOF
version: "3.9"
services:
  ollama-router:
    image: ghcr.io/ollama/ollama-router:latest
    container_name: ollama-router
    restart: unless-stopped
    networks: [ai]
    volumes:
      - ./config.yml:/app/config.yml:ro
    ports:
      - "${ROUTER_PORT}:11437"
networks:
  ai:
    external: true
EOF
    $SUDO docker network inspect ai >/dev/null 2>&1 || $SUDO docker network create ai
    CPU_MODELS=(
        "phi3:mini"
        "gemma2:2b-instruct-q6_K"
        "mxbai-embed-large"
        "qwen2.5-coder:7b"
    )
    for m in "${CPU_MODELS[@]}"; do
        echo "→ Pull ${m} on CPU node ${CPU_IP}:${CPU_PORT}"
        $SUDO curl -fsSL -X POST "http://${CPU_IP}:${CPU_PORT}/api/pull" -d "{\"name\":\"${m}\"}" || true
    done
    log "✅ Router konfiguriert in ${BASE}"
    log "ℹ️ Start: cd ${BASE} && docker compose up -d"
else
    log "${YELLOW}⏭ Ollama Router übersprungen.${NC}"
fi
recipes/ai/ollama-server/docker-compose.yml (new file, +11 lines)
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
volumes:
  ollama_data:
recipes/ai/ollama-server/install.sh (new file, +53 lines)
#!/usr/bin/env bash
set -euo pipefail
if ask_to_install "Ollama Server"; then
    echo "=== OLLAMA SERVER INSTALLATION ==="

    ensure_root
    detect_pkg_manager
    install_docker

    $SUDO mkdir -p /srv/docker/ollama
    cd /srv/docker/ollama

    # Helper that finds the next free port
    find_free_port() {
        PORT=11434
        while ss -lnt | awk '{print $4}' | grep -q ":$PORT$"; do
            PORT=$((PORT + 1))
        done
        echo "$PORT"
    }

    FREE_PORT=$(find_free_port)
    echo "✅ Freier Port gefunden: $FREE_PORT"

    $SUDO tee docker-compose.yml >/dev/null <<EOF
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama-$FREE_PORT
    restart: unless-stopped
    ports:
      - "$FREE_PORT:11434"
    volumes:
      - ollama_data:/root/.ollama
volumes:
  ollama_data:
EOF

    $SUDO docker compose up -d
    echo "Ollama Server läuft auf Port $FREE_PORT"

    read -rp "Modell jetzt herunterladen? (z.B. llama3 / Enter = nein): " MODEL
    if [ -n "$MODEL" ]; then
        $SUDO curl -N -X POST http://127.0.0.1:$FREE_PORT/api/pull \
            -H "Content-Type: application/json" \
            -d "{\"name\":\"$MODEL\"}" || true
    fi

    echo "✅ Fertig! URL: http://<server-ip>:$FREE_PORT"
else
    log "${YELLOW}⏭ Ollama Server übersprungen.${NC}"
fi
recipes/ai/rag-crawler/EXTRAS.md (new file, +32 lines)
# EXTRAS: systemd timer (optional)

## /etc/systemd/system/rag-crawler.service
```
[Unit]
Description=RAG Crawler Update (drip)
After=network.target

[Service]
Type=oneshot
User=root
ExecStart=/bin/bash -lc 'source /srv/ai/rag-crawler/venv/bin/activate && python3 /srv/ai/rag-crawler/crawler/main.py --mode=drip --budget 1'
```

## /etc/systemd/system/rag-crawler.timer
```
[Unit]
Description=Run RAG Crawler drip hourly

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

## Enable
```
systemctl daemon-reload
systemctl enable --now rag-crawler.timer
```
recipes/ai/rag-crawler/README.md (new file, +40 lines)
# RAG Crawler – Full Version (polite & separate from the RAG store)

This crawler runs **separately** from the RAG/memory stack. It:
- respects `robots.txt`
- uses random delays (min/max), per-domain quotas & a cap on concurrency
- supports two modes: `update` (normal) and `drip` (very slow/human-like)
- stores text/PDFs on the filesystem (corpus); optionally it only "drips" a few pages per run
- has a separate **ingest** into your memory API (`/store`), compatible with your `memory-api`

## Quick start
```bash
# 1) install
bash recipes/services/rag-crawler/install.sh

# 2) edit the sources
nano /srv/ai/rag-crawler/crawler/sources.yml

# 3) crawl (full/regular)
source /srv/ai/rag-crawler/venv/bin/activate
python3 /srv/ai/rag-crawler/crawler/main.py --mode=update

# 4) "drip" mode (e.g. hourly, only 1 URL per domain)
python3 /srv/ai/rag-crawler/crawler/main.py --mode=drip --budget 1

# 5) ingest all new/updated texts into the memory API
python3 /srv/ai/rag-crawler/crawler/ingest.py --root /srv/ai/corpus --memory http://127.0.0.1:8085
```

## Scheduling (examples)
- Crontab:
  `@hourly source /srv/ai/rag-crawler/venv/bin/activate && python3 /srv/ai/rag-crawler/crawler/main.py --mode=drip --budget 1`
  `*/10 * * * * source /srv/ai/rag-crawler/venv/bin/activate && python3 /srv/ai/rag-crawler/crawler/ingest.py --root /srv/ai/corpus --memory http://127.0.0.1:8085`
- systemd timer (optional): see `EXTRAS.md`

## Folders
- `/srv/ai/rag-crawler` – crawler + venv
- `/srv/ai/corpus` – raw data (text/PDF) + `.crawler_state.json`

## Note
- **No ENV needed** – all values are prompted interactively or maintained in `sources.yml`.
recipes/ai/rag-crawler/crawler/ingest.py (new file, +43 lines)
#!/usr/bin/env python3
import sys, pathlib, argparse, requests

def iter_texts(root):
    for p in pathlib.Path(root).rglob("*.txt"):
        yield p

def store(memory_url, collection, text, meta):
    payload = {"text": text, "metadata": {"source": meta.get("source"), "path": meta.get("path")}}
    r = requests.post(f"{memory_url}/store", json=payload, timeout=30)
    r.raise_for_status()
    return r.json()

def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--root", required=True, help="Corpus-Root (z.B. /srv/ai/corpus)")
    ap.add_argument("--memory", required=False, default=None, help="Memory-API URL (z.B. http://127.0.0.1:8085)")
    ap.add_argument("--collection", default="chat-memory")
    args = ap.parse_args()

    # Optional: read the memory URL from sources.yml
    if not args.memory:
        conf = pathlib.Path(__file__).with_name("sources.yml")
        if conf.exists():
            import yaml
            cfg = yaml.safe_load(conf.read_text())
            args.memory = cfg.get("memory", {}).get("url")

    if not args.memory:
        print("Bitte --memory <URL> angeben oder in sources.yml hinterlegen.", file=sys.stderr)
        sys.exit(1)

    for p in iter_texts(args.root):
        try:
            text = p.read_text(errors="ignore")
            meta = {"path": str(p), "source": "crawler"}
            store(args.memory, args.collection, text, meta)
            print("✔ stored", p)
        except Exception as e:
            print("✖", p, e, file=sys.stderr)

if __name__ == "__main__":
    main()
recipes/ai/rag-crawler/crawler/main.py (new file, +254 lines)
#!/usr/bin/env python3
import asyncio, aiohttp
import os, time, random, hashlib, json, re, pathlib
from urllib.parse import urljoin, urldefrag, urlparse
from bs4 import BeautifulSoup
import yaml, tldextract, ssl

try:
    import uvloop
    uvloop.install()
except Exception:
    pass

# ---- Load config ----
BASE = os.environ.get("RAG_CRAWLER_BASE", os.getcwd())
CONF_PATH = os.path.join(BASE, "crawler", "sources.yml")
with open(CONF_PATH, "r") as f:
    CFG = yaml.safe_load(f)

POLICY = CFG.get("policy", {})
STORAGE = CFG.get("storage", {})
MEMORY = CFG.get("memory", {})
SEEDS = CFG.get("seeds", [])

ROOT = pathlib.Path(STORAGE.get("root", "/srv/ai/corpus")).resolve()
TEXT_DIR = ROOT / STORAGE.get("text_subdir", "text")
PDF_DIR = ROOT / STORAGE.get("pdf_subdir", "pdf")
TEXT_DIR.mkdir(parents=True, exist_ok=True)
PDF_DIR.mkdir(parents=True, exist_ok=True)
STATE_PATH = ROOT / ".crawler_state.json"

STATE = {"visited": {}}  # url -> {etag, last_modified, ts}
if STATE_PATH.exists():
    try:
        STATE = json.loads(STATE_PATH.read_text())
    except Exception:
        pass

def save_state():
    try:
        STATE_PATH.write_text(json.dumps(STATE))
    except Exception:
        pass

# ---- Robots & quotas ----
ROBOTS_CACHE = {}
DOMAIN_NEXT_ALLOWED = {}

def domain_key(url):
    ext = tldextract.extract(url)
    return f"{ext.domain}.{ext.suffix}"

async def fetch_robots(session, base_url):
    dom = domain_key(base_url)
    if dom in ROBOTS_CACHE:
        return ROBOTS_CACHE[dom]
    robots_url = urljoin(f"{urlparse(base_url).scheme}://{urlparse(base_url).netloc}", "/robots.txt")
    from robotexclusionrulesparser import RobotExclusionRulesParser as Robots
    rp = Robots()
    try:
        async with session.get(robots_url, timeout=10) as r:
            if r.status == 200:
                rp.parse(await r.text())
            else:
                rp.parse("")
    except Exception:
        rp.parse("")
    ROBOTS_CACHE[dom] = rp
    return rp

def polite_delay_for(url):
    dmin = int(POLICY.get("delay_min_seconds", 5))
    dmax = int(POLICY.get("delay_max_seconds", 60))
    d = domain_key(url)
    t = DOMAIN_NEXT_ALLOWED.get(d, 0)
    now = time.time()
    if now < t:
        return max(0, t - now)
    # Set the next allowed time (random delay) – the actual sleep happens in fetch()
    DOMAIN_NEXT_ALLOWED[d] = now + random.uniform(dmin, dmax)
    return 0

def norm_url(base, link):
    href = urljoin(base, link)
    href, _ = urldefrag(href)
    return href

def fnmatch(text, pat):
    # Translate the glob pattern to a regex: '**' matches anything, '*' matches
    # anything except '/'. Escape first and use a placeholder so the replacement
    # for '**' is not rewritten again by the '*' replacement.
    pat = re.escape(pat)
    pat = pat.replace(r"\*\*", "\0").replace(r"\*", "[^/]*").replace("\0", ".*")
    return re.fullmatch(pat, text) is not None

def allowed_by_patterns(url, inc, exc):
    ok_inc = True if not inc else any(fnmatch(url, pat) for pat in inc)
    ok_exc = any(fnmatch(url, pat) for pat in exc) if exc else False
    return ok_inc and not ok_exc

def should_revisit(url, revisit_str):
    info = STATE["visited"].get(url, {})
    if not info:
        return True
    try:
        days = int(revisit_str.rstrip("d"))
    except Exception:
        days = 30
    last_ts = info.get("ts", 0)
    return (time.time() - last_ts) > days * 86400

async def fetch(session, url, etag=None, lastmod=None):
    headers = {"User-Agent": POLICY.get("user_agent", "polite-crawler/1.0")}
    if etag:
        headers["If-None-Match"] = etag
    if lastmod:
        headers["If-Modified-Since"] = lastmod
    ssl_ctx = ssl.create_default_context()
    try:
        delay = polite_delay_for(url)
        if delay > 0:
            await asyncio.sleep(delay)
        async with session.get(url, headers=headers, ssl=ssl_ctx, timeout=30) as r:
            if r.status == 304:
                return None, {"status": 304, "headers": {}}
            body = await r.read()
            return body, {"status": r.status, "headers": dict(r.headers)}
    except Exception as e:
        return None, {"status": "error", "error": str(e)}

def save_binary(path: pathlib.Path, content: bytes):
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(content)

def save_text(path: pathlib.Path, text: str):
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(text)

def is_pdf(headers):
    ct = headers.get("Content-Type", "").lower()
    return "application/pdf" in ct or ct.endswith("/pdf")

def extract_text_html(body: bytes) -> str:
    soup = BeautifulSoup(body, "lxml")
    for tag in soup(["script", "style", "noscript", "nav", "footer", "header", "aside"]):
        tag.decompose()
    text = soup.get_text("\n")
    return "\n".join(line.strip() for line in text.splitlines() if line.strip())

def path_for(url, typ="text"):
    h = hashlib.sha256(url.encode()).hexdigest()[:16]
    if typ == "text":
        return TEXT_DIR / f"{h}.txt"
    return PDF_DIR / f"{h}.pdf"

async def crawl_seed(session, seed, budget=0):
    base = seed["url"]
    include = seed.get("include", [])
    exclude = seed.get("exclude", [])
    revisit = seed.get("revisit", "30d")

    # robots
    if POLICY.get("obey_robots_txt", True):
        rp = await fetch_robots(session, base)
        if not rp.is_allowed("*", base):
            return

    queue = [base]
    seen = set()
    processed = 0

    while queue:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)

        if POLICY.get("obey_robots_txt", True):
            rp = await fetch_robots(session, url)
            if not rp.is_allowed("*", url):
                continue

        if not allowed_by_patterns(url, include, exclude):
            continue

        info = STATE["visited"].get(url, {})
        etag = info.get("etag")
        lastmod = info.get("last_modified")
        if not should_revisit(url, revisit):
            continue

        body, meta = await fetch(session, url, etag, lastmod)
        status = meta.get("status")
        headers = meta.get("headers", {})

        if status == 304:
            STATE["visited"][url] = {"etag": etag, "last_modified": lastmod, "ts": time.time()}
            save_state()
            continue
        if status != 200 or body is None:
            continue

        if is_pdf(headers):
            out_pdf = path_for(url, "pdf")
            save_binary(out_pdf, body)
            # Rough text extraction (best-effort)
            try:
                from pdfminer.high_level import extract_text as pdf_extract
                txt = pdf_extract(str(out_pdf))
                save_text(path_for(url, "text"), txt)
            except Exception:
                pass
        else:
            txt = extract_text_html(body)
            save_text(path_for(url, "text"), txt)
            # Collect links; the include/exclude patterns filter them when dequeued
            soup = BeautifulSoup(body, "lxml")
            for a in soup.find_all("a", href=True):
                href = urljoin(url, a["href"])
                href, _ = urldefrag(href)
                if href.startswith("http"):
                    # Depth is bounded implicitly via revisit/budget
                    queue.append(href)

        STATE["visited"][url] = {
            "etag": headers.get("ETag"),
            "last_modified": headers.get("Last-Modified"),
            "ts": time.time(),
        }
        save_state()

        processed += 1
        if budget and processed >= budget:
            break

async def main(mode="update", budget=0):
    con_total = int(POLICY.get("concurrency_total", 4))
    timeout = aiohttp.ClientTimeout(total=120)
    connector = aiohttp.TCPConnector(limit=con_total, ssl=False)
    async with aiohttp.ClientSession(timeout=timeout, connector=connector) as session:
        tasks = []
        if mode == "drip":
            budget = budget or 1
        else:
            budget = 0  # unlimited in update mode
        for seed in SEEDS:
            tasks.append(crawl_seed(session, seed, budget=budget))
        await asyncio.gather(*tasks, return_exceptions=True)

if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("--mode", choices=["update", "drip"], default="update",
                        help="update=vollständig, drip=sehr langsam mit Budget je Seed")
    parser.add_argument("--budget", type=int, default=1, help="URLs pro Seed (nur drip)")
    args = parser.parse_args()
    asyncio.run(main(args.mode, args.budget))
recipes/ai/rag-crawler/install.sh (new file, +90 lines)
#!/usr/bin/env bash
set -euo pipefail

# Helfer-Funktionen aus deinem Basis-Framework (siehe beispiel.zip) werden erwartet:
ensure_root
detect_pkg_manager
pkg_install python3
pkg_install python3-venv || true
pkg_install curl

if ask_to_install "RAG Crawler"; then
    echo ""
    read -rp "Basis-Pfad für den Crawler [default: /srv/ai/rag-crawler]: " BASE
    BASE=${BASE:-/srv/ai/rag-crawler}
    $SUDO mkdir -p "${BASE}"
else
    log "${YELLOW}⏭ RAG Crawler übersprungen.${NC}"
    exit 0
fi

echo ""
read -rp "Zielverzeichnis für den Corpus [default: /srv/ai/corpus]: " CORPUS_DIR
CORPUS_DIR=${CORPUS_DIR:-/srv/ai/corpus}
$SUDO mkdir -p "${CORPUS_DIR}"

echo ""
read -rp "Memory-API URL (z.B. http://127.0.0.1:8085) [default: http://127.0.0.1:8085]: " MEMORY_URL
MEMORY_URL=${MEMORY_URL:-http://127.0.0.1:8085}

# Dateien in BASE kopieren
SRC_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
$SUDO mkdir -p "${BASE}/crawler"
$SUDO cp -r "${SRC_DIR}/crawler"/* "${BASE}/crawler/"
$SUDO cp "${SRC_DIR}/requirements.txt" "${BASE}/requirements.txt"

# Virtualenv anlegen und Abhängigkeiten installieren.
# Hinweis: "source" ist ein Shell-Builtin und funktioniert nicht über sudo;
# stattdessen werden pip/python direkt aus dem venv aufgerufen.
$SUDO python3 -m venv "${BASE}/venv"
$SUDO "${BASE}/venv/bin/pip" install -U pip
$SUDO "${BASE}/venv/bin/pip" install -r "${BASE}/requirements.txt"

# sources.yml initialisieren/ersetzen
if [ ! -f "${BASE}/crawler/sources.yml" ]; then
    $SUDO tee "${BASE}/crawler/sources.yml" >/dev/null <<'EOF'
# Quellen-Definitionen
seeds:
  - url: "https://www.gesetze-im-internet.de/stvo_2013/"
    include: ["**"]
    exclude: ["**/impressum*", "**/kontakt*"]
    revisit: "30d"
  - url: "https://www.gesetze-im-internet.de/bgb/"
    include: ["**"]
    exclude: []
    revisit: "30d"
  - url: "https://www.php.net/manual/en/"
    include: ["**"]
    exclude: ["**/search.php*", "**/my.php*"]
    revisit: "14d"

policy:
  concurrency_total: 4
  concurrency_per_domain: 1
  delay_min_seconds: 10
  delay_max_seconds: 120
  user_agent: "Mozilla/5.0 (compatible; polite-crawler/1.0)"
  obey_robots_txt: true
  store_html: false
  store_text: true
  store_pdf: true

storage:
  root: "/srv/ai/corpus"   # wird ersetzt
  text_subdir: "text"
  pdf_subdir: "pdf"

memory:
  url: "http://127.0.0.1:8085"   # wird ersetzt
  collection: "chat-memory"
EOF
fi

# Pfade/URLs deterministisch in sources.yml ersetzen
$SUDO sed -i "s|/srv/ai/corpus|${CORPUS_DIR}|g" "${BASE}/crawler/sources.yml"
$SUDO sed -i "s|http://127.0.0.1:8085|${MEMORY_URL}|g" "${BASE}/crawler/sources.yml"

echo "✅ Installiert unter: ${BASE}"
echo "   Corpus: ${CORPUS_DIR}"
echo "   Memory-API: ${MEMORY_URL}"
echo "➡️ Aktivieren: source ${BASE}/venv/bin/activate && python3 ${BASE}/crawler/main.py --help"
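
# Optionaler Dauerbetrieb als Skizze (Annahme: cron vorhanden, Standardpfade wie oben gewählt):
#   0 3 * * * root /srv/ai/rag-crawler/venv/bin/python3 /srv/ai/rag-crawler/crawler/main.py --mode drip --budget 5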
12
recipes/ai/rag-crawler/requirements.txt
Normal file
@@ -0,0 +1,12 @@
aiohttp
aiodns
beautifulsoup4
tldextract
urllib3
pdfminer.six
python-dateutil
pydantic
pyyaml
robotexclusionrulesparser
uvloop; sys_platform != 'win32'
readability-lxml
44
recipes/db/mariadb/install.sh
Normal file
@@ -0,0 +1,44 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager

pkg_install curl

cd /srv/docker
$SUDO mkdir -p mariadb
cd mariadb

# Passwortblock
begin_password_section "MARIADB"
DB_ROOT_PASS="$(generate_password "mariadb_root")"
end_password_section "MARIADB"

# .env schreiben
$SUDO tee .env >/dev/null <<EOF
MYSQL_ROOT_PASSWORD=$DB_ROOT_PASS
MYSQL_DATABASE=defaultdb
EOF

# docker-compose schreiben
$SUDO tee docker-compose.yml >/dev/null <<'EOF'
services:
  mariadb:
    image: mariadb:11
    container_name: mariadb_server
    restart: unless-stopped
    env_file:
      - .env
    ports:
      - "3306:3306"
    volumes:
      - ./data:/var/lib/mysql
    command: --transaction-isolation=READ-COMMITTED --log-bin=mysqld-bin --binlog-format=ROW
EOF

$SUDO mkdir -p data

$SUDO docker compose up -d

log "MariaDB Server wurde installiert. Root-Passwort in keys.txt gespeichert."
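
# Optionaler Schnelltest (Annahme: der Container ist bereits hochgefahren):
#   docker exec -it mariadb_server mariadb -uroot -p"$DB_ROOT_PASS" -e "SELECT VERSION();"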
96
recipes/services/frigate/install.sh
Normal file
@@ -0,0 +1,96 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
pkg_install curl

BASE="/srv/docker/services/frigate"
$SUDO mkdir -p "$BASE/config"
$SUDO mkdir -p "$BASE/media"
cd "$BASE"

echo ""
echo "Möchtest du Coral TPU verwenden?"
echo "  y = USB / PCIe TPU einbinden"
echo "  n = ohne TPU (CPU only)"
read -p "Auswahl (y/n): " TPU

TPU_CONFIG=""
if [[ "$TPU" == "y" || "$TPU" == "Y" ]]; then
    # Einrückung passend zur Service-Ebene im Compose-File
    TPU_CONFIG="    devices:
      - /dev/apex_0:/dev/apex_0
      - /dev/bus/usb:/dev/bus/usb"
    echo "Coral TPU-Unterstützung aktiviert."
else
    echo "Installiere ohne TPU."
fi

echo ""
read -p "Soll direkt eine Kamera eingetragen werden? (y/n): " ADD_CAM

CAMERA_CONFIG=""
if [[ "$ADD_CAM" == "y" || "$ADD_CAM" == "Y" ]]; then
    read -p "Name der Kamera (z.B. wohnzimmer): " CAM_NAME
    read -p "RTSP URL (z.B. rtsp://user:pass@192.168.x.x/stream): " CAM_URL

    CAMERA_CONFIG="cameras:
  $CAM_NAME:
    ffmpeg:
      inputs:
        - path: \"$CAM_URL\"
          input_args: preset-rtsp-restream"
else
    CAMERA_CONFIG="cameras: {}"
fi

$SUDO tee "$BASE/config/config.yml" >/dev/null <<EOF
mqtt:
  enabled: false

${CAMERA_CONFIG}
EOF

$SUDO tee "$BASE/docker-compose.yml" >/dev/null <<EOF
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    container_name: frigate
    privileged: true
    restart: unless-stopped
    shm_size: "64m"
    volumes:
      - ./config:/config
      - ./media:/media/frigate
      - /dev/bus/usb:/dev/bus/usb
      - /etc/localtime:/etc/localtime:ro
${TPU_CONFIG}
    ports:
      - "5000:5000"
      - "8554:8554"
      - "8555:8555/tcp"
      - "8555:8555/udp"
EOF

$SUDO docker compose up -d

log "Frigate wurde installiert."
log "Web UI: http://<server-ip>:5000"
log "Konfiguration: $BASE/config/config.yml"

echo ""
read -p "Soll NGINX Proxy für Frigate eingerichtet werden? (y/n): " PROXY

if [[ "$PROXY" == "y" || "$PROXY" == "Y" ]]; then
    PROXY_SCRIPT="/srv/docker/system/nginx-proxy-path/install.sh"

    if [ ! -f "$PROXY_SCRIPT" ]; then
        log "Fehler: nginx-proxy-path nicht installiert."
        log "Bitte erst das Rezept 'nginx-proxy-path' installieren."
        exit 0
    fi

    echo ""
    echo "Starte Proxy-Konfiguration:"
    bash "$PROXY_SCRIPT"
fi
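
# Vorab-Check der Coral-Hardware als Skizze (Annahme: PCIe- bzw. USB-Variante):
#   ls -l /dev/apex_0                  # PCIe-TPU vorhanden?
#   lsusb | grep -iE 'google|unichip'  # USB-TPU vorhanden?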
65
recipes/services/grafana/install.sh
Normal file
@@ -0,0 +1,65 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
pkg_install curl

BASE="/srv/docker/services/grafana"
$SUDO mkdir -p "$BASE/data"
cd "$BASE"

echo "Starte Installation von Grafana..."

# Funktion: finde den nächsten freien Port ab 3000
find_free_port() {
    PORT=3000
    while ss -lnt | awk '{print $4}' | grep -q ":$PORT$"; do
        PORT=$((PORT + 1))
    done
    echo "$PORT"
}

FREE_PORT=$(find_free_port)
echo "✅ Freier Port für Grafana: $FREE_PORT"

$SUDO tee docker-compose.yml >/dev/null <<EOF
services:
  grafana:
    image: grafana/grafana:latest
    container_name: grafana-$FREE_PORT
    restart: unless-stopped
    ports:
      - "$FREE_PORT:3000"
    volumes:
      - ./data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - TZ=Europe/Berlin
EOF

$SUDO docker compose up -d

log "Grafana wurde installiert."
log "Web UI: http://<server-ip>:$FREE_PORT"
log "Standard Login: admin / admin (bitte ändern!)"
log "Daten liegen in: $BASE/data"

echo ""
read -p "Soll ein NGINX Proxy-Pfad eingerichtet werden? (y/n): " PROXY

if [[ "$PROXY" == "y" || "$PROXY" == "Y" ]]; then
    PROXY_SCRIPT="/srv/docker/system/nginx-proxy-path/install.sh"

    if [ ! -f "$PROXY_SCRIPT" ]; then
        log "Fehler: nginx-proxy-path nicht installiert."
        log "Bitte zuerst das Rezept 'nginx-proxy-path' installieren."
        exit 0
    fi

    echo ""
    echo "Bitte Proxy-Pfad einrichten:"
    bash "$PROXY_SCRIPT"
fi
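
# Schnelltest (Annahme: Grafana ist fertig gestartet):
#   curl -fsS http://127.0.0.1:$FREE_PORT/api/health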
49
recipes/services/homeassistant/install.sh
Normal file
@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
pkg_install curl

BASE="/srv/docker/services/homeassistant"
$SUDO mkdir -p "$BASE/config"
cd "$BASE"

echo ""
echo "Starte Installation von Home Assistant (Container-Modus)."

# docker-compose schreiben
$SUDO tee docker-compose.yml >/dev/null <<'EOF'
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    container_name: homeassistant
    restart: unless-stopped
    network_mode: host
    volumes:
      - ./config:/config
      - /etc/localtime:/etc/localtime:ro
EOF

$SUDO docker compose up -d

log "Home Assistant wurde installiert."
log "Web UI (wenn kein Proxy): http://<server-ip>:8123"
log "Konfiguration: $BASE/config/"

echo ""
read -p "Soll ein NGINX Proxy-Pfad eingerichtet werden? (y/n): " PROXY

if [[ "$PROXY" == "y" || "$PROXY" == "Y" ]]; then
    PROXY_SCRIPT="/srv/docker/system/nginx-proxy-path/install.sh"

    if [ ! -f "$PROXY_SCRIPT" ]; then
        log "Fehler: nginx-proxy-path nicht installiert."
        log "Bitte zuerst das Rezept 'nginx-proxy-path' installieren."
        exit 0
    fi

    echo ""
    echo "Bitte Proxy-Pfad einrichten:"
    bash "$PROXY_SCRIPT"
fi
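
# Hinweis für den Proxy-Betrieb (Annahme: Reverse Proxy auf demselben Host):
# Home Assistant akzeptiert Proxy-Header nur mit passender http-Konfiguration
# in $BASE/config/configuration.yaml, z.B.:
#   http:
#     use_x_forwarded_for: true
#     trusted_proxies:
#       - 127.0.0.1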
49
recipes/services/myspeed/install.sh
Normal file
@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
pkg_install curl

BASE="/srv/docker/services/myspeed"
$SUDO mkdir -p "$BASE/data"
cd "$BASE"

echo "Starte Installation von MySpeed (germannewsmaker/myspeed)..."

$SUDO tee docker-compose.yml >/dev/null <<'EOF'
services:
  myspeed:
    image: germannewsmaker/myspeed:latest
    container_name: myspeed
    restart: unless-stopped
    ports:
      - "52100:52100"
    volumes:
      - ./data:/myspeed/data
    environment:
      - TZ=Europe/Berlin
EOF

$SUDO docker compose up -d

log "MySpeed wurde installiert."
log "Web UI: http://<server-ip>:52100"
log "Daten liegen in: $BASE/data"

echo ""
read -p "Soll ein NGINX Proxy-Pfad eingerichtet werden? (y/n): " PROXY

if [[ "$PROXY" == "y" || "$PROXY" == "Y" ]]; then
    PROXY_SCRIPT="/srv/docker/system/nginx-proxy-path/install.sh"

    if [ ! -f "$PROXY_SCRIPT" ]; then
        log "Fehler: nginx-proxy-path nicht installiert."
        log "Bitte zuerst das Rezept 'nginx-proxy-path' installieren."
        exit 0
    fi

    echo ""
    echo "Bitte Proxy-Pfad einrichten:"
    bash "$PROXY_SCRIPT"
fi
70
recipes/services/nginx-ai-configurator/install.sh
Normal file
@@ -0,0 +1,70 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
pkg_install curl

echo ""
echo "=== NGINX KI-Proxy Konfigurator ==="
echo ""

read -p "Pfad unter dem KI erreichbar sein soll (z.B. /ai): " KI_PATH
read -p "Backend-Adresse (z.B. http://127.0.0.1:11434): " KI_BACKEND

echo ""
echo "Ist Ollama installiert? (y/n)"
read -r OLLAMA

if [[ "$OLLAMA" =~ ^[Yy]$ ]]; then
    echo "Ollama wird geprüft..."
    if ! systemctl is-active --quiet ollama && ! pgrep ollama >/dev/null; then
        echo "⚠️ Ollama läuft nicht. Bitte vorher installieren/starten."
    else
        echo "✅ Ollama läuft."
    fi
fi

echo ""
echo "Soll zusätzlich der Memory-Server integriert werden? (y/n)"
read -r MEM

if [[ "$MEM" =~ ^[Yy]$ ]]; then
    read -p "Memory Server URL (z.B. http://127.0.0.1:8085): " MEMORY_URL
fi

# "location"-Blöcke sind nur innerhalb eines server-Blocks gültig; als Datei
# in conf.d würden sie auf http-Ebene eingebunden und den Reload brechen.
# Die Konfiguration wird daher als Snippet abgelegt und muss im gewünschten
# server-Block eingebunden werden: include snippets/ai-proxy.conf;
NGINX_CONF="/etc/nginx/snippets/ai-proxy.conf"
$SUDO mkdir -p /etc/nginx/snippets

$SUDO tee "$NGINX_CONF" >/dev/null <<EOF
location $KI_PATH/ {
    proxy_pass $KI_BACKEND/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade \$http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host \$host;
}
EOF

if [[ "$MEM" =~ ^[Yy]$ ]]; then
    $SUDO tee -a "$NGINX_CONF" >/dev/null <<EOF

location ${KI_PATH}_memory/ {
    proxy_pass $MEMORY_URL/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade \$http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host \$host;
}
EOF
fi

echo ""
echo "Reloading nginx..."
$SUDO nginx -t && $SUDO systemctl reload nginx

echo ""
echo "✅ Fertig!"
echo "Snippet erstellt: $NGINX_CONF"
echo "Nach Einbindung im server-Block erreichbar unter: http://<server-ip>$KI_PATH/"
if [[ "$MEM" =~ ^[Yy]$ ]]; then
    echo "Memory erreichbar unter: http://<server-ip>${KI_PATH}_memory/"
fi
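
# Einbindung und Schnelltest als Skizze (Annahme: Standard-Site unter /etc/nginx/sites-enabled/default):
#   im server-Block ergänzen:  include snippets/ai-proxy.conf;
#   danach: nginx -t && systemctl reload nginx
#   Test:   curl -I http://127.0.0.1$KI_PATH/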
62
recipes/services/node-red/install.sh
Normal file
@@ -0,0 +1,62 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
pkg_install curl

BASE="/srv/docker/services/node-red"
$SUDO mkdir -p "$BASE/data"
cd "$BASE"

echo "Starte Installation von Node-RED..."

# Funktion: finde den nächsten freien Port ab 1880
find_free_port() {
    PORT=1880
    while ss -lnt | awk '{print $4}' | grep -q ":$PORT$"; do
        PORT=$((PORT + 1))
    done
    echo "$PORT"
}

FREE_PORT=$(find_free_port)
echo "✅ Freier Port für Node-RED gefunden: $FREE_PORT"

$SUDO tee docker-compose.yml >/dev/null <<EOF
services:
  node-red:
    image: nodered/node-red:latest
    container_name: node-red-$FREE_PORT
    restart: unless-stopped
    ports:
      - "$FREE_PORT:1880"
    volumes:
      - ./data:/data
    environment:
      - TZ=Europe/Berlin
EOF

$SUDO docker compose up -d

log "Node-RED wurde installiert."
log "Web UI: http://<server-ip>:$FREE_PORT"
log "Konfiguration / Flows: $BASE/data/"

echo ""
read -p "Soll ein NGINX Proxy-Pfad eingerichtet werden? (y/n): " PROXY

if [[ "$PROXY" == "y" || "$PROXY" == "Y" ]]; then
    PROXY_SCRIPT="/srv/docker/system/nginx-proxy-path/install.sh"

    if [ ! -f "$PROXY_SCRIPT" ]; then
        log "Fehler: nginx-proxy-path nicht installiert."
        log "Bitte zuerst das Rezept 'nginx-proxy-path' installieren."
        exit 0
    fi

    echo ""
    echo "Bitte Proxy-Pfad einrichten:"
    bash "$PROXY_SCRIPT"
fi
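
# Hinweis (Annahme: Betrieb unter einem Proxy-Unterpfad wie /nodered/):
# Node-RED erwartet dafür in $BASE/data/settings.js einen passenden Basis-Pfad, z.B.:
#   httpAdminRoot: '/nodered/',
#   httpNodeRoot:  '/nodered/',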
57
recipes/services/omada/install.sh
Normal file
@@ -0,0 +1,57 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
pkg_install curl

BASE="/srv/docker/services/omada"
$SUDO mkdir -p "$BASE/data"
$SUDO mkdir -p "$BASE/logs"
cd "$BASE"

echo "Starte Installation des Omada Controllers..."

# docker-compose
$SUDO tee docker-compose.yml >/dev/null <<'EOF'
services:
  omada:
    image: mbentley/omada-controller:latest
    container_name: omada-controller
    restart: unless-stopped
    network_mode: host
    environment:
      TZ: Europe/Berlin
      MANAGE_HTTP_PORT: 8088
      MANAGE_HTTPS_PORT: 8043
      PORTAL_HTTP_PORT: 8086
      PORTAL_HTTPS_PORT: 8843
    volumes:
      - ./data:/opt/tplink/EAPController/data
      - ./logs:/opt/tplink/EAPController/logs
EOF

$SUDO docker compose up -d

log "Omada Controller wurde installiert."
log "Web UI (HTTPS): https://<server-ip>:8043"
log "Mobile App Discovery funktioniert automatisch (host network mode)."

echo ""
read -p "Soll ein NGINX Proxy-Pfad eingerichtet werden? (y/n): " PROXY

if [[ "$PROXY" == "y" || "$PROXY" == "Y" ]]; then
    PROXY_SCRIPT="/srv/docker/system/nginx-proxy-path/install.sh"

    if [ ! -f "$PROXY_SCRIPT" ]; then
        log "Fehler: nginx-proxy-path nicht installiert."
        log "Bitte zuerst das Rezept 'nginx-proxy-path' installieren."
        exit 0
    fi

    echo ""
    echo "Hinweis: Omada UI benötigt HTTPS Proxy!"
    echo "Proxy-Ziel: <server-ip>:8043"
    echo ""
    bash "$PROXY_SCRIPT"
fi
87
recipes/services/paperless-ai-multi/install.sh
Normal file
@@ -0,0 +1,87 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
pkg_install curl

echo ""
read -p "Instanz Nummer (z.B. 1, 2, 3...): " INSTANCE
BASE="/srv/docker/services/paperless-$INSTANCE"
$SUDO mkdir -p "$BASE/data" "$BASE/media" "$BASE/consume"
cd "$BASE"

PORT=$((8100 + INSTANCE))
echo "Web-Port wird: $PORT"

echo ""
echo "Paperless Variante:"
echo "  1) Paperless-NGX (ohne KI)"
echo "  2) Paperless-AI (mit KI/RAG)"
read -p "Auswahl (1/2): " MODE

if [[ "$MODE" == "2" ]]; then
    read -p "KI Backend URL (z.B. http://127.0.0.1:11434): " AI_URL
    read -p "Memory Server URL (z.B. http://127.0.0.1:8085): " MEMORY_URL
fi

echo ""
echo "Instanz $INSTANCE ersetzen ohne Daten zu löschen?"
read -p "(y/n): " REPLACE

if [[ "$REPLACE" =~ ^[Yy]$ ]]; then
    $SUDO docker compose down || true
fi

if [[ "$MODE" == "1" ]]; then
    $SUDO tee docker-compose.yml >/dev/null <<EOF
services:
  paperless-$INSTANCE:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    container_name: paperless-$INSTANCE
    restart: unless-stopped
    ports:
      - "$PORT:8000"
    volumes:
      - ./data:/usr/src/paperless/data
      - ./media:/usr/src/paperless/media
      - ./consume:/usr/src/paperless/consume
    environment:
      TZ: Europe/Berlin
EOF
fi

if [[ "$MODE" == "2" ]]; then
    $SUDO tee docker-compose.yml >/dev/null <<EOF
services:
  paperless-$INSTANCE:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    container_name: paperless-$INSTANCE
    restart: unless-stopped
    ports:
      - "$PORT:8000"
    volumes:
      - ./data:/usr/src/paperless/data
      - ./media:/usr/src/paperless/media
      - ./consume:/usr/src/paperless/consume
    environment:
      TZ: Europe/Berlin

  paperless-ai-$INSTANCE:
    image: clusterzx/paperless-ai:latest
    container_name: paperless-ai-$INSTANCE
    restart: unless-stopped
    environment:
      PAPERLESS_AI_OPENAI_API_BASE_URL: "$AI_URL"
      PAPERLESS_AI_EMBEDDING_MODEL: "nomic-embed-text"
      PAPERLESS_AI_CHAT_MODEL: "qwen2.5:0.5b"
      PAPERLESS_AI_MEMORY_SERVER_URL: "$MEMORY_URL"
      TZ: Europe/Berlin
EOF
fi

$SUDO docker compose up -d

echo ""
echo "✅ Instanz $INSTANCE installiert!"
echo "Web UI: http://<server-ip>:$PORT"
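
# Hinweis für den Proxy-Betrieb (Annahme: Zugriff über Reverse Proxy unter einem Unterpfad):
# Paperless-NGX benötigt dann zusätzlich z.B. folgende Variablen im environment-Block:
#   PAPERLESS_URL: "https://<externe-adresse>"
#   PAPERLESS_FORCE_SCRIPT_NAME: "/paperless"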
44
recipes/services/portainer-watchtower/install.sh
Normal file
@@ -0,0 +1,44 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
pkg_install curl

BASE="/srv/docker/system/portainer-watchtower"
$SUDO mkdir -p "$BASE"
cd "$BASE"

echo "Starte Installation von Portainer + Watchtower..."

$SUDO tee docker-compose.yml >/dev/null <<'EOF'
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - "9443:9443"
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./portainer-data:/data

  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command:
      - --schedule=0 0 3 * * *
      - --cleanup
      - --rolling-restart
      - --update-delay=72h
EOF

$SUDO docker compose up -d

log "Portainer + Watchtower installiert."
log "Portainer UI: https://<server-ip>:9443 (oder http://<server-ip>:9000)"
log "Watchtower aktualisiert Container täglich um 03:00 Uhr mit 72h Verzögerung."
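
# Zum Zeitplan: Watchtower nutzt ein 6-Felder-Cron-Format (Sekunde Minute Stunde Tag Monat Wochentag),
# "0 0 3 * * *" heißt also täglich um 03:00:00 Uhr. Einmaliger Testlauf als Skizze:
#   docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once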
55
recipes/services/unifi/install.sh
Normal file
@@ -0,0 +1,55 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
pkg_install curl

BASE="/srv/docker/services/unifi"
$SUDO mkdir -p "$BASE/data"
$SUDO mkdir -p "$BASE/logs"
cd "$BASE"

echo "Starte Installation des UniFi Controllers..."

# docker-compose
$SUDO tee docker-compose.yml >/dev/null <<'EOF'
services:
  unifi-controller:
    image: linuxserver/unifi-controller:latest
    container_name: unifi-controller
    restart: unless-stopped
    network_mode: host
    environment:
      PUID: 1000
      PGID: 1000
      TZ: Europe/Berlin
    volumes:
      - ./data:/config
      - ./logs:/config/logs
EOF

$SUDO docker compose up -d

log "UniFi Controller wurde installiert."
log "Web UI (HTTPS): https://<server-ip>:8443"
log "Geräte-Erkennung funktioniert automatisch (host network mode)."

echo ""
read -p "Soll ein NGINX Proxy-Pfad eingerichtet werden? (y/n): " PROXY

if [[ "$PROXY" == "y" || "$PROXY" == "Y" ]]; then
    PROXY_SCRIPT="/srv/docker/system/nginx-proxy-path/install.sh"

    if [ ! -f "$PROXY_SCRIPT" ]; then
        log "Fehler: nginx-proxy-path nicht installiert."
        log "Bitte zuerst das Rezept 'nginx-proxy-path' installieren."
        exit 0
    fi

    echo ""
    echo "Hinweis: UniFi UI benötigt HTTPS Proxy!"
    echo "Proxy-Ziel: <server-ip>:8443"
    echo ""
    bash "$PROXY_SCRIPT"
fi
47
recipes/services/uptime-kuma/install.sh
Normal file
@@ -0,0 +1,47 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
pkg_install curl

BASE="/srv/docker/services/uptime-kuma"
$SUDO mkdir -p "$BASE/data"
cd "$BASE"

echo "Starte Installation von Uptime Kuma..."

$SUDO tee docker-compose.yml >/dev/null <<'EOF'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - ./data:/app/data
EOF

$SUDO docker compose up -d

log "Uptime Kuma wurde installiert."
log "Web UI: http://<server-ip>:3001"
log "Daten liegen in: $BASE/data"

echo ""
read -p "Soll ein NGINX Proxy-Pfad eingerichtet werden? (y/n): " PROXY

if [[ "$PROXY" == "y" || "$PROXY" == "Y" ]]; then
    PROXY_SCRIPT="/srv/docker/system/nginx-proxy-path/install.sh"

    if [ ! -f "$PROXY_SCRIPT" ]; then
        log "Fehler: nginx-proxy-path nicht installiert."
        log "Bitte zuerst das Rezept 'nginx-proxy-path' installieren."
        exit 0
    fi

    echo ""
    echo "Bitte Proxy-Pfad einrichten:"
    bash "$PROXY_SCRIPT"
fi
37
recipes/system/base-system/install.sh
Normal file
@@ -0,0 +1,37 @@
#!/usr/bin/env bash
set -euo pipefail

echo "---------------------------------------------"
echo "🔧 Starte Base-System Vorbereitung"
echo "---------------------------------------------"
sleep 1

ensure_root
detect_pkg_manager

log "📦 Aktualisiere Paketlisten und installiere Basis-Werkzeuge..."
pkg_install curl wget git htop zip unzip nano vim ca-certificates gnupg lsb-release apt-transport-https software-properties-common ufw screen mc rsync

echo "⏱ Richte Zeit-Synchronisation ein..."
$SUDO timedatectl set-timezone Europe/Berlin
$SUDO timedatectl set-ntp true

echo "🗣 Stelle Locale ein..."
$SUDO sed -i 's/# de_DE.UTF-8 UTF-8/de_DE.UTF-8 UTF-8/' /etc/locale.gen
$SUDO locale-gen
$SUDO update-locale LANG=de_DE.UTF-8

echo "✅ Basis-System eingerichtet!"
echo ""

if [ -f /var/run/reboot-required ]; then
    echo "⚠️ Es wird ein Neustart empfohlen."
    read -rp "Jetzt neu starten? (j/n) " answer
    if [[ "$answer" =~ ^[JjYy]$ ]]; then
        $SUDO reboot
    else
        echo "👉 Bitte später neu starten."
    fi
fi

echo "🎉 Base-System Setup abgeschlossen."
echo "---------------------------------------------"
64
recipes/system/base-system/playbook.yml
Normal file
@@ -0,0 +1,64 @@
---
- name: Base System Setup
  hosts: localhost
  become: true
  gather_facts: true

  vars:
    base_packages:
      - screen
      - mc
      - rsync
      - curl
      - wget
      - htop
      - ca-certificates
      - gnupg
      - lsb-release

  tasks:

    - name: Ensure apt index is up to date
      ansible.builtin.apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Upgrade system packages
      ansible.builtin.apt:
        upgrade: safe

    - name: Install base utility packages
      ansible.builtin.apt:
        name: "{{ base_packages }}"
        state: present

    - name: Ensure /srv exists
      ansible.builtin.file:
        path: /srv
        state: directory
        owner: root
        group: root
        mode: '0755'

    - name: Ensure /srv/docker exists
      ansible.builtin.file:
        path: /srv/docker
        state: directory
        owner: root
        group: root
        mode: '0755'

    # timezone und locale_gen stammen aus der Collection community.general
    - name: Set timezone to Europe/Berlin
      community.general.timezone:
        name: Europe/Berlin

    - name: Ensure system locale is de_DE.UTF-8
      community.general.locale_gen:
        name: de_DE.UTF-8
        state: present

    - name: Apply locale permanently
      ansible.builtin.lineinfile:
        path: /etc/default/locale
        regexp: '^LANG='
        line: 'LANG=de_DE.UTF-8'
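
# Ausführung (Annahme: Ansible ist lokal installiert, inkl. Collection community.general):
#   ansible-playbook recipes/system/base-system/playbook.yml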
107
recipes/system/docker/playbook.yml
Normal file
@@ -0,0 +1,107 @@
---
- name: Install and configure Docker
  hosts: localhost
  become: true
  gather_facts: true

  vars:
    docker_packages:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin
    # apt erwartet deb-Architekturnamen (amd64/arm64), nicht die Kernel-Architektur
    docker_arch: "{{ 'amd64' if ansible_architecture == 'x86_64' else ('arm64' if ansible_architecture == 'aarch64' else ansible_architecture) }}"

  tasks:
    - name: Ensure required packages are installed
      ansible.builtin.apt:
        name: ["ca-certificates", "curl", "gnupg", "lsb-release"]
        state: present
        update_cache: yes

    - name: Add Docker GPG key
      ansible.builtin.shell: |
        install -m 0755 -d /etc/apt/keyrings
        curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
        chmod a+r /etc/apt/keyrings/docker.gpg
      args:
        creates: /etc/apt/keyrings/docker.gpg

    - name: Add Docker APT repository
      ansible.builtin.copy:
        dest: /etc/apt/sources.list.d/docker.list
        content: |
          deb [arch={{ docker_arch }} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian {{ ansible_distribution_release }} stable

    - name: Update apt cache
      ansible.builtin.apt:
        update_cache: yes

    - name: Install Docker packages
      ansible.builtin.apt:
        name: "{{ docker_packages }}"
        state: present

    - name: Ensure systemd is refreshed after Docker install
      ansible.builtin.systemd:
        daemon_reload: yes

    - name: Start and enable Docker
      ansible.builtin.service:
        name: docker
        state: started
        enabled: yes

    # Benutzer- und Verzeichnis-Tasks erst NACH der Docker-Installation,
    # da die Gruppe "docker" vorher noch nicht existiert.
    - name: Ensure docker runtime user exists
      ansible.builtin.user:
        name: dockeruser
        shell: /usr/sbin/nologin
        create_home: yes
        state: present

    - name: Add current user to docker group
      ansible.builtin.user:
        name: "{{ ansible_user_id }}"
        groups: docker
        append: yes

    - name: Create /srv/docker base directory
      ansible.builtin.file:
        path: /srv/docker
        state: directory
        owner: dockeruser
        group: docker
        mode: '0755'

    - name: Create /srv/docker/services directory
      ansible.builtin.file:
        path: /srv/docker/services
        state: directory
        owner: dockeruser
        group: docker
        mode: '0755'

    - name: Create /srv/docker/stacks directory
      ansible.builtin.file:
        path: /srv/docker/stacks
        state: directory
        owner: dockeruser
        group: docker
        mode: '0755'
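
# Kurzer Funktionstest nach dem Playbook-Lauf (Annahme: die neue
# Gruppenmitgliedschaft greift erst nach erneutem Login):
#   docker compose version
#   docker run --rm hello-world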
17
recipes/system/nginx-php/docker-compose.yml
Normal file
@@ -0,0 +1,17 @@
services:
  php:
    image: php:8.2-fpm
    container_name: nginx-php_php
    volumes:
      - ./www:/var/www/html

  nginx:
    image: nginx:latest
    container_name: nginx-php_nginx
    ports:
      - "80:80"
    volumes:
      - ./www:/var/www/html
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
64
recipes/system/nginx-php/install.sh
Normal file
@@ -0,0 +1,64 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager

pkg_install curl

$SUDO mkdir -p /srv/docker/nginx-php/www
cd /srv/docker/nginx-php

if [ ! -f /srv/docker/nginx-php/www/index.php ]; then
    $SUDO tee /srv/docker/nginx-php/www/index.php >/dev/null <<'EOF'
<?php
phpinfo();
EOF
fi

$SUDO tee docker-compose.yml >/dev/null <<'EOF'
services:
  php:
    image: php:8.2-fpm
    container_name: nginx-php_php
    volumes:
      - ./www:/var/www/html

  nginx:
    image: nginx:latest
    container_name: nginx-php_nginx
    ports:
      - "80:80"
    volumes:
      - ./www:/var/www/html
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
EOF

if [ ! -f nginx.conf ]; then
    $SUDO tee nginx.conf >/dev/null <<'EOF'
server {
    listen 80;
    server_name _;
    root /var/www/html;

    index index.php index.html;

    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
EOF
fi

$SUDO docker compose up -d

log "NGINX + PHP erfolgreich installiert. Öffne http://<server-ip>/"
18
recipes/system/nginx-php/nginx.conf
Normal file
@@ -0,0 +1,18 @@
server {
    listen 80;
    server_name _;
    root /var/www/html;

    index index.php index.html;

    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
1
recipes/system/nginx-php/www/index.php
Normal file
@@ -0,0 +1 @@
<?php phpinfo();
86
recipes/system/nginx-proxy-path/install.sh
Normal file
@@ -0,0 +1,86 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root

NGINX_PATH="/srv/docker/nginx-php/nginx.conf"

if [ ! -f "$NGINX_PATH" ]; then
    log "Fehler: nginx-php scheint nicht installiert zu sein. Datei fehlt:"
    log "$NGINX_PATH"
    exit 1
fi

echo ""
read -p "Welcher Pfad soll erstellt werden? (Beispiel: /homeassistant): " LOCATION_PATH_RAW

# Pfad normalisieren
LOCATION_PATH="${LOCATION_PATH_RAW#/}"   # führenden "/" entfernen
LOCATION_PATH="/${LOCATION_PATH}/"       # sauber neu setzen: /xyz/

echo ""
read -p "Backend Zielserver (z.B. 192.168.3.21:8123): " PROXY_TARGET

echo ""
echo "Konfiguration:"
echo "  NGINX-Pfad:       $LOCATION_PATH"
echo "  Proxy Zielserver: $PROXY_TARGET"
echo ""

# Konfliktprüfung
if grep -q "location $LOCATION_PATH" "$NGINX_PATH"; then
    echo "WARNUNG: Ein Eintrag für diesen Pfad existiert bereits!"
    read -p "Überschreiben? (y/n): " OVERWRITE
    if [[ "$OVERWRITE" != "y" && "$OVERWRITE" != "Y" ]]; then
        log "Abgebrochen."
        exit 0
    fi
    # Bestehenden Block entfernen: von der location-Zeile bis zur
    # zugehörigen schließenden Klammer (Einrückungstiefe 4, wie unten erzeugt)
    $SUDO sed -i "\|location $LOCATION_PATH|,\|^    }|d" "$NGINX_PATH"
fi

read -p "Fortfahren und anwenden? (y/n): " CONFIRM
if [[ "$CONFIRM" != "y" && "$CONFIRM" != "Y" ]]; then
    log "Abgebrochen."
    exit 0
fi

# Der location-Block muss INNERHALB des server-Blocks liegen. Die Datei endet
# mit dessen schließender Klammer; sie wird entfernt, der Block angehängt und
# die Klammer am Ende wieder gesetzt.
$SUDO sed -i '$ d' "$NGINX_PATH"

# nginx-Variablen ($host, $scheme, ...) sind mit \$ maskiert, damit die Shell
# sie nicht expandiert; $LOCATION_PATH und $PROXY_TARGET werden eingesetzt.
$SUDO tee -a "$NGINX_PATH" >/dev/null <<EOF

    # Automatisch hinzugefügt: Reverse Proxy für $LOCATION_PATH
    location $LOCATION_PATH {
        proxy_pass http://$PROXY_TARGET/;

        # Standard Header
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto \$scheme;

        # WebSocket Support
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection "upgrade";

        # Buffer & Timeout Tuning für Streams & Video
        proxy_read_timeout 3600;
        proxy_send_timeout 3600;
        proxy_buffering off;
        proxy_request_buffering off;
        client_max_body_size 0;

        # Optional: flüssiger Videoverkehr
        chunked_transfer_encoding on;
    }
}
EOF

log "NGINX-Konfiguration erweitert."

(
    cd /srv/docker/nginx-php
    $SUDO docker compose restart nginx
)

log "NGINX neu geladen."
log "Aufruf nun möglich unter: http://<server-ip>$LOCATION_PATH"
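
# Optionaler Konfigurationstest vor dem Neustart (Container-Name laut Compose-Datei):
#   docker exec nginx-php_nginx nginx -t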
51
recipes/tools/phpmyadmin_multi/install.sh
Normal file
@@ -0,0 +1,51 @@
#!/usr/bin/env bash
set -euo pipefail

ensure_root
detect_pkg_manager
pkg_install curl

$SUDO mkdir -p /srv/docker/phpmyadmin
cd /srv/docker/phpmyadmin

# docker-compose erstellen (ohne PMA_HOST)
$SUDO tee docker-compose.yml >/dev/null <<'EOF'
services:
  phpmyadmin:
    image: phpmyadmin:latest
    container_name: phpmyadmin
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./config.user.php:/etc/phpmyadmin/config.user.php
EOF

# config.user.php für freie Serverwahl
$SUDO tee config.user.php >/dev/null <<'EOF'
<?php
$cfg['Servers'][1]['auth_type'] = 'cookie';
$cfg['AllowArbitraryServer'] = true;
EOF

$SUDO docker compose up -d

log "phpMyAdmin läuft unter: http://<server-ip>:8080/"

echo ""
read -p "Soll NGINX so erweitert werden, dass /phpmyadmin funktioniert? (y/n): " ANSW
if [[ "$ANSW" == "y" || "$ANSW" == "Y" ]]; then
    if [ -f /srv/docker/nginx-php/nginx.conf ]; then
        # Damit der nginx-Container den Namen "phpmyadmin" auflösen kann, müssen
        # beide Container im selben Docker-Netzwerk hängen (Annahme: Standard-
        # Netzwerkname des Compose-Projekts nginx-php).
        $SUDO docker network connect nginx-php_default phpmyadmin || true

        # location-Block muss innerhalb des server-Blocks liegen: letzte Zeile
        # (schließende Klammer) entfernen, Block anhängen, Klammer wieder setzen.
        $SUDO sed -i '$ d' /srv/docker/nginx-php/nginx.conf
        $SUDO tee -a /srv/docker/nginx-php/nginx.conf >/dev/null <<'EOF'

    location /phpmyadmin/ {
        proxy_pass http://phpmyadmin:80/;
        proxy_set_header Host $host;
    }
}
EOF
        (cd /srv/docker/nginx-php && $SUDO docker compose restart nginx || true)
        log "NGINX wurde angepasst: http://<server-ip>/phpmyadmin/"
    else
        log "Keine nginx-php Installation gefunden. Überspringe NGINX Integration."
    fi
fi
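
# Prüfung, ob beide Container im selben Netzwerk hängen (Annahme: Standard-Netzwerkname):
#   docker network inspect nginx-php_default | grep -E 'phpmyadmin|nginx'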