Compare commits

..

69 Commits

Author SHA1 Message Date
duffyduck aa54765b03 release: bump version to 0.0.2.7 2026-04-10 02:24:58 +02:00
duffyduck 8929bc99bb fix: XTTS groups sentences into ~250 char chunks for consistent voice quality
- 2-3 sentences per chunk (more context = stable voice/volume)
- Max 250 chars per chunk (keeps WebSocket packets manageable)
- Dots re-added between sentences within a chunk (natural pauses)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 02:23:29 +02:00
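The chunking rule described in commit 8929bc99bb can be sketched roughly like this (illustrative Python, not the bridge's actual code; the function name and limits are assumptions taken from the commit message):

```python
def chunk_sentences(sentences, max_sents=3, max_chars=250):
    """Group sentences into chunks of at most `max_sents` sentences and
    `max_chars` characters, re-joining them with dots so the TTS engine
    gets natural pauses and enough context for a stable voice."""
    chunks, current = [], []
    for s in sentences:
        candidate = ". ".join(current + [s]) + "."
        if current and (len(current) >= max_sents or len(candidate) > max_chars):
            chunks.append(". ".join(current) + ".")  # flush the full chunk
            current = [s]
        else:
            current.append(s)
    if current:
        chunks.append(". ".join(current) + ".")
    return chunks
```

A single oversized sentence still becomes its own chunk, so nothing is dropped.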
duffyduck 0428c06612 fix: Audio preloading to prevent stuttering, remove trailing dots for XTTS
- Preload next audio while current plays (eliminates gap between sentences)
- Remove trailing dots from sentences (XTTS reads them aloud)
- stopPlayback cleans up preloaded audio

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 02:21:19 +02:00
duffyduck a7eb3cf433 release: bump version to 0.0.2.6 2026-04-10 02:11:04 +02:00
duffyduck e4e0e793a8 fix: Audio queue for sequential TTS playback (no overlap/skip)
- Audio packets queued instead of stopping previous
- _playNext() plays sequentially, each sentence after the previous
- stopPlayback() clears queue
- Fixes overlapping/skipping with XTTS sentence-by-sentence rendering

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 02:09:35 +02:00
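The queued playback from commit e4e0e793a8 can be modeled like this (a synchronous Python sketch; in the app, `_playNext` would be driven by the audio player's "ended" event rather than direct recursion, and the method names here are placeholders):

```python
from collections import deque

class AudioQueue:
    """Packets are enqueued instead of replacing the current clip;
    each clip starts only after the previous one finishes."""

    def __init__(self, play_fn):
        self.play_fn = play_fn      # callback that actually plays one clip
        self.queue = deque()
        self.playing = False

    def enqueue(self, packet):
        self.queue.append(packet)
        if not self.playing:
            self._play_next()

    def _play_next(self):
        if not self.queue:
            self.playing = False
            return
        self.playing = True
        self.play_fn(self.queue.popleft())
        self._play_next()           # in the app: triggered by the "ended" event

    def stop_playback(self):
        self.queue.clear()          # drop pending clips, as the commit describes
        self.playing = False
```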
duffyduck b3d3b8b6bc fix: XTTS bridge splits text into sentences sequentially
- XTTS-Bridge does sentence splitting (not ARIA-Bridge)
- Sequential rendering: correct order guaranteed
- Each sentence sent as separate xtts_response
- Markdown removal before splitting
- App starts playback after first sentence (faster UX)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 02:03:29 +02:00
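A rough sketch of what commit b3d3b8b6bc describes: markdown is stripped first, then the text is split into sentences. The regexes here are assumptions for illustration; the bridge's actual rules may differ.

```python
import re

def strip_markdown(text):
    """Remove common markdown markers before TTS splitting."""
    text = re.sub(r"[*_`#]+", "", text)                    # emphasis, code, headings
    text = re.sub(r"\[([^\]]+)\]\([^)]*\)", r"\1", text)   # [label](url) -> label
    return text

def split_sentences(text):
    """Split on sentence-ending punctuation followed by whitespace."""
    parts = re.split(r"(?<=[.!?])\s+", strip_markdown(text).strip())
    return [p for p in parts if p]
```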
duffyduck 06bc456221 fix: XTTS splits long text into sentences before sending (WebSocket size limit)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 01:56:25 +02:00
duffyduck 3461f45207 docs: update README with XTTS v2 setup details, voice cloning guide
- Architecture diagram for XTTS flow (Gaming-PC ↔ RVS ↔ ARIA-VM)
- Port 8020 (not 8000), token must match, model caching
- Voice cloning step-by-step guide
- TTS engine switching (Piper/XTTS) with fallback
- Known limitation: RVS zombie connections

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 01:49:08 +02:00
duffyduck a17d4acc13 fix: XTTS bridge shares /voices volume with XTTS server
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 01:40:41 +02:00
duffyduck 62fd9193a1 fix: XTTS voice dropdown shows saved voice after page reload
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 01:34:00 +02:00
duffyduck 2329645df4 fix: XTTS voices list + upload use fresh RVS connection with response wait
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 01:24:55 +02:00
duffyduck 8a435ddf6c fix: voice upload uses send() via server, not client-side sendToRVS_raw
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 01:15:29 +02:00
duffyduck 25b754ba31 fix: voice upload Base64 conversion (chunked, no stack overflow)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 01:08:32 +02:00
duffyduck b734593bf2 fix: Bridge _send_to_rvs ping-check before send, force reconnect on zombie
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 00:37:22 +02:00
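The ping-before-send pattern from commit b734593bf2 might look like the following (a self-contained sketch with a placeholder connection interface; the real bridge presumably uses its WebSocket client's ping, and the class name is invented here):

```python
class RVSSender:
    """Zombie sockets often accept writes silently, so ping before
    sending and force a reconnect when the ping fails."""

    def __init__(self, connect_fn):
        self.connect_fn = connect_fn
        self.conn = connect_fn()

    def send(self, msg):
        try:
            self.conn.ping()                   # zombies fail here, not on send
        except ConnectionError:
            self.conn = self.connect_fn()      # drop the zombie, reconnect
        self.conn.send(msg)
```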
duffyduck 16847ce6f7 fix: TTS toggle global above engine selector, health check /docs
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 00:27:55 +02:00
duffyduck 6300829317 fix: XTTS model cache volume path /app/xtts_models
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 23:44:29 +02:00
duffyduck a1e1ee31bd fix: XTTS bridge port 8020, longer startup wait
- XTTS API runs on port 8020 (not 8000)
- Bridge waits up to 5min for model download (30x10s)
- Health check uses / instead of /docs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 23:39:45 +02:00
duffyduck 7ed70b876d updated image public path 2026-04-07 23:06:26 +02:00
duffyduck 3ca85da906 release: bump version to 0.0.2.5 2026-04-05 20:12:56 +02:00
duffyduck d6a89168ef release: bump version to 0.0.2.4 2026-04-05 19:51:19 +02:00
duffyduck cb33a20694 docs: update README with XTTS, auto-update, watchdog, TTS settings
- Architecture: Added XTTS v2 (Gaming-PC) and auto-update flow
- Diagnostic: Thinking indicator, cancel button, TTS tab, voice cloning
- App: Play button, chat search, auto-update, voice speed settings
- RVS: Auto-update APK distribution over WebSocket
- Watchdog: 2min warning → 5min doctor --fix → 8min container restart
- Roadmap: Phase 1 fully completed, updated Phase 2+3

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 19:46:16 +02:00
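The watchdog escalation (2min warning → 5min doctor --fix → 8min container restart) reduces to a threshold lookup. A minimal sketch with the stages from the commit message; the action names are placeholders:

```python
# Escalation stages in seconds, per the commit message above.
STAGES = [(120, "warn"), (300, "doctor --fix"), (480, "restart container")]

def watchdog_action(stuck_seconds):
    """Return the most severe action whose threshold has been reached."""
    action = None
    for threshold, name in STAGES:
        if stuck_seconds >= threshold:
            action = name
    return action
```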
duffyduck a242693751 feat: XTTS v2 integration, auto-update system, TTS engine abstraction
- XTTS v2: Docker setup for Gaming-PC (GPU), bridge via RVS relay
- XTTS: Voice cloning UI in Diagnostic (multi-file upload)
- XTTS: Engine selectable (Piper local vs XTTS remote) with fallback
- Auto-Update: RVS serves APK over WebSocket (no HTTP needed)
- Auto-Update: App checks version on start, prompts install
- Auto-Update: release.sh copies APK to RVS via scp
- Bridge: TTS engine abstraction (piper/xtts), config persistent
- Bridge: xtts_response handler, tts_request on-demand
- Diagnostic: TTS engine dropdown, XTTS voice panel, voice cloning
- App: Play button on ARIA messages, chat search, update service
- Wake word: Disabled LiveAudioStream (crash fix), Phase 1 placeholder
- Watchdog: Container restart after 8min stuck
- Chat backup: on-the-fly to /shared/config/chat_backup.jsonl

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 19:42:10 +02:00
duffyduck 81ca3cc7a7 Fixed ear-button crash (LiveAudioStream removed, Phase 1), play button in ARIA messages for voice playback
- [x] Chat search in the app (magnifier in the status bar)
- [x] Watchdog with container restart (2min warning → 5min doctor --fix → 8min restart), cancel button in Diagnostic chat
- [x] On-the-fly message backup (/shared/config/chat_backup.jsonl)
- [x] Split large messages sentence by sentence for TTS
- [x] RVS messages from the smartphone get through
2026-04-01 23:45:25 +02:00
duffyduck 1a32098c9e release: bump version to 0.0.2.3 2026-04-01 23:45:15 +02:00
duffyduck fa4c32270b stt always 2026-03-29 19:18:41 +02:00
duffyduck 9c43b875f4 release: bump version to 0.0.2.2 2026-03-29 19:04:31 +02:00
duffyduck 63560e290b two speed 2026-03-29 19:03:40 +02:00
duffyduck 1ab8a6a2fe added speed config for voice 2026-03-29 18:50:09 +02:00
duffyduck a2c0196e05 release: bump version to 0.0.2.1 2026-03-29 18:49:37 +02:00
duffyduck 680f7a64e2 split sentences 2026-03-29 18:42:24 +02:00
duffyduck 4893616a5a playback issue 2026-03-29 18:36:00 +02:00
duffyduck 04e8c0245d voice settings permanent 2026-03-29 18:23:31 +02:00
duffyduck 10cefaf1cd changed connection model 2026-03-29 18:12:26 +02:00
duffyduck adbb1fe80a changed docker file 2026-03-29 17:46:27 +02:00
duffyduck 79c50aedcc release: bump version to 0.0.2.0 2026-03-29 17:42:23 +02:00
duffyduck eb72b35e23 added voice settings in android app and diagnostic, highlight trigger in app and diagnostic
changed voices
2026-03-29 17:41:28 +02:00
duffyduck bbd02d46a6 changed issue md 2026-03-29 17:28:40 +02:00
duffyduck 3d3c8ce973 fixed tts format, added trigger words settings 2026-03-29 17:27:43 +02:00
duffyduck 562f929056 added settings for states and voices in diagnostic settings, added states in diagnostic, added watchdog and debug TTS to diagnostic 2026-03-29 17:12:25 +02:00
duffyduck ff03d8ce62 release: bump version to 0.0.1.9 2026-03-29 17:11:33 +02:00
duffyduck 8281131432 tts fix big pictures 2026-03-29 17:02:02 +02:00
duffyduck 8a6bd4e0e7 voice messages are sent twice to diagnostic 2026-03-29 16:50:48 +02:00
duffyduck 1b4df0565a wait for instructions on an attachment, show picture in diagnostic chat 2026-03-29 16:42:56 +02:00
duffyduck eb3692ef81 fixed aria proxy shared volume 2026-03-29 16:34:55 +02:00
duffyduck 46a9ac9f84 release: bump version to 0.0.1.8 2026-03-29 16:25:37 +02:00
duffyduck a012ec65ef filter own sender to hide own messages, as these are sent from RVS twice 2026-03-29 16:15:10 +02:00
duffyduck b86c4a0d1a fixed double diagnostic message 2026-03-29 16:12:24 +02:00
duffyduck 11de9a01b9 fixed loop error where no message was received 2026-03-29 16:08:37 +02:00
duffyduck 80dec2daf9 reset connection on every sent message 2026-03-29 16:04:43 +02:00
duffyduck da591bb53c fixed fallback issue (closed before sessions) 2026-03-29 15:58:39 +02:00
duffyduck 7545c9c823 check still open 2026-03-29 15:53:11 +02:00
duffyduck ecc3d59a8f change rvs server 2026-03-29 15:40:17 +02:00
duffyduck b8862f025b fixed, thinking in webgui 2026-03-29 15:10:41 +02:00
duffyduck db20a07b27 fixed timeout in aria-core 2026-03-29 14:56:55 +02:00
duffyduck 8dadd5c9fe release: bump version to 0.0.1.7 2026-03-29 14:26:22 +02:00
duffyduck b7cecb2a8b fixed double message (second case), fixed own messages from diagnostic not reaching ARIA 2026-03-29 14:24:13 +02:00
duffyduck 6c7b631cb7 fixed double answer 2026-03-29 14:16:53 +02:00
duffyduck 892c6403eb changed .gitignore, issue created 2026-03-29 14:09:22 +02:00
duffyduck f6834f49d4 cleanup: remove android build artifacts from git, fix .gitignore 2026-03-29 14:08:32 +02:00
duffyduck 75752eefc0 release: bump version to 0.0.1.6 2026-03-29 14:00:25 +02:00
duffyduck fbdd4274ac fixed auto download 2026-03-29 13:58:51 +02:00
duffyduck 867b03aa1e fixed message sending in bridge and file sending in android app 2026-03-29 13:36:35 +02:00
duffyduck 457b469c96 added shared volume to diagnostic, added folder picker to android app, fixed bridge for attachment uploading, hopefully fixed chat history in android app 2026-03-29 13:20:58 +02:00
duffyduck 94691f12ab added folder select dialog, fixed chat loading 2026-03-29 13:01:26 +02:00
duffyduck 5c8d11824e fixed long chats not loading to the end, saved attachments in a local folder on android; if a file is missing, re-download via shared folder over the RVS server; android app: added settings for local storage path; updated readme 2026-03-29 12:51:38 +02:00
duffyduck db053c2dbd fixed STT to milliseconds and autoscroll (third case), added shared volume for attachments, added attachments to chats, updated readme 2026-03-29 12:34:28 +02:00
duffyduck 8c1dac86d5 fixed autoscroll (second case), update received messages, resend text for information if a voice message was sent 2026-03-29 12:09:17 +02:00
duffyduck 8fb95b884f added autoscroll, fixed STT for voice messages, fixed getting answers in chat, hopefully fixed attachments 2026-03-29 11:56:13 +02:00
duffyduck f1f297b3a7 fixed voice button in APK and updated readme 2026-03-29 11:41:32 +02:00
66 changed files with 3123 additions and 13802 deletions
+5
@@ -29,9 +29,14 @@ yarn-error.log*
android/build/
android/.gradle/
android/app/build/
android/android/.gradle/
android/android/app/build/
android/android/local.properties
android/local.properties
android/package-lock.json
*.apk
*.aab
rvs/updates/*.apk
# ── Tauri / Desktop Build ───────────────────────
desktop/src-tauri/target/
+209 -19
@@ -29,11 +29,18 @@ ARIA has two roles:
┌─────────────────────────────────────────────────────────┐
│ RVS — Rendezvous-Server │
│ Node.js WebSocket Relay (Docker, data center) │
│ Pure relay — knows no tokens, just forwards
│ Relay + auto-update (APK distribution)
│ rvs/docker-compose.yml │
└───────────────────────┬─────────────────────────────────┘
│ WebSocket Tunnel
└───────────┬───────────────────────────┬─────────────────┘
│ WebSocket Tunnel │ WebSocket Tunnel
┌───────────────────────────┐
│ Gaming-PC (optional) │
│ RTX 3060, Docker+WSL2 │
│ XTTS v2 (natural voices, │
│ voice cloning) │
│ xtts/docker-compose.yml │
└───────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ ARIA-VM (Proxmox, Debian 13) — ARIA's home │
│ Base system + Docker. ARIA sets up the rest itself. │
@@ -66,13 +73,14 @@ ARIA has two roles:
└─────────────────────────────────────────────────────────┘
```
**Three separate deployments:**
**Four separate deployments:**
| What | Where | How |
|-----|----|-----|
| RVS | Data center | `cd rvs && docker compose up -d` |
| ARIA Core | Debian 13 VM | `docker compose up -d && ./aria-setup.sh` |
| Android App | Stefan's phone | Install APK, scan QR code |
| XTTS v2 (optional) | Gaming-PC (GPU) | `cd xtts && docker compose up -d` |
| Android App | Stefan's phone | Install APK (auto-update via RVS) |
---
@@ -271,9 +279,13 @@ The bridge connects the Android app with ARIA and provides local speech processing
**Message flow:**
```
App → RVS → Bridge → aria-core
aria-core → Bridge → RVS → App
→ speaker (TTS)
Text: App → RVS → Bridge → chat.send → aria-core
Audio: App → RVS → Bridge → FFmpeg → Whisper STT → chat.send → aria-core
File: App → RVS → Bridge → /shared/uploads/ → chat.send (with path) → aria-core
aria-core → answer → Gateway → Diagnostic → RVS → App
→ Bridge → Piper TTS → RVS → App (audio)
→ Bridge → speaker (local)
```
### Features
@@ -310,13 +322,19 @@ Reachable at `http://<VM-IP>:3001`. Shares the network with aria-core.
### Features
- **Status cards**: Gateway (handshake), RVS (TLS fallback), Proxy (auth)
- **Chat test**: send messages directly to ARIA (Gateway or via RVS)
- **Chat test**: send messages directly to ARIA (Gateway or via RVS), fullscreen mode
- **"ARIA is thinking..." indicator**: shows live what ARIA is currently doing (thinking, tool, writing)
- **Cancel button**: stops running requests + doctor --fix
- **Session management**: list, switch, create, and delete sessions
- **Chat history**: shown on load and session switch (read-only from JSONL)
- **TTS diagnostics tab**: test voices, check status, display errors
- **Settings**: TTS engine (Piper/XTTS), voices, speed, highlight triggers, operating modes
- **XTTS voice cloning**: upload audio samples, create your own voice
- **Claude login**: browser terminal for logging into the proxy
- **Core terminal**: shell in aria-core (openclaw CLI)
- **Container logs**: real-time logs of all containers (filtered by tab)
- **Container logs**: real-time logs of all containers (filtered by tab + pipeline)
- **SSH terminal**: direct SSH access to aria-wohnung
- **Watchdog**: detects stuck runs (2min warning → 5min doctor --fix → 8min container restart)
### Session Management
@@ -335,9 +353,14 @@ API endpoint for other services: `GET http://localhost:3001/api/session`
- Text chat with ARIA
- **Voice recording**: push-to-talk (hold) or tap-to-talk (tap, auto-stop on silence)
- **VAD (Voice Activity Detection)**: detects 1.8s of silence and stops automatically
- **Wake word**: toggle button enables continuous microphone monitoring
- **TTS playback**: ARIA answers over the speaker (Ramona/Thorsten)
- File and camera upload
- **STT (speech-to-text)**: audio is transcribed in the bridge via Whisper, the transcribed text appears in the chat
- **TTS playback**: ARIA answers over the speaker (Piper or XTTS v2)
- **Play button**: any ARIA message can be read aloud again
- **Chat search**: the magnifier in the status bar filters messages live
- **File and image upload**: images inline in the chat (tap for fullscreen), files with icon + name + size
- **Attachments**: the bridge stores them in the shared volume, ARIA can access them, re-download via RVS
- **Settings**: TTS engine, voices, speed per voice, storage location, auto-download, GPS
- **Auto-update**: checks for a new version on start, download + installation via RVS
- GPS position (optional)
- QR code scanner for token pairing
@@ -361,15 +384,79 @@ cd android
# APK is located under android/app/build/outputs/apk/release/
```
### Audio Pipeline
### Publishing a Release on Gitea
```bash
./release.sh 1.2.0
```
The script does everything in one step:
1. Sets the version numbers (package.json, build.gradle, SettingsScreen)
2. Prompts for the Gitea password (never stored anywhere)
3. Builds the release APK
4. Git commit + tag + push
5. Creates the Gitea release + uploads the APK
6. Copies the APK to the RVS server (auto-update, optional)
Prerequisite in `.env`:
```bash
GITEA_URL=https://gitea.hackersoft.de
GITEA_REPO=stefan/aria-agent
GITEA_USER=stefan
RVS_UPDATE_HOST=root@aria-rvs # Optional: for auto-update
```
### Auto-Update
On start, the app checks whether a newer version is available on the RVS.
The update flow:
1. `./release.sh 0.0.3.0` → APK is copied to the RVS (via scp)
2. Alternatively: `git pull` on the RVS server → APK in `rvs/updates/`
3. App sends `update_check` with its current version
4. RVS compares → sends `update_available`
5. App shows a dialog → download over WebSocket → installation
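The comparison in steps 3-4 can be done numerically rather than as a string compare. A sketch, assuming dotted numeric versions like `0.0.2.7` (the function names are illustrative, not the actual RVS code):

```python
def parse_version(v):
    """Turn '0.0.2.7' into (0, 0, 2, 7) for correct numeric ordering."""
    return tuple(int(x) for x in v.split("."))

def update_available(app_version, server_version):
    """True if the server's APK is newer than what the app reports."""
    return parse_version(server_version) > parse_version(app_version)
```

Tuple comparison avoids the classic string-compare pitfall where "0.0.9" would sort above "0.0.10".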
### Audio Pipeline (Voice Input)
```
App (microphone) → AAC/MP4 recording → Base64 → RVS → Bridge
Bridge: FFmpeg (16kHz PCM) → Whisper STT → text → aria-core
Bridge: STT result → RVS → App (placeholder replaced by the transcribed text)
aria-core → answer → Bridge → Piper TTS (WAV) → Base64 → RVS → App
App: Base64 → WAV → speaker
```
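The FFmpeg step in the pipeline above boils down to a resample to 16 kHz mono WAV. A sketch of the command the bridge might build; the flags are standard FFmpeg options, the paths and function name are placeholders:

```python
def ffmpeg_cmd(src, dst):
    """Build an FFmpeg argv converting an AAC/MP4 recording to
    16 kHz mono WAV, the input format Whisper expects."""
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-ar", "16000",   # 16 kHz sample rate
        "-ac", "1",       # mono
        "-f", "wav",
        dst,
    ]
```

In the bridge this list would be passed to something like `subprocess.run(...)`.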
### File Pipeline (Images & Attachments)
```
App (camera/file manager) → Base64 → RVS → Bridge
Bridge: stores in /shared/uploads/ (shared volume, visible to aria-core)
Bridge: chat.send → "Stefan sent an image: foto.jpg — located at /shared/uploads/..."
ARIA: can open and analyze the file via the Bash/Read tool
```
**Supported formats:** images (JPG, PNG), documents (PDF, DOCX, TXT), arbitrary files.
Images are displayed inline in the app, other files as an icon + filename.
**Re-download:** If the app's local cache is cleared (Settings → Attachment storage → Clear cache),
missing attachments are automatically re-fetched from the server via RVS. The storage location
is configurable in the app settings.
> **Storage tip:** The Docker volume `aria-shared` lives on ARIA's VM disk by default.
> With many uploads this can strain the VM's storage (all containers run there as well).
> Recommendation: mount the volume on a network filesystem (CephFS, NFS, GlusterFS):
> ```yaml
> # docker-compose.yml
> volumes:
> aria-shared:
> driver: local
> driver_opts:
> type: nfs
> o: addr=nas.local,rw
> device: ":/exports/aria-uploads"
> ```
> This keeps ARIA's VM disk clean and puts the uploads on dedicated storage.
---
## Data Directory — aria-data/
@@ -396,6 +483,11 @@ aria-data/
│ ├── aria.env ← voice bridge config
│ └── diag-state/ ← Diagnostic persistent state
│ (in the shared volume /shared/config/):
│ ├── voice_config.json ← TTS settings (voice, speed, engine)
│ ├── highlight_triggers.json ← highlight trigger words
│ └── chat_backup.jsonl ← message backup (on-the-fly)
└── ssh/ ← SSH keys for VM access
├── id_ed25519 ← private key (generated by aria-setup.sh)
├── id_ed25519.pub ← public key (must be in the VM's authorized_keys!)
@@ -411,7 +503,7 @@ tar -czf aria-backup-$(date +%Y%m%d).tar.gz aria-data/
## RVS — Rendezvous-Server
Runs in the data center. Pure relay — knows no tokens, stores nothing.
Runs in the data center. WebSocket relay + auto-update server.
Whoever connects with the same token lands in the same room.
```bash
@@ -419,10 +511,90 @@ cd rvs
docker compose up -d
```
**Features:**
- WebSocket relay (all message types: chat, audio, file, config, xtts, update, etc.)
- Auto-update: APK distribution to apps over WebSocket
- Heartbeat + cleanup of dead connections
**Providing the auto-update APK:**
```bash
# Put the APK in updates/ (manually or via release.sh)
cp ARIA-v0.0.3.0.apk ~/ARIA-AGENT/rvs/updates/
# RVS detects the version from the filename
```
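Detecting the version from the filename can be a simple pattern match. The `ARIA-v<version>.apk` naming scheme is taken from the example above; the function itself is an illustrative sketch, not the actual RVS code:

```python
import re

def version_from_filename(name):
    """Extract '0.0.3.0' from 'ARIA-v0.0.3.0.apk', or None if the
    filename does not follow the expected scheme."""
    m = re.match(r"ARIA-v(\d+(?:\.\d+)*)\.apk$", name)
    return m.group(1) if m else None
```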
**Multi-instance:** Multiple ARIA VMs can use the same RVS — each with its own token.
---
## XTTS v2 — GPU TTS Server (optional)
Runs on a separate machine with an NVIDIA GPU (e.g. a gaming PC with an RTX 3060).
Connects to the ARIA infrastructure via RVS — no VPN needed, works
across different networks.
### Architecture
```
Gaming-PC (Windows, RTX 3060, Docker Desktop + WSL2)
├── aria-xtts XTTS v2 GPU server (port 8020 internal)
└── aria-xtts-bridge RVS relay (receives requests, sends audio)
└── both share the ./voices/ volume for voice cloning
↕ RVS (data center, WebSocket relay)
ARIA-VM
└── aria-bridge: tts_engine="xtts" → xtts_request via RVS → waits for xtts_response
```
### Prerequisites
- Docker Desktop with WSL2 (Windows) or Docker with the NVIDIA runtime (Linux)
- NVIDIA Container Toolkit
- GPU with at least 4GB VRAM (6GB+ recommended)
- **Same RVS_TOKEN as on the ARIA-VM!**
### Setup
```bash
cd xtts
cp .env.example .env
# Fill .env with the RVS connection data (same token as the ARIA-VM!)
docker compose up -d
# The first start downloads the ~2GB model (cached afterwards)
```
**Important:** The XTTS server runs internally on port **8020** (not 8000).
The model is cached in the `xtts-models` volume and only needs to be downloaded once.
### Features
- **Natural voices**: noticeably better quality than Piper
- **Voice cloning**: your own voice from a 6-10s audio sample (~2s latency on an RTX 3060)
- **16 languages**: German, English, French, etc.
- **Fallback**: if XTTS is unreachable, the bridge automatically falls back to Piper
### Switching the TTS Engine
In the Diagnostic under Settings → Speech Output:
- **TTS active**: global on/off
- **TTS engine**: Piper (local, CPU, fast) or XTTS v2 (remote, GPU, natural)
- **Piper**: default voice, highlight voice, speed per voice
- **XTTS**: voice selection, voice cloning
### Cloning a Voice
1. Set the TTS engine to "XTTS v2"
2. "Clone voice" → upload audio files (WAV/MP3, 1-10 files, at least 6-10s in total)
3. Assign a name → "Create voice"
4. Click "Load" → the new voice appears in the selection
5. Select the voice → the config is saved automatically
> **Tip:** For best results: clean recording, a single speaker, no background noise,
> 10-30 seconds total length. Multiple short files are concatenated.
---
## Docker Volumes
| Volume | Path in Container | Purpose |
@@ -433,6 +605,8 @@ docker compose up -d
| `./aria-data/ssh` (bind) | `/root/.ssh`, `/home/node/.ssh` | SSH keys |
| `./aria-data/brain` (bind) | `/home/node/.openclaw/workspace/memory` | Memory |
| `./aria-data/skills` (bind) | `/home/node/.openclaw/workspace/skills` | Skills |
| `aria-shared` | `/shared` (Core + Bridge + Proxy + Diag) | File exchange, config, uploads |
| `./aria-data/config/diag-state` (bind) | `/data` (Diagnostic) | Persistent state (active session) |
---
@@ -487,8 +661,15 @@ docker exec aria-core ssh aria-wohnung hostname
This makes ARIA slower than the direct Claude CLI. The timeout is 900s (15 min).
- **No streaming to the app**: the app only shows the finished answer, no streaming tokens.
- **Wake word only on the VM**: the bridge listens for "ARIA" on the VM's local microphone.
The app has energy-based detection (Phase 1).
The app has energy-based detection (Phase 1). On-device "ARIA" keyword (Porcupine) is Phase 2.
- **Audio format**: the app records AAC/MP4, the bridge converts it via FFmpeg to 16kHz PCM.
- **RVS zombie connections**: WebSocket connections occasionally die without an error message.
The bridge has a ping check (5s), Diagnostic uses a fresh connection per request.
- **Limited image analysis**: images are stored in `/shared/uploads/`. ARIA can
open them via the Bash/Read tool, but Claude Vision (direct image analysis) is not yet
possible over the proxy path (`claude --print`). ARIA sees the file path, not the image.
- **File size**: large files (>5MB) can exceed WebSocket limits.
Images are compressed in the app to max 1920x1920px @ 80% quality.
---
@@ -504,8 +685,15 @@ docker exec aria-core ssh aria-wohnung hostname
- [x] Android app (chat + voice + uploads)
- [x] Tool permissions (all tools enabled)
- [x] SSH access to the VM (aria-wohnung)
- [x] Diagnostic web UI
- [x] Diagnostic web UI + settings
- [x] Session management + chat history
- [x] Voice settings (Ramona/Thorsten, speed, highlight triggers)
- [x] Sentence-by-sentence TTS for long texts
- [x] File/image upload with shared volume
- [x] Watchdog (stuck-run detection + auto-fix + container restart)
- [x] Auto-update system (APK via RVS)
- [x] Chat search, play button, cancel button
- [x] XTTS v2 integration (GPU, voice cloning, remote via RVS)
### Phase 2 — ARIA Becomes Productive
@@ -513,7 +701,8 @@ docker exec aria-core ssh aria-wohnung hostname
- [ ] Gitea integration
- [ ] Set up the VM (desktop, browser, tools)
- [ ] Heartbeat (periodic self-checks)
- [ ] Local LLM as gatekeeper (triage before the Claude call)
- [ ] Auto-compacting / memory management
### Phase 3 — Extensions
@@ -521,3 +710,4 @@ docker exec aria-core ssh aria-wohnung hostname
- [ ] Desktop client (Tauri)
- [ ] bKVM remote IT support
- [ ] Porcupine wake word (on-device "ARIA" in the app)
- [ ] Claude Vision directly (image analysis without the file-path detour)
Binary file not shown.
@@ -1,245 +0,0 @@
package org.gradle.accessors.dm;
import org.gradle.api.NonNullApi;
import org.gradle.api.artifacts.MinimalExternalModuleDependency;
import org.gradle.plugin.use.PluginDependency;
import org.gradle.api.artifacts.ExternalModuleDependencyBundle;
import org.gradle.api.artifacts.MutableVersionConstraint;
import org.gradle.api.provider.Provider;
import org.gradle.api.model.ObjectFactory;
import org.gradle.api.provider.ProviderFactory;
import org.gradle.api.internal.catalog.AbstractExternalDependencyFactory;
import org.gradle.api.internal.catalog.DefaultVersionCatalog;
import java.util.Map;
import org.gradle.api.internal.attributes.ImmutableAttributesFactory;
import org.gradle.api.internal.artifacts.dsl.CapabilityNotationParser;
import javax.inject.Inject;
/**
* A catalog of dependencies accessible via the `libs` extension.
*/
@NonNullApi
public class LibrariesForLibs extends AbstractExternalDependencyFactory {
private final AbstractExternalDependencyFactory owner = this;
private final AndroidLibraryAccessors laccForAndroidLibraryAccessors = new AndroidLibraryAccessors(owner);
private final KotlinLibraryAccessors laccForKotlinLibraryAccessors = new KotlinLibraryAccessors(owner);
private final VersionAccessors vaccForVersionAccessors = new VersionAccessors(providers, config);
private final BundleAccessors baccForBundleAccessors = new BundleAccessors(objects, providers, config, attributesFactory, capabilityNotationParser);
private final PluginAccessors paccForPluginAccessors = new PluginAccessors(providers, config);
@Inject
public LibrariesForLibs(DefaultVersionCatalog config, ProviderFactory providers, ObjectFactory objects, ImmutableAttributesFactory attributesFactory, CapabilityNotationParser capabilityNotationParser) {
super(config, providers, objects, attributesFactory, capabilityNotationParser);
}
/**
* Creates a dependency provider for gson (com.google.code.gson:gson)
* This dependency was declared in catalog libs.versions.toml
*/
public Provider<MinimalExternalModuleDependency> getGson() {
return create("gson");
}
/**
* Creates a dependency provider for guava (com.google.guava:guava)
* This dependency was declared in catalog libs.versions.toml
*/
public Provider<MinimalExternalModuleDependency> getGuava() {
return create("guava");
}
/**
* Creates a dependency provider for javapoet (com.squareup:javapoet)
* This dependency was declared in catalog libs.versions.toml
*/
public Provider<MinimalExternalModuleDependency> getJavapoet() {
return create("javapoet");
}
/**
* Creates a dependency provider for junit (junit:junit)
* This dependency was declared in catalog libs.versions.toml
*/
public Provider<MinimalExternalModuleDependency> getJunit() {
return create("junit");
}
/**
* Returns the group of libraries at android
*/
public AndroidLibraryAccessors getAndroid() {
return laccForAndroidLibraryAccessors;
}
/**
* Returns the group of libraries at kotlin
*/
public KotlinLibraryAccessors getKotlin() {
return laccForKotlinLibraryAccessors;
}
/**
* Returns the group of versions at versions
*/
public VersionAccessors getVersions() {
return vaccForVersionAccessors;
}
/**
* Returns the group of bundles at bundles
*/
public BundleAccessors getBundles() {
return baccForBundleAccessors;
}
/**
* Returns the group of plugins at plugins
*/
public PluginAccessors getPlugins() {
return paccForPluginAccessors;
}
public static class AndroidLibraryAccessors extends SubDependencyFactory {
private final AndroidGradleLibraryAccessors laccForAndroidGradleLibraryAccessors = new AndroidGradleLibraryAccessors(owner);
public AndroidLibraryAccessors(AbstractExternalDependencyFactory owner) { super(owner); }
/**
* Returns the group of libraries at android.gradle
*/
public AndroidGradleLibraryAccessors getGradle() {
return laccForAndroidGradleLibraryAccessors;
}
}
public static class AndroidGradleLibraryAccessors extends SubDependencyFactory {
public AndroidGradleLibraryAccessors(AbstractExternalDependencyFactory owner) { super(owner); }
/**
* Creates a dependency provider for plugin (com.android.tools.build:gradle)
* This dependency was declared in catalog libs.versions.toml
*/
public Provider<MinimalExternalModuleDependency> getPlugin() {
return create("android.gradle.plugin");
}
}
public static class KotlinLibraryAccessors extends SubDependencyFactory {
private final KotlinGradleLibraryAccessors laccForKotlinGradleLibraryAccessors = new KotlinGradleLibraryAccessors(owner);
public KotlinLibraryAccessors(AbstractExternalDependencyFactory owner) { super(owner); }
/**
* Returns the group of libraries at kotlin.gradle
*/
public KotlinGradleLibraryAccessors getGradle() {
return laccForKotlinGradleLibraryAccessors;
}
}
public static class KotlinGradleLibraryAccessors extends SubDependencyFactory {
public KotlinGradleLibraryAccessors(AbstractExternalDependencyFactory owner) { super(owner); }
/**
* Creates a dependency provider for plugin (org.jetbrains.kotlin:kotlin-gradle-plugin)
* This dependency was declared in catalog libs.versions.toml
*/
public Provider<MinimalExternalModuleDependency> getPlugin() {
return create("kotlin.gradle.plugin");
}
}
public static class VersionAccessors extends VersionFactory {
public VersionAccessors(ProviderFactory providers, DefaultVersionCatalog config) { super(providers, config); }
/**
* Returns the version associated to this alias: agp (8.1.1)
* If the version is a rich version and that its not expressible as a
* single version string, then an empty string is returned.
* This version was declared in catalog libs.versions.toml
*/
public Provider<String> getAgp() { return getVersion("agp"); }
/**
* Returns the version associated to this alias: gson (2.8.9)
* If the version is a rich version and that its not expressible as a
* single version string, then an empty string is returned.
* This version was declared in catalog libs.versions.toml
*/
public Provider<String> getGson() { return getVersion("gson"); }
/**
* Returns the version associated to this alias: guava (31.0.1-jre)
* If the version is a rich version and that its not expressible as a
* single version string, then an empty string is returned.
* This version was declared in catalog libs.versions.toml
*/
public Provider<String> getGuava() { return getVersion("guava"); }
/**
* Returns the version associated to this alias: javapoet (1.13.0)
* If the version is a rich version and that its not expressible as a
* single version string, then an empty string is returned.
* This version was declared in catalog libs.versions.toml
*/
public Provider<String> getJavapoet() { return getVersion("javapoet"); }
/**
* Returns the version associated to this alias: junit (4.13.2)
* If the version is a rich version and that its not expressible as a
* single version string, then an empty string is returned.
* This version was declared in catalog libs.versions.toml
*/
public Provider<String> getJunit() { return getVersion("junit"); }
/**
* Returns the version associated to this alias: kotlin (1.8.0)
* If the version is a rich version and that its not expressible as a
* single version string, then an empty string is returned.
* This version was declared in catalog libs.versions.toml
*/
public Provider<String> getKotlin() { return getVersion("kotlin"); }
}
public static class BundleAccessors extends BundleFactory {
public BundleAccessors(ObjectFactory objects, ProviderFactory providers, DefaultVersionCatalog config, ImmutableAttributesFactory attributesFactory, CapabilityNotationParser capabilityNotationParser) { super(objects, providers, config, attributesFactory, capabilityNotationParser); }
}
public static class PluginAccessors extends PluginFactory {
private final KotlinPluginAccessors paccForKotlinPluginAccessors = new KotlinPluginAccessors(providers, config);
public PluginAccessors(ProviderFactory providers, DefaultVersionCatalog config) { super(providers, config); }
/**
* Returns the group of plugins at plugins.kotlin
*/
public KotlinPluginAccessors getKotlin() {
return paccForKotlinPluginAccessors;
}
}
public static class KotlinPluginAccessors extends PluginFactory {
public KotlinPluginAccessors(ProviderFactory providers, DefaultVersionCatalog config) { super(providers, config); }
/**
* Creates a plugin provider for kotlin.jvm to the plugin id 'org.jetbrains.kotlin.jvm'
* This plugin was declared in catalog libs.versions.toml
*/
public Provider<PluginDependency> getJvm() { return createPlugin("kotlin.jvm"); }
}
}
@@ -1,298 +0,0 @@
package org.gradle.accessors.dm;
import org.gradle.api.NonNullApi;
import org.gradle.api.artifacts.MinimalExternalModuleDependency;
import org.gradle.plugin.use.PluginDependency;
import org.gradle.api.artifacts.ExternalModuleDependencyBundle;
import org.gradle.api.artifacts.MutableVersionConstraint;
import org.gradle.api.provider.Provider;
import org.gradle.api.model.ObjectFactory;
import org.gradle.api.provider.ProviderFactory;
import org.gradle.api.internal.catalog.AbstractExternalDependencyFactory;
import org.gradle.api.internal.catalog.DefaultVersionCatalog;
import java.util.Map;
import org.gradle.api.internal.attributes.ImmutableAttributesFactory;
import org.gradle.api.internal.artifacts.dsl.CapabilityNotationParser;
import javax.inject.Inject;
/**
* A catalog of dependencies accessible via the `libs` extension.
*/
@NonNullApi
public class LibrariesForLibsInPluginsBlock extends AbstractExternalDependencyFactory {
private final AbstractExternalDependencyFactory owner = this;
private final AndroidLibraryAccessors laccForAndroidLibraryAccessors = new AndroidLibraryAccessors(owner);
private final KotlinLibraryAccessors laccForKotlinLibraryAccessors = new KotlinLibraryAccessors(owner);
private final VersionAccessors vaccForVersionAccessors = new VersionAccessors(providers, config);
private final BundleAccessors baccForBundleAccessors = new BundleAccessors(objects, providers, config, attributesFactory, capabilityNotationParser);
private final PluginAccessors paccForPluginAccessors = new PluginAccessors(providers, config);
@Inject
public LibrariesForLibsInPluginsBlock(DefaultVersionCatalog config, ProviderFactory providers, ObjectFactory objects, ImmutableAttributesFactory attributesFactory, CapabilityNotationParser capabilityNotationParser) {
super(config, providers, objects, attributesFactory, capabilityNotationParser);
}
/**
* Creates a dependency provider for gson (com.google.code.gson:gson)
* This dependency was declared in catalog libs.versions.toml
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public Provider<MinimalExternalModuleDependency> getGson() {
org.gradle.internal.deprecation.DeprecationLogger.deprecateBehaviour("Accessing libraries or bundles from version catalogs in the plugins block.").withAdvice("Only use versions or plugins from catalogs in the plugins block.").willBeRemovedInGradle9().withUpgradeGuideSection(8, "kotlin_dsl_deprecated_catalogs_plugins_block").nagUser();
return create("gson");
}
/**
* Creates a dependency provider for guava (com.google.guava:guava)
* This dependency was declared in catalog libs.versions.toml
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public Provider<MinimalExternalModuleDependency> getGuava() {
org.gradle.internal.deprecation.DeprecationLogger.deprecateBehaviour("Accessing libraries or bundles from version catalogs in the plugins block.").withAdvice("Only use versions or plugins from catalogs in the plugins block.").willBeRemovedInGradle9().withUpgradeGuideSection(8, "kotlin_dsl_deprecated_catalogs_plugins_block").nagUser();
return create("guava");
}
/**
* Creates a dependency provider for javapoet (com.squareup:javapoet)
* This dependency was declared in catalog libs.versions.toml
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public Provider<MinimalExternalModuleDependency> getJavapoet() {
org.gradle.internal.deprecation.DeprecationLogger.deprecateBehaviour("Accessing libraries or bundles from version catalogs in the plugins block.").withAdvice("Only use versions or plugins from catalogs in the plugins block.").willBeRemovedInGradle9().withUpgradeGuideSection(8, "kotlin_dsl_deprecated_catalogs_plugins_block").nagUser();
return create("javapoet");
}
/**
* Creates a dependency provider for junit (junit:junit)
* This dependency was declared in catalog libs.versions.toml
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public Provider<MinimalExternalModuleDependency> getJunit() {
org.gradle.internal.deprecation.DeprecationLogger.deprecateBehaviour("Accessing libraries or bundles from version catalogs in the plugins block.").withAdvice("Only use versions or plugins from catalogs in the plugins block.").willBeRemovedInGradle9().withUpgradeGuideSection(8, "kotlin_dsl_deprecated_catalogs_plugins_block").nagUser();
return create("junit");
}
/**
* Returns the group of libraries at android
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public AndroidLibraryAccessors getAndroid() {
org.gradle.internal.deprecation.DeprecationLogger.deprecateBehaviour("Accessing libraries or bundles from version catalogs in the plugins block.").withAdvice("Only use versions or plugins from catalogs in the plugins block.").willBeRemovedInGradle9().withUpgradeGuideSection(8, "kotlin_dsl_deprecated_catalogs_plugins_block").nagUser();
return laccForAndroidLibraryAccessors;
}
/**
* Returns the group of libraries at kotlin
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public KotlinLibraryAccessors getKotlin() {
org.gradle.internal.deprecation.DeprecationLogger.deprecateBehaviour("Accessing libraries or bundles from version catalogs in the plugins block.").withAdvice("Only use versions or plugins from catalogs in the plugins block.").willBeRemovedInGradle9().withUpgradeGuideSection(8, "kotlin_dsl_deprecated_catalogs_plugins_block").nagUser();
return laccForKotlinLibraryAccessors;
}
/**
* Returns the group of versions at versions
*/
public VersionAccessors getVersions() {
return vaccForVersionAccessors;
}
/**
* Returns the group of bundles at bundles
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public BundleAccessors getBundles() {
org.gradle.internal.deprecation.DeprecationLogger.deprecateBehaviour("Accessing libraries or bundles from version catalogs in the plugins block.").withAdvice("Only use versions or plugins from catalogs in the plugins block.").willBeRemovedInGradle9().withUpgradeGuideSection(8, "kotlin_dsl_deprecated_catalogs_plugins_block").nagUser();
return baccForBundleAccessors;
}
/**
* Returns the group of plugins at plugins
*/
public PluginAccessors getPlugins() {
return paccForPluginAccessors;
}
/**
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public static class AndroidLibraryAccessors extends SubDependencyFactory {
private final AndroidGradleLibraryAccessors laccForAndroidGradleLibraryAccessors = new AndroidGradleLibraryAccessors(owner);
public AndroidLibraryAccessors(AbstractExternalDependencyFactory owner) { super(owner); }
/**
* Returns the group of libraries at android.gradle
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public AndroidGradleLibraryAccessors getGradle() {
org.gradle.internal.deprecation.DeprecationLogger.deprecateBehaviour("Accessing libraries or bundles from version catalogs in the plugins block.").withAdvice("Only use versions or plugins from catalogs in the plugins block.").willBeRemovedInGradle9().withUpgradeGuideSection(8, "kotlin_dsl_deprecated_catalogs_plugins_block").nagUser();
return laccForAndroidGradleLibraryAccessors;
}
}
/**
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public static class AndroidGradleLibraryAccessors extends SubDependencyFactory {
public AndroidGradleLibraryAccessors(AbstractExternalDependencyFactory owner) { super(owner); }
/**
* Creates a dependency provider for plugin (com.android.tools.build:gradle)
* This dependency was declared in catalog libs.versions.toml
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public Provider<MinimalExternalModuleDependency> getPlugin() {
org.gradle.internal.deprecation.DeprecationLogger.deprecateBehaviour("Accessing libraries or bundles from version catalogs in the plugins block.").withAdvice("Only use versions or plugins from catalogs in the plugins block.").willBeRemovedInGradle9().withUpgradeGuideSection(8, "kotlin_dsl_deprecated_catalogs_plugins_block").nagUser();
return create("android.gradle.plugin");
}
}
/**
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public static class KotlinLibraryAccessors extends SubDependencyFactory {
private final KotlinGradleLibraryAccessors laccForKotlinGradleLibraryAccessors = new KotlinGradleLibraryAccessors(owner);
public KotlinLibraryAccessors(AbstractExternalDependencyFactory owner) { super(owner); }
/**
* Returns the group of libraries at kotlin.gradle
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public KotlinGradleLibraryAccessors getGradle() {
org.gradle.internal.deprecation.DeprecationLogger.deprecateBehaviour("Accessing libraries or bundles from version catalogs in the plugins block.").withAdvice("Only use versions or plugins from catalogs in the plugins block.").willBeRemovedInGradle9().withUpgradeGuideSection(8, "kotlin_dsl_deprecated_catalogs_plugins_block").nagUser();
return laccForKotlinGradleLibraryAccessors;
}
}
/**
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public static class KotlinGradleLibraryAccessors extends SubDependencyFactory {
public KotlinGradleLibraryAccessors(AbstractExternalDependencyFactory owner) { super(owner); }
/**
* Creates a dependency provider for plugin (org.jetbrains.kotlin:kotlin-gradle-plugin)
* This dependency was declared in catalog libs.versions.toml
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public Provider<MinimalExternalModuleDependency> getPlugin() {
org.gradle.internal.deprecation.DeprecationLogger.deprecateBehaviour("Accessing libraries or bundles from version catalogs in the plugins block.").withAdvice("Only use versions or plugins from catalogs in the plugins block.").willBeRemovedInGradle9().withUpgradeGuideSection(8, "kotlin_dsl_deprecated_catalogs_plugins_block").nagUser();
return create("kotlin.gradle.plugin");
}
}
public static class VersionAccessors extends VersionFactory {
public VersionAccessors(ProviderFactory providers, DefaultVersionCatalog config) { super(providers, config); }
/**
* Returns the version associated to this alias: agp (8.1.1)
* If the version is a rich version and that its not expressible as a
* single version string, then an empty string is returned.
* This version was declared in catalog libs.versions.toml
*/
public Provider<String> getAgp() { return getVersion("agp"); }
/**
* Returns the version associated to this alias: gson (2.8.9)
* If the version is a rich version and that its not expressible as a
* single version string, then an empty string is returned.
* This version was declared in catalog libs.versions.toml
*/
public Provider<String> getGson() { return getVersion("gson"); }
/**
* Returns the version associated to this alias: guava (31.0.1-jre)
* If the version is a rich version and that its not expressible as a
* single version string, then an empty string is returned.
* This version was declared in catalog libs.versions.toml
*/
public Provider<String> getGuava() { return getVersion("guava"); }
/**
* Returns the version associated to this alias: javapoet (1.13.0)
* If the version is a rich version and that its not expressible as a
* single version string, then an empty string is returned.
* This version was declared in catalog libs.versions.toml
*/
public Provider<String> getJavapoet() { return getVersion("javapoet"); }
/**
* Returns the version associated to this alias: junit (4.13.2)
* If the version is a rich version and that its not expressible as a
* single version string, then an empty string is returned.
* This version was declared in catalog libs.versions.toml
*/
public Provider<String> getJunit() { return getVersion("junit"); }
/**
* Returns the version associated to this alias: kotlin (1.8.0)
* If the version is a rich version and that its not expressible as a
* single version string, then an empty string is returned.
* This version was declared in catalog libs.versions.toml
*/
public Provider<String> getKotlin() { return getVersion("kotlin"); }
}
/**
* @deprecated Will be removed in Gradle 9.0.
*/
@Deprecated
public static class BundleAccessors extends BundleFactory {
public BundleAccessors(ObjectFactory objects, ProviderFactory providers, DefaultVersionCatalog config, ImmutableAttributesFactory attributesFactory, CapabilityNotationParser capabilityNotationParser) { super(objects, providers, config, attributesFactory, capabilityNotationParser); }
}
public static class PluginAccessors extends PluginFactory {
private final KotlinPluginAccessors paccForKotlinPluginAccessors = new KotlinPluginAccessors(providers, config);
public PluginAccessors(ProviderFactory providers, DefaultVersionCatalog config) { super(providers, config); }
/**
* Returns the group of plugins at plugins.kotlin
*/
public KotlinPluginAccessors getKotlin() {
return paccForKotlinPluginAccessors;
}
}
public static class KotlinPluginAccessors extends PluginFactory {
public KotlinPluginAccessors(ProviderFactory providers, DefaultVersionCatalog config) { super(providers, config); }
/**
* Creates a plugin provider for kotlin.jvm to the plugin id 'org.jetbrains.kotlin.jvm'
* This plugin was declared in catalog libs.versions.toml
*/
public Provider<PluginDependency> getJvm() { return createPlugin("kotlin.jvm"); }
}
}
@@ -1,2 +0,0 @@
#Sun Mar 29 11:32:18 CEST 2026
gradle.version=8.3
Binary file not shown.
+2 -2
@@ -79,8 +79,8 @@ android {
applicationId "com.ariacockpit"
minSdkVersion rootProject.ext.minSdkVersion
targetSdkVersion rootProject.ext.targetSdkVersion
versionCode 1
versionName "1.0"
versionCode 207
versionName "0.0.2.7"
// Fallback for libraries with product flavors
missingDimensionStrategy 'react-native-camera', 'general'
}
@@ -1,97 +0,0 @@
package com.facebook.react;
import android.app.Application;
import android.content.Context;
import android.content.res.Resources;
import com.facebook.react.ReactPackage;
import com.facebook.react.shell.MainPackageConfig;
import com.facebook.react.shell.MainReactPackage;
import java.util.Arrays;
import java.util.ArrayList;
// react-native-screens
import com.swmansion.rnscreens.RNScreensPackage;
// react-native-safe-area-context
import com.th3rdwave.safeareacontext.SafeAreaContextPackage;
// react-native-document-picker
import com.reactnativedocumentpicker.RNDocumentPickerPackage;
// react-native-sound
import com.zmxv.RNSound.RNSoundPackage;
// @react-native-community/geolocation
import com.reactnativecommunity.geolocation.GeolocationPackage;
// react-native-image-picker
import com.imagepicker.ImagePickerPackage;
// react-native-permissions
import com.zoontek.rnpermissions.RNPermissionsPackage;
// react-native-camera-kit
import com.rncamerakit.RNCameraKitPackage;
// @react-native-async-storage/async-storage
import com.reactnativecommunity.asyncstorage.AsyncStoragePackage;
// react-native-fs
import com.rnfs.RNFSPackage;
// react-native-audio-recorder-player
import com.dooboolab.audiorecorderplayer.RNAudioRecorderPlayerPackage;
// react-native-live-audio-stream
import com.imxiqi.rnliveaudiostream.RNLiveAudioStreamPackage;
public class PackageList {
private Application application;
private ReactNativeHost reactNativeHost;
private MainPackageConfig mConfig;
public PackageList(ReactNativeHost reactNativeHost) {
this(reactNativeHost, null);
}
public PackageList(Application application) {
this(application, null);
}
public PackageList(ReactNativeHost reactNativeHost, MainPackageConfig config) {
this.reactNativeHost = reactNativeHost;
mConfig = config;
}
public PackageList(Application application, MainPackageConfig config) {
this.reactNativeHost = null;
this.application = application;
mConfig = config;
}
private ReactNativeHost getReactNativeHost() {
return this.reactNativeHost;
}
private Resources getResources() {
return this.getApplication().getResources();
}
private Application getApplication() {
if (this.reactNativeHost == null) return this.application;
return this.reactNativeHost.getApplication();
}
private Context getApplicationContext() {
return this.getApplication().getApplicationContext();
}
public ArrayList<ReactPackage> getPackages() {
return new ArrayList<>(Arrays.<ReactPackage>asList(
new MainReactPackage(mConfig),
new RNScreensPackage(),
new SafeAreaContextPackage(),
new RNDocumentPickerPackage(),
new RNSoundPackage(),
new GeolocationPackage(),
new ImagePickerPackage(),
new RNPermissionsPackage(),
new RNCameraKitPackage(),
new AsyncStoragePackage(),
new RNFSPackage(),
new RNAudioRecorderPlayerPackage(),
new RNLiveAudioStreamPackage()
));
}
}
@@ -1,16 +0,0 @@
/**
* Automatically generated file. DO NOT MODIFY
*/
package com.ariacockpit;
public final class BuildConfig {
public static final boolean DEBUG = false;
public static final String APPLICATION_ID = "com.ariacockpit";
public static final String BUILD_TYPE = "release";
public static final int VERSION_CODE = 1;
public static final String VERSION_NAME = "1.0";
// Field from default config.
public static final boolean IS_HERMES_ENABLED = true;
// Field from default config.
public static final boolean IS_NEW_ARCHITECTURE_ENABLED = false;
}
@@ -1,24 +0,0 @@
{
"schemaVersion": "1.1.0",
"buildSystem": "Gradle",
"buildSystemVersion": "8.3",
"buildPlugin": "org.jetbrains.kotlin.gradle.plugin.KotlinAndroidPluginWrapper",
"buildPluginVersion": "1.8.0",
"projectSettings": {
"isHmppEnabled": true,
"isCompatibilityMetadataVariantEnabled": false,
"isKPMEnabled": false
},
"projectTargets": [
{
"target": "org.jetbrains.kotlin.gradle.plugin.mpp.KotlinAndroidTarget",
"platformType": "androidJvm",
"extras": {
"android": {
"sourceCompatibility": "17",
"targetCompatibility": "17"
}
}
}
]
}
-12840
File diff suppressed because it is too large.
+2 -3
@@ -1,6 +1,6 @@
{
"name": "aria-cockpit",
"version": "0.1.0",
"version": "0.0.2.7",
"private": true,
"scripts": {
"android": "react-native run-android",
@@ -24,8 +24,7 @@
"react-native-camera-kit": "^13.0.0",
"@react-native-async-storage/async-storage": "^1.21.0",
"react-native-fs": "^2.20.0",
"react-native-audio-recorder-player": "^3.6.7",
"react-native-live-audio-stream": "^1.1.1"
"react-native-audio-recorder-player": "^3.6.7"
},
"devDependencies": {
"typescript": "^5.3.3",
+7 -4
@@ -17,6 +17,7 @@ import {
import DocumentPicker, {
DocumentPickerResponse,
} from 'react-native-document-picker';
import RNFS from 'react-native-fs';
// --- Types ---
@@ -74,15 +75,17 @@ const FileUpload: React.FC<FileUploadProps> = ({ onFileSelected, onCancel }) =>
setLoading(true);
try {
// In production: read the file and convert it to Base64
// const base64 = await RNFS.readFile(selectedFile.fileCopyUri || selectedFile.uri, 'base64');
const base64Placeholder = '';
// Read the file and convert it to Base64
const filePath = selectedFile.fileCopyUri || selectedFile.uri;
// Strip the URI scheme for RNFS (file:// → absolute path)
const cleanPath = filePath.replace('file://', '');
const base64 = await RNFS.readFile(cleanPath, 'base64');
const fileData: FileData = {
name: selectedFile.name || 'unbenannt',
type: selectedFile.type || 'application/octet-stream',
size: selectedFile.size || 0,
base64: base64Placeholder,
base64,
uri: selectedFile.uri,
};
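The hunk above resolves a readable path from the picker result and strips the `file://` scheme before handing it to RNFS. A minimal standalone sketch of that path resolution (the function name `resolveReadablePath` is a hypothetical extraction, not part of the app's code):

```typescript
// Hypothetical standalone version of the path handling above:
// prefer fileCopyUri, fall back to uri, then strip the file:// scheme,
// since RNFS.readFile expects an absolute path rather than a URI.
function resolveReadablePath(fileCopyUri: string | null, uri: string): string {
  const filePath = fileCopyUri || uri;
  return filePath.replace('file://', '');
}

resolveReadablePath(null, 'file:///data/user/0/app/doc.pdf'); // → '/data/user/0/app/doc.pdf'
```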
+417 -61
@@ -15,12 +15,15 @@ import {
KeyboardAvoidingView,
Platform,
StyleSheet,
Image,
Modal,
} from 'react-native';
import AsyncStorage from '@react-native-async-storage/async-storage';
import RNFS from 'react-native-fs';
import rvs, { RVSMessage, ConnectionState } from '../services/rvs';
import audioService from '../services/audio';
import wakeWordService from '../services/wakeword';
import updateService from '../services/updater';
import VoiceButton from '../components/VoiceButton';
import FileUpload, { FileData } from '../components/FileUpload';
import CameraUpload, { PhotoData } from '../components/CameraUpload';
@@ -33,6 +36,9 @@ interface Attachment {
type: 'image' | 'file' | 'audio';
name: string;
size?: number;
uri?: string; // Local path (file://) for display
mimeType?: string;
serverPath?: string; // Path on the server (/shared/uploads/...) for re-download
}
interface ChatMessage {
@@ -47,6 +53,33 @@ interface ChatMessage {
const CHAT_STORAGE_KEY = 'aria_chat_messages';
const MAX_STORED_MESSAGES = 500;
const DEFAULT_ATTACHMENT_DIR = `${RNFS.DocumentDirectoryPath}/chat_attachments`;
const STORAGE_PATH_KEY = 'aria_attachment_storage_path';
async function getAttachmentDir(): Promise<string> {
try {
const saved = await AsyncStorage.getItem(STORAGE_PATH_KEY);
return saved || DEFAULT_ATTACHMENT_DIR;
} catch { return DEFAULT_ATTACHMENT_DIR; }
}
/** Saves Base64 data as a file and returns a file:// path */
async function persistAttachment(base64Data: string, msgId: string, fileName: string): Promise<string> {
const cacheDir = await getAttachmentDir();
await RNFS.mkdir(cacheDir);
// File extension from the original file name, or a fallback
const ext = fileName.includes('.') ? fileName.split('.').pop() : 'bin';
const safeName = `${msgId}_${fileName.replace(/[^a-zA-Z0-9._-]/g, '_')}`;
const filePath = `${cacheDir}/${safeName}`;
await RNFS.writeFile(filePath, base64Data, 'base64');
return `file://${filePath}`;
}
/** Checks whether a local file still exists */
async function checkFileExists(uri: string): Promise<boolean> {
if (!uri || !uri.startsWith('file://')) return false;
return RNFS.exists(uri.replace('file://', ''));
}
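The `persistAttachment` helper above builds a filesystem-safe file name from the message ID and the original name before writing via RNFS. A standalone sketch of just that sanitization step (`buildSafeName` is a hypothetical extraction; the RNFS write is omitted):

```typescript
// Sketch of the name sanitization inside persistAttachment above:
// anything outside [a-zA-Z0-9._-] is replaced so the name is safe on disk.
function buildSafeName(msgId: string, fileName: string): string {
  return `${msgId}_${fileName.replace(/[^a-zA-Z0-9._-]/g, '_')}`;
}

buildSafeName('msg_7', 'a b.png'); // → 'msg_7_a_b.png'
```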
// --- Component ---
@@ -58,6 +91,9 @@ const ChatScreen: React.FC = () => {
const [showCameraUpload, setShowCameraUpload] = useState(false);
const [gpsEnabled, setGpsEnabled] = useState(false);
const [wakeWordActive, setWakeWordActive] = useState(false);
const [fullscreenImage, setFullscreenImage] = useState<string | null>(null);
const [searchQuery, setSearchQuery] = useState('');
const [searchVisible, setSearchVisible] = useState(false);
const flatListRef = useRef<FlatList>(null);
const messageIdCounter = useRef(0);
@@ -69,34 +105,125 @@ const ChatScreen: React.FC = () => {
};
// Load chat history from AsyncStorage
const isInitialLoad = useRef(true);
useEffect(() => {
const loadMessages = async () => {
try {
const stored = await AsyncStorage.getItem(CHAT_STORAGE_KEY);
console.log('[Chat] AsyncStorage geladen:', stored ? `${stored.length} Bytes` : 'leer');
if (stored) {
const parsed: ChatMessage[] = JSON.parse(stored);
setMessages(parsed);
// Set the ID counter to the highest value to avoid collisions
const maxId = parsed.reduce((max, msg) => {
const num = parseInt(msg.id.split('_').pop() || '0', 10);
return num > max ? num : max;
}, 0);
messageIdCounter.current = maxId;
if (Array.isArray(parsed) && parsed.length > 0) {
console.log(`[Chat] ${parsed.length} Nachrichten geladen`);
setMessages(parsed);
const maxId = parsed.reduce((max, msg) => {
const num = parseInt(msg.id.split('_').pop() || '0', 10);
return num > max ? num : max;
}, 0);
messageIdCounter.current = maxId;
}
}
} catch (err) {
console.error('[Chat] Fehler beim Laden des Verlaufs:', err);
} finally {
isInitialLoad.current = false;
}
};
loadMessages();
loadMessages().then(async () => {
// Auto re-download: fetch missing attachments from the server (if enabled)
const autoDownload = await AsyncStorage.getItem('aria_auto_download');
if (autoDownload === 'false') return;
setTimeout(() => {
setMessages(prev => {
const missing: {id: string, serverPath: string}[] = [];
for (const msg of prev) {
for (const att of msg.attachments || []) {
if (att.serverPath && !att.uri) {
missing.push({ id: msg.id, serverPath: att.serverPath });
}
}
}
if (missing.length > 0) {
console.log(`[Chat] ${missing.length} fehlende Anhaenge — lade nach...`);
for (const m of missing) {
rvs.send('file_request' as any, { serverPath: m.serverPath, requestId: m.id });
}
}
return prev;
});
}, 2000); // Wait until RVS is connected
});
}, []);
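The load effect above restores the ID counter from the highest numeric suffix of the stored message IDs so newly generated IDs cannot collide. A pure-function sketch of that computation (`recoverMaxId` and the reduced `StoredMessage` shape are illustrative, not names from the app):

```typescript
// Standalone sketch of the ID-counter recovery above: take the numeric
// suffix after the last '_' in each ID and keep the maximum.
interface StoredMessage { id: string }

function recoverMaxId(messages: StoredMessage[]): number {
  return messages.reduce((max, msg) => {
    const num = parseInt(msg.id.split('_').pop() || '0', 10);
    return num > max ? num : max;
  }, 0);
}

recoverMaxId([{ id: 'msg_3' }, { id: 'msg_12' }, { id: 'msg_7' }]); // → 12
```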
// Subscribe to RVS messages
useEffect(() => {
const unsubMessage = rvs.onMessage((message: RVSMessage) => {
// file_saved: bridge reports the server path; store it on the attachment for re-download
if (message.type === 'file_saved') {
const serverPath = (message.payload.serverPath as string) || '';
const name = (message.payload.name as string) || '';
if (serverPath) {
setMessages(prev => prev.map(m => ({
...m,
attachments: m.attachments?.map(a =>
a.name === name && !a.serverPath ? { ...a, serverPath } : a
),
})));
}
return;
}
// file_response: re-download from the server; save locally
if (message.type === 'file_response') {
const reqId = (message.payload.requestId as string) || '';
const b64 = (message.payload.base64 as string) || '';
const serverPath = (message.payload.serverPath as string) || '';
if (b64 && reqId) {
const fileName = (message.payload.name as string) || 'download';
persistAttachment(b64, reqId, fileName).then(filePath => {
setMessages(prev => prev.map(m => ({
...m,
attachments: m.attachments?.map(a =>
a.serverPath === serverPath ? { ...a, uri: filePath } : a
),
})));
}).catch(() => {});
}
return;
}
if (message.type === 'chat') {
// Only show messages from ARIA; own messages are added locally
const sender = (message.payload.sender as string) || '';
if (sender === 'user' || sender === 'diagnostic') return;
// STT result: write the transcribed text into the voice bubble
if (sender === 'stt') {
const sttText = (message.payload.text as string) || '';
if (sttText) {
setMessages(prev => prev.map(m =>
m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')
? { ...m, text: `\uD83C\uDFA4 ${sttText}` }
: m
));
}
return;
}
// Ignore the app's own messages (they are added locally)
if (sender === 'user') return;
// Show diagnostic messages as user messages
if (sender === 'diagnostic') {
const diagText = (message.payload.text as string) || '';
if (diagText) {
setMessages(prev => [...prev, {
id: nextId(),
sender: 'user',
text: diagText,
timestamp: message.timestamp,
}]);
}
return;
}
const text = (message.payload.text as string) || '';
const ts = message.timestamp;
@@ -136,6 +263,16 @@ const ChatScreen: React.FC = () => {
};
}, []);
// Auto-update: check on app start
useEffect(() => {
const unsubUpdate = updateService.onUpdateAvailable((info) => {
updateService.promptUpdate(info);
});
// Check after 5 s (RVS must be connected first)
const timer = setTimeout(() => updateService.checkForUpdate(), 5000);
return () => { unsubUpdate(); clearTimeout(timer); };
}, []);
// Wake word: on "ARIA" detection, start auto-recording
useEffect(() => {
const unsubWake = wakeWordService.onWakeWord(async () => {
@@ -159,7 +296,7 @@ const ChatScreen: React.FC = () => {
const userMsg: ChatMessage = {
id: nextId(),
sender: 'user',
text: '[Sprachnachricht]',
text: '🎙 Spracheingabe wird verarbeitet...',
timestamp: Date.now(),
attachments: [{ type: 'audio', name: 'Sprachaufnahme' }],
};
@@ -192,23 +329,52 @@ const ChatScreen: React.FC = () => {
}
}, [wakeWordActive]);
// Save chat history to AsyncStorage (last N messages)
// Save chat history to AsyncStorage (debounced, only after initial load)
const saveTimer = useRef<ReturnType<typeof setTimeout> | null>(null);
useEffect(() => {
if (messages.length === 0) return;
const toStore = messages.slice(-MAX_STORED_MESSAGES);
AsyncStorage.setItem(CHAT_STORAGE_KEY, JSON.stringify(toStore)).catch(err =>
console.error('[Chat] Fehler beim Speichern:', err),
);
if (messages.length === 0 || isInitialLoad.current) return;
// Debounce: wait 1 s so persistAttachment can finish
if (saveTimer.current) clearTimeout(saveTimer.current);
saveTimer.current = setTimeout(() => {
const toStore = messages.slice(-MAX_STORED_MESSAGES).map(msg => ({
...msg,
attachments: msg.attachments?.map(att => ({
...att,
// Only persist file:// URIs; filter out data: URIs (too large for AsyncStorage)
uri: att.uri?.startsWith('file://') ? att.uri : undefined,
})),
}));
const json = JSON.stringify(toStore);
// Safety check: do not save if >4 MB (AsyncStorage limit)
if (json.length > 4 * 1024 * 1024) {
console.warn('[Chat] Speicher zu gross, kuerze auf 100 Nachrichten');
const shortened = JSON.stringify(toStore.slice(-100));
AsyncStorage.setItem(CHAT_STORAGE_KEY, shortened).catch(() => {});
} else {
AsyncStorage.setItem(CHAT_STORAGE_KEY, json).catch(err =>
console.error('[Chat] Speichern fehlgeschlagen:', err),
);
}
}, 1000);
return () => { if (saveTimer.current) clearTimeout(saveTimer.current); };
}, [messages]);
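The save effect above serializes the trimmed history and falls back to a shorter slice when the JSON would exceed the storage cap. A pure-function sketch of that guard, using the same 4 MB cap and 100-message fallback as the effect (`serializeHistory` is an illustrative name, not from the app):

```typescript
// Sketch of the storage size guard above: serialize the last maxMessages
// entries; if the JSON is larger than maxBytes, retry with only the last
// fallbackCount entries.
function serializeHistory<T>(
  messages: T[],
  maxMessages: number,
  maxBytes: number,
  fallbackCount: number,
): string {
  const toStore = messages.slice(-maxMessages);
  const json = JSON.stringify(toStore);
  return json.length > maxBytes ? JSON.stringify(toStore.slice(-fallbackCount)) : json;
}

serializeHistory([1, 2, 3, 4], 3, 1000, 1); // → '[2,3,4]'
```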
// Auto-scroll on new messages
useEffect(() => {
if (messages.length > 0) {
setTimeout(() => {
flatListRef.current?.scrollToEnd({ animated: true });
}, 100);
// Auto-scroll is driven by the FlatList's onContentSizeChange
const shouldAutoScroll = useRef(true);
const handleContentSizeChange = useCallback(() => {
if (shouldAutoScroll.current) {
flatListRef.current?.scrollToEnd({ animated: false });
}
}, [messages]);
}, []);
const handleScrollBeginDrag = useCallback(() => {
shouldAutoScroll.current = false;
}, []);
const handleScrollEndDrag = useCallback((e: any) => {
// Re-enable auto-scroll when the user is at the very bottom
const { contentOffset, contentSize, layoutMeasurement } = e.nativeEvent;
const isAtBottom = contentOffset.y + layoutMeasurement.height >= contentSize.height - 50;
shouldAutoScroll.current = isAtBottom;
}, []);
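The handlers above re-enable auto-scroll only when the drag ends within 50 px of the bottom of the content. The bottom check can be sketched as a pure function (the name `isAtBottom` matches the local variable in the handler; extracting it as a function is an illustrative choice):

```typescript
// Sketch of the bottom check in handleScrollEndDrag above, with the
// same 50 px tolerance: the viewport's bottom edge must reach within
// `tolerance` pixels of the content's total height.
function isAtBottom(
  offsetY: number,
  viewportHeight: number,
  contentHeight: number,
  tolerance = 50,
): boolean {
  return offsetY + viewportHeight >= contentHeight - tolerance;
}
```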
// Get GPS position (optional)
const getCurrentLocation = useCallback((): Promise<{ lat: number; lon: number } | null> => {
@@ -262,9 +428,8 @@ const ChatScreen: React.FC = () => {
const userMsg: ChatMessage = {
id: nextId(),
sender: 'user',
text: '[Sprachnachricht]',
text: '🎙 Spracheingabe wird verarbeitet...',
timestamp: Date.now(),
attachments: [{ type: 'audio', name: 'Sprachaufnahme' }],
};
setMessages(prev => [...prev, userMsg]);
@@ -281,15 +446,34 @@ const ChatScreen: React.FC = () => {
setShowFileUpload(false);
const location = await getCurrentLocation();
const isImage = file.type.startsWith('image/');
const msgId = nextId();
let imageUri = isImage && file.base64 ? `data:${file.type};base64,${file.base64}` : file.uri;
const userMsg: ChatMessage = {
id: nextId(),
id: msgId,
sender: 'user',
text: `[Datei: ${file.name}]`,
text: 'Anhang empfangen',
timestamp: Date.now(),
attachments: [{ type: 'file', name: file.name, size: file.size }],
attachments: [{
type: isImage ? 'image' : 'file',
name: file.name,
size: file.size,
uri: imageUri,
mimeType: file.type,
}],
};
setMessages(prev => [...prev, userMsg]);
// Save attachment to disk for persistence
if (file.base64) {
persistAttachment(file.base64, msgId, file.name).then(filePath => {
setMessages(prev => prev.map(m =>
m.id === msgId ? { ...m, attachments: m.attachments?.map(a => ({ ...a, uri: filePath })) } : m
));
}).catch(() => {});
}
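`persistAttachment` itself is not shown in this diff. The path-building part of such a helper might look like the sketch below; the sanitizing rule and names are assumptions, and the actual `RNFS.writeFile` call is omitted.

```typescript
// Hypothetical path builder for persisted attachments (not the app's code).
// Replaces anything outside a safe character set so the name is a valid file name.
function attachmentPath(storagePath: string, msgId: string, name: string): string {
  const safe = name.replace(/[^a-zA-Z0-9._-]/g, '_');
  return `${storagePath}/${msgId}_${safe}`;
}
```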
rvs.send('file', {
name: file.name,
type: file.type,
@@ -304,15 +488,32 @@ const ChatScreen: React.FC = () => {
setShowCameraUpload(false);
const location = await getCurrentLocation();
const msgId = nextId();
const dataUri = photo.base64 ? `data:${photo.type};base64,${photo.base64}` : undefined;
const userMsg: ChatMessage = {
id: msgId,
sender: 'user',
text: 'Anhang empfangen',
timestamp: Date.now(),
attachments: [{
type: 'image',
name: photo.fileName,
uri: dataUri,
mimeType: photo.type,
}],
};
setMessages(prev => [...prev, userMsg]);
// Persist the photo to disk
if (photo.base64) {
persistAttachment(photo.base64, msgId, photo.fileName).then(filePath => {
setMessages(prev => prev.map(m =>
m.id === msgId ? { ...m, attachments: m.attachments?.map(a => ({ ...a, uri: filePath })) } : m
));
}).catch(() => {});
}
rvs.send('file', {
name: photo.fileName,
type: photo.type,
@@ -334,16 +535,77 @@ const ChatScreen: React.FC = () => {
return (
<View style={[styles.messageBubble, isUser ? styles.userBubble : styles.ariaBubble]}>
{/* Attachment preview */}
{item.attachments?.map((att, idx) => (
<View key={idx}>
{att.type === 'image' && att.uri ? (
<TouchableOpacity onPress={() => setFullscreenImage(att.uri || null)} activeOpacity={0.8}>
<Image
source={{ uri: att.uri }}
style={styles.attachmentImage}
resizeMode="cover"
onError={() => {
setMessages(prev => prev.map(m =>
m.id === item.id ? { ...m, attachments: m.attachments?.map((a, i) =>
i === idx ? { ...a, uri: undefined } : a
)} : m
));
}}
/>
</TouchableOpacity>
) : att.type === 'image' && !att.uri ? (
<TouchableOpacity
style={styles.attachmentFile}
onPress={() => {
if (att.serverPath) {
rvs.send('file_request' as any, { serverPath: att.serverPath, requestId: item.id });
}
}}
>
<Text style={styles.attachmentFileIcon}>{'\uD83D\uDDBC\uFE0F'}</Text>
<Text style={styles.attachmentFileName} numberOfLines={1}>{att.name}</Text>
<Text style={styles.attachmentFileSize}>
{att.serverPath ? '(tippen zum Laden)' : '(nicht verfuegbar)'}
</Text>
</TouchableOpacity>
) : (
<View style={styles.attachmentFile}>
<Text style={styles.attachmentFileIcon}>
{att.mimeType?.includes('pdf') ? '\uD83D\uDCC4' :
att.mimeType?.includes('word') || att.mimeType?.includes('document') ? '\uD83D\uDCC3' :
att.mimeType?.includes('sheet') || att.mimeType?.includes('excel') ? '\uD83D\uDCC8' :
'\uD83D\uDCC1'}
</Text>
<Text style={styles.attachmentFileName} numberOfLines={1}>{att.name}</Text>
{att.size ? <Text style={styles.attachmentFileSize}>{Math.round(att.size / 1024)}KB</Text> : null}
{!att.uri && att.serverPath && (
<TouchableOpacity onPress={() => rvs.send('file_request' as any, { serverPath: att.serverPath, requestId: item.id })}>
<Text style={[styles.attachmentFileSize, {color: '#0096FF'}]}>(laden)</Text>
</TouchableOpacity>
)}
{!att.uri && !att.serverPath && <Text style={styles.attachmentFileSize}>(nicht verfuegbar)</Text>}
</View>
)}
</View>
))}
{/* Message text (hidden when it is only the "Anhang empfangen" placeholder and an image is shown) */}
{!(item.text === 'Anhang empfangen' && item.attachments?.some(a => a.type === 'image' && a.uri)) && (
<Text style={[styles.messageText, isUser ? styles.userText : styles.ariaText]}>
{item.text}
</Text>
)}
{/* Play button for ARIA messages */}
{!isUser && item.text.length > 0 && (
<TouchableOpacity
style={styles.playButton}
onPress={() => {
// Send a TTS request to the bridge
rvs.send('tts_request' as any, { text: item.text, voice: '' });
}}
>
<Text style={styles.playButtonText}>{'\uD83D\uDD0A'}</Text>
</TouchableOpacity>
)}
<Text style={styles.timestamp}>{time}</Text>
</View>
);
@@ -366,16 +628,39 @@ const ChatScreen: React.FC = () => {
{connectionState === 'connected' ? 'Verbunden' :
connectionState === 'connecting' ? 'Verbinde...' : 'Getrennt'}
</Text>
<TouchableOpacity onPress={() => setSearchVisible(!searchVisible)} style={{marginLeft: 'auto', paddingHorizontal: 8}}>
<Text style={{fontSize: 16}}>{'\uD83D\uDD0D'}</Text>
</TouchableOpacity>
</View>
{/* Search bar */}
{searchVisible && (
<View style={styles.searchBar}>
<TextInput
style={styles.searchInput}
value={searchQuery}
onChangeText={setSearchQuery}
placeholder="Chat durchsuchen..."
placeholderTextColor="#555570"
autoFocus
/>
<TouchableOpacity onPress={() => { setSearchVisible(false); setSearchQuery(''); }}>
<Text style={{color: '#FF3B30', fontSize: 14, paddingHorizontal: 8}}>X</Text>
</TouchableOpacity>
</View>
)}
{/* Message list */}
<FlatList
ref={flatListRef}
data={searchQuery ? messages.filter(m => m.text.toLowerCase().includes(searchQuery.toLowerCase())) : messages}
keyExtractor={item => item.id}
renderItem={renderMessage}
contentContainerStyle={styles.messageList}
showsVerticalScrollIndicator={false}
onContentSizeChange={handleContentSizeChange}
onScrollBeginDrag={handleScrollBeginDrag}
onScrollEndDrag={handleScrollEndDrag}
ListEmptyComponent={
<View style={styles.emptyContainer}>
<Text style={styles.emptyIcon}>{'\uD83E\uDD16'}</Text>
@@ -421,20 +706,39 @@ const ChatScreen: React.FC = () => {
<Text style={styles.sendIcon}>{'\u2B06\uFE0F'}</Text>
</TouchableOpacity>
) : (
<>
<VoiceButton
onRecordingComplete={handleVoiceRecording}
disabled={connectionState !== 'connected'}
wakeWordActive={wakeWordActive}
/>
<TouchableOpacity
style={[styles.wakeWordBtn, wakeWordActive && styles.wakeWordBtnActive]}
onPress={toggleWakeWord}
>
<Text style={styles.wakeWordIcon}>{wakeWordActive ? '👂' : '🔇'}</Text>
</TouchableOpacity>
</>
)}
</View>
{/* Fullscreen image modal */}
<Modal visible={!!fullscreenImage} transparent animationType="fade" onRequestClose={() => setFullscreenImage(null)}>
<TouchableOpacity
style={styles.fullscreenOverlay}
activeOpacity={1}
onPress={() => setFullscreenImage(null)}
>
{fullscreenImage && (
<Image
source={{ uri: fullscreenImage }}
style={styles.fullscreenImage}
resizeMode="contain"
/>
)}
</TouchableOpacity>
</Modal>
{/* File upload modal */}
<Modal visible={showFileUpload} transparent animationType="slide">
<View style={styles.modalOverlay}>
@@ -515,17 +819,35 @@ const styles = StyleSheet.create({
ariaText: {
color: '#E0E0F0',
},
attachmentImage: {
width: '100%',
minHeight: 200,
maxHeight: 400,
borderRadius: 8,
marginBottom: 6,
backgroundColor: '#0D0D1A',
},
attachmentFile: {
flexDirection: 'row',
alignItems: 'center',
backgroundColor: 'rgba(255,255,255,0.1)',
borderRadius: 8,
padding: 10,
marginBottom: 6,
},
attachmentFileIcon: {
fontSize: 24,
marginRight: 8,
},
attachmentFileName: {
flex: 1,
color: '#E0E0F0',
fontSize: 13,
},
attachmentFileSize: {
color: '#8888AA',
fontSize: 11,
marginLeft: 8,
},
timestamp: {
color: 'rgba(255,255,255,0.4)',
@@ -610,6 +932,40 @@ const styles = StyleSheet.create({
wakeWordIcon: {
fontSize: 16,
},
searchBar: {
flexDirection: 'row',
alignItems: 'center',
backgroundColor: '#12122A',
paddingHorizontal: 12,
paddingVertical: 6,
borderBottomWidth: 1,
borderBottomColor: '#1E1E2E',
},
searchInput: {
flex: 1,
color: '#FFFFFF',
fontSize: 14,
paddingVertical: 4,
},
playButton: {
alignSelf: 'flex-end',
paddingHorizontal: 8,
paddingVertical: 2,
marginTop: 4,
},
playButtonText: {
fontSize: 16,
},
fullscreenOverlay: {
flex: 1,
backgroundColor: 'rgba(0,0,0,0.95)',
justifyContent: 'center',
alignItems: 'center',
},
fullscreenImage: {
width: '100%',
height: '100%',
},
modalOverlay: {
flex: 1,
backgroundColor: 'rgba(0,0,0,0.6)',
@@ -16,10 +16,16 @@ import {
Alert,
Platform,
} from 'react-native';
import AsyncStorage from '@react-native-async-storage/async-storage';
import RNFS from 'react-native-fs';
import DocumentPicker from 'react-native-document-picker';
import rvs, { ConnectionState, RVSMessage, ConnectionConfig, ConnectionLogEntry } from '../services/rvs';
import ModeSelector from '../components/ModeSelector';
import QRScanner from '../components/QRScanner';
const STORAGE_PATH_KEY = 'aria_attachment_storage_path';
const DEFAULT_STORAGE_PATH = `${RNFS.DocumentDirectoryPath}/chat_attachments`;
// --- Typen ---
interface LogEntry {
@@ -62,6 +68,16 @@ const SettingsScreen: React.FC = () => {
const [logs, setLogs] = useState<LogEntry[]>([]);
const [events, setEvents] = useState<EventEntry[]>([]);
const [connLog, setConnLog] = useState<ConnectionLogEntry[]>(rvs.getConnectionLog());
const [storagePath, setStoragePath] = useState(DEFAULT_STORAGE_PATH);
const [autoDownload, setAutoDownload] = useState(true);
const [storageSize, setStorageSize] = useState('...');
const [ttsEnabled, setTtsEnabled] = useState(true);
const [defaultVoice, setDefaultVoice] = useState('ramona');
const [highlightVoice, setHighlightVoice] = useState('thorsten');
const [speedRamona, setSpeedRamona] = useState(1.0);
const [speedThorsten, setSpeedThorsten] = useState(1.0);
const [editingPath, setEditingPath] = useState(false);
const [tempPath, setTempPath] = useState('');
let logIdCounter = 0;
@@ -73,8 +89,122 @@ const SettingsScreen: React.FC = () => {
setManualPort(String(config.port));
setManualToken(config.token);
}
// Load storage path + auto-download setting
AsyncStorage.getItem(STORAGE_PATH_KEY).then(saved => {
if (saved) setStoragePath(saved);
});
AsyncStorage.getItem('aria_auto_download').then(saved => {
if (saved !== null) setAutoDownload(saved === 'true');
});
AsyncStorage.getItem('aria_tts_enabled').then(saved => {
if (saved !== null) setTtsEnabled(saved === 'true');
});
AsyncStorage.getItem('aria_default_voice').then(saved => {
if (saved) setDefaultVoice(saved);
});
AsyncStorage.getItem('aria_highlight_voice').then(saved => {
if (saved) setHighlightVoice(saved);
});
AsyncStorage.getItem('aria_speed_ramona').then(saved => {
if (saved) setSpeedRamona(parseFloat(saved));
});
AsyncStorage.getItem('aria_speed_thorsten').then(saved => {
if (saved) setSpeedThorsten(parseFloat(saved));
});
}, []);
// Compute storage size
useEffect(() => {
const calcSize = async () => {
try {
const exists = await RNFS.exists(storagePath);
if (!exists) { setStorageSize('0 KB'); return; }
const items = await RNFS.readDir(storagePath);
const totalBytes = items.reduce((sum, f) => sum + (f.size || 0), 0);
if (totalBytes > 1024 * 1024) {
setStorageSize(`${(totalBytes / 1024 / 1024).toFixed(1)} MB (${items.length} Dateien)`);
} else {
setStorageSize(`${Math.round(totalBytes / 1024)} KB (${items.length} Dateien)`);
}
} catch { setStorageSize('nicht verfuegbar'); }
};
calcSize();
}, [storagePath]);
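The size formatting inside `calcSize` can be isolated into a pure function for reuse and testing. A minimal sketch (the function name is an assumption, the thresholds and German labels mirror the code above):

```typescript
// Formats a byte count the same way the settings screen does:
// MB with one decimal above 1 MiB, otherwise rounded KB.
function formatStorageSize(totalBytes: number, fileCount: number): string {
  if (totalBytes > 1024 * 1024) {
    return `${(totalBytes / 1024 / 1024).toFixed(1)} MB (${fileCount} Dateien)`;
  }
  return `${Math.round(totalBytes / 1024)} KB (${fileCount} Dateien)`;
}
```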
const saveStoragePath = useCallback(async (newPath: string) => {
const clean = newPath.trim();
if (!clean) return;
await AsyncStorage.setItem(STORAGE_PATH_KEY, clean);
setStoragePath(clean);
setEditingPath(false);
Alert.alert('Gespeichert', `Neuer Speicherort:\n${clean}\n\nWird ab der naechsten Nachricht verwendet.`);
}, []);
const showPathPicker = useCallback(() => {
Alert.alert(
'Speicherort waehlen',
'Wo sollen Anhaenge gespeichert werden?',
[
{
text: 'Ordner auswaehlen...',
onPress: async () => {
try {
const result = await DocumentPicker.pickDirectory();
if (result?.uri) {
// Decode the SAF URI (content://com.android.externalstorage...)
const decoded = decodeURIComponent(result.uri);
// Extract a human-readable path; match both the decoded ("primary:")
// and the still-encoded ("primary%3A") form (after decodeURIComponent,
// "%3A" is already ":", so matching only "%3A" would never succeed)
const match = decoded.match(/primary(?:%3A|:)(.+)/);
const readablePath = match
? `/storage/emulated/0/${match[1].replace(/%2F|%3A/g, '/')}`
: decoded;
saveStoragePath(readablePath);
}
} catch (e: any) {
if (!DocumentPicker.isCancel(e)) {
Alert.alert('Fehler', 'Ordnerauswahl fehlgeschlagen');
}
}
},
},
{
text: 'App-intern (Standard)',
onPress: () => saveStoragePath(DEFAULT_STORAGE_PATH),
},
{
text: 'Pfad manuell eingeben',
onPress: () => { setTempPath(storagePath); setEditingPath(true); },
},
{ text: 'Abbrechen', style: 'cancel' as const },
],
);
}, [storagePath]);
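The SAF-URI-to-path conversion above reduces to a small pure function, shown here as a sketch so the decoding can be exercised standalone (the function name is illustrative; document-tree URIs outside `primary` storage fall back to the decoded URI):

```typescript
// Converts an Android SAF document-tree URI into a readable filesystem path.
function safUriToPath(uri: string): string {
  const decoded = decodeURIComponent(uri);
  // Match both "primary:" (decoded) and "primary%3A" (still encoded)
  const match = decoded.match(/primary(?:%3A|:)(.+)/);
  return match
    ? `/storage/emulated/0/${match[1].replace(/%2F|%3A/g, '/')}`
    : decoded;
}
```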
const clearStorageCache = useCallback(async () => {
Alert.alert(
'Cache loeschen',
`Alle lokalen Anhaenge in\n${storagePath}\nloeschen?\n\nDateien koennen ueber RVS erneut heruntergeladen werden.`,
[
{ text: 'Abbrechen', style: 'cancel' },
{
text: 'Loeschen',
style: 'destructive',
onPress: async () => {
try {
const exists = await RNFS.exists(storagePath);
if (exists) await RNFS.unlink(storagePath);
await RNFS.mkdir(storagePath);
setStorageSize('0 KB (0 Dateien)');
Alert.alert('Erledigt', 'Cache geleert. Anhaenge werden bei Bedarf neu geladen.');
} catch (e: any) {
Alert.alert('Fehler', e.message);
}
},
},
],
);
}, [storagePath]);
// Subscribe to RVS messages and the connection log
useEffect(() => {
const unsubState = rvs.onStateChange(setConnectionState);
@@ -332,6 +462,208 @@ const SettingsScreen: React.FC = () => {
</View>
</View>
{/* === Speech output === */}
<Text style={styles.sectionTitle}>Sprachausgabe</Text>
<View style={styles.card}>
{/* TTS on/off */}
<View style={styles.toggleRow}>
<View style={styles.toggleInfo}>
<Text style={styles.toggleLabel}>Sprachausgabe</Text>
<Text style={styles.toggleHint}>ARIA antwortet per Sprache (TTS)</Text>
</View>
<Switch
value={ttsEnabled}
onValueChange={(val) => {
setTtsEnabled(val);
AsyncStorage.setItem('aria_tts_enabled', String(val));
rvs.send('config' as any, { ttsEnabled: val });
}}
trackColor={{ false: '#2A2A3E', true: '#0096FF' }}
thumbColor={ttsEnabled ? '#FFFFFF' : '#666680'}
/>
</View>
{/* Default voice */}
<View style={{marginTop: 16}}>
<Text style={styles.toggleLabel}>Standard-Stimme</Text>
<Text style={styles.toggleHint}>Fuer normale Antworten und Gespraeche</Text>
<View style={{flexDirection: 'row', gap: 8, marginTop: 8}}>
<TouchableOpacity
style={[styles.voiceBtn, defaultVoice === 'ramona' && styles.voiceBtnActive]}
onPress={() => { setDefaultVoice('ramona'); AsyncStorage.setItem('aria_default_voice', 'ramona'); rvs.send('config' as any, { defaultVoice: 'ramona' }); }}
>
<Text style={styles.voiceBtnIcon}>{'\uD83D\uDE4E\u200D\u2640\uFE0F'}</Text>
<Text style={[styles.voiceBtnText, defaultVoice === 'ramona' && styles.voiceBtnTextActive]}>Ramona</Text>
<Text style={styles.voiceBtnHint}>Weiblich, warm</Text>
</TouchableOpacity>
<TouchableOpacity
style={[styles.voiceBtn, defaultVoice === 'thorsten' && styles.voiceBtnActive]}
onPress={() => { setDefaultVoice('thorsten'); AsyncStorage.setItem('aria_default_voice', 'thorsten'); rvs.send('config' as any, { defaultVoice: 'thorsten' }); }}
>
<Text style={styles.voiceBtnIcon}>{'\uD83E\uDDD4'}</Text>
<Text style={[styles.voiceBtnText, defaultVoice === 'thorsten' && styles.voiceBtnTextActive]}>Thorsten</Text>
<Text style={styles.voiceBtnHint}>Maennlich, tief</Text>
</TouchableOpacity>
</View>
</View>
{/* Highlight voice */}
<View style={{marginTop: 16}}>
<Text style={styles.toggleLabel}>Highlight-Stimme</Text>
<Text style={styles.toggleHint}>Fuer besondere Ereignisse (Deploy, Alarm, Erfolg)</Text>
<View style={{flexDirection: 'row', gap: 8, marginTop: 8}}>
<TouchableOpacity
style={[styles.voiceBtn, highlightVoice === 'thorsten' && styles.voiceBtnActive]}
onPress={() => { setHighlightVoice('thorsten'); AsyncStorage.setItem('aria_highlight_voice', 'thorsten'); rvs.send('config' as any, { highlightVoice: 'thorsten' }); }}
>
<Text style={styles.voiceBtnIcon}>{'\uD83E\uDDD4'}</Text>
<Text style={[styles.voiceBtnText, highlightVoice === 'thorsten' && styles.voiceBtnTextActive]}>Thorsten</Text>
</TouchableOpacity>
<TouchableOpacity
style={[styles.voiceBtn, highlightVoice === 'ramona' && styles.voiceBtnActive]}
onPress={() => { setHighlightVoice('ramona'); AsyncStorage.setItem('aria_highlight_voice', 'ramona'); rvs.send('config' as any, { highlightVoice: 'ramona' }); }}
>
<Text style={styles.voiceBtnIcon}>{'\uD83D\uDE4E\u200D\u2640\uFE0F'}</Text>
<Text style={[styles.voiceBtnText, highlightVoice === 'ramona' && styles.voiceBtnTextActive]}>Ramona</Text>
</TouchableOpacity>
</View>
</View>
{/* Speaking rate: Ramona */}
<View style={{marginTop: 16}}>
<Text style={styles.toggleLabel}>Ramona Speed: {speedRamona.toFixed(1)}x</Text>
<View style={{flexDirection: 'row', justifyContent: 'space-around', marginTop: 8}}>
{[0.5, 0.75, 1.0, 1.25, 1.5, 2.0].map(speed => (
<TouchableOpacity
key={speed}
onPress={() => {
setSpeedRamona(speed);
AsyncStorage.setItem('aria_speed_ramona', String(speed));
rvs.send('config' as any, { speedRamona: speed });
}}
style={{
paddingHorizontal: 10, paddingVertical: 6, borderRadius: 6,
backgroundColor: speedRamona === speed ? '#0096FF' : '#1E1E2E',
}}
>
<Text style={{color: speedRamona === speed ? '#fff' : '#8888AA', fontSize: 12, fontWeight: '600'}}>
{speed}x
</Text>
</TouchableOpacity>
))}
</View>
</View>
{/* Speaking rate: Thorsten */}
<View style={{marginTop: 16}}>
<Text style={styles.toggleLabel}>Thorsten Speed: {speedThorsten.toFixed(1)}x</Text>
<View style={{flexDirection: 'row', justifyContent: 'space-around', marginTop: 8}}>
{[0.5, 0.75, 1.0, 1.25, 1.5, 2.0].map(speed => (
<TouchableOpacity
key={speed}
onPress={() => {
setSpeedThorsten(speed);
AsyncStorage.setItem('aria_speed_thorsten', String(speed));
rvs.send('config' as any, { speedThorsten: speed });
}}
style={{
paddingHorizontal: 10, paddingVertical: 6, borderRadius: 6,
backgroundColor: speedThorsten === speed ? '#0096FF' : '#1E1E2E',
}}
>
<Text style={{color: speedThorsten === speed ? '#fff' : '#8888AA', fontSize: 12, fontWeight: '600'}}>
{speed}x
</Text>
</TouchableOpacity>
))}
</View>
</View>
{/* Highlight trigger info */}
<View style={{marginTop: 16, padding: 10, backgroundColor: '#1E1E2E', borderRadius: 8}}>
<Text style={styles.toggleLabel}>{'\u26A1'} Highlight-Trigger</Text>
<Text style={[styles.toggleHint, {marginTop: 4}]}>
Die Highlight-Stimme wird automatisch bei diesen Woertern verwendet:{'\n'}
deploy, erfolgreich, alarm, so soll es sein, kritisch, server down, sicherheitswarnung, ticket geloest, aufgabe abgeschlossen
</Text>
</View>
</View>
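The bridge-side voice selection is not shown in this diff; based on the trigger list above, it might reduce to a sketch like this (function name and structure are assumptions, the trigger words mirror the info box):

```typescript
// Hypothetical voice picker: use the highlight voice when the text
// contains one of the listed trigger words, otherwise the default voice.
const HIGHLIGHT_TRIGGERS = [
  'deploy', 'erfolgreich', 'alarm', 'so soll es sein', 'kritisch',
  'server down', 'sicherheitswarnung', 'ticket geloest', 'aufgabe abgeschlossen',
];

function pickVoice(text: string, defaultVoice: string, highlightVoice: string): string {
  const lower = text.toLowerCase();
  return HIGHLIGHT_TRIGGERS.some(t => lower.includes(t)) ? highlightVoice : defaultVoice;
}
```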
{/* === Storage === */}
<Text style={styles.sectionTitle}>Anhang-Speicher</Text>
<View style={styles.card}>
<View style={styles.toggleRow}>
<View style={styles.toggleInfo}>
<Text style={styles.toggleLabel}>Auto-Download</Text>
<Text style={styles.toggleHint}>
Fehlende Anhaenge beim App-Start automatisch vom Server laden
</Text>
</View>
<Switch
value={autoDownload}
onValueChange={(val) => {
setAutoDownload(val);
AsyncStorage.setItem('aria_auto_download', String(val));
}}
trackColor={{ false: '#2A2A3E', true: '#0096FF' }}
thumbColor={autoDownload ? '#FFFFFF' : '#666680'}
/>
</View>
<View style={{height: 16}} />
<Text style={styles.toggleLabel}>Lokaler Speicherort</Text>
<Text style={styles.toggleHint}>
Hier werden Bilder und Dateien aus dem Chat gespeichert.
{autoDownload ? ' Fehlende Dateien werden automatisch nachgeladen.' : ' Fehlende Dateien koennen per Tippen geladen werden.'}
</Text>
{editingPath ? (
<View style={{marginTop: 10}}>
<TextInput
style={styles.input}
value={tempPath}
onChangeText={setTempPath}
placeholder="z.B. /storage/emulated/0/ARIA/attachments"
placeholderTextColor="#555570"
autoCapitalize="none"
/>
<View style={{flexDirection: 'row', gap: 8}}>
<TouchableOpacity
style={[styles.connectButton, {flex: 1}]}
onPress={() => saveStoragePath(tempPath)}
>
<Text style={styles.connectButtonText}>Speichern</Text>
</TouchableOpacity>
<TouchableOpacity
style={[styles.clearButton, {flex: 1, marginTop: 0}]}
onPress={() => setEditingPath(false)}
>
<Text style={styles.clearButtonText}>Abbrechen</Text>
</TouchableOpacity>
</View>
</View>
) : (
<View style={{marginTop: 10}}>
<Text style={styles.storagePathText} numberOfLines={2}>{storagePath}</Text>
<Text style={styles.storageSizeText}>{storageSize}</Text>
<View style={{flexDirection: 'row', gap: 8, marginTop: 8}}>
<TouchableOpacity
style={[styles.clearButton, {flex: 1, marginTop: 0}]}
onPress={showPathPicker}
>
<Text style={styles.clearButtonText}>Pfad aendern</Text>
</TouchableOpacity>
<TouchableOpacity
style={[styles.clearButton, {flex: 1, marginTop: 0, backgroundColor: 'rgba(255,59,48,0.15)'}]}
onPress={clearStorageCache}
>
<Text style={[styles.clearButtonText, {color: '#FF3B30'}]}>Cache leeren</Text>
</TouchableOpacity>
</View>
</View>
)}
</View>
{/* === Logs === */}
<Text style={styles.sectionTitle}>Protokoll</Text>
<View style={styles.card}>
@@ -416,7 +748,7 @@ const SettingsScreen: React.FC = () => {
<Text style={styles.sectionTitle}>{'\u00DC'}ber</Text>
<View style={styles.card}>
<Text style={styles.aboutTitle}>ARIA Cockpit</Text>
<Text style={styles.aboutVersion}>Version 0.0.2.7</Text>
<Text style={styles.aboutInfo}>
Stefans Kommandozentrale f{'\u00FC'}r ARIA.{'\n'}
Gebaut mit React Native + TypeScript.
@@ -559,6 +891,50 @@ const styles = StyleSheet.create({
marginTop: 2,
},
// Voices
voiceBtn: {
flex: 1,
padding: 12,
borderRadius: 10,
backgroundColor: '#1E1E2E',
alignItems: 'center',
borderWidth: 2,
borderColor: 'transparent',
},
voiceBtnActive: {
borderColor: '#0096FF',
backgroundColor: '#0D1A2E',
},
voiceBtnIcon: {
fontSize: 28,
marginBottom: 4,
},
voiceBtnText: {
color: '#8888AA',
fontSize: 14,
fontWeight: '600',
},
voiceBtnTextActive: {
color: '#FFFFFF',
},
voiceBtnHint: {
color: '#555570',
fontSize: 11,
marginTop: 2,
},
// Storage
storagePathText: {
color: '#0096FF',
fontSize: 12,
fontFamily: Platform.OS === 'ios' ? 'Menlo' : 'monospace',
},
storageSizeText: {
color: '#8888AA',
fontSize: 12,
marginTop: 4,
},
// Logs
tabRow: {
flexDirection: 'row',
@@ -55,6 +55,12 @@ class AudioService {
private recorder: AudioRecorderPlayer;
private recordingPath: string = '';
// Audio queue for sequential TTS playback
private audioQueue: string[] = [];
private isPlaying: boolean = false;
private preloadedSound: Sound | null = null;
private preloadedPath: string = '';
// VAD State
private vadEnabled: boolean = false;
private lastSpeechTime: number = 0;
@@ -198,47 +204,98 @@ class AudioService {
// --- Playback ---
/** Queue base64-encoded audio and play it sequentially (e.g. a TTS response from ARIA) */
async playAudio(base64Data: string): Promise<void> {
if (!base64Data) return;
this.audioQueue.push(base64Data);
if (!this.isPlaying) {
this._playNext();
}
}
/** Play the next audio clip from the queue */
private async _playNext(): Promise<void> {
if (this.audioQueue.length === 0) {
this.isPlaying = false;
return;
}
this.isPlaying = true;
// Use the preloaded sound if available, otherwise load it now
let sound: Sound;
let soundPath: string;
if (this.preloadedSound) {
sound = this.preloadedSound;
soundPath = this.preloadedPath;
this.preloadedSound = null;
this.preloadedPath = '';
// Remove the entry from the queue (it was already preloaded)
this.audioQueue.shift();
} else {
const base64Data = this.audioQueue.shift()!;
try {
soundPath = `${RNFS.CachesDirectoryPath}/aria_tts_${Date.now()}.wav`;
await RNFS.writeFile(soundPath, base64Data, 'base64');
sound = await new Promise<Sound>((resolve, reject) => {
const s = new Sound(soundPath, '', (err) => err ? reject(err) : resolve(s));
});
} catch (err) {
console.error('[Audio] Laden fehlgeschlagen:', err);
this._playNext();
return;
}
}
this.currentSound = sound;
// Prepare the next clip while this one is playing
this._preloadNext();
sound.play((success) => {
if (!success) console.warn('[Audio] Wiedergabe fehlgeschlagen');
sound.release();
this.currentSound = null;
RNFS.unlink(soundPath).catch(() => {});
this._playNext();
});
}
/** Preload the next audio clip in the background (prevents stuttering) */
private async _preloadNext(): Promise<void> {
if (this.audioQueue.length === 0 || this.preloadedSound) return;
const base64Data = this.audioQueue[0]; // no shift: stays in the queue until playback starts
try {
const tmpPath = `${RNFS.CachesDirectoryPath}/aria_tts_pre_${Date.now()}.wav`;
await RNFS.writeFile(tmpPath, base64Data, 'base64');
this.preloadedSound = await new Promise<Sound>((resolve, reject) => {
const s = new Sound(tmpPath, '', (err) => err ? reject(err) : resolve(s));
});
this.preloadedPath = tmpPath;
} catch {
this.preloadedSound = null;
this.preloadedPath = '';
}
}
/** Stop current playback and clear the queue */
stopPlayback(): void {
this.audioQueue = [];
this.isPlaying = false;
if (this.currentSound) {
this.currentSound.stop();
this.currentSound.release();
this.currentSound = null;
}
if (this.preloadedSound) {
this.preloadedSound.release();
this.preloadedSound = null;
if (this.preloadedPath) RNFS.unlink(this.preloadedPath).catch(() => {});
this.preloadedPath = '';
}
}
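The queue logic in `playAudio`/`_playNext`/`stopPlayback` boils down to a small state machine: enqueue, play one item at a time, start the next when the current one finishes. A sketch of the same pattern (not the service code) with the `play` step injected, so the ordering can be shown without native audio:

```typescript
// Sequential playback queue: `play` receives an item and a `done` callback;
// calling `done` triggers the next queued item.
class PlaybackQueue<T> {
  private queue: T[] = [];
  private playing = false;

  constructor(private play: (item: T, done: () => void) => void) {}

  enqueue(item: T): void {
    this.queue.push(item);
    if (!this.playing) this.next();
  }

  private next(): void {
    const item = this.queue.shift();
    if (item === undefined) {
      this.playing = false; // queue drained
      return;
    }
    this.playing = true;
    this.play(item, () => this.next());
  }

  stop(): void {
    this.queue = [];
    this.playing = false;
  }
}
```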
// --- Status & Callbacks ---
@@ -12,7 +12,7 @@ import AsyncStorage from '@react-native-async-storage/async-storage';
export type ConnectionState = 'connecting' | 'connected' | 'disconnected';
export type MessageType = 'chat' | 'audio' | 'file' | 'location' | 'mode' | 'log' | 'event' | 'update_available' | string;
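One TypeScript subtlety in the widened union: `'chat' | string` collapses to plain `string`, so editors lose autocompletion for the literal members. An alternative (not what this diff uses) is the `(string & {})` trick, which keeps the literals visible to the language service while still accepting arbitrary strings:

```typescript
// (string & {}) is not eagerly reduced into the union, so the
// literal members survive for completion and documentation purposes.
type KnownMessageType = 'chat' | 'audio' | 'file' | 'update_available';
type OpenMessageType = KnownMessageType | (string & {});

const known: OpenMessageType = 'chat';
const custom: OpenMessageType = 'xtts_response'; // arbitrary strings still allowed
```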
export interface RVSMessage {
type: MessageType;
@@ -0,0 +1,149 @@
/**
 * Auto-update service: checks for and installs app updates via RVS
 *
 * Flow:
 * 1. The app sends "update_check" with its current version to RVS
 * 2. RVS compares versions and replies with "update_available" plus a download URL
 * 3. The app shows a prompt; on confirmation it downloads and installs the APK
 */
import { Alert, Linking, Platform } from 'react-native';
import RNFS from 'react-native-fs';
import rvs, { RVSMessage } from './rvs';
// Current app version (from package.json via the build)
const APP_VERSION = '0.0.2.3'; // TODO: read from the native build config
type UpdateCallback = (info: UpdateInfo) => void;
export interface UpdateInfo {
version: string;
downloadUrl: string;
size: number;
}
class UpdateService {
private listeners: UpdateCallback[] = [];
private checking = false;
private downloading = false;
constructor() {
// Listen for update_available messages
rvs.onMessage((msg: RVSMessage) => {
if (msg.type === 'update_available' as any) {
const info: UpdateInfo = {
version: (msg.payload.version as string) || '',
downloadUrl: (msg.payload.downloadUrl as string) || '',
size: (msg.payload.size as number) || 0,
};
if (info.version && this.isNewer(info.version)) {
console.log(`[Update] Neue Version verfuegbar: ${info.version} (aktuell: ${APP_VERSION})`);
this.listeners.forEach(cb => cb(info));
}
}
});
}
/** Check for an update at app start */
checkForUpdate(): void {
if (this.checking) return;
this.checking = true;
console.log(`[Update] Pruefe auf Updates (aktuell: ${APP_VERSION})`);
rvs.send('update_check' as any, { version: APP_VERSION });
setTimeout(() => { this.checking = false; }, 10000);
}
/** Register a callback */
onUpdateAvailable(callback: UpdateCallback): () => void {
this.listeners.push(callback);
return () => {
this.listeners = this.listeners.filter(cb => cb !== callback);
};
}
/** Show the update dialog */
promptUpdate(info: UpdateInfo): void {
const sizeMB = (info.size / 1024 / 1024).toFixed(1);
Alert.alert(
'ARIA Update verfuegbar',
`Version ${info.version} (${sizeMB} MB)\n\nAktuell: ${APP_VERSION}\n\nJetzt herunterladen und installieren?`,
[
{ text: 'Spaeter', style: 'cancel' },
{
text: 'Installieren',
onPress: () => this.downloadAndInstall(info),
},
],
);
}
/** Download the APK over the WebSocket and install it */
async downloadAndInstall(info: UpdateInfo): Promise<void> {
if (this.downloading) return;
this.downloading = true;
try {
console.log(`[Update] Fordere APK v${info.version} an...`);
Alert.alert('Download gestartet', `Version ${info.version} wird ueber RVS heruntergeladen...`);
// Request the APK over the WebSocket
rvs.send('update_download' as any, {});
// Wait for update_data (one-shot)
const apkData = await new Promise<{base64: string, fileName: string}>((resolve, reject) => {
const timeout = setTimeout(() => reject(new Error('Download-Timeout (60s)')), 60000);
const unsub = rvs.onMessage((msg: RVSMessage) => {
if ((msg.type as string) === 'update_data') {
clearTimeout(timeout);
unsub();
if (msg.payload.error) {
reject(new Error(msg.payload.error as string));
} else {
resolve({
base64: msg.payload.base64 as string,
fileName: msg.payload.fileName as string || `ARIA-${info.version}.apk`,
});
}
}
});
});
// Save the base64 payload as an APK file
const destPath = `${RNFS.CachesDirectoryPath}/${apkData.fileName}`;
await RNFS.writeFile(destPath, apkData.base64, 'base64');
const fileSize = await RNFS.stat(destPath);
console.log(`[Update] APK gespeichert: ${destPath} (${(parseInt(fileSize.size) / 1024 / 1024).toFixed(1)}MB)`);
// Install the APK (opens the Android installer)
if (Platform.OS === 'android') {
await Linking.openURL(`file://${destPath}`);
}
} catch (err: any) {
console.error(`[Update] Fehler: ${err.message}`);
Alert.alert('Update fehlgeschlagen', err.message);
} finally {
this.downloading = false;
}
}
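The "wait for one matching message with a timeout" pattern inside `downloadAndInstall` generalizes to a small helper. A sketch over a minimal subscribe function (names and shape are illustrative, not the rvs API):

```typescript
// Resolves with the first message matching `predicate`; rejects on timeout.
// `subscribe` registers a listener and returns an unsubscribe function.
function waitFor<T>(
  subscribe: (cb: (msg: T) => void) => () => void,
  predicate: (msg: T) => boolean,
  timeoutMs: number,
): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    let unsub: () => void = () => {};
    const timer = setTimeout(() => {
      unsub();
      reject(new Error(`Timeout after ${timeoutMs}ms`));
    }, timeoutMs);
    unsub = subscribe(msg => {
      if (!predicate(msg)) return; // ignore unrelated messages
      clearTimeout(timer);
      unsub();
      resolve(msg);
    });
  });
}
```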
/** Component-wise version comparison */
private isNewer(remote: string): boolean {
const r = remote.split('.').map(Number);
const l = APP_VERSION.split('.').map(Number);
for (let i = 0; i < Math.max(r.length, l.length); i++) {
const diff = (r[i] || 0) - (l[i] || 0);
if (diff > 0) return true;
if (diff < 0) return false;
}
return false;
}
getCurrentVersion(): string {
return APP_VERSION;
}
}
const updateService = new UpdateService();
export default updateService;
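The dotted-version comparison used by `isNewer` can be exercised standalone; this sketch re-states the same component-wise rule (missing components count as 0) as a free function:

```typescript
// True when `remote` is a strictly newer dotted version than `local`.
function isNewerVersion(remote: string, local: string): boolean {
  const r = remote.split('.').map(Number);
  const l = local.split('.').map(Number);
  for (let i = 0; i < Math.max(r.length, l.length); i++) {
    const diff = (r[i] || 0) - (l[i] || 0);
    if (diff !== 0) return diff > 0;
  }
  return false;
}
```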
@@ -1,21 +1,12 @@
/**
 * Wake Word Service: "ARIA" detection
 *
 * Phase 1: disabled. react-native-live-audio-stream has native bridge issues,
 * so tap-to-talk (VoiceButton) is the primary input mode instead.
 * Phase 2: on-device "ARIA" keyword via Porcupine (planned).
 */
type WakeWordCallback = () => void;
type StateCallback = (state: WakeWordState) => void;
@@ -25,47 +16,16 @@ class WakeWordService {
private state: WakeWordState = 'off';
private wakeCallbacks: WakeWordCallback[] = [];
private stateCallbacks: StateCallback[] = [];
/** Start wake word detection */
async start(): Promise<boolean> {
if (this.state === 'listening') return true;
try {
if (!this.isInitialized) {
LiveAudioStream.init({
sampleRate: 16000,
channels: 1,
bitsPerSample: 16,
audioSource: 6, // VOICE_RECOGNITION
bufferSize: 4096,
});
this.isInitialized = true;
}
// Audio-Stream starten und auf Energie pruefen
LiveAudioStream.start();
LiveAudioStream.on('data', (base64Chunk: string) => {
if (this.state !== 'listening') return;
// Base64 → Int16 Array → RMS berechnen
const raw = this._base64ToInt16(base64Chunk);
const rms = this._calculateRMS(raw);
// Schwellwert: wenn laut genug → Wake Word erkannt
// Phase 1: Einfache Energie-Erkennung (jemand spricht)
// Phase 2: Porcupine "ARIA" Keyword
if (rms > 2000) {
this.setState('detected');
this.wakeCallbacks.forEach(cb => cb());
// Nach Detection kurz pausieren, Aufnahme uebernimmt das Mikrofon
this.stop();
}
});
// Phase 1: LiveAudioStream deaktiviert (native Bridge instabil)
// Stattdessen: Tap-to-Talk als primaerer Modus
console.log('[WakeWord] Wake Word ist in Phase 1 noch nicht verfuegbar — nutze Tap-to-Talk');
this.setState('listening');
console.log('[WakeWord] Listening gestartet');
return true;
} catch (err) {
console.error('[WakeWord] Start fehlgeschlagen:', err);
@@ -75,22 +35,12 @@ class WakeWordService {
/** Stop wake word detection */
stop(): void {
if (this.state === 'off') return;
try {
LiveAudioStream.stop();
} catch {}
this.setState('off');
console.log('[WakeWord] Gestoppt');
}
/** Restart after a recording */
async resume(): Promise<void> {
// Kurze Pause damit Aufnahme das Mikrofon freigeben kann
setTimeout(() => {
if (this.state === 'off') {
this.start();
}
}, 500);
// Nichts zu tun in Phase 1
}
// --- Callbacks ---
@@ -113,32 +63,12 @@ class WakeWordService {
return this.state;
}
// --- Hilfsfunktionen ---
private setState(state: WakeWordState): void {
if (this.state !== state) {
this.state = state;
this.stateCallbacks.forEach(cb => cb(state));
}
}
private _base64ToInt16(base64: string): Int16Array {
const binary = atob(base64);
const bytes = new Uint8Array(binary.length);
for (let i = 0; i < binary.length; i++) {
bytes[i] = binary.charCodeAt(i);
}
return new Int16Array(bytes.buffer);
}
private _calculateRMS(samples: Int16Array): number {
if (samples.length === 0) return 0;
let sum = 0;
for (let i = 0; i < samples.length; i++) {
sum += samples[i] * samples[i];
}
return Math.sqrt(sum / samples.length);
}
}
const wakeWordService = new WakeWordService();
+379 -42
@@ -38,6 +38,7 @@ import websockets
from faster_whisper import WhisperModel
from openwakeword.model import Model as WakeWordModel
from piper import PiperVoice
from piper.config import SynthesisConfig
from modes import Mode, detect_mode_switch, should_speak
@@ -72,7 +73,7 @@ BLOCK_SIZE = 1280 # 80ms bei 16kHz — gut fuer Wake-Word-Erkennung
RECORD_SECONDS = 8 # Max. Aufnahmedauer nach Wake-Word
# Epische Trigger — bei diesen Woertern spricht Thorsten
EPIC_TRIGGERS = [
EPIC_TRIGGERS_DEFAULT = [
"deploy",
"erfolgreich",
"alarm",
@@ -84,6 +85,24 @@ EPIC_TRIGGERS = [
"aufgabe abgeschlossen",
]
# Trigger aus Shared-Config laden (von Diagnostic gespeichert)
TRIGGERS_FILE = "/shared/config/highlight_triggers.json"
def load_epic_triggers():
"""Laedt Highlight-Trigger aus Shared-Config oder nutzt Defaults."""
try:
if os.path.exists(TRIGGERS_FILE):
with open(TRIGGERS_FILE) as f:
triggers = json.load(f)
if isinstance(triggers, list) and len(triggers) > 0:
logger.info("Highlight-Trigger geladen: %d aus %s", len(triggers), TRIGGERS_FILE)
return triggers
except Exception as e:
logger.warning("Highlight-Trigger laden fehlgeschlagen: %s — nutze Defaults", e)
return EPIC_TRIGGERS_DEFAULT
EPIC_TRIGGERS = load_epic_triggers()
def load_config() -> dict[str, str]:
"""Laedt Konfiguration aus /config/aria.env."""
@@ -111,6 +130,9 @@ class VoiceEngine:
def __init__(self, voices_dir: Path) -> None:
self.voices_dir = voices_dir
self.voices: dict[str, PiperVoice] = {}
self.default_voice = "ramona"
self.highlight_voice = "thorsten"
self.speech_speed = {"ramona": 1.0, "thorsten": 1.0}
def initialize(self) -> None:
"""Laedt die Piper-Stimmen aus dem Voices-Verzeichnis."""
@@ -154,14 +176,14 @@ class VoiceEngine:
if requested_voice and requested_voice in self.voices:
return requested_voice
# Epische Trigger pruefen
# Highlight-Trigger pruefen
text_lower = text.lower()
for trigger in EPIC_TRIGGERS:
if trigger in text_lower:
logger.info("Epischer Trigger erkannt: '%s'Thorsten spricht", trigger)
return "thorsten"
logger.info("Highlight-Trigger erkannt: '%s'%s spricht", trigger, self.highlight_voice)
return self.highlight_voice
return "ramona"
return self.default_voice
def synthesize(self, text: str, voice_name: str = "ramona") -> Optional[bytes]:
"""Erzeugt Audio-Daten aus Text mit der gewaehlten Stimme.
@@ -179,20 +201,50 @@ class VoiceEngine:
return None
try:
# Piper gibt PCM-Samples zurueck, wir schreiben sie als WAV
# Langen Text in Saetze aufteilen (Piper hat Limits bei langen Texten)
import re
sentences = re.split(r'(?<=[.!?])\s+', text.strip())
# Markdown-Formatierung entfernen
sentences = [re.sub(r'\*\*([^*]+)\*\*', r'\1', s).strip() for s in sentences if s.strip()]
if not sentences:
return None
# Jeden Satz einzeln synthetisieren und WAVs zusammenfuegen
all_audio = b""
sample_rate = None
for sentence in sentences:
if not sentence:
continue
with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
tmp_path = tmp.name
speed = self.speech_speed.get(voice_name, 1.0)
syn_config = SynthesisConfig(length_scale=1.0 / max(0.3, speed))
with wave.open(tmp_path, "wb") as wav_file:
voice.synthesize_wav(sentence, wav_file, syn_config=syn_config)
with wave.open(tmp_path, "rb") as wav_file:
if sample_rate is None:
sample_rate = wav_file.getframerate()
all_audio += wav_file.readframes(wav_file.getnframes())
Path(tmp_path).unlink(missing_ok=True)
# Zusammengefuegtes WAV erstellen
with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
tmp_path = tmp.name
final_path = tmp.name
with wave.open(final_path, "wb") as wav_file:
wav_file.setnchannels(1)
wav_file.setsampwidth(2)
wav_file.setframerate(sample_rate or 22050)
wav_file.writeframes(all_audio)
with wave.open(tmp_path, "wb") as wav_file:
voice.synthesize(text, wav_file)
audio_data = Path(tmp_path).read_bytes()
Path(tmp_path).unlink(missing_ok=True)
audio_data = Path(final_path).read_bytes()
Path(final_path).unlink(missing_ok=True)
logger.info(
"TTS: %d bytes erzeugt mit %s'%s'",
"TTS: %d bytes erzeugt mit %s (%d Saetze)'%s'",
len(audio_data),
voice_name,
len(sentences),
text[:60],
)
return audio_data
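The splitting step above (sentence split on terminal punctuation, then stripping `**bold**` markdown) can be exercised in isolation. A sketch of the same regex logic as a standalone helper (the function name is illustrative, not part of the bridge):

```python
import re

def split_sentences(text: str) -> list[str]:
    """Split text at whitespace following ., ! or ?, and strip **bold** markdown,
    mirroring the preprocessing inside synthesize()."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    return [re.sub(r'\*\*([^*]+)\*\*', r'\1', s).strip() for s in sentences if s.strip()]
```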
@@ -437,6 +489,25 @@ class ARIABridge:
# Komponenten
self.voice_engine = VoiceEngine(VOICES_DIR)
self.tts_enabled = True
# Gespeicherte Voice-Config laden
try:
vc_path = "/shared/config/voice_config.json"
if os.path.exists(vc_path):
with open(vc_path) as f:
vc = json.load(f)
self.voice_engine.default_voice = vc.get("defaultVoice", "ramona")
self.voice_engine.highlight_voice = vc.get("highlightVoice", "thorsten")
self.voice_engine.speech_speed = {
"ramona": vc.get("speedRamona", 1.0),
"thorsten": vc.get("speedThorsten", 1.0),
}
self.tts_enabled = vc.get("ttsEnabled", True)
self.tts_engine_type = vc.get("ttsEngine", "piper")
self.xtts_voice = vc.get("xttsVoice", "")
logger.info("Voice-Config geladen: %s", vc)
except Exception as e:
logger.warning("Voice-Config laden fehlgeschlagen: %s", e)
self.stt_engine = STTEngine(
model_size=self.config.get("WHISPER_MODEL", WHISPER_MODEL),
language=self.config.get("WHISPER_LANGUAGE", WHISPER_LANGUAGE),
@@ -461,17 +532,20 @@ class ARIABridge:
# Voice-Engine IMMER laden — rendert Audio fuer die App (auch ohne Soundkarte)
self.voice_engine.initialize()
# STT IMMER laden — verarbeitet Audio von der App (braucht kein Sounddevice)
self.stt_engine.initialize()
# Audio-Hardware pruefen (fuer lokales Mikro/Lautsprecher)
self.audio_available = False
try:
sd.query_devices()
devices = sd.query_devices()
sd.query_devices(kind='output')
self.audio_available = True
logger.info("Audio-Geraet gefunden — Wake-Word und lokale TTS aktiv")
self.stt_engine.initialize()
self.wake_word.initialize()
except Exception:  # includes sd.PortAudioError when no audio device is present
logger.warning("Kein Audio-Geraet — Wake-Word und lokale TTS deaktiviert")
logger.info("Piper TTS rendert Audio fuer die App (via RVS)")
logger.warning("Kein Audio-Geraet — Wake-Word und lokale Wiedergabe deaktiviert")
logger.info("TTS rendert fuer App (via RVS), STT verarbeitet App-Audio")
logger.info("Alle Komponenten initialisiert")
logger.info("aria-core: %s", self.ws_url)
@@ -773,18 +847,48 @@ class ARIABridge:
})
# TTS-Audio rendern und an die App senden (wenn Modus es erlaubt)
if should_speak(self.current_mode, is_critical):
audio_data = self.voice_engine.synthesize(text, voice_name)
if audio_data:
audio_b64 = base64.b64encode(audio_data).decode("ascii")
await self._send_to_rvs({
"type": "audio",
"payload": {
"base64": audio_b64,
"mimeType": "audio/wav",
"voice": voice_name,
},
"timestamp": int(asyncio.get_event_loop().time() * 1000),
if getattr(self, 'tts_enabled', True) and should_speak(self.current_mode, is_critical):
tts_engine = getattr(self, 'tts_engine_type', 'piper')
if tts_engine == "xtts":
# XTTS: Ganzen Text senden, XTTS-Bridge teilt satzweise auf
xtts_voice = getattr(self, 'xtts_voice', '')
try:
await self._send_to_rvs({
"type": "xtts_request",
"payload": {
"text": text,
"voice": xtts_voice,
"language": "de",
"requestId": str(uuid.uuid4()),
},
"timestamp": int(asyncio.get_event_loop().time() * 1000),
})
logger.info("[core] XTTS-Request gesendet (%s): '%s'", xtts_voice or "default", text[:60])
except Exception as e:
logger.warning("[core] XTTS-Request fehlgeschlagen: %s — Fallback auf Piper", e)
# Fallback auf Piper
audio_data = self.voice_engine.synthesize(text, voice_name)
if audio_data:
audio_b64 = base64.b64encode(audio_data).decode("ascii")
await self._send_to_rvs({
"type": "audio",
"payload": {"base64": audio_b64, "mimeType": "audio/wav", "voice": voice_name},
"timestamp": int(asyncio.get_event_loop().time() * 1000),
})
else:
# Piper: Lokal rendern
audio_data = self.voice_engine.synthesize(text, voice_name)
if audio_data:
audio_b64 = base64.b64encode(audio_data).decode("ascii")
await self._send_to_rvs({
"type": "audio",
"payload": {
"base64": audio_b64,
"mimeType": "audio/wav",
"voice": voice_name,
},
"timestamp": int(asyncio.get_event_loop().time() * 1000),
})
logger.info("[core] TTS-Audio gesendet: %d bytes (%s)", len(audio_data), voice_name)
@@ -893,10 +997,22 @@ class ARIABridge:
retry_delay = min(retry_delay * 2, 30)
async def _rvs_heartbeat(self) -> None:
"""Sendet Heartbeats an den RVS damit die Verbindung offen bleibt."""
"""Sendet Heartbeats + WebSocket Pings an den RVS damit die Verbindung offen bleibt."""
while True:
await asyncio.sleep(25)
if self.ws_rvs and self.ws_rvs.open:
await asyncio.sleep(15)
if self.ws_rvs:
try:
# WebSocket Protocol-Level Ping (haelt TCP-Verbindung am Leben)
pong = await self.ws_rvs.ping()
await asyncio.wait_for(pong, timeout=10)
except Exception:
logger.warning("[rvs] Ping fehlgeschlagen — Verbindung tot, erzwinge Reconnect")
try:
await self.ws_rvs.close()
except Exception:
pass
self.ws_rvs = None
break
try:
await self.ws_rvs.send(json.dumps({
"type": "heartbeat",
@@ -925,7 +1041,109 @@ class ARIABridge:
payload = message.get("payload", {})
if msg_type == "chat":
# Text von der App → an aria-core
# Nur User-Nachrichten weiterleiten — ARIA/Diagnostic-Antworten ignorieren (sonst Loop!)
sender = payload.get("sender", "")
if sender in ("aria", "stt"):
return
elif msg_type == "xtts_response":
# XTTS-Audio vom Gaming-PC empfangen → an App weiterleiten
audio_b64 = payload.get("base64", "")
error = payload.get("error", "")
if error:
logger.warning("[rvs] XTTS Fehler: %s", error)
return
if audio_b64:
logger.info("[rvs] XTTS-Audio empfangen: %dKB", len(audio_b64) // 1365)
await self._send_to_rvs({
"type": "audio",
"payload": {
"base64": audio_b64,
"mimeType": payload.get("mimeType", "audio/wav"),
"voice": payload.get("voice", "xtts"),
},
"timestamp": int(asyncio.get_event_loop().time() * 1000),
})
return
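The `len(audio_b64) // 1365` used for the KB figure here (and in the file handlers below) relies on base64 encoding 3 payload bytes as 4 characters, so one KiB of decoded data is about 4/3 × 1024 ≈ 1365 base64 characters. A quick check of that approximation:

```python
import base64

payload = b"\x00" * (10 * 1024)  # 10 KiB of raw (decoded) bytes
b64 = base64.b64encode(payload).decode("ascii")

# base64 packs 3 payload bytes into 4 characters, so one KiB of
# decoded data is roughly 4/3 * 1024 ≈ 1365 base64 characters
approx_kb = len(b64) // 1365
exact_kb = len(base64.b64decode(b64)) // 1024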
elif msg_type == "tts_request":
# App fordert TTS-Audio fuer einen Text an (Play-Button)
text = payload.get("text", "")
requested_voice = payload.get("voice", "")
if text:
voice_name = requested_voice or self.voice_engine.select_voice(text)
audio_data = self.voice_engine.synthesize(text, voice_name)
if audio_data:
audio_b64 = base64.b64encode(audio_data).decode("ascii")
try:
await self._send_to_rvs({
"type": "audio",
"payload": {
"base64": audio_b64,
"mimeType": "audio/wav",
"voice": voice_name,
},
"timestamp": int(asyncio.get_event_loop().time() * 1000),
})
logger.info("[rvs] TTS on-demand: %d bytes (%s)", len(audio_data), voice_name)
except Exception as e:
logger.warning("[rvs] TTS on-demand senden fehlgeschlagen: %s", e)
return
elif msg_type == "config":
# Konfiguration von App/Diagnostic empfangen + persistent speichern
changed = False
if "defaultVoice" in payload:
new_voice = payload["defaultVoice"]
if new_voice in self.voice_engine.voices:
self.voice_engine.default_voice = new_voice
logger.info("[rvs] Standard-Stimme gewechselt: %s", new_voice)
changed = True
if "highlightVoice" in payload:
new_voice = payload["highlightVoice"]
if new_voice in self.voice_engine.voices:
self.voice_engine.highlight_voice = new_voice
logger.info("[rvs] Highlight-Stimme gewechselt: %s", new_voice)
changed = True
if "ttsEnabled" in payload:
self.tts_enabled = bool(payload["ttsEnabled"])
logger.info("[rvs] TTS %s", "aktiviert" if self.tts_enabled else "deaktiviert")
changed = True
if "ttsEngine" in payload:
self.tts_engine_type = payload["ttsEngine"]
logger.info("[rvs] TTS-Engine: %s", self.tts_engine_type)
changed = True
if "xttsVoice" in payload:
self.xtts_voice = payload["xttsVoice"]
logger.info("[rvs] XTTS-Stimme: %s", self.xtts_voice)
changed = True
if "speedRamona" in payload:
self.voice_engine.speech_speed["ramona"] = max(0.3, min(2.0, float(payload["speedRamona"])))
logger.info("[rvs] Speed Ramona: %.1f", self.voice_engine.speech_speed["ramona"])
changed = True
if "speedThorsten" in payload:
self.voice_engine.speech_speed["thorsten"] = max(0.3, min(2.0, float(payload["speedThorsten"])))
logger.info("[rvs] Speed Thorsten: %.1f", self.voice_engine.speech_speed["thorsten"])
changed = True
# Persistent speichern in Shared Volume
if changed:
try:
os.makedirs("/shared/config", exist_ok=True)
config_data = {
"defaultVoice": self.voice_engine.default_voice,
"highlightVoice": self.voice_engine.highlight_voice,
"ttsEnabled": getattr(self, "tts_enabled", True),
"ttsEngine": getattr(self, "tts_engine_type", "piper"),
"xttsVoice": getattr(self, "xtts_voice", ""),
"speedRamona": self.voice_engine.speech_speed.get("ramona", 1.0),
"speedThorsten": self.voice_engine.speech_speed.get("thorsten", 1.0),
}
with open("/shared/config/voice_config.json", "w") as f:
json.dump(config_data, f, indent=2)
logger.info("[rvs] Voice-Config gespeichert: %s", config_data)
except Exception as e:
logger.warning("[rvs] Config speichern fehlgeschlagen: %s", e)
return
text = payload.get("text", "")
if text:
logger.info("[rvs] App-Chat: '%s'", text[:80])
@@ -954,10 +1172,99 @@ class ARIABridge:
await self.ws_core.send(raw_message)
elif msg_type == "file":
# Datei von der App → an aria-core
logger.info("[rvs] Datei empfangen: %s", payload.get("name", "?"))
if self.ws_core:
await self.ws_core.send(raw_message)
# Datei von der App → als Text-Nachricht an aria-core
file_name = payload.get("name", "unbekannt")
file_type = payload.get("type", "")
file_b64 = payload.get("base64", "")
file_size = payload.get("size", 0)
width = payload.get("width", 0)
height = payload.get("height", 0)
logger.info("[rvs] Datei empfangen: %s (%s, %dKB)",
file_name, file_type, len(file_b64) // 1365 if file_b64 else 0)
# Shared Volume: /shared/ ist in Bridge UND aria-core gemountet
SHARED_DIR = "/shared/uploads"
os.makedirs(SHARED_DIR, exist_ok=True)
if file_b64 and file_type.startswith("image/"):
# Bild in Shared Volume speichern
ext = ".jpg" if "jpeg" in file_type or "jpg" in file_type else ".png"
safe_name = f"img_{int(asyncio.get_event_loop().time())}_{file_name.replace('/', '_')}"
file_path = os.path.join(SHARED_DIR, safe_name if safe_name.endswith(ext) else safe_name + ext)
with open(file_path, "wb") as f:
f.write(base64.b64decode(file_b64))
size_kb = len(file_b64) // 1365
logger.info("[rvs] Bild gespeichert: %s (%dKB)", file_path, size_kb)
# ERST an aria-core senden (wichtigster Schritt)
text = (f"Stefan hat dir ein Bild geschickt: {file_name}"
f"{f' ({width}x{height}px)' if width else ''}"
f", {size_kb}KB."
f" Das Bild liegt unter: {file_path}"
f" Warte auf Stefans Anweisung was du damit tun sollst.")
await self.send_to_core(text, source="app-file")
# Dann App informieren (optional, darf nicht crashen)
try:
await self._send_to_rvs({
"type": "file_saved",
"payload": {"name": file_name, "serverPath": file_path, "mimeType": file_type},
"timestamp": int(asyncio.get_event_loop().time() * 1000),
})
except Exception as e:
logger.warning("[rvs] file_saved konnte nicht an App gesendet werden: %s", e)
elif file_b64:
# Andere Datei in Shared Volume speichern
safe_name = f"file_{int(asyncio.get_event_loop().time())}_{file_name.replace('/', '_')}"
file_path = os.path.join(SHARED_DIR, safe_name)
with open(file_path, "wb") as f:
f.write(base64.b64decode(file_b64))
size_kb = len(file_b64) // 1365
logger.info("[rvs] Datei gespeichert: %s (%dKB)", file_path, size_kb)
# ERST an aria-core senden
text = (f"Stefan hat dir eine Datei geschickt: {file_name}"
f" ({file_type}, {size_kb}KB)."
f" Die Datei liegt unter: {file_path}"
f" Warte auf Stefans Anweisung was du damit tun sollst.")
await self.send_to_core(text, source="app-file")
try:
await self._send_to_rvs({
"type": "file_saved",
"payload": {"name": file_name, "serverPath": file_path, "mimeType": file_type},
"timestamp": int(asyncio.get_event_loop().time() * 1000),
})
except Exception as e:
logger.warning("[rvs] file_saved konnte nicht an App gesendet werden: %s", e)
else:
text = f"Stefan hat eine Datei gesendet ({file_name}, {file_type}) aber die Daten sind leer angekommen."
await self.send_to_core(text, source="app-file")
elif msg_type == "file_request":
# App fordert eine Datei an (Re-Download nach Cache-Leerung)
server_path = payload.get("serverPath", "")
req_id = payload.get("requestId", "")
if not server_path or not server_path.startswith("/shared/"):
logger.warning("[rvs] Ungueltiger file_request: %s", server_path)
return
if not os.path.isfile(server_path):
logger.warning("[rvs] Datei nicht gefunden: %s", server_path)
await self._send_to_rvs({
"type": "file_response",
"payload": {"requestId": req_id, "error": "Datei nicht gefunden"},
"timestamp": int(asyncio.get_event_loop().time() * 1000),
})
return
with open(server_path, "rb") as f:
file_b64 = base64.b64encode(f.read()).decode("ascii")
logger.info("[rvs] Re-Download: %s (%dKB)", server_path, len(file_b64) // 1365)
await self._send_to_rvs({
"type": "file_response",
"payload": {
"requestId": req_id,
"serverPath": server_path,
"base64": file_b64,
"name": os.path.basename(server_path),
},
"timestamp": int(asyncio.get_event_loop().time() * 1000),
})
elif msg_type == "audio":
# Audio von der App → decodieren → STT → an aria-core
@@ -1017,7 +1324,21 @@ class ARIABridge:
if text.strip():
logger.info("[rvs] STT Ergebnis: '%s'", text[:80])
# ERST an aria-core senden (wichtigster Schritt)
await self.send_to_core(text, source="app-voice")
# STT-Text an RVS senden (fuer Anzeige in App + Diagnostic)
# sender="stt" damit Bridge es ignoriert (kein Loop)
try:
await self._send_to_rvs({
"type": "chat",
"payload": {
"text": text,
"sender": "stt",
},
"timestamp": int(asyncio.get_event_loop().time() * 1000),
})
except Exception as e:
logger.warning("[rvs] STT-Text konnte nicht an RVS gesendet werden: %s", e)
else:
logger.info("[rvs] Keine Sprache erkannt — ignoriert")
@@ -1033,14 +1354,28 @@ class ARIABridge:
pass
async def _send_to_rvs(self, message: dict) -> None:
"""Sendet eine Nachricht an die App (via RVS)."""
if self.ws_rvs is None or not self.ws_rvs.open:
"""Sendet eine Nachricht an die App (via RVS) mit Verbindungs-Check."""
if self.ws_rvs is None:
return
# Ping-Check: Verbindung wirklich aktiv?
try:
pong = await self.ws_rvs.ping()
await asyncio.wait_for(pong, timeout=5)
except Exception:
logger.warning("[rvs] Ping fehlgeschlagen — Verbindung tot, erzwinge Reconnect")
try:
await self.ws_rvs.close()
except Exception:
pass
self.ws_rvs = None
# Reconnect wird vom connect_to_rvs Loop uebernommen
return
try:
await self.ws_rvs.send(json.dumps(message))
except Exception:
logger.exception("[rvs] Sendefehler")
logger.warning("[rvs] Sendefehler — RVS nicht erreichbar")
# ── Log-Streaming an die App ─────────────────────────────
@@ -1112,8 +1447,10 @@ class ARIABridge:
logger.info("Keine Sprache erkannt — ignoriert")
except sd.PortAudioError:
logger.error("Audio-Geraet nicht verfuegbar — warte 5 Sekunden")
await asyncio.sleep(5)
if not hasattr(self, '_audio_warned'):
logger.warning("Audio-Geraet nicht verfuegbar — lokales Mikrofon deaktiviert (kein Spam mehr)")
self._audio_warned = True
await asyncio.sleep(60) # 60s statt 5s — spart Log-Spam
except Exception:
logger.exception("Fehler in der Audio-Schleife")
await asyncio.sleep(1)
+522 -9
@@ -196,8 +196,15 @@
<!-- Chat Test -->
<div class="grid">
<div class="card full">
<h2>Chat Test</h2>
<div style="display:flex;align-items:center;justify-content:space-between;margin-bottom:8px;">
<h2 style="margin:0;">Chat Test</h2>
<button class="btn secondary" onclick="toggleChatFullscreen()" id="btn-chat-fs" style="padding:4px 10px;font-size:11px;">Vollbild</button>
</div>
<div class="chat-box" id="chat-box"></div>
<div id="thinking-indicator" style="display:none;padding:6px 10px;font-size:12px;color:#FFD60A;background:#1E1E2E;border-radius:0 0 6px 6px;margin-top:-8px;margin-bottom:8px;display:flex;align-items:center;justify-content:space-between;">
<span><span style="animation:pulse 1s infinite;">&#x1F4AD;</span> <span id="thinking-text">ARIA denkt...</span></span>
<button class="btn secondary" onclick="cancelRequest()" style="padding:2px 10px;font-size:11px;color:#FF3B30;border-color:#FF3B30;">Abbrechen</button>
</div>
<div class="input-row">
<input type="text" id="chat-input" placeholder="Nachricht an ARIA...">
<button class="btn" id="btn-gw" onclick="testGateway()">Gateway senden</button>
@@ -206,6 +213,23 @@
</div>
</div>
<!-- Chat Vollbild Modal -->
<div id="chat-fullscreen" style="display:none;position:fixed;top:0;left:0;width:100vw;height:100vh;background:#0D0D1A;z-index:1000;padding:16px;flex-direction:column;">
<div style="display:flex;align-items:center;justify-content:space-between;margin-bottom:12px;">
<h2 style="margin:0;color:#0096FF;">ARIA Chat</h2>
<button class="btn secondary" onclick="toggleChatFullscreen()" style="padding:6px 14px;">Schliessen</button>
</div>
<div id="chat-box-fs" class="chat-box" style="flex:1;max-height:none;min-height:0;overflow-y:auto;"></div>
<div id="thinking-indicator-fs" style="display:none;padding:6px 10px;font-size:12px;color:#FFD60A;background:#1E1E2E;border-radius:6px;margin-top:4px;">
<span style="animation:pulse 1s infinite;">&#x1F4AD;</span> <span id="thinking-text-fs">ARIA denkt...</span>
</div>
<div class="input-row" style="margin-top:8px;">
<input type="text" id="chat-input-fs" placeholder="Nachricht an ARIA..." onkeydown="if(event.key==='Enter'){testRVSFS();event.preventDefault();}">
<button class="btn" onclick="testGatewayFS()">Gateway senden</button>
<button class="btn" onclick="testRVSFS()">Via RVS senden</button>
</div>
</div>
<!-- Session + Brain Viewer -->
<div class="grid" style="grid-template-columns: 1fr 1fr;">
<div class="card">
@@ -260,6 +284,7 @@
<button class="tab-btn" data-tab="bridge" onclick="switchTab('bridge')">Bridge <span class="tab-count" id="count-bridge">0</span></button>
<button class="tab-btn" data-tab="server" onclick="switchTab('server')">Server <span class="tab-count" id="count-server">0</span></button>
<button class="tab-btn" data-tab="pipeline" onclick="switchTab('pipeline')" style="margin-left:auto;border-color:#0096FF44;color:#0096FF">Pipeline <span class="tab-count" id="count-pipeline">0</span></button>
<button class="tab-btn" data-tab="tts" onclick="switchTab('tts')" style="border-color:#34C75944;color:#34C759">TTS</button>
</div>
</div>
<div class="log-panel">
@@ -279,6 +304,36 @@
<div class="log-box hidden" id="log-bridge"></div>
<div class="log-box hidden" id="log-server"></div>
<div class="log-box hidden" id="log-pipeline"></div>
<div class="log-box hidden" id="log-tts" style="padding:12px;">
<h3 style="color:#34C759;margin:0 0 12px;">TTS Diagnose</h3>
<div style="display:grid;grid-template-columns:1fr 1fr;gap:8px;margin-bottom:12px;">
<div style="background:#1E1E2E;padding:8px;border-radius:6px;">
<div style="color:#8888AA;font-size:10px;text-transform:uppercase;">Standard-Stimme</div>
<div style="color:#fff;font-size:14px;margin-top:4px;" id="tts-default-voice">Ramona</div>
</div>
<div style="background:#1E1E2E;padding:8px;border-radius:6px;">
<div style="color:#8888AA;font-size:10px;text-transform:uppercase;">Highlight-Stimme</div>
<div style="color:#fff;font-size:14px;margin-top:4px;" id="tts-highlight-voice">Thorsten</div>
</div>
<div style="background:#1E1E2E;padding:8px;border-radius:6px;">
<div style="color:#8888AA;font-size:10px;text-transform:uppercase;">Status</div>
<div style="font-size:14px;margin-top:4px;" id="tts-status">Unbekannt</div>
</div>
<div style="background:#1E1E2E;padding:8px;border-radius:6px;">
<div style="color:#8888AA;font-size:10px;text-transform:uppercase;">Letzter Fehler</div>
<div style="color:#FF6B6B;font-size:12px;margin-top:4px;word-break:break-all;" id="tts-last-error">-</div>
</div>
</div>
<div style="margin-bottom:8px;">
<input type="text" id="tts-test-text" value="Hallo Stefan, ich bin ARIA." placeholder="Test-Text..." style="background:#1E1E2E;border:1px solid #2A2A3E;border-radius:6px;padding:8px;color:#fff;font-size:13px;width:100%;box-sizing:border-box;">
</div>
<div style="display:flex;gap:8px;">
<button class="btn" onclick="testTTS('ramona')" style="flex:1;">Ramona testen</button>
<button class="btn" onclick="testTTS('thorsten')" style="flex:1;">Thorsten testen</button>
<button class="btn secondary" onclick="checkTTSStatus()" style="flex:1;">Status pruefen</button>
</div>
<div id="tts-log" style="margin-top:12px;max-height:200px;overflow-y:auto;font-size:11px;font-family:monospace;color:#8888AA;"></div>
</div>
</div>
</div>
@@ -317,6 +372,145 @@
<!-- ══════ TAB: Einstellungen ══════ -->
<div id="tab-settings" class="main-tab">
<!-- Betriebsmodus -->
<div class="settings-section">
<h2>Betriebsmodus</h2>
<div class="card" style="max-width:500px;">
<div id="mode-selector" style="display:grid;grid-template-columns:1fr 1fr;gap:8px;">
<button class="btn mode-btn" data-mode="normal" onclick="setMode('normal')" style="background:#1E1E2E;border:2px solid transparent;">
<span style="font-size:18px;">&#x1F7E2;</span> Normal<br><span style="font-size:10px;color:#8888AA;">Hoert zu, antwortet, spricht</span>
</button>
<button class="btn mode-btn" data-mode="dnd" onclick="setMode('dnd')" style="background:#1E1E2E;border:2px solid transparent;">
<span style="font-size:18px;">&#x1F534;</span> Nicht stoeren<br><span style="font-size:10px;color:#8888AA;">Nur Kritikalarme</span>
</button>
<button class="btn mode-btn" data-mode="whisper" onclick="setMode('whisper')" style="background:#1E1E2E;border:2px solid transparent;">
<span style="font-size:18px;">&#x1F7E1;</span> Fluestern<br><span style="font-size:10px;color:#8888AA;">Nur Text, keine Sprache</span>
</button>
<button class="btn mode-btn" data-mode="hangar" onclick="setMode('hangar')" style="background:#1E1E2E;border:2px solid transparent;">
<span style="font-size:18px;">&#x2708;&#xFE0F;</span> Hangar<br><span style="font-size:10px;color:#8888AA;">Nur wichtige Meldungen</span>
</button>
<button class="btn mode-btn" data-mode="gaming" onclick="setMode('gaming')" style="background:#1E1E2E;border:2px solid transparent;grid-column:1/-1;">
<span style="font-size:18px;">&#x1F3AE;</span> Gaming<br><span style="font-size:10px;color:#8888AA;">Nur direkte Fragen</span>
</button>
</div>
<div style="margin-top:8px;font-size:11px;color:#555570;" id="mode-status">Aktueller Modus: Normal</div>
</div>
</div>
<!-- Stimmen -->
<div class="settings-section">
<h2>Sprachausgabe</h2>
<div class="card" style="max-width:500px;">
<!-- TTS aktiv (global fuer alle Engines) -->
<div style="display:flex;align-items:center;gap:12px;margin-bottom:12px;">
<label style="color:#8888AA;font-size:12px;">TTS aktiv:</label>
<label class="toggle"><input type="checkbox" id="diag-tts-enabled" checked onchange="sendVoiceConfig()"><span class="slider"></span></label>
</div>
<!-- TTS Engine Auswahl -->
<div style="display:flex;align-items:center;gap:12px;margin-bottom:12px;">
<label style="color:#8888AA;font-size:12px;">TTS Engine:</label>
<select id="diag-tts-engine" onchange="sendVoiceConfig();toggleXTTSPanel()" style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">
<option value="piper">Piper (lokal, CPU, schnell)</option>
<option value="xtts">XTTS v2 (remote, GPU, natuerlich)</option>
</select>
</div>
<!-- Piper Stimmen (nur bei Engine=piper) -->
<div id="piper-panel">
<div style="display:flex;align-items:center;gap:12px;margin-bottom:12px;">
<label style="color:#8888AA;font-size:12px;">Standard-Stimme:</label>
<select id="diag-default-voice" onchange="sendVoiceConfig()" style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">
<option value="ramona">Ramona (weiblich)</option>
<option value="thorsten">Thorsten (maennlich)</option>
</select>
</div>
<div style="display:flex;align-items:center;gap:12px;margin-bottom:12px;">
<label style="color:#8888AA;font-size:12px;">Highlight-Stimme:</label>
<select id="diag-highlight-voice" onchange="sendVoiceConfig()" style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">
<option value="thorsten">Thorsten (maennlich)</option>
<option value="ramona">Ramona (weiblich)</option>
</select>
</div>
<div style="margin-bottom:4px;">
<label style="color:#8888AA;font-size:12px;">Ramona Speed: <span id="speed-ramona-label">1.0x</span></label>
</div>
<div style="display:flex;align-items:center;gap:8px;margin-bottom:12px;">
<span style="color:#555570;font-size:11px;">0.5x</span>
<input type="range" id="diag-speed-ramona" min="0.5" max="2.0" step="0.1" value="1.0"
oninput="document.getElementById('speed-ramona-label').textContent=this.value+'x'"
onchange="sendVoiceConfig()"
style="flex:1;accent-color:#0096FF;">
<span style="color:#555570;font-size:11px;">2.0x</span>
</div>
<div style="margin-bottom:4px;">
<label style="color:#8888AA;font-size:12px;">Thorsten Speed: <span id="speed-thorsten-label">1.0x</span></label>
</div>
<div style="display:flex;align-items:center;gap:8px;">
<span style="color:#555570;font-size:11px;">0.5x</span>
<input type="range" id="diag-speed-thorsten" min="0.5" max="2.0" step="0.1" value="1.0"
oninput="document.getElementById('speed-thorsten-label').textContent=this.value+'x'"
onchange="sendVoiceConfig()"
style="flex:1;accent-color:#0096FF;">
<span style="color:#555570;font-size:11px;">2.0x</span>
</div>
</div><!-- /piper-panel -->
<!-- XTTS Panel (nur bei Engine=xtts) -->
<div id="xtts-panel" style="display:none;">
<div style="display:flex;align-items:center;gap:12px;margin-bottom:12px;">
<label style="color:#8888AA;font-size:12px;">XTTS Stimme:</label>
<select id="diag-xtts-voice" onchange="sendVoiceConfig()" style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">
<option value="">Standard (XTTS Default)</option>
</select>
<button class="btn secondary" onclick="loadXTTSVoices()" style="padding:4px 10px;font-size:11px;">Laden</button>
</div>
<!-- Voice Cloning -->
<div style="background:#1E1E2E;border-radius:8px;padding:12px;margin-top:8px;">
<div style="color:#0096FF;font-size:13px;font-weight:600;margin-bottom:8px;">Stimme klonen</div>
<div style="color:#8888AA;font-size:11px;margin-bottom:8px;">
Lade ein oder mehrere Audio-Samples hoch (WAV/MP3, min. 6-10 Sekunden).
Mehrere Dateien werden automatisch zusammengefuegt.
</div>
<div style="margin-bottom:8px;">
<input type="text" id="xtts-clone-name" placeholder="Name fuer die Stimme..." style="background:#0D0D1A;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;color:#fff;font-size:13px;width:100%;box-sizing:border-box;">
</div>
<div style="margin-bottom:8px;">
<input type="file" id="xtts-clone-files" accept="audio/*" multiple style="color:#8888AA;font-size:12px;">
</div>
<div style="display:flex;gap:8px;">
<button class="btn" onclick="uploadVoiceSamples()" style="flex:1;">Stimme erstellen</button>
</div>
<div id="xtts-clone-status" style="font-size:11px;color:#555570;margin-top:6px;"></div>
</div>
<!-- XTTS Status -->
<div style="margin-top:8px;font-size:11px;color:#555570;" id="xtts-status">
XTTS-Server: Nicht verbunden (starte xtts/ auf dem Gaming-PC)
</div>
</div>
</div>
</div>
<!-- Highlight triggers -->
<div class="settings-section">
<h2>Highlight-Trigger</h2>
<div style="font-size:11px;color:#8888AA;margin-bottom:8px;">
Woerter die automatisch die Highlight-Stimme (Thorsten) ausloesen.
Eines pro Zeile. Aenderungen werden in der Bridge gespeichert.
</div>
<div class="card" style="max-width:500px;">
<textarea id="highlight-triggers" rows="8" style="width:100%;box-sizing:border-box;background:#1E1E2E;border:1px solid #2A2A3E;border-radius:6px;padding:8px;color:#fff;font-size:13px;font-family:monospace;resize:vertical;"
placeholder="Lade..."></textarea>
<div style="display:flex;gap:8px;margin-top:8px;">
<button class="btn" onclick="saveHighlightTriggers()" style="flex:1;">Speichern</button>
<button class="btn secondary" onclick="loadHighlightTriggers()" style="flex:1;">Neu laden</button>
</div>
<div id="trigger-status" style="font-size:11px;color:#555570;margin-top:6px;"></div>
</div>
</div>
<!-- Tool permissions -->
<div class="settings-section">
<h2>Tool-Berechtigungen</h2>
@@ -397,6 +591,7 @@
bridge: document.getElementById('log-bridge'),
server: document.getElementById('log-server'),
pipeline: document.getElementById('log-pipeline'),
tts: document.getElementById('log-tts'),
};
// Scroll pause per active tab
@@ -490,6 +685,97 @@
if (msg.type === 'state') { updateState(msg.state); return; }
if (msg.type === 'log') { addLog(msg.entry.level, msg.entry.source, msg.entry.message, msg.entry.ts); return; }
if (msg.type === 'tts_result') {
if (msg.ok) {
ttsLog(`\u2705 ${msg.voice}: ${msg.duration}ms, ${msg.size} bytes`);
document.getElementById('tts-status').textContent = 'OK';
document.getElementById('tts-status').style.color = '#34C759';
} else {
ttsLog(`\u274C Fehler: ${msg.error}`);
document.getElementById('tts-status').textContent = 'Fehler';
document.getElementById('tts-status').style.color = '#FF3B30';
document.getElementById('tts-last-error').textContent = msg.error;
}
return;
}
if (msg.type === 'tts_status') {
document.getElementById('tts-default-voice').textContent = msg.defaultVoice || '?';
document.getElementById('tts-highlight-voice').textContent = msg.highlightVoice || '?';
document.getElementById('tts-status').textContent = msg.ok ? 'OK' : 'Fehler';
document.getElementById('tts-status').style.color = msg.ok ? '#34C759' : '#FF3B30';
if (msg.voices) ttsLog(`Stimmen: ${msg.voices.join(', ')}`);
if (msg.error) { document.getElementById('tts-last-error').textContent = msg.error; ttsLog(`Fehler: ${msg.error}`); }
else { document.getElementById('tts-last-error').textContent = '-'; ttsLog('TTS OK'); }
return;
}
if (msg.type === 'agent_activity') {
updateThinkingIndicator(msg);
return;
}
if (msg.type === 'xtts_voices_list') {
const select = document.getElementById('diag-xtts-voice');
// Keep the first option (default)
while (select.options.length > 1) select.remove(1);
for (const v of (msg.payload?.voices || [])) {
const opt = document.createElement('option');
opt.value = v.name;
opt.textContent = `${v.name} (${(v.size / 1024).toFixed(0)}KB)`;
select.appendChild(opt);
}
document.getElementById('xtts-status').textContent = `XTTS: ${msg.payload?.voices?.length || 0} Stimme(n) verfuegbar`;
document.getElementById('xtts-status').style.color = '#34C759';
return;
}
if (msg.type === 'xtts_voice_saved') {
document.getElementById('xtts-clone-status').textContent = `Stimme "${msg.payload?.name}" gespeichert!`;
document.getElementById('xtts-clone-status').style.color = '#34C759';
loadXTTSVoices(); // reload the list
return;
}
if (msg.type === 'voice_config') {
document.getElementById('diag-default-voice').value = msg.defaultVoice || 'ramona';
document.getElementById('diag-highlight-voice').value = msg.highlightVoice || 'thorsten';
document.getElementById('diag-tts-enabled').checked = msg.ttsEnabled !== false;
const sr = msg.speedRamona || 1.0;
const st = msg.speedThorsten || 1.0;
document.getElementById('diag-speed-ramona').value = sr;
document.getElementById('speed-ramona-label').textContent = sr + 'x';
document.getElementById('diag-speed-thorsten').value = st;
document.getElementById('speed-thorsten-label').textContent = st + 'x';
document.getElementById('diag-tts-engine').value = msg.ttsEngine || 'piper';
// Set the XTTS voice; add the option if it does not exist yet
const xttsSelect = document.getElementById('diag-xtts-voice');
const xttsVoice = msg.xttsVoice || '';
if (xttsVoice && !Array.from(xttsSelect.options).some(o => o.value === xttsVoice)) {
const opt = document.createElement('option');
opt.value = xttsVoice;
opt.textContent = xttsVoice;
xttsSelect.appendChild(opt);
}
xttsSelect.value = xttsVoice;
toggleXTTSPanel();
return;
}
if (msg.type === 'trigger_list') {
const textarea = document.getElementById('highlight-triggers');
textarea.value = (msg.triggers || []).join('\n');
document.getElementById('trigger-status').textContent = msg.triggers.length + ' Trigger geladen';
document.getElementById('trigger-status').style.color = '#8888AA';
return;
}
if (msg.type === 'watchdog') {
const colors = { warning: '#FFD60A', fixing: '#FF9500', fixed: '#34C759', error: '#FF3B30' };
const color = colors[msg.status] || '#FFD60A';
addChat('error', `\u26A0\uFE0F Watchdog: ${msg.message}`, `system — ${msg.status}`);
addLog('warn', 'server', `Watchdog: ${msg.message}`);
return;
}
if (msg.type === 'chat_final') {
addChat('received', msg.text, 'chat:final');
return;
@@ -501,7 +787,12 @@
}
if (msg.type === 'rvs_chat') {
const p = msg.msg.payload || {};
addChat('received', p.text || '?', `via RVS (${p.sender || '?'})`);
const sender = p.sender || '?';
// ARIA replies already arrive via the gateway (chat:final); do not show them again via RVS
if (sender === 'aria') return;
const chatType = 'sent';
const label = sender === 'stt' ? '\uD83C\uDFA4 Spracheingabe' : `via RVS (${sender})`;
addChat(chatType, p.text || '?', label);
return;
}
if (msg.type === 'proxy_result') {
@@ -854,13 +1145,230 @@
}
function addChat(type, text, meta) {
const el = document.createElement('div');
el.className = `chat-msg ${type}`;
const escaped = escapeHtml(text);
const linked = linkifyText(escaped);
el.innerHTML = `${linked}<div class="meta">${escapeHtml(meta)}${new Date().toLocaleTimeString('de-DE')}</div>`;
chatBox.appendChild(el);
chatBox.scrollTop = chatBox.scrollHeight;
let linked = linkifyText(escaped);
// Show /shared/uploads/ paths as inline images
linked = linked.replace(/\/shared\/uploads\/[^\s<"]+\.(jpg|jpeg|png|gif)/gi, (match) => {
return `<a href="${match}" target="_blank">${match}</a><img src="${match}" class="chat-media" onclick="openLightbox('image','${match}')" onerror="this.style.display='none'">`;
});
const html = `${linked}<div class="meta">${escapeHtml(meta)}${new Date().toLocaleTimeString('de-DE')}</div>`;
// Hide the thinking indicator when a new message arrives
updateThinkingIndicator({ activity: 'idle' });
// Write into both chat boxes (normal + fullscreen)
for (const box of [chatBox, document.getElementById('chat-box-fs')]) {
if (!box) continue;
const el = document.createElement('div');
el.className = `chat-msg ${type}`;
el.innerHTML = html;
box.appendChild(el);
box.scrollTop = box.scrollHeight;
}
}
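For reference, the upload-path regex used in `addChat` above can be exercised in isolation. This is a standalone sketch (the sample strings and variable names are illustrative, not taken from the repo):

```javascript
// Matches /shared/uploads/... paths ending in an image extension;
// [^\s<"] stops at whitespace, '<' and '"' so it is safe inside HTML.
const uploadRe = /\/shared\/uploads\/[^\s<"]+\.(jpg|jpeg|png|gif)/gi;

const html = 'see /shared/uploads/2026/cat.png and /tmp/other.png';
console.log(html.match(uploadRe));
// [ '/shared/uploads/2026/cat.png' ]
```

Only paths with the literal `/shared/uploads/` prefix match, so other file URLs in the chat text are left untouched.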
let chatFullscreen = false;
function toggleChatFullscreen() {
const modal = document.getElementById('chat-fullscreen');
chatFullscreen = !chatFullscreen;
if (chatFullscreen) {
modal.style.display = 'flex';
// Sync the chat contents
const fsBox = document.getElementById('chat-box-fs');
fsBox.innerHTML = chatBox.innerHTML;
fsBox.scrollTop = fsBox.scrollHeight;
document.getElementById('chat-input-fs').focus();
} else {
modal.style.display = 'none';
}
}
function testGatewayFS() {
const input = document.getElementById('chat-input-fs');
const text = input.value.trim();
if (!text) return;
addChat('sent', text, 'Gateway direkt');
send({ action: 'test_gateway', text });
input.value = '';
}
function testRVSFS() {
const input = document.getElementById('chat-input-fs');
const text = input.value.trim();
if (!text) return;
addChat('sent', text, 'via RVS');
send({ action: 'test_rvs', text });
input.value = '';
}
// Escape closes the fullscreen chat
document.addEventListener('keydown', (e) => {
if (e.key === 'Escape' && chatFullscreen) toggleChatFullscreen();
});
// ── Thinking indicator ─────────────────────────────
let thinkingTimeout = null;
const TOOL_LABELS = {
'Bash': '\uD83D\uDDA5\uFE0F Shell-Befehl',
'WebFetch': '\uD83C\uDF10 Webseite abrufen',
'WebSearch': '\uD83D\uDD0D Suche',
'Read': '\uD83D\uDCC4 Datei lesen',
'Write': '\u270D\uFE0F Datei schreiben',
'Edit': '\u270D\uFE0F Datei bearbeiten',
'Grep': '\uD83D\uDD0D Code durchsuchen',
'Glob': '\uD83D\uDCC1 Dateien suchen',
'Agent': '\uD83E\uDD16 Sub-Agent',
};
function updateThinkingIndicator(msg) {
const indicators = [
document.getElementById('thinking-indicator'),
document.getElementById('thinking-indicator-fs'),
];
const texts = [
document.getElementById('thinking-text'),
document.getElementById('thinking-text-fs'),
];
if (msg.activity === 'idle') {
indicators.forEach(el => { if (el) el.style.display = 'none'; });
if (thinkingTimeout) { clearTimeout(thinkingTimeout); thinkingTimeout = null; }
return;
}
let label = 'ARIA denkt...';
if (msg.activity === 'tool' && msg.tool) {
label = TOOL_LABELS[msg.tool] || `\uD83D\uDD27 ${msg.tool}`;
} else if (msg.activity === 'assistant') {
label = 'ARIA schreibt...';
}
indicators.forEach(el => { if (el) el.style.display = 'block'; });
texts.forEach(el => { if (el) el.textContent = label; });
// Auto-hide after 2 min (in case the idle event is missed; ARIA works for at most 15 min)
if (thinkingTimeout) clearTimeout(thinkingTimeout);
thinkingTimeout = setTimeout(() => {
indicators.forEach(el => { if (el) el.style.display = 'none'; });
}, 120000);
}
// ── XTTS Panel ─────────────────────────────
function toggleXTTSPanel() {
const engine = document.getElementById('diag-tts-engine').value;
document.getElementById('piper-panel').style.display = engine === 'piper' ? 'block' : 'none';
document.getElementById('xtts-panel').style.display = engine === 'xtts' ? 'block' : 'none';
if (engine === 'xtts') loadXTTSVoices();
}
function loadXTTSVoices() {
send({ action: 'xtts_list_voices' });
}
function arrayBufferToBase64(buffer) {
const bytes = new Uint8Array(buffer);
let binary = '';
for (let i = 0; i < bytes.length; i += 8192) {
binary += String.fromCharCode.apply(null, bytes.subarray(i, i + 8192));
}
return btoa(binary);
}
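The 8192-byte chunking above works around a real engine limit: calling `String.fromCharCode.apply(null, hugeArray)` with one enormous argument list can throw a RangeError. A standalone sketch of the same approach (the helper name is illustrative), cross-checked against Node's `Buffer` encoder:

```javascript
// Chunked base64 encoding: convert the bytes to a binary string in
// 8192-byte slices to stay under the engine's argument-count limit,
// then hand the result to btoa() (global in browsers and Node >= 16).
function toBase64Chunked(buffer) {
  const bytes = new Uint8Array(buffer);
  let binary = '';
  for (let i = 0; i < bytes.length; i += 8192) {
    binary += String.fromCharCode.apply(null, bytes.subarray(i, i + 8192));
  }
  return btoa(binary);
}

// Cross-check against Node's native base64 encoder:
const sample = new Uint8Array(100000).map((_, i) => i % 256);
console.log(toBase64Chunked(sample.buffer) === Buffer.from(sample).toString('base64')); // true
```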
async function uploadVoiceSamples() {
const name = document.getElementById('xtts-clone-name').value.trim();
const files = document.getElementById('xtts-clone-files').files;
if (!name) { alert('Bitte einen Namen eingeben'); return; }
if (!files || files.length === 0) { alert('Bitte Audio-Dateien auswaehlen'); return; }
if (files.length > 10) { alert('Maximal 10 Dateien'); return; }
const status = document.getElementById('xtts-clone-status');
status.textContent = `Lade ${files.length} Datei(en)...`;
status.style.color = '#FFD60A';
try {
const samples = [];
for (let i = 0; i < files.length; i++) {
status.textContent = `Lese Datei ${i + 1}/${files.length}: ${files[i].name}...`;
const buffer = await files[i].arrayBuffer();
const base64 = arrayBufferToBase64(buffer);
samples.push({ base64, name: files[i].name, size: files[i].size });
}
const totalSize = samples.reduce((s, f) => s + f.size, 0);
status.textContent = `Sende ${samples.length} Sample(s) (${(totalSize / 1024).toFixed(0)}KB)...`;
send({ action: 'voice_upload', name, samples });
status.textContent = `Gesendet — warte auf Bestaetigung vom XTTS-Server...`;
} catch (err) {
status.textContent = `Fehler: ${err.message}`;
status.style.color = '#FF3B30';
}
}
// ── Cancel ──────────────────────────────
function cancelRequest() {
send({ action: 'cancel_request' });
updateThinkingIndicator({ activity: 'idle' });
addChat('error', 'Anfrage abgebrochen', 'system');
}
// ── Voice config ──────────────────────────
function sendVoiceConfig() {
const defaultVoice = document.getElementById('diag-default-voice').value;
const highlightVoice = document.getElementById('diag-highlight-voice').value;
const ttsEnabled = document.getElementById('diag-tts-enabled').checked;
const speedRamona = parseFloat(document.getElementById('diag-speed-ramona').value);
const speedThorsten = parseFloat(document.getElementById('diag-speed-thorsten').value);
const ttsEngine = document.getElementById('diag-tts-engine').value;
const xttsVoice = document.getElementById('diag-xtts-voice').value;
send({ action: 'send_voice_config', defaultVoice, highlightVoice, ttsEnabled, speedRamona, speedThorsten, ttsEngine, xttsVoice });
}
// ── Highlight triggers ────────────────────────
function loadHighlightTriggers() {
send({ action: 'get_triggers' });
}
function saveHighlightTriggers() {
const text = document.getElementById('highlight-triggers').value;
const triggers = text.split('\n').map(t => t.trim()).filter(t => t.length > 0);
send({ action: 'save_triggers', triggers });
document.getElementById('trigger-status').textContent = 'Gespeichert (' + triggers.length + ' Trigger)';
document.getElementById('trigger-status').style.color = '#34C759';
}
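The trigger normalization in `saveHighlightTriggers` above (split on newlines, trim, drop blanks) as a tiny standalone helper; the function name and sample triggers are illustrative:

```javascript
// One trigger per line; surrounding whitespace is trimmed and
// empty lines are dropped, matching saveHighlightTriggers above.
function parseTriggers(text) {
  return text.split('\n').map(t => t.trim()).filter(t => t.length > 0);
}

console.log(parseTriggers('clutch\n  ace  \n\npentakill\n'));
// [ 'clutch', 'ace', 'pentakill' ]
```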
// When switching to the Settings tab: load the triggers
const origSwitchMainTab = typeof switchMainTab === 'function' ? switchMainTab : null;
// ── Mode switching ────────────────────────────
let currentMode = 'normal';
const MODE_LABELS = { normal: 'Normal', dnd: 'Nicht stoeren', whisper: 'Fluestern', hangar: 'Hangar', gaming: 'Gaming' };
function setMode(mode) {
currentMode = mode;
// Visual feedback
document.querySelectorAll('.mode-btn').forEach(btn => {
btn.style.borderColor = btn.dataset.mode === mode ? '#0096FF' : 'transparent';
});
document.getElementById('mode-status').textContent = `Aktueller Modus: ${MODE_LABELS[mode] || mode}`;
// Send to the bridge via RVS
sendToRVS(`ARIA, ${MODE_LABELS[mode]}-Modus`, false);
log("info", "server", `Modus gewechselt: ${mode}`);
}
// ── TTS diagnostics ─────────────────────────────
function ttsLog(msg) {
const el = document.getElementById('tts-log');
const time = new Date().toLocaleTimeString('de-DE');
el.innerHTML += `<div>[${time}] ${escapeHtml(msg)}</div>`;
el.scrollTop = el.scrollHeight;
}
function testTTS(voice) {
const text = document.getElementById('tts-test-text').value.trim();
if (!text) return;
ttsLog(`Teste ${voice}: "${text}"...`);
send({ action: 'test_tts', voice, text });
}
function checkTTSStatus() {
ttsLog('Pruefe TTS-Status...');
send({ action: 'check_tts' });
}
function openLightbox(mediaType, url) {
@@ -932,7 +1440,7 @@
// Enter key sends via gateway
document.getElementById('chat-input').addEventListener('keydown', (e) => {
if (e.key === 'Enter') testGateway();
if (e.key === 'Enter') testRVS();
});
// Escape closes the lightbox
@@ -1261,6 +1769,11 @@
document.querySelectorAll('.main-nav-btn').forEach(b => {
if (b.textContent.trim().toLowerCase().includes(tab === 'main' ? 'main' : 'einstellung')) b.classList.add('active');
});
// Settings: load config + triggers
if (tab === 'settings') {
loadHighlightTriggers();
send({ action: 'get_voice_config' });
}
}
// ── Settings: tool permissions ──────────────────
+381 -27
@@ -74,8 +74,8 @@ function pipelineStart(method, text) {
pipelineStartTime = Date.now();
if (pipelineTimeout) clearTimeout(pipelineTimeout);
pipelineTimeout = setTimeout(() => {
if (pipelineActive) pipelineEnd(false, "Timeout — keine Antwort nach 60s");
}, 60000);
if (pipelineActive) pipelineEnd(false, "Timeout — keine Antwort nach 10min");
}, 600000);
plog(`━━━ Pipeline Start: ${method} ━━━`);
plog(`Nachricht: "${text}"`);
}
@@ -319,10 +319,24 @@ function handleGatewayMessage(msg) {
if (event === "agent") {
const data = payload.data || {};
const delta = data.delta || "";
if (delta && payload.stream === "assistant") {
const stream = payload.stream || "";
if (delta && stream === "assistant") {
broadcast({ type: "chat_delta", delta, payload });
}
// do not log agent events individually (too many)
// Detect tool usage and broadcast it
if (stream === "tool_use" || data.type === "tool_use") {
const toolName = data.name || data.tool || payload.tool || "";
if (toolName) {
broadcast({ type: "agent_activity", activity: "tool", tool: toolName, data });
log("info", "gateway", `Tool: ${toolName}`);
}
}
// General activity heartbeat (ARIA is thinking)
broadcast({ type: "agent_activity", activity: stream || "thinking" });
updateAgentActivity();
return;
}
@@ -338,6 +352,14 @@ function handleGatewayMessage(msg) {
log("info", "gateway", `ANTWORT: "${text.slice(0, 200)}"`);
if (pipelineActive) pipelineEnd(true, `"${text.slice(0, 120)}"`);
broadcast({ type: "chat_final", text, payload });
broadcast({ type: "agent_activity", activity: "idle" });
pendingMessageTime = 0; // watchdog: reply received
updateAgentActivity();
// Write the reply to the backup log
try {
const entry = JSON.stringify({ ts: Date.now(), role: "assistant", text: text.slice(0, 2000), session: activeSessionKey }) + "\n";
fs.appendFileSync("/shared/config/chat_backup.jsonl", entry);
} catch {}
return;
}
@@ -410,8 +432,17 @@ function sendToGateway(text, isPipeline) {
const payload = JSON.stringify(msg);
log("debug", "gateway", `RAW >>> ${payload}`);
gatewayWs.send(payload);
pendingMessageTime = Date.now(); // watchdog: message sent
// Write the message to the backup log right away (OpenClaw only persists after the run ends)
try {
fs.mkdirSync("/shared/config", { recursive: true });
const entry = JSON.stringify({ ts: Date.now(), role: "user", text, session: activeSessionKey }) + "\n";
fs.appendFileSync("/shared/config/chat_backup.jsonl", entry);
} catch {}
log("info", "gateway", `chat.send [${reqId}]: "${text}"`);
if (isPipeline) plog(`chat.send [${reqId}] an Gateway gesendet — warte auf ACK...`);
// Do NOT send gateway messages to RVS (otherwise the ARIA request is duplicated via the bridge)
return true;
}
@@ -425,7 +456,13 @@ function connectRVS(forcePlain) {
return;
}
// TLS logic: wss first, on error fall back to ws (if allowed)
// Cleanly close the old connection
if (rvsWs) {
try { rvsWs.removeAllListeners(); rvsWs.close(); } catch (_) {}
rvsWs = null;
}
// TLS logic: wss first, on error fall back to ws
const useTls = RVS_TLS === "true" && !forcePlain;
const proto = useTls ? "wss" : "ws";
const url = `${proto}://${RVS_HOST}:${RVS_PORT}?token=${RVS_TOKEN}`;
@@ -434,7 +471,18 @@ function connectRVS(forcePlain) {
broadcastState();
log("info", "rvs", `Verbinde: ${proto}://${RVS_HOST}:${RVS_PORT}`);
const ws = new WebSocket(url);
let ws;
try {
ws = new WebSocket(url);
} catch (err) {
log("error", "rvs", `WebSocket erstellen fehlgeschlagen: ${err.message}`);
if (useTls && RVS_TLS_FALLBACK === "true") {
connectRVS(true);
}
return;
}
let fallbackTriggered = false;
ws.on("open", () => {
log("info", "rvs", `Verbunden (${proto})`);
@@ -442,6 +490,16 @@ function connectRVS(forcePlain) {
state.rvs.lastError = null;
rvsWs = ws;
broadcastState();
// Keepalive: send a ping every 25s so the connection does not die
const keepalive = setInterval(() => {
if (ws.readyState === WebSocket.OPEN) {
try { ws.ping(); } catch (_) {}
} else {
clearInterval(keepalive);
}
}, 25000);
ws._keepalive = keepalive;
});
ws.on("message", (raw) => {
@@ -449,11 +507,24 @@ function connectRVS(forcePlain) {
const msg = JSON.parse(raw.toString());
if (msg.type === "chat" && msg.payload) {
const sender = msg.payload.sender || "?";
// Ignore our own messages (echo)
if (sender === "diagnostic") return;
log("info", "rvs", `Chat von ${sender}: "${(msg.payload.text || "").slice(0, 100)}"`);
if (pipelineActive && sender !== "diagnostic") {
if (pipelineActive) {
pipelineEnd(true, `Antwort via RVS von ${sender}: "${(msg.payload.text || "").slice(0, 120)}"`);
}
broadcast({ type: "rvs_chat", msg });
} else if (msg.type === "file_saved" && msg.payload) {
// Image/file upload from the app; show it in the chat
const name = msg.payload.name || "?";
const serverPath = msg.payload.serverPath || "";
const mimeType = msg.payload.mimeType || "";
log("info", "rvs", `Datei empfangen: ${name} (${serverPath})`);
// Broadcast as a user message with the path (Diagnostic shows images inline)
broadcast({ type: "rvs_chat", msg: {
type: "chat",
payload: { text: `Anhang: ${name}\n${serverPath}`, sender: "user" }
}});
} else if (msg.type === "heartbeat") {
// ignore
} else {
@@ -464,10 +535,13 @@ function connectRVS(forcePlain) {
ws.on("close", () => {
log("warn", "rvs", "Verbindung geschlossen");
if (ws._keepalive) clearInterval(ws._keepalive);
state.rvs.status = "disconnected";
rvsWs = null;
if (rvsWs === ws) rvsWs = null;
broadcastState();
setTimeout(() => connectRVS(), 5000);
if (!fallbackTriggered) {
setTimeout(() => connectRVS(), 5000);
}
});
ws.on("error", (err) => {
@@ -475,31 +549,71 @@ function connectRVS(forcePlain) {
state.rvs.lastError = err.message;
broadcastState();
// TLS fallback: if wss fails and fallback is allowed, try ws
if (useTls && RVS_TLS_FALLBACK === "true") {
// TLS Fallback
if (useTls && RVS_TLS_FALLBACK === "true" && !fallbackTriggered) {
fallbackTriggered = true;
log("warn", "rvs", "TLS fehlgeschlagen — Fallback auf ws://");
ws.removeAllListeners();
try { ws.close(); } catch (_) {}
try { ws.removeAllListeners(); ws.close(); } catch (_) {}
if (rvsWs === ws) rvsWs = null;
connectRVS(true);
}
});
}
function sendToRVS(text, isPipeline) {
if (!rvsWs || rvsWs.readyState !== WebSocket.OPEN) {
log("error", "rvs", "Nicht verbunden");
if (isPipeline) pipelineEnd(false, "RVS nicht verbunden");
return false;
}
function sendToRVS_withResponse(sendType, sendPayload, expectType, clientWs) {
if (!RVS_HOST || !RVS_TOKEN) return;
const proto = RVS_TLS === "true" ? "wss" : "ws";
const url = `${proto}://${RVS_HOST}:${RVS_PORT}?token=${RVS_TOKEN}`;
const freshWs = new WebSocket(url);
const timeout = setTimeout(() => {
try { freshWs.close(); } catch (_) {}
clientWs.send(JSON.stringify({ type: expectType, payload: { voices: [], error: "Timeout" }, timestamp: Date.now() }));
}, 15000);
freshWs.on("open", () => {
freshWs.send(JSON.stringify({ type: sendType, payload: sendPayload, timestamp: Date.now() }));
});
freshWs.on("message", (raw) => {
try {
const resp = JSON.parse(raw.toString());
if (resp.type === expectType) {
clearTimeout(timeout);
clientWs.send(JSON.stringify(resp));
setTimeout(() => { try { freshWs.close(); } catch (_) {} }, 1000);
}
} catch {}
});
freshWs.on("error", () => {});
}
rvsWs.send(JSON.stringify({
function sendToRVS_raw(msgObj) {
if (!RVS_HOST || !RVS_TOKEN) return;
const proto = RVS_TLS === "true" ? "wss" : "ws";
const url = `${proto}://${RVS_HOST}:${RVS_PORT}?token=${RVS_TOKEN}`;
const freshWs = new WebSocket(url);
freshWs.on("open", () => {
freshWs.send(JSON.stringify(msgObj));
setTimeout(() => { try { freshWs.close(); } catch (_) {} }, 5000);
});
freshWs.on("error", () => {});
}
function sendToRVS(text, isPipeline) {
// Send via the gateway (reliable) AND to RVS for visibility in the app.
// The bridge receives RVS messages from the app reliably,
// but the Diagnostic→RVS→Bridge route has zombie-connection problems.
// Hence: gateway for ARIA, RVS only for display in the app.
// 1. Send to the gateway (so that ARIA answers)
const gatewayOk = sendToGateway(text, isPipeline);
// 2. Send to RVS (so that the app sees the message)
sendToRVS_raw({
type: "chat",
payload: { text, sender: "diagnostic" },
timestamp: Date.now(),
}));
log("info", "rvs", `Gesendet via RVS: "${text}"`);
if (isPipeline) plog(`Nachricht an RVS gesendet — warte auf Antwort via RVS...`);
return true;
});
return gatewayOk;
}
// ── Claude Proxy Test ────────────────────────────────────
@@ -517,7 +631,7 @@ async function testProxy(prompt) {
const modelsRes = await fetch(healthUrl, {
headers: { "Authorization": "Bearer not-needed" },
signal: AbortSignal.timeout(10000),
signal: AbortSignal.timeout(30000),
});
if (!modelsRes.ok) {
@@ -544,7 +658,7 @@ async function testProxy(prompt) {
}
// Step 2: test chat completion (short prompt)
const testPrompt = prompt || "Antworte mit genau einem Wort: Ping";
const testPrompt = prompt || "Antworte in einem Satz: Wer bist du und funktionierst du?";
log("info", "proxy", `Sende Test-Prompt: "${testPrompt}"`);
const chatRes = await fetch(`${PROXY_URL}/v1/chat/completions`, {
@@ -558,7 +672,7 @@ async function testProxy(prompt) {
messages: [{ role: "user", content: testPrompt }],
max_tokens: 200,
}),
signal: AbortSignal.timeout(30000),
signal: AbortSignal.timeout(120000), // 2 min; cold start takes time
});
if (!chatRes.ok) {
@@ -923,6 +1037,64 @@ function waitForMessage(ws, timeoutMs) {
});
}
// ── Watchdog: stuck-run detection ────────────────────────
let lastAgentActivity = Date.now();
let watchdogWarned = false;
let watchdogFixAttempted = false;
let pendingMessageTime = 0; // when the last message was sent
function updateAgentActivity() {
lastAgentActivity = Date.now();
watchdogWarned = false;
}
// The watchdog checks every 30s whether ARIA reacts after a message was sent
setInterval(async () => {
if (pendingMessageTime === 0) return; // no message sent
const waitingMs = Date.now() - pendingMessageTime;
// After 2 min without agent activity: warning
if (waitingMs > 120000 && !watchdogWarned) {
watchdogWarned = true;
log("warn", "server", `Watchdog: Keine ARIA-Aktivitaet seit ${Math.round(waitingMs / 1000)}s — moeglicherweise stuck`);
broadcast({ type: "watchdog", status: "warning", waitingMs, message: "ARIA reagiert nicht — moeglicherweise stuck Run" });
}
// After 5 min: doctor --fix
if (waitingMs > 300000 && watchdogWarned && !watchdogFixAttempted) {
watchdogFixAttempted = true;
log("error", "server", "Watchdog: 5min ohne Antwort — fuehre openclaw doctor --fix aus");
broadcast({ type: "watchdog", status: "fixing", message: "Auto-Fix: openclaw doctor --fix" });
try {
await dockerExec("aria-core", "openclaw doctor --fix 2>/dev/null || true");
log("info", "server", "Watchdog: doctor --fix ausgefuehrt");
broadcast({ type: "watchdog", status: "fixed", message: "doctor --fix ausgefuehrt — warte auf Antwort..." });
} catch (err) {
log("error", "server", `Watchdog: doctor --fix fehlgeschlagen: ${err.message}`);
}
}
// After 8 min: restart the containers
if (waitingMs > 480000 && watchdogFixAttempted) {
log("error", "server", "Watchdog: 8min ohne Antwort — starte aria-core + aria-proxy neu");
broadcast({ type: "watchdog", status: "restarting", message: "Container-Restart: aria-core + aria-proxy" });
try {
const { execSync } = require("child_process");
execSync("docker restart aria-core aria-proxy", { timeout: 60000 });
log("info", "server", "Watchdog: Container neugestartet");
broadcast({ type: "watchdog", status: "restarted", message: "Container neugestartet — warte auf Gateway-Reconnect..." });
// The gateway will reconnect automatically
} catch (err) {
log("error", "server", `Watchdog: Container-Restart fehlgeschlagen: ${err.message}`);
broadcast({ type: "watchdog", status: "error", message: `Restart fehlgeschlagen: ${err.message}` });
}
pendingMessageTime = 0;
watchdogWarned = false;
watchdogFixAttempted = false;
}
}, 30000);
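The watchdog above escalates in three stages driven purely by how long a message has been pending and which stages already fired (warn after 2 min, `doctor --fix` after 5 min, container restart after 8 min). The decision logic can be sketched as a pure function, decoupled from Docker and logging; `watchdogStage` and its return values are hypothetical names, while the thresholds are the ones used above:

```javascript
// Pure escalation logic mirroring the watchdog interval above:
// the stage depends only on elapsed time and which stages fired.
function watchdogStage(waitingMs, warned, fixAttempted) {
  if (waitingMs > 480000 && fixAttempted) return 'restart';        // 8 min
  if (waitingMs > 300000 && warned && !fixAttempted) return 'fix'; // 5 min
  if (waitingMs > 120000 && !warned) return 'warn';                // 2 min
  return 'wait';
}

console.log(watchdogStage(60000, false, false));  // 'wait'
console.log(watchdogStage(150000, false, false)); // 'warn'
console.log(watchdogStage(360000, true, false));  // 'fix'
console.log(watchdogStage(500000, true, true));   // 'restart'
```

Factoring the thresholds out like this also makes the escalation ladder unit-testable without spinning up containers.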
// ── HTTP server + WebSocket for the browser ────────────────
const htmlPath = path.join(__dirname, "index.html");
@@ -937,6 +1109,28 @@ const server = http.createServer((req, res) => {
} else if (req.url === "/api/session") {
res.writeHead(200, { "Content-Type": "application/json" });
res.end(JSON.stringify({ sessionKey: activeSessionKey }));
} else if (req.url.startsWith("/shared/")) {
// Serve files from the shared volume (images, uploads)
const filePath = decodeURIComponent(req.url);
const safePath = path.resolve(filePath);
if (!safePath.startsWith("/shared/")) {
res.writeHead(403);
res.end("Forbidden");
return;
}
try {
if (!fs.existsSync(safePath)) { res.writeHead(404); res.end("Not Found"); return; }
const ext = path.extname(safePath).toLowerCase();
const mimeTypes = { ".jpg": "image/jpeg", ".jpeg": "image/jpeg", ".png": "image/png", ".gif": "image/gif",
".pdf": "application/pdf", ".txt": "text/plain", ".json": "application/json" };
const contentType = mimeTypes[ext] || "application/octet-stream";
const data = fs.readFileSync(safePath);
res.writeHead(200, { "Content-Type": contentType, "Content-Length": data.length });
res.end(data);
} catch (err) {
res.writeHead(500);
res.end("Error");
}
} else {
res.writeHead(404);
res.end("Not Found");
@@ -987,6 +1181,49 @@ wss.on("connection", (ws) => {
if (ws._sshSock) ws._sshSock.write(msg.data);
} else if (msg.action === "live_ssh_close") {
if (ws._sshSock) { ws._sshSock.end(); ws._sshSock = null; }
} else if (msg.action === "cancel_request") {
// Cancel the running request; doctor --fix terminates stuck runs
log("warn", "server", "Anfrage abgebrochen — fuehre doctor --fix aus");
pendingMessageTime = 0;
watchdogWarned = false;
watchdogFixAttempted = false;
if (pipelineActive) pipelineEnd(false, "Vom Benutzer abgebrochen");
broadcast({ type: "agent_activity", activity: "idle" });
dockerExec("aria-core", "openclaw doctor --fix 2>/dev/null || true").catch(() => {});
} else if (msg.action === "voice_upload") {
// Forward voice samples to the XTTS bridge via RVS, wait for confirmation
log("info", "server", `Voice-Upload '${msg.name}' (${(msg.samples || []).length} Samples) sende an RVS...`);
sendToRVS_withResponse("voice_upload", { name: msg.name, samples: msg.samples }, "xtts_voice_saved", ws);
} else if (msg.action === "xtts_list_voices") {
// Fresh connection that waits for the response
sendToRVS_withResponse("xtts_list_voices", {}, "xtts_voices_list", ws);
} else if (msg.action === "get_voice_config") {
handleGetVoiceConfig(ws);
} else if (msg.action === "send_voice_config") {
// Persist the voice config + send it to the bridge via RVS
const voiceConfig = {
defaultVoice: msg.defaultVoice || "ramona",
highlightVoice: msg.highlightVoice || "thorsten",
ttsEnabled: msg.ttsEnabled !== false,
ttsEngine: msg.ttsEngine || "piper",
xttsVoice: msg.xttsVoice || "",
speedRamona: msg.speedRamona || 1.0,
speedThorsten: msg.speedThorsten || 1.0,
};
try {
fs.mkdirSync("/shared/config", { recursive: true });
fs.writeFileSync("/shared/config/voice_config.json", JSON.stringify(voiceConfig, null, 2));
} catch {}
sendToRVS_raw({ type: "config", payload: voiceConfig, timestamp: Date.now() });
log("info", "server", `Voice-Config gespeichert+gesendet: default=${voiceConfig.defaultVoice}, highlight=${voiceConfig.highlightVoice}, tts=${voiceConfig.ttsEnabled}`);
} else if (msg.action === "get_triggers") {
handleGetTriggers(ws);
} else if (msg.action === "save_triggers") {
handleSaveTriggers(ws, msg.triggers || []);
} else if (msg.action === "test_tts") {
handleTestTTS(ws, msg.voice || "ramona", msg.text || "Test");
} else if (msg.action === "check_tts") {
handleCheckTTS(ws);
} else if (msg.action === "check_desktop") {
checkDesktopAvailable(ws);
} else if (msg.action === "load_chat_history") {
@@ -1113,6 +1350,123 @@ function startLiveSSH(clientWs) {
createReq.end(createBody);
}
// ── Load voice config ─────────────────────────────────
function handleGetVoiceConfig(clientWs) {
try {
const configPath = "/shared/config/voice_config.json";
if (fs.existsSync(configPath)) {
const config = JSON.parse(fs.readFileSync(configPath, "utf-8"));
clientWs.send(JSON.stringify({ type: "voice_config", ...config }));
} else {
clientWs.send(JSON.stringify({ type: "voice_config", defaultVoice: "ramona", highlightVoice: "thorsten", ttsEnabled: true }));
}
} catch (err) {
clientWs.send(JSON.stringify({ type: "voice_config", defaultVoice: "ramona", highlightVoice: "thorsten", ttsEnabled: true }));
}
}
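For reference, the file persisted by the `send_voice_config` handler and read back here has this shape. This is a sketch assembled from the defaults in the code above, not a captured file:

```json
{
  "defaultVoice": "ramona",
  "highlightVoice": "thorsten",
  "ttsEnabled": true,
  "ttsEngine": "piper",
  "xttsVoice": "",
  "speedRamona": 1.0,
  "speedThorsten": 1.0
}
```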
// ── Highlight-Trigger ─────────────────────────────────
const TRIGGERS_FILE = "/shared/config/highlight_triggers.json";
async function handleGetTriggers(clientWs) {
try {
// Read from the shared volume first, fall back to the bridge defaults
let triggers;
if (fs.existsSync(TRIGGERS_FILE)) {
triggers = JSON.parse(fs.readFileSync(TRIGGERS_FILE, "utf-8"));
} else {
// Read the defaults from the bridge
const result = await dockerExec("aria-bridge", `python3 -c "
import sys; sys.path.insert(0,'/app')
from aria_bridge import EPIC_TRIGGERS
print('\\n'.join(EPIC_TRIGGERS))
"`);
triggers = result.trim().split("\n").filter(t => t);
}
clientWs.send(JSON.stringify({ type: "trigger_list", triggers }));
} catch (err) {
clientWs.send(JSON.stringify({ type: "trigger_list", triggers: [], error: err.message }));
}
}
async function handleSaveTriggers(clientWs, triggers) {
try {
// Save to the shared volume (readable by the bridge)
fs.mkdirSync("/shared/config", { recursive: true });
fs.writeFileSync(TRIGGERS_FILE, JSON.stringify(triggers, null, 2));
log("info", "server", `${triggers.length} Highlight-Trigger gespeichert`);
// Inform the bridge (picked up on its next start)
clientWs.send(JSON.stringify({ type: "trigger_list", triggers }));
} catch (err) {
log("error", "server", `Trigger speichern fehlgeschlagen: ${err.message}`);
}
}
// ── TTS diagnostics ──────────────────────────────────
async function handleTestTTS(clientWs, voice, text) {
try {
log("info", "server", `TTS-Test: ${voice} — "${text}"`);
const result = await dockerExec("aria-bridge", `python3 -c "
import time, sys
sys.path.insert(0, '/app')
from piper import PiperVoice
import wave, tempfile, os
voices = {'ramona': '/voices/de_DE-ramona-low.onnx', 'thorsten': '/voices/de_DE-thorsten-high.onnx'}
path = voices.get('${voice}')
if not path or not os.path.exists(path):
print('FEHLER: Stimme nicht gefunden')
sys.exit(1)
v = PiperVoice.load(path)
start = time.time()
tmp = tempfile.NamedTemporaryFile(suffix='.wav', delete=False)
with wave.open(tmp.name, 'wb') as wf:
wf.setnchannels(1)
wf.setsampwidth(2)
wf.setframerate(v.config.sample_rate)
v.synthesize('${text.replace(/'/g, "\\'")}', wf)
size = os.path.getsize(tmp.name)
dur = int((time.time() - start) * 1000)
os.unlink(tmp.name)
print(f'OK:{dur}:{size}')
"`);
const parts = result.trim().split(":");
if (parts[0] === "OK") {
clientWs.send(JSON.stringify({ type: "tts_result", ok: true, voice, duration: parts[1], size: parts[2] }));
} else {
clientWs.send(JSON.stringify({ type: "tts_result", ok: false, voice, error: result.trim() }));
}
} catch (err) {
clientWs.send(JSON.stringify({ type: "tts_result", ok: false, voice, error: err.message }));
}
}
async function handleCheckTTS(clientWs) {
try {
const result = await dockerExec("aria-bridge", `python3 -c "
import os, json
voices = {}
for name, path in [('ramona', '/voices/de_DE-ramona-low.onnx'), ('thorsten', '/voices/de_DE-thorsten-high.onnx')]:
voices[name] = os.path.exists(path)
print(json.dumps(voices))
"`);
const voices = JSON.parse(result.trim());
const available = Object.entries(voices).filter(([,v]) => v).map(([k]) => k);
const missing = Object.entries(voices).filter(([,v]) => !v).map(([k]) => k);
clientWs.send(JSON.stringify({
type: "tts_status",
ok: missing.length === 0,
voices: available,
defaultVoice: "ramona",
highlightVoice: "thorsten",
error: missing.length > 0 ? `Fehlend: ${missing.join(", ")}` : null,
}));
} catch (err) {
clientWs.send(JSON.stringify({ type: "tts_status", ok: false, error: err.message }));
}
}
function checkDesktopAvailable(clientWs) {
// Check whether VNC is running on the VM (port 5900/5901)
const checkSock = net.connect({ host: "host.docker.internal", port: 5901 }, () => {
+5
@@ -19,6 +19,7 @@ services:
volumes:
- ~/.claude:/root/.claude # Claude CLI Auth (Credentials in /root/.claude/.credentials.json)
- ./aria-data/ssh:/root/.ssh:ro # SSH keys for VM access (aria-wohnung)
- aria-shared:/shared # Shared volume for file exchange (uploads from the app)
environment:
- HOST=0.0.0.0
- SHELL=/bin/bash # Claude Code's Bash tool needs bash (not just sh/ash)
@@ -58,6 +59,7 @@ services:
- ./aria-data/ssh:/home/node/.ssh # SSH keys for VM access
- /tmp/.X11-unix:/tmp/.X11-unix
- /var/run/docker.sock:/var/run/docker.sock # manage the VM from inside
- aria-shared:/shared # Shared volume for file exchange (Bridge <> Core)
restart: unless-stopped
networks:
- aria-net
@@ -72,6 +74,7 @@ services:
volumes:
- ./aria-data/voices:/voices:ro # TTS voices
- ./aria-data/config/aria.env:/config/aria.env
- aria-shared:/shared # Shared volume for file exchange (Bridge <> Core)
# Audio access
- /run/user/1000/pulse:/run/user/1000/pulse
- /dev/snd:/dev/snd
@@ -97,6 +100,7 @@ services:
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./aria-data/config/diag-state:/data # Persistent state (active session etc.)
- aria-shared:/shared # Shared volume (uploads + config)
environment:
- ARIA_AUTH_TOKEN=${ARIA_AUTH_TOKEN:-}
- PROXY_URL=http://proxy:3456
@@ -110,6 +114,7 @@ services:
volumes:
openclaw-config: # Persists ~/.openclaw (model, auth, sessions)
claude-config: # Persists ~/.claude (permissions, settings)
aria-shared: # File exchange between Bridge and Core
networks:
aria-net:
+36
@@ -0,0 +1,36 @@
# ARIA Issues & Features
## Done
- [x] Image upload works (shared volume /shared/uploads/)
- [x] Voice messages are shown as text (STT → chat bubble)
- [x] Clear cache + auto-download of attachments
- [x] ARIA reads messages aloud (TTS via Piper)
- [x] Autoscroll to the latest message
- [x] Larger images in chat + fullscreen preview
- [x] Ear-button crash fixed (LiveAudioStream removed, phase 1 placeholder)
- [x] Play button in ARIA messages for voice playback
- [x] Chat search in the app (magnifier in the status bar)
- [x] Watchdog with container restart (2 min warning → 5 min doctor --fix → 8 min restart)
- [x] Cancel button in the Diagnostic chat
- [x] On-the-fly message backup (/shared/config/chat_backup.jsonl)
- [x] Split large messages into sentences for TTS
- [x] RVS messages from the smartphone go through
- [x] Voice settings (Ramona/Thorsten, speed per voice)
- [x] Highlight triggers configurable in Diagnostic
## Open
### TTS / Voices
- [ ] Selectable TTS engine: Piper (CPU, fast) or Coqui XTTS v2 (GPU, more natural)
- [ ] Piper voice downloads via Diagnostic (new languages/voices)
- [ ] Coqui XTTS v2 integration (needs a GPU, better German voice)
### App
- [ ] On-device wake word (Porcupine "ARIA" keyword, phase 2)
- [ ] Load chat history more reliably (AsyncStorage race condition)
### Architecture
- [ ] Images: use Claude Vision directly (currently only the file path goes to ARIA)
- [ ] Auto-compacting and memory/brain management (SQLite?)
- [ ] Diagnostic: system info tab (container status, disk, RAM, CPU)
+50 -5
@@ -51,8 +51,32 @@ fi
echo -e " ${GREEN}✓${NC} Login erfolgreich"
echo ""
# ── Update version numbers ────────────────
echo -e "${GREEN}[1/5] Versionsnummern auf $VERSION setzen...${NC}"
# package.json
sed -i "s/\"version\": \"[^\"]*\"/\"version\": \"$VERSION\"/" android/package.json
echo -e " ${GREEN}✓${NC} package.json → $VERSION"
# build.gradle: versionName + versionCode (computed from the version)
# Supports 3-part (1.2.3) and 4-part (0.0.1.7) versions
IFS='.' read -ra VER_PARTS <<< "$VERSION"
V1=${VER_PARTS[0]:-0}; V2=${VER_PARTS[1]:-0}; V3=${VER_PARTS[2]:-0}; V4=${VER_PARTS[3]:-0}
VERSION_CODE=$((V1 * 1000000 + V2 * 10000 + V3 * 100 + V4))
# At least 1 (Android requires versionCode >= 1)
[ "$VERSION_CODE" -lt 1 ] && VERSION_CODE=1
sed -i "s/versionName \"[^\"]*\"/versionName \"$VERSION\"/" android/android/app/build.gradle
sed -i "s/versionCode [0-9]*/versionCode $VERSION_CODE/" android/android/app/build.gradle
echo -e " ${GREEN}✓${NC} build.gradle → versionName $VERSION, versionCode $VERSION_CODE"
# SettingsScreen: displayed version (accepts any version format)
sed -i "s/Version [0-9][0-9.]*[^<]*/Version $VERSION /" android/src/screens/SettingsScreen.tsx
echo -e " ${GREEN}✓${NC} SettingsScreen → Version $VERSION"
echo ""
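The versionCode arithmetic above packs a 3- or 4-part version string into a single monotonically increasing integer. A standalone sketch of the same mapping (JavaScript, mirroring the shell arithmetic):

```javascript
// Mirrors the release script: V1*1000000 + V2*10000 + V3*100 + V4,
// clamped to at least 1 because Android requires versionCode >= 1.
function versionCode(version) {
  const [v1 = 0, v2 = 0, v3 = 0, v4 = 0] = version.split(".").map(Number);
  return Math.max(1, v1 * 1000000 + v2 * 10000 + v3 * 100 + v4);
}

console.log(versionCode("0.0.2.7")); // 207
console.log(versionCode("1.2.3"));   // 1020300
```

The packing only stays monotonic while each minor part is below 100, which the project's 0.0.x.y scheme satisfies.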
# ── Build the APK ─────────────────────────────
echo -e "${GREEN}[1/4] APK bauen...${NC}"
echo -e "${GREEN}[2/5] APK bauen...${NC}"
cd android
./build.sh release
cd ..
@@ -70,7 +94,11 @@ echo -e " ${GREEN}✓${NC} APK gebaut ($APK_SIZE)"
echo ""
# ── Git Tag ───────────────────────────────────
echo -e "${GREEN}[2/4] Git Tag $TAG...${NC}"
echo -e "${GREEN}[3/5] Git Tag $TAG...${NC}"
# Commit the version changes
git add android/package.json android/android/app/build.gradle android/src/screens/SettingsScreen.tsx
git commit -m "release: bump version to $VERSION" 2>/dev/null || echo -e " ${YELLOW}Keine Aenderungen zum Committen${NC}"
if git rev-parse "$TAG" &>/dev/null; then
echo -e " ${YELLOW}Tag $TAG existiert bereits — überspringe${NC}"
@@ -79,7 +107,7 @@ else
echo -e " ${GREEN}✓${NC} Tag $TAG erstellt"
fi
git push origin "$TAG"
git push origin main "$TAG"
echo -e " ${GREEN}✓${NC} Tag gepusht"
echo ""
@@ -102,7 +130,7 @@ fi
RELEASE_BODY_ESCAPED=$(printf '%s' "$RELEASE_BODY" | python3 -c 'import sys,json; print(json.dumps(sys.stdin.read()))' 2>/dev/null || printf '"%s"' "$RELEASE_BODY" | sed 's/"/\\"/g')
# ── Create the Gitea release ──────────────────
echo -e "${GREEN}[3/4] Gitea Release erstellen...${NC}"
echo -e "${GREEN}[4/5] Gitea Release erstellen...${NC}"
RELEASE_RESPONSE=$(curl -s -X POST \
"$GITEA_URL/api/v1/repos/$GITEA_REPO/releases" \
@@ -127,7 +155,7 @@ echo -e " ${GREEN}✓${NC} Release #$RELEASE_ID erstellt"
echo ""
# ── Upload the APK ────────────────────────────
echo -e "${GREEN}[4/4] APK hochladen...${NC}"
echo -e "${GREEN}[5/5] APK hochladen...${NC}"
UPLOAD_RESPONSE=$(curl -s -X POST \
"$GITEA_URL/api/v1/repos/$GITEA_REPO/releases/$RELEASE_ID/assets?name=$APK_NAME" \
@@ -142,6 +170,22 @@ else
exit 1
fi
# ── Auto-update: copy the APK to the RVS server ─
RVS_UPDATE_HOST="${RVS_UPDATE_HOST:-}"
if [ -n "$RVS_UPDATE_HOST" ]; then
echo -e "${GREEN}[6/6] APK auf RVS-Server kopieren (Auto-Update)...${NC}"
scp "$APK_PATH" "${RVS_UPDATE_HOST}:~/ARIA-AGENT/rvs/updates/${APK_NAME}" 2>/dev/null
if [ $? -eq 0 ]; then
echo -e " ${GREEN}✓${NC} APK auf RVS-Server kopiert — Apps werden benachrichtigt"
else
echo -e " ${YELLOW}APK konnte nicht auf RVS kopiert werden (RVS_UPDATE_HOST=$RVS_UPDATE_HOST)${NC}"
echo -e " ${YELLOW}Manuell: scp $APK_PATH $RVS_UPDATE_HOST:~/ARIA-AGENT/rvs/updates/${APK_NAME}${NC}"
fi
else
echo -e "${YELLOW}Auto-Update uebersprungen (RVS_UPDATE_HOST nicht gesetzt)${NC}"
echo -e "${YELLOW}Setze RVS_UPDATE_HOST in .env fuer automatische Verteilung${NC}"
fi
# ── Done ──────────────────────────────────────
echo ""
echo -e "${GREEN}╔═══════════════════════════════════════════════════╗${NC}"
@@ -149,4 +193,5 @@ echo -e "${GREEN}║ Release $TAG ist live!$(printf '%*s' $((27 - ${#TAG})) ''
echo -e "${GREEN}╠═══════════════════════════════════════════════════╣${NC}"
echo -e "${GREEN}║${NC} $GITEA_URL/$GITEA_REPO/releases/tag/$TAG"
echo -e "${GREEN}║${NC} APK: $APK_NAME ($APK_SIZE)"
echo -e "${GREEN}║${NC} Auto-Update: ${RVS_UPDATE_HOST:-nicht konfiguriert}"
echo -e "${GREEN}╚═══════════════════════════════════════════════════╝${NC}"
+2
@@ -4,5 +4,7 @@ services:
ports:
- "${RVS_PORT:-443}:3000"
restart: always
volumes:
- ./updates:/updates # APK files for auto-update
environment:
- MAX_SESSIONS=10
+113
@@ -1,14 +1,21 @@
"use strict";
const { WebSocketServer } = require("ws");
const fs = require("fs");
const path = require("path");
// ── Configuration from environment variables ────────────────────────
const PORT = parseInt(process.env.PORT || "3000", 10);
const MAX_SESSIONS = parseInt(process.env.MAX_SESSIONS || "10", 10);
const UPDATES_DIR = process.env.UPDATES_DIR || "/updates";
// No polling — the APK is provided manually via git pull
// Allowed message types — everything else is discarded
const ALLOWED_TYPES = new Set([
"chat", "audio", "file", "location", "mode", "log", "event", "heartbeat",
"file_request", "file_response", "file_saved", "stt_result", "config", "tts_request",
"xtts_request", "xtts_response", "xtts_list_voices", "xtts_voices_list", "voice_upload", "xtts_voice_saved",
"update_check", "update_available", "update_download", "update_data",
]);
// Token room: token -> { clients: Set<ws> }
@@ -45,6 +52,9 @@ const wss = new WebSocketServer({ port: PORT });
wss.on("listening", () => {
log(`RVS läuft auf Port ${PORT} | Max Sessions: ${MAX_SESSIONS}`);
// On startup, check whether an APK is present
const apkInfo = getLatestAPK();
if (apkInfo) log(`APK bereit: v${apkInfo.version} (${(fs.statSync(apkInfo.path).size / 1024 / 1024).toFixed(1)}MB)`);
});
wss.on("connection", (ws, req) => {
@@ -106,6 +116,52 @@ function registerClient(ws, token) {
return;
}
// Update check: answer the requesting client directly (do not relay)
if (msg.type === "update_check") {
const clientVersion = msg.payload?.version || "0.0.0.0";
const apkInfo = getLatestAPK();
if (apkInfo && compareVersions(apkInfo.version, clientVersion) > 0) {
ws.send(JSON.stringify({
type: "update_available",
payload: {
version: apkInfo.version,
downloadUrl: `/update/latest.apk`,
size: fs.statSync(apkInfo.path).size,
},
timestamp: Date.now(),
}));
}
return;
}
// Update download: send the APK as base64 over the WebSocket
if (msg.type === "update_download") {
const apkInfo = getLatestAPK();
if (!apkInfo) {
ws.send(JSON.stringify({ type: "update_data", payload: { error: "Keine APK verfuegbar" }, timestamp: Date.now() }));
return;
}
try {
const data = fs.readFileSync(apkInfo.path);
const base64 = data.toString("base64");
const sizeMB = (data.length / 1024 / 1024).toFixed(1);
log(`APK sende: v${apkInfo.version} (${sizeMB}MB) an Client`);
ws.send(JSON.stringify({
type: "update_data",
payload: {
version: apkInfo.version,
base64,
size: data.length,
fileName: `ARIA-v${apkInfo.version}.apk`,
},
timestamp: Date.now(),
}));
} catch (err) {
ws.send(JSON.stringify({ type: "update_data", payload: { error: err.message }, timestamp: Date.now() }));
}
return;
}
// Relay to all other clients in the room
for (const client of room.clients) {
if (client !== ws && client.readyState === 1) {
@@ -166,6 +222,63 @@ wss.on("close", () => {
clearInterval(cleanup);
});
// ── Auto-update: APK detection + push ──────────────────────────────
function getLatestAPK() {
try {
if (!fs.existsSync(UPDATES_DIR)) return null;
const files = fs.readdirSync(UPDATES_DIR)
.filter(f => f.endsWith(".apk"))
.map(f => {
// ARIA-v0.0.2.3.apk or ARIA-Cockpit-release.apk
const match = f.match(/(\d+\.\d+\.\d+(?:\.\d+)*)/); // no trailing dot (the old [\.\d]* also swallowed the dot before ".apk")
return { file: f, path: path.join(UPDATES_DIR, f), version: match ? match[1] : null };
})
.filter(f => f.version)
.sort((a, b) => compareVersions(b.version, a.version)); // newest first
return files[0] || null;
} catch {
return null;
}
}
function compareVersions(a, b) {
const pa = a.split(".").map(Number);
const pb = b.split(".").map(Number);
for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
const diff = (pa[i] || 0) - (pb[i] || 0);
if (diff !== 0) return diff;
}
return 0;
}
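compareVersions treats missing parts as 0, so 3- and 4-part versions sort together. A self-contained check (function copied from above):

```javascript
// Copied from the server: numeric, part-wise version comparison,
// with missing parts treated as 0.
function compareVersions(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] || 0) - (pb[i] || 0);
    if (diff !== 0) return diff;
  }
  return 0;
}

const versions = ["0.0.2.3", "0.0.3", "0.0.2.7"];
versions.sort((a, b) => compareVersions(b, a)); // newest first, as in getLatestAPK
console.log(versions); // [ '0.0.3', '0.0.2.7', '0.0.2.3' ]
```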
function notifyClientsAboutUpdate(apkInfo) {
const msg = JSON.stringify({
type: "update_available",
payload: {
version: apkInfo.version,
downloadUrl: `/update/latest.apk`,
size: fs.statSync(apkInfo.path).size,
},
timestamp: Date.now(),
});
// Send to all clients in all rooms
for (const [, room] of rooms) {
for (const client of room.clients) {
if (client.readyState === 1) {
client.send(msg);
}
}
}
log(`Update-Benachrichtigung gesendet: v${apkInfo.version} (${rooms.size} Raum/Raeume)`);
}
// No polling — the update check happens on demand (update_check message from the app)
// ── Clean shutdown ──────────────────────────────────────────────────
process.on("SIGTERM", () => {
+11
@@ -0,0 +1,11 @@
# ════════════════════════════════════════════════
# ARIA XTTS v2 — configuration
# Copy to .env and adjust
# ════════════════════════════════════════════════
# RVS connection (same credentials as on the ARIA VM)
RVS_HOST=mobil.hacker-net.de
RVS_PORT=444
RVS_TLS=true
RVS_TLS_FALLBACK=true
RVS_TOKEN=your_token_here
+5
@@ -0,0 +1,5 @@
FROM node:22-alpine
WORKDIR /app
COPY bridge.js package.json ./
RUN npm install --omit=dev
CMD ["node", "bridge.js"]
+293
@@ -0,0 +1,293 @@
/**
 * ARIA XTTS Bridge — connects the XTTS v2 server to the RVS
 *
 * Receives xtts_request via RVS → renders audio through the XTTS API → sends it back
 * Receives voice_upload → stores a voice sample for cloning
 * Receives xtts_list_voices → lists the available voices
 */
const WebSocket = require("ws");
const http = require("http");
const https = require("https");
const fs = require("fs");
const path = require("path");
const XTTS_API_URL = process.env.XTTS_API_URL || "http://xtts:8000";
const RVS_HOST = process.env.RVS_HOST || "";
const RVS_PORT = process.env.RVS_PORT || "443";
const RVS_TLS = process.env.RVS_TLS || "true";
const RVS_TLS_FALLBACK = process.env.RVS_TLS_FALLBACK || "true";
const RVS_TOKEN = process.env.RVS_TOKEN || "";
const VOICES_DIR = "/voices";
function log(msg) {
console.log(`[${new Date().toISOString()}] ${msg}`);
}
// ── RVS connection ──────────────────────────────────
let rvsWs = null;
let retryDelay = 2;
function connectRVS(forcePlain) {
if (!RVS_HOST || !RVS_TOKEN) {
log("RVS nicht konfiguriert — beende");
process.exit(1);
}
const useTls = RVS_TLS === "true" && !forcePlain;
const proto = useTls ? "wss" : "ws";
const url = `${proto}://${RVS_HOST}:${RVS_PORT}?token=${RVS_TOKEN}`;
log(`Verbinde zu RVS: ${proto}://${RVS_HOST}:${RVS_PORT}`);
const ws = new WebSocket(url);
ws.on("open", () => {
log("RVS verbunden — warte auf TTS-Requests");
rvsWs = ws;
retryDelay = 2;
// Keepalive (cleared on close so reconnects do not stack intervals)
const keepalive = setInterval(() => {
if (ws.readyState === WebSocket.OPEN) {
ws.ping();
ws.send(JSON.stringify({ type: "heartbeat", timestamp: Date.now() }));
}
}, 25000);
ws.once("close", () => clearInterval(keepalive));
});
ws.on("message", async (raw) => {
try {
const msg = JSON.parse(raw.toString());
if (msg.type === "xtts_request") {
await handleTTSRequest(msg.payload);
} else if (msg.type === "voice_upload") {
await handleVoiceUpload(msg.payload);
} else if (msg.type === "xtts_list_voices") {
await handleListVoices();
}
} catch (err) {
log(`Fehler: ${err.message}`);
}
});
ws.on("close", () => {
log("RVS Verbindung geschlossen");
rvsWs = null;
setTimeout(() => connectRVS(), Math.min(retryDelay * 1000, 30000));
retryDelay = Math.min(retryDelay * 2, 30);
});
ws.on("error", (err) => {
log(`RVS Fehler: ${err.message}`);
if (useTls && RVS_TLS_FALLBACK === "true") {
log("TLS fehlgeschlagen — Fallback auf ws://");
ws.removeAllListeners();
try { ws.close(); } catch (_) {}
connectRVS(true);
}
});
}
// ── TTS Request Handler ─────────────────────────────
async function handleTTSRequest(payload) {
const { text, voice, requestId, language } = payload;
if (!text) return;
// Strip markdown bold markers
const cleanText = text.replace(/\*\*([^*]+)\*\*/g, "$1").trim();
// Split the text into sentences, then merge them into chunks of 2-3 sentences
// (more context = more consistent voice/volume, yet short enough for the WebSocket)
const sentences = cleanText.split(/(?<=[.!?])\s+/)
.map(s => s.trim())
.filter(s => s.length > 0)
.map(s => s.replace(/[.]+$/, '')); // strip the trailing dot (XTTS reads it aloud)
const MAX_CHUNK_CHARS = 250; // max ~250 characters per chunk
const chunks = [];
let currentChunk = '';
for (const sentence of sentences) {
if (currentChunk && (currentChunk.length + sentence.length + 2) > MAX_CHUNK_CHARS) {
chunks.push(currentChunk);
currentChunk = sentence;
} else {
currentChunk = currentChunk ? currentChunk + '. ' + sentence : sentence;
}
}
if (currentChunk) chunks.push(currentChunk);
if (chunks.length === 0) return;
log(`TTS-Request: "${cleanText.slice(0, 60)}..." (${sentences.length} Saetze → ${chunks.length} Chunks, voice: ${voice || "default"}, lang: ${language || "de"})`);
try {
const voiceSample = voice ? path.join(VOICES_DIR, `${voice}.wav`) : null;
const hasCustomVoice = voiceSample && fs.existsSync(voiceSample);
// Render each chunk sequentially and send it right away
for (let i = 0; i < chunks.length; i++) {
const chunk = chunks[i];
try {
const audioBuffer = await callXTTSAPI(chunk, language || "de", hasCustomVoice ? voiceSample : null);
if (audioBuffer && audioBuffer.length > 100) {
const base64 = audioBuffer.toString("base64");
log(`TTS [${i + 1}/${chunks.length}]: ${audioBuffer.length} bytes (${(audioBuffer.length / 1024).toFixed(0)}KB) — "${chunk.slice(0, 50)}..."`);
sendToRVS({
type: "xtts_response",
payload: {
requestId: `${requestId || ""}_${i}`,
base64,
mimeType: "audio/wav",
voice: voice || "default",
engine: "xtts",
},
timestamp: Date.now(),
});
}
} catch (chunkErr) {
log(`TTS [${i + 1}/${chunks.length}] Fehler: ${chunkErr.message} — ueberspringe`);
}
}
log(`TTS komplett: ${chunks.length} Chunks gerendert`);
} catch (err) {
log(`TTS Fehler: ${err.message}`);
sendToRVS({
type: "xtts_response",
payload: { requestId, error: err.message },
timestamp: Date.now(),
});
}
}
function callXTTSAPI(text, language, speakerWav) {
return new Promise((resolve, reject) => {
const body = JSON.stringify({
text,
language,
speaker_wav: speakerWav || "",
});
const url = new URL(`${XTTS_API_URL}/tts_to_audio/`);
const options = {
hostname: url.hostname,
port: url.port,
path: url.pathname,
method: "POST",
headers: {
"Content-Type": "application/json",
"Content-Length": Buffer.byteLength(body),
},
timeout: 60000,
};
const req = http.request(options, (res) => {
const chunks = [];
res.on("data", (chunk) => chunks.push(chunk));
res.on("end", () => {
if (res.statusCode === 200) {
resolve(Buffer.concat(chunks));
} else {
reject(new Error(`XTTS API HTTP ${res.statusCode}: ${Buffer.concat(chunks).toString().slice(0, 200)}`));
}
});
});
req.on("error", reject);
req.on("timeout", () => { req.destroy(); reject(new Error("XTTS API Timeout (60s)")); });
req.write(body);
req.end();
});
}
// ── Voice Upload Handler ────────────────────────────
async function handleVoiceUpload(payload) {
const { name, samples } = payload;
if (!name || !samples || !Array.isArray(samples) || samples.length === 0) {
log("Voice Upload: Ungueltige Daten");
return;
}
log(`Voice Upload: "${name}" (${samples.length} Samples)`);
try {
// Concatenate all samples (naive byte concat: the inner WAV headers are kept verbatim)
const buffers = samples.map(s => Buffer.from(s.base64, "base64"));
const combined = Buffer.concat(buffers);
// Save as WAV
fs.mkdirSync(VOICES_DIR, { recursive: true });
const filePath = path.join(VOICES_DIR, `${name.replace(/[^a-zA-Z0-9_-]/g, "_")}.wav`);
fs.writeFileSync(filePath, combined);
log(`Voice gespeichert: ${filePath} (${(combined.length / 1024).toFixed(0)}KB)`);
sendToRVS({
type: "xtts_voice_saved",
payload: { name, size: combined.length, path: filePath },
timestamp: Date.now(),
});
} catch (err) {
log(`Voice Upload Fehler: ${err.message}`);
}
}
// ── Voice List Handler ──────────────────────────────
async function handleListVoices() {
try {
const files = fs.existsSync(VOICES_DIR)
? fs.readdirSync(VOICES_DIR).filter(f => f.endsWith(".wav"))
: [];
const voices = files.map(f => ({
name: path.basename(f, ".wav"),
file: f,
size: fs.statSync(path.join(VOICES_DIR, f)).size,
}));
log(`Stimmen: ${voices.length} verfuegbar`);
sendToRVS({
type: "xtts_voices_list",
payload: { voices },
timestamp: Date.now(),
});
} catch (err) {
log(`Stimmen-Liste Fehler: ${err.message}`);
}
}
// ── Send to RVS ─────────────────────────────────────
function sendToRVS(msg) {
if (rvsWs && rvsWs.readyState === WebSocket.OPEN) {
rvsWs.send(JSON.stringify(msg));
}
}
// ── Start ───────────────────────────────────────────
log("ARIA XTTS Bridge startet...");
log(`XTTS API: ${XTTS_API_URL}`);
log(`RVS: ${RVS_HOST}:${RVS_PORT}`);
// Wait until the XTTS API is reachable
function waitForXTTS(callback, attempts) {
if (attempts <= 0) { log("XTTS API nicht erreichbar — starte trotzdem"); callback(); return; }
http.get(`${XTTS_API_URL}/docs`, (res) => {
log(`XTTS API erreichbar (HTTP ${res.statusCode})`);
callback();
}).on("error", () => {
log(`XTTS API noch nicht bereit — warte (${attempts} Versuche uebrig)...`);
setTimeout(() => waitForXTTS(callback, attempts - 1), 10000); // 10s instead of 5s (loading the model takes a while)
});
}
waitForXTTS(() => connectRVS(), 30); // wait at most 5 min (30 tries × 10 s)
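The sentence grouping in handleTTSRequest can be exercised in isolation. The logic below is extracted from the handler above into a standalone function:

```javascript
// Extracted from handleTTSRequest: strip bold markdown, split into
// sentences, drop trailing dots, then pack sentences into chunks of at
// most maxChars characters, re-joining with ". " for natural pauses.
function chunkForXTTS(text, maxChars = 250) {
  const clean = text.replace(/\*\*([^*]+)\*\*/g, "$1").trim();
  const sentences = clean.split(/(?<=[.!?])\s+/)
    .map(s => s.trim())
    .filter(s => s.length > 0)
    .map(s => s.replace(/[.]+$/, ""));
  const chunks = [];
  let current = "";
  for (const sentence of sentences) {
    if (current && current.length + sentence.length + 2 > maxChars) {
      chunks.push(current);
      current = sentence;
    } else {
      current = current ? current + ". " + sentence : sentence;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

console.log(chunkForXTTS("**Hallo.** Das ist ein Test. Noch ein Satz."));
// [ 'Hallo. Das ist ein Test. Noch ein Satz' ]
```

Note the trade-off the commit messages describe: short single sentences give XTTS too little context (unstable voice/volume), while chunks above ~250 characters make the base64 WebSocket packets unwieldy.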
+56
@@ -0,0 +1,56 @@
# ════════════════════════════════════════════════
# ARIA XTTS v2 — GPU TTS server
# Runs on the gaming PC (RTX 3060)
# Connects to the RVS for TTS requests
# ════════════════════════════════════════════════
#
# Prerequisites:
# - Docker Desktop with WSL2
# - NVIDIA Container Toolkit
# - .env with the RVS connection data
#
# Start: docker compose up -d
# Test: curl http://localhost:8000/docs
# ════════════════════════════════════════════════
services:
# ─── XTTS v2 API Server (GPU) ─────────────────
xtts:
image: daswer123/xtts-api-server:latest
container_name: aria-xtts
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
ports:
- "8000:8020"
volumes:
- xtts-models:/app/xtts_models # Model cache (~2 GB)
- ./voices:/voices # Custom Voice Samples
environment:
- COQUI_TOS_AGREED=1
restart: unless-stopped
# ─── XTTS bridge (connects to the RVS) ────────
xtts-bridge:
build: .
container_name: aria-xtts-bridge
depends_on:
- xtts
volumes:
- ./voices:/voices # Shared with the XTTS server
environment:
- XTTS_API_URL=http://xtts:8020
- RVS_HOST=${RVS_HOST}
- RVS_PORT=${RVS_PORT:-443}
- RVS_TLS=${RVS_TLS:-true}
- RVS_TLS_FALLBACK=${RVS_TLS_FALLBACK:-true}
- RVS_TOKEN=${RVS_TOKEN}
restart: unless-stopped
volumes:
xtts-models:
+8
@@ -0,0 +1,8 @@
{
"name": "aria-xtts-bridge",
"version": "1.0.0",
"private": true,
"dependencies": {
"ws": "^8.16.0"
}
}