Compare commits
46 Commits
| SHA1 |
|------|
| 49089eee4b |
| e544992c9f |
| 97a1a3089a |
| 64f18e97a0 |
| 9cbea27455 |
| c8881f9e4d |
| 028e3b2240 |
| c042f27106 |
| 4ceadf8be5 |
| ddd30b3059 |
| 6c8ba5fe2d |
| 32ddac002f |
| bbbe69d928 |
| 23c39d5bba |
| 5328dc8595 |
| 0c03b4f161 |
| 31fe70bab5 |
| 39251b3d32 |
| 0623de32a0 |
| cd5e6e7ee6 |
| ee3e0a0af6 |
| 0783b1b99d |
| 5492c7a46f |
| 4cbe184faa |
| 647a1cb726 |
| 73263b69a6 |
| c62ceafdc2 |
| 9b5a35cb4a |
| 5ac1a0a522 |
| a28b46a809 |
| 59c8d36a3d |
| 79ba7b8487 |
| ba62cec78c |
| f15b3f583f |
| 402bddc18a |
| 350069d371 |
| 019c078393 |
| d411df4074 |
| 763e0d79ab |
| 47fe4ad655 |
| 99cb83202e |
| fc2438be2d |
| 40e48b046b |
| f801d99748 |
| 6ab6196739 |
| eb12281dfc |
@@ -57,8 +57,8 @@ ARIA hat zwei Rollen:
 │ │ Liest BOOTSTRAP.md + AGENT.md                │ │
 │ │                                              │ │
 │ │ [bridge] ARIA Voice Bridge Container         │ │
-│ │ Whisper STT · Piper TTS · Wake-Word          │ │
-│ │ Ramona (weiblich) + Thorsten (tief)          │ │
+│ │ Whisper STT · Wake-Word                      │ │
+│ │ TTS remote via XTTS v2 auf Gaming-PC         │ │
 │ │ Bruecke: App <> RVS <> Bridge <> ARIA        │ │
 │ │                                              │ │
 │ │ [diagnostic] Selbstcheck-UI + Einstellungen  │ │
@@ -143,21 +143,16 @@ claude login
 **Wichtig:** Der Ordner `~/.claude/` (nicht `~/.config/claude/`!) wird als Volume
 in den Proxy gemountet. Die Credentials ueberleben Container-Restarts.
 
-### 3. Stimmen herunterladen
-
-```bash
-./get-voices.sh
-# Laedt Ramona + Thorsten (Piper TTS) nach aria-data/voices/
-# Ca. 100MB, dauert ein paar Minuten
-```
-
-### 4. Voice Bridge konfigurieren
+### 3. Voice Bridge konfigurieren
 
 ```bash
 cp aria-data/config/aria.env.example aria-data/config/aria.env
-# Bei Bedarf anpassen (Whisper-Modell, Sprache, Stimmen-Pfade)
+# Bei Bedarf anpassen (Whisper-Modell, Sprache, Wake-Word)
 ```
 
+TTS laeuft ausschliesslich ueber XTTS v2 auf dem Gaming-PC — siehe Abschnitt
+"XTTS v2 — High-Quality TTS" weiter unten.
+
 ### 5. RVS-Token generieren & Container starten
 
 ```bash
@@ -253,7 +248,6 @@ Danach werden per `sed` vier Patches angewendet:
 - Sicherheitsregeln (kein ClawHub, Prompt Injection abwehren)
 - Tool-Freigaben (alle Claude Code Tools: WebFetch, Bash, etc.)
 - SSH-Zugriff auf aria-wohnung (VM)
-- Stimmen-Auswahl (Ramona vs Thorsten)
 - Gedaechtnis-System
 
 ### openclaw.json (via aria-setup.sh)
@@ -299,15 +293,14 @@ Audio: App → RVS → Bridge → FFmpeg → Whisper STT → chat.send → aria
 Datei: App → RVS → Bridge → /shared/uploads/ → chat.send (mit Pfad) → aria-core
 
 aria-core → Antwort → Gateway → Diagnostic → RVS → App
-          → Bridge → Piper TTS → RVS → App (Audio)
-          → Bridge → Lautsprecher (lokal)
+          → Bridge → XTTS (PCM-Stream) → RVS → App AudioTrack
 ```
 
 ### Features
 
 - **STT**: faster-whisper (lokal, offline, 16kHz mono)
-- **TTS**: Piper (Ramona + Thorsten, offline) oder XTTS v2 (remote, GPU, Voice Cloning)
-- **Markdown-Bereinigung**: Entfernt **fett**, *kursiv*, `code`, Links, Listen etc. vor TTS (natuerliche Sprache)
+- **TTS**: XTTS v2 (remote auf Gaming-PC, GPU, Voice Cloning) — Streaming ueber PCM-Chunks
+- **Text-Cleanup**: `<voice>...</voice>` Tag bevorzugt, Markdown/Code/Einheiten/URLs werden TTS-gerecht aufbereitet
 - **Wake-Word**: openwakeword (lokales Mikrofon auf der VM)
 - **App-Audio**: Base64 Audio von App → FFmpeg → Whisper STT → Text an aria-core
 - **Modi**: Normal, Nicht stoeren, Fluestern, Hangar, Gaming
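The "Text-Cleanup" bullet above names a transformation without showing it. A minimal sketch of such a cleanup in TypeScript, purely illustrative (`cleanForTts` and the exact regexes are assumptions, not the Bridge's code; only the `<voice>` preference and the Markdown stripping come from the bullet):

```typescript
// Hypothetical sketch of a TTS text-cleanup step:
// prefer an explicit <voice>...</voice> span, otherwise strip Markdown markup.
function cleanForTts(text: string): string {
  const voiceMatch = text.match(/<voice>([\s\S]*?)<\/voice>/);
  if (voiceMatch) return voiceMatch[1].trim();
  return text
    .replace(/`{3}[\s\S]*?`{3}/g, '')        // fenced code blocks read badly aloud
    .replace(/\*\*([^*]+)\*\*/g, '$1')       // **bold**
    .replace(/\*([^*]+)\*/g, '$1')           // *italic*
    .replace(/`([^`]+)`/g, '$1')             // `inline code`
    .replace(/\[([^\]]+)\]\([^)]+\)/g, '$1') // [label](url) keeps only label
    .replace(/^[-*]\s+/gm, '')               // list bullets
    .replace(/\s+/g, ' ')
    .trim();
}
```

The real Bridge additionally normalizes units and URLs; that part is omitted here.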
@@ -322,13 +315,6 @@ aria-core → Antwort → Gateway → Diagnostic → RVS → App
 | Hangar | `"ARIA, ich arbeite"` | Nur wichtige Meldungen |
 | Gaming | `"ARIA, Gaming-Modus"` | Nur auf direkte Fragen antworten |
 
-### Stimmen
-
-| Stimme | Modell | Wann |
-|--------|--------|------|
-| **Ramona** (weiblich) | `de_DE-ramona-low` | Alltag, Antworten, Gespraeche |
-| **Thorsten** (maennlich, tief) | `de_DE-thorsten-high` | Epische Momente, Alarme |
-
 ---
 
 ## Diagnostic — Selbstcheck-UI und Einstellungen
@@ -344,7 +330,7 @@ Erreichbar unter `http://<VM-IP>:3001`. Teilt das Netzwerk mit aria-core.
 - **Session-Verwaltung**: Sessions auflisten, wechseln, erstellen, loeschen, als Markdown exportieren (⬇ Button)
 - **Chat-History**: Wird beim Laden und Session-Wechsel angezeigt (read-only aus JSONL)
 - **TTS-Diagnose Tab**: Stimmen testen, Status pruefen, Fehler anzeigen
-- **Einstellungen**: TTS-Engine (Piper/XTTS), Stimmen, Speed, Highlight-Trigger, Betriebsmodi, Whisper-Modell (tiny…large-v3, Hot-Reload)
+- **Einstellungen**: TTS aktiv-Toggle, XTTS-Voice (gecloned), Betriebsmodi, Whisper-Modell (tiny…large-v3, Hot-Reload)
 - **XTTS Voice Cloning**: Audio-Samples hochladen, eigene Stimme erstellen
 - **Claude Login**: Browser-Terminal zum Einloggen in den Proxy
 - **Core Terminal**: Shell in aria-core (openclaw CLI)
@@ -373,13 +359,13 @@ API-Endpoint fuer andere Services: `GET http://localhost:3001/api/session`
 - **Speech Gate**: Aufnahme wird verworfen wenn keine Sprache erkannt (kein Rauschen an Whisper)
 - **STT (Speech-to-Text)**: Audio wird als 16kHz mono aufgenommen und in der Bridge per Whisper transkribiert, transkribierter Text erscheint im Chat
 - **"ARIA denkt..." Indicator**: Zeigt live den Status vom Core (Denken, Tool, Schreiben) + Abbrechen-Button
-- **TTS-Wiedergabe**: ARIA antwortet per Lautsprecher (Piper oder XTTS v2), Audio-Queue mit Preloading
+- **TTS-Wiedergabe**: ARIA antwortet per Lautsprecher — XTTS v2 PCM-Streaming direkt in AudioTrack, keine Wait-Gaps
 - **Play-Button**: Jede ARIA-Nachricht kann nochmal vorgelesen werden
 - **Chat-Suche**: Lupe in der Statusleiste filtert Nachrichten live
 - **Mehrere Anhaenge**: Bilder + Dateien sammeln, Text hinzufuegen, dann zusammen senden
 - **Paste-Support**: Bilder aus Zwischenablage einfuegen (Diagnostic)
 - **Anhaenge**: Bridge speichert in Shared Volume, ARIA kann darauf zugreifen, Re-Download ueber RVS
-- **Einstellungen**: TTS Engine, Stimmen, Speed pro Stimme, Speicherort, Auto-Download, GPS
+- **Einstellungen**: TTS aktiv, XTTS-Voice, Speicherort, Auto-Download, GPS
 - **Auto-Update**: Prueft beim Start + per Button auf neue Version, Download + Installation ueber RVS (FileProvider)
 - GPS-Position (optional)
 - QR-Code Scanner fuer Token-Pairing
@@ -429,7 +415,7 @@ RVS_UPDATE_HOST=root@aria-rvs # Optional: fuer Auto-Update
 ### Docker-Cleanup
 
 Das Bridge-Image zieht grosse ML-Deps (faster-whisper, ctranslate2, onnxruntime,
-openwakeword, piper-tts) — bei jedem Rebuild waechst der Docker-Build-Cache. Wenn
+openwakeword) — bei jedem Rebuild waechst der Docker-Build-Cache. Wenn
 die VM voll laeuft:
 
 ```bash
@@ -453,8 +439,8 @@ Der Update-Flow:
 App (Mikrofon) → AAC/MP4 Aufnahme → Base64 → RVS → Bridge
 Bridge: FFmpeg (16kHz PCM) → Whisper STT → Text → aria-core
 Bridge: STT-Ergebnis → RVS → App (Placeholder wird durch transkribierten Text ersetzt)
-aria-core → Antwort → Bridge → Piper TTS (WAV) → Base64 → RVS → App
-App: Base64 → WAV → Lautsprecher
+aria-core → Antwort → Bridge → XTTS (Gaming-PC) → PCM-Stream → RVS → App
+App: AudioTrack MODE_STREAM (nahtlos), Cache als WAV pro Message
 ```
 
 ### Datei-Pipeline (Bilder & Anhaenge)
@@ -502,10 +488,6 @@ aria-data/
 │
 ├── skills/              ← ARIAs Faehigkeiten (selbst geschrieben!)
 │
-├── voices/              ← Piper TTS Stimmen (offline)
-│   ├── de_DE-ramona-low.onnx
-│   └── de_DE-thorsten-high.onnx
-│
 ├── config/
 │   ├── BOOTSTRAP.md     ← System-Prompt (Identitaet, Regeln, Tools)
 │   ├── AGENT.md         ← Persoenlichkeit & Arbeitsprinzipien
@@ -600,26 +582,26 @@ Das Model wird im Volume `xtts-models` gecacht und muss nur einmal geladen werde
 ### Features
 
-- **Natuerliche Stimmen**: Deutlich bessere Qualitaet als Piper
+- **Natuerliche Stimmen**: Deutlich bessere Qualitaet als TTS der alten Generation
 - **Voice Cloning**: Eigene Stimme mit 6-10s Audio-Sample (~2s Latenz auf RTX 3060)
 - **Streaming**: PCM-Chunks alle ~170ms → App spielt ohne Warten nahtlos
 - **16 Sprachen**: Deutsch, Englisch, Franzoesisch, etc.
-- **Fallback**: Wenn XTTS nicht erreichbar, nutzt die Bridge automatisch Piper
 
-### TTS-Engine umschalten
+### TTS-Config
 
 In der Diagnostic unter Einstellungen → Sprachausgabe:
 - **TTS aktiv**: Global An/Aus
-- **TTS Engine**: Piper (lokal, CPU, schnell) oder XTTS v2 (remote, GPU, natuerlich)
-- **Piper**: Standard-Stimme, Highlight-Stimme, Speed pro Stimme
-- **XTTS**: Stimmen-Auswahl, Voice Cloning
+- **XTTS Stimme**: Default oder gecloned (Maia, etc.)
+
+> XTTS ist die einzige Engine — wenn der Gaming-PC offline ist, bleibt ARIA stumm.
+> Chat-Antworten kommen weiter an (nur kein Audio).
 
 ### Stimme klonen
 
-1. TTS Engine auf "XTTS v2" stellen
-2. "Stimme klonen" → Audio-Dateien hochladen (WAV/MP3, 1-10 Dateien, min. 6-10s gesamt)
-3. Name vergeben → "Stimme erstellen"
-4. "Laden" klicken → neue Stimme in der Auswahl
-5. Stimme auswaehlen → Config wird automatisch gespeichert
+1. "Stimme klonen" → Audio-Dateien hochladen (WAV/MP3, 1-10 Dateien, min. 6-10s gesamt)
+2. Name vergeben → "Stimme erstellen"
+3. "Laden" klicken → neue Stimme in der Auswahl
+4. Stimme auswaehlen → Config wird automatisch gespeichert
 
 > **Tipp:** Fuer beste Ergebnisse: saubere Aufnahme, eine Stimme, kein Hintergrund,
 > 10-30 Sekunden Gesamtlaenge. Mehrere kurze Dateien werden zusammengefuegt.
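The "~170ms" chunk cadence above can be made concrete. Assuming 24 kHz mono s16le output (a typical XTTS v2 output format; this README does not state the rate, so treat it as an assumption):

```typescript
// Back-of-the-envelope: bytes per ~170 ms PCM chunk at 24 kHz mono s16le.
const sampleRate = 24000;     // Hz (assumed XTTS output rate)
const channels = 1;           // mono
const bytesPerSample = 2;     // s16le
const bytesPerSec = sampleRate * channels * bytesPerSample;
const chunkMs = 170;
const chunkBytes = Math.round((bytesPerSec * chunkMs) / 1000);
```

Each streamed chunk then carries roughly 8 KB of raw PCM; base64 encoding on the RVS hop inflates that by about a third.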
@@ -718,7 +700,9 @@ docker exec aria-core ssh aria-wohnung hostname
 - [x] SSH-Zugriff auf VM (aria-wohnung)
 - [x] Diagnostic Web-UI + Einstellungen
 - [x] Session-Verwaltung + Chat-History
-- [x] Stimmen-Einstellungen (Ramona/Thorsten, Speed, Highlight-Trigger)
+- [x] Stimmen-Einstellungen (Ramona/Thorsten, Speed, Highlight-Trigger) — durch XTTS v2 Voice Cloning ersetzt
+- [x] Piper komplett entfernt — nur noch XTTS v2 als TTS (Gaming-PC)
+- [x] Streaming TTS: PCM-Chunks direkt in AudioTrack, nahtlose Wiedergabe
 - [x] TTS satzweise fuer lange Texte
 - [x] Datei-/Bild-Upload mit Shared Volume
 - [x] Watchdog (stuck Run Erkennung + Auto-Fix + Container-Restart)
@@ -79,8 +79,8 @@ android {
         applicationId "com.ariacockpit"
         minSdkVersion rootProject.ext.minSdkVersion
         targetSdkVersion rootProject.ext.targetSdkVersion
-        versionCode 401
-        versionName "0.0.4.1"
+        versionCode 502
+        versionName "0.0.5.2"
         // Fallback fuer Libraries mit Product Flavors
         missingDimensionStrategy 'react-native-camera', 'general'
     }
@@ -20,6 +20,7 @@ class MainApplication : Application(), ReactApplication {
         PackageList(this).packages.apply {
             add(ApkInstallerPackage())
             add(AudioFocusPackage())
+            add(PcmStreamPlayerPackage())
         }
 
     override fun getJSMainModuleName(): String = "index"
@@ -0,0 +1,236 @@
package com.ariacockpit

import android.media.AudioAttributes
import android.media.AudioFormat
import android.media.AudioTrack
import android.util.Base64
import android.util.Log
import com.facebook.react.bridge.Promise
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod
import java.util.concurrent.LinkedBlockingQueue

/**
 * Streamt PCM-s16le Audio direkt via AudioTrack MODE_STREAM mit Pre-Roll.
 *
 * Pre-Roll: AudioTrack wird zwar direkt gebaut und gefuettert, aber play()
 * wird erst aufgerufen wenn PREROLL_SECONDS Audio im Buffer ist. So hat
 * der Stream Zeit einen Vorrat aufzubauen — wenn XTTS mit RTF>1 rendert
 * (langsamer als Echtzeit), laeuft der Buffer trotzdem nicht leer.
 *
 * Flow:
 *   JS: start(sampleRate, channels, prerollSeconds) → oeffnet AudioTrack (noch nicht play())
 *   JS: writeChunk(base64) → dekodiert, queued, Writer schreibt
 *   Writer: spielt los sobald PREROLL erreicht ist
 *   JS: end() → wartet bis Queue leer, schliesst
 *   JS: stop() → Hart stoppen (Cancel)
 */
class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContextBaseJavaModule(reactContext) {

    companion object {
        private const val TAG = "PcmStreamPlayer"
        // Fallback wenn JS keinen Wert uebergibt.
        private const val DEFAULT_PREROLL_SECONDS = 3.5
        private const val MIN_PREROLL_SECONDS = 0.5
        private const val MAX_PREROLL_SECONDS = 10.0
        // Stille am Stream-Anfang, damit AudioTrack sauber anfaehrt und die
        // ersten Samples nicht abgeschnitten werden (XTTS-Warmup + play()-Latenz).
        private const val LEADING_SILENCE_SECONDS = 0.2
    }

    override fun getName() = "PcmStreamPlayer"

    private var track: AudioTrack? = null
    private val queue = LinkedBlockingQueue<ByteArray>()
    private var writerThread: Thread? = null
    @Volatile private var writerShouldStop = false
    @Volatile private var endRequested = false
    @Volatile private var prerollBytes: Int = 0
    @Volatile private var playbackStarted = false
    @Volatile private var bytesBuffered: Long = 0
    @Volatile private var streamBytesPerFrame: Int = 2 // mono s16le default

    // ── Lifecycle ──

    @ReactMethod
    fun start(sampleRate: Int, channels: Int, prerollSeconds: Double, promise: Promise) {
        try {
            // Alte Session beenden falls vorhanden
            stopInternal()

            // Erst auf gueltigen Wert pruefen, dann begrenzen: 0.0 bzw. NaN
            // (JS hat nichts uebergeben) soll auf DEFAULT statt auf MIN landen.
            val prerollSec = (if (prerollSeconds.isFinite() && prerollSeconds > 0) prerollSeconds else DEFAULT_PREROLL_SECONDS)
                .coerceIn(MIN_PREROLL_SECONDS, MAX_PREROLL_SECONDS)

            val channelConfig = if (channels == 2) AudioFormat.CHANNEL_OUT_STEREO else AudioFormat.CHANNEL_OUT_MONO
            val encoding = AudioFormat.ENCODING_PCM_16BIT
            val minBuf = AudioTrack.getMinBufferSize(sampleRate, channelConfig, encoding)
            val bytesPerSecond = sampleRate * channels * 2 // 16-bit = 2 bytes
            // Buffer muss mindestens PREROLL + etwas Spielraum fassen.
            val prerollTarget = (bytesPerSecond * prerollSec).toInt()
            val bufferSize = (minBuf * 32).coerceAtLeast(prerollTarget * 2)
            prerollBytes = prerollTarget
            bytesBuffered = 0
            playbackStarted = false
            streamBytesPerFrame = channels * 2 // s16 = 2 bytes per sample

            val newTrack = AudioTrack.Builder()
                .setAudioAttributes(
                    AudioAttributes.Builder()
                        .setUsage(AudioAttributes.USAGE_ASSISTANT)
                        .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
                        .build(),
                )
                .setAudioFormat(
                    AudioFormat.Builder()
                        .setSampleRate(sampleRate)
                        .setChannelMask(channelConfig)
                        .setEncoding(encoding)
                        .build(),
                )
                .setBufferSizeInBytes(bufferSize)
                .setTransferMode(AudioTrack.MODE_STREAM)
                .build()

            // AudioTrack erstellen — play() wird erst aufgerufen wenn Pre-Roll erreicht.
            track = newTrack
            queue.clear()
            writerShouldStop = false
            endRequested = false

            writerThread = Thread({
                val t = track ?: return@Thread
                try {
                    // Leading-Silence in den Buffer — gibt AudioTrack Zeit anzufahren.
                    val silenceBytes = ((sampleRate * channels * 2) * LEADING_SILENCE_SECONDS).toInt() and 0x7FFFFFFE
                    if (silenceBytes > 0) {
                        val silence = ByteArray(silenceBytes)
                        var silOff = 0
                        while (silOff < silence.size && !writerShouldStop) {
                            val w = t.write(silence, silOff, silence.size - silOff)
                            if (w <= 0) break
                            silOff += w
                        }
                        bytesBuffered += silence.size
                    }
                    while (!writerShouldStop) {
                        val data = queue.poll(50, java.util.concurrent.TimeUnit.MILLISECONDS) ?: run {
                            if (endRequested) {
                                // Falls wir vor Pre-Roll enden (kurzer Text): trotzdem abspielen
                                if (!playbackStarted) {
                                    try { t.play() } catch (_: Exception) {}
                                    playbackStarted = true
                                }
                                return@Thread
                            }
                            null
                        } ?: continue

                        // Pre-Roll Check: play() erst wenn genug gepuffert
                        if (!playbackStarted && bytesBuffered + data.size >= prerollBytes) {
                            try {
                                t.play()
                                playbackStarted = true
                                Log.i(TAG, "Playback gestartet nach Pre-Roll ${bytesBuffered + data.size} Bytes")
                            } catch (e: Exception) {
                                Log.w(TAG, "play() failed: ${e.message}")
                            }
                        }

                        var offset = 0
                        while (offset < data.size && !writerShouldStop) {
                            val written = t.write(data, offset, data.size - offset)
                            if (written <= 0) break
                            offset += written
                        }
                        bytesBuffered += data.size
                    }
                } catch (e: Exception) {
                    Log.w(TAG, "Writer-Thread Fehler: ${e.message}")
                } finally {
                    // Warten bis alle geschriebenen Samples tatsaechlich abgespielt sind,
                    // sonst cuttet t.release() die letzten Sekunden ab.
                    try {
                        val totalFrames = (bytesBuffered / streamBytesPerFrame).toInt()
                        var lastPos = -1
                        var stalledCount = 0
                        while (!writerShouldStop) {
                            val pos = t.playbackHeadPosition
                            if (pos >= totalFrames) break
                            // Safety: wenn Position 2s nicht mehr vorwaerts → AudioTrack hing
                            if (pos == lastPos) {
                                stalledCount++
                                if (stalledCount > 40) {
                                    Log.w(TAG, "playback stalled at $pos/$totalFrames — give up")
                                    break
                                }
                            } else {
                                stalledCount = 0
                                lastPos = pos
                            }
                            Thread.sleep(50)
                        }
                        Log.i(TAG, "Playback fertig: frames=$totalFrames pos=${t.playbackHeadPosition}")
                    } catch (_: Exception) {}
                    try { t.stop() } catch (_: Exception) {}
                    try { t.release() } catch (_: Exception) {}
                }
            }, "PcmStreamWriter").apply { start() }

            Log.i(TAG, "Stream gestartet: ${sampleRate}Hz ch=$channels buf=${bufferSize}B preroll=${prerollBytes}B (${prerollSec}s)")
            promise.resolve(true)
        } catch (e: Exception) {
            Log.e(TAG, "start fehlgeschlagen", e)
            promise.reject("START_FAILED", e.message, e)
        }
    }

    @ReactMethod
    fun writeChunk(base64Pcm: String, promise: Promise) {
        try {
            if (base64Pcm.isEmpty()) {
                promise.resolve(true)
                return
            }
            val bytes = Base64.decode(base64Pcm, Base64.DEFAULT)
            queue.put(bytes)
            promise.resolve(true)
        } catch (e: Exception) {
            promise.reject("WRITE_FAILED", e.message, e)
        }
    }

    /** Signalisiert: keine weiteren Chunks. Writer wartet auf Queue-Abschluss, dann stoppt. */
    @ReactMethod
    fun end(promise: Promise) {
        endRequested = true
        promise.resolve(true)
    }

    /** Harter Stop (Cancel) — Queue verwerfen. */
    @ReactMethod
    fun stop(promise: Promise) {
        stopInternal()
        promise.resolve(true)
    }

    private fun stopInternal() {
        writerShouldStop = true
        endRequested = true
        queue.clear()
        writerThread?.interrupt()
        writerThread = null
        val t = track
        if (t != null) {
            try { t.stop() } catch (_: Exception) {}
            try { t.release() } catch (_: Exception) {}
        }
        track = null
    }

    override fun onCatalystInstanceDestroy() {
        stopInternal()
        super.onCatalystInstanceDestroy()
    }
}
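The buffer sizing in `start()` is plain arithmetic and easy to sanity-check outside Android. A TypeScript mirror of the same math (the helper name is invented; the constants match the Kotlin module above):

```typescript
// Mirror of PcmStreamPlayerModule's pre-roll sizing (s16le = 2 bytes per sample).
function prerollTargetBytes(sampleRate: number, channels: number, prerollSec: number): number {
  return Math.floor(sampleRate * channels * 2 * prerollSec);
}

// 24 kHz mono with the 3.5 s default pre-roll:
const target = prerollTargetBytes(24000, 1, 3.5);
// The AudioTrack buffer must hold at least twice the pre-roll
// (see `coerceAtLeast(prerollTarget * 2)` in start()).
const minBuffer = target * 2;
```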
@@ -0,0 +1,16 @@
package com.ariacockpit

import com.facebook.react.ReactPackage
import com.facebook.react.bridge.NativeModule
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.uimanager.ViewManager

class PcmStreamPlayerPackage : ReactPackage {
    override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
        return listOf(PcmStreamPlayerModule(reactContext))
    }

    override fun createViewManagers(reactContext: ReactApplicationContext): List<ViewManager<*, *>> {
        return emptyList()
    }
}
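From the JS side, the module's lifecycle is start, then writeChunk per PCM chunk, then end (stop only on cancel). A sketch of that driver, written against an injected interface so it runs without React Native (`PcmPlayer` mirrors the native methods; in the real app it would be backed by `NativeModules.PcmStreamPlayer`, and the helper name `streamPcm` is invented here):

```typescript
// Minimal driver for the PcmStreamPlayer lifecycle (start / writeChunk* / end).
interface PcmPlayer {
  start(sampleRate: number, channels: number, prerollSeconds: number): Promise<boolean>;
  writeChunk(base64Pcm: string): Promise<boolean>;
  end(): Promise<boolean>;
}

async function streamPcm(player: PcmPlayer, chunks: string[], sampleRate = 24000): Promise<number> {
  // Open the track; the native side defers play() until the pre-roll is buffered.
  await player.start(sampleRate, 1, 3.5);
  let written = 0;
  for (const chunk of chunks) {
    await player.writeChunk(chunk); // base64 s16le PCM, queued by the native writer
    written += 1;
  }
  await player.end(); // no more chunks: drain the queue, then release the track
  return written;
}
```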
@@ -1,6 +1,6 @@
 {
   "name": "aria-cockpit",
-  "version": "0.0.4.1",
+  "version": "0.0.5.2",
   "private": true,
   "scripts": {
     "android": "react-native run-android",
@@ -0,0 +1,362 @@
/**
 * VoiceCloneModal — Eigene Stimme aufnehmen und an XTTS uploaden.
 *
 * Flow:
 * - Modal zeigt Vorlesetext (>30s Lesedauer) + Aufnahme-Button
 * - Bei Aufnahme: max 30s, Fortschrittsbalken, Countdown
 * - Bei Stop: Name abfragen, dann als voice_upload ueber RVS schicken
 * - XTTS-Bridge speichert /voices/<name>.wav, antwortet mit xtts_voice_saved
 */

import React, { useCallback, useEffect, useRef, useState } from 'react';
import {
  Modal,
  View,
  Text,
  TouchableOpacity,
  StyleSheet,
  Alert,
  ScrollView,
  ActivityIndicator,
  TextInput,
} from 'react-native';
import audioService from '../services/audio';
import rvs from '../services/rvs';

interface Props {
  visible: boolean;
  onClose: () => void;
}

const SAMPLE_TEXT = `Das ist meine eigene Stimme fuer ARIA. Ich lese jetzt einen laengeren Absatz laut vor, damit das Voice-Cloning eine gute Grundlage hat. Guten Tag, ich heisse Stefan und baue gerade mit grosser Begeisterung an meinem persoenlichen KI-Assistenten. Wir automatisieren Infrastruktur, managen Sessions und spielen mit Sprachsynthese. Die letzten Jahre habe ich viel gelernt, vor allem dass Geduld genauso wichtig ist wie Neugier. Hoert sich das jetzt an wie ich selbst? Wenn alles klappt, spricht ARIA bald mit dieser Stimme.`;

const MAX_DURATION_MS = 30000;
const TARGET_DURATION_MS = 15000;

const VoiceCloneModal: React.FC<Props> = ({ visible, onClose }) => {
  const [recording, setRecording] = useState(false);
  const [durationMs, setDurationMs] = useState(0);
  const [voiceName, setVoiceName] = useState('');
  const [processing, setProcessing] = useState(false);
  const [recordingPath, setRecordingPath] = useState('');
  const timerRef = useRef<ReturnType<typeof setInterval> | null>(null);
  const startTimeRef = useRef<number>(0);
  // Aufnahme-Status zusaetzlich als Ref: Interval- und Cleanup-Callbacks
  // wuerden sonst einen veralteten Closure-Wert von `recording` sehen
  // und die Aufnahme beim 30s-Limit nie stoppen.
  const recordingRef = useRef(false);

  // Zustand zuruecksetzen wenn Modal schliesst/oeffnet
  useEffect(() => {
    if (!visible) {
      setRecording(false);
      recordingRef.current = false;
      setDurationMs(0);
      setVoiceName('');
      setProcessing(false);
      setRecordingPath('');
      if (timerRef.current) clearInterval(timerRef.current);
    }
  }, [visible]);

  // Cleanup bei Unmount
  useEffect(() => {
    return () => {
      if (timerRef.current) clearInterval(timerRef.current);
      if (recordingRef.current) audioService.stopRecording().catch(() => {});
    };
  }, []);

  const stopRecording = useCallback(async () => {
    if (timerRef.current) {
      clearInterval(timerRef.current);
      timerRef.current = null;
    }
    if (!recordingRef.current) return;
    recordingRef.current = false;
    const result = await audioService.stopRecording();
    setRecording(false);
    if (!result) {
      Alert.alert('Keine Sprache erkannt', 'Versuch es bitte nochmal — sprich bis der Timer mindestens 10 Sekunden anzeigt.');
      setDurationMs(0);
      return;
    }
    // Temp-Datei wurde schon geloescht (stopRecording raeumt auf); fuers
    // Upload brauchen wir nur result.base64, das hier im State landet.
    setRecordingPath(result.base64);
  }, []);

  const startRecording = useCallback(async () => {
    // Frische Aufnahme
    setDurationMs(0);
    setRecordingPath('');
    const ok = await audioService.startRecording(false);
    if (!ok) {
      Alert.alert('Fehler', 'Aufnahme konnte nicht gestartet werden (Mikrofon-Berechtigung?)');
      return;
    }
    setRecording(true);
    recordingRef.current = true;
    startTimeRef.current = Date.now();
    timerRef.current = setInterval(async () => {
      const elapsed = Date.now() - startTimeRef.current;
      setDurationMs(elapsed);
      if (elapsed >= MAX_DURATION_MS) {
        await stopRecording();
      }
    }, 100);
  }, [stopRecording]);

  const uploadVoice = useCallback(async () => {
    const name = voiceName.trim();
    if (!name) {
      Alert.alert('Name fehlt', 'Bitte gib der Stimme einen Namen (nur Buchstaben, Zahlen, _ und -).');
      return;
    }
    if (!/^[a-zA-Z0-9_-]+$/.test(name)) {
      Alert.alert('Ungueltiger Name', 'Nur Buchstaben, Zahlen, _ und - erlaubt.');
      return;
    }
    if (!recordingPath) {
      Alert.alert('Keine Aufnahme', 'Bitte zuerst aufnehmen.');
      return;
    }
    setProcessing(true);
    try {
      // voice_upload erwartet samples als Array mit base64 (aus Diagnostic-Format kopiert)
      rvs.send('voice_upload' as any, {
        name,
        samples: [{ base64: recordingPath }],
      });
      Alert.alert('Hochgeladen', `Stimme "${name}" wird vom XTTS-Server verarbeitet. Nach ein paar Sekunden in der Liste verfuegbar.`);
      onClose();
    } catch (err: any) {
      Alert.alert('Fehler', err.message);
    } finally {
      setProcessing(false);
    }
  }, [voiceName, recordingPath, onClose]);

  const progress = Math.min(durationMs / MAX_DURATION_MS, 1);
  const sec = Math.floor(durationMs / 1000);
  const enoughRecorded = durationMs >= TARGET_DURATION_MS;

  return (
    <Modal visible={visible} animationType="slide" onRequestClose={onClose}>
      <View style={styles.container}>
        <View style={styles.header}>
          <Text style={styles.title}>Eigene Stimme aufnehmen</Text>
          <TouchableOpacity onPress={onClose}>
            <Text style={styles.closeX}>{'\u2715'}</Text>
          </TouchableOpacity>
        </View>

        <ScrollView style={styles.content} contentContainerStyle={{padding: 16}}>
          <Text style={styles.hint}>
            Lies den Text laut und deutlich vor. Maximal 30 Sekunden. Je mehr du sprichst
            (Ziel: bis zum Ende des Textes, ca. 20-30s), desto besser wird die geklonte
            Stimme.
          </Text>

          <View style={styles.sampleTextBox}>
            <Text style={styles.sampleText}>{SAMPLE_TEXT}</Text>
          </View>

          {/* Timer + Fortschritt */}
          <View style={{marginTop: 20, alignItems: 'center'}}>
            <Text style={[styles.timer, recording && styles.timerActive]}>
              {sec.toString().padStart(2, '0')} / 30 s
            </Text>
            <View style={styles.progressBar}>
              <View style={[styles.progressFill, {width: `${progress * 100}%`, backgroundColor: recording ? '#FF3B30' : '#0096FF'}]} />
            </View>
          </View>

          {/* Aufnahme-Button */}
          {!recordingPath && (
            <TouchableOpacity
              style={[styles.recordBtn, recording && styles.recordBtnActive]}
              onPress={recording ? stopRecording : startRecording}
            >
              <Text style={styles.recordIcon}>{recording ? '\u25A0' : '\u25CF'}</Text>
              <Text style={styles.recordLabel}>{recording ? 'Stop' : 'Aufnahme starten'}</Text>
            </TouchableOpacity>
          )}

          {/* Nach Aufnahme: Name + Upload */}
          {recordingPath && (
            <View style={{marginTop: 20}}>
              <Text style={styles.hint}>
                Aufnahme ({sec}s) fertig. Vergib einen Namen und lade hoch.
              </Text>
              <TextInput
                style={styles.nameInput}
                value={voiceName}
                onChangeText={setVoiceName}
                placeholder="z.B. stefan"
                placeholderTextColor="#555570"
                autoCapitalize="none"
                autoCorrect={false}
              />
              <View style={{flexDirection: 'row', gap: 8, marginTop: 12}}>
                <TouchableOpacity
                  style={[styles.secondaryBtn, {flex: 1}]}
                  onPress={() => { setRecordingPath(''); setDurationMs(0); }}
                >
                  <Text style={styles.secondaryBtnText}>Nochmal aufnehmen</Text>
                </TouchableOpacity>
                <TouchableOpacity
                  style={[styles.primaryBtn, {flex: 1}]}
                  onPress={uploadVoice}
                  disabled={processing}
                >
                  {processing
                    ? <ActivityIndicator color="#fff" />
                    : <Text style={styles.primaryBtnText}>Hochladen</Text>
                  }
                </TouchableOpacity>
              </View>
            </View>
          )}

          {recording && !enoughRecorded && (
            <Text style={[styles.hint, {marginTop: 12, color: '#FFD60A', textAlign: 'center'}]}>
              Bitte weiter lesen — mindestens 15 Sekunden
            </Text>
          )}

          {recording && enoughRecorded && (
            <Text style={[styles.hint, {marginTop: 12, color: '#34C759', textAlign: 'center'}]}>
              Genug Audio fuer ein gutes Voice-Cloning. Du kannst stoppen.
            </Text>
          )}
        </ScrollView>
      </View>
    </Modal>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#0D0D1A',
  },
  header: {
    flexDirection: 'row',
    alignItems: 'center',
    justifyContent: 'space-between',
    paddingHorizontal: 16,
    paddingTop: 48,
    paddingBottom: 16,
    borderBottomWidth: 1,
    borderBottomColor: '#1E1E2E',
  },
  title: {
    color: '#FFFFFF',
    fontSize: 18,
    fontWeight: '700',
  },
  closeX: {
    color: '#8888AA',
    fontSize: 24,
    paddingHorizontal: 8,
  },
  content: {
    flex: 1,
  },
  hint: {
    color: '#8888AA',
    fontSize: 13,
    lineHeight: 20,
  },
  sampleTextBox: {
    marginTop: 12,
    padding: 14,
    backgroundColor: '#12122A',
    borderRadius: 10,
    borderWidth: 1,
    borderColor: '#1E1E2E',
  },
  sampleText: {
    color: '#E0E0F0',
    fontSize: 15,
    lineHeight: 24,
  },
  timer: {
    color: '#666680',
    fontSize: 42,
    fontWeight: '700',
    fontVariant: ['tabular-nums'],
  },
  timerActive: {
    color: '#FF3B30',
  },
  progressBar: {
    marginTop: 8,
    width: '100%',
    height: 8,
    backgroundColor: '#1E1E2E',
    borderRadius: 4,
    overflow: 'hidden',
  },
  progressFill: {
    height: '100%',
  },
  recordBtn: {
    marginTop: 24,
    flexDirection: 'row',
    alignItems: 'center',
    justifyContent: 'center',
    gap: 12,
    backgroundColor: '#1E1E2E',
    borderRadius: 12,
    padding: 18,
    borderWidth: 2,
    borderColor: '#34C759',
  },
  recordBtnActive: {
    borderColor: '#FF3B30',
    backgroundColor: 'rgba(255,59,48,0.15)',
  },
  recordIcon: {
    color: '#FF3B30',
    fontSize: 24,
    fontWeight: '700',
  },
  recordLabel: {
    color: '#FFFFFF',
    fontSize: 17,
    fontWeight: '600',
  },
  nameInput: {
    marginTop: 10,
    backgroundColor: '#1E1E2E',
    borderRadius: 8,
    paddingHorizontal: 14,
    paddingVertical: 12,
    color: '#FFFFFF',
    fontSize: 15,
    borderWidth: 1,
    borderColor: '#2A2A3E',
  },
  primaryBtn: {
    backgroundColor: '#0096FF',
    borderRadius: 10,
    padding: 14,
    alignItems: 'center',
  },
  primaryBtnText: {
    color: '#FFFFFF',
    fontSize: 15,
    fontWeight: '700',
  },
  secondaryBtn: {
    backgroundColor: '#1E1E2E',
    borderRadius: 10,
    padding: 14,
    alignItems: 'center',
    borderWidth: 1,
    borderColor: '#2A2A3E',
  },
  secondaryBtnText: {
    color: '#8888AA',
    fontSize: 14,
    fontWeight: '600',
  },
});

export default VoiceCloneModal;
@@ -18,6 +18,7 @@ import {
  Image,
  ScrollView,
  Modal,
  ToastAndroid,
} from 'react-native';
import AsyncStorage from '@react-native-async-storage/async-storage';
import RNFS from 'react-native-fs';
@@ -107,6 +108,11 @@ const ChatScreen: React.FC = () => {
  const [searchVisible, setSearchVisible] = useState(false);
  const [pendingAttachments, setPendingAttachments] = useState<{file: any, isPhoto: boolean}[]>([]);
  const [agentActivity, setAgentActivity] = useState<{activity: string, tool: string}>({activity: 'idle', tool: ''});
  // Device-local TTS config: global toggle (from settings) + temporary mute (mouth button)
  const [ttsDeviceEnabled, setTtsDeviceEnabled] = useState(true);
  const [ttsMuted, setTtsMuted] = useState(false);
  // Device-local XTTS voice choice (takes precedence over the global default)
  const localXttsVoiceRef = useRef<string>('');

  const flatListRef = useRef<FlatList>(null);
  const messageIdCounter = useRef(0);
@@ -117,6 +123,32 @@ const ChatScreen: React.FC = () => {
    return `msg_${Date.now()}_${messageIdCounter.current}`;
  };

  // Reload the TTS settings on mount + on screen focus (so the settings toggle takes effect immediately)
  useEffect(() => {
    const loadTtsSettings = async () => {
      const enabled = await AsyncStorage.getItem('aria_tts_enabled');
      setTtsDeviceEnabled(enabled !== 'false'); // default true
      const muted = await AsyncStorage.getItem('aria_tts_muted');
      setTtsMuted(muted === 'true'); // default false
      const voice = await AsyncStorage.getItem('aria_xtts_voice');
      localXttsVoiceRef.current = voice || '';
    };
    loadTtsSettings();
    // Poll every 2 s to pick up settings changes (simple solution without a React context)
    const interval = setInterval(loadTtsSettings, 2000);
    return () => clearInterval(interval);
  }, []);

  const toggleMute = useCallback(() => {
    setTtsMuted(prev => {
      const next = !prev;
      AsyncStorage.setItem('aria_tts_muted', String(next));
      // When muting, stop any running playback immediately
      if (next) audioService.stopPlayback();
      return next;
    });
  }, []);

  // Load the chat history from AsyncStorage
  const isInitialLoad = useRef(true);
  useEffect(() => {
@@ -258,12 +290,13 @@ const ChatScreen: React.FC = () => {
        });
      }

      // Play TTS audio if present
      // Play TTS audio if present — respects the device-local mute/disable
      const canPlay = ttsDeviceEnabled && !ttsMuted;
      if (message.type === 'audio' && message.payload.base64) {
        const b64 = message.payload.base64 as string;
        const refId = (message.payload.messageId as string) || '';
        audioService.playAudio(b64);
        // If a messageId was supplied: cache the audio + store the path on the message
        if (canPlay) audioService.playAudio(b64);
        // ALWAYS write the cache — the play button must still work later even while muted
        if (refId) {
          audioService.cacheAudio(b64, refId).then(audioPath => {
            if (!audioPath) return;
@@ -274,12 +307,45 @@ const ChatScreen: React.FC = () => {
        }
      }

      // XTTS PCM stream: ALWAYS build the cache, play back only when not muted
      if (message.type === ('audio_pcm' as any)) {
        const p = { ...(message.payload as any), silent: !canPlay };
        const refId = (p.messageId as string) || '';
        audioService.handlePcmChunk(p).then((audioPath: any) => {
          if (p.final && audioPath && refId) {
            setMessages(prev => prev.map(m =>
              m.messageId === refId ? { ...m, audioPath } : m
            ));
          }
        }).catch(() => {});
      }

      // Thinking-indicator status from the bridge
      if (message.type === 'agent_activity') {
        const activity = (message.payload.activity as string) || 'idle';
        const tool = (message.payload.tool as string) || '';
        setAgentActivity({ activity, tool });
      }

      // Voice config from Diagnostic — resets the local app voice to the value
      // just selected in Diagnostic. This overrides the user's choice made
      // in the app.
      if (message.type === ('config' as any)) {
        const newVoice = ((message.payload as any).xttsVoice as string) ?? '';
        localXttsVoiceRef.current = newVoice;
        AsyncStorage.setItem('aria_xtts_voice', newVoice);
      }

      // The XTTS bridge reports a voice finished loading (brief status toast)
      if (message.type === ('voice_ready' as any)) {
        const v = ((message.payload as any).voice as string) ?? '';
        const err = (message.payload as any).error as string | undefined;
        if (err) {
          ToastAndroid.show(`Stimme "${v}" Fehler: ${err}`, ToastAndroid.LONG);
        } else {
          ToastAndroid.show(`Stimme "${v || 'Standard'}" bereit`, ToastAndroid.SHORT);
        }
      }
    });

    const unsubState = rvs.onStateChange((state) => {
@@ -345,6 +411,7 @@ const ChatScreen: React.FC = () => {
        base64: result.base64,
        durationMs: result.durationMs,
        mimeType: result.mimeType,
        voice: localXttsVoiceRef.current,
        ...(location && { location }),
      });
    }
@@ -447,9 +514,10 @@ const ChatScreen: React.FC = () => {
    };
    setMessages(prev => capMessages([...prev, userMsg]));

    // Send to RVS
    // Send to RVS — with the device-local voice (the bridge uses it for the reply)
    rvs.send('chat', {
      text,
      voice: localXttsVoiceRef.current,
      ...(location && { location }),
    });
  }, [inputText, getCurrentLocation, pendingAttachments, sendPendingAttachments]);
@@ -558,6 +626,7 @@ const ChatScreen: React.FC = () => {
    if (messageText) {
      rvs.send('chat', {
        text: messageText,
        voice: localXttsVoiceRef.current,
        ...(location && { location }),
      });
    }
@@ -636,7 +705,7 @@ const ChatScreen: React.FC = () => {
          {item.text}
        </Text>
      )}
      {/* Play button for ARIA messages — prefer the cache, otherwise regenerate */}
      {/* Play button for ARIA messages — prefer the cache, otherwise bridge TTS with the current engine */}
      {!isUser && item.text.length > 0 && (
        <TouchableOpacity
          style={styles.playButton}
@@ -644,11 +713,17 @@ const ChatScreen: React.FC = () => {
            if (item.audioPath) {
              audioService.playFromPath(item.audioPath);
            } else {
              rvs.send('tts_request' as any, { text: item.text, voice: '' });
              // Include the messageId so the bridge can link the generated audio
              // back to the message (for the next replay from cache)
              rvs.send('tts_request' as any, {
                text: item.text,
                voice: localXttsVoiceRef.current,
                messageId: item.messageId || '',
              });
            }
          }}
        >
          <Text style={styles.playButtonText}>{item.audioPath ? '\uD83D\uDD0A' : '\uD83D\uDD0A'}</Text>
          <Text style={styles.playButtonText}>{'\uD83D\uDD0A'}</Text>
        </TouchableOpacity>
      )}
      <Text style={styles.timestamp}>{time}</Text>
@@ -805,6 +880,17 @@ const ChatScreen: React.FC = () => {
        disabled={connectionState !== 'connected'}
        wakeWordActive={wakeWordActive}
      />
      {/* Mouth button: mute/unmute TTS on this device.
          Only visible when TTS is enabled in the settings at all. */}
      {ttsDeviceEnabled && (
        <TouchableOpacity
          style={[styles.wakeWordBtn, ttsMuted && styles.mouthBtnMuted]}
          onPress={toggleMute}
          accessibilityLabel={ttsMuted ? 'Sprachausgabe einschalten' : 'Sprachausgabe stumm schalten'}
        >
          <Text style={styles.wakeWordIcon}>{ttsMuted ? '🤐' : '👄'}</Text>
        </TouchableOpacity>
      )}
      <TouchableOpacity
        style={[styles.wakeWordBtn, wakeWordActive && styles.wakeWordBtnActive]}
        onPress={toggleWakeWord}
@@ -1022,6 +1108,9 @@ const styles = StyleSheet.create({
  wakeWordBtnActive: {
    backgroundColor: 'rgba(52, 199, 89, 0.3)',
  },
  mouthBtnMuted: {
    backgroundColor: 'rgba(255, 59, 48, 0.25)',
  },
  wakeWordIcon: {
    fontSize: 16,
  },

@@ -15,13 +15,22 @@ import {
  StyleSheet,
  Alert,
  Platform,
  ToastAndroid,
  ActivityIndicator,
} from 'react-native';
import AsyncStorage from '@react-native-async-storage/async-storage';
import RNFS from 'react-native-fs';
import DocumentPicker from 'react-native-document-picker';
import rvs, { ConnectionState, RVSMessage, ConnectionConfig, ConnectionLogEntry } from '../services/rvs';
import {
  TTS_PREROLL_DEFAULT_SEC,
  TTS_PREROLL_MIN_SEC,
  TTS_PREROLL_MAX_SEC,
  TTS_PREROLL_STORAGE_KEY,
} from '../services/audio';
import ModeSelector from '../components/ModeSelector';
import QRScanner from '../components/QRScanner';
import VoiceCloneModal from '../components/VoiceCloneModal';

const STORAGE_PATH_KEY = 'aria_attachment_storage_path';
const DEFAULT_STORAGE_PATH = `${RNFS.DocumentDirectoryPath}/chat_attachments`;
@@ -72,11 +81,12 @@ const SettingsScreen: React.FC = () => {
  const [autoDownload, setAutoDownload] = useState(true);
  const [storageSize, setStorageSize] = useState('...');
  const [ttsEnabled, setTtsEnabled] = useState(true);
  const [defaultVoice, setDefaultVoice] = useState('ramona');
  const [highlightVoice, setHighlightVoice] = useState('thorsten');
  const [speedRamona, setSpeedRamona] = useState(1.0);
  const [speedThorsten, setSpeedThorsten] = useState(1.0);
  const [ttsPrerollSec, setTtsPrerollSec] = useState<number>(TTS_PREROLL_DEFAULT_SEC);
  const [editingPath, setEditingPath] = useState(false);
  const [xttsVoice, setXttsVoice] = useState('');
  const [loadingVoice, setLoadingVoice] = useState<string | null>(null);
  const [availableVoices, setAvailableVoices] = useState<Array<{name: string, size: number}>>([]);
  const [voiceCloneVisible, setVoiceCloneVisible] = useState(false);
  const [tempPath, setTempPath] = useState('');

  let logIdCounter = 0;
@@ -99,18 +109,19 @@ const SettingsScreen: React.FC = () => {
    AsyncStorage.getItem('aria_tts_enabled').then(saved => {
      if (saved !== null) setTtsEnabled(saved === 'true');
    });
    AsyncStorage.getItem('aria_default_voice').then(saved => {
      if (saved) setDefaultVoice(saved);
    AsyncStorage.getItem(TTS_PREROLL_STORAGE_KEY).then(saved => {
      if (saved != null) {
        const n = parseFloat(saved);
        if (isFinite(n) && n >= TTS_PREROLL_MIN_SEC && n <= TTS_PREROLL_MAX_SEC) {
          setTtsPrerollSec(n);
        }
      }
    });
    AsyncStorage.getItem('aria_highlight_voice').then(saved => {
      if (saved) setHighlightVoice(saved);
    });
    AsyncStorage.getItem('aria_speed_ramona').then(saved => {
      if (saved) setSpeedRamona(parseFloat(saved));
    });
    AsyncStorage.getItem('aria_speed_thorsten').then(saved => {
      if (saved) setSpeedThorsten(parseFloat(saved));
    AsyncStorage.getItem('aria_xtts_voice').then(saved => {
      if (saved) setXttsVoice(saved);
    });
    // Fetch the voice list from the XTTS server (via RVS)
    rvs.send('xtts_list_voices' as any, {});
  }, []);

  // Compute storage size
@@ -241,6 +252,47 @@ const SettingsScreen: React.FC = () => {
        const mode = message.payload.mode as string;
        if (mode) setCurrentMode(mode);
      }

      // XTTS voice list
      if (message.type === ('xtts_voices_list' as any)) {
        const voices = ((message.payload as any).voices || []) as Array<{name: string, size: number}>;
        setAvailableVoices(voices);
      }

      // Voice was saved → reload the list + select it if applicable
      if (message.type === ('xtts_voice_saved' as any)) {
        const name = (message.payload as any).name as string;
        if (name) {
          setXttsVoice(name);
          AsyncStorage.setItem('aria_xtts_voice', name);
        }
        rvs.send('xtts_list_voices' as any, {});
      }

      // Diagnostic voice change → reset the local app voice to the new default.
      // Also trigger a preload so the user knows when it has finished loading.
      if (message.type === ('config' as any)) {
        const newVoice = ((message.payload as any).xttsVoice as string) ?? '';
        setXttsVoice(newVoice);
        AsyncStorage.setItem('aria_xtts_voice', newVoice);
        if (newVoice) {
          setLoadingVoice(newVoice);
        }
      }

      // The XTTS bridge reports: voice finished loading
      if (message.type === ('voice_ready' as any)) {
        const v = ((message.payload as any).voice as string) ?? '';
        const err = (message.payload as any).error as string | undefined;
        const ms = (message.payload as any).loadMs as number | undefined;
        setLoadingVoice(null);
        if (err) {
          ToastAndroid.show(`Stimme "${v}" konnte nicht geladen werden: ${err}`, ToastAndroid.LONG);
        } else {
          const suffix = ms ? ` (${(ms / 1000).toFixed(1)}s)` : '';
          ToastAndroid.show(`Stimme "${v || 'Standard'}" bereit${suffix}`, ToastAndroid.SHORT);
        }
      }
    });

    return () => {
@@ -304,6 +356,43 @@ const SettingsScreen: React.FC = () => {
    // In production: persist the value in AsyncStorage
  }, []);

  // --- XTTS Voice ---

  const selectVoice = useCallback((voiceName: string) => {
    setXttsVoice(voiceName);
    AsyncStorage.setItem('aria_xtts_voice', voiceName);
    // Preload only for custom voices — "Standard" needs no load step
    if (voiceName) {
      setLoadingVoice(voiceName);
      rvs.send('voice_preload' as any, { voice: voiceName, source: 'app' });
    } else {
      setLoadingVoice(null);
    }
  }, []);

  const deleteVoice = useCallback((name: string) => {
    Alert.alert(
      'Stimme loeschen',
      `Stimme "${name}" vom Server endgueltig loeschen?\nAlle Apps verlieren sie.`,
      [
        { text: 'Abbrechen', style: 'cancel' },
        {
          text: 'Loeschen',
          style: 'destructive',
          onPress: () => {
            rvs.send('xtts_delete_voice' as any, { name });
            if (xttsVoice === name) {
              setXttsVoice('');
              AsyncStorage.setItem('aria_xtts_voice', '');
            }
            // Reload the list after a short delay (the XTTS bridge sends a fresh list anyway)
            setTimeout(() => rvs.send('xtts_list_voices' as any, {}), 500);
          },
        },
      ],
    );
  }, [xttsVoice]);

  // --- Change mode ---

  const handleModeChange = useCallback((modeId: string) => {
@@ -337,6 +426,10 @@ const SettingsScreen: React.FC = () => {
        onScan={handleQRScan}
        onClose={() => setScannerVisible(false)}
      />
      <VoiceCloneModal
        visible={voiceCloneVisible}
        onClose={() => setVoiceCloneVisible(false)}
      />
      <ScrollView style={styles.container} contentContainerStyle={styles.content}>

        {/* === Connection === */}
@@ -462,131 +555,125 @@ const SettingsScreen: React.FC = () => {
          </View>
        </View>

        {/* === Speech output === */}
        {/* === Speech output (device-local) === */}
        <Text style={styles.sectionTitle}>Sprachausgabe</Text>
        <View style={styles.card}>
          {/* TTS on/off */}
          <View style={styles.toggleRow}>
            <View style={styles.toggleInfo}>
              <Text style={styles.toggleLabel}>Sprachausgabe</Text>
              <Text style={styles.toggleHint}>ARIA antwortet per Sprache (TTS)</Text>
              <Text style={styles.toggleLabel}>Sprachausgabe auf diesem Geraet</Text>
              <Text style={styles.toggleHint}>
                Nur lokal — andere Geraete sind unabhaengig.
                Wenn aus, erscheint im Chat auch kein Mund-Button.
              </Text>
            </View>
            <Switch
              value={ttsEnabled}
              onValueChange={(val) => {
                setTtsEnabled(val);
                AsyncStorage.setItem('aria_tts_enabled', String(val));
                rvs.send('config' as any, { ttsEnabled: val });
              }}
              trackColor={{ false: '#2A2A3E', true: '#0096FF' }}
              thumbColor={ttsEnabled ? '#FFFFFF' : '#666680'}
            />
          </View>

          {/* Default voice */}
          <View style={{marginTop: 16}}>
            <Text style={styles.toggleLabel}>Standard-Stimme</Text>
            <Text style={styles.toggleHint}>Fuer normale Antworten und Gespraeche</Text>
            <View style={{flexDirection: 'row', gap: 8, marginTop: 8}}>
              <TouchableOpacity
                style={[styles.voiceBtn, defaultVoice === 'ramona' && styles.voiceBtnActive]}
                onPress={() => { setDefaultVoice('ramona'); AsyncStorage.setItem('aria_default_voice', 'ramona'); rvs.send('config' as any, { defaultVoice: 'ramona' }); }}
              >
                <Text style={styles.voiceBtnIcon}>{'\uD83D\uDE4E\u200D\u2640\uFE0F'}</Text>
                <Text style={[styles.voiceBtnText, defaultVoice === 'ramona' && styles.voiceBtnTextActive]}>Ramona</Text>
                <Text style={styles.voiceBtnHint}>Weiblich, warm</Text>
              </TouchableOpacity>
              <TouchableOpacity
                style={[styles.voiceBtn, defaultVoice === 'thorsten' && styles.voiceBtnActive]}
                onPress={() => { setDefaultVoice('thorsten'); AsyncStorage.setItem('aria_default_voice', 'thorsten'); rvs.send('config' as any, { defaultVoice: 'thorsten' }); }}
              >
                <Text style={styles.voiceBtnIcon}>{'\uD83E\uDDD4'}</Text>
                <Text style={[styles.voiceBtnText, defaultVoice === 'thorsten' && styles.voiceBtnTextActive]}>Thorsten</Text>
                <Text style={styles.voiceBtnHint}>Maennlich, tief</Text>
              </TouchableOpacity>
            </View>
          </View>

          {/* Highlight voice */}
          <View style={{marginTop: 16}}>
            <Text style={styles.toggleLabel}>Highlight-Stimme</Text>
            <Text style={styles.toggleHint}>Fuer besondere Ereignisse (Deploy, Alarm, Erfolg)</Text>
            <View style={{flexDirection: 'row', gap: 8, marginTop: 8}}>
              <TouchableOpacity
                style={[styles.voiceBtn, highlightVoice === 'thorsten' && styles.voiceBtnActive]}
                onPress={() => { setHighlightVoice('thorsten'); AsyncStorage.setItem('aria_highlight_voice', 'thorsten'); rvs.send('config' as any, { highlightVoice: 'thorsten' }); }}
              >
                <Text style={styles.voiceBtnIcon}>{'\uD83E\uDDD4'}</Text>
                <Text style={[styles.voiceBtnText, highlightVoice === 'thorsten' && styles.voiceBtnTextActive]}>Thorsten</Text>
              </TouchableOpacity>
              <TouchableOpacity
                style={[styles.voiceBtn, highlightVoice === 'ramona' && styles.voiceBtnActive]}
                onPress={() => { setHighlightVoice('ramona'); AsyncStorage.setItem('aria_highlight_voice', 'ramona'); rvs.send('config' as any, { highlightVoice: 'ramona' }); }}
              >
                <Text style={styles.voiceBtnIcon}>{'\uD83D\uDE4E\u200D\u2640\uFE0F'}</Text>
                <Text style={[styles.voiceBtnText, highlightVoice === 'ramona' && styles.voiceBtnTextActive]}>Ramona</Text>
              </TouchableOpacity>
            </View>
          </View>

          {/* Speaking rate Ramona */}
          <View style={{marginTop: 16}}>
            <Text style={styles.toggleLabel}>Ramona Speed: {speedRamona.toFixed(1)}x</Text>
            <View style={{flexDirection: 'row', justifyContent: 'space-around', marginTop: 8}}>
              {[0.5, 0.75, 1.0, 1.25, 1.5, 2.0].map(speed => (
          {ttsEnabled && (
            <View style={{marginTop: 20}}>
              <Text style={styles.toggleLabel}>Puffer vor Wiedergabestart</Text>
              <Text style={styles.toggleHint}>
                Wie viel Audio gesammelt wird bevor die Wiedergabe startet.
                Hoeher = robuster gegen Render-Pausen, aber mehr Startverzoegerung.
                Default: {TTS_PREROLL_DEFAULT_SEC.toFixed(1)}s.
              </Text>
              <View style={styles.prerollRow}>
                <TouchableOpacity
                  key={speed}
                  style={styles.prerollButton}
                  onPress={() => {
                    setSpeedRamona(speed);
                    AsyncStorage.setItem('aria_speed_ramona', String(speed));
                    rvs.send('config' as any, { speedRamona: speed });
                  }}
                  style={{
                    paddingHorizontal: 10, paddingVertical: 6, borderRadius: 6,
                    backgroundColor: speedRamona === speed ? '#0096FF' : '#1E1E2E',
                    const next = Math.max(TTS_PREROLL_MIN_SEC, Math.round((ttsPrerollSec - 0.5) * 10) / 10);
                    setTtsPrerollSec(next);
                    AsyncStorage.setItem(TTS_PREROLL_STORAGE_KEY, String(next));
                  }}
                  disabled={ttsPrerollSec <= TTS_PREROLL_MIN_SEC}
                >
                  <Text style={{color: speedRamona === speed ? '#fff' : '#8888AA', fontSize: 12, fontWeight: '600'}}>
                    {speed}x
                  </Text>
                  <Text style={styles.prerollButtonText}>−0.5</Text>
                </TouchableOpacity>
              ))}
            </View>
          </View>

          {/* Speaking rate Thorsten */}
          <View style={{marginTop: 16}}>
            <Text style={styles.toggleLabel}>Thorsten Speed: {speedThorsten.toFixed(1)}x</Text>
            <View style={{flexDirection: 'row', justifyContent: 'space-around', marginTop: 8}}>
              {[0.5, 0.75, 1.0, 1.25, 1.5, 2.0].map(speed => (
                <Text style={styles.prerollValue}>{ttsPrerollSec.toFixed(1)} s</Text>
                <TouchableOpacity
                  key={speed}
                  style={styles.prerollButton}
                  onPress={() => {
                    setSpeedThorsten(speed);
                    AsyncStorage.setItem('aria_speed_thorsten', String(speed));
                    rvs.send('config' as any, { speedThorsten: speed });
                  }}
                  style={{
                    paddingHorizontal: 10, paddingVertical: 6, borderRadius: 6,
                    backgroundColor: speedThorsten === speed ? '#0096FF' : '#1E1E2E',
                    const next = Math.min(TTS_PREROLL_MAX_SEC, Math.round((ttsPrerollSec + 0.5) * 10) / 10);
                    setTtsPrerollSec(next);
                    AsyncStorage.setItem(TTS_PREROLL_STORAGE_KEY, String(next));
                  }}
                  disabled={ttsPrerollSec >= TTS_PREROLL_MAX_SEC}
                >
                  <Text style={{color: speedThorsten === speed ? '#fff' : '#8888AA', fontSize: 12, fontWeight: '600'}}>
                    {speed}x
                  </Text>
                  <Text style={styles.prerollButtonText}>+0.5</Text>
                </TouchableOpacity>
              ))}
              </View>
            </View>
          </View>
          )}

          {/* Highlight trigger info */}
          <View style={{marginTop: 16, padding: 10, backgroundColor: '#1E1E2E', borderRadius: 8}}>
            <Text style={styles.toggleLabel}>{'\u26A1'} Highlight-Trigger</Text>
            <Text style={[styles.toggleHint, {marginTop: 4}]}>
              Die Highlight-Stimme wird automatisch bei diesen Woertern verwendet:{'\n'}
              deploy, erfolgreich, alarm, so soll es sein, kritisch, server down, sicherheitswarnung, ticket geloest, aufgabe abgeschlossen
            </Text>
          </View>
          {ttsEnabled && (
            <View style={{marginTop: 20}}>
              <Text style={styles.toggleLabel}>Stimme (geraetelokal)</Text>
              <Text style={styles.toggleHint}>
                Eigene Wahl fuer dieses Geraet. Ohne Auswahl gilt der Diagnostic-Default.
              </Text>

              {/* Default option */}
              <TouchableOpacity
                style={[styles.voiceRow, xttsVoice === '' && styles.voiceRowActive]}
                onPress={() => selectVoice('')}
              >
                <Text style={[styles.voiceRowName, xttsVoice === '' && styles.voiceRowNameActive]}>
                  Standard (Diagnostic-Default)
                </Text>
                {xttsVoice === '' && <Text style={styles.voiceRowCheck}>{'\u2713'}</Text>}
              </TouchableOpacity>

              {availableVoices.length === 0 ? (
                <Text style={[styles.toggleHint, {marginTop: 8, textAlign: 'center'}]}>
                  Keine eigenen Stimmen auf dem XTTS-Server.
                </Text>
              ) : (
                availableVoices.map(v => (
                  <View key={v.name} style={[styles.voiceRow, xttsVoice === v.name && styles.voiceRowActive]}>
                    <TouchableOpacity
                      style={{flex: 1}}
                      onPress={() => selectVoice(v.name)}
                    >
                      <Text style={[styles.voiceRowName, xttsVoice === v.name && styles.voiceRowNameActive]}>
                        {v.name}
                      </Text>
                      <Text style={styles.voiceRowMeta}>{(v.size / 1024).toFixed(0)} KB</Text>
                    </TouchableOpacity>
                    {loadingVoice === v.name && (
                      <ActivityIndicator size="small" color="#0096FF" style={{marginRight: 8}} />
                    )}
                    {xttsVoice === v.name && loadingVoice !== v.name && <Text style={styles.voiceRowCheck}>{'\u2713'}</Text>}
                    <TouchableOpacity onPress={() => deleteVoice(v.name)} style={styles.voiceRowDelete}>
                      <Text style={styles.voiceRowDeleteIcon}>X</Text>
                    </TouchableOpacity>
                  </View>
                ))
              )}

              <View style={{flexDirection: 'row', gap: 8, marginTop: 12}}>
                <TouchableOpacity
                  style={[styles.connectButton, {flex: 1}]}
                  onPress={() => setVoiceCloneVisible(true)}
                >
                  <Text style={styles.connectButtonText}>{'\uD83C\uDFA4'} Eigene Stimme aufnehmen</Text>
                </TouchableOpacity>
                <TouchableOpacity
                  style={[styles.clearButton, {flex: 0.4, marginTop: 0}]}
                  onPress={() => rvs.send('xtts_list_voices' as any, {})}
                >
                  <Text style={styles.clearButtonText}>Aktualisieren</Text>
                </TouchableOpacity>
              </View>
            </View>
          )}
        </View>

        {/* === Storage === */}
@@ -901,6 +988,55 @@ const styles = StyleSheet.create({
    marginTop: 2,
  },

  // XTTS voice list
  voiceRow: {
    flexDirection: 'row',
    alignItems: 'center',
    backgroundColor: '#1E1E2E',
    borderRadius: 8,
    padding: 10,
    marginTop: 6,
    borderWidth: 1,
    borderColor: 'transparent',
  },
  voiceRowActive: {
    borderColor: '#0096FF',
    backgroundColor: '#0D1A2E',
  },
  voiceRowName: {
    color: '#CCCCDD',
    fontSize: 14,
    fontWeight: '500',
  },
  voiceRowNameActive: {
    color: '#FFFFFF',
  },
  voiceRowMeta: {
    color: '#666680',
    fontSize: 11,
    marginTop: 2,
  },
  voiceRowCheck: {
    color: '#34C759',
    fontSize: 16,
    fontWeight: '700',
    marginHorizontal: 6,
  },
  voiceRowDelete: {
    width: 28,
    height: 28,
    borderRadius: 14,
    backgroundColor: 'rgba(255,59,48,0.2)',
    alignItems: 'center',
    justifyContent: 'center',
    marginLeft: 4,
  },
  voiceRowDeleteIcon: {
    color: '#FF3B30',
    fontSize: 12,
    fontWeight: '700',
  },

  // Voices
  voiceBtn: {
    flex: 1,
@@ -1071,6 +1207,34 @@ const styles = StyleSheet.create({
  bottomSpacer: {
    height: 40,
  },

  prerollRow: {
    flexDirection: 'row',
    alignItems: 'center',
    justifyContent: 'center',
    marginTop: 12,
    gap: 16,
  },
  prerollButton: {
    backgroundColor: '#2A2A3E',
    paddingHorizontal: 18,
    paddingVertical: 10,
    borderRadius: 8,
    minWidth: 72,
    alignItems: 'center',
  },
  prerollButtonText: {
    color: '#FFFFFF',
    fontSize: 16,
    fontWeight: '600',
  },
  prerollValue: {
    color: '#FFFFFF',
    fontSize: 20,
    fontWeight: '700',
    minWidth: 80,
    textAlign: 'center',
  },
});

export default SettingsScreen;

@@ -9,6 +9,7 @@
|
||||
import { Platform, PermissionsAndroid, NativeModules } from 'react-native';
|
||||
import Sound from 'react-native-sound';
|
||||
import RNFS from 'react-native-fs';
|
||||
import AsyncStorage from '@react-native-async-storage/async-storage';
|
||||
import AudioRecorderPlayer, {
|
||||
AudioEncoderAndroidType,
|
||||
AudioSourceAndroidType,
|
||||
@@ -16,13 +17,36 @@ import AudioRecorderPlayer, {
|
||||
OutputFormatAndroidType,
|
||||
} from 'react-native-audio-recorder-player';
|
||||
|
||||
// Base64-Encoder fuer Binary-Strings (Header-Bytes → Base64)
|
||||
const B64_CHARS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
|
||||
function btoaSafe(bin: string): string {
|
||||
let out = '';
|
||||
const len = bin.length;
|
||||
for (let i = 0; i < len; i += 3) {
|
||||
const b1 = bin.charCodeAt(i) & 0xff;
|
||||
const b2 = i + 1 < len ? bin.charCodeAt(i + 1) & 0xff : 0;
|
||||
const b3 = i + 2 < len ? bin.charCodeAt(i + 2) & 0xff : 0;
|
||||
out += B64_CHARS[b1 >> 2];
|
||||
out += B64_CHARS[((b1 & 0x03) << 4) | (b2 >> 4)];
|
||||
out += i + 1 < len ? B64_CHARS[((b2 & 0x0f) << 2) | (b3 >> 6)] : '=';
|
||||
out += i + 2 < len ? B64_CHARS[b3 & 0x3f] : '=';
|
||||
}
|
||||
return out;
|
||||
}
|
||||
|
||||
// Native Module fuer Audio-Focus (Ducking/Muten anderer Apps)
|
||||
const { AudioFocus } = NativeModules as {
|
||||
const { AudioFocus, PcmStreamPlayer } = NativeModules as {
|
||||
AudioFocus?: {
|
||||
requestDuck: () => Promise<boolean>;
|
||||
requestExclusive: () => Promise<boolean>;
|
||||
release: () => Promise<boolean>;
|
||||
};
|
||||
PcmStreamPlayer?: {
|
||||
start: (sampleRate: number, channels: number, prerollSeconds: number) => Promise<boolean>;
|
||||
writeChunk: (base64Pcm: string) => Promise<boolean>;
|
||||
end: () => Promise<boolean>;
|
||||
stop: () => Promise<boolean>;
|
||||
};
|
||||
};
|
||||
|
||||
// --- Typen ---
|
||||
@@ -57,6 +81,26 @@ const VAD_SPEECH_MIN_MS = 500; // ms Sprache bevor Aufnahme zaehlt — l
|
||||
// Max-Dauer einer Aufnahme in Gespraechsmodus (Notbremse gegen Runaway-Loops)
|
||||
const MAX_RECORDING_MS = 30000;
|
||||
|
||||
// Pre-Roll: Wie lange Audio im AudioTrack-Buffer liegt bevor play() startet.
|
||||
// Einstellbar via Diagnostic/Settings (Key: aria_tts_preroll_sec).
|
||||
export const TTS_PREROLL_DEFAULT_SEC = 3.5;
|
||||
export const TTS_PREROLL_MIN_SEC = 1.0;
|
||||
export const TTS_PREROLL_MAX_SEC = 6.0;
|
||||
export const TTS_PREROLL_STORAGE_KEY = 'aria_tts_preroll_sec';
|
||||
|
||||
async function loadPrerollSec(): Promise<number> {
|
||||
try {
|
||||
const raw = await AsyncStorage.getItem(TTS_PREROLL_STORAGE_KEY);
|
||||
if (raw != null) {
|
||||
const n = parseFloat(raw);
|
||||
if (isFinite(n) && n >= TTS_PREROLL_MIN_SEC && n <= TTS_PREROLL_MAX_SEC) {
|
||||
return n;
|
||||
}
|
||||
}
|
||||
} catch {}
|
||||
return TTS_PREROLL_DEFAULT_SEC;
|
||||
}
|
||||
|
||||
// --- Audio-Service ---
|
||||
|
||||
class AudioService {
@@ -79,6 +123,15 @@ class AudioService {
  private speechDetected: boolean = false;
  private speechStartTime: number = 0;

  // PCM stream (XTTS): active session + cache buffer per messageId
  private pcmStreamActive: boolean = false;
  private pcmMessageId: string = '';
  private pcmSampleRate: number = 24000;
  private pcmChannels: number = 1;
  private pcmBuffer: string[] = []; // base64 chunks for building a WAV later
  private pcmBytesCollected: number = 0;
  private readonly PCM_MAX_CACHE_BYTES = 30 * 1024 * 1024; // 30MB

  // VAD state
  private vadEnabled: boolean = false;
  private lastSpeechTime: number = 0;
@@ -303,6 +356,144 @@ class AudioService {
    }
  }

  /** Receive one PCM chunk from an audio_pcm message.
   * silent=true → cache only, do not play (e.g. when TTS is muted on this device).
   * On final=true returns the cache path (file://) or '' if nothing was cached. */
  async handlePcmChunk(payload: {
    base64: string;
    sampleRate?: number;
    channels?: number;
    messageId?: string;
    chunk?: number;
    final?: boolean;
    silent?: boolean;
  }): Promise<string> {
    const silent = !!payload.silent;
    if (!silent && !PcmStreamPlayer) {
      console.warn('[Audio] PcmStreamPlayer Native Module nicht verfuegbar');
      return '';
    }

    const messageId = payload.messageId || '';
    const sampleRate = payload.sampleRate || 24000;
    const channels = payload.channels || 1;
    const base64 = payload.base64 || '';
    const isFinal = !!payload.final;

    // New stream? (messageId changed or no stream active)
    if (!this.pcmStreamActive || this.pcmMessageId !== messageId) {
      if (this.pcmStreamActive && !silent) {
        try { await PcmStreamPlayer!.stop(); } catch {}
        this.pcmBuffer = [];
        this.pcmBytesCollected = 0;
      }
      this.pcmStreamActive = true;
      this.pcmMessageId = messageId;
      this.pcmSampleRate = sampleRate;
      this.pcmChannels = channels;
      this.pcmBuffer = [];
      this.pcmBytesCollected = 0;
      if (!silent) {
        const prerollSec = await loadPrerollSec();
        try {
          await PcmStreamPlayer!.start(sampleRate, channels, prerollSec);
        } catch (err) {
          console.error('[Audio] PcmStreamPlayer.start fehlgeschlagen:', err);
          this.pcmStreamActive = false;
          return '';
        }
        AudioFocus?.requestDuck().catch(() => {});
      }
    }

    // Chunk — always cache, only play when !silent
    if (base64) {
      if (!silent) {
        try { await PcmStreamPlayer!.writeChunk(base64); } catch (err) { console.warn('[Audio] writeChunk', err); }
      }
      if (messageId && this.pcmBytesCollected < this.PCM_MAX_CACHE_BYTES) {
        this.pcmBuffer.push(base64);
        this.pcmBytesCollected += Math.floor(base64.length * 0.75);
      }
    }

    if (isFinal) {
      if (!silent) {
        try { await PcmStreamPlayer!.end(); } catch {}
        AudioFocus?.release().catch(() => {});
      }
      this.pcmStreamActive = false;

      if (messageId && this.pcmBuffer.length > 0) {
        const audioPath = await this._savePcmBufferAsWav(messageId);
        this.pcmBuffer = [];
        this.pcmBytesCollected = 0;
        this.pcmMessageId = '';
        return audioPath;
      }
      this.pcmMessageId = '';
    }
    return '';
  }

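The cache accounting above (`Math.floor(base64.length * 0.75)`) derives the decoded byte count from the base64 string length, since 4 base64 characters encode 3 raw bytes. A quick Python check of that relation — exact when the chunk length is a multiple of 3, otherwise `=` padding makes the estimate overshoot by up to two bytes:

```python
import base64

def decoded_size_estimate(b64: str) -> int:
    # Same arithmetic as the TypeScript bookkeeping: floor(len * 3/4)
    return int(len(b64) * 0.75)

raw = b"\x00" * 3000                   # e.g. 3000 bytes of raw PCM
b64 = base64.b64encode(raw).decode()   # 4000 base64 characters, no padding
assert decoded_size_estimate(b64) == len(raw)

padded = base64.b64encode(b"\x00" * 3001).decode()  # length not divisible by 3 → padded
assert decoded_size_estimate(padded) == 3003        # estimate overshoots by 2 bytes
```

The overshoot is harmless here: the counter is only used as a soft cap against the 30 MB cache limit, not as the exact WAV data size on disk.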
  /** Save the collected PCM chunks as a WAV file. Returns a file:// path. */
  private async _savePcmBufferAsWav(messageId: string): Promise<string> {
    try {
      const dir = `${RNFS.DocumentDirectoryPath}/tts_cache`;
      await RNFS.mkdir(dir).catch(() => {});
      const path = `${dir}/${messageId}.wav`;

      // WAV header for PCM s16le
      const sampleRate = this.pcmSampleRate;
      const channels = this.pcmChannels;
      const bitsPerSample = 16;
      const byteRate = sampleRate * channels * bitsPerSample / 8;
      const blockAlign = channels * bitsPerSample / 8;
      const dataSize = this.pcmBytesCollected;
      const fileSize = 36 + dataSize;

      // Header (44 bytes), encoded as base64 below
      const header = new Uint8Array(44);
      const dv = new DataView(header.buffer);
      // "RIFF"
      header[0] = 0x52; header[1] = 0x49; header[2] = 0x46; header[3] = 0x46;
      dv.setUint32(4, fileSize, true);
      // "WAVE"
      header[8] = 0x57; header[9] = 0x41; header[10] = 0x56; header[11] = 0x45;
      // "fmt "
      header[12] = 0x66; header[13] = 0x6d; header[14] = 0x74; header[15] = 0x20;
      dv.setUint32(16, 16, true); // fmt chunk size
      dv.setUint16(20, 1, true); // PCM format
      dv.setUint16(22, channels, true);
      dv.setUint32(24, sampleRate, true);
      dv.setUint32(28, byteRate, true);
      dv.setUint16(32, blockAlign, true);
      dv.setUint16(34, bitsPerSample, true);
      // "data"
      header[36] = 0x64; header[37] = 0x61; header[38] = 0x74; header[39] = 0x61;
      dv.setUint32(40, dataSize, true);

      // Header as base64
      let headerB64 = '';
      const chunk = 1024;
      for (let i = 0; i < header.length; i += chunk) {
        headerB64 += String.fromCharCode(...Array.from(header.slice(i, i + chunk)));
      }
      headerB64 = btoaSafe(headerB64);

      // Write the file: header + all PCM chunks
      await RNFS.writeFile(path, headerB64, 'base64');
      for (const b64 of this.pcmBuffer) {
        await RNFS.appendFile(path, b64, 'base64');
      }
      console.log(`[Audio] PCM-Cache geschrieben: ${path} (${(dataSize / 1024).toFixed(0)}KB, ${this.pcmBuffer.length} chunks)`);
      return `file://${path}`;
    } catch (err) {
      console.warn('[Audio] _savePcmBufferAsWav fehlgeschlagen:', err);
      return '';
    }
  }

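The 44-byte header assembled byte-by-byte above follows the classic RIFF/WAVE layout: the RIFF size field, a 16-byte `fmt ` chunk with PCM format tag 1, and a `data` chunk header. As a cross-check, the same header can be built in Python with `struct` and validated with the stdlib `wave` reader (sample rate and payload here are illustrative placeholders):

```python
import io
import struct
import wave

def wav_header(data_size: int, sample_rate: int = 24000,
               channels: int = 1, bits: int = 16) -> bytes:
    """Same layout as the DataView code above: RIFF/WAVE, 16-byte fmt chunk, PCM tag 1."""
    byte_rate = sample_rate * channels * bits // 8
    block_align = channels * bits // 8
    return struct.pack(
        "<4sI4s4sIHHIIHH4sI",
        b"RIFF", 36 + data_size, b"WAVE",
        b"fmt ", 16, 1, channels, sample_rate, byte_rate, block_align, bits,
        b"data", data_size,
    )

pcm = b"\x00\x00" * 240  # 10 ms of s16le mono silence at 24 kHz
buf = io.BytesIO(wav_header(len(pcm)) + pcm)
with wave.open(buf, "rb") as wf:
    assert wf.getframerate() == 24000
    assert wf.getnframes() == 240
```

One caveat carried over from the TypeScript: `dataSize` comes from the base64 length estimate, so the header's declared data size can differ from the appended bytes by a couple of bytes per padded chunk; most players tolerate that.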
  /** Queue audio from a local file (file:// path) and play it. */
  async playFromPath(filePath: string): Promise<void> {
    if (!filePath) return;
@@ -419,6 +610,14 @@ class AudioService {
      if (this.preloadedPath) RNFS.unlink(this.preloadedPath).catch(() => {});
      this.preloadedPath = '';
    }
    // Hard-stop the PCM stream as well (cancel/abort)
    if (this.pcmStreamActive) {
      PcmStreamPlayer?.stop().catch(() => {});
      this.pcmStreamActive = false;
      this.pcmBuffer = [];
      this.pcmBytesCollected = 0;
      this.pcmMessageId = '';
    }
    // Release audio focus
    AudioFocus?.release().catch(() => {});
  }

@@ -3,10 +3,6 @@
# → localhost is aria-core
ARIA_CORE_WS=ws://127.0.0.1:18789

# Piper TTS voices
PIPER_RAMONA=/voices/de_DE-ramona-low.onnx
PIPER_THORSTEN=/voices/de_DE-thorsten-high.onnx

# Wake word
WAKE_WORD=aria

+1 -1
@@ -1,6 +1,6 @@
# ════════════════════════════════════════════════
# ARIA Voice Bridge — Dockerfile
# Whisper STT + Piper TTS + Wake-Word
# Whisper STT + Wake-Word (TTS via XTTS v2 remote)
# ════════════════════════════════════════════════

FROM python:3.12-slim

+365 -392
@@ -26,7 +26,6 @@ import ssl
import sys
import tempfile
import uuid
import wave
from pathlib import Path
from typing import Optional

@@ -37,10 +36,8 @@ import sounddevice as sd
import websockets
from faster_whisper import WhisperModel
from openwakeword.model import Model as WakeWordModel
from piper import PiperVoice
from piper.config import SynthesisConfig

from modes import Mode, detect_mode_switch, should_speak
from modes import Mode, canonical_id, detect_mode_switch, mode_from_id, should_speak

# ── Logging ──────────────────────────────────────────────────

@@ -72,38 +69,6 @@ CHANNELS = 1
BLOCK_SIZE = 1280  # 80 ms at 16 kHz — good for wake-word detection
RECORD_SECONDS = 8  # max. recording duration after the wake word

# Epic triggers — Thorsten speaks when one of these words appears
EPIC_TRIGGERS_DEFAULT = [
    "deploy",
    "erfolgreich",
    "alarm",
    "so soll es sein",
    "kritisch",
    "server down",
    "sicherheitswarnung",
    "ticket geloest",
    "aufgabe abgeschlossen",
]

# Load triggers from the shared config (saved by Diagnostic)
TRIGGERS_FILE = "/shared/config/highlight_triggers.json"

def load_epic_triggers():
    """Loads highlight triggers from the shared config or falls back to the defaults."""
    try:
        if os.path.exists(TRIGGERS_FILE):
            with open(TRIGGERS_FILE) as f:
                triggers = json.load(f)
            if isinstance(triggers, list) and len(triggers) > 0:
                logger.info("Highlight-Trigger geladen: %d aus %s", len(triggers), TRIGGERS_FILE)
                return triggers
    except Exception as e:
        logger.warning("Highlight-Trigger laden fehlgeschlagen: %s — nutze Defaults", e)
    return EPIC_TRIGGERS_DEFAULT

EPIC_TRIGGERS = load_epic_triggers()


def load_config() -> dict[str, str]:
    """Loads the configuration.

@@ -145,6 +110,55 @@ def load_config() -> dict[str, str]:

import re as _re_tts

_NUM_WORDS_DE = {
    0: "null", 1: "eins", 2: "zwei", 3: "drei", 4: "vier", 5: "fuenf",
    6: "sechs", 7: "sieben", 8: "acht", 9: "neun", 10: "zehn",
    11: "elf", 12: "zwoelf", 13: "dreizehn", 14: "vierzehn", 15: "fuenfzehn",
    16: "sechzehn", 17: "siebzehn", 18: "achtzehn", 19: "neunzehn", 20: "zwanzig",
}
_TENS_DE = {30: "dreissig", 40: "vierzig", 50: "fuenfzig"}


def _num_to_words_de(n: int) -> str:
    """Numbers 0-59 as German words — for times of day and small ranges."""
    if n in _NUM_WORDS_DE:
        return _NUM_WORDS_DE[n]
    if 21 <= n <= 29:
        return f"{_NUM_WORDS_DE[n - 20]}undzwanzig"
    if 30 <= n <= 59:
        tens = (n // 10) * 10
        ones = n % 10
        tens_word = _TENS_DE.get(tens, str(tens))
        if ones == 0:
            return tens_word
        return f"{_NUM_WORDS_DE.get(ones, str(ones))}und{tens_word}"
    return str(n)


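For illustration, the compound rule above (ones word + "und" + tens word for 21-59) can be exercised standalone; the tables and branch structure below restate the function verbatim so the examples can actually run:

```python
NUM_WORDS_DE = {
    0: "null", 1: "eins", 2: "zwei", 3: "drei", 4: "vier", 5: "fuenf",
    6: "sechs", 7: "sieben", 8: "acht", 9: "neun", 10: "zehn",
    11: "elf", 12: "zwoelf", 13: "dreizehn", 14: "vierzehn", 15: "fuenfzehn",
    16: "sechzehn", 17: "siebzehn", 18: "achtzehn", 19: "neunzehn", 20: "zwanzig",
}
TENS_DE = {30: "dreissig", 40: "vierzig", 50: "fuenfzig"}

def num_to_words_de(n: int) -> str:
    # Same branch structure as _num_to_words_de above
    if n in NUM_WORDS_DE:
        return NUM_WORDS_DE[n]
    if 21 <= n <= 29:
        return f"{NUM_WORDS_DE[n - 20]}undzwanzig"
    if 30 <= n <= 59:
        tens, ones = (n // 10) * 10, n % 10
        tens_word = TENS_DE.get(tens, str(tens))
        return tens_word if ones == 0 else f"{NUM_WORDS_DE.get(ones, str(ones))}und{tens_word}"
    return str(n)

assert num_to_words_de(8) == "acht"
assert num_to_words_de(30) == "dreissig"
assert num_to_words_de(42) == "zweiundvierzig"
assert num_to_words_de(99) == "99"   # out of range → digits kept as-is
```

One observable quirk: the 21-29 branch reuses the ones word unchanged, so 21 comes out as "einsundzwanzig" rather than the standard "einundzwanzig" — for TTS this may be acceptable, but it is worth knowing.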
def _time_range_to_words(m):
    """'8:00-9:00 Uhr' → 'acht bis neun Uhr', '8-9 Uhr' → 'acht bis neun Uhr'."""
    h1 = int(m.group(1))
    h2 = int(m.group(3))
    return f"{_num_to_words_de(h1)} bis {_num_to_words_de(h2)} Uhr"


def _small_range_to_words(m):
    """'5-6' → 'fuenf bis sechs' (only if both numbers are ≤ 24)."""
    a, b = int(m.group(1)), int(m.group(2))
    if a > 24 or b > 24 or a >= b:
        return m.group(0)
    return f"{_num_to_words_de(a)} bis {_num_to_words_de(b)}"


def _decimal_to_words(m):
    """'0.1' / '0,1' → 'null komma eins', '1,25' → 'eins komma zwei fuenf'."""
    int_part = int(m.group(1))
    dec_part = m.group(2)
    int_word = _num_to_words_de(int_part) if 0 <= int_part <= 59 else str(int_part)
    dec_words = " ".join(_num_to_words_de(int(d)) for d in dec_part)
    return f"{int_word} komma {dec_words}"


_UNIT_WORDS = [
    (r'\bTB\b', 'Terabyte'),
    (r'\bGB\b', 'Gigabyte'),
@@ -215,6 +229,27 @@ def clean_text_for_tts(text: str) -> str:
    t = _re_tts.sub(r'^>\s*', '', t, flags=_re_tts.MULTILINE)
    t = _re_tts.sub(r'^[\-\*]\s+', '', t, flags=_re_tts.MULTILINE)

    # Time ranges: "8:00-9:00 Uhr" / "8-9 Uhr" → "acht bis neun Uhr"
    t = _re_tts.sub(r'\b(\d{1,2})(:\d{2})?\s*[-–]\s*(\d{1,2})(:\d{2})?\s*Uhr\b', _time_range_to_words, t)
    # Times with minutes: "8:30 Uhr" → "acht Uhr dreissig", "8:00 Uhr" → "acht Uhr"
    def _single_time(m):
        h = int(m.group(1))
        mn = int(m.group(2)) if m.group(2) else 0
        words = _num_to_words_de(h) + " Uhr"
        if mn > 0:
            words += " " + _num_to_words_de(mn)
        return words
    t = _re_tts.sub(r'\b(\d{1,2}):(\d{2})\s*Uhr\b', _single_time, t)
    # Full hours without ":" — "15 Uhr" → "fuenfzehn Uhr"
    t = _re_tts.sub(r'\b(\d{1,2})\s+Uhr\b', lambda m: f"{_num_to_words_de(int(m.group(1)))} Uhr", t)
    # Small numeric ranges without "Uhr": "5-6" → "fuenf bis sechs"
    t = _re_tts.sub(r'\b(\d{1,2})\s*[-–]\s*(\d{1,2})\b', _small_range_to_words, t)

    # Decimals: "0.1" / "0,5" / "1,25" → "null komma eins" / "null komma fuenf" / ...
    # Must run before "number+unit", otherwise the unit rule eats the fractional part.
    # The lookahead avoids matching IP-like strings such as 192.168.1.1.
    t = _re_tts.sub(r'\b(\d+)[.,](\d+)(?![.,\d])', _decimal_to_words, t)

    # Number + unit: "22GB" → "22 Gigabyte" (insert a space)
    t = _re_tts.sub(r'(\d+)([A-Za-z]{1,4})\b', r'\1 \2', t)

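The `_single_time` replacement above can be tried in isolation. This sketch reimplements just the `H:MM Uhr` rule with a stub word table (the real code uses the full `_num_to_words_de`, and runs the time-range rule first):

```python
import re

WORDS = {8: "acht", 9: "neun", 15: "fuenfzehn", 30: "dreissig"}

def single_time(m: re.Match) -> str:
    # Mirrors _single_time: hour word + " Uhr" + minutes word only if nonzero
    h, mn = int(m.group(1)), int(m.group(2))
    out = WORDS.get(h, str(h)) + " Uhr"
    if mn > 0:
        out += " " + WORDS.get(mn, str(mn))
    return out

text = "Meeting um 8:30 Uhr, Review um 15:00 Uhr"
print(re.sub(r'\b(\d{1,2}):(\d{2})\s*Uhr\b', single_time, text))
# → Meeting um acht Uhr dreissig, Review um fuenfzehn Uhr
```

Dropping the minutes for `:00` is what keeps "8:00 Uhr" from being read as "acht Uhr null".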
@@ -222,6 +257,12 @@ def clean_text_for_tts(text: str) -> str:
    for pat, repl in _UNIT_WORDS:
        t = _re_tts.sub(pat, repl, t)

    # Generic spelling-out: all remaining 2-5 letter all-caps words
    # (XTTS, USB, DNS, JSON, HTML, ...) → "X T T S". Runs AFTER the explicit list,
    # so TTS/GPU/... are already resolved. "WLAN"-style acronyms that are spoken
    # as one word can be overridden explicitly in _UNIT_WORDS if needed.
    t = _re_tts.sub(r'\b([A-Z]{2,5})\b', lambda m: " ".join(m.group(1)), t)

    # Quotation marks
    t = _re_tts.sub(r'["“”„`]', '', t)

@@ -234,179 +275,6 @@ def clean_text_for_tts(text: str) -> str:
    return t.strip()


class VoiceEngine:
    """Manages Piper TTS with two voices: Ramona and Thorsten."""

    def __init__(self, voices_dir: Path) -> None:
        self.voices_dir = voices_dir
        self.voices: dict[str, PiperVoice] = {}
        self.default_voice = "ramona"
        self.highlight_voice = "thorsten"
        self.speech_speed = {"ramona": 1.0, "thorsten": 1.0}

    def initialize(self) -> None:
        """Loads the Piper voices from the voices directory."""
        voice_configs = {
            "ramona": "de_DE-ramona-low",
            "thorsten": "de_DE-thorsten-high",
        }

        for name, model_name in voice_configs.items():
            model_path = self.voices_dir / f"{model_name}.onnx"
            config_path = self.voices_dir / f"{model_name}.onnx.json"

            if not model_path.exists():
                logger.error("Stimme nicht gefunden: %s", model_path)
                continue

            self.voices[name] = PiperVoice.load(
                str(model_path),
                config_path=str(config_path) if config_path.exists() else None,
            )
            logger.info("Stimme geladen: %s (%s)", name, model_name)

        if not self.voices:
            logger.error("Keine Stimmen geladen — TTS deaktiviert")

    def select_voice(
        self, text: str, requested_voice: Optional[str] = None
    ) -> str:
        """Picks the appropriate voice based on the text or an explicit request.

        Thorsten is used for epic triggers,
        otherwise Ramona is the default voice.

        Args:
            text: The text to speak (for epic-trigger detection).
            requested_voice: Explicitly requested voice ("ramona" | "thorsten").

        Returns:
            Name of the selected voice.
        """
        if requested_voice and requested_voice in self.voices:
            return requested_voice

        # Check highlight triggers
        text_lower = text.lower()
        for trigger in EPIC_TRIGGERS:
            if trigger in text_lower:
                logger.info("Highlight-Trigger erkannt: '%s' — %s spricht", trigger, self.highlight_voice)
                return self.highlight_voice

        return self.default_voice

    def synthesize(self, text: str, voice_name: str = "ramona") -> Optional[bytes]:
        """Generates audio data from text with the selected voice.

        Args:
            text: The text to speak.
            voice_name: Name of the voice ("ramona" or "thorsten").

        Returns:
            WAV audio data as bytes, or None on error.
        """
        voice = self.voices.get(voice_name)
        if voice is None:
            logger.error("Stimme '%s' nicht verfuegbar", voice_name)
            return None

        try:
            # Central TTS cleanup (markdown, code, units, URLs)
            import re
            clean = clean_text_for_tts(text)
            sentences = re.split(r'(?<=[.!?])\s+', clean)
            sentences = [s.strip() for s in sentences if s.strip()]

            if not sentences:
                return None

            # Synthesize each sentence individually and concatenate the WAVs
            all_audio = b""
            sample_rate = None
            for sentence in sentences:
                if not sentence:
                    continue
                with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
                    tmp_path = tmp.name
                speed = self.speech_speed.get(voice_name, 1.0)
                syn_config = SynthesisConfig(length_scale=1.0 / max(0.3, speed))
                with wave.open(tmp_path, "wb") as wav_file:
                    voice.synthesize_wav(sentence, wav_file, syn_config=syn_config)
                with wave.open(tmp_path, "rb") as wav_file:
                    if sample_rate is None:
                        sample_rate = wav_file.getframerate()
                    all_audio += wav_file.readframes(wav_file.getnframes())
                Path(tmp_path).unlink(missing_ok=True)

            # Build the combined WAV
            with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
                final_path = tmp.name
            with wave.open(final_path, "wb") as wav_file:
                wav_file.setnchannels(1)
                wav_file.setsampwidth(2)
                wav_file.setframerate(sample_rate or 22050)
                wav_file.writeframes(all_audio)

            audio_data = Path(final_path).read_bytes()
            Path(final_path).unlink(missing_ok=True)

            logger.info(
                "TTS: %d bytes erzeugt mit %s (%d Saetze) — '%s'",
                len(audio_data),
                voice_name,
                len(sentences),
                text[:60],
            )
            return audio_data

        except Exception:
            logger.exception("TTS-Fehler bei Stimme '%s'", voice_name)
            return None

    def speak(self, text: str, requested_voice: Optional[str] = None) -> None:
        """Speaks the text through the audio device.

        Automatically picks the right voice and plays the audio.

        Args:
            text: The text to speak.
            requested_voice: Optional explicit voice choice.
        """
        voice_name = self.select_voice(text, requested_voice)
        audio_data = self.synthesize(text, voice_name)

        if audio_data is None:
            return

        try:
            # Read the WAV data and play it via sounddevice
            with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
                tmp.write(audio_data)
                tmp_path = tmp.name

            with wave.open(tmp_path, "rb") as wf:
                frames = wf.readframes(wf.getnframes())
                sample_width = wf.getsampwidth()
                rate = wf.getframerate()
                channels = wf.getnchannels()

            Path(tmp_path).unlink(missing_ok=True)

            # Numpy array from the PCM data
            dtype_map = {1: np.int8, 2: np.int16, 4: np.int32}
            dtype = dtype_map.get(sample_width, np.int16)
            audio_array = np.frombuffer(frames, dtype=dtype)

            if channels > 1:
                audio_array = audio_array.reshape(-1, channels)

            sd.play(audio_array, samplerate=rate)
            sd.wait()  # wait until playback is finished

        except Exception:
            logger.exception("Audio-Wiedergabe fehlgeschlagen")


# ── STT Engine ───────────────────────────────────────────────


@@ -457,8 +325,16 @@ class STTEngine:
            Recognized text or an empty string.
        """
        if self.model is None:
            logger.error("Whisper-Modell nicht initialisiert")
            return ""
            # Lazy load: STT normally runs remotely on the gamebox.
            # Only when the fallback kicks in here do we load locally.
            logger.info("Lokales Whisper-Fallback — Modell wird nachgeladen...")
            try:
                self.initialize()
            except Exception:
                logger.exception("Lokales Whisper konnte nicht geladen werden")
                return ""
            if self.model is None:
                return ""

        try:
            # Normalize audio to float32
@@ -613,12 +489,13 @@ class ARIABridge:
        else:
            self.rvs_url = ""
            self.rvs_url_fallback = ""
        self.current_mode = Mode.NORMAL
        # Load the mode from the shared config (persists across container restarts)
        self.current_mode = self._load_persisted_mode()
        self.running = False

        # Components
        self.voice_engine = VoiceEngine(VOICES_DIR)
        # Components (TTS: always XTTS remote, Piper was removed)
        self.tts_enabled = True
        self.xtts_voice = ""
        vc: dict = {}
        # Load the saved voice config
        try:
@@ -626,16 +503,9 @@ class ARIABridge:
            if os.path.exists(vc_path):
                with open(vc_path) as f:
                    vc = json.load(f)
                self.voice_engine.default_voice = vc.get("defaultVoice", "ramona")
                self.voice_engine.highlight_voice = vc.get("highlightVoice", "thorsten")
                self.voice_engine.speech_speed = {
                    "ramona": vc.get("speedRamona", 1.0),
                    "thorsten": vc.get("speedThorsten", 1.0),
                }
                self.tts_enabled = vc.get("ttsEnabled", True)
                self.tts_engine_type = vc.get("ttsEngine", "piper")
                self.xtts_voice = vc.get("xttsVoice", "")
                logger.info("Voice-Config geladen: %s", vc)
                logger.info("Voice-Config geladen: tts=%s voice=%s", self.tts_enabled, self.xtts_voice or "default")
        except Exception as e:
            logger.warning("Voice-Config laden fehlgeschlagen: %s", e)
        # Whisper model: the config takes precedence, then env/default (medium)
@@ -655,6 +525,15 @@ class ARIABridge:
        # Timestamp of the last chat:final — for 3 s afterwards, trailing
        # agent events are suppressed (core sometimes cleans up late).
        self._last_chat_final_at: float = 0.0
        # requestId → messageId map for the XTTS audio cache (app-side association)
        self._xtts_request_to_message: dict[str, str] = {}
        # Voice override from an app's most recent chat message.
        # Used for the directly following ARIA reply and then reset.
        # This way each device can get its preferred voice (per request).
        self._next_voice_override: Optional[str] = None
        # STT requests currently waiting for a reply from the whisper-bridge (gamebox).
        # requestId → future resolving to the text (or None on error).
        self._pending_stt: dict[str, asyncio.Future] = {}

    def initialize(self) -> None:
        """Initializes all components.

@@ -667,11 +546,9 @@ class ARIABridge:
        logger.info("ARIA Voice Bridge startet...")
        logger.info("=" * 50)

        # ALWAYS load the voice engine — renders audio for the app (even without a sound card)
        self.voice_engine.initialize()

        # ALWAYS load STT — processes audio from the app (needs no sound device)
        self.stt_engine.initialize()
        # STT is handled by the whisper-bridge (gamebox) by default.
        # Local Whisper is only a fallback and is lazy-loaded when remote
        # does not answer. That saves RAM on the VM and startup time.

        # Check audio hardware (for local mic/speaker)
        self.audio_available = False
@@ -977,95 +854,114 @@ class ARIABridge:
        is_critical = metadata.get("critical", False)
        requested_voice = metadata.get("voice")

        # Check for a mode switch
        # Check for a mode switch (voice command in the text)
        new_mode = detect_mode_switch(text)
        if new_mode is not None:
            self.current_mode = new_mode
            self._persist_mode()
            logger.info(
                "[core] Modus → %s %s",
                self.current_mode.config.emoji,
                self.current_mode.config.name,
            )
            await self._send_to_rvs({
                "type": "mode",
                "payload": {"mode": self.current_mode.name},
                "timestamp": int(asyncio.get_event_loop().time() * 1000),
            })

        # Select the voice
        voice_name = requested_voice or self.voice_engine.select_voice(text)
            await self._broadcast_current_mode()

        # Unique message ID for audio-cache association
        message_id = str(uuid.uuid4())

        # TTS-prepared variant for debugging (Diagnostic shows it optionally)
        tts_text_preview = clean_text_for_tts(text)
        # Forward the reply to the app (as a chat message)
        await self._send_to_rvs({
            "type": "chat",
            "payload": {
                "text": text,
                "sender": "aria",
                "voice": voice_name,
                "messageId": message_id,
                # Debug: prepared TTS text (the app ignores it, Diagnostic shows it optionally)
                "ttsText": tts_text_preview if tts_text_preview != text else "",
            },
            "timestamp": int(asyncio.get_event_loop().time() * 1000),
        })

        # Render TTS audio and send it to the app (if the mode allows it)
        if getattr(self, 'tts_enabled', True) and should_speak(self.current_mode, is_critical):
            tts_engine = getattr(self, 'tts_engine_type', 'piper')

            if tts_engine == "xtts":
                # XTTS: prepared text (code blocks stripped, units spelled out)
                xtts_voice = getattr(self, 'xtts_voice', '')
                tts_text = clean_text_for_tts(text)
                if not tts_text:
                    logger.info("[core] TTS-Text leer nach Cleanup — XTTS uebersprungen")
                    return
                try:
                    await self._send_to_rvs({
                        "type": "xtts_request",
                        "payload": {
                            "text": tts_text,
                            "voice": xtts_voice,
                            "language": "de",
                            "requestId": str(uuid.uuid4()),
                        },
                        "timestamp": int(asyncio.get_event_loop().time() * 1000),
                    })
                    logger.info("[core] XTTS-Request gesendet (%s): '%s'", xtts_voice or "default", tts_text[:60])
                except Exception as e:
                    logger.warning("[core] XTTS-Request fehlgeschlagen: %s — Fallback auf Piper", e)
                    # Fall back to Piper
                    audio_data = self.voice_engine.synthesize(text, voice_name)
                    if audio_data:
                        audio_b64 = base64.b64encode(audio_data).decode("ascii")
                        await self._send_to_rvs({
                            "type": "audio",
                            "payload": {"base64": audio_b64, "mimeType": "audio/wav", "voice": voice_name, "messageId": message_id},
                            "timestamp": int(asyncio.get_event_loop().time() * 1000),
                        })
            else:
                # Piper: render locally
                audio_data = self.voice_engine.synthesize(text, voice_name)
                if audio_data:
                    audio_b64 = base64.b64encode(audio_data).decode("ascii")
                    await self._send_to_rvs({
                        "type": "audio",
                        "payload": {
                            "base64": audio_b64,
                            "mimeType": "audio/wav",
                            "voice": voice_name,
                            "messageId": message_id,
                        },
                        "timestamp": int(asyncio.get_event_loop().time() * 1000),
                    })
                    logger.info("[core] TTS-Audio gesendet: %d bytes (%s)", len(audio_data), voice_name)

        # Play locally (only if a sound card is present)
        if self.audio_available:
            self.voice_engine.speak(text, requested_voice)
        else:
        # TTS via XTTS (XTTS bridge on the gaming PC)
        if not (getattr(self, 'tts_enabled', True) and should_speak(self.current_mode, is_critical)):
            logger.info("[core] TTS unterdrueckt (Modus: %s)", self.current_mode.config.name)
            return

        # Pick the voice: app override for this request > global default voice
        xtts_voice = self._next_voice_override or getattr(self, 'xtts_voice', '')
        # Consume the override (it applies to exactly this next reply)
        if self._next_voice_override:
            logger.info("[core] Nutze Voice-Override: %s", self._next_voice_override)
            self._next_voice_override = None

        tts_text = tts_text_preview or text
        if not tts_text:
            logger.info("[core] TTS-Text leer nach Cleanup — uebersprungen")
            return
        try:
            xtts_request_id = str(uuid.uuid4())
            self._xtts_request_to_message[xtts_request_id] = message_id
            if len(self._xtts_request_to_message) > 100:
                oldest = next(iter(self._xtts_request_to_message))
                self._xtts_request_to_message.pop(oldest, None)
            await self._send_to_rvs({
                "type": "xtts_request",
                "payload": {
                    "text": tts_text,
                    "voice": xtts_voice,
                    "language": "de",
                    "requestId": xtts_request_id,
                    "messageId": message_id,
                },
                "timestamp": int(asyncio.get_event_loop().time() * 1000),
            })
            logger.info("[core] XTTS-Request gesendet (%s): '%s'", xtts_voice or "default", tts_text[:60])
        except Exception as e:
            logger.error("[core] XTTS-Request fehlgeschlagen: %s — kein Audio", e)

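The cap on `_xtts_request_to_message` above relies on Python dicts preserving insertion order, so `next(iter(...))` always yields the oldest key. A standalone sketch of that bounded-map pattern (function and variable names are illustrative):

```python
def remember(mapping: dict[str, str], key: str, value: str, cap: int = 100) -> None:
    """Insert and, once past the cap, drop the oldest entry (dicts keep insertion order)."""
    mapping[key] = value
    if len(mapping) > cap:
        oldest = next(iter(mapping))
        mapping.pop(oldest, None)

m: dict[str, str] = {}
for i in range(105):
    remember(m, f"req-{i}", f"msg-{i}")

assert len(m) == 100
assert "req-0" not in m        # oldest entries were evicted
assert "req-104" in m          # newest entry survives
```

This is effectively a poor man's LRU keyed by insertion time; `collections.OrderedDict` or an LRU cache would offer the same behavior with explicit semantics, but for a 100-entry map the plain dict is fine.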
    # ── Mode persistence (global, not per device) ──────
    _MODE_FILE = "/shared/config/mode.json"

    def _load_persisted_mode(self) -> Mode:
        """Loads the most recently active mode from the shared config, or NORMAL."""
        try:
            if os.path.exists(self._MODE_FILE):
                data = json.loads(Path(self._MODE_FILE).read_text())
                mode_name = data.get("mode", "NORMAL")
                for m in Mode:
                    if m.name == mode_name:
                        logger.info("[mode] Persistierter Modus geladen: %s", m.config.name)
                        return m
        except Exception as e:
            logger.warning("[mode] Laden fehlgeschlagen: %s", e)
        return Mode.NORMAL

    def _persist_mode(self) -> None:
        """Saves the current mode to the shared config."""
        try:
            os.makedirs("/shared/config", exist_ok=True)
            Path(self._MODE_FILE).write_text(json.dumps({"mode": self.current_mode.name}))
        except Exception as e:
            logger.warning("[mode] Speichern fehlgeschlagen: %s", e)

    async def _broadcast_current_mode(self) -> None:
        """Broadcasts the current mode to all RVS clients (app + diagnostic)."""
        try:
            await self._send_to_rvs({
                "type": "mode",
                "payload": {
                    "mode": canonical_id(self.current_mode),
                    "name": self.current_mode.config.name,
                    "emoji": self.current_mode.config.emoji,
                    "sender": "bridge",  # filtered in the mode handler to prevent loops
                },
                "timestamp": int(asyncio.get_event_loop().time() * 1000),
            })
        except Exception as e:
            logger.debug("[mode] Broadcast fehlgeschlagen: %s", e)

    def _fetch_active_session(self) -> None:
        """Fetches the active session from the diagnostic endpoint."""
@@ -1132,6 +1028,10 @@ class ARIABridge:
                retry_delay = 2
                logger.info("[rvs] Verbunden — warte auf App-Nachrichten")

                # Broadcast the current mode so just-connected apps/diagnostic
                # can sync their UI state immediately
                await self._broadcast_current_mode()

                # Send the heartbeat (RVS expects a ping every 30 s)
                heartbeat_task = asyncio.create_task(self._rvs_heartbeat())

@@ -1215,6 +1115,11 @@ class ARIABridge:
        if sender in ("aria", "stt"):
            return
        text = payload.get("text", "")
        # Remember the voice override for the next ARIA reply
        voice_override = payload.get("voice", "")
        if voice_override:
            self._next_voice_override = voice_override
            logger.info("[rvs] Voice-Override fuer naechste Antwort: %s", voice_override)
        if text:
            logger.info("[rvs] App-Chat: '%s'", text[:80])
            await self.send_to_core(text, source="app")
@@ -1226,106 +1131,99 @@ class ARIABridge:
                await self._emit_activity("idle", "")
            return

        elif msg_type == "audio_pcm":
            # PCM audio goes directly from the XTTS bridge to the app.
            # The aria-bridge must NOT rebroadcast it, otherwise the app receives
            # every chunk twice (once directly from the XTTS bridge via RVS broadcast,
            # once indirectly via us).
            # We simply ignore this message here; the messageId is supplied by the
            # XTTS bridge itself in the payload.
            return

        elif msg_type == "xtts_response":
            # XTTS audio received from the gaming PC → forward to the app
            # Legacy path (old XTTS bridge with WAV response). Forward it as
            # type "audio"; the app uses the existing WAV queue player.
            audio_b64 = payload.get("base64", "")
            error = payload.get("error", "")
            req_id_full = payload.get("requestId", "")
            req_id_base = req_id_full.rsplit("_", 1)[0] if "_" in req_id_full else req_id_full
            linked_message_id = self._xtts_request_to_message.get(req_id_base, "")
            if error:
                logger.warning("[rvs] XTTS Fehler: %s", error)
                return
            if audio_b64:
                logger.info("[rvs] XTTS-Audio empfangen: %dKB", len(audio_b64) // 1365)
                logger.info("[rvs] XTTS-Audio legacy empfangen: %dKB", len(audio_b64) // 1365)
                await self._send_to_rvs({
                    "type": "audio",
                    "payload": {
                        "base64": audio_b64,
                        "mimeType": payload.get("mimeType", "audio/wav"),
                        "voice": payload.get("voice", "xtts"),
                        "messageId": linked_message_id,
                    },
                    "timestamp": int(asyncio.get_event_loop().time() * 1000),
                })
            return

        elif msg_type == "tts_request":
            # App requests TTS audio for a text (play button)
            # App requests TTS audio for a text (play button) → always XTTS.
            text = payload.get("text", "")
            requested_voice = payload.get("voice", "")
            if text:
                voice_name = requested_voice or self.voice_engine.select_voice(text)
                audio_data = self.voice_engine.synthesize(text, voice_name)
                if audio_data:
                    audio_b64 = base64.b64encode(audio_data).decode("ascii")
                    try:
                        await self._send_to_rvs({
                            "type": "audio",
                            "payload": {
                                "base64": audio_b64,
                                "mimeType": "audio/wav",
                                "voice": voice_name,
                            },
                            "timestamp": int(asyncio.get_event_loop().time() * 1000),
                        })
                        logger.info("[rvs] TTS on-demand: %d bytes (%s)", len(audio_data), voice_name)
                    except Exception as e:
                        logger.warning("[rvs] TTS on-demand senden fehlgeschlagen: %s", e)
            message_id = payload.get("messageId", "")
            if not text:
                return
            tts_text = clean_text_for_tts(text) or text
            # Voice from the app payload wins, otherwise the global one
            xtts_voice = payload.get("voice", "") or getattr(self, 'xtts_voice', '')
            try:
                xtts_request_id = str(uuid.uuid4())
                if message_id:
                    self._xtts_request_to_message[xtts_request_id] = message_id
                await self._send_to_rvs({
                    "type": "xtts_request",
                    "payload": {
                        "text": tts_text,
                        "voice": xtts_voice,
                        "language": "de",
                        "requestId": xtts_request_id,
                        "messageId": message_id,
                    },
                    "timestamp": int(asyncio.get_event_loop().time() * 1000),
                })
                logger.info("[rvs] TTS on-demand via XTTS: '%s'", tts_text[:60])
            except Exception as e:
                logger.warning("[rvs] TTS on-demand fehlgeschlagen: %s", e)
            return
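In the `xtts_response` branch above, chunked responses arrive with a numeric suffix on the requestId; stripping it with `rsplit("_", 1)` lets every chunk resolve to the same originating chat message. A small self-contained sketch of that lookup, with hypothetical ids:

```python
# Hypothetical request-to-message table; in the bridge this is
# self._xtts_request_to_message, filled when the xtts_request is sent.
_xtts_request_to_message = {"req-42": "msg-7"}

def linked_message(req_id_full: str) -> str:
    # "req-42_3" (chunk 3) and "req-42" (unchunked) both map to "req-42"
    req_id_base = req_id_full.rsplit("_", 1)[0] if "_" in req_id_full else req_id_full
    return _xtts_request_to_message.get(req_id_base, "")
```

An unknown requestId yields the empty string, so the forwarded audio simply carries no `messageId`.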
        elif msg_type == "config":
            # Receive configuration from app/diagnostic and persist it
            changed = False
            if "defaultVoice" in payload:
                new_voice = payload["defaultVoice"]
                if new_voice in self.voice_engine.voices:
                    self.voice_engine.default_voice = new_voice
                    logger.info("[rvs] Standard-Stimme gewechselt: %s", new_voice)
                    changed = True
            if "highlightVoice" in payload:
                new_voice = payload["highlightVoice"]
                if new_voice in self.voice_engine.voices:
                    self.voice_engine.highlight_voice = new_voice
                    logger.info("[rvs] Highlight-Stimme gewechselt: %s", new_voice)
                    changed = True
            if "ttsEnabled" in payload:
                self.tts_enabled = bool(payload["ttsEnabled"])
                logger.info("[rvs] TTS %s", "aktiviert" if self.tts_enabled else "deaktiviert")
                changed = True
            if "ttsEngine" in payload:
                self.tts_engine_type = payload["ttsEngine"]
                logger.info("[rvs] TTS-Engine: %s", self.tts_engine_type)
                changed = True
            if "xttsVoice" in payload:
                self.xtts_voice = payload["xttsVoice"]
                logger.info("[rvs] XTTS-Stimme: %s", self.xtts_voice)
                logger.info("[rvs] XTTS-Stimme: %s", self.xtts_voice or "default")
                changed = True
            if "speedRamona" in payload:
                self.voice_engine.speech_speed["ramona"] = max(0.3, min(2.0, float(payload["speedRamona"])))
                logger.info("[rvs] Speed Ramona: %.1f", self.voice_engine.speech_speed["ramona"])
                changed = True
            if "speedThorsten" in payload:
                self.voice_engine.speech_speed["thorsten"] = max(0.3, min(2.0, float(payload["speedThorsten"])))
                logger.info("[rvs] Speed Thorsten: %.1f", self.voice_engine.speech_speed["thorsten"])
                changed = True
            whisper_reloaded = False
            if "whisperModel" in payload:
                new_model = payload["whisperModel"]
                if new_model and new_model != self.stt_engine.model_size:
                    logger.info("[rvs] Whisper-Modell Wechsel: %s -> %s (laedt...)", self.stt_engine.model_size, new_model)
                    loop = asyncio.get_event_loop()
                    whisper_reloaded = await loop.run_in_executor(None, self.stt_engine.reload, new_model)
                    if whisper_reloaded:
                        changed = True
                allowed = {"tiny", "base", "small", "medium", "large-v3"}
                if new_model in allowed and new_model != self.stt_engine.model_size:
                    # Remember it and pass it along to the whisper-bridge (gamebox).
                    # The local model is NOT loaded; only the fallback needs it,
                    # and that happens on demand once the remote does not respond.
                    logger.info("[rvs] Whisper-Modell → %s (nur Config; Modell laedt Gamebox)",
                                new_model)
                    self.stt_engine.model_size = new_model
                    self.stt_engine.model = None
                    changed = True
            # Persist to the shared volume
            if changed:
                try:
                    os.makedirs("/shared/config", exist_ok=True)
                    config_data = {
                        "defaultVoice": self.voice_engine.default_voice,
                        "highlightVoice": self.voice_engine.highlight_voice,
                        "ttsEnabled": getattr(self, "tts_enabled", True),
                        "ttsEngine": getattr(self, "tts_engine_type", "piper"),
                        "xttsVoice": getattr(self, "xtts_voice", ""),
                        "speedRamona": self.voice_engine.speech_speed.get("ramona", 1.0),
                        "speedThorsten": self.voice_engine.speech_speed.get("thorsten", 1.0),
                        "whisperModel": self.stt_engine.model_size,
                    }
                    with open("/shared/config/voice_config.json", "w") as f:
@@ -1334,22 +1232,27 @@ class ARIABridge:
                except Exception as e:
                    logger.warning("[rvs] Config speichern fehlgeschlagen: %s", e)
            return
            text = payload.get("text", "")
            if text:
                logger.info("[rvs] App-Chat: '%s'", text[:80])
                await self.send_to_core(text, source="app")

        elif msg_type == "mode":
            # Mode switch from the app
            # Mode switch from the app: an ID ('normal', 'dnd', ...) OR an activation phrase
            mode_name = payload.get("mode", "")
            new_mode = detect_mode_switch(mode_name)
            if new_mode is not None:
            # The sender may be the bridge's own broadcast; ignore it so that
            # other apps do not end up in a loop
            if payload.get("sender") == "bridge":
                return
            new_mode = mode_from_id(mode_name) or detect_mode_switch(mode_name)
            if new_mode is not None and new_mode != self.current_mode:
                self.current_mode = new_mode
                self._persist_mode()
                logger.info(
                    "[rvs] Modus → %s %s (von App)",
                    self.current_mode.config.emoji,
                    self.current_mode.config.name,
                )
                # Broadcast to ALL clients (app + diagnostic) so the UI is in sync everywhere
                await self._broadcast_current_mode()
            elif new_mode is None:
                logger.warning("[rvs] Unbekannter Modus: '%s'", mode_name)
        elif msg_type == "location":
            # GPS data from the app
@@ -1464,26 +1367,120 @@ class ARIABridge:
            if not audio_b64:
                logger.warning("[rvs] Audio ohne Daten empfangen")
                return
            # Voice override for the upcoming ARIA reply (chosen locally in the app)
            voice_override = payload.get("voice", "")
            if voice_override:
                self._next_voice_override = voice_override
                logger.info("[rvs] Voice-Override (via Audio): %s", voice_override)
            logger.info("[rvs] Audio empfangen: %s, %dms, %dKB",
                        mime_type, duration_ms, len(audio_b64) // 1365)
            asyncio.create_task(self._process_app_audio(audio_b64, mime_type))

        elif msg_type == "stt_response":
            # Reply from the whisper-bridge to our stt_request
            request_id = payload.get("requestId", "")
            future = self._pending_stt.get(request_id)
            if future is None or future.done():
                return
            error = payload.get("error", "")
            if error:
                logger.warning("[rvs] stt_response Fehler: %s", error)
                future.set_result(None)
            else:
                text = payload.get("text", "")
                stt_ms = payload.get("sttMs", 0)
                model = payload.get("model", "?")
                logger.info("[rvs] Remote-STT OK (%s, %dms): '%s'", model, stt_ms, (text or "")[:80])
                future.set_result(text)
            return

        else:
            logger.debug("[rvs] Unbekannter Typ: %s", msg_type)

    # STT orchestration: remote first (gamebox), local fallback.
    # The timeout is generous so that even a first-time model load on the
    # gamebox (up to ~30s for large-v3) still gets through.
    _STT_REMOTE_TIMEOUT_S = 45.0
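The stt_request/stt_response round trip above is a classic pending-Future pattern: the sender parks a Future under the requestId, the response handler resolves it, and a timeout triggers the local fallback. A minimal sketch under those assumptions; the network send is stubbed out, and `on_response` stands in for the `stt_response` branch:

```python
import asyncio
import uuid

pending: dict[str, asyncio.Future] = {}

def on_response(req_id: str, text: str) -> None:
    # Mirrors the stt_response branch: resolve the parked Future, if any
    fut = pending.get(req_id)
    if fut is not None and not fut.done():
        fut.set_result(text)

async def request(timeout_s: float):
    req_id = str(uuid.uuid4())
    loop = asyncio.get_running_loop()
    future: asyncio.Future = loop.create_future()
    pending[req_id] = future
    try:
        # Here the real bridge would send {"type": "stt_request", "requestId": req_id, ...};
        # we simulate the remote answering after 10 ms.
        loop.call_later(0.01, on_response, req_id, "hallo")
        return await asyncio.wait_for(future, timeout=timeout_s)
    except asyncio.TimeoutError:
        return None  # caller falls back to local Whisper
    finally:
        pending.pop(req_id, None)  # never leak pending entries
```

The `finally` pop is what keeps `_pending_stt` bounded even when requests time out.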
    async def _process_app_audio(self, audio_b64: str, mime_type: str) -> None:
        """Decodes app audio (Base64 AAC/MP4), converts to 16kHz PCM, runs STT, sends to core."""
        """App audio → STT → aria-core. Primarily via whisper-bridge (RVS), local fallback."""
        # Try remote first
        text = await self._stt_remote(audio_b64, mime_type)
        if text is None:
            # Remote did not answer → local Whisper
            logger.warning("[rvs] Remote-STT nicht verfuegbar — Fallback auf lokales Whisper")
            text = await self._stt_local(audio_b64, mime_type)
            if text is None:
                return

        if text.strip():
            logger.info("[rvs] STT Ergebnis: '%s'", text[:80])
            # Send to aria-core FIRST (the most important step)
            await self.send_to_core(text, source="app-voice")
            # Send the STT text to RVS (for display in app + diagnostic);
            # sender="stt" so the bridge ignores it (no loop)
            try:
                await self._send_to_rvs({
                    "type": "chat",
                    "payload": {
                        "text": text,
                        "sender": "stt",
                    },
                    "timestamp": int(asyncio.get_event_loop().time() * 1000),
                })
            except Exception as e:
                logger.warning("[rvs] STT-Text konnte nicht an RVS gesendet werden: %s", e)
        else:
            logger.info("[rvs] Keine Sprache erkannt — ignoriert")
    async def _stt_remote(self, audio_b64: str, mime_type: str) -> Optional[str]:
        """Sends the audio to the whisper-bridge and waits for stt_response.

        Returns:
            str — recognized text (may be empty)
            None — remote STT unreachable, or error/timeout (→ fallback)
        """
        if self.ws_rvs is None:
            return None

        request_id = str(uuid.uuid4())
        loop = asyncio.get_event_loop()
        future: asyncio.Future = loop.create_future()
        self._pending_stt[request_id] = future

        try:
            await self._send_to_rvs({
                "type": "stt_request",
                "payload": {
                    "requestId": request_id,
                    "audio": audio_b64,
                    "mimeType": mime_type,
                    "model": getattr(self.stt_engine, "model_size", "small"),
                    "language": getattr(self.stt_engine, "language", "de"),
                },
                "timestamp": int(loop.time() * 1000),
            })
            return await asyncio.wait_for(future, timeout=self._STT_REMOTE_TIMEOUT_S)
        except asyncio.TimeoutError:
            logger.warning("[rvs] Remote-STT Timeout (%.0fs)", self._STT_REMOTE_TIMEOUT_S)
            return None
        except Exception as e:
            logger.warning("[rvs] Remote-STT Fehler: %s", e)
            return None
        finally:
            self._pending_stt.pop(request_id, None)
    async def _stt_local(self, audio_b64: str, mime_type: str) -> Optional[str]:
        """Local Whisper fallback: FFmpeg → float32 → stt_engine.transcribe."""
        loop = asyncio.get_event_loop()
        tmp_in = None
        tmp_out = None
        try:
            # Base64 → temp file
            ext = ".mp4" if "mp4" in mime_type else ".wav" if "wav" in mime_type else ".ogg"
            tmp_in = tempfile.NamedTemporaryFile(suffix=ext, delete=False)
            tmp_in.write(base64.b64decode(audio_b64))
            tmp_in.close()

            # FFmpeg: any format → 16kHz mono PCM (raw float32)
            tmp_out = tempfile.NamedTemporaryFile(suffix=".raw", delete=False)
            tmp_out.close()

@@ -1498,45 +1495,21 @@ class ARIABridge:
            )
            if result.returncode != 0:
                logger.error("[rvs] FFmpeg Fehler: %s", result.stderr.decode()[:200])
                return
                return None

            # Read PCM → numpy float32
            audio_data = np.fromfile(tmp_out.name, dtype=np.float32)
            if len(audio_data) == 0:
                logger.warning("[rvs] Leere Audio-Daten nach Konvertierung")
                return
                return None

            duration_s = len(audio_data) / 16000.0
            logger.info("[rvs] Audio konvertiert: %.1fs, %d samples", duration_s, len(audio_data))

            # STT
            text = await loop.run_in_executor(None, self.stt_engine.transcribe, audio_data)

            if text.strip():
                logger.info("[rvs] STT Ergebnis: '%s'", text[:80])
                # Send to aria-core FIRST (the most important step)
                await self.send_to_core(text, source="app-voice")
                # Send the STT text to RVS (for display in app + diagnostic);
                # sender="stt" so the bridge ignores it (no loop)
                try:
                    await self._send_to_rvs({
                        "type": "chat",
                        "payload": {
                            "text": text,
                            "sender": "stt",
                        },
                        "timestamp": int(asyncio.get_event_loop().time() * 1000),
                    })
                except Exception as e:
                    logger.warning("[rvs] STT-Text konnte nicht an RVS gesendet werden: %s", e)
            else:
                logger.info("[rvs] Keine Sprache erkannt — ignoriert")

            logger.info("[rvs] Lokal-STT: %.1fs Audio, model=%s", duration_s, self.stt_engine.model_size)
            return await loop.run_in_executor(None, self.stt_engine.transcribe, audio_data)
        except Exception:
            logger.exception("[rvs] Audio-Verarbeitung fehlgeschlagen")
            logger.exception("[rvs] Lokales STT fehlgeschlagen")
            return None
        finally:
            # Clean up temp files
            for f in [tmp_in, tmp_out]:
            for f in (tmp_in, tmp_out):
                if f:
                    try:
                        os.unlink(f.name)
@@ -91,6 +91,39 @@ _ACTIVATION_MAP: dict[str, Mode] = {
    mode.config.activation_phrase.lower(): mode for mode in Mode
}

# ID mapping for API mode switches (e.g. the app's ModeSelector sends 'normal')
_ID_MAP: dict[str, Mode] = {
    "normal": Mode.NORMAL,
    "nicht_stoeren": Mode.DND,
    "dnd": Mode.DND,
    "fluester": Mode.WHISPER,
    "whisper": Mode.WHISPER,
    "hangar": Mode.HANGAR,
    "gaming": Mode.GAMING,
}


def mode_from_id(mode_id: str) -> Optional[Mode]:
    """ID-based mapping for API mode switches (without an activation phrase)."""
    if not mode_id:
        return None
    return _ID_MAP.get(mode_id.strip().lower())


# Canonical IDs for broadcasts (match the app UI IDs in ModeSelector)
_CANONICAL_ID: dict[Mode, str] = {
    Mode.NORMAL: "normal",
    Mode.DND: "nicht_stoeren",
    Mode.WHISPER: "fluester",
    Mode.HANGAR: "hangar",
    Mode.GAMING: "gaming",
}


def canonical_id(mode: Mode) -> str:
    """Canonical ID known alike to the app, diagnostic, and bridge."""
    return _CANONICAL_ID.get(mode, mode.name.lower())
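The two maps above deliberately accept aliases on input (both `dnd` and `nicht_stoeren`) but emit exactly one canonical id on output. A compact sketch of that round trip, with a stand-in `Mode` enum instead of the real class (which carries a config with name, emoji, and activation phrase):

```python
from enum import Enum
from typing import Optional

class Mode(Enum):  # stand-in for the real Mode class
    NORMAL = 1
    DND = 2

_ID_MAP = {"normal": Mode.NORMAL, "dnd": Mode.DND, "nicht_stoeren": Mode.DND}
_CANONICAL_ID = {Mode.NORMAL: "normal", Mode.DND: "nicht_stoeren"}

def mode_from_id(mode_id: str) -> Optional[Mode]:
    if not mode_id:
        return None
    return _ID_MAP.get(mode_id.strip().lower())

def canonical_id(mode: Mode) -> str:
    return _CANONICAL_ID.get(mode, mode.name.lower())
```

Whatever alias the app sends, the broadcast that comes back always carries the canonical id, so all connected UIs converge on the same state.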
def detect_mode_switch(text: str) -> Optional[Mode]:
    """Detects whether a text contains a mode switch.
@@ -5,8 +5,7 @@
# STT — Whisper (local, no API needed)
faster-whisper

# TTS — Piper (offline, German voices)
piper-tts
# TTS: runs remotely via XTTS v2 on the gaming PC (no local deps needed)

# WebSocket connection to aria-core
websockets

+254 -124
@@ -127,6 +127,33 @@
</style>
</head>
<body>
<!-- Disk-space warning (set dynamically) -->
<div id="disk-banner" style="display:none;position:sticky;top:0;z-index:500;padding:10px 14px;border-radius:0;margin:-16px -16px 12px -16px;font-size:13px;">
  <div style="display:flex;align-items:center;gap:10px;flex-wrap:wrap;">
    <span id="disk-banner-icon" style="font-size:18px;">⚠️</span>
    <span id="disk-banner-text" style="flex:1;min-width:200px;font-weight:600;"></span>
    <button onclick="copyDiskCmd('safe')" class="btn secondary" style="padding:4px 10px;font-size:11px;" title="docker builder prune -a -f && docker image prune -a -f">
      Sicher aufraeumen
    </button>
    <button onclick="document.getElementById('disk-banner-aggressive').style.display=(document.getElementById('disk-banner-aggressive').style.display==='none'?'flex':'none')"
            class="btn secondary" style="padding:4px 10px;font-size:11px;">
      Mehr ▾
    </button>
    <button onclick="document.getElementById('disk-banner').style.display='none'" class="btn secondary" style="padding:4px 10px;font-size:11px;">Schliessen</button>
  </div>
  <!-- Aggressive variant (only visible after a click) -->
  <div id="disk-banner-aggressive" style="display:none;margin-top:10px;padding:8px;background:rgba(0,0,0,0.25);border-radius:4px;flex-direction:column;gap:6px;font-size:12px;">
    <div>
      <b>Sicher</b> (empfohlen) — Build-Cache + ungenutzte Images, keine Volumes:<br>
      <code style="font-family:monospace;">docker builder prune -a -f && docker image prune -a -f</code>
    </div>
    <div style="color:#FFAA55;">
      <b>Aggressiv</b> — zusaetzlich ungenutzte Volumes. <b>Nur wenn alle ARIA-Container laufen</b>, sonst riskierst du Daten-Verlust (Sessions, SSH-Keys, Shared):<br>
      <code style="font-family:monospace;">docker system prune -a --volumes -f</code>
      <button onclick="copyDiskCmd('aggressive')" class="btn secondary" style="padding:2px 8px;font-size:10px;margin-left:6px;">Kopieren</button>
    </div>
  </div>
</div>
<h1>ARIA Diagnostic</h1>
<!-- Main navigation -->
@@ -198,7 +225,13 @@
<div class="card full">
  <div style="display:flex;align-items:center;justify-content:space-between;margin-bottom:8px;">
    <h2 style="margin:0;">Chat Test</h2>
    <button class="btn secondary" onclick="toggleChatFullscreen()" id="btn-chat-fs" style="padding:4px 10px;font-size:11px;">Vollbild</button>
    <div style="display:flex;align-items:center;gap:12px;">
      <label style="color:#8888AA;font-size:11px;cursor:pointer;">
        <input type="checkbox" id="tts-debug-toggle" onchange="toggleTtsDebug()" style="margin-right:4px;vertical-align:middle;">
        TTS-Text einblenden
      </label>
      <button class="btn secondary" onclick="toggleChatFullscreen()" id="btn-chat-fs" style="padding:4px 10px;font-size:11px;">Vollbild</button>
    </div>
  </div>
  <div class="chat-box" id="chat-box"></div>
  <div id="thinking-indicator" style="display:none;padding:6px 10px;font-size:12px;color:#FFD60A;background:#1E1E2E;border-radius:0 0 6px 6px;margin-top:-8px;margin-bottom:8px;align-items:center;justify-content:space-between;">
@@ -311,16 +344,8 @@
<div class="log-box hidden" id="log-server"></div>
<div class="log-box hidden" id="log-pipeline"></div>
<div class="log-box hidden" id="log-tts" style="padding:12px;">
  <h3 style="color:#34C759;margin:0 0 12px;">TTS Diagnose</h3>
  <h3 style="color:#34C759;margin:0 0 12px;">TTS Diagnose (XTTS)</h3>
  <div style="display:grid;grid-template-columns:1fr 1fr;gap:8px;margin-bottom:12px;">
    <div style="background:#1E1E2E;padding:8px;border-radius:6px;">
      <div style="color:#8888AA;font-size:10px;text-transform:uppercase;">Standard-Stimme</div>
      <div style="color:#fff;font-size:14px;margin-top:4px;" id="tts-default-voice">Ramona</div>
    </div>
    <div style="background:#1E1E2E;padding:8px;border-radius:6px;">
      <div style="color:#8888AA;font-size:10px;text-transform:uppercase;">Highlight-Stimme</div>
      <div style="color:#fff;font-size:14px;margin-top:4px;" id="tts-highlight-voice">Thorsten</div>
    </div>
    <div style="background:#1E1E2E;padding:8px;border-radius:6px;">
      <div style="color:#8888AA;font-size:10px;text-transform:uppercase;">Status</div>
      <div style="font-size:14px;margin-top:4px;" id="tts-status">Unbekannt</div>
@@ -334,8 +359,7 @@
  <input type="text" id="tts-test-text" value="Hallo Stefan, ich bin ARIA." placeholder="Test-Text..." style="background:#1E1E2E;border:1px solid #2A2A3E;border-radius:6px;padding:8px;color:#fff;font-size:13px;width:100%;box-sizing:border-box;">
</div>
<div style="display:flex;gap:8px;">
  <button class="btn" onclick="testTTS('ramona')" style="flex:1;">Ramona testen</button>
  <button class="btn" onclick="testTTS('thorsten')" style="flex:1;">Thorsten testen</button>
  <button class="btn" onclick="testTTS('')" style="flex:1;">XTTS testen</button>
  <button class="btn secondary" onclick="checkTTSStatus()" style="flex:1;">Status pruefen</button>
</div>
<div id="tts-log" style="margin-top:12px;max-height:200px;overflow-y:auto;font-size:11px;font-family:monospace;color:#8888AA;"></div>
@@ -386,10 +410,10 @@
<button class="btn mode-btn" data-mode="normal" onclick="setMode('normal')" style="background:#1E1E2E;border:2px solid transparent;">
  <span style="font-size:18px;">🟢</span> Normal<br><span style="font-size:10px;color:#8888AA;">Hoert zu, antwortet, spricht</span>
</button>
<button class="btn mode-btn" data-mode="dnd" onclick="setMode('dnd')" style="background:#1E1E2E;border:2px solid transparent;">
<button class="btn mode-btn" data-mode="nicht_stoeren" onclick="setMode('nicht_stoeren')" style="background:#1E1E2E;border:2px solid transparent;">
  <span style="font-size:18px;">🔴</span> Nicht stoeren<br><span style="font-size:10px;color:#8888AA;">Nur Kritikalarme</span>
</button>
<button class="btn mode-btn" data-mode="whisper" onclick="setMode('whisper')" style="background:#1E1E2E;border:2px solid transparent;">
<button class="btn mode-btn" data-mode="fluester" onclick="setMode('fluester')" style="background:#1E1E2E;border:2px solid transparent;">
  <span style="font-size:18px;">🟡</span> Fluestern<br><span style="font-size:10px;color:#8888AA;">Nur Text, keine Sprache</span>
</button>
<button class="btn mode-btn" data-mode="hangar" onclick="setMode('hangar')" style="background:#1E1E2E;border:2px solid transparent;">
@@ -407,94 +431,47 @@
<div class="settings-section">
  <h2>Sprachausgabe</h2>
  <div class="card" style="max-width:500px;">
    <!-- TTS enabled (global for all engines) -->
    <!-- TTS enabled (global) -->
    <div style="display:flex;align-items:center;gap:12px;margin-bottom:12px;">
      <label style="color:#8888AA;font-size:12px;">TTS aktiv:</label>
      <label class="toggle"><input type="checkbox" id="diag-tts-enabled" checked onchange="sendVoiceConfig()"><span class="slider"></span></label>
    </div>

    <!-- TTS engine selection -->
    <div style="display:flex;align-items:center;gap:12px;margin-bottom:12px;">
      <label style="color:#8888AA;font-size:12px;">TTS Engine:</label>
      <select id="diag-tts-engine" onchange="sendVoiceConfig();toggleXTTSPanel()" style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">
        <option value="piper">Piper (lokal, CPU, schnell)</option>
        <option value="xtts">XTTS v2 (remote, GPU, natuerlich)</option>
    <!-- XTTS voice -->
    <div style="display:flex;align-items:center;gap:12px;margin-bottom:6px;">
      <label style="color:#8888AA;font-size:12px;">XTTS Stimme:</label>
      <select id="diag-xtts-voice" onchange="sendVoiceConfig()" style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">
        <option value="">Standard (XTTS Default)</option>
      </select>
      <button class="btn secondary" onclick="loadXTTSVoices()" style="padding:4px 10px;font-size:11px;">Laden</button>
    </div>
    <div id="voice-status" style="font-size:11px;min-height:14px;margin-bottom:12px;color:#8888AA;"></div>
<!-- Piper voices (only when engine=piper) -->
<div id="piper-panel">
  <div style="display:flex;align-items:center;gap:12px;margin-bottom:12px;">
    <label style="color:#8888AA;font-size:12px;">Standard-Stimme:</label>
    <select id="diag-default-voice" onchange="sendVoiceConfig()" style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">
      <option value="ramona">Ramona (weiblich)</option>
      <option value="thorsten">Thorsten (maennlich)</option>
    </select>
  </div>
  <div style="display:flex;align-items:center;gap:12px;margin-bottom:12px;">
    <label style="color:#8888AA;font-size:12px;">Highlight-Stimme:</label>
    <select id="diag-highlight-voice" onchange="sendVoiceConfig()" style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">
      <option value="thorsten">Thorsten (maennlich)</option>
      <option value="ramona">Ramona (weiblich)</option>
    </select>
  </div>
  <div style="margin-bottom:4px;">
    <label style="color:#8888AA;font-size:12px;">Ramona Speed: <span id="speed-ramona-label">1.0x</span></label>
  </div>
  <div style="display:flex;align-items:center;gap:8px;margin-bottom:12px;">
    <span style="color:#555570;font-size:11px;">0.5x</span>
    <input type="range" id="diag-speed-ramona" min="0.5" max="2.0" step="0.1" value="1.0"
           oninput="document.getElementById('speed-ramona-label').textContent=this.value+'x'"
           onchange="sendVoiceConfig()"
           style="flex:1;accent-color:#0096FF;">
    <span style="color:#555570;font-size:11px;">2.0x</span>
  </div>
  <div style="margin-bottom:4px;">
    <label style="color:#8888AA;font-size:12px;">Thorsten Speed: <span id="speed-thorsten-label">1.0x</span></label>
  </div>
  <div style="display:flex;align-items:center;gap:8px;">
    <span style="color:#555570;font-size:11px;">0.5x</span>
    <input type="range" id="diag-speed-thorsten" min="0.5" max="2.0" step="0.1" value="1.0"
           oninput="document.getElementById('speed-thorsten-label').textContent=this.value+'x'"
           onchange="sendVoiceConfig()"
           style="flex:1;accent-color:#0096FF;">
    <span style="color:#555570;font-size:11px;">2.0x</span>
  </div>
</div><!-- /piper-panel -->
<!-- Cloned voices: list with delete -->
<div id="xtts-voice-list" style="margin-bottom:12px;"></div>
<!-- XTTS panel (only when engine=xtts) -->
<div id="xtts-panel" style="display:none;">
  <div style="display:flex;align-items:center;gap:12px;margin-bottom:12px;">
    <label style="color:#8888AA;font-size:12px;">XTTS Stimme:</label>
    <select id="diag-xtts-voice" onchange="sendVoiceConfig()" style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">
      <option value="">Standard (XTTS Default)</option>
    </select>
    <button class="btn secondary" onclick="loadXTTSVoices()" style="padding:4px 10px;font-size:11px;">Laden</button>
<!-- Voice Cloning -->
<div style="background:#1E1E2E;border-radius:8px;padding:12px;margin-top:8px;">
  <div style="color:#0096FF;font-size:13px;font-weight:600;margin-bottom:8px;">Stimme klonen</div>
  <div style="color:#8888AA;font-size:11px;margin-bottom:8px;">
    Lade ein oder mehrere Audio-Samples hoch (WAV/MP3, min. 6-10 Sekunden).
    Mehrere Dateien werden automatisch zusammengefuegt.
  </div>

  <!-- Voice Cloning -->
  <div style="background:#1E1E2E;border-radius:8px;padding:12px;margin-top:8px;">
    <div style="color:#0096FF;font-size:13px;font-weight:600;margin-bottom:8px;">Stimme klonen</div>
    <div style="color:#8888AA;font-size:11px;margin-bottom:8px;">
      Lade ein oder mehrere Audio-Samples hoch (WAV/MP3, min. 6-10 Sekunden).
      Mehrere Dateien werden automatisch zusammengefuegt.
    </div>
    <div style="margin-bottom:8px;">
      <input type="text" id="xtts-clone-name" placeholder="Name fuer die Stimme..." style="background:#0D0D1A;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;color:#fff;font-size:13px;width:100%;box-sizing:border-box;">
    </div>
    <div style="margin-bottom:8px;">
      <input type="file" id="xtts-clone-files" accept="audio/*" multiple style="color:#8888AA;font-size:12px;">
    </div>
    <div style="display:flex;gap:8px;">
      <button class="btn" onclick="uploadVoiceSamples()" style="flex:1;">Stimme erstellen</button>
    </div>
    <div id="xtts-clone-status" style="font-size:11px;color:#555570;margin-top:6px;"></div>
  <div style="margin-bottom:8px;">
    <input type="text" id="xtts-clone-name" placeholder="Name fuer die Stimme..." style="background:#0D0D1A;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;color:#fff;font-size:13px;width:100%;box-sizing:border-box;">
  </div>

  <!-- XTTS Status -->
  <div style="margin-top:8px;font-size:11px;color:#555570;" id="xtts-status">
    XTTS-Server: Nicht verbunden (starte xtts/ auf dem Gaming-PC)
  <div style="margin-bottom:8px;">
    <input type="file" id="xtts-clone-files" accept="audio/*" multiple style="color:#8888AA;font-size:12px;">
  </div>
  <div style="display:flex;gap:8px;">
    <button class="btn" onclick="uploadVoiceSamples()" style="flex:1;">Stimme erstellen</button>
  </div>
  <div id="xtts-clone-status" style="font-size:11px;color:#555570;margin-top:6px;"></div>
</div>

<!-- XTTS Status -->
<div style="margin-top:8px;font-size:11px;color:#555570;" id="xtts-status">
  XTTS-Server: Nicht verbunden (starte xtts/ auf dem Gaming-PC)
</div>
</div>
</div>
@@ -792,11 +769,8 @@
  return;
}
if (msg.type === 'tts_status') {
  document.getElementById('tts-default-voice').textContent = msg.defaultVoice || '?';
  document.getElementById('tts-highlight-voice').textContent = msg.highlightVoice || '?';
  document.getElementById('tts-status').textContent = msg.ok ? 'OK' : 'Fehler';
  document.getElementById('tts-status').style.color = msg.ok ? '#34C759' : '#FF3B30';
  if (msg.voices) ttsLog(`Stimmen: ${msg.voices.join(', ')}`);
  if (msg.error) { document.getElementById('tts-last-error').textContent = msg.error; ttsLog(`Fehler: ${msg.error}`); }
  else { document.getElementById('tts-last-error').textContent = '-'; ttsLog('TTS OK'); }
  return;
@@ -807,18 +781,40 @@
|
||||
return;
|
||||
}
|
||||
|
||||
if (msg.type === 'disk_status') {
|
||||
updateDiskBanner(msg);
|
||||
return;
|
||||
}
|
||||
|
||||
if (msg.type === 'mode' && msg.payload) {
|
||||
// Bridge hat den Modus geaendert (evtl. von anderer App/Diagnostic) — UI syncen
|
||||
const mode = (msg.payload.mode || '').toLowerCase();
|
||||
if (MODE_LABELS[mode]) {
|
||||
updateModeUI(mode);
|
||||
log("info", "server", `Mode-Sync: ${mode}`);
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
if (msg.type === 'xtts_voices_list') {
|
||||
const select = document.getElementById('diag-xtts-voice');
|
||||
// Behalte erste Option (Default)
|
||||
// Aktuelle Auswahl merken damit Rebuild sie nicht zerstoert
|
||||
const previouslySelected = select.value;
|
||||
while (select.options.length > 1) select.remove(1);
|
||||
for (const v of (msg.payload?.voices || [])) {
|
||||
const voices = msg.payload?.voices || [];
|
||||
for (const v of voices) {
|
||||
const opt = document.createElement('option');
|
||||
opt.value = v.name;
|
||||
opt.textContent = `${v.name} (${(v.size / 1024).toFixed(0)}KB)`;
|
||||
select.appendChild(opt);
|
||||
}
|
||||
document.getElementById('xtts-status').textContent = `XTTS: ${msg.payload?.voices?.length || 0} Stimme(n) verfuegbar`;
|
||||
// Wenn die vorherige Auswahl weiter existiert → wiederherstellen
|
||||
if (previouslySelected && voices.some(v => v.name === previouslySelected)) {
|
||||
select.value = previouslySelected;
|
||||
}
|
||||
document.getElementById('xtts-status').textContent = `XTTS: ${voices.length} Stimme(n) verfuegbar`;
|
||||
document.getElementById('xtts-status').style.color = '#34C759';
|
||||
renderVoiceList(voices);
|
||||
return;
|
||||
}
|
||||
if (msg.type === 'xtts_voice_saved') {
|
||||
@@ -829,16 +825,7 @@
|
||||
}
|
||||
|
||||
if (msg.type === 'voice_config') {
|
||||
document.getElementById('diag-default-voice').value = msg.defaultVoice || 'ramona';
|
||||
document.getElementById('diag-highlight-voice').value = msg.highlightVoice || 'thorsten';
|
||||
document.getElementById('diag-tts-enabled').checked = msg.ttsEnabled !== false;
|
||||
const sr = msg.speedRamona || 1.0;
|
||||
const st = msg.speedThorsten || 1.0;
|
||||
document.getElementById('diag-speed-ramona').value = sr;
|
||||
document.getElementById('speed-ramona-label').textContent = sr + 'x';
|
||||
document.getElementById('diag-speed-thorsten').value = st;
|
||||
document.getElementById('speed-thorsten-label').textContent = st + 'x';
|
||||
document.getElementById('diag-tts-engine').value = msg.ttsEngine || 'piper';
|
||||
// XTTS-Voice setzen — Option hinzufuegen falls nicht vorhanden
|
||||
const xttsSelect = document.getElementById('diag-xtts-voice');
|
||||
const xttsVoice = msg.xttsVoice || '';
|
||||
@@ -849,7 +836,6 @@
|
||||
xttsSelect.appendChild(opt);
|
||||
}
|
||||
xttsSelect.value = xttsVoice;
|
||||
toggleXTTSPanel();
|
||||
// Whisper-Modell wiederherstellen (falls gesetzt)
|
||||
if (msg.whisperModel) {
|
||||
const wSel = document.getElementById('diag-whisper-model');
|
||||
@@ -866,6 +852,25 @@
|
||||
return;
|
||||
}
|
||||
|
||||
if (msg.type === 'voice_ready') {
|
||||
const v = msg.payload?.voice || '';
|
||||
const err = msg.payload?.error;
|
||||
const ms = msg.payload?.loadMs;
|
||||
const statusEl = document.getElementById('voice-status');
|
||||
if (statusEl) {
|
||||
if (err) {
|
||||
statusEl.textContent = `⚠️ Stimme "${v}" Fehler: ${err}`;
|
||||
statusEl.style.color = '#FF3B30';
|
||||
} else {
|
||||
statusEl.textContent = `✅ Stimme "${v || 'Standard'}" bereit${ms ? ` (${(ms/1000).toFixed(1)}s)` : ''}`;
|
||||
statusEl.style.color = '#34C759';
|
||||
}
|
||||
setTimeout(() => { if (statusEl) statusEl.textContent = ''; }, 5000);
|
||||
}
|
||||
addLog('info', 'xtts', err ? `Voice "${v}": ${err}` : `Voice "${v || 'Standard'}" bereit`);
|
||||
return;
|
||||
}
|
||||
|
||||
if (msg.type === 'watchdog') {
|
||||
const colors = { warning: '#FFD60A', fixing: '#FF9500', fixed: '#34C759', error: '#FF3B30' };
|
||||
const color = colors[msg.status] || '#FFD60A';
|
||||
@@ -1272,14 +1277,55 @@
  });
}

function addChat(type, text, meta) {
// Debug toggle: show the TTS-prepared variant below ARIA messages
let showTtsDebug = localStorage.getItem('aria-show-tts-debug') === '1';
function toggleTtsDebug() {
  showTtsDebug = !showTtsDebug;
  localStorage.setItem('aria-show-tts-debug', showTtsDebug ? '1' : '0');
  const el = document.getElementById('tts-debug-toggle');
  if (el) el.checked = showTtsDebug;
}

// Minimal JS port of clean_text_for_tts() from the bridge; display only
function previewTtsText(text) {
  if (!text) return '';
  // <voice>...</voice>
  const vm = text.match(/<voice>([\s\S]*?)<\/voice>/i);
  if (vm) text = vm[1];
  let t = text;
  t = t.replace(/```[\s\S]*?```/g, '. ');
  t = t.replace(/`[^`]+`/g, '');
  t = t.replace(/\*\*([^*]+)\*\*/g, '$1');
  t = t.replace(/\*([^*]+)\*/g, '$1');
  t = t.replace(/\[([^\]]+)\]\([^)]+\)/g, '$1');
  t = t.replace(/https?:\/\/\S+/g, 'ein Link');
  t = t.replace(/^#{1,6}\s*/gm, '');
  t = t.replace(/^>\s*/gm, '');
  t = t.replace(/^[\-\*]\s+/gm, '');
  t = t.replace(/(\d+)GB\b/g, '$1 Gigabyte');
  t = t.replace(/(\d+)MB\b/g, '$1 Megabyte');
  t = t.replace(/%/g, ' Prozent');
  t = t.replace(/\bCPU\b/g, 'C P U').replace(/\bAPI\b/g, 'A P I').replace(/\bRAM\b/g, 'R A M');
  t = t.replace(/\n{2,}/g, '. ').replace(/\n/g, ', ').replace(/\s{2,}/g, ' ');
  return t.trim();
}

function addChat(type, text, meta, options) {
  const escaped = escapeHtml(text);
  let linked = linkifyText(escaped);
  // Show /shared/uploads/ paths as inline images
  linked = linked.replace(/\/shared\/uploads\/[^\s<"]+\.(jpg|jpeg|png|gif)/gi, (match) => {
    return `<a href="${match}" target="_blank">${match}</a><img src="${match}" class="chat-media" onclick="openLightbox('image','${match}')" onerror="this.style.display='none'">`;
  });
  const html = `${linked}<div class="meta">${escapeHtml(meta)} — ${new Date().toLocaleTimeString('de-DE')}</div>`;
  // Optional: append the TTS variant as an extra block below the message
  let ttsBlock = '';
  if (showTtsDebug && type === 'received') {
    const ttsText = (options && options.ttsText) || previewTtsText(text);
    if (ttsText && ttsText !== text) {
      ttsBlock = `<div style="margin-top:6px;padding:4px 8px;background:rgba(0,150,255,0.08);border-left:2px solid #0096FF;font-size:11px;color:#88AACC;"><span style="color:#0096FF;font-weight:bold;">TTS:</span> ${escapeHtml(ttsText)}</div>`;
    }
  }
  const html = `${linked}${ttsBlock}<div class="meta">${escapeHtml(meta)} — ${new Date().toLocaleTimeString('de-DE')}</div>`;

  // Hide the thinking indicator when a new message arrives
  updateThinkingIndicator({ activity: 'idle' });
@@ -1382,10 +1428,38 @@
}

// ── XTTS Panel ─────────────────────────────
function renderVoiceList(voices) {
  const box = document.getElementById('xtts-voice-list');
  if (!box) return;
  if (!voices || voices.length === 0) {
    box.innerHTML = '<div style="color:#555570;font-size:11px;">Noch keine eigenen Stimmen vorhanden.</div>';
    return;
  }
  let html = '<div style="color:#8888AA;font-size:11px;margin-bottom:4px;">Geclonte Stimmen:</div>';
  html += '<div style="display:flex;flex-direction:column;gap:4px;">';
  for (const v of voices) {
    const esc = (s) => String(s).replace(/[&<>"']/g, c => ({ "&":"&amp;", "<":"&lt;", ">":"&gt;", '"':"&quot;", "'":"&#39;" }[c]));
    html += `<div style="display:flex;align-items:center;gap:8px;background:#1E1E2E;border-radius:4px;padding:4px 8px;font-size:12px;">`
      + `<span style="flex:1;color:#E0E0F0;">${esc(v.name)}</span>`
      + `<span style="color:#555570;font-size:10px;">${(v.size/1024).toFixed(0)}KB</span>`
      + `<button class="btn secondary" onclick="deleteXttsVoice('${esc(v.name).replace(/'/g, "\\'")}')" style="padding:2px 8px;font-size:10px;color:#FF6B6B;" title="Stimme loeschen">X</button>`
      + `</div>`;
  }
  html += '</div>';
  box.innerHTML = html;
}

function deleteXttsVoice(name) {
  if (!confirm(`Stimme "${name}" endgueltig loeschen?`)) return;
  send({ action: 'xtts_delete_voice', name });
  // If it is the current selection: reset to the default
  const sel = document.getElementById('diag-xtts-voice');
  if (sel.value === name) { sel.value = ''; sendVoiceConfig(); }
}

// Legacy no-op (XTTS is now the only engine, no panel toggle needed)
function toggleXTTSPanel() {
  const engine = document.getElementById('diag-tts-engine').value;
  document.getElementById('piper-panel').style.display = engine === 'piper' ? 'block' : 'none';
  document.getElementById('xtts-panel').style.display = engine === 'xtts' ? 'block' : 'none';
  void 0;
  if (engine === 'xtts') loadXTTSVoices();
}

@@ -1493,15 +1567,15 @@

// ── Voice config ──────────────────────────
function sendVoiceConfig() {
  const defaultVoice = document.getElementById('diag-default-voice').value;
  const highlightVoice = document.getElementById('diag-highlight-voice').value;
  const ttsEnabled = document.getElementById('diag-tts-enabled').checked;
  const speedRamona = parseFloat(document.getElementById('diag-speed-ramona').value);
  const speedThorsten = parseFloat(document.getElementById('diag-speed-thorsten').value);
  const ttsEngine = document.getElementById('diag-tts-engine').value;
  const xttsVoice = document.getElementById('diag-xtts-voice').value;
  const whisperModel = document.getElementById('diag-whisper-model').value;
  send({ action: 'send_voice_config', defaultVoice, highlightVoice, ttsEnabled, speedRamona, speedThorsten, ttsEngine, xttsVoice, whisperModel });
  send({ action: 'send_voice_config', ttsEnabled, xttsVoice, whisperModel });
  const statusEl = document.getElementById('voice-status');
  if (statusEl && xttsVoice) {
    statusEl.textContent = `⏳ Stimme "${xttsVoice}" wird geladen...`;
    statusEl.style.color = '#FFD60A';
  }
}

// ── Password field show/hide ─────────────────────
@@ -1631,19 +1705,24 @@
const origSwitchMainTab = typeof switchMainTab === 'function' ? switchMainTab : null;

// ── Mode switching ────────────────────────────
// Canonical IDs (match bridge/modes.py canonical_id + the Android ModeSelector)
const MODE_LABELS = { normal: 'Normal', nicht_stoeren: 'Nicht stoeren', fluester: 'Fluestern', hangar: 'Hangar', gaming: 'Gaming' };
let currentMode = 'normal';
const MODE_LABELS = { normal: 'Normal', dnd: 'Nicht stoeren', whisper: 'Fluestern', hangar: 'Hangar', gaming: 'Gaming' };

function setMode(mode) {
function updateModeUI(mode) {
  currentMode = mode;
  // Visual feedback
  document.querySelectorAll('.mode-btn').forEach(btn => {
    btn.style.borderColor = btn.dataset.mode === mode ? '#0096FF' : 'transparent';
  });
  document.getElementById('mode-status').textContent = `Aktueller Modus: ${MODE_LABELS[mode] || mode}`;
  // Send to the bridge via RVS
  sendToRVS(`ARIA, ${MODE_LABELS[mode]}-Modus`, false);
  log("info", "server", `Modus gewechselt: ${mode}`);
  const label = MODE_LABELS[mode] || mode;
  document.getElementById('mode-status').textContent = `Aktueller Modus: ${label}`;
}

function setMode(mode) {
  // Optimistic UI update; the bridge confirms via broadcast
  updateModeUI(mode);
  // Clean path: type=mode via RVS to the bridge, which broadcasts to all clients
  send({ action: 'set_mode', mode });
}

// ── TTS diagnostics ─────────────────────────────
@@ -2129,6 +2208,57 @@
  send({ action: 'get_openclaw_config' });
}

// Set the toggle checkbox correctly on init
const ttsToggleEl = document.getElementById('tts-debug-toggle');
if (ttsToggleEl) ttsToggleEl.checked = showTtsDebug;

// Update the disk-space banner (pushed by the server via disk_status)
function updateDiskBanner(status) {
  const banner = document.getElementById('disk-banner');
  const icon = document.getElementById('disk-banner-icon');
  const text = document.getElementById('disk-banner-text');
  if (!banner) return;
  if (!status || status.level === 'ok') {
    banner.style.display = 'none';
    return;
  }
  const gb = (n) => (n / 1024 / 1024 / 1024).toFixed(1);
  const pct = status.percent;
  const used = gb(status.usedBytes);
  const total = gb(status.totalBytes);
  const avail = gb(status.availBytes);
  let bg, col, msg;
  if (status.level === 'critical') {
    bg = '#5C1A1A'; col = '#FF6B6B'; icon.innerHTML = '🚨';
    msg = `KRITISCH: Platte ${pct}% voll (${used}GB von ${total}GB, nur noch ${avail}GB frei). aria-core kann bald nicht mehr schreiben — sofort aufraeumen!`;
  } else if (status.level === 'warn') {
    bg = '#5C3A1A'; col = '#FFAA55'; icon.innerHTML = '⚠️';
    msg = `Warnung: Platte ${pct}% voll (${avail}GB frei). Bald aufraeumen.`;
  } else {
    bg = '#4A3A1A'; col = '#FFD60A'; icon.innerHTML = 'ℹ️';
    msg = `Hinweis: Platte ${pct}% voll (${avail}GB frei).`;
  }
  banner.style.background = bg;
  banner.style.color = col;
  banner.style.borderBottom = `2px solid ${col}`;
  text.textContent = msg;
  banner.style.display = 'block';
}

function copyDiskCmd(variant) {
  const cmd = variant === 'aggressive'
    ? 'docker system prune -a --volumes -f'
    : 'docker builder prune -a -f && docker image prune -a -f';
  navigator.clipboard.writeText(cmd).then(() => {
    const btn = event.target;
    const old = btn.textContent;
    btn.textContent = 'Kopiert!';
    setTimeout(() => { btn.textContent = old; }, 1500);
  }).catch(() => {
    alert('Kopieren fehlgeschlagen — Befehl: ' + cmd);
  });
}

connectWS();
</script>
</body>

+96 -75
@@ -622,6 +622,21 @@ function connectRVS(forcePlain) {
  }});
} else if (msg.type === "heartbeat") {
  // ignore
} else if (msg.type === "mode") {
  // Mode broadcast from the bridge; forward it to the browser clients
  log("info", "rvs", `Mode-Broadcast: ${msg.payload?.mode} (${msg.payload?.name})`);
  broadcast({ type: "mode", payload: msg.payload });
} else if (msg.type === "voice_ready") {
  // The XTTS bridge reports a voice as loaded; pass it through to the browser
  const v = msg.payload?.voice || "";
  const err = msg.payload?.error;
  const ms = msg.payload?.loadMs;
  if (err) {
    log("warn", "rvs", `Voice-Ready Fehler fuer "${v}": ${err}`);
  } else {
    log("info", "rvs", `Voice "${v || "default"}" geladen${ms ? ` in ${(ms/1000).toFixed(1)}s` : ""}`);
  }
  broadcast({ type: "voice_ready", payload: msg.payload });
} else {
  log("debug", "rvs", `Nachricht: ${JSON.stringify(msg).slice(0, 150)}`);
}
@@ -1144,6 +1159,53 @@ function updateAgentActivity() {
  watchdogWarned = false;
}

// ── Disk-space monitor ───────────────────────────────
// Periodically checks the host disk (via the mounted /shared) and
// broadcasts a disk_status event when critical thresholds are reached.
let lastDiskStatus = null;
let currentDiskStatus = null; // full status for newly connected clients
function checkDiskSpace() {
  const { exec } = require("child_process");
  exec("df -B1 /shared", (err, stdout) => {
    if (err) return;
    const lines = stdout.trim().split("\n");
    if (lines.length < 2) return;
    const cols = lines[1].split(/\s+/);
    // Filesystem Size Used Avail Use% MountedOn
    const total = parseInt(cols[1], 10);
    const used = parseInt(cols[2], 10);
    const avail = parseInt(cols[3], 10);
    if (!total) return;
    const pct = Math.round((used / total) * 100);
    let level = "ok";
    if (pct >= 95) level = "critical";
    else if (pct >= 85) level = "warn";
    else if (pct >= 70) level = "info";
    const status = {
      type: "disk_status",
      level,
      percent: pct,
      usedBytes: used,
      totalBytes: total,
      availBytes: avail,
    };
    currentDiskStatus = status;
    // Only broadcast when something changed (or on the 60s refresh)
    const key = `${level}-${pct}`;
    if (lastDiskStatus !== key) {
      lastDiskStatus = key;
      broadcast(status);
      if (level !== "ok") {
        log(level === "critical" ? "error" : "warn", "server",
          `Disk ${pct}% belegt (${(used/1024/1024/1024).toFixed(1)}GB von ${(total/1024/1024/1024).toFixed(1)}GB)`);
      }
    }
  });
}
// On startup + every 30s
setTimeout(checkDiskSpace, 2000);
setInterval(checkDiskSpace, 30000);

// The watchdog checks every 30s whether ARIA reacted after a sent message
setInterval(async () => {
  if (pendingMessageTime === 0) return; // no message sent
@@ -1277,6 +1339,8 @@ wss.on("connection", (ws) => {
  browserClients.add(ws);
  // Send the initial state + the most recent logs
  ws.send(JSON.stringify({ type: "init", state, logs: logs.slice(-100) }));
  // Include the last disk status so the client immediately knows the space situation
  if (currentDiskStatus) ws.send(JSON.stringify(currentDiskStatus));

  ws.on("message", (raw) => {
    try {
@@ -1339,22 +1403,24 @@ wss.on("connection", (ws) => {
} else if (msg.action === "xtts_list_voices") {
  // Fresh connection waiting for the reply
  sendToRVS_withResponse("xtts_list_voices", {}, "xtts_voices_list", ws);
} else if (msg.action === "xtts_delete_voice") {
  // Forward to the XTTS bridge, which replies with the updated list
  sendToRVS_raw({ type: "xtts_delete_voice", payload: { name: msg.name }, timestamp: Date.now() });
  log("info", "server", `Voice-Delete '${msg.name}' an XTTS-Bridge gesendet`);
} else if (msg.action === "set_mode") {
  // Mode change; the bridge handles it and broadcasts to all clients
  sendToRVS_raw({ type: "mode", payload: { mode: msg.mode }, timestamp: Date.now() });
  log("info", "server", `Mode-Wechsel angefordert: ${msg.mode}`);
} else if (msg.action === "get_voice_config") {
  handleGetVoiceConfig(ws);
} else if (msg.action === "send_voice_config") {
  // Persist the voice config + send it to the bridge via RVS
  // Read the existing config to merge fields this call does not set
  let existing = {};
  try { existing = JSON.parse(fs.readFileSync("/shared/config/voice_config.json", "utf-8")); } catch {}
  const voiceConfig = {
    ...existing,
    defaultVoice: msg.defaultVoice || "ramona",
    highlightVoice: msg.highlightVoice || "thorsten",
    ttsEnabled: msg.ttsEnabled !== false,
    ttsEngine: msg.ttsEngine || "piper",
    xttsVoice: msg.xttsVoice || "",
    speedRamona: msg.speedRamona || 1.0,
    speedThorsten: msg.speedThorsten || 1.0,
  };
  if (msg.whisperModel !== undefined) voiceConfig.whisperModel = msg.whisperModel;
  try {
@@ -1362,13 +1428,13 @@ wss.on("connection", (ws) => {
    fs.writeFileSync("/shared/config/voice_config.json", JSON.stringify(voiceConfig, null, 2));
  } catch {}
  sendToRVS_raw({ type: "config", payload: voiceConfig, timestamp: Date.now() });
  log("info", "server", `Voice-Config gespeichert+gesendet: default=${voiceConfig.defaultVoice}, whisper=${voiceConfig.whisperModel || "-"}`);
  log("info", "server", `Voice-Config gespeichert: xttsVoice=${voiceConfig.xttsVoice || "default"}, whisper=${voiceConfig.whisperModel || "-"}`);
} else if (msg.action === "get_triggers") {
  handleGetTriggers(ws);
} else if (msg.action === "save_triggers") {
  handleSaveTriggers(ws, msg.triggers || []);
} else if (msg.action === "test_tts") {
  handleTestTTS(ws, msg.voice || "ramona", msg.text || "Test");
  handleTestTTS(ws, msg.text || "Test");
} else if (msg.action === "check_tts") {
  handleCheckTTS(ws);
} else if (msg.action === "check_desktop") {
@@ -1508,32 +1574,21 @@ function handleGetVoiceConfig(clientWs) {
    const config = JSON.parse(fs.readFileSync(configPath, "utf-8"));
    clientWs.send(JSON.stringify({ type: "voice_config", ...config }));
  } else {
    clientWs.send(JSON.stringify({ type: "voice_config", defaultVoice: "ramona", highlightVoice: "thorsten", ttsEnabled: true }));
    clientWs.send(JSON.stringify({ type: "voice_config", ttsEnabled: true, xttsVoice: "" }));
  }
} catch (err) {
  clientWs.send(JSON.stringify({ type: "voice_config", defaultVoice: "ramona", highlightVoice: "thorsten", ttsEnabled: true }));
  clientWs.send(JSON.stringify({ type: "voice_config", ttsEnabled: true, xttsVoice: "" }));
}
}

// ── Highlight triggers ─────────────────────────────────

// ── Highlight triggers (legacy UI; no longer evaluated since Piper was removed) ─
const TRIGGERS_FILE = "/shared/config/highlight_triggers.json";

async function handleGetTriggers(clientWs) {
  try {
    // Read from the shared volume first, then fall back to the bridge defaults
    let triggers;
    if (fs.existsSync(TRIGGERS_FILE)) {
      triggers = JSON.parse(fs.readFileSync(TRIGGERS_FILE, "utf-8"));
    } else {
      // Read the defaults from the bridge
      const result = await dockerExec("aria-bridge", `python3 -c "
import sys; sys.path.insert(0,'/app')
from aria_bridge import EPIC_TRIGGERS
print('\\n'.join(EPIC_TRIGGERS))
"`);
      triggers = result.trim().split("\n").filter(t => t);
    }
    const triggers = fs.existsSync(TRIGGERS_FILE)
      ? JSON.parse(fs.readFileSync(TRIGGERS_FILE, "utf-8"))
      : [];
    clientWs.send(JSON.stringify({ type: "trigger_list", triggers }));
  } catch (err) {
    clientWs.send(JSON.stringify({ type: "trigger_list", triggers: [], error: err.message }));
@@ -1542,74 +1597,40 @@ print('\\n'.join(EPIC_TRIGGERS))

async function handleSaveTriggers(clientWs, triggers) {
  try {
    // Store in the shared volume (readable by the bridge)
    fs.mkdirSync("/shared/config", { recursive: true });
    fs.writeFileSync(TRIGGERS_FILE, JSON.stringify(triggers, null, 2));
    log("info", "server", `${triggers.length} Highlight-Trigger gespeichert`);
    // Notify the bridge (loaded on its next start)
    clientWs.send(JSON.stringify({ type: "trigger_list", triggers }));
  } catch (err) {
    log("error", "server", `Trigger speichern fehlgeschlagen: ${err.message}`);
  }
}

// ── TTS diagnostics ──────────────────────────────────────
async function handleTestTTS(clientWs, voice, text) {
// ── TTS diagnostics (XTTS) ───────────────────────────────
async function handleTestTTS(clientWs, text) {
  try {
    log("info", "server", `TTS-Test: ${voice} — "${text}"`);
    const result = await dockerExec("aria-bridge", `python3 -c "
import time, sys
sys.path.insert(0, '/app')
from piper import PiperVoice
import wave, tempfile, os
voices = {'ramona': '/voices/de_DE-ramona-low.onnx', 'thorsten': '/voices/de_DE-thorsten-high.onnx'}
path = voices.get('${voice}')
if not path or not os.path.exists(path):
    print('FEHLER: Stimme nicht gefunden')
    sys.exit(1)
v = PiperVoice.load(path)
start = time.time()
tmp = tempfile.NamedTemporaryFile(suffix='.wav', delete=False)
with wave.open(tmp.name, 'wb') as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(v.config.sample_rate)
    v.synthesize('${text.replace(/'/g, "\\'")}', wf)
size = os.path.getsize(tmp.name)
dur = int((time.time() - start) * 1000)
os.unlink(tmp.name)
print(f'OK:{dur}:{size}')
"`);
    const parts = result.trim().split(":");
    if (parts[0] === "OK") {
      clientWs.send(JSON.stringify({ type: "tts_result", ok: true, voice, duration: parts[1], size: parts[2] }));
    } else {
      clientWs.send(JSON.stringify({ type: "tts_result", ok: false, voice, error: result.trim() }));
    }
    log("info", "server", `TTS-Test via XTTS: "${text}"`);
    // Via RVS to the XTTS bridge: xtts_request with the test text
    const requestId = crypto.randomUUID();
    sendToRVS_raw({
      type: "xtts_request",
      payload: { text, language: "de", requestId, voice: "" },
      timestamp: Date.now(),
    });
    clientWs.send(JSON.stringify({ type: "tts_result", ok: true, duration: "pending", size: "?" }));
  } catch (err) {
    clientWs.send(JSON.stringify({ type: "tts_result", ok: false, voice, error: err.message }));
    clientWs.send(JSON.stringify({ type: "tts_result", ok: false, error: err.message }));
  }
}

async function handleCheckTTS(clientWs) {
  try {
    const result = await dockerExec("aria-bridge", `python3 -c "
import os, json
voices = {}
for name, path in [('ramona', '/voices/de_DE-ramona-low.onnx'), ('thorsten', '/voices/de_DE-thorsten-high.onnx')]:
    voices[name] = os.path.exists(path)
print(json.dumps(voices))
"`);
    const voices = JSON.parse(result.trim());
    const available = Object.entries(voices).filter(([,v]) => v).map(([k]) => k);
    const missing = Object.entries(voices).filter(([,v]) => !v).map(([k]) => k);
    // Query the XTTS status via RVS (xtts_list_voices)
    sendToRVS_raw({ type: "xtts_list_voices", payload: {}, timestamp: Date.now() });
    clientWs.send(JSON.stringify({
      type: "tts_status",
      ok: missing.length === 0,
      voices: available,
      defaultVoice: "ramona",
      highlightVoice: "thorsten",
      error: missing.length > 0 ? `Fehlend: ${missing.join(", ")}` : null,
      ok: true,
      error: null,
    }));
  } catch (err) {
    clientWs.send(JSON.stringify({ type: "tts_status", ok: false, error: err.message }));

@@ -72,7 +72,6 @@ services:
      - aria
    network_mode: "service:aria" # shares the network with aria-core → localhost:18789
    volumes:
      - ./aria-data/voices:/voices:ro # TTS voices
      - ./aria-data/config/aria.env:/config/aria.env
      - aria-shared:/shared # shared volume for file exchange (bridge <> core)
    # Audio access

@@ -1,32 +0,0 @@
#!/bin/bash
# ════════════════════════════════════════════════
# ARIA — download the Piper voices
# Ramona (everyday) + Thorsten (epic moments)
# ════════════════════════════════════════════════

set -e

VOICES_DIR="aria-data/voices"
BASE_URL="https://huggingface.co/rhasspy/piper-voices/resolve/main/de/de_DE"

mkdir -p "$VOICES_DIR"
cd "$VOICES_DIR"

echo "Lade ARIA Stimmen..."
echo ""

echo "[1/4] Ramona (Modell)..."
wget -q --show-progress "$BASE_URL/ramona/low/de_DE-ramona-low.onnx"

echo "[2/4] Ramona (Config)..."
wget -q --show-progress "$BASE_URL/ramona/low/de_DE-ramona-low.onnx.json"

echo "[3/4] Thorsten (Modell)..."
wget -q --show-progress "$BASE_URL/thorsten/high/de_DE-thorsten-high.onnx"

echo "[4/4] Thorsten (Config)..."
wget -q --show-progress "$BASE_URL/thorsten/high/de_DE-thorsten-high.onnx.json"

echo ""
echo "Stimmen geladen!"
ls -lh *.onnx
@@ -37,6 +37,8 @@
- [x] App: "ARIA denkt..." indicator + cancel button (bridge mirrors agent_activity via RVS)
- [x] Whisper STT: model selection in Diagnostic (tiny/base/small/medium/large-v3), hot reload in the bridge, default now medium
- [x] App: audio recording explicitly 16 kHz mono (saves a resample, optimal for Whisper)
- [x] Streaming TTS (path A): XTTS → PCM stream → aria-bridge → app AudioTrack MODE_STREAM, no more WAV gaps
- [x] Piper removed entirely: XTTS v2 is the only remaining TTS engine (remote, GPU on the gaming PC). If XTTS is offline, ARIA is mute; a deliberately accepted trade-off.
- [x] Conversation mode: stricter speech gate (-28 dB / 500 ms), ambient noise no longer triggers it
- [x] Conversation mode: max duration 30 s per recording, cache cleanup of old files, messages array capped (500)
- [x] Diagnostic: archived session versions (.reset.*) are listed + exportable. OpenClaw resets sessions on first use after a container restart, but the content is preserved in .reset.<timestamp> files
@@ -65,11 +67,7 @@
- [ ] QR code onboarding: Diagnostic generates a QR with the RVS credentials, the app scans it; no more manual entry

### TTS / Audio
- [ ] XTTS audio streaming (PCM stream instead of WAV files, eliminates stutter entirely)
- [ ] Audio normalization (even out loudness between chunks)
- [ ] Piper voice download via Diagnostic (new languages/voices)
- [ ] TTS text preparation: filter out code blocks, spell out units ("22GB" → "zweiundzwanzig Gigabyte"). Two variants conceivable: (a) server-side cleanup in the bridge, (b) ARIA writes a `<voice></voice>` block that stays hidden in the UI but is used for TTS.
- [ ] Possibly remove Piper entirely (sounds poor vs. XTTS), or keep it only as a fallback when XTTS is offline

### Architecture
- [ ] Images: use Claude Vision directly (currently only the file path is passed to ARIA)

@@ -17,6 +17,10 @@ const ALLOWED_TYPES = new Set([
  "xtts_request", "xtts_response", "xtts_list_voices", "xtts_voices_list", "voice_upload", "xtts_voice_saved",
  "update_check", "update_available", "update_download", "update_data",
  "agent_activity", "cancel_request",
  "audio_pcm",
  "xtts_delete_voice",
  "voice_preload", "voice_ready",
  "stt_request", "stt_response",
]);

// Token rooms: token -> { clients: Set<ws> }

+373 -89
@@ -67,6 +67,20 @@ function connectRVS(forcePlain) {
        await handleVoiceUpload(msg.payload);
      } else if (msg.type === "xtts_list_voices") {
        await handleListVoices();
      } else if (msg.type === "xtts_delete_voice") {
        await handleDeleteVoice(msg.payload);
      } else if (msg.type === "voice_preload") {
        await handleVoicePreload(msg.payload);
      } else if (msg.type === "config") {
        // Diagnostic hat globale Voice gewechselt → Preload damit der naechste
        // Render ohne Ladewartezeit startet + alle Clients "voice_ready" sehen
        const v = msg.payload && msg.payload.xttsVoice;
        if (v && v !== lastDiagnosticVoice) {
          lastDiagnosticVoice = v;
          await handleVoicePreload({ voice: v, source: "diagnostic" });
        } else if (!v) {
          lastDiagnosticVoice = "";
        }
      }
    } catch (err) {
      log(`Fehler: ${err.message}`);
@@ -93,87 +107,148 @@ function connectRVS(forcePlain) {

// ── TTS Request Handler ─────────────────────────────

async function handleTTSRequest(payload) {
  const { text, voice, requestId, language } = payload;
/**
 * Linearer Fade-In auf einen base64-PCM-Chunk (s16le).
 * Maskiert XTTS-Warmup-Glitches am Anfang eines Renders.
 */
function applyFadeIn(base64Pcm, sampleRate, channels, fadeMs) {
  const buf = Buffer.from(base64Pcm, "base64");
  const totalSamples = buf.length / 2; // s16le
  const fadeSamples = Math.min(
    Math.floor((fadeMs / 1000) * sampleRate) * channels,
    totalSamples
  );
  for (let i = 0; i < fadeSamples; i++) {
    const sample = buf.readInt16LE(i * 2);
    const gain = i / fadeSamples;
    buf.writeInt16LE(Math.round(sample * gain), i * 2);
  }
  return buf.toString("base64");
}
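Die Fade-In-Rechnung aus `applyFadeIn` laesst sich isoliert nachvollziehen, hier als Python-Skizze auf rohem s16le-PCM mit derselben Logik: linearer Gain von 0 auf 1 ueber die ersten `fade_ms` Millisekunden.

```python
import struct

def apply_fade_in(pcm: bytes, sample_rate: int, channels: int, fade_ms: int) -> bytes:
    # s16le: 2 Bytes pro Sample; Fade ueber die ersten fade_ms Millisekunden,
    # gleiche Rechnung wie im Bridge-Code (fadeSamples = ms/1000 * rate * channels).
    samples = list(struct.unpack(f"<{len(pcm) // 2}h", pcm))
    fade_samples = min(int(fade_ms / 1000 * sample_rate) * channels, len(samples))
    for i in range(fade_samples):
        samples[i] = round(samples[i] * (i / fade_samples))
    return struct.pack(f"<{len(samples)}h", *samples)
```

Bei `fade_ms=0` laeuft die Schleife nicht, der Buffer bleibt unveraendert; der erste Sample eines Fades ist immer 0 (Gain 0/fade_samples).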

// ── TTS-Queue ──────────────────────────────────────
// XTTS verarbeitet Requests sequenziell, damit Streams sich nicht ueberlappen.
// Ohne Queue wuerden parallele Requests parallel streamen → App bekommt
// interleaved PCM-Chunks aus zwei Rendern → klingt wie Chaos.
let ttsQueue = Promise.resolve();

// Merkt sich die letzte in Diagnostic gewaehlte Voice, damit wir nicht bei jedem
// config-Broadcast (auch ohne Aenderung) einen Preload triggern.
let lastDiagnosticVoice = "";

function handleTTSRequest(payload) {
  ttsQueue = ttsQueue.then(() => _runTTSRequest(payload)).catch(err => {
    log(`TTS-Queue Fehler: ${err.message}`);
  });
  return ttsQueue;
}
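Das Muster hinter `ttsQueue` (jeder Job kettet sich an den Vorgaenger, ein Fehler reisst die Kette nicht ab) ist sprachunabhaengig; hier als asyncio-Nachbau skizziert. Klassen- und Methodennamen sind frei gewaehlt und nicht Teil des Bridge-Codes.

```python
import asyncio

class SerialQueue:
    """Gleiche Idee wie ttsQueue: Jobs laufen strikt nacheinander,
    auch wenn sie gleichzeitig eingereicht werden."""

    def __init__(self) -> None:
        self._tail = None  # Future des zuletzt eingereihten Jobs

    async def run(self, coro):
        loop = asyncio.get_running_loop()
        prev, self._tail = self._tail, loop.create_future()
        me = self._tail
        if prev is not None:
            await prev             # auf den Vorgaenger warten
        try:
            return await coro      # eigentlicher Job
        finally:
            me.set_result(None)    # naechsten Job freigeben, auch bei Fehler
```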

async function _runTTSRequest(payload) {
  const { text, voice, requestId, language, messageId } = payload;
  if (!text) return;

  // Markdown + Sonderzeichen entfernen fuer natuerliche Sprache
  // Markdown-Cleanup (Bridge macht jetzt auch Cleanup, aber safety net)
  let cleanText = text
    .replace(/\*\*([^*]+)\*\*/g, "$1")       // **fett** → fett
    .replace(/\*([^*]+)\*/g, "$1")           // *kursiv* → kursiv
    .replace(/`([^`]+)`/g, "$1")             // `code` → code
    .replace(/```[\s\S]*?```/g, "")          // Code-Bloecke entfernen
    .replace(/\[([^\]]+)\]\([^)]+\)/g, "$1") // [text](url) → text
    .replace(/#{1,6}\s*/g, "")               // ### Ueberschriften → entfernen
    .replace(/>\s*/g, "")                    // > Zitate → entfernen
    .replace(/[-*]\s+/g, "")                 // - Listen → entfernen
    .replace(/\n{2,}/g, ". ")                // Mehrere Newlines → Punkt
    .replace(/\n/g, ", ")                    // Einzelne Newlines → Komma
    .replace(/\s{2,}/g, " ")                 // Mehrfach-Leerzeichen
    .replace(/["""„]/g, "")                  // Anfuehrungszeichen entfernen
    .replace(/\(\)/g, "")                    // Leere Klammern
    .replace(/\*\*([^*]+)\*\*/g, "$1")
    .replace(/\*([^*]+)\*/g, "$1")
    .replace(/`([^`]+)`/g, "$1")
    .replace(/```[\s\S]*?```/g, "")
    .replace(/\[([^\]]+)\]\([^)]+\)/g, "$1")
    .replace(/#{1,6}\s*/g, "")
    .replace(/>\s*/g, "")
    .replace(/[-*]\s+/g, "")
    .replace(/\n{2,}/g, ". ")
    .replace(/\n/g, ", ")
    .replace(/\s{2,}/g, " ")
    .replace(/["""„]/g, "")
    .replace(/\(\)/g, "")
    .trim();

  // Text in Saetze aufteilen, dann zu Chunks von 2-3 Saetzen zusammenfassen
  // (mehr Kontext = konsistentere Stimme/Lautstaerke, aber nicht zu lang fuer WebSocket)
  const sentences = cleanText.split(/(?<=[.!?])\s+/)
    .map(s => s.trim())
    .filter(s => s.length > 0)
    .map(s => s.replace(/[.]+$/, '')); // Punkt am Ende entfernen

  const MAX_CHUNK_CHARS = 150; // Max ~150 Zeichen pro Chunk (schnelles Rendering, Preloading reicht)
  const chunks = [];
  let currentChunk = '';
  for (const sentence of sentences) {
    if (currentChunk && (currentChunk.length + sentence.length + 2) > MAX_CHUNK_CHARS) {
      chunks.push(currentChunk);
      currentChunk = sentence;
    } else {
      currentChunk = currentChunk ? currentChunk + ', ' + sentence : sentence;
    }
  }
  if (currentChunk) chunks.push(currentChunk);
  if (chunks.length === 0) return;
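Die Greedy-Chunking-Regel oben (Saetze splitten, Endpunkte strippen, bis max. 150 Zeichen auffuellen) laesst sich isoliert skizzieren, hier als Python-Nachbau mit derselben Logik:

```python
import re

def chunk_sentences(clean_text: str, max_chars: int = 150) -> list[str]:
    # Saetze splitten (Lookbehind auf .!?), Endpunkte strippen, dann greedy
    # mit ", " auffuellen bis max_chars, gleiche Regel wie im Bridge-Code.
    sentences = [s.strip().rstrip(".")
                 for s in re.split(r"(?<=[.!?])\s+", clean_text) if s.strip()]
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 2 > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current}, {s}" if current else s
    if current:
        chunks.append(current)
    return chunks

chunk_sentences("Eins. Zwei. Drei.", 10)  # -> ['Eins, Zwei', 'Drei']
```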

  log(`TTS-Request: "${cleanText.slice(0, 60)}..." (${sentences.length} Saetze → ${chunks.length} Chunks, voice: ${voice || "default"}, lang: ${language || "de"})`);
  log(`TTS-Request (streaming): "${cleanText.slice(0, 80)}..." (${cleanText.length} chars, voice: ${voice || "default"})`);

  try {
    const voiceSample = voice ? path.join(VOICES_DIR, `${voice}.wav`) : null;
    const hasCustomVoice = voiceSample && fs.existsSync(voiceSample);

    // Streaming: Chunk rendern → sofort senden → naechster Chunk
    // App spielt mit Preloading-Queue nahtlos ab
    let sentCount = 0;

    for (let i = 0; i < chunks.length; i++) {
      const chunk = chunks[i];
      try {
        const audioBuffer = await callXTTSAPI(chunk, language || "de", hasCustomVoice ? voiceSample : null);

        if (audioBuffer && audioBuffer.length > 100) {
          log(`TTS [${i + 1}/${chunks.length}]: ${(audioBuffer.length / 1024).toFixed(0)}KB — "${chunk.slice(0, 50)}"`);

          sendToRVS({
            type: "xtts_response",
            payload: {
              requestId: `${requestId || ""}_${i}`,
              base64: audioBuffer.toString("base64"),
              mimeType: "audio/wav",
              voice: voice || "default",
              engine: "xtts",
              part: i + 1,
              totalParts: chunks.length,
            },
            timestamp: Date.now(),
          });
          sentCount++;
        }
      } catch (chunkErr) {
        log(`TTS [${i + 1}/${chunks.length}] Fehler: ${chunkErr.message} — ueberspringe`);
      }
    // Im local-Mode erwartet daswer123 XTTS speaker_wav als Basename (ohne .wav,
    // ohne Pfad) — der Server prefixt EXAMPLE_FOLDER selbst. Wir checken hier
    // nur das physische File ab um Warnungen zu loggen; runter an die API geht
    // nur der Name.
    const voiceFilePath = voice ? path.join(VOICES_DIR, `${voice}.wav`) : null;
    const hasCustomVoice = voiceFilePath && fs.existsSync(voiceFilePath);
    const speakerName = hasCustomVoice ? voice : "";
    if (voice && !hasCustomVoice) {
      log(`WARNUNG: Voice "${voice}" angefordert, aber ${voiceFilePath} existiert nicht — nehme Default`);
    } else if (hasCustomVoice) {
      log(`Voice "${voice}" verwendet (speaker_wav="${speakerName}")`);
    }

    log(`TTS komplett: ${sentCount}/${chunks.length} Chunks gestreamt`);
    let chunkIndex = 0;
    let pcmMeta = null;
    let firstChunkSeen = false;

    const onChunk = (pcmBase64, meta) => {
      if (!pcmMeta) pcmMeta = meta;
      let outBase64 = pcmBase64;
      // Fade-In auf den ersten Chunk — maskiert XTTS-Warmup-Glitches
      // (autoregressiver Generator hat am Anfang wenig Kontext → Artefakte).
      if (!firstChunkSeen && pcmBase64) {
        firstChunkSeen = true;
        outBase64 = applyFadeIn(pcmBase64, meta.sampleRate, meta.channels, 120);
      }
      sendToRVS({
        type: "audio_pcm",
        payload: {
          requestId: requestId || "",
          messageId: messageId || "",
          base64: outBase64,
          format: "pcm_s16le",
          sampleRate: meta.sampleRate,
          channels: meta.channels,
          voice: voice || "default",
          chunk: chunkIndex++,
          final: false,
        },
        timestamp: Date.now(),
      });
    };

    // /tts_stream fuer echtes Streaming (funktioniert im XTTS local-Mode).
    // Wenn Server im apiManual/api-Mode laeuft: 400 → Fallback auf /tts_to_audio/.
    try {
      await streamXTTSAsPCM(
        cleanText,
        language || "de",
        speakerName,
        onChunk,
      );
    } catch (streamErr) {
      log(`/tts_stream fehlgeschlagen (${streamErr.message.slice(0, 100)}) — Fallback /tts_to_audio/`);
      await streamXTTSBatch(
        cleanText,
        language || "de",
        speakerName,
        onChunk,
      );
    }

    // Am Ende: final-Flag damit App weiss "fertig" und Cache geschrieben werden kann
    if (pcmMeta) {
      sendToRVS({
        type: "audio_pcm",
        payload: {
          requestId: requestId || "",
          messageId: messageId || "",
          base64: "",
          format: "pcm_s16le",
          sampleRate: pcmMeta.sampleRate,
          channels: pcmMeta.channels,
          voice: voice || "default",
          chunk: chunkIndex++,
          final: true,
        },
        timestamp: Date.now(),
      });
    }

    log(`TTS komplett: ${chunkIndex} PCM-Frames gestreamt (${cleanText.length} chars)`);
  } catch (err) {
    log(`TTS Fehler: ${err.message}`);
    sendToRVS({
@@ -184,19 +259,120 @@ async function handleTTSRequest(payload) {
  }
}

function callXTTSAPI(text, language, speakerWav) {
/**
 * Ruft /tts_stream auf — echter Streaming-Endpoint bei daswer123.
 * POST mit JSON-Body scheitert mit 405 (der Server erlaubt nur GET).
 * Manche Versionen wollen GET + Query, andere POST + JSON; testen, was funktioniert.
 */
function streamXTTSAsPCM(text, language, speakerWav, onPcmChunk) {
  return new Promise((resolve, reject) => {
    // Wichtig: speaker_wav MUSS als Query-Key dabei sein (Pydantic required) —
    // auch bei default-voice mit leerem Wert. Sonst gibt's HTTP 422.
    // stream_chunk_size=250: grosse Chunks = wenige Chunk-Grenzen = wenig
    // Render-Artefakte. daswer123 erzeugt an Chunk-Boundaries haeufig Glitches
    // in den Worten die ueber die Grenze gehen. Hoehere Latenz ist OK.
    const qs = new URLSearchParams();
    qs.set("text", text);
    qs.set("language", language || "de");
    qs.set("speaker_wav", speakerWav || "");
    qs.set("stream_chunk_size", "250");

    const url = new URL(XTTS_API_URL);
    const fullPath = `/tts_stream?${qs.toString()}`;
    const options = {
      hostname: url.hostname,
      port: url.port || 80,
      path: fullPath,
      method: "GET",
      timeout: 60000,
    };

    log(`TTS GET /tts_stream?text=${text.slice(0, 30)}... (voice=${speakerWav ? "custom" : "default"})`);

    const req = http.request(options, (res) => {
      if (res.statusCode !== 200) {
        let body = "";
        res.on("data", (d) => { body += d.toString(); });
        res.on("end", () => {
          log(`XTTS /tts_stream ${res.statusCode}: ${body.slice(0, 300)}`);
          reject(new Error(`XTTS HTTP ${res.statusCode}: ${body.slice(0, 200)}`));
        });
        return;
      }
      log(`TTS stream verbunden, empfange PCM...`);

      let headerParsed = false;
      let sampleRate = 24000;
      let channels = 1;
      let leftover = Buffer.alloc(0); // ungerade Byte-Reste fuer den naechsten Chunk
      const HEADER_BYTES = 44;
      let headerBuf = Buffer.alloc(0);
      const PCM_CHUNK_BYTES = 8192; // ~170ms bei 24kHz s16 mono

      res.on("data", (chunk) => {
        let data = chunk;

        // WAV-Header konsumieren (44 Bytes)
        if (!headerParsed) {
          headerBuf = Buffer.concat([headerBuf, data]);
          if (headerBuf.length < HEADER_BYTES) return;
          // Header lesen
          const header = headerBuf.slice(0, HEADER_BYTES);
          try {
            channels = header.readUInt16LE(22);
            sampleRate = header.readUInt32LE(24);
          } catch (_) {}
          headerParsed = true;
          data = headerBuf.slice(HEADER_BYTES);
        }

        // leftover aus vorherigem Chunk + neuer data
        let combined = Buffer.concat([leftover, data]);

        // In PCM_CHUNK_BYTES-Happen zerlegen (Vielfache von 2 damit keine Sample-Splits)
        while (combined.length >= PCM_CHUNK_BYTES) {
          const slice = combined.slice(0, PCM_CHUNK_BYTES);
          combined = combined.slice(PCM_CHUNK_BYTES);
          onPcmChunk(slice.toString("base64"), { sampleRate, channels });
        }
        leftover = combined;
      });

      res.on("end", () => {
        // Rest-Daten senden
        if (leftover.length > 0) {
          onPcmChunk(leftover.toString("base64"), { sampleRate, channels });
        }
        resolve();
      });

      res.on("error", reject);
    });

    req.on("error", reject);
    req.on("timeout", () => { req.destroy(); reject(new Error("XTTS API Timeout (60s)")); });
    req.end();
  });
}
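`streamXTTSAsPCM` nimmt einen kanonischen 44-Byte-WAV-Header an: Kanalzahl bei Offset 22 (`readUInt16LE(22)`), Sample-Rate bei Offset 24 (`readUInt32LE(24)`). Die Offset-Rechnung laesst sich mit einer kleinen Python-Skizze pruefen; `make_wav_header` ist nur ein Test-Helfer und nicht Teil des Bridge-Codes.

```python
import struct

def parse_wav_header(header: bytes) -> tuple[int, int]:
    """Liest (channels, sample_rate) aus einem kanonischen 44-Byte-WAV-Header,
    gleiche Offsets wie im Bridge-Code. Achtung: gilt nur, wenn vor dem
    'data'-Chunk keine Extra-Chunks stehen."""
    channels = struct.unpack_from("<H", header, 22)[0]
    sample_rate = struct.unpack_from("<I", header, 24)[0]
    return channels, sample_rate

def make_wav_header(sample_rate: int, channels: int, data_len: int) -> bytes:
    # Minimaler kanonischer PCM-s16le-Header (RIFF + fmt + data), 44 Bytes
    byte_rate = sample_rate * channels * 2
    return (b"RIFF" + struct.pack("<I", 36 + data_len) + b"WAVE"
            + b"fmt " + struct.pack("<IHHIIHH", 16, 1, channels, sample_rate,
                                    byte_rate, channels * 2, 16)
            + b"data" + struct.pack("<I", data_len))
```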

/**
 * Fallback: /tts_to_audio/ (POST JSON) — rendert komplett, dann response.
 * Kein echtes Streaming, aber stabil als Backup wenn /tts_stream nicht geht.
 * Shared Chunking-Logik mit streamXTTSAsPCM — parst WAV-Header, stueckelt PCM.
 */
function streamXTTSBatch(text, language, speakerWav, onPcmChunk) {
  return new Promise((resolve, reject) => {
    const body = JSON.stringify({
      text,
      language,
      language: language || "de",
      speaker_wav: speakerWav || "",
    });

    const url = new URL(`${XTTS_API_URL}/tts_to_audio/`);
    const url = new URL(XTTS_API_URL);
    const options = {
      hostname: url.hostname,
      port: url.port,
      path: url.pathname,
      port: url.port || 80,
      path: "/tts_to_audio/",
      method: "POST",
      headers: {
        "Content-Type": "application/json",
@@ -206,19 +382,46 @@ function callXTTSAPI(text, language, speakerWav) {
    };

    const req = http.request(options, (res) => {
      const chunks = [];
      res.on("data", (chunk) => chunks.push(chunk));
      res.on("end", () => {
        if (res.statusCode === 200) {
          resolve(Buffer.concat(chunks));
        } else {
          reject(new Error(`XTTS API HTTP ${res.statusCode}: ${Buffer.concat(chunks).toString().slice(0, 200)}`));
        }
      });
    });
      if (res.statusCode !== 200) {
        let rb = "";
        res.on("data", (d) => { rb += d.toString(); });
        res.on("end", () => reject(new Error(`XTTS Batch HTTP ${res.statusCode}: ${rb.slice(0, 200)}`)));
        return;
      }
      let headerParsed = false;
      let sampleRate = 24000;
      let channels = 1;
      let leftover = Buffer.alloc(0);
      let headerBuf = Buffer.alloc(0);
      const HEADER_BYTES = 44;
      const PCM_CHUNK_BYTES = 8192;

      res.on("data", (chunk) => {
        let data = chunk;
        if (!headerParsed) {
          headerBuf = Buffer.concat([headerBuf, data]);
          if (headerBuf.length < HEADER_BYTES) return;
          const header = headerBuf.slice(0, HEADER_BYTES);
          try { channels = header.readUInt16LE(22); sampleRate = header.readUInt32LE(24); } catch (_) {}
          headerParsed = true;
          data = headerBuf.slice(HEADER_BYTES);
        }
        let combined = Buffer.concat([leftover, data]);
        while (combined.length >= PCM_CHUNK_BYTES) {
          const slice = combined.slice(0, PCM_CHUNK_BYTES);
          combined = combined.slice(PCM_CHUNK_BYTES);
          onPcmChunk(slice.toString("base64"), { sampleRate, channels });
        }
        leftover = combined;
      });
      res.on("end", () => {
        if (leftover.length > 0) onPcmChunk(leftover.toString("base64"), { sampleRate, channels });
        resolve();
      });
      res.on("error", reject);
    });
    req.on("error", reject);
    req.on("timeout", () => { req.destroy(); reject(new Error("XTTS API Timeout (60s)")); });
    req.on("timeout", () => { req.destroy(); reject(new Error("XTTS Batch Timeout (60s)")); });
    req.write(body);
    req.end();
  });
@@ -257,8 +460,89 @@ async function handleVoiceUpload(payload) {
  }
}

// ── Voice Delete Handler ────────────────────────────

async function handleDeleteVoice(payload) {
  const { name } = payload || {};
  if (!name || typeof name !== "string") {
    log("Voice Delete: ungueltiger Name");
    return;
  }
  const safe = name.replace(/[^a-zA-Z0-9_-]/g, "_");
  const filePath = path.join(VOICES_DIR, `${safe}.wav`);
  try {
    if (fs.existsSync(filePath)) {
      fs.unlinkSync(filePath);
      log(`Voice geloescht: ${filePath}`);
    } else {
      log(`Voice Delete: Datei existiert nicht (${filePath})`);
    }
    // Aktualisierte Liste an alle Clients senden
    await handleListVoices();
  } catch (err) {
    log(`Voice Delete Fehler: ${err.message}`);
  }
}

// ── Voice List Handler ──────────────────────────────

/**
 * Preload einer Stimme — rendert stumm ein kurzes Dummy-Audio, damit XTTS
 * die Speaker-Latents laedt und der naechste echte Request ohne Wartezeit
 * loslegen kann. Broadcastet "voice_ready" wenn fertig (oder mit error).
 */
async function handleVoicePreload(payload) {
  const voice = (payload && payload.voice) || "";
  const source = (payload && payload.source) || "unknown";
  const requestId = (payload && payload.requestId) || "";
  log(`Voice-Preload angefordert: "${voice}" (source=${source})`);

  try {
    let speakerName = "";
    if (voice) {
      const voiceFilePath = path.join(VOICES_DIR, `${voice}.wav`);
      if (!fs.existsSync(voiceFilePath)) {
        sendToRVS({
          type: "voice_ready",
          payload: { voice, requestId, error: "voice-file-not-found" },
          timestamp: Date.now(),
        });
        log(`Preload abgebrochen: ${voiceFilePath} existiert nicht`);
        return;
      }
      speakerName = voice;
    }

    // Dummy-Request via Queue — damit sich Preload nicht mit echtem TTS ueberholt.
    const t0 = Date.now();
    await new Promise((resolve, reject) => {
      ttsQueue = ttsQueue.then(async () => {
        try {
          await streamXTTSAsPCM("ja.", "de", speakerName, () => {});
          resolve();
        } catch (err) {
          reject(err);
        }
      }).catch(reject);
    });
    const ms = Date.now() - t0;
    log(`Voice "${voice || "default"}" geladen in ${ms}ms`);

    sendToRVS({
      type: "voice_ready",
      payload: { voice, requestId, loadMs: ms },
      timestamp: Date.now(),
    });
  } catch (err) {
    log(`Voice-Preload Fehler: ${err.message}`);
    sendToRVS({
      type: "voice_ready",
      payload: { voice, requestId, error: err.message.slice(0, 200) },
      timestamp: Date.now(),
    });
  }
}

async function handleListVoices() {
  try {
    const files = fs.existsSync(VOICES_DIR)
@@ -33,6 +33,12 @@ services:
      - ./voices:/voices # Custom Voice Samples
    environment:
      - COQUI_TOS_AGREED=1
      # Local-Modus statt default "apiManual": Modell bleibt im GPU-VRAM,
      # Render startet sofort, /tts_stream funktioniert.
      # Default-CMD des Images liest diese ENV: -ms ${MODEL_SOURCE:-"apiManual"}
      - MODEL_SOURCE=local
      # Speaker-Folder auf unsere gemounteten voices zeigen lassen
      - EXAMPLE_FOLDER=/voices
    restart: unless-stopped

  # ─── XTTS Bridge (verbindet zu RVS) ───────────
@@ -52,5 +58,37 @@ services:
      - RVS_TOKEN=${RVS_TOKEN}
    restart: unless-stopped

  # ─── Whisper STT (GPU) ────────────────────────
  # Faster-Whisper auf der Gamebox statt auf der VM (CPU) —
  # deutlich schneller. Verbindet sich selbst per WebSocket an
  # den RVS und nimmt dort stt_request Nachrichten der aria-bridge
  # entgegen, antwortet mit stt_response. Laedt das Modell beim
  # Start vor; auf Config-Broadcasts (Diagnostic → whisperModel)
  # wird zur Laufzeit hot-swapped.
  whisper-bridge:
    build: ./whisper
    container_name: aria-whisper-bridge
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    environment:
      - RVS_HOST=${RVS_HOST}
      - RVS_PORT=${RVS_PORT:-443}
      - RVS_TLS=${RVS_TLS:-true}
      - RVS_TLS_FALLBACK=${RVS_TLS_FALLBACK:-true}
      - RVS_TOKEN=${RVS_TOKEN}
      - WHISPER_MODEL=${WHISPER_MODEL:-small}
      - WHISPER_DEVICE=${WHISPER_DEVICE:-cuda}
      - WHISPER_COMPUTE_TYPE=${WHISPER_COMPUTE_TYPE:-float16}
      - WHISPER_LANGUAGE=${WHISPER_LANGUAGE:-de}
    volumes:
      - whisper-models:/root/.cache/huggingface # Model-Cache persistieren
    restart: unless-stopped

volumes:
  xtts-models:
  whisper-models:
@@ -0,0 +1,14 @@
FROM nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
    python3 python3-pip ffmpeg \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

COPY bridge.py .

CMD ["python3", "bridge.py"]
@@ -0,0 +1,247 @@
#!/usr/bin/env python3
"""
ARIA Whisper Bridge — laeuft auf der Gamebox (RTX 3060).

Empfaengt stt_request via RVS → FFmpeg-Konvertierung → faster-whisper auf GPU
→ sendet stt_response zurueck an die aria-bridge.

Env:
  RVS_HOST, RVS_PORT, RVS_TLS, RVS_TLS_FALLBACK, RVS_TOKEN
  WHISPER_MODEL         Default: small
  WHISPER_DEVICE        Default: cuda
  WHISPER_COMPUTE_TYPE  Default: float16
  WHISPER_LANGUAGE      Default: de
"""
import asyncio
import base64
import json
import logging
import os
import subprocess
import sys
import tempfile
import time
from typing import Optional

import numpy as np
import websockets
from faster_whisper import WhisperModel

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    datefmt="%H:%M:%S",
)
logger = logging.getLogger("whisper-bridge")

RVS_HOST = os.getenv("RVS_HOST", "").strip()
RVS_PORT = int(os.getenv("RVS_PORT", "443"))
RVS_TLS = os.getenv("RVS_TLS", "true").lower() == "true"
RVS_TLS_FALLBACK = os.getenv("RVS_TLS_FALLBACK", "true").lower() == "true"
RVS_TOKEN = os.getenv("RVS_TOKEN", "").strip()

WHISPER_MODEL = os.getenv("WHISPER_MODEL", "small")
WHISPER_DEVICE = os.getenv("WHISPER_DEVICE", "cuda")
WHISPER_COMPUTE_TYPE = os.getenv("WHISPER_COMPUTE_TYPE", "float16")
WHISPER_LANGUAGE = os.getenv("WHISPER_LANGUAGE", "de")

ALLOWED_MODELS = {"tiny", "base", "small", "medium", "large-v3"}


class WhisperRunner:
    """Haelt das Whisper-Modell. Hot-Swap bei Konfig-Wechsel via ensure_loaded()."""

    def __init__(self) -> None:
        self.model_size: str = WHISPER_MODEL
        self.model: Optional[WhisperModel] = None
        self._lock = asyncio.Lock()

    def _load_blocking(self, size: str) -> None:
        logger.info(
            "Lade Whisper '%s' (device=%s, compute=%s)",
            size, WHISPER_DEVICE, WHISPER_COMPUTE_TYPE,
        )
        t0 = time.time()
        self.model = WhisperModel(
            size, device=WHISPER_DEVICE, compute_type=WHISPER_COMPUTE_TYPE,
        )
        self.model_size = size
        logger.info("Whisper '%s' geladen in %.1fs", size, time.time() - t0)

    async def ensure_loaded(self, desired_size: str) -> None:
        if desired_size not in ALLOWED_MODELS:
            logger.warning("Ungueltiges Whisper-Modell '%s' — nutze %s", desired_size, WHISPER_MODEL)
            desired_size = WHISPER_MODEL
        async with self._lock:
            if self.model is not None and self.model_size == desired_size:
                return
            loop = asyncio.get_event_loop()
            await loop.run_in_executor(None, self._load_blocking, desired_size)

    async def transcribe(self, audio: np.ndarray, language: str) -> tuple[str, float]:
        if self.model is None:
            return "", 0.0

        def _run():
            segments, info = self.model.transcribe(
                audio, language=language, beam_size=5, vad_filter=True,
            )
            text = " ".join(seg.text.strip() for seg in segments)
            return text, info.duration

        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(None, _run)


def ffmpeg_to_float32(audio_b64: str, mime_type: str) -> np.ndarray:
    """Dekodiert beliebiges Audio-Format → 16kHz mono float32 PCM."""
    if "mp4" in mime_type or "m4a" in mime_type or "aac" in mime_type:
        ext = ".mp4"
    elif "wav" in mime_type:
        ext = ".wav"
    elif "ogg" in mime_type or "opus" in mime_type:
        ext = ".ogg"
    else:
        ext = ".bin"

    in_fh = tempfile.NamedTemporaryFile(suffix=ext, delete=False)
    try:
        in_fh.write(base64.b64decode(audio_b64))
        in_fh.close()
        out_path = in_fh.name + ".raw"
        cmd = ["ffmpeg", "-y", "-i", in_fh.name, "-ar", "16000", "-ac", "1", "-f", "f32le", out_path]
        result = subprocess.run(cmd, capture_output=True, timeout=30)
        if result.returncode != 0:
            logger.error("FFmpeg Fehler: %s", result.stderr.decode(errors="replace")[:300])
            return np.zeros(0, dtype=np.float32)
        try:
            return np.fromfile(out_path, dtype=np.float32)
        finally:
            try:
                os.unlink(out_path)
            except OSError:
                pass
    finally:
        try:
            os.unlink(in_fh.name)
        except OSError:
            pass


async def _send(ws, mtype: str, payload: dict) -> None:
    try:
        await ws.send(json.dumps({
            "type": mtype,
            "payload": payload,
            "timestamp": int(time.time() * 1000),
        }))
    except Exception as e:
        logger.warning("Send fehlgeschlagen (%s): %s", mtype, e)


async def handle_stt_request(ws, payload: dict, runner: WhisperRunner) -> None:
    request_id = payload.get("requestId", "")
    audio_b64 = payload.get("audio", "")
    mime_type = payload.get("mimeType", "audio/mp4")
    model = payload.get("model") or WHISPER_MODEL
    language = payload.get("language") or WHISPER_LANGUAGE

    if not audio_b64:
        await _send(ws, "stt_response", {"requestId": request_id, "error": "no-audio"})
        return

    try:
        t_load = time.time()
        await runner.ensure_loaded(model)
        load_ms = int((time.time() - t_load) * 1000)

        audio = ffmpeg_to_float32(audio_b64, mime_type)
        if audio.size == 0:
            await _send(ws, "stt_response", {"requestId": request_id, "error": "ffmpeg-failed"})
            return
        duration_s = len(audio) / 16000.0
        logger.info("STT-Request: %.1fs Audio, model=%s, lang=%s", duration_s, runner.model_size, language)

        t_stt = time.time()
        text, detected_duration = await runner.transcribe(audio, language)
        stt_ms = int((time.time() - t_stt) * 1000)

        logger.info("STT-Ergebnis (%dms): '%s'", stt_ms, text[:100])

        await _send(ws, "stt_response", {
            "requestId": request_id,
            "text": text.strip(),
            "durationS": duration_s,
            "sttMs": stt_ms,
            "loadMs": load_ms,
            "model": runner.model_size,
        })
    except Exception as e:
        logger.exception("STT-Request fehlgeschlagen")
        await _send(ws, "stt_response", {
            "requestId": request_id,
            "error": str(e)[:200],
        })


async def run_loop(runner: WhisperRunner) -> None:
    # Modell vorab laden damit erste Anfrage flott ist
    try:
        await runner.ensure_loaded(WHISPER_MODEL)
    except Exception as e:
        logger.error("Preload fehlgeschlagen: %s — Fortsetzung, wird bei erstem Request nachgeladen", e)

    use_tls = RVS_TLS
    retry_s = 2
    tls_fallback_tried = False

    while True:
        scheme = "wss" if use_tls else "ws"
        url = f"{scheme}://{RVS_HOST}:{RVS_PORT}/ws?token={RVS_TOKEN}"
        masked = url.replace(RVS_TOKEN, "***") if RVS_TOKEN else url
        try:
            logger.info("Verbinde zu RVS: %s", masked)
            async with websockets.connect(url, ping_interval=20, ping_timeout=10) as ws:
                logger.info("RVS verbunden")
                retry_s = 2
                tls_fallback_tried = False
                async for raw in ws:
                    try:
                        msg = json.loads(raw)
                    except Exception:
                        continue
                    mtype = msg.get("type", "")
                    payload = msg.get("payload", {}) or {}

                    if mtype == "stt_request":
                        asyncio.create_task(handle_stt_request(ws, payload, runner))
                    elif mtype == "config":
                        new_model = payload.get("whisperModel")
                        if new_model and new_model != runner.model_size:
                            logger.info("Config-Broadcast: Whisper-Modell → %s", new_model)
                            asyncio.create_task(runner.ensure_loaded(new_model))
                    # andere Types (chat, heartbeat, ...) einfach ignorieren
        except Exception as e:
            logger.warning("Verbindung verloren: %s", e)
            if use_tls and RVS_TLS_FALLBACK and not tls_fallback_tried:
                logger.info("TLS-Verbindung fehlgeschlagen — Fallback auf ws://")
                use_tls = False
                tls_fallback_tried = True
                continue
            await asyncio.sleep(min(retry_s, 30))
            retry_s = min(retry_s * 2, 30)


async def main() -> None:
    if not RVS_HOST:
        logger.error("RVS_HOST ist nicht gesetzt — Abbruch")
        sys.exit(1)
    runner = WhisperRunner()
    await run_loop(runner)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        sys.exit(0)
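Der Reconnect in run_loop nutzt exponentielles Backoff mit Deckelung bei 30 Sekunden. Die resultierende Wartezeiten-Folge laesst sich isoliert nachrechnen; die Funktion ist eine Skizze mit frei gewaehltem Namen, keine Hilfsfunktion des Bridge-Codes.

```python
def backoff_delays(n: int, start: int = 2, cap: int = 30) -> list[int]:
    """Gleiche Rechnung wie in run_loop: sleep(min(retry_s, 30)),
    danach retry_s = min(retry_s * 2, 30)."""
    delays, retry_s = [], start
    for _ in range(n):
        delays.append(min(retry_s, cap))
        retry_s = min(retry_s * 2, cap)
    return delays

backoff_delays(6)  # -> [2, 4, 8, 16, 30, 30]
```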
@@ -0,0 +1,3 @@
faster-whisper==1.0.3
websockets>=12.0
numpy>=1.24