Compare commits

53 commits

| SHA1 |
|---|
| b1ccf29295 |
| 4cd9faece2 |
| fec8aa977b |
| 20123de827 |
| 8761d1a1b7 |
| abc5b971f4 |
| b588dd7e3b |
| 309df9d851 |
| f2e643d1fb |
| 6ac374621c |
| efbd306597 |
| 4454613a98 |
| 55cfb752a2 |
| a4d3449e3a |
| 44d2c6b4fe |
| 0309c95aa5 |
| 2aa2cc70c9 |
| 9d0776c819 |
| f031fa159e |
| be373466a3 |
| bbf9aed3ba |
| 745b4a07c0 |
| 23ca815cb2 |
| cc3fac8142 |
| cd89e36ec2 |
| f5b4285d15 |
| 248e7c9ae4 |
| 7058cc8d8d |
| 7919489543 |
| feac7f2479 |
| b80b813703 |
| e7bb6c37cb |
| d146ca92c4 |
| fd95af2c40 |
| 9e12e0001c |
| 1d34143be5 |
| 0fc11e33c8 |
| dae603541b |
| 87b4cd305c |
| 190352820c |
| 2264f4e3bc |
| 58fd8721e3 |
| 4f494daffb |
| 958c8d6fc6 |
| 5ba89c7191 |
| b373f915b5 |
| 7748834a0f |
| 8b52f4c92b |
| dc20570f6d |
| 744a27cfd1 |
| 37c5f6c368 |
| a361015ff4 |
| d83b555209 |
README.md (76 changed lines)
@@ -380,6 +380,7 @@ API endpoint for other services: `GET http://localhost:3001/api/session`
 - Text chat with ARIA
 - **Voice recording**: push-to-talk (hold) or tap-to-talk (tap; auto-stop on silence)
 - **Conversation mode** (ear button): after every ARIA reply, recording starts automatically, like a natural back-and-forth conversation
+- **Wake word** (on-device, openWakeWord ONNX): "Hey Jarvis", "Alexa", "Hey Mycroft", "Hey Rhasspy". The microphone listens passively and a conversation starts on the keyword. Runs fully on-device via ONNX Runtime: no API key, no cloud round-trip, the audio never leaves the device.
 - **VAD (Voice Activity Detection)**: configurable silence tolerance (1.0–8.0s, default 2.8s) before auto-stop kicks in. Max recording length 120s.
 - **Speech gate**: the recording is discarded if no speech is detected
 - **STT (speech-to-text)**: 16kHz mono → bridge → Gamebox Whisper (CUDA) → text in the chat. Near real-time.
@@ -398,6 +399,45 @@ API endpoint for other services: `GET http://localhost:3001/api/session`
 - GPS position (optional)
 - QR code scanner for token pairing
 
+### Wake word (openWakeWord, on-device)
+
+Wake-word detection runs fully **on-device** via [openWakeWord](https://github.com/dscripka/openWakeWord)
+with ONNX Runtime: no API key, no cloud round-trip, not a cent in license fees,
+and the audio never leaves the device.
+
+**Bundled wake words** (ONNX files in `android/android/app/src/main/assets/openwakeword/`):
+- `Hey Jarvis` (default, openWakeWord original)
+- `Computer` (Star Trek style, community model)
+- `Alexa`, `Hey Mycroft`, `Hey Rhasspy` (openWakeWord originals)
+
+The community models come from [fwartner/home-assistant-wakewords-collection](https://github.com/fwartner/home-assistant-wakewords-collection).
+
+**Usage:**
+- App → **Settings** → **Wake word** → pick the desired keyword → **Save + activate**
+- Tap the **ear button (👂)** in the status bar → the wake word is armed and the app listens passively
+- Say the wake word → the icon switches to 🎙️ and the conversation runs
+- After every ARIA reply the microphone opens again; on silence it falls back to 👂
+- Tap again → ear off (🔇)
+
+**Training your own wake words** (free, ~30 min):
+
+1. Open the openWakeWord training notebook on Colab (linked in the
+   [openWakeWord repo](https://github.com/dscripka/openWakeWord) under "Training Custom Models")
+2. Enter the wake-word phrase (e.g. "ARIA", "Hey Stefan") and run the notebook;
+   it generates synthetic training examples and trains the model.
+3. Download the resulting `.onnx` file
+4. Drop the file into `android/android/app/src/main/assets/openwakeword/`
+5. Add the file name (without `.onnx`) to the `WAKE_KEYWORDS` list in
+   `android/src/services/wakeword.ts`
+6. Rebuild the APK
+
+*(Diagnostic upload for custom `.onnx` files without a rebuild comes later.)*
+
+**Tuning** (in [wakeword.ts](android/src/services/wakeword.ts)); see the detection-gate sketch after this section:
+- `DEFAULT_THRESHOLD = 0.5`: score threshold (raise to 0.6–0.7 on false positives)
+- `DEFAULT_PATIENCE = 2`: how many frames above the threshold are required
+- `DEFAULT_DEBOUNCE_MS = 1500`: minimum gap between two triggers
+
 ### First-time setup (dev machine, one-time)
 
 ```bash
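How the three tuning knobs interact is easiest to see in code. A minimal TypeScript sketch of the threshold/patience/debounce gate described above; this is an illustration, not the actual `wakeword.ts` implementation (the constant names match the tuning list, the class shape and score source are assumptions):

```ts
// Hypothetical illustration of the threshold/patience/debounce gate.
const DEFAULT_THRESHOLD = 0.5;    // per-frame score threshold
const DEFAULT_PATIENCE = 2;       // consecutive frames required above threshold
const DEFAULT_DEBOUNCE_MS = 1500; // minimum gap between two triggers

class DetectionGate {
  private consecutive = 0;
  private lastTriggerMs = 0;

  constructor(
    private threshold = DEFAULT_THRESHOLD,
    private patience = DEFAULT_PATIENCE,
    private debounceMs = DEFAULT_DEBOUNCE_MS,
  ) {}

  // Feed one sigmoid score (0..1) per ~80ms frame; returns true on a trigger.
  push(score: number, nowMs: number = Date.now()): boolean {
    if (score < this.threshold) {
      this.consecutive = 0; // any sub-threshold frame resets the streak
      return false;
    }
    this.consecutive++;
    if (this.consecutive < this.patience) return false;
    if (nowMs - this.lastTriggerMs < this.debounceMs) return false;
    this.lastTriggerMs = nowMs;
    this.consecutive = 0;
    return true;
  }
}
```

Raising `threshold` or `patience` trades missed detections for fewer false positives; `debounceMs` only suppresses repeat triggers of an already-detected keyword.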
@@ -650,6 +690,33 @@ In the diagnostic under Settings → Speech output:
 > **Tip:** For best results: a clean recording, a single voice, no background noise,
 > 10-30 seconds of total length. Multiple short files are concatenated.
 
+### German fine-tune (better quality in German)
+
+The default model `F5TTS_v1_Base` is trained primarily on English + Chinese
+and delivers noticeably weaker voice-cloning quality in German than XTTS did.
+The community fine-tune by [aihpi](https://huggingface.co/aihpi/F5-TTS-German),
+trained on the Emilia dataset + Common Voice 19.0, works considerably better.
+
+**Configuration via Diagnostic → "F5-TTS model tuning (advanced)":**
+
+| Field | Value |
+|------|------|
+| Model architecture | `F5TTS_Base` *(not v1_Base! The fine-tune is based on the old architecture)* |
+| Custom checkpoint | `hf://aihpi/F5-TTS-German/F5TTS_Base/model_365000.safetensors` |
+| Custom vocab | `hf://aihpi/F5-TTS-German/vocab.txt` |
+| cfg_strength | `2.0` |
+| nfe_step | `32` |
+
+→ Click "Apply". The `hf://` paths are downloaded automatically once
+(~3-5GB, stored in `xtts/hf-cache/`) and reused from the cache on
+container restart.
+
+> **Warning about the BigVGAN variant** (`F5TTS_Base_bigvgan/model_295000.safetensors`):
+> it does NOT currently work with this bridge. The f5-tts library loads
+> the Vocos vocoder by default and the BigVGAN weights are incompatible with it,
+> so the model produces NaN and the app stays silent. Use only the **Vocos variant
+> (F5TTS_Base/model_365000.safetensors)**.
+
 ---
 
 ## Docker Volumes
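The same settings as a typed object, handy if you script the diagnostic instead of clicking through it. The field names here are illustrative assumptions; only the values come from the table above:

```ts
// Illustrative shape; field names are assumptions, values are from the table.
interface F5TtsTuning {
  modelArchitecture: 'F5TTS_Base' | 'F5TTS_v1_Base';
  customCheckpoint: string; // hf:// paths are fetched once and cached
  customVocab: string;
  cfgStrength: number;
  nfeStep: number;
}

const germanFineTune: F5TtsTuning = {
  modelArchitecture: 'F5TTS_Base', // not v1_Base: the fine-tune uses the old architecture
  customCheckpoint: 'hf://aihpi/F5-TTS-German/F5TTS_Base/model_365000.safetensors',
  customVocab: 'hf://aihpi/F5-TTS-German/vocab.txt',
  cfgStrength: 2.0,
  nfeStep: 32,
};
```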
@@ -717,8 +784,10 @@ docker exec aria-core ssh aria-wohnung hostname
 - **Proxy cold start**: every message spawns a new `claude --print` process.
   This makes ARIA slower than the direct Claude CLI. The timeout is set to 900s (15 min).
 - **No streaming to the app**: the app only shows the finished reply, no streaming tokens.
 - **Wake word on the VM only**: the bridge listens for "ARIA" on the VM's local microphone.
-  The app has energy-based detection (phase 1). An on-device "ARIA" keyword (Porcupine) is phase 2.
+- **Wake word in the app limited to built-in keywords**: `Hey Jarvis`, `Alexa`, `Hey Mycroft` and
+  `Hey Rhasspy` work out of the box; custom wake words currently still have to be placed
+  in the app bundle as an `.onnx` file and added to the list in `wakeword.ts`.
+  The diagnostic upload UI is phase 2.
 - **Audio format**: the app records AAC/MP4; the bridge converts it to 16kHz PCM via FFmpeg.
 - **RVS zombie connections**: WebSocket connections occasionally die without an error message.
   The bridge has a ping check (5s); the diagnostic uses a fresh connection per request.
@@ -773,6 +842,7 @@ docker exec aria-core ssh aria-wohnung hostname
 - [x] Audio pause instead of ducking (TRANSIENT instead of MAY_DUCK) + release-timing fix
 - [x] VAD silence tolerance and max recording length configurable (1-8s, 120s)
 - [x] Disk-full banner in the diagnostic with copyable cleanup commands
+- [x] Wake word on-device via openWakeWord (ONNX Runtime, no API key) + state icon
 
 ### Phase 2: ARIA goes productive
@@ -788,5 +858,5 @@ docker exec aria-core ssh aria-wohnung hostname
 - [ ] STARFACE telephony skill
 - [ ] Desktop client (Tauri)
 - [ ] bKVM remote IT support
-- [ ] Porcupine wake word (on-device "ARIA" in the app)
+- [ ] Custom `.onnx` upload for wake words via the diagnostic (without an app rebuild)
 - [ ] Claude Vision directly (image analysis without the file-path detour)
app/build.gradle
@@ -79,8 +79,8 @@ android {
         applicationId "com.ariacockpit"
         minSdkVersion rootProject.ext.minSdkVersion
         targetSdkVersion rootProject.ext.targetSdkVersion
-        versionCode 506
-        versionName "0.0.5.6"
+        versionCode 701
+        versionName "0.0.7.1"
         // Fallback for libraries with product flavors
         missingDimensionStrategy 'react-native-camera', 'general'
     }
@@ -104,6 +104,19 @@ android {
             proguardFiles getDefaultProguardFile("proguard-android.txt"), "proguard-rules.pro"
         }
     }
 
+    // ABI split: arm64-v8a only (every Android phone since ~2017). Takes the
+    // APK from ~136 MB down to ~35 MB, which matters because ONNX Runtime and
+    // the other native libs would otherwise be added once per architecture.
+    // If you need 32-bit or an emulator, add "armeabi-v7a", "x86_64" etc. here.
+    splits {
+        abi {
+            enable true
+            reset()
+            include "arm64-v8a"
+            universalApk false
+        }
+    }
 }
 
 dependencies {
@@ -111,6 +124,9 @@ dependencies {
     implementation("com.facebook.react:react-android")
     implementation("com.facebook.react:flipper-integration")
 
+    // ONNX Runtime for the on-device wake word (openWakeWord ONNX models in assets/openwakeword/)
+    implementation("com.microsoft.onnxruntime:onnxruntime-android:1.17.1")
+
     if (hermesEnabled.toBoolean()) {
         implementation("com.facebook.react:hermes-android")
     } else {
AndroidManifest.xml
@@ -4,6 +4,8 @@
     <uses-permission android:name="android.permission.CAMERA" />
     <uses-permission android:name="android.permission.RECORD_AUDIO" />
     <uses-permission android:name="android.permission.REQUEST_INSTALL_PACKAGES" />
+    <!-- Read the call state so TTS pauses when the phone rings -->
+    <uses-permission android:name="android.permission.READ_PHONE_STATE" />
 
     <application
       android:name=".MainApplication"
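`READ_PHONE_STATE` is a runtime ("dangerous") permission on Android 6+, so declaring it in the manifest is not enough; the app also has to request it at runtime. That request is not part of this diff; a hedged sketch using React Native's standard `PermissionsAndroid` API (function name and dialog strings are ours):

```ts
import { PermissionsAndroid, Platform } from 'react-native';

// Ask for READ_PHONE_STATE at runtime. If the user declines,
// PhoneCallModule.start() resolves false and call detection stays off.
async function ensurePhoneStatePermission(): Promise<boolean> {
  if (Platform.OS !== 'android') return false;
  const result = await PermissionsAndroid.request(
    PermissionsAndroid.PERMISSIONS.READ_PHONE_STATE,
    {
      title: 'Call detection',
      message: 'Allows pausing speech output while you are on a call.',
      buttonPositive: 'OK',
    },
  );
  return result === PermissionsAndroid.RESULTS.GRANTED;
}
```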
Binary files not shown (7 files).
MainApplication.kt

@@ -21,6 +21,8 @@ class MainApplication : Application(), ReactApplication {
                 add(ApkInstallerPackage())
                 add(AudioFocusPackage())
                 add(PcmStreamPlayerPackage())
+                add(OpenWakeWordPackage())
+                add(PhoneCallPackage())
             }
 
         override fun getJSMainModuleName(): String = "index"
OpenWakeWordModule.kt (new file)
@@ -0,0 +1,369 @@
package com.ariacockpit

import ai.onnxruntime.OnnxTensor
import ai.onnxruntime.OrtEnvironment
import ai.onnxruntime.OrtSession
import android.Manifest
import android.content.pm.PackageManager
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder
import android.util.Log
import androidx.core.content.ContextCompat
import com.facebook.react.bridge.Promise
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod
import com.facebook.react.modules.core.DeviceEventManagerModule
import java.nio.FloatBuffer
import java.util.concurrent.atomic.AtomicBoolean

/**
 * On-device wake-word detection via openWakeWord (https://github.com/dscripka/openWakeWord).
 *
 * Three-stage ONNX pipeline:
 * 1. Audio (16kHz mono int16, 1280-sample chunks) -> melspectrogram -> 32-mel frames
 * 2. Sliding window of 76 mel frames (stride 8) -> speech embedding -> 96-dim vector
 * 3. Last 16 embeddings (~1.28s of context) -> wake-word classifier -> sigmoid score
 *
 * Models live in assets/openwakeword/ (mel + embedding are shared, plus one
 * dedicated .onnx per keyword). Detection fires after `patience` consecutive
 * frames above `threshold` and suppresses repeats for `debounceMs`.
 *
 * Emits "WakeWordDetected" as an RN event when a trigger is detected.
 */
class OpenWakeWordModule(reactContext: ReactApplicationContext) : ReactContextBaseJavaModule(reactContext) {
    override fun getName() = "OpenWakeWord"

    companion object {
        private const val TAG = "OpenWakeWord"
        private const val SAMPLE_RATE = 16000
        private const val CHUNK_SAMPLES = 1280 // 80ms @ 16kHz
        private const val MEL_FRAMES_PER_EMBEDDING = 76 // embedding window
        private const val EMBEDDING_STRIDE = 8 // slide by 8 mel frames
        private const val EMBEDDING_DIM = 96
        private const val MEL_BINS = 32
        private const val DEFAULT_WW_INPUT_FRAMES = 16 // fallback when model metadata is missing
    }

    private val env: OrtEnvironment = OrtEnvironment.getEnvironment()
    private var melSession: OrtSession? = null
    private var embSession: OrtSession? = null
    private var wwSession: OrtSession? = null

    private var melInputName: String = "input"
    private var embInputName: String = "input_1"
    private var wwInputName: String = "input"
    // Number of embedding frames the wake-word classifier expects per inference.
    // hey_jarvis has 16; other community models can differ (e.g. 28).
    // Read from the model metadata in init().
    private var wwInputFrames: Int = DEFAULT_WW_INPUT_FRAMES

    // Configuration
    private var threshold: Float = 0.5f
    private var patience: Int = 2
    private var debounceMs: Long = 1500
    private var modelName: String = "hey_jarvis"

    // Audio capture thread
    private var audioRecord: AudioRecord? = null
    private val running = AtomicBoolean(false)
    private var captureThread: Thread? = null

    // Inference state
    private val melBuffer: ArrayList<FloatArray> = ArrayList(256) // list of 32-dim frames
    private var melProcessedIdx: Int = 0
    private val embBuffer: ArrayDeque<FloatArray> = ArrayDeque(32) // ring buffer of the latest embeddings
    private var consecutiveAboveThreshold: Int = 0
    private var lastDetectionMs: Long = 0L

    /**
     * Initializes the ONNX sessions for a given wake word.
     * modelName: file name without suffix (e.g. "hey_jarvis", "alexa", "hey_mycroft", "hey_rhasspy")
     */
    @ReactMethod
    fun init(modelName: String, threshold: Double, patience: Int, debounceMs: Int, promise: Promise) {
        try {
            disposeSessions()
            this.modelName = modelName
            this.threshold = threshold.toFloat()
            this.patience = patience.coerceAtLeast(1)
            this.debounceMs = debounceMs.toLong()

            val ctx = reactApplicationContext
            val melBytes = ctx.assets.open("openwakeword/melspectrogram.onnx").use { it.readBytes() }
            val embBytes = ctx.assets.open("openwakeword/embedding_model.onnx").use { it.readBytes() }
            val wwBytes = ctx.assets.open("openwakeword/$modelName.onnx").use { it.readBytes() }

            val opts = OrtSession.SessionOptions()
            melSession = env.createSession(melBytes, opts)
            embSession = env.createSession(embBytes, opts)
            wwSession = env.createSession(wwBytes, opts)

            melInputName = melSession!!.inputNames.first()
            embInputName = embSession!!.inputNames.first()
            wwInputName = wwSession!!.inputNames.first()

            // Read the classifier's input frame count from the model; it varies per keyword.
            // Expected shape: (1, N, 96), with N in the model metadata.
            val wwInputInfo = wwSession!!.inputInfo[wwInputName]
            val wwShape = (wwInputInfo?.info as? ai.onnxruntime.TensorInfo)?.shape
            wwInputFrames = wwShape?.getOrNull(1)?.toInt()?.takeIf { it > 0 } ?: DEFAULT_WW_INPUT_FRAMES

            Log.i(TAG, "Init OK: model=$modelName wwFrames=$wwInputFrames threshold=$threshold patience=$patience " +
                "debounce=${debounceMs}ms (inputs: mel=$melInputName emb=$embInputName ww=$wwInputName)")
            promise.resolve(true)
        } catch (e: Exception) {
            Log.e(TAG, "Init failed: ${e.message}", e)
            disposeSessions()
            promise.reject("INIT_FAILED", e.message ?: "Unknown error", e)
        }
    }

    @ReactMethod
    fun start(promise: Promise) {
        if (running.get()) {
            promise.resolve(true)
            return
        }
        if (melSession == null || embSession == null || wwSession == null) {
            promise.reject("NOT_INITIALIZED", "init() must be called before start()")
            return
        }
        // Check the permission. The app code usually requests it beforehand,
        // but we insist on it explicitly here so AudioRecord does not fail
        // silently.
        val perm = ContextCompat.checkSelfPermission(reactApplicationContext, Manifest.permission.RECORD_AUDIO)
        if (perm != PackageManager.PERMISSION_GRANTED) {
            promise.reject("NO_MIC_PERMISSION", "RECORD_AUDIO permission missing")
            return
        }

        try {
            val minBuf = AudioRecord.getMinBufferSize(
                SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
            ).coerceAtLeast(CHUNK_SAMPLES * 2 * 4)

            val record = AudioRecord(
                MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBuf,
            )
            if (record.state != AudioRecord.STATE_INITIALIZED) {
                record.release()
                promise.reject("AUDIO_INIT", "AudioRecord not initialized (microphone busy?)")
                return
            }
            audioRecord = record
            resetInferenceState()
            running.set(true)
            record.startRecording()

            captureThread = Thread({ captureLoop() }, "OpenWakeWordCapture").apply {
                isDaemon = true
                start()
            }

            Log.i(TAG, "Listening started (model=$modelName)")
            promise.resolve(true)
        } catch (e: Exception) {
            Log.e(TAG, "start failed", e)
            running.set(false)
            audioRecord?.release()
            audioRecord = null
            promise.reject("START_FAILED", e.message ?: "Unknown error", e)
        }
    }

    @ReactMethod
    fun stop(promise: Promise) {
        running.set(false)
        try {
            captureThread?.join(1500)
        } catch (_: InterruptedException) {}
        captureThread = null
        try { audioRecord?.stop() } catch (_: Exception) {}
        try { audioRecord?.release() } catch (_: Exception) {}
        audioRecord = null
        Log.i(TAG, "Listening stopped")
        promise.resolve(true)
    }

    @ReactMethod
    fun dispose(promise: Promise) {
        running.set(false)
        try { captureThread?.join(1000) } catch (_: InterruptedException) {}
        captureThread = null
        try { audioRecord?.stop() } catch (_: Exception) {}
        try { audioRecord?.release() } catch (_: Exception) {}
        audioRecord = null
        disposeSessions()
        promise.resolve(true)
    }

    @ReactMethod
    fun isAvailable(promise: Promise) {
        // The wake word is always available (no API key, everything on-device)
        promise.resolve(true)
    }

    // RN event subscriptions (RN convention; otherwise a warning in debug builds)
    @ReactMethod fun addListener(eventName: String) {}
    @ReactMethod fun removeListeners(count: Int) {}

    private fun disposeSessions() {
        try { melSession?.close() } catch (_: Exception) {}
        try { embSession?.close() } catch (_: Exception) {}
        try { wwSession?.close() } catch (_: Exception) {}
        melSession = null
        embSession = null
        wwSession = null
    }

    private fun resetInferenceState() {
        melBuffer.clear()
        melProcessedIdx = 0
        embBuffer.clear()
        consecutiveAboveThreshold = 0
        lastDetectionMs = 0L
    }

    private fun emitDetected() {
        val params = com.facebook.react.bridge.Arguments.createMap().apply {
            putString("model", modelName)
        }
        try {
            reactApplicationContext
                .getJSModule(DeviceEventManagerModule.RCTDeviceEventEmitter::class.java)
                .emit("WakeWordDetected", params)
        } catch (e: Exception) {
            Log.w(TAG, "emit failed: ${e.message}")
        }
    }

    private fun captureLoop() {
        val buf = ShortArray(CHUNK_SAMPLES)
        val record = audioRecord ?: return
        Log.i(TAG, "Capture loop started")
        while (running.get()) {
            var read = 0
            while (read < CHUNK_SAMPLES && running.get()) {
                val n = record.read(buf, read, CHUNK_SAMPLES - read)
                if (n <= 0) {
                    Log.w(TAG, "AudioRecord.read returned $n, ending loop")
                    running.set(false)
                    return
                }
                read += n
            }
            if (!running.get()) break
            try {
                processChunk(buf)
            } catch (e: Exception) {
                Log.w(TAG, "processChunk: ${e.message}")
            }
        }
        Log.i(TAG, "Capture loop ended")
    }

    /** Processes one 1280-sample int16 audio chunk. */
    private fun processChunk(audio: ShortArray) {
        // 1) Audio -> mel (output shape (1, 1, frames, 32))
        val floats = FloatArray(audio.size) { audio[it].toFloat() }
        val melTensor = OnnxTensor.createTensor(
            env,
            FloatBuffer.wrap(floats),
            longArrayOf(1L, audio.size.toLong()),
        )
        val melResult = melSession!!.run(mapOf(melInputName to melTensor))
        val melOut = melResult.get(0).value
        melTensor.close()
        @Suppress("UNCHECKED_CAST")
        val mel4 = melOut as Array<Array<Array<FloatArray>>>
        val frames = mel4[0][0]
        // openWakeWord applies `mel/10 + 2` before feeding the embedding model
        for (frame in frames) {
            val scaled = FloatArray(frame.size) { frame[it] / 10f + 2f }
            melBuffer.add(scaled)
        }
        melResult.close()

        // 2) Sliding window: process all complete 76-frame windows
        while (melBuffer.size >= melProcessedIdx + MEL_FRAMES_PER_EMBEDDING) {
            val flat = FloatArray(MEL_FRAMES_PER_EMBEDDING * MEL_BINS)
            var pos = 0
            for (i in 0 until MEL_FRAMES_PER_EMBEDDING) {
                val src = melBuffer[melProcessedIdx + i]
                System.arraycopy(src, 0, flat, pos, MEL_BINS)
                pos += MEL_BINS
            }
            val embIn = OnnxTensor.createTensor(
                env,
                FloatBuffer.wrap(flat),
                longArrayOf(1L, MEL_FRAMES_PER_EMBEDDING.toLong(), MEL_BINS.toLong(), 1L),
            )
            val embRes = embSession!!.run(mapOf(embInputName to embIn))
            val embOut = embRes.get(0).value
            embIn.close()
            // Expected output shape: (1, 1, 1, 96), i.e. rank 4, NOT (1, 96).
            // The Google embedding pipeline keeps the extra dimensions.
            @Suppress("UNCHECKED_CAST")
            val embArr = embOut as Array<Array<Array<FloatArray>>>
            embBuffer.addLast(embArr[0][0][0].copyOf())
            while (embBuffer.size > wwInputFrames) embBuffer.removeFirst()
            embRes.close()

            melProcessedIdx += EMBEDDING_STRIDE
        }
        // Trim the mel buffer to prevent memory growth
        if (melProcessedIdx > MEL_FRAMES_PER_EMBEDDING) {
            val keepFrom = melProcessedIdx - MEL_FRAMES_PER_EMBEDDING
            val newList = ArrayList<FloatArray>(melBuffer.size - keepFrom)
            for (i in keepFrom until melBuffer.size) newList.add(melBuffer[i])
            melBuffer.clear()
            melBuffer.addAll(newList)
            melProcessedIdx = MEL_FRAMES_PER_EMBEDDING
        }

        // 3) Classification, once we have wwInputFrames embeddings
        if (embBuffer.size < wwInputFrames) return
        val flatEmb = FloatArray(wwInputFrames * EMBEDDING_DIM)
        var p = 0
        // Take the last wwInputFrames embeddings (embBuffer is capped at wwInputFrames)
        for (e in embBuffer) {
            System.arraycopy(e, 0, flatEmb, p, EMBEDDING_DIM)
            p += EMBEDDING_DIM
        }
        val wwIn = OnnxTensor.createTensor(
            env,
            FloatBuffer.wrap(flatEmb),
            longArrayOf(1L, wwInputFrames.toLong(), EMBEDDING_DIM.toLong()),
        )
        val wwRes = wwSession!!.run(mapOf(wwInputName to wwIn))
        val wwOut = wwRes.get(0).value
        wwIn.close()
        // Expected output shape: (1, 1) -> Array<FloatArray>
        @Suppress("UNCHECKED_CAST")
        val score = (wwOut as Array<FloatArray>)[0][0]
        wwRes.close()

        if (score >= threshold) {
            consecutiveAboveThreshold++
            if (consecutiveAboveThreshold >= patience) {
                val now = System.currentTimeMillis()
                if (now - lastDetectionMs >= debounceMs) {
                    lastDetectionMs = now
                    consecutiveAboveThreshold = 0
                    Log.i(TAG, "Wake word detected! score=$score model=$modelName")
                    emitDetected()
                }
            }
        } else {
            consecutiveAboveThreshold = 0
        }
    }
}
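The JS counterpart (`wakeword.ts`) is not part of this diff. A minimal sketch of how it might drive this module, based only on what the Kotlin above exposes: module name `OpenWakeWord`, the `init`/`start`/`stop` methods, and the `WakeWordDetected` event with a `{ model }` payload. The function name and wiring are assumptions:

```ts
import { NativeModules, NativeEventEmitter } from 'react-native';

const { OpenWakeWord } = NativeModules;
const emitter = new NativeEventEmitter(OpenWakeWord);

// Arms the wake word with the README's tuning defaults and returns a cleanup
// function. Hypothetical helper, not the actual wakeword.ts service.
async function armWakeWord(onDetected: (model: string) => void): Promise<() => Promise<void>> {
  // model, threshold, patience, debounceMs (matches the Kotlin init signature)
  await OpenWakeWord.init('hey_jarvis', 0.5, 2, 1500);
  const sub = emitter.addListener('WakeWordDetected', (e: { model: string }) => {
    onDetected(e.model);
  });
  await OpenWakeWord.start();
  return async () => {
    sub.remove();
    await OpenWakeWord.stop();
  };
}
```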
OpenWakeWordPackage.kt (new file)

@@ -0,0 +1,16 @@
package com.ariacockpit

import com.facebook.react.ReactPackage
import com.facebook.react.bridge.NativeModule
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.uimanager.ViewManager

class OpenWakeWordPackage : ReactPackage {
    override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
        return listOf(OpenWakeWordModule(reactContext))
    }

    override fun createViewManagers(reactContext: ReactApplicationContext): List<ViewManager<*, *>> {
        return emptyList()
    }
}
PcmStreamPlayerModule.kt

@@ -32,11 +32,17 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
         private const val TAG = "PcmStreamPlayer"
         // Fallback when JS passes no value.
         private const val DEFAULT_PREROLL_SECONDS = 3.5
-        private const val MIN_PREROLL_SECONDS = 0.5
+        // 0.0 = immediate playback: play() right at the first chunk.
+        // Makes sense for F5-TTS because rendering is so fast that a buffer
+        // is unnecessary and can even hurt with short sentences.
+        private const val MIN_PREROLL_SECONDS = 0.0
         private const val MAX_PREROLL_SECONDS = 10.0
         // Silence at the start of the stream so AudioTrack spins up cleanly and
         // the first samples are not cut off (XTTS warmup + play() latency).
-        private const val LEADING_SILENCE_SECONDS = 0.2
+        private const val LEADING_SILENCE_SECONDS = 0.3
+        // Silence at the end: buffers the hardware flush so the last real
+        // samples are guaranteed to be played out before stop() arrives.
+        private const val TRAILING_SILENCE_SECONDS = 0.3
     }
 
     override fun getName() = "PcmStreamPlayer"
@@ -59,9 +65,12 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
         // End the old session if one exists
         stopInternal()
 
-        val prerollSec = prerollSeconds
-            .coerceIn(MIN_PREROLL_SECONDS, MAX_PREROLL_SECONDS)
-            .let { if (it.isFinite() && it > 0) it else DEFAULT_PREROLL_SECONDS }
+        // Only NaN/Inf fall back to the default. 0.0 is valid (= immediate playback).
+        val prerollSec = if (prerollSeconds.isFinite() && prerollSeconds >= 0.0) {
+            prerollSeconds.coerceIn(MIN_PREROLL_SECONDS, MAX_PREROLL_SECONDS)
+        } else {
+            DEFAULT_PREROLL_SECONDS
+        }
 
         val channelConfig = if (channels == 2) AudioFormat.CHANNEL_OUT_STEREO else AudioFormat.CHANNEL_OUT_MONO
         val encoding = AudioFormat.ENCODING_PCM_16BIT
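For reference, the byte math behind the pre-roll threshold: 16-bit PCM means 2 bytes per sample, so the buffer target is seconds × sample rate × channels × 2. A hedged TypeScript sketch; the function name is ours, the clamping mirrors the Kotlin above:

```ts
// How many bytes must be buffered before play() starts, for 16-bit PCM.
function prerollBytes(prerollSeconds: number, sampleRate: number, channels: number): number {
  const DEFAULT_PREROLL_SECONDS = 3.5; // fallback when the caller passes NaN/Infinity
  const MIN = 0.0;                     // 0.0 is valid: immediate playback
  const MAX = 10.0;
  const sec = Number.isFinite(prerollSeconds) && prerollSeconds >= 0
    ? Math.min(Math.max(prerollSeconds, MIN), MAX)
    : DEFAULT_PREROLL_SECONDS;
  // Round down to whole frames so we never split a sample across the threshold.
  const frames = Math.floor(sec * sampleRate);
  return frames * channels * 2;
}

// Example: 0.5s of 24kHz mono = 12000 frames * 2 bytes = 24000 bytes.
console.log(prerollBytes(0.5, 24000, 1)); // 24000
```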
@@ -103,9 +112,9 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
             val t = track ?: return@Thread
             try {
                 // Leading silence into the buffer: gives AudioTrack time to spin up.
-                val silenceBytes = ((sampleRate * channels * 2) * LEADING_SILENCE_SECONDS).toInt() and 0x7FFFFFFE
-                if (silenceBytes > 0) {
-                    val silence = ByteArray(silenceBytes)
+                val leadingBytes = ((sampleRate * channels * 2) * LEADING_SILENCE_SECONDS).toInt() and 0x7FFFFFFE
+                if (leadingBytes > 0) {
+                    val silence = ByteArray(leadingBytes)
                     var silOff = 0
                     while (silOff < silence.size && !writerShouldStop) {
                         val w = t.write(silence, silOff, silence.size - silOff)
@@ -114,18 +123,74 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
                     }
                     bytesBuffered += silence.size
                 }
-                while (!writerShouldStop) {
-                    val data = queue.poll(50, java.util.concurrent.TimeUnit.MILLISECONDS) ?: run {
-                        if (endRequested) {
-                            // If we end before the pre-roll (short text): play anyway
-                            if (!playbackStarted) {
-                                try { t.play() } catch (_: Exception) {}
-                                playbackStarted = true
-                            }
-                            return@Thread
-                        }
-                        null
-                    } ?: continue
+                // With preroll=0: call play() IMMEDIATELY after the leading silence,
+                // not only when the first real chunk arrives. Android's AudioTrack
+                // keeps the play state and waits for new samples. That way no words
+                // get swallowed when the first chunk only arrives after the play()
+                // startup latency.
+                if (prerollBytes == 0 && !playbackStarted) {
+                    try {
+                        t.play()
+                        playbackStarted = true
+                        Log.i(TAG, "Playback started immediately (preroll=0, ${bytesBuffered}B silence)")
+                    } catch (e: Exception) {
+                        Log.w(TAG, "immediate play() failed: ${e.message}")
+                    }
+                }
+                // Idle cutoff: if endRequested does NOT arrive but nothing comes
+                // in for 30s, abort (bridge crash, lost final).
+                var idleMs = 0L
+                val maxIdleMs = 30_000L
+                // Target buffer fill level: below this watermark we feed in
+                // silence so AudioTrack does not underrun while the bridge
+                // renders the next sentence. Spotify/YouTube otherwise react
+                // with an unsolicited resume after ~10s of silence.
+                val underrunGuardFrames = sampleRate / 10 // ~100ms
+                val silenceFillFrames = sampleRate / 20 // ~50ms per refill
+
+                mainLoop@ while (!writerShouldStop) {
+                    val data = queue.poll(50, java.util.concurrent.TimeUnit.MILLISECONDS)
+                    if (data == null) {
+                        if (endRequested) {
+                            // If we end before the pre-roll (short text): play anyway
+                            if (!playbackStarted) {
+                                try {
+                                    t.play()
+                                    playbackStarted = true
+                                    Log.i(TAG, "Playback started BEFORE pre-roll (short text, ${bytesBuffered}B buffered)")
+                                } catch (e: Exception) {
+                                    Log.w(TAG, "play() fallback failed: ${e.message}")
+                                }
+                            }
+                            break@mainLoop
+                        }
+                        // Underrun guard: feed in silence when the AudioTrack
+                        // buffer is about to drain. Spotify otherwise resumes on
+                        // its own after ~10s of pause although we hold the focus.
+                        if (playbackStarted) {
+                            val framesWritten = bytesBuffered / streamBytesPerFrame
+                            val framesPlayed = t.playbackHeadPosition.toLong()
+                            val framesInBuffer = framesWritten - framesPlayed
+                            if (framesInBuffer < underrunGuardFrames) {
+                                val fillBytes = silenceFillFrames * streamBytesPerFrame
+                                val silence = ByteArray(fillBytes)
+                                var silOff = 0
+                                while (silOff < silence.size && !writerShouldStop) {
+                                    val w = t.write(silence, silOff, silence.size - silOff)
+                                    if (w <= 0) break
+                                    silOff += w
+                                }
+                                bytesBuffered += silence.size
+                            }
+                        }
+                        idleMs += 50L
+                        if (idleMs >= maxIdleMs) {
+                            Log.w(TAG, "Idle cutoff: no data for ${maxIdleMs}ms, ending stream")
+                            break@mainLoop
+                        }
+                        continue@mainLoop
+                    }
+                    idleMs = 0L
 
                     // Pre-roll check: play() only once enough is buffered
                     if (!playbackStarted && bytesBuffered + data.size >= prerollBytes) {
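The underrun-guard arithmetic from the hunk above, isolated as a hedged TypeScript sketch (the function name and return convention are ours; the formula follows the Kotlin):

```ts
// Decide whether to inject a silence refill: if fewer than ~100ms of audio
// remain in the AudioTrack buffer, top it up with ~50ms of silence so the
// track never underruns between TTS sentences.
function silenceRefillBytes(
  bytesWritten: number,  // total bytes handed to the track so far
  framesPlayed: number,  // AudioTrack.playbackHeadPosition
  sampleRate: number,
  channels: number,
): number {
  const bytesPerFrame = channels * 2; // 16-bit PCM
  const framesWritten = Math.floor(bytesWritten / bytesPerFrame);
  const framesInBuffer = framesWritten - framesPlayed;
  const underrunGuardFrames = Math.floor(sampleRate / 10); // ~100ms
  const silenceFillFrames = Math.floor(sampleRate / 20);   // ~50ms per refill
  return framesInBuffer < underrunGuardFrames
    ? silenceFillFrames * bytesPerFrame
    : 0; // buffer is healthy, write nothing
}
```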
@@ -146,6 +211,19 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
                     }
                     bytesBuffered += data.size
                 }
+                // Trailing silence so the last real samples are guaranteed to
+                // make it through the hardware buffering before stop() cuts them off
+                val trailingBytes = ((sampleRate * channels * 2) * TRAILING_SILENCE_SECONDS).toInt() and 0x7FFFFFFE
+                if (trailingBytes > 0 && !writerShouldStop) {
+                    val silence = ByteArray(trailingBytes)
+                    var silOff = 0
+                    while (silOff < silence.size && !writerShouldStop) {
+                        val w = t.write(silence, silOff, silence.size - silOff)
+                        if (w <= 0) break
+                        silOff += w
+                    }
+                    bytesBuffered += silence.size
+                }
             } catch (e: Exception) {
                 Log.w(TAG, "Writer thread error: ${e.message}")
             } finally {
PhoneCallModule.kt (new file)
@@ -0,0 +1,126 @@
package com.ariacockpit

import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.os.Build
import android.telephony.PhoneStateListener
import android.telephony.TelephonyCallback
import android.telephony.TelephonyManager
import android.util.Log
import androidx.core.content.ContextCompat
import com.facebook.react.bridge.Arguments
import com.facebook.react.bridge.Promise
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod
import com.facebook.react.modules.core.DeviceEventManagerModule

/**
 * Listens for call-state changes. When the phone rings or a call is active,
 * the module sends a "PhoneCallStateChanged" event to JS.
 *
 * The JS side then stops TTS playback so ARIA does not keep talking into the
 * middle of a conversation. Without the READ_PHONE_STATE permission, start()
 * fails quietly and the rest of the app works as before.
 *
 * State strings: "idle" | "ringing" | "offhook"
 */
class PhoneCallModule(reactContext: ReactApplicationContext) : ReactContextBaseJavaModule(reactContext) {
    override fun getName() = "PhoneCall"

    companion object { private const val TAG = "PhoneCall" }

    private var telephonyManager: TelephonyManager? = null
    private var legacyListener: PhoneStateListener? = null
    private var modernCallback: Any? = null // TelephonyCallback from API 31
    private var lastState: Int = TelephonyManager.CALL_STATE_IDLE

    @ReactMethod
    fun start(promise: Promise) {
        try {
            val perm = ContextCompat.checkSelfPermission(reactApplicationContext, Manifest.permission.READ_PHONE_STATE)
            if (perm != PackageManager.PERMISSION_GRANTED) {
                Log.w(TAG, "READ_PHONE_STATE permission missing, call detection inactive")
                promise.resolve(false)
                return
            }
            val tm = reactApplicationContext.getSystemService(Context.TELEPHONY_SERVICE) as? TelephonyManager
            if (tm == null) {
                Log.w(TAG, "TelephonyManager not available")
                promise.resolve(false)
                return
            }
            telephonyManager = tm

            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
                val cb = object : TelephonyCallback(), TelephonyCallback.CallStateListener {
                    override fun onCallStateChanged(state: Int) {
                        handleStateChange(state)
                    }
                }
                tm.registerTelephonyCallback(reactApplicationContext.mainExecutor, cb)
                modernCallback = cb
            } else {
                @Suppress("DEPRECATION")
                val l = object : PhoneStateListener() {
                    override fun onCallStateChanged(state: Int, phoneNumber: String?) {
                        handleStateChange(state)
                    }
                }
                @Suppress("DEPRECATION")
                tm.listen(l, PhoneStateListener.LISTEN_CALL_STATE)
                legacyListener = l
            }
            Log.i(TAG, "PhoneCall listener active")
            promise.resolve(true)
        } catch (e: Exception) {
            Log.e(TAG, "start failed", e)
            promise.reject("START_FAILED", e.message ?: "Unknown error", e)
        }
    }

    @ReactMethod
    fun stop(promise: Promise) {
        try {
            val tm = telephonyManager
            if (tm != null) {
                if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
                    (modernCallback as? TelephonyCallback)?.let { tm.unregisterTelephonyCallback(it) }
                    modernCallback = null
                } else {
                    @Suppress("DEPRECATION")
                    legacyListener?.let { tm.listen(it, PhoneStateListener.LISTEN_NONE) }
                    legacyListener = null
                }
            }
            telephonyManager = null
            lastState = TelephonyManager.CALL_STATE_IDLE
            promise.resolve(true)
        } catch (e: Exception) {
            promise.reject("STOP_FAILED", e.message ?: "")
        }
    }

    private fun handleStateChange(state: Int) {
        if (state == lastState) return
        lastState = state
        val name = when (state) {
            TelephonyManager.CALL_STATE_RINGING -> "ringing"
            TelephonyManager.CALL_STATE_OFFHOOK -> "offhook"
            TelephonyManager.CALL_STATE_IDLE -> "idle"
            else -> return
        }
        Log.i(TAG, "Phone state: $name")
        val params = Arguments.createMap().apply { putString("state", name) }
        try {
            reactApplicationContext.getJSModule(DeviceEventManagerModule.RCTDeviceEventEmitter::class.java)
                .emit("PhoneCallStateChanged", params)
        } catch (e: Exception) {
            Log.w(TAG, "Event emit failed: ${e.message}")
        }
    }

    @ReactMethod fun addListener(eventName: String) {}
    @ReactMethod fun removeListeners(count: Int) {}
}
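From the JS side, subscribing to this module follows the standard RN event pattern. A minimal sketch based only on what the Kotlin above exposes (module name `PhoneCall`, `start`/`stop`, the `PhoneCallStateChanged` event, the three state strings); the helper name and the pause/resume callbacks are assumptions, and the real app simply stops TTS rather than resuming it:

```ts
import { NativeModules, NativeEventEmitter } from 'react-native';

const { PhoneCall } = NativeModules;
const emitter = new NativeEventEmitter(PhoneCall);

type CallState = 'idle' | 'ringing' | 'offhook';

// Pause TTS while the phone rings or a call is active.
async function watchCalls(pauseTts: () => void, resumeTts: () => void) {
  const sub = emitter.addListener('PhoneCallStateChanged', (e: { state: CallState }) => {
    if (e.state === 'ringing' || e.state === 'offhook') pauseTts();
    else resumeTts();
  });
  const ok: boolean = await PhoneCall.start(); // false without READ_PHONE_STATE
  if (!ok) console.warn('call detection unavailable');
  return () => { sub.remove(); PhoneCall.stop(); };
}
```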
PhoneCallPackage.kt (new file)

@@ -0,0 +1,16 @@
package com.ariacockpit

import com.facebook.react.ReactPackage
import com.facebook.react.bridge.NativeModule
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.uimanager.ViewManager

class PhoneCallPackage : ReactPackage {
    override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
        return listOf(PhoneCallModule(reactContext))
    }

    override fun createViewManagers(reactContext: ReactApplicationContext): List<ViewManager<*, *>> {
        return emptyList()
    }
}
(build script; file name not shown)

@@ -167,10 +167,23 @@ export CI=true
 
 if [ "$MODE" = "debug" ]; then
     ./gradlew assembleDebug
-    APK_PATH="app/build/outputs/apk/debug/app-debug.apk"
+    OUT_DIR="app/build/outputs/apk/debug"
 else
     ./gradlew assembleRelease
-    APK_PATH="app/build/outputs/apk/release/app-release.apk"
+    OUT_DIR="app/build/outputs/apk/release"
 fi
 
+# With ABI splits the APK is named e.g. app-arm64-v8a-release.apk instead of
+# app-release.apk. Try the arm64-v8a variant first (that is our default),
+# with the universal APK as a fallback in case splits are disabled.
+if [ -f "$OUT_DIR/app-arm64-v8a-${MODE}.apk" ]; then
+    APK_PATH="$OUT_DIR/app-arm64-v8a-${MODE}.apk"
+elif [ -f "$OUT_DIR/app-${MODE}.apk" ]; then
+    APK_PATH="$OUT_DIR/app-${MODE}.apk"
+else
+    echo -e "${RED}No matching APK found in $OUT_DIR${NC}"
+    cd ..
+    exit 1
+fi
+
 cd ..
package.json
@@ -1,6 +1,6 @@
 {
   "name": "aria-cockpit",
-  "version": "0.0.5.6",
+  "version": "0.0.7.1",
   "private": true,
   "scripts": {
     "android": "react-native run-android",
@@ -24,9 +24,7 @@
     "react-native-camera-kit": "^13.0.0",
     "@react-native-async-storage/async-storage": "^1.21.0",
     "react-native-fs": "^2.20.0",
-    "react-native-audio-recorder-player": "^3.6.7",
-    "@picovoice/porcupine-react-native": "3.0.5",
-    "@picovoice/react-native-voice-processor": "1.2.3"
+    "react-native-audio-recorder-player": "^3.6.7"
   },
   "devDependencies": {
     "typescript": "^5.3.3",
MessageText.tsx (new file)
@@ -0,0 +1,105 @@
/**
 * MessageText: renders chat text with auto-linkification.
 * - http(s)://... -> tappable, opens in the browser
 * - mailto: or a plain e-mail address -> tappable, opens the mail app
 * - phone numbers -> tappable, opens the Android dialer
 *
 * The text stays selectable/copyable throughout.
 */

import React from 'react';
import { Text, Linking, TextStyle, StyleProp } from 'react-native';

// Regex combining URL | email | phone number.
// The group order matters for the detection below.
//
// URL: http://... or https://... up to the first whitespace / quote.
// Email: simple standard match (not RFC-compliant, but good enough).
// Phone: international form (+49..., 0049..., 0176...); may contain spaces,
//        hyphens, slashes, or parentheses, with at least 7 digits in total.
//        Avoids trivial numbers (times of day, dates).
const LINK_REGEX = new RegExp(
  '(https?:\\/\\/[^\\s<>"]+)' + // 1: URL
  '|([A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,})' + // 2: email
  '|((?:\\+|00)\\d[\\d\\s()\\-\\/]{6,}\\d|0\\d{2,4}[\\s\\/\\-]?[\\d\\s\\-\\/]{5,}\\d)', // 3: phone
  'g',
);

const LINK_STYLE = { color: '#0096FF', textDecorationLine: 'underline' } as TextStyle;

interface Segment {
  text: string;
  kind: 'text' | 'url' | 'email' | 'phone';
}

function tokenize(raw: string): Segment[] {
  const out: Segment[] = [];
  let lastEnd = 0;
  LINK_REGEX.lastIndex = 0;
  let m: RegExpExecArray | null;
  while ((m = LINK_REGEX.exec(raw)) !== null) {
    if (m.index > lastEnd) {
      out.push({ text: raw.slice(lastEnd, m.index), kind: 'text' });
    }
    if (m[1]) out.push({ text: m[1], kind: 'url' });
    else if (m[2]) out.push({ text: m[2], kind: 'email' });
    else if (m[3]) out.push({ text: m[3], kind: 'phone' });
    lastEnd = LINK_REGEX.lastIndex;
  }
  if (lastEnd < raw.length) out.push({ text: raw.slice(lastEnd), kind: 'text' });
  return out;
}

function onPress(seg: Segment) {
  try {
    if (seg.kind === 'url') {
      Linking.openURL(seg.text);
    } else if (seg.kind === 'email') {
      Linking.openURL(`mailto:${seg.text}`);
    } else if (seg.kind === 'phone') {
      // The Android dialer expects a tel: scheme without spaces/hyphens
      const clean = seg.text.replace(/[\s\-\/()]/g, '');
      Linking.openURL(`tel:${clean}`);
    }
  } catch {}
}

interface Props {
  text: string;
  style?: StyleProp<TextStyle>;
}

const MessageText: React.FC<Props> = ({ text, style }) => {
  const segments = React.useMemo(() => tokenize(text), [text]);
  return (
    <Text
      style={style}
      selectable
      // dataDetectorType is Android-only and additionally makes phone/URL/email
      // clickable via system detection, as a fallback in case our regex
      // tokens do not match.
      dataDetectorType="all"
    >
      {segments.map((seg, i) => {
        if (seg.kind === 'text') {
          return <Text key={i} selectable>{seg.text}</Text>;
        }
        return (
          <Text
            key={i}
            selectable
            style={LINK_STYLE}
            onPress={() => onPress(seg)}
            // Long-press should pass through to the parent for selection
            onLongPress={undefined}
            suppressHighlighting={false}
          >
            {seg.text}
          </Text>
        );
      })}
    </Text>
  );
};

export default MessageText;
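A short usage sketch for the component above; the text, style values, and import path are illustrative:

```tsx
import React from 'react';
import MessageText from './MessageText';

// All three link kinds in one bubble: URL, e-mail, phone number.
// tokenize() splits this into plain-text and tappable segments.
const Example = () => (
  <MessageText
    text={'Docs: https://example.com; write to aria@example.com or call +49 30 1234567'}
    style={{ color: '#222', fontSize: 15 }}
  />
);

export default Example;
```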
VoiceButton.tsx

@@ -93,18 +93,24 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
     }
   }, [isRecording]);
 
-  // VAD silence callback: auto-stop
+  // VAD silence callback: auto-stop.
+  // IMPORTANT: do NOT check isRecording (the closure is stale); ask
+  // audioService itself instead. Empty deps mean the listener is registered ONCE.
+  // audioService now guarantees the callback fires only once per recording
+  // (silenceFired latch).
+  const onCompleteRef = useRef(onRecordingComplete);
+  useEffect(() => { onCompleteRef.current = onRecordingComplete; }, [onRecordingComplete]);
   useEffect(() => {
     const unsubSilence = audioService.onSilenceDetected(async () => {
-      if (!isRecording) return;
-      setIsRecording(false);
+      if (audioService.getRecordingState() !== 'recording') return;
       const result = await audioService.stopRecording();
+      setIsRecording(false);
       if (result && result.durationMs > 500) {
-        onRecordingComplete(result);
+        onCompleteRef.current(result);
       }
     });
     return unsubSilence;
-  }, [isRecording, onRecordingComplete]);
+  }, []);
 
   // Auto-start for the wake word (externally triggered)
   const startAutoRecording = useCallback(async () => {
@@ -136,23 +142,35 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
     }
   };
 
-  // Tap-to-talk: a single tap starts recording with auto-stop
+  // Tap-to-talk: a single tap starts recording with auto-stop.
+  // Guard against double-taps during the async start/stop.
+  const tapBusy = useRef(false);
   const handleTap = async () => {
-    if (disabled) return;
-    if (isRecording) {
-      // Stop the recording manually
-      setIsRecording(false);
-      const result = await audioService.stopRecording();
-      if (result && result.durationMs > 300) {
-        onRecordingComplete(result);
-      }
-    } else {
-      // Start recording with auto-stop
-      const started = await audioService.startRecording(true);
-      if (started) {
-        isLongPress.current = false;
-        setIsRecording(true);
+    if (disabled || tapBusy.current) return;
+    tapBusy.current = true;
+    try {
+      // Ask the service, not the React state (the closure can be stale)
+      const svcState = audioService.getRecordingState();
+      if (svcState === 'recording') {
+        // Stop the recording manually
+        const result = await audioService.stopRecording();
+        setIsRecording(false);
+        if (result && result.durationMs > 300) {
+          onRecordingComplete(result);
+        }
+      } else if (svcState === 'idle') {
+        // Start recording with auto-stop
+        const started = await audioService.startRecording(true);
+        if (started) {
+          isLongPress.current = false;
+          setIsRecording(true);
+        }
       }
+      // svcState === 'processing': a stop is in progress; do nothing, the user
+      // has to tap again when it is done. But tapBusy briefly blocks so the
+      // user's UI feedback stays in sync.
+    } finally {
+      tapBusy.current = false;
     }
   };
ChatScreen.tsx
@@ -25,11 +25,13 @@ import RNFS from 'react-native-fs';
 import rvs, { RVSMessage, ConnectionState } from '../services/rvs';
 import audioService from '../services/audio';
 import wakeWordService from '../services/wakeword';
+import phoneCallService from '../services/phoneCall';
 import updateService from '../services/updater';
 import VoiceButton from '../components/VoiceButton';
 import FileUpload, { FileData } from '../components/FileUpload';
 import CameraUpload, { PhotoData } from '../components/CameraUpload';
-import { RecordingResult, loadConvWindowMs } from '../services/audio';
+import MessageText from '../components/MessageText';
+import { RecordingResult, loadConvWindowMs, loadTtsSpeed, TTS_SPEED_DEFAULT } from '../services/audio';
 import Geolocation from '@react-native-community/geolocation';
 
 // --- Types ---
@@ -103,6 +105,8 @@ const ChatScreen: React.FC = () => {
   const [showCameraUpload, setShowCameraUpload] = useState(false);
   const [gpsEnabled, setGpsEnabled] = useState(false);
   const [wakeWordActive, setWakeWordActive] = useState(false);
+  // Precise state (off/armed/conversing) for UI feedback on the button
+  const [wakeWordState, setWakeWordState] = useState<'off' | 'armed' | 'conversing'>('off');
   const [fullscreenImage, setFullscreenImage] = useState<string | null>(null);
   const [searchQuery, setSearchQuery] = useState('');
   const [searchVisible, setSearchVisible] = useState(false);
@@ -116,6 +120,13 @@ const ChatScreen: React.FC = () => {
   const [ttsMuted, setTtsMuted] = useState(false);
   // Device-local XTTS voice choice (preferred over the global default)
   const localXttsVoiceRef = useRef<string>('');
+  // Device-local TTS playback speed (speed param passed to F5-TTS)
+  const ttsSpeedRef = useRef<number>(TTS_SPEED_DEFAULT);
+  // Mirror of the TTS settings in a ref, so that the onMessage closure
+  // (useEffect with [] deps) ALWAYS sees the current values. Without the ref,
+  // canPlay would be stuck at its mount-time value (mute ignored, or the
+  // AsyncStorage load not taken into account).
+  const ttsCanPlayRef = useRef<boolean>(true);
 
   const flatListRef = useRef<FlatList>(null);
   const messageIdCounter = useRef(0);
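The ref-mirror comment above describes a classic React pitfall: a callback registered once (empty dependency array) captures state values from the first render forever. A minimal sketch of the pattern in isolation; the hook and names are ours, not the app's code:

```tsx
import { useEffect, useRef, useState } from 'react';

// subscribe() registers a callback once and returns an unsubscribe function.
function useSubscription(subscribe: (cb: () => void) => () => void) {
  const [enabled, setEnabled] = useState(true);
  // Mirror the state into a ref so the once-registered callback sees live values.
  const enabledRef = useRef(enabled);
  useEffect(() => { enabledRef.current = enabled; }, [enabled]);

  useEffect(() => {
    // Registered ONCE ([] deps). Reading `enabled` here directly would freeze
    // it at its mount-time value; reading through the ref always gives the latest.
    return subscribe(() => {
      if (!enabledRef.current) return;
      // ... handle the event ...
    });
  }, [subscribe]);

  return setEnabled;
}
```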
@@ -135,6 +146,7 @@ const ChatScreen: React.FC = () => {
       setTtsMuted(muted === 'true'); // default false
       const voice = await AsyncStorage.getItem('aria_xtts_voice');
       localXttsVoiceRef.current = voice || '';
+      ttsSpeedRef.current = await loadTtsSpeed();
     };
     loadTtsSettings();
     // Poll every 2s to pick up settings changes (simple solution without a Context)
@@ -145,8 +157,32 @@ const ChatScreen: React.FC = () => {
   // Wake word: load once + prepare Porcupine (if an access key is set)
   useEffect(() => {
     wakeWordService.loadFromStorage().catch(() => {});
+    const unsub = wakeWordService.onStateChange((s) => {
+      setWakeWordState(s);
+      setWakeWordActive(s !== 'off');
+      // Couple the conversation focus to the wake-word state: while we are
+      // actively in a dialog, Spotify should stay paused (including across
+      // render pauses and between replies). As soon as we fall back to
+      // 'armed' or 'off', Spotify may resume.
+      if (s === 'conversing') audioService.acquireConversationFocus();
+      else audioService.releaseConversationFocus();
+    });
+    return () => unsub();
   }, []);
+
+  // Call detection: pause TTS when the phone rings
+  useEffect(() => {
+    phoneCallService.start().catch(err =>
+      console.warn('[Chat] phoneCall.start failed', err));
+    return () => { phoneCallService.stop().catch(() => {}); };
+  }, []);
+
+  // Keep ttsCanPlayRef live: the closure in onMessage below reads through it
+  // instead of ttsDeviceeEnabled/ttsMuted directly (otherwise stale).
+  useEffect(() => {
+    ttsCanPlayRef.current = ttsDeviceEnabled && !ttsMuted;
+  }, [ttsDeviceEnabled, ttsMuted]);
 
   const toggleMute = useCallback(() => {
     setTtsMuted(prev => {
       const next = !prev;
@@ -248,15 +284,35 @@ const ChatScreen: React.FC = () => {
     if (message.type === 'chat') {
       const sender = (message.payload.sender as string) || '';
 
-      // STT result: write the transcribed text into the voice bubble
+      // STT result: write the transcribed text into the voice bubble.
+      // IMPORTANT: only match the FIRST still-unresolved recording. Otherwise,
+      // with two audios sent shortly after one another, both bubbles would get
+      // the same text (bug: the second reply overwrites the first).
       if (sender === 'stt') {
         const sttText = (message.payload.text as string) || '';
         if (sttText) {
-          setMessages(prev => prev.map(m =>
-            m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')
-              ? { ...m, text: `\uD83C\uDFA4 ${sttText}` }
-              : m
-          ));
+          setMessages(prev => {
+            const idx = prev.findIndex(m =>
+              m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')
+            );
+            const newText = `\uD83C\uDFA4 ${sttText}`;
+            if (idx < 0) {
+              // Defensive: if there is no placeholder in the state (e.g. because
+              // it was never added or already got lost through another update),
+              // insert the voice message as a new bubble anyway. Otherwise
+              // ARIA's reply would arrive without a visible user message.
+              return capMessages([...prev, {
+                id: nextId(),
+                sender: 'user',
+                text: newText,
+                timestamp: message.timestamp,
+                attachments: [{ type: 'audio', name: 'Sprachaufnahme' }],
+              }]);
+            }
+            const next = prev.slice();
+            next[idx] = { ...next[idx], text: newText };
+            return next;
+          });
         }
         return;
       }
@@ -299,7 +355,12 @@ const ChatScreen: React.FC = () => {
       }
 
       // Play TTS audio if present; respects the device-local mute/disable
-      const canPlay = ttsDeviceEnabled && !ttsMuted;
+      // IMPORTANT: read via the ref instead of the state directly, otherwise it is stale (closure bug).
+      const canPlay = ttsCanPlayRef.current;
+      if (message.type === 'audio_pcm' || (message.type === 'audio' && message.payload.base64)) {
+        console.log('[Chat] audio-msg canPlay=%s (enabled=%s muted=%s)',
+          canPlay, ttsDeviceEnabled, ttsMuted);
+      }
       if (message.type === 'audio' && message.payload.base64) {
         const b64 = message.payload.base64 as string;
         const refId = (message.payload.messageId as string) || '';
@@ -439,6 +500,7 @@ const ChatScreen: React.FC = () => {
         durationMs: result.durationMs,
         mimeType: result.mimeType,
         voice: localXttsVoiceRef.current,
+        speed: ttsSpeedRef.current,
         ...(location && { location }),
       });
       // resume() is triggered by onPlaybackFinished after ARIA's reply.
@@ -460,7 +522,12 @@ const ChatScreen: React.FC = () => {
   // Wake word toggle handler
   const toggleWakeWord = useCallback(async () => {
     if (wakeWordActive) {
-      wakeWordService.stop();
+      // Before the Porcupine stop: abort any running recording. Otherwise
+      // audioService.recordingState stays stuck at 'recording' and the
+      // normal record button no longer works (startRecording refuses
+      // because a "recording is already running").
+      try { await audioService.stopRecording(); } catch {}
+      await wakeWordService.stop();
       setWakeWordActive(false);
     } else {
       const started = await wakeWordService.start();
@@ -546,10 +613,13 @@ const ChatScreen: React.FC = () => {
     };
     setMessages(prev => capMessages([...prev, userMsg]));
 
+    console.log('[Chat] sending with voice=%s speed=%s',
+      localXttsVoiceRef.current || '(default)', ttsSpeedRef.current);
     // Send to RVS, with the device-local voice (the bridge uses it for the reply)
     rvs.send('chat', {
       text,
       voice: localXttsVoiceRef.current,
+      speed: ttsSpeedRef.current,
       ...(location && { location }),
     });
   }, [inputText, getCurrentLocation, pendingAttachments, sendPendingAttachments]);
@@ -576,6 +646,8 @@ const ChatScreen: React.FC = () => {
       base64: result.base64,
       durationMs: result.durationMs,
       mimeType: result.mimeType,
+      voice: localXttsVoiceRef.current,
+      speed: ttsSpeedRef.current,
       ...(location && { location }),
     });
   }, [getCurrentLocation]);
@@ -659,6 +731,7 @@ const ChatScreen: React.FC = () => {
       rvs.send('chat', {
         text: messageText,
         voice: localXttsVoiceRef.current,
+        speed: ttsSpeedRef.current,
         ...(location && { location }),
       });
     }
@@ -733,9 +806,10 @@ const ChatScreen: React.FC = () => {
         ))}
         {/* Text (do not show when it is just "Anhang empfangen" and an image is present) */}
         {!(item.text === 'Anhang empfangen' && item.attachments?.some(a => a.type === 'image' && a.uri)) && (
-          <Text style={[styles.messageText, isUser ? styles.userText : styles.ariaText]}>
-            {item.text}
-          </Text>
+          <MessageText
+            text={item.text}
+            style={[styles.messageText, isUser ? styles.userText : styles.ariaText]}
+          />
         )}
         {/* Play button for ARIA messages: prefer the cache, otherwise bridge TTS with the current engine */}
         {!isUser && item.text.length > 0 && (
@ -750,6 +824,7 @@ const ChatScreen: React.FC = () => {
|
|||
rvs.send('tts_request' as any, {
|
||||
text: item.text,
|
||||
voice: localXttsVoiceRef.current,
|
||||
speed: ttsSpeedRef.current,
|
||||
messageId: item.messageId || '',
|
||||
});
|
||||
}
|
||||
|
|
@ -970,7 +1045,10 @@ const ChatScreen: React.FC = () => {
|
|||
style={[styles.wakeWordBtn, wakeWordActive && styles.wakeWordBtnActive]}
|
||||
onPress={toggleWakeWord}
|
||||
>
|
||||
<Text style={styles.wakeWordIcon}>{wakeWordActive ? '👂' : '🔇'}</Text>
|
||||
<Text style={styles.wakeWordIcon}>
|
||||
{wakeWordState === 'conversing' ? '🎙️' :
|
||||
wakeWordState === 'armed' ? '👂' : '🔇'}
|
||||
</Text>
|
||||
</TouchableOpacity>
|
||||
</>
|
||||
)}
|
||||
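The ear button now distinguishes three wake-word states instead of a boolean. The mapping, pulled out as a plain helper for clarity (the state union matches `WakeWordState` from the service; the function itself is illustrative):

```ts
type WakeWordState = 'off' | 'armed' | 'conversing';

// 'conversing' → active conversation, mic open
// 'armed'      → passively listening for the wake word
// 'off'        → wake word disabled
function wakeWordIcon(state: WakeWordState): string {
  switch (state) {
    case 'conversing': return '🎙️';
    case 'armed': return '👂';
    default: return '🔇';
  }
}
```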
@@ -35,11 +35,15 @@ import {
CONV_WINDOW_MIN_SEC,
CONV_WINDOW_MAX_SEC,
CONV_WINDOW_STORAGE_KEY,
TTS_SPEED_DEFAULT,
TTS_SPEED_MIN,
TTS_SPEED_MAX,
TTS_SPEED_STORAGE_KEY,
} from '../services/audio';
import wakeWordService, {
BUILTIN_KEYWORDS,
WAKE_KEYWORDS,
KEYWORD_LABELS,
DEFAULT_KEYWORD,
WAKE_ACCESS_KEY_STORAGE,
WAKE_KEYWORD_STORAGE,
} from '../services/wakeword';
import ModeSelector from '../components/ModeSelector';

@@ -98,8 +102,7 @@ const SettingsScreen: React.FC = () => {
const [ttsPrerollSec, setTtsPrerollSec] = useState<number>(TTS_PREROLL_DEFAULT_SEC);
const [vadSilenceSec, setVadSilenceSec] = useState<number>(VAD_SILENCE_DEFAULT_SEC);
const [convWindowSec, setConvWindowSec] = useState<number>(CONV_WINDOW_DEFAULT_SEC);
const [wakeAccessKey, setWakeAccessKey] = useState<string>('');
const [wakeAccessKeyVisible, setWakeAccessKeyVisible] = useState(false);
const [ttsSpeed, setTtsSpeed] = useState<number>(TTS_SPEED_DEFAULT);
const [wakeKeyword, setWakeKeyword] = useState<string>(DEFAULT_KEYWORD);
const [wakeStatus, setWakeStatus] = useState<string>('');
const [editingPath, setEditingPath] = useState(false);

@@ -153,11 +156,14 @@ const SettingsScreen: React.FC = () => {
}
}
});
AsyncStorage.getItem(WAKE_ACCESS_KEY_STORAGE).then(saved => {
if (saved) setWakeAccessKey(saved);
AsyncStorage.getItem(TTS_SPEED_STORAGE_KEY).then(saved => {
if (saved != null) {
const n = parseFloat(saved);
if (isFinite(n) && n >= TTS_SPEED_MIN && n <= TTS_SPEED_MAX) setTtsSpeed(n);
}
});
AsyncStorage.getItem(WAKE_KEYWORD_STORAGE).then(saved => {
if (saved) setWakeKeyword(saved);
if (saved && (WAKE_KEYWORDS as readonly string[]).includes(saved)) setWakeKeyword(saved);
});
AsyncStorage.getItem('aria_xtts_voice').then(saved => {
if (saved) setXttsVoice(saved);

@@ -667,44 +673,23 @@ const SettingsScreen: React.FC = () => {
</View>
</View>

{/* === Wake-Word (geraetelokal) === */}
{/* === Wake-Word (komplett on-device, openWakeWord) === */}
<Text style={styles.sectionTitle}>Wake-Word</Text>
<View style={styles.card}>
<Text style={styles.toggleHint}>
Wenn ein Picovoice-Access-Key eingetragen ist, hoert die App passiv
auf das gewaehlte Wake-Word — du kannst dich mit anderen unterhalten,
Musik laufen lassen und mit "{wakeKeyword}" eine Konversation mit
ARIA starten. Ohne Key oder bei Fehlschlag startet das Ohr direkt
eine Konversation (klassischer Modus).
Lokale Erkennung via openWakeWord (ONNX, on-device). Kein API-Key,
kein Cloud-Roundtrip — Audio verlaesst das Geraet nicht. Wenn das Ohr
aktiv ist, hoerst du normal mit; sagst du das Wake-Word, startet eine
Konversation mit ARIA.
</Text>

<Text style={[styles.toggleLabel, {marginTop: 16}]}>Picovoice Access Key</Text>
<View style={{flexDirection: 'row', alignItems: 'center', gap: 8, marginTop: 6}}>
<TextInput
style={[styles.input, {flex: 1}]}
value={wakeAccessKey}
onChangeText={setWakeAccessKey}
placeholder="kostenlos auf console.picovoice.ai"
placeholderTextColor="#666680"
secureTextEntry={!wakeAccessKeyVisible}
autoCapitalize="none"
autoCorrect={false}
/>
<TouchableOpacity
onPress={() => setWakeAccessKeyVisible(v => !v)}
style={{padding: 8}}
>
<Text style={{fontSize: 18}}>{wakeAccessKeyVisible ? '🙈' : '👁'}</Text>
</TouchableOpacity>
</View>

<Text style={[styles.toggleLabel, {marginTop: 16}]}>Wake-Word</Text>
<Text style={styles.toggleHint}>
Built-In: sofort verwendbar. "ARIA" als Custom-Keyword kommt spaeter
ueber Diagnostic-Upload.
Eigene Wake-Words via openWakeWord-Notebook trainierbar (gratis).
Custom-Upload ueber Diagnostic kommt in einer spaeteren Version.
</Text>
<View style={{flexDirection: 'row', flexWrap: 'wrap', gap: 6, marginTop: 8}}>
{BUILTIN_KEYWORDS.map(kw => (
{WAKE_KEYWORDS.map(kw => (
<TouchableOpacity
key={kw}
style={[

@@ -717,7 +702,7 @@ const SettingsScreen: React.FC = () => {
styles.keywordChipText,
wakeKeyword === kw && styles.keywordChipTextActive,
]}>
{kw}
{KEYWORD_LABELS[kw]}
</Text>
</TouchableOpacity>
))}

@@ -729,8 +714,8 @@ const SettingsScreen: React.FC = () => {
onPress={async () => {
setWakeStatus('Initialisiere...');
try {
const ok = await wakeWordService.configure(wakeAccessKey, wakeKeyword);
setWakeStatus(ok ? `✅ "${wakeKeyword}" bereit` : '❌ Fehlgeschlagen — Access Key pruefen');
const ok = await wakeWordService.configure(wakeKeyword);
setWakeStatus(ok ? `✅ "${KEYWORD_LABELS[wakeKeyword as keyof typeof KEYWORD_LABELS]}" bereit` : '❌ Init-Fehler — Logs pruefen');
} catch (err: any) {
setWakeStatus('❌ ' + String(err?.message || err).slice(0, 80));
}

@@ -800,6 +785,38 @@ const SettingsScreen: React.FC = () => {
<Text style={styles.prerollButtonText}>+0.5</Text>
</TouchableOpacity>
</View>

<Text style={[styles.toggleLabel, {marginTop: 24}]}>Sprechgeschwindigkeit</Text>
<Text style={styles.toggleHint}>
Wie schnell ARIA spricht. 1.0 = Normal. Niedriger = langsamer, hoeher = schneller.
Wird an F5-TTS als speed-Param uebergeben und pro Geraet gespeichert.
Default: {TTS_SPEED_DEFAULT.toFixed(1)}x.
</Text>
<View style={styles.prerollRow}>
<TouchableOpacity
style={styles.prerollButton}
onPress={() => {
const next = Math.max(TTS_SPEED_MIN, Math.round((ttsSpeed - 0.1) * 10) / 10);
setTtsSpeed(next);
AsyncStorage.setItem(TTS_SPEED_STORAGE_KEY, String(next));
}}
disabled={ttsSpeed <= TTS_SPEED_MIN}
>
<Text style={styles.prerollButtonText}>−0.1</Text>
</TouchableOpacity>
<Text style={styles.prerollValue}>{ttsSpeed.toFixed(1)} x</Text>
<TouchableOpacity
style={styles.prerollButton}
onPress={() => {
const next = Math.min(TTS_SPEED_MAX, Math.round((ttsSpeed + 0.1) * 10) / 10);
setTtsSpeed(next);
AsyncStorage.setItem(TTS_SPEED_STORAGE_KEY, String(next));
}}
disabled={ttsSpeed >= TTS_SPEED_MAX}
>
<Text style={styles.prerollButtonText}>+0.1</Text>
</TouchableOpacity>
</View>
</View>
)}
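The ±0.1 buttons round via `Math.round((x ± 0.1) * 10) / 10` so repeated steps never accumulate binary-float error (plain `0.1 + 0.2` is `0.30000000000000004`). The same logic as a reusable helper; `stepClamped` is a hypothetical name, not from the repo:

```ts
// One decimal step, clamped to [min, max]; rounding to one decimal
// place keeps repeated steps free of floating-point drift.
function stepClamped(value: number, delta: number, min: number, max: number): number {
  const next = Math.round((value + delta) * 10) / 10;
  return Math.min(max, Math.max(min, next));
}

// stepClamped(1.0, +0.1, 0.1, 5.0) === 1.1
// stepClamped(0.1, -0.1, 0.1, 5.0) === 0.1  (clamped at TTS_SPEED_MIN)
```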
@@ -92,6 +92,24 @@ export const CONV_WINDOW_MIN_SEC = 3.0;
export const CONV_WINDOW_MAX_SEC = 20.0;
export const CONV_WINDOW_STORAGE_KEY = 'aria_conv_window_sec';

// TTS-Wiedergabegeschwindigkeit — wird pro Geraet gespeichert und an die
// Bridge mitgegeben (speed-Param im F5-TTS infer()). 1.0 = normal.
export const TTS_SPEED_DEFAULT = 1.0;
export const TTS_SPEED_MIN = 0.1;
export const TTS_SPEED_MAX = 5.0;
export const TTS_SPEED_STORAGE_KEY = 'aria_tts_speed';

export async function loadTtsSpeed(): Promise<number> {
try {
const raw = await AsyncStorage.getItem(TTS_SPEED_STORAGE_KEY);
if (raw != null) {
const n = parseFloat(raw);
if (isFinite(n) && n >= TTS_SPEED_MIN && n <= TTS_SPEED_MAX) return n;
}
} catch {}
return TTS_SPEED_DEFAULT;
}

export async function loadConvWindowMs(): Promise<number> {
try {
const raw = await AsyncStorage.getItem(CONV_WINDOW_STORAGE_KEY);

@@ -125,7 +143,7 @@ const MAX_RECORDING_MS = 120000;
// Pre-Roll: Wie lange Audio im AudioTrack-Buffer liegt bevor play() startet.
// Einstellbar via Diagnostic/Settings (Key: aria_tts_preroll_sec).
export const TTS_PREROLL_DEFAULT_SEC = 3.5;
export const TTS_PREROLL_MIN_SEC = 1.0;
export const TTS_PREROLL_MIN_SEC = 0; // 0 = sofort abspielen (F5-TTS ist schnell genug)
export const TTS_PREROLL_MAX_SEC = 6.0;
export const TTS_PREROLL_STORAGE_KEY = 'aria_tts_preroll_sec';

@@ -173,11 +191,26 @@ class AudioService {
private pcmBytesCollected: number = 0;
private readonly PCM_MAX_CACHE_BYTES = 30 * 1024 * 1024; // 30MB

// AudioFocus wird verzoegert freigegeben — wenn ARIA eine zweite Antwort
// direkt hinterherschickt (oder ein neuer Stream startet), bleibt Spotify
// pausiert. Ohne diese Verzoegerung springt Spotify im Mikro-Sekunden-Gap
// zwischen zwei Streams kurz wieder an.
private focusReleaseTimer: ReturnType<typeof setTimeout> | null = null;
private readonly FOCUS_RELEASE_DELAY_MS = 800;

// Conversation-Mode: solange aktiv (Wake-Word Status 'conversing' ODER
// wir wissen "ARIA spricht gerade in einem Multi-Turn-Dialog"), halten wir
// den AudioFocus DAUERHAFT. Der per-Stream-Release wird unterdrueckt,
// damit Spotify nicht in Render-Pausen oder zwischen Antworten zurueckkehrt.
private _conversationFocusActive: boolean = false;

// VAD State
private vadEnabled: boolean = false;
private lastSpeechTime: number = 0;
private vadTimer: ReturnType<typeof setInterval> | null = null;
private maxDurationTimer: ReturnType<typeof setTimeout> | null = null;
// Latch damit der Silence-Callback pro Aufnahme genau einmal feuert
private silenceFired: boolean = false;
private noSpeechTimer: ReturnType<typeof setTimeout> | null = null;

constructor() {

@@ -185,6 +218,58 @@ class AudioService {
this.recorder.setSubscriptionDuration(0.1); // 100ms Metering-Updates
}

/** AudioFocus mit kleiner Verzoegerung freigeben — Spotify/YouTube
* springen sonst im Gap zwischen zwei TTS-Streams (oder wenn ARIA
* eine zweite Antwort direkt hinterherschickt) kurz wieder an.
* Im Conversation-Mode (Wake-Word conversing) wird das Release komplett
* unterdrueckt — der Focus bleibt fuer die ganze Konversation gehalten. */
private _releaseFocusDeferred(): void {
if (this._conversationFocusActive) {
this._cancelDeferredFocusRelease();
return;
}
this._cancelDeferredFocusRelease();
this.focusReleaseTimer = setTimeout(() => {
this.focusReleaseTimer = null;
if (this._conversationFocusActive) return;
AudioFocus?.release().catch(() => {});
}, this.FOCUS_RELEASE_DELAY_MS);
}

private _cancelDeferredFocusRelease(): void {
if (this.focusReleaseTimer) {
clearTimeout(this.focusReleaseTimer);
this.focusReleaseTimer = null;
}
}

/** Conversation-Mode beginnt → AudioFocus dauerhaft halten (Spotify bleibt
* pausiert). Idempotent: mehrfaches Aufrufen ist sicher. */
acquireConversationFocus(): void {
if (this._conversationFocusActive) return;
this._conversationFocusActive = true;
this._cancelDeferredFocusRelease();
console.log('[Audio] Conversation-Focus aktiv (Spotify bleibt gepaust)');
AudioFocus?.requestDuck().catch(() => {});
}

/** Conversation-Mode endet → Focus darf wieder freigegeben werden
* (verzoegert, damit eine direkt folgende Antwort nichts kaputtmacht). */
releaseConversationFocus(): void {
if (!this._conversationFocusActive) return;
this._conversationFocusActive = false;
console.log('[Audio] Conversation-Focus inaktiv');
this._releaseFocusDeferred();
}
/** TTS-Wiedergabe hart stoppen — z.B. wenn ein Anruf reinkommt.
* Released auch sofort den AudioFocus, damit der Anruf-Klingelton hoerbar ist. */
haltAllPlayback(reason: string = ''): void {
console.log('[Audio] haltAllPlayback: %s', reason || '(no reason)');
this._conversationFocusActive = false;
this.stopPlayback();
}

// --- Berechtigungen ---

async requestMicrophonePermission(): Promise<boolean> {

@@ -285,35 +370,50 @@ class AudioService {
this.setState('recording');

// Andere Apps waehrend der Aufnahme pausieren (Musik, Videos etc.)
this._cancelDeferredFocusRelease();
AudioFocus?.requestExclusive().catch(() => {});

// VAD aktivieren — Stille-Dauer aus AsyncStorage (Settings-konfigurierbar)
// VAD aktivieren — Stille-Dauer aus AsyncStorage (Settings-konfigurierbar).
// WICHTIG: jeder Trigger (VAD-Stille / Max-Dauer / No-Speech-Window)
// disable SOFORT den VAD-Flag und clear den Timer, BEVOR die Listener
// gefeuert werden. Sonst feuert das setInterval weiter alle 200ms und
// ruft stopRecording parallel auf → audio-recorder-player crasht.
this.vadEnabled = autoStop;
this.silenceFired = false;
const fireSilenceOnce = (reason: string) => {
if (this.silenceFired) return;
this.silenceFired = true;
this.vadEnabled = false;
if (this.vadTimer) { clearInterval(this.vadTimer); this.vadTimer = null; }
if (this.maxDurationTimer) { clearTimeout(this.maxDurationTimer); this.maxDurationTimer = null; }
if (this.noSpeechTimer) { clearTimeout(this.noSpeechTimer); this.noSpeechTimer = null; }
console.log('[Audio] Silence-Fire: %s', reason);
this.silenceListeners.forEach(cb => {
try { cb(); } catch (e) { console.warn('[Audio] silence listener err:', e); }
});
};
if (autoStop) {
const vadSilenceMs = await loadVadSilenceMs();
console.log('[Audio] VAD-Stille:', vadSilenceMs, 'ms');
console.log('[Audio] startRecording: autoStop=true, VAD-Stille=%dms, MAX=%dms',
vadSilenceMs, MAX_RECORDING_MS);
this.vadTimer = setInterval(() => {
const silenceDuration = Date.now() - this.lastSpeechTime;
if (silenceDuration >= vadSilenceMs) {
console.log(`[Audio] VAD: ${silenceDuration}ms Stille — Auto-Stop`);
this.silenceListeners.forEach(cb => cb());
fireSilenceOnce(`VAD ${silenceDuration}ms Stille (Schwelle=${vadSilenceMs}ms)`);
}
}, 200);
// Notbremse: Nach MAX_RECORDING_MS zwangsweise stoppen
this.maxDurationTimer = setTimeout(() => {
console.warn(`[Audio] Max-Dauer ${MAX_RECORDING_MS}ms erreicht — Zwangs-Stop`);
this.silenceListeners.forEach(cb => cb());
fireSilenceOnce(`Max-Dauer ${MAX_RECORDING_MS}ms`);
}, MAX_RECORDING_MS);
}

// Conversation-Window: Wenn der User innerhalb noSpeechTimeoutMs nicht
// anfaengt zu sprechen → Aufnahme abbrechen (Speech-Gate verwirft sie),
// ChatScreen erkennt das und beendet die Konversation.
// anfaengt zu sprechen → Aufnahme abbrechen (Speech-Gate verwirft sie).
if (noSpeechTimeoutMs > 0) {
this.noSpeechTimer = setTimeout(() => {
if (!this.speechDetected && this.recordingState === 'recording') {
console.log(`[Audio] Conversation-Window ${noSpeechTimeoutMs}ms ohne Sprache — Stop`);
this.silenceListeners.forEach(cb => cb());
fireSilenceOnce(`Conversation-Window ${noSpeechTimeoutMs}ms ohne Sprache`);
}
}, noSpeechTimeoutMs);
}

@@ -353,8 +453,9 @@ class AudioService {
await this.recorder.stopRecorder();
this.recorder.removeRecordBackListener();

// Audio-Focus freigeben — andere Apps duerfen wieder
AudioFocus?.release().catch(() => {});
// Audio-Focus verzoegert freigeben — gleich kommt die TTS-Antwort,
// im Gap soll Spotify nicht hochkommen.
this._releaseFocusDeferred();

const durationMs = Date.now() - this.recordingStartTime;
const hadSpeech = this.speechDetected;

@@ -426,7 +527,13 @@ class AudioService {

/** Einen PCM-Chunk aus einer audio_pcm Nachricht empfangen.
* silent=true → nur cachen, nicht abspielen (z.B. wenn TTS geraetelokal gemutet).
* Gibt bei final=true den Cache-Pfad zurueck (file://) oder '' wenn nicht gecached. */
* Gibt bei final=true den Cache-Pfad zurueck (file://) oder '' wenn nicht gecached.
*
* Wrapper serialisiert aufeinanderfolgende Chunk-Calls via Promise-Queue —
* sonst gabs bei kurzen Streams einen Race: final-Chunk konnte `end()` rufen
* BEVOR der vorherige `start()` im Native-Modul fertig war. Der Writer-
* Thread sah dann endRequested=true ohne jemals Chunks zu verarbeiten. */
private _pcmChunkQueue: Promise<any> = Promise.resolve();
async handlePcmChunk(payload: {
base64: string;
sampleRate?: number;

@@ -435,12 +542,37 @@ class AudioService {
chunk?: number;
final?: boolean;
silent?: boolean;
}): Promise<string> {
const p = this._pcmChunkQueue.then(() => this._handlePcmChunkImpl(payload)).catch(err => {
console.warn('[Audio] handlePcmChunk queued err:', err);
return '';
});
// Chain only on the side effect — callers still get the per-call result
this._pcmChunkQueue = p;
return p;
}

private async _handlePcmChunkImpl(payload: {
base64: string;
sampleRate?: number;
channels?: number;
messageId?: string;
chunk?: number;
final?: boolean;
silent?: boolean;
}): Promise<string> {
const silent = !!payload.silent;
if (!silent && !PcmStreamPlayer) {
console.warn('[Audio] PcmStreamPlayer Native Module nicht verfuegbar');
return '';
}
// Debug-Log bei Chunk 0 eines neuen Streams — damit man im adb logcat
// sieht warum der Auto-Playback greift oder nicht.
if ((payload.chunk ?? 0) === 0 && !this.pcmStreamActive) {
console.log('[Audio] PCM-Stream start: silent=%s messageId=%s sr=%s ch=%s',
silent, payload.messageId || '(none)',
payload.sampleRate, payload.channels);
}

const messageId = payload.messageId || '';
const sampleRate = payload.sampleRate || 24000;
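The wrapper above serializes chunk handling through a promise chain so a `final` chunk can never reach `end()` while the previous `start()` is still running in the native module. The pattern in isolation (generic sketch; `player` is a stand-in, not the repo's API):

```ts
// Serialize async calls: each task starts only after the previous settled.
// Per-task catch means one failed chunk cannot wedge the whole queue.
class AsyncQueue {
  private tail: Promise<unknown> = Promise.resolve();

  run<T>(task: () => Promise<T>, fallback: T): Promise<T> {
    const p = this.tail.then(task).catch(err => {
      console.warn('queued task failed:', err);
      return fallback;
    });
    this.tail = p; // the next run() chains on this promise
    return p;
  }
}

// const q = new AsyncQueue();
// q.run(() => player.start(24000), false);
// q.run(() => player.write(chunk), false);
// q.run(() => player.end(), false); // only after start/write settled
```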
@@ -470,6 +602,7 @@ class AudioService {
this.pcmStreamActive = false;
return '';
}
this._cancelDeferredFocusRelease();
AudioFocus?.requestDuck().catch(() => {});
}
}

@@ -488,11 +621,12 @@ class AudioService {
if (isFinal) {
if (!silent) {
// end() resolved jetzt erst wenn der native Writer-Thread fertig
// ist (alle Samples ausgespielt) — danach erst AudioFocus freigeben,
// damit Spotify/YouTube nicht waehrend des Pre-Roll-Ausklangs
// wieder aufdrehen.
// ist (alle Samples ausgespielt) — danach AudioFocus verzoegert
// freigeben, damit Spotify/YouTube nicht im Mikro-Gap zwischen zwei
// ARIA-Antworten wieder hochdrehen. Wenn ein neuer Stream innerhalb
// FOCUS_RELEASE_DELAY_MS startet, wird das Release abgebrochen.
try { await PcmStreamPlayer!.end(); } catch {}
AudioFocus?.release().catch(() => {});
this._releaseFocusDeferred();
}
this.pcmStreamActive = false;

@@ -596,8 +730,9 @@ class AudioService {
private async _playNext(): Promise<void> {
if (this.audioQueue.length === 0) {
this.isPlaying = false;
// Audio-Focus abgeben → andere Apps volle Lautstaerke
AudioFocus?.release().catch(() => {});
// Audio-Focus verzoegert abgeben → wenn gleich noch eine Antwort kommt,
// bleibt Spotify pausiert.
this._releaseFocusDeferred();
// Alle Audio-Teile abgespielt → Listener benachrichtigen
this.playbackFinishedListeners.forEach(cb => cb());
return;

@@ -605,6 +740,7 @@ class AudioService {

// Beim ersten Playback-Start: andere Apps ducken
if (!this.isPlaying) {
this._cancelDeferredFocusRelease();
AudioFocus?.requestDuck().catch(() => {});
}
this.isPlaying = true;

@@ -690,7 +826,8 @@ class AudioService {
this.pcmBytesCollected = 0;
this.pcmMessageId = '';
}
// Audio-Focus freigeben
// Audio-Focus sofort freigeben — User hat explizit abgebrochen
this._cancelDeferredFocusRelease();
AudioFocus?.release().catch(() => {});
}
@@ -0,0 +1,108 @@
/**
* PhoneCall-Service — pausiert die TTS-Wiedergabe wenn das Telefon klingelt
* oder ein Anruf laeuft. Native-Bindung an PhoneCallModule.kt.
*
* Bei "ringing" oder "offhook" wird audioService.haltAllPlayback() gerufen —
* ARIA verstummt sofort. Nach dem Auflegen passiert nichts automatisch
* (Audio kommt nicht zurueck), der User muesste die Antwort manuell
* nochmal anfordern (Play-Button auf der Nachricht).
*
* Permission READ_PHONE_STATE muss vom Nutzer einmalig erteilt werden —
* wenn nicht, failed start() leise und der Rest funktioniert wie bisher.
*/

import {
NativeEventEmitter,
NativeModules,
PermissionsAndroid,
Platform,
ToastAndroid,
} from 'react-native';
import audioService from './audio';

interface PhoneCallNative {
start(): Promise<boolean>;
stop(): Promise<boolean>;
}

const { PhoneCall } = NativeModules as { PhoneCall?: PhoneCallNative };

type PhoneState = 'idle' | 'ringing' | 'offhook';

class PhoneCallService {
private started: boolean = false;
private subscription: { remove: () => void } | null = null;
private lastState: PhoneState = 'idle';

async start(): Promise<boolean> {
if (this.started || !PhoneCall) return false;
if (Platform.OS !== 'android') return false;

// Runtime-Permission holen (nur einmal noetig)
try {
const granted = await PermissionsAndroid.request(
PermissionsAndroid.PERMISSIONS.READ_PHONE_STATE,
{
title: 'ARIA Cockpit — Anruf-Erkennung',
message: 'Damit ARIA bei einem eingehenden Anruf nicht weiterredet, '
+ 'darf die App den Anruf-Status sehen (Klingeln/Aktiv/Aufgelegt). '
+ 'Es werden keine Anrufdaten gelesen oder gespeichert.',
buttonPositive: 'Erlauben',
buttonNegative: 'Spaeter',
},
);
if (granted !== PermissionsAndroid.RESULTS.GRANTED) {
console.warn('[PhoneCall] READ_PHONE_STATE Permission abgelehnt');
return false;
}
} catch (err) {
console.warn('[PhoneCall] Permission-Anfrage gescheitert', err);
}

try {
const ok = await PhoneCall.start();
if (!ok) {
console.warn('[PhoneCall] Native start() lieferte false (Permission?)');
return false;
}
const emitter = new NativeEventEmitter(NativeModules.PhoneCall as any);
this.subscription = emitter.addListener('PhoneCallStateChanged', (e: { state: PhoneState }) => {
this._onStateChanged(e.state);
});
this.started = true;
console.log('[PhoneCall] Listener aktiv');
return true;
} catch (err: any) {
console.warn('[PhoneCall] start gescheitert:', err?.message || err);
return false;
}
}

async stop(): Promise<void> {
if (!this.started || !PhoneCall) return;
try {
this.subscription?.remove();
this.subscription = null;
await PhoneCall.stop();
} catch {}
this.started = false;
this.lastState = 'idle';
}

private _onStateChanged(state: PhoneState): void {
if (state === this.lastState) return;
console.log('[PhoneCall] State: %s → %s', this.lastState, state);
this.lastState = state;
if (state === 'ringing' || state === 'offhook') {
audioService.haltAllPlayback(`Telefon-State: ${state}`);
ToastAndroid.show(
state === 'ringing' ? 'Anruf — ARIA pausiert' : 'Im Gespraech — ARIA pausiert',
ToastAndroid.SHORT,
);
}
// idle: nichts automatisch — User soll nichts unbeabsichtigt re-triggern
}
}

const phoneCallService = new PhoneCallService();
export default phoneCallService;
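Wiring the service is one call at startup; a sketch of how it might be hooked into the root component (the hook name and its placement are assumptions, not shown in this diff):

```ts
import { useEffect } from 'react';
import phoneCallService from './services/phonecall';

// Start call detection once at mount; start() is safe to call blindly:
// it returns false on iOS, without the native module, or without permission.
export function useCallPauseGuard(): void {
  useEffect(() => {
    phoneCallService.start().catch(() => {});
    return () => { phoneCallService.stop().catch(() => {}); };
  }, []);
}
```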
@@ -29,6 +29,11 @@ class UpdateService {
private downloading = false;

constructor() {
// Beim Start alte APK-Reste aus dem Cache wegraeumen — wenn diese App
// laeuft, sind frueher heruntergeladene APKs entweder schon installiert
// oder unvollstaendig gewesen. Spart sonst pro Update 20-30MB auf dem Handy.
this.cleanupOldApks().catch(() => {});

// Auf update_available Nachrichten lauschen
rvs.onMessage((msg: RVSMessage) => {
if (msg.type === 'update_available' as any) {

@@ -45,6 +50,30 @@ class UpdateService {
});
}

/** Raeumt alte heruntergeladene APK-Dateien aus dem Cache auf. */
private async cleanupOldApks(): Promise<void> {
try {
const files = await RNFS.readDir(RNFS.CachesDirectoryPath);
const apks = files.filter(f => /\.apk$/i.test(f.name));
let freed = 0;
for (const f of apks) {
try {
const size = parseInt(f.size as any, 10) || 0;
await RNFS.unlink(f.path);
freed += size;
console.log(`[Update] Alte APK geloescht: ${f.name} (${(size / 1024 / 1024).toFixed(1)}MB)`);
} catch (err: any) {
console.warn(`[Update] APK-Loeschen fehlgeschlagen: ${f.name} (${err?.message || err})`);
}
}
if (apks.length > 0) {
console.log(`[Update] Cleanup fertig: ${apks.length} APKs entfernt, ${(freed / 1024 / 1024).toFixed(1)}MB freigegeben`);
}
} catch (err: any) {
console.warn(`[Update] Cleanup-Fehler: ${err?.message || err}`);
}
}

/** Bei App-Start Update pruefen */
checkForUpdate(): void {
if (this.checking) return;

@@ -111,6 +140,10 @@ class UpdateService {
});
});

// Vor dem Schreiben alte APKs im Cache wegraeumen — falls mehrere
// Updates in einer Session gezogen werden
await this.cleanupOldApks();

// Base64 als APK-Datei speichern
const destPath = `${RNFS.CachesDirectoryPath}/${apkData.fileName}`;
await RNFS.writeFile(destPath, apkData.base64, 'base64');
@@ -1,21 +1,26 @@
/**
* Gespraechsmodus / Wake Word Service
*
* Wake-Word-Engine: openWakeWord (https://github.com/dscripka/openWakeWord),
* komplett on-device via ONNX Runtime in Native-Kotlin (siehe
* OpenWakeWordModule.kt + assets/openwakeword/). Kein API-Key, kein Cloud-
* Roundtrip, kein Cent Lizenzgebuehren.
*
* Drei Zustaende:
* off — Ohr aus, nichts laeuft
* armed — Ohr aktiv, Porcupine hoert passiv auf das Wake-Word.
* Das Mikro ist von Porcupine belegt; AudioRecorder ist aus.
* conversing — Wake-Word getriggert (oder Ohr-Tap ohne Wake-Word):
* aktive Konversation. Porcupine pausiert (gibt Mikro frei),
* armed — Ohr aktiv, openWakeWord hoert passiv auf das Wake-Word.
* Das Mikro ist von OpenWakeWord belegt; AudioRecorder ist aus.
* conversing — Wake-Word getriggert (oder Ohr-Tap manuell):
* aktive Konversation. OpenWakeWord pausiert (gibt Mikro frei),
* AudioRecorder uebernimmt fuer die Aufnahme.
* Nach jeder ARIA-Antwort oeffnet das Mikro fuer X Sekunden
* (Conversation-Window). Stille im Fenster → zurueck zu armed.
*
* Wake-Word fallback: ist kein Picovoice-Access-Key gesetzt, geht 'start'
* direkt in 'conversing' (klassischer Gespraechsmodus). 'endConversation'
* geht dann nach 'off' statt 'armed'.
* Faellt das Native-Modul aus (alte App-Version, ONNX-Init-Fehler), geht
* 'start' direkt in 'conversing' (klassischer Direkt-Aufnahme-Modus).
*/

import { NativeEventEmitter, NativeModules, ToastAndroid } from 'react-native';
import AsyncStorage from '@react-native-async-storage/async-storage';

type WakeWordCallback = () => void;

@@ -23,85 +28,113 @@ type StateCallback = (state: WakeWordState) => void;

export type WakeWordState = 'off' | 'armed' | 'conversing';

export const WAKE_ACCESS_KEY_STORAGE = 'aria_wake_access_key';
export const WAKE_KEYWORD_STORAGE = 'aria_wake_keyword';

/** Built-In Keywords von Picovoice — pre-trained, sofort einsetzbar.
* Custom Keywords (z.B. "ARIA") brauchen ein .ppn File aus der Picovoice
* Console — wird spaeter ueber Diagnostic uploadbar. */
export const BUILTIN_KEYWORDS = [
'jarvis',
/** Verfuegbare Wake-Words — entsprechen den .onnx Dateien in
* android/app/src/main/assets/openwakeword/. Custom-Keywords (eigenes
* Training via openwakeword Notebook) muessen aktuell als Asset eingebaut
* werden — Diagnostic-Upload ist Phase 2. */
export const WAKE_KEYWORDS = [
'hey_jarvis',
'computer',
'picovoice',
'porcupine',
'bumblebee',
'terminator',
'alexa',
'hey google',
'ok google',
'hey siri',
'hey_mycroft',
'hey_rhasspy',
] as const;
export type BuiltinKeyword = typeof BUILTIN_KEYWORDS[number];
export const DEFAULT_KEYWORD: BuiltinKeyword = 'jarvis';
export type WakeKeyword = typeof WAKE_KEYWORDS[number];
export const DEFAULT_KEYWORD: WakeKeyword = 'hey_jarvis';

/** Hilfs-Mapping fuer die Anzeige im UI. */
export const KEYWORD_LABELS: Record<WakeKeyword, string> = {
hey_jarvis: 'Hey Jarvis',
computer: 'Computer',
alexa: 'Alexa',
hey_mycroft: 'Hey Mycroft',
hey_rhasspy: 'Hey Rhasspy',
};

// Detection-Tuning — kann in Settings spaeter konfigurierbar werden.
const DEFAULT_THRESHOLD = 0.5;
const DEFAULT_PATIENCE = 2;
const DEFAULT_DEBOUNCE_MS = 1500;

interface OpenWakeWordModule {
init(modelName: string, threshold: number, patience: number, debounceMs: number): Promise<boolean>;
start(): Promise<boolean>;
stop(): Promise<boolean>;
dispose(): Promise<boolean>;
isAvailable(): Promise<boolean>;
}

const { OpenWakeWord } = NativeModules as { OpenWakeWord?: OpenWakeWordModule };

class WakeWordService {
private state: WakeWordState = 'off';
private wakeCallbacks: WakeWordCallback[] = [];
private stateCallbacks: StateCallback[] = [];

// Picovoice Manager (lazy, da Native Module nicht in jedem Build verfuegbar ist)
private porcupine: any = null;
private accessKey: string = '';
private keyword: string = DEFAULT_KEYWORD;
private keyword: WakeKeyword = DEFAULT_KEYWORD;
private nativeReady: boolean = false;
private initInProgress: Promise<boolean> | null = null;
private eventSub: { remove: () => void } | null = null;

/** Beim App-Start aufrufen — laedt Settings, baut Porcupine wenn Key da ist. */
/** Beim App-Start aufrufen — laedt Settings, baut Native-Modul. */
async loadFromStorage(): Promise<void> {
try {
const k = await AsyncStorage.getItem(WAKE_ACCESS_KEY_STORAGE);
const w = await AsyncStorage.getItem(WAKE_KEYWORD_STORAGE);
this.accessKey = (k || '').trim();
this.keyword = (w || DEFAULT_KEYWORD).trim();
if (this.accessKey) {
// Vorinitialisieren — wirft sich nicht durch wenn etwas fehlt
await this.initPorcupine();
}
const wt = (w || DEFAULT_KEYWORD).trim() as WakeKeyword;
this.keyword = (WAKE_KEYWORDS as readonly string[]).includes(wt) ? wt : DEFAULT_KEYWORD;
await this.initNative();
} catch (err) {
console.warn('[WakeWord] loadFromStorage', err);
}
}

/** Settings-Wechsel — neuer Key oder Keyword. Re-Init Porcupine. */
async configure(accessKey: string, keyword: string): Promise<boolean> {
this.accessKey = (accessKey || '').trim();
this.keyword = (keyword || DEFAULT_KEYWORD).trim();
await AsyncStorage.setItem(WAKE_ACCESS_KEY_STORAGE, this.accessKey);
await AsyncStorage.setItem(WAKE_KEYWORD_STORAGE, this.keyword);
/** Settings-Wechsel: anderes Wake-Word. Re-Init des Native-Moduls. */
async configure(keyword: string): Promise<boolean> {
const next: WakeKeyword = (WAKE_KEYWORDS as readonly string[]).includes(keyword)
? (keyword as WakeKeyword)
: DEFAULT_KEYWORD;
this.keyword = next;
await AsyncStorage.setItem(WAKE_KEYWORD_STORAGE, next);

// Laufende Instanz stoppen
await this.disposePorcupine();
if (!this.accessKey) return false;

// Neu initialisieren
return this.initPorcupine();
// Laufende Instanz stoppen + neu initialisieren
await this.disposeNative();
const ok = await this.initNative();
if (!ok) {
ToastAndroid.show(
`Wake-Word "${KEYWORD_LABELS[next]}" konnte nicht initialisiert werden — Logs pruefen`,
ToastAndroid.LONG,
);
}
return ok;
}

private async initPorcupine(): Promise<boolean> {
private async initNative(): Promise<boolean> {
if (!OpenWakeWord) {
console.warn('[WakeWord] OpenWakeWord Native-Modul nicht verfuegbar — Direkt-Aufnahme-Fallback aktiv');
this.nativeReady = false;
return false;
}
if (this.initInProgress) return this.initInProgress;
this.initInProgress = (async () => {
try {
const { PorcupineManager } = require('@picovoice/porcupine-react-native');
// Built-In Keyword-Identifier sind lower-case strings im SDK
this.porcupine = await PorcupineManager.fromBuiltInKeywords(
this.accessKey,
[this.keyword],
(_keywordIndex: number) => this.onWakeDetected(),
);
console.log('[WakeWord] Porcupine init OK (keyword=%s)', this.keyword);
await OpenWakeWord.init(this.keyword, DEFAULT_THRESHOLD, DEFAULT_PATIENCE, DEFAULT_DEBOUNCE_MS);
// Subscribe nur einmal
if (!this.eventSub) {
const emitter = new NativeEventEmitter(NativeModules.OpenWakeWord);
this.eventSub = emitter.addListener('WakeWordDetected', () => {
console.log('[WakeWord] Native Detection-Event empfangen');
this.onWakeDetected().catch(err =>
console.warn('[WakeWord] onWakeDetected crashed:', err));
});
}
this.nativeReady = true;
console.log('[WakeWord] Init OK (model=%s)', this.keyword);
return true;
} catch (err) {
console.warn('[WakeWord] Porcupine init fehlgeschlagen:', err);
this.porcupine = null;
} catch (err: any) {
console.warn('[WakeWord] Init fehlgeschlagen:', err?.message || err);
this.nativeReady = false;
return false;
} finally {
this.initInProgress = null;

@@ -110,30 +143,39 @@ class WakeWordService {
return this.initInProgress;
}

private async disposePorcupine() {
if (this.porcupine) {
try { await this.porcupine.stop(); } catch {}
try { await this.porcupine.delete(); } catch {}
this.porcupine = null;
}
private async disposeNative(): Promise<void> {
if (!OpenWakeWord) return;
try { await OpenWakeWord.dispose(); } catch {}
this.nativeReady = false;
}

/** Ohr-Button gedrueckt — startet passives Lauschen oder direkt Konversation. */
async start(): Promise<boolean> {
if (this.state !== 'off') return true;
if (this.porcupine) {
// Passives Lauschen via Porcupine
if (this.nativeReady && OpenWakeWord) {
try {
await this.porcupine.start();
console.log('[WakeWord] armed — warte auf Wake Word "%s"', this.keyword);
await OpenWakeWord.start();
console.log('[WakeWord] armed — warte auf "%s"', this.keyword);
ToastAndroid.show(`Lausche auf "${KEYWORD_LABELS[this.keyword]}"`, ToastAndroid.SHORT);
this.setState('armed');
return true;
} catch (err) {
console.warn('[WakeWord] Porcupine start fehlgeschlagen — Fallback Direkt-Konversation:', err);
} catch (err: any) {
console.warn('[WakeWord] start fehlgeschlagen — Fallback Direkt-Aufnahme:',
err?.message || err);
ToastAndroid.show(
`Wake-Word-Start failed: ${err?.message || err}`,
ToastAndroid.LONG,
);
}
} else {
console.warn('[WakeWord] Native-Modul nicht bereit — Direkt-Aufnahme-Fallback');
ToastAndroid.show(
'Wake-Word nicht aktiv — direkte Aufnahme startet (Mikro hoert mit)',
ToastAndroid.LONG,
);
}
// Fallback: direkt in die Konversation
console.log('[WakeWord] Konversation startet sofort (kein Wake-Word)');
// Fallback: direkt in Konversation
console.log('[WakeWord] Direkt-Aufnahme startet (kein Wake-Word)');
this.setState('conversing');
setTimeout(() => {
if (this.state === 'conversing') {

@@ -146,20 +188,20 @@ class WakeWordService {
/** Komplett ausschalten (Ohr abschalten) */
async stop(): Promise<void> {
console.log('[WakeWord] Ohr deaktiviert');
if (this.porcupine) {
try { await this.porcupine.stop(); } catch {}
if (this.nativeReady && OpenWakeWord) {
try { await OpenWakeWord.stop(); } catch {}
}
this.setState('off');
}

/** Wake-Word getriggert: Porcupine pausieren, Konversation starten. */
/** Wake-Word getriggert: Native-Modul pausieren, Konversation starten. */
private async onWakeDetected(): Promise<void> {
console.log('[WakeWord] Wake-Word "%s" erkannt!', this.keyword);
if (this.porcupine) {
try { await this.porcupine.stop(); } catch {}
ToastAndroid.show(`Wake-Word "${KEYWORD_LABELS[this.keyword]}" erkannt — sprich jetzt`, ToastAndroid.SHORT);
if (this.nativeReady && OpenWakeWord) {
try { await OpenWakeWord.stop(); } catch {}
}
this.setState('conversing');
// kurz warten damit Mikrofon frei ist
setTimeout(() => {
if (this.state === 'conversing') {
this.wakeCallbacks.forEach(cb => cb());

@@ -168,15 +210,16 @@ class WakeWordService {
}

/** Konversation beenden — User hat im Window nichts gesagt.
* Mit Wake-Word: zurueck zu 'armed' (Porcupine wieder an).
* Mit Wake-Word: zurueck zu 'armed' (Listener wieder an).
* Ohne: zurueck zu 'off'.
*/
async endConversation(): Promise<void> {
if (this.state !== 'conversing') return;
if (this.porcupine && this.accessKey) {
if (this.nativeReady && OpenWakeWord) {
try {
await this.porcupine.start();
await OpenWakeWord.start();
console.log('[WakeWord] Konversation zu Ende — zurueck zu armed');
ToastAndroid.show(`Lausche wieder auf "${KEYWORD_LABELS[this.keyword]}"`, ToastAndroid.SHORT);
this.setState('armed');
return;
} catch (err) {

@@ -184,6 +227,7 @@ class WakeWordService {
}
}
console.log('[WakeWord] Konversation zu Ende — Ohr aus');
ToastAndroid.show('Mikro aus', ToastAndroid.SHORT);
this.setState('off');
}

@@ -208,10 +252,10 @@ class WakeWordService {
}

hasWakeWord(): boolean {
return !!this.porcupine;
return this.nativeReady;
}

getKeyword(): string {
getKeyword(): WakeKeyword {
return this.keyword;
}
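The three tuning constants follow openWakeWord's usual post-processing: `threshold` is the per-frame score cutoff, `patience` demands that many consecutive frames above the cutoff, and the debounce suppresses immediate re-triggers. A sketch of those semantics, assuming the Kotlin module implements them conventionally (illustration only, not `OpenWakeWordModule.kt`'s actual code):

```ts
// Frame-wise gate: fire once per utterance rather than once per frame.
class DetectionGate {
  private streak = 0;
  private lastFireMs = -Infinity;

  constructor(
    private threshold = 0.5,   // DEFAULT_THRESHOLD: per-frame score cutoff
    private patience = 2,      // DEFAULT_PATIENCE: consecutive frames required
    private debounceMs = 1500, // DEFAULT_DEBOUNCE_MS: re-trigger lockout
  ) {}

  onScore(score: number, nowMs: number): boolean {
    this.streak = score >= this.threshold ? this.streak + 1 : 0;
    if (this.streak >= this.patience && nowMs - this.lastFireMs >= this.debounceMs) {
      this.lastFireMs = nowMs;
      this.streak = 0;
      return true; // → emit 'WakeWordDetected'
    }
    return false;
  }
}
```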
@@ -541,9 +541,25 @@ class ARIABridge:
# Wird fuer die direkt folgende ARIA-Antwort genutzt und dann zurueckgesetzt.
# So kann jedes Geraet seine bevorzugte Stimme bekommen (pro Request).
self._next_voice_override: Optional[str] = None
# Gleiche Logik fuer die Wiedergabegeschwindigkeit (F5-TTS speed-Param,
# App-Setting aria_tts_speed, 1.0 = normal).
self._next_speed_override: Optional[float] = None
# STT-Requests die aktuell auf Antwort von der whisper-bridge (Gamebox) warten.
# requestId → Future mit dem Text (oder None bei Fehler).
self._pending_stt: dict[str, asyncio.Future] = {}
# whisper-bridge service_status: True wenn ready, False/None wenn loading/unbekannt.
# Beeinflusst das Timeout fuer stt_request — bei "loading" warten wir laenger,
# weil das Modell beim ersten Request noch ~1-2 Min runtergeladen werden kann.
self._remote_stt_ready: bool = False
# Pending Files: wenn die App ein Bild + Text gleichzeitig schickt, kommen
# zwei separate RVS-Events ('file' und 'chat') — wir buffern die Files
# kurz und mergen sie mit dem nachfolgenden Chat-Text zu einer einzigen
# Anfrage an aria-core. Sonst antwortet ARIA zweimal (einmal "warte auf
# Anweisung" beim file, einmal auf den Chat-Text).
# Liste von Tuples: (file_path, name, file_type, size_kb, width, height)
self._pending_files: list[tuple[str, str, str, int, int, int]] = []
self._pending_files_flush_task: Optional[asyncio.Task] = None
self._PENDING_FILES_WINDOW_SEC: float = 0.8

def initialize(self) -> None:
"""Initialisiert alle Komponenten.

@@ -900,12 +916,13 @@ class ARIABridge:
logger.info("[core] TTS unterdrueckt (Modus: %s)", self.current_mode.config.name)
return

# Voice bestimmen: App-Override fuer diesen Request > globale Default-Voice
# Voice bestimmen: App-Override (gesetzt durch letzten chat-Event) > globale
# Default-Voice. Der Override wird NICHT pro Antwort verbraucht — sonst nutzt
# eine Multi-Turn-Antwort von ARIA (Tool-Use + finale Antwort) ab dem zweiten
# TTS-Call wieder die alte Default-Stimme. Der Override bleibt gueltig bis
# zum naechsten chat-Event, wo er entweder ueberschrieben oder geloescht wird.
xtts_voice = self._next_voice_override or getattr(self, 'xtts_voice', '')
# Override verbrauchen (gilt nur fuer genau diese naechste Antwort)
if self._next_voice_override:
logger.info("[core] Nutze Voice-Override: %s", self._next_voice_override)
self._next_voice_override = None
xtts_speed = self._next_speed_override or 1.0

tts_text = tts_text_preview or text
if not tts_text:

@@ -922,13 +939,15 @@ class ARIABridge:
"payload": {
"text": tts_text,
"voice": xtts_voice,
"speed": xtts_speed,
"language": "de",
"requestId": xtts_request_id,
"messageId": message_id,
},
"timestamp": int(asyncio.get_event_loop().time() * 1000),
})
logger.info("[core] XTTS-Request gesendet (%s): '%s'", xtts_voice or "default", tts_text[:60])
logger.info("[core] XTTS-Request gesendet (voice=%s, speed=%.2fx): '%s'",
xtts_voice or "default", xtts_speed, tts_text[:60])
except Exception as e:
logger.error("[core] XTTS-Request fehlgeschlagen: %s — kein Audio", e)

@@ -1009,6 +1028,51 @@ class ARIABridge:
except Exception as e:
logger.debug("[session] Diagnostic nicht erreichbar (%s) — nutze '%s'", e, self._session_key)

def _build_pending_files_message(self, user_text: str) -> str:
"""Baut eine Anweisung an aria-core aus den gepufferten Files + optionalem
User-Text. user_text leer → 'warte auf Anweisung'-Variante."""
parts: list[str] = []
for fp, name, ftype, kb, w, h in self._pending_files:
dim = f" {w}x{h}px" if (w and h) else ""
kind = "Bild" if ftype.startswith("image/") else "Datei"
parts.append(f"- {kind}: {name}{dim} ({ftype}, {kb}KB) liegt unter {fp}")
files_summary = "\n".join(parts)
n = len(self._pending_files)
anhang = "Anhang" if n == 1 else "Anhaenge"
if user_text:
return (f"Stefan hat dir {n} {anhang} geschickt:\n{files_summary}\n\n"
f"Er sagt dazu: \"{user_text}\"")
return (f"Stefan hat dir {n} {anhang} geschickt:\n{files_summary}\n\n"
f"Warte auf seine Anweisung was du damit tun sollst.")

async def _flush_pending_files_after(self, delay: float) -> None:
"""Wenn nach `delay`s kein chat-Text gekommen ist: Files alleine an
aria-core senden ('warte auf Anweisung'-Variante)."""
try:
await asyncio.sleep(delay)
except asyncio.CancelledError:
return
if not self._pending_files:
return
text = self._build_pending_files_message("")
self._pending_files = []
self._pending_files_flush_task = None
await self.send_to_core(text, source="app-file")

async def _flush_pending_files_with_text(self, user_text: str) -> bool:
"""Wenn ein chat-Text reinkommt waehrend Files gepuffert sind:
Files + Text zu einer einzigen aria-core-Nachricht mergen.
Returns True wenn gemerged wurde (Caller soll dann nicht nochmal senden)."""
if not self._pending_files:
return False
if self._pending_files_flush_task and not self._pending_files_flush_task.done():
self._pending_files_flush_task.cancel()
self._pending_files_flush_task = None
text = self._build_pending_files_message(user_text)
self._pending_files = []
await self.send_to_core(text, source="app-file+chat")
return True

async def send_to_core(self, text: str, source: str = "bridge") -> None:
"""Sendet Text an aria-core (OpenClaw chat.send Protokoll)."""
if self.ws_core is None:
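The pending-files logic is a debounce-and-join: buffer incoming `file` events for a short window, cancel the pending flush when the matching `chat` text arrives, and send exactly one combined message either way. The same flow as a compact TypeScript sketch (the bridge itself is Python; all names here are illustrative):

```ts
type PendingFile = { path: string; name: string };

class AttachmentMerger {
  private pending: PendingFile[] = [];
  private flushTimer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private send: (text: string) => void,
    private windowMs = 800, // mirrors _PENDING_FILES_WINDOW_SEC
  ) {}

  onFile(file: PendingFile): void {
    this.pending.push(file);
    if (this.flushTimer) clearTimeout(this.flushTimer);
    // No chat text inside the window → send the files alone.
    this.flushTimer = setTimeout(() => this.flush(''), this.windowMs);
  }

  // Returns true when files were merged in; the caller must not send again.
  onChat(text: string): boolean {
    if (this.pending.length === 0) return false;
    if (this.flushTimer) clearTimeout(this.flushTimer);
    this.flush(text);
    return true;
  }

  private flush(text: string): void {
    const files = this.pending.map(f => `- ${f.name} at ${f.path}`).join('\n');
    this.pending = [];
    this.flushTimer = null;
    this.send(text ? `${files}\n\n${text}` : files);
  }
}
```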
|
||||
|
|
@ -1154,14 +1218,32 @@ class ARIABridge:
|
|||
if sender in ("aria", "stt"):
|
||||
return
|
||||
text = payload.get("text", "")
|
||||
# Voice-Override fuer die naechste ARIA-Antwort merken
|
||||
voice_override = payload.get("voice", "")
|
||||
if voice_override:
|
||||
self._next_voice_override = voice_override
|
||||
logger.info("[rvs] Voice-Override fuer naechste Antwort: %s", voice_override)
|
||||
# Voice-Override fuer Folgenachrichten setzen — gilt bis zum naechsten
|
||||
# chat-Event. Leerer String "" = explizit Default-Voice (override loeschen).
|
||||
# Field nicht gesendet = vorherigen Override unveraendert lassen (z.B. wenn
|
||||
# cancel_request oder anderer Service die App umgeht).
|
||||
if "voice" in payload:
|
||||
voice_override = payload.get("voice", "") or ""
|
||||
self._next_voice_override = voice_override or None
|
||||
logger.info("[rvs] Voice fuer Antworten: %s",
|
||||
self._next_voice_override or "(Default)")
|
||||
# Speed-Override (TTS-Wiedergabegeschwindigkeit, pro Geraet)
|
||||
if "speed" in payload:
|
||||
try:
|
||||
speed = float(payload.get("speed", 0) or 0)
|
||||
self._next_speed_override = speed if 0.1 <= speed <= 5.0 else None
|
||||
except (TypeError, ValueError):
|
||||
self._next_speed_override = None
|
||||
if text:
|
||||
logger.info("[rvs] App-Chat: '%s'", text[:80])
|
||||
await self.send_to_core(text, source="app")
|
||||
# Wenn Files gerade gepuffert sind (Bild + Text gleichzeitig
|
||||
# gesendet), mergen wir sie zu einer einzigen Anfrage statt
|
||||
# zwei separater send_to_core-Calls.
|
||||
merged = await self._flush_pending_files_with_text(text)
|
||||
if merged:
|
||||
logger.info("[rvs] App-Chat (mit Anhaengen): '%s'", text[:80])
|
||||
else:
|
||||
logger.info("[rvs] App-Chat: '%s'", text[:80])
|
||||
await self.send_to_core(text, source="app")
|
||||
return
|
||||
|
||||
if msg_type == "cancel_request":
|
||||
|
|
@ -1211,8 +1293,14 @@ class ARIABridge:
|
|||
if not text:
|
||||
return
|
||||
tts_text = clean_text_for_tts(text) or text
|
||||
# Voice aus App-Payload gewinnt, sonst global
|
||||
# Voice + Speed aus App-Payload gewinnen, sonst global/default
|
||||
xtts_voice = payload.get("voice", "") or getattr(self, 'xtts_voice', '')
|
||||
try:
|
||||
xtts_speed = float(payload.get("speed", 0) or 0)
|
||||
if not (0.1 <= xtts_speed <= 5.0):
|
||||
xtts_speed = 1.0
|
||||
except (TypeError, ValueError):
|
||||
xtts_speed = 1.0
|
||||
try:
|
||||
xtts_request_id = str(uuid.uuid4())
|
||||
if message_id:
|
||||
|
|
@ -1222,6 +1310,7 @@ class ARIABridge:
|
|||
"payload": {
|
||||
"text": tts_text,
|
||||
"voice": xtts_voice,
|
||||
"speed": xtts_speed,
|
||||
"language": "de",
|
||||
"requestId": xtts_request_id,
|
||||
"messageId": message_id,
|
||||
|
|
@ -1313,70 +1402,54 @@ class ARIABridge:
|
|||
await self.ws_core.send(raw_message)
|
||||
|
||||
elif msg_type == "file":
|
||||
# Datei von der App → als Text-Nachricht an aria-core
|
||||
# Datei von der App: speichern + zu Pending-Queue hinzufuegen.
|
||||
# Wird mit dem nachfolgenden chat-Event (innerhalb PENDING_FILES_WINDOW)
|
||||
# zu einer einzigen aria-core-Anfrage gemerged. Sonst antwortet ARIA
|
||||
# zweimal: einmal "warte auf Anweisung" beim file, einmal auf den Chat.
|
||||
file_name = payload.get("name", "unbekannt")
|
||||
file_type = payload.get("type", "")
|
||||
file_b64 = payload.get("base64", "")
|
||||
file_size = payload.get("size", 0)
|
||||
width = payload.get("width", 0)
|
||||
height = payload.get("height", 0)
|
||||
logger.info("[rvs] Datei empfangen: %s (%s, %dKB)",
|
||||
file_name, file_type, len(file_b64) // 1365 if file_b64 else 0)
|
||||
|
||||
# Shared Volume: /shared/ ist in Bridge UND aria-core gemountet
|
||||
SHARED_DIR = "/shared/uploads"
|
||||
os.makedirs(SHARED_DIR, exist_ok=True)
|
||||
|
||||
if file_b64 and file_type.startswith("image/"):
|
||||
# Bild in Shared Volume speichern
|
||||
if not file_b64:
|
||||
text = f"Stefan hat eine Datei gesendet ({file_name}, {file_type}) aber die Daten sind leer angekommen."
|
||||
await self.send_to_core(text, source="app-file")
|
||||
return
|
||||
|
||||
if file_type.startswith("image/"):
|
||||
ext = ".jpg" if "jpeg" in file_type or "jpg" in file_type else ".png"
|
||||
safe_name = f"img_{int(asyncio.get_event_loop().time())}_{file_name.replace('/', '_')}"
|
||||
file_path = os.path.join(SHARED_DIR, safe_name if safe_name.endswith(ext) else safe_name + ext)
|
||||
with open(file_path, "wb") as f:
|
||||
f.write(base64.b64decode(file_b64))
|
||||
size_kb = len(file_b64) // 1365
|
||||
logger.info("[rvs] Bild gespeichert: %s (%dKB)", file_path, size_kb)
|
||||
# ERST an aria-core senden (wichtigster Schritt)
|
||||
text = (f"Stefan hat dir ein Bild geschickt: {file_name}"
|
||||
f"{f' ({width}x{height}px)' if width else ''}"
|
||||
f", {size_kb}KB."
|
||||
f" Das Bild liegt unter: {file_path}"
|
||||
f" Warte auf Stefans Anweisung was du damit tun sollst.")
|
||||
await self.send_to_core(text, source="app-file")
|
||||
# Dann App informieren (optional, darf nicht crashen)
|
||||
try:
|
||||
await self._send_to_rvs({
|
||||
"type": "file_saved",
|
||||
"payload": {"name": file_name, "serverPath": file_path, "mimeType": file_type},
|
||||
"timestamp": int(asyncio.get_event_loop().time() * 1000),
|
||||
})
|
||||
except Exception as e:
|
||||
logger.warning("[rvs] file_saved konnte nicht an App gesendet werden: %s", e)
|
||||
elif file_b64:
|
||||
# Andere Datei in Shared Volume speichern
|
||||
else:
|
||||
safe_name = f"file_{int(asyncio.get_event_loop().time())}_{file_name.replace('/', '_')}"
|
||||
file_path = os.path.join(SHARED_DIR, safe_name)
|
||||
with open(file_path, "wb") as f:
|
||||
f.write(base64.b64decode(file_b64))
|
||||
size_kb = len(file_b64) // 1365
|
||||
logger.info("[rvs] Datei gespeichert: %s (%dKB)", file_path, size_kb)
|
||||
# ERST an aria-core senden
|
||||
text = (f"Stefan hat dir eine Datei geschickt: {file_name}"
|
||||
f" ({file_type}, {size_kb}KB)."
|
||||
f" Die Datei liegt unter: {file_path}"
|
||||
f" Warte auf Stefans Anweisung was du damit tun sollst.")
|
||||
await self.send_to_core(text, source="app-file")
|
||||
try:
|
||||
await self._send_to_rvs({
|
||||
"type": "file_saved",
|
||||
"payload": {"name": file_name, "serverPath": file_path, "mimeType": file_type},
|
||||
"timestamp": int(asyncio.get_event_loop().time() * 1000),
|
||||
})
|
||||
except Exception as e:
|
||||
logger.warning("[rvs] file_saved konnte nicht an App gesendet werden: %s", e)
|
||||
else:
|
||||
text = f"Stefan hat eine Datei gesendet ({file_name}, {file_type}) aber die Daten sind leer angekommen."
|
||||
await self.send_to_core(text, source="app-file")
|
||||
with open(file_path, "wb") as f:
|
||||
f.write(base64.b64decode(file_b64))
|
||||
size_kb = len(file_b64) // 1365
|
||||
logger.info("[rvs] Datei gespeichert: %s (%dKB)", file_path, size_kb)
|
||||
|
||||
# In Pending-Queue + Flush-Timer (anti-spam Buffering)
|
||||
self._pending_files.append((file_path, file_name, file_type, size_kb, int(width or 0), int(height or 0)))
|
||||
if self._pending_files_flush_task and not self._pending_files_flush_task.done():
|
||||
self._pending_files_flush_task.cancel()
|
||||
self._pending_files_flush_task = asyncio.create_task(
|
||||
self._flush_pending_files_after(self._PENDING_FILES_WINDOW_SEC)
|
||||
)
|
||||
|
||||
try:
|
||||
await self._send_to_rvs({
|
||||
"type": "file_saved",
|
||||
"payload": {"name": file_name, "serverPath": file_path, "mimeType": file_type},
|
||||
"timestamp": int(asyncio.get_event_loop().time() * 1000),
|
||||
})
|
||||
except Exception as e:
|
||||
logger.warning("[rvs] file_saved konnte nicht an App gesendet werden: %s", e)
|
||||
|
||||
elif msg_type == "file_request":
|
||||
# App fordert eine Datei an (Re-Download nach Cache-Leerung)
|
||||
|
|
@ -1415,11 +1488,18 @@ class ARIABridge:
|
|||
if not audio_b64:
|
||||
logger.warning("[rvs] Audio ohne Daten empfangen")
|
||||
return
|
||||
# Voice-Override fuer die kommende ARIA-Antwort (App-lokal gewaehlt)
|
||||
voice_override = payload.get("voice", "")
|
||||
if voice_override:
|
||||
self._next_voice_override = voice_override
|
||||
logger.info("[rvs] Voice-Override (via Audio): %s", voice_override)
|
||||
# Voice-Override fuer Folgenachrichten — gleiche Semantik wie beim chat-Event.
|
||||
if "voice" in payload:
|
||||
voice_override = payload.get("voice", "") or ""
|
||||
self._next_voice_override = voice_override or None
|
||||
logger.info("[rvs] Voice fuer Antworten (via Audio): %s",
|
||||
self._next_voice_override or "(Default)")
|
||||
if "speed" in payload:
|
||||
try:
|
||||
speed = float(payload.get("speed", 0) or 0)
|
||||
self._next_speed_override = speed if 0.1 <= speed <= 5.0 else None
|
||||
except (TypeError, ValueError):
|
||||
self._next_speed_override = None
|
||||
logger.info("[rvs] Audio empfangen: %s, %dms, %dKB",
|
||||
mime_type, duration_ms, len(audio_b64) // 1365)
|
||||
asyncio.create_task(self._process_app_audio(audio_b64, mime_type))
|
||||
|
|
@@ -1442,13 +1522,41 @@ class ARIABridge:
                future.set_result(text)
            return

        elif msg_type == "service_status":
            # The Gamebox bridges (whisper / f5tts) report their load status.
            # We use this for the dynamic STT timeout: while whisper is stuck
            # in 'loading' we give the bridge more time (the model download
            # can take 1-2 minutes) instead of falling back locally after 45s.
            svc = payload.get("service", "")
            state = payload.get("state", "")
            if svc == "whisper":
                was_ready = self._remote_stt_ready
                self._remote_stt_ready = (state == "ready")
                if self._remote_stt_ready != was_ready:
                    logger.info("[rvs] whisper-bridge -> %s", state)
            return

        elif msg_type == "config_request":
            # Another bridge (whisper/f5tts) asks for the current voice
            # config — happens when it connects, because otherwise it does
            # not know the Diagnostic settings. We broadcast the persisted
            # config (also on aria-bridge's own normal connect, but at that
            # point the other bridge may not have been connected yet).
            requester = payload.get("service", "?")
            logger.info("[rvs] config_request von %s — broadcaste Voice-Config", requester)
            asyncio.create_task(self._broadcast_persisted_config())
            return

        else:
            logger.debug("[rvs] Unbekannter Typ: %s", msg_type)

    # STT orchestration: remote first (Gamebox), local fallback.
    # Timeout chosen generously so that even a first-time model load
    # on the Gamebox (up to ~30s for large-v3) gets through.
    _STT_REMOTE_TIMEOUT_S = 45.0
    # Two timeouts:
    #   ready=True  → 45s is enough even for long audio
    #   ready=False → 300s, because the model may still be downloading
    #                 (large-v3 ~3GB, can take 1-2 minutes on the Gamebox).
    _STT_REMOTE_TIMEOUT_READY_S = 45.0
    _STT_REMOTE_TIMEOUT_LOADING_S = 300.0

    async def _process_app_audio(self, audio_b64: str, mime_type: str) -> None:
        """App audio → STT → aria-core. Primarily via whisper-bridge (RVS), local fallback."""

@@ -1514,7 +1622,12 @@ class ARIABridge:
            if not ok:
                logger.warning("[rvs] stt_request konnte nicht gesendet werden — skip Remote")
                return None
            return await asyncio.wait_for(future, timeout=self._STT_REMOTE_TIMEOUT_S)
            timeout_s = (self._STT_REMOTE_TIMEOUT_READY_S
                         if self._remote_stt_ready
                         else self._STT_REMOTE_TIMEOUT_LOADING_S)
            logger.info("[rvs] STT-Timeout %ds (whisper-bridge %s)",
                        int(timeout_s), "ready" if self._remote_stt_ready else "loading")
            return await asyncio.wait_for(future, timeout=timeout_s)
        except asyncio.TimeoutError:
            logger.warning("[rvs] Remote-STT Timeout (%.0fs)", timeout_s)
            return None

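The orchestration described above — remote STT first, with a timeout that depends on the whisper-bridge's reported state, then a local fallback — condenses to the following sketch. The `remote_stt`/`local_stt` callables are placeholders for the bridge's actual coroutines, not names from the repo:

```python
import asyncio

REMOTE_TIMEOUT_READY_S = 45.0     # model already resident on the Gamebox
REMOTE_TIMEOUT_LOADING_S = 300.0  # cold start: large-v3 may still be downloading

async def transcribe(remote_stt, local_stt, remote_ready: bool) -> str:
    """Remote first with a state-dependent timeout, local fallback after."""
    timeout_s = REMOTE_TIMEOUT_READY_S if remote_ready else REMOTE_TIMEOUT_LOADING_S
    try:
        return await asyncio.wait_for(remote_stt(), timeout=timeout_s)
    except asyncio.TimeoutError:
        return await local_stt()
```

The `remote_ready` flag is what the `service_status` broadcasts keep up to date; that is the whole point of the pattern — the timeout is a function of the remote side's lifecycle, not a fixed constant.
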
@@ -136,6 +136,34 @@
        </div>
    </div>

    <!-- Voice-Preview Modal -->
    <div id="voice-preview-modal" style="display:none;position:fixed;inset:0;z-index:1000;background:rgba(0,0,0,0.7);align-items:center;justify-content:center;">
      <div style="background:#1A1A2E;border:1px solid #2A2A3E;border-radius:10px;padding:20px;max-width:560px;width:90%;display:flex;flex-direction:column;gap:12px;">
        <div style="display:flex;align-items:center;justify-content:space-between;">
          <h3 style="margin:0;color:#fff;">Stimmen-Preview: <span id="voice-preview-name">—</span></h3>
          <button onclick="closeVoicePreview()" style="background:none;border:none;color:#8888AA;font-size:22px;cursor:pointer;">×</button>
        </div>
        <textarea id="voice-preview-text" rows="4"
                  style="background:#0D0D1A;border:1px solid #2A2A3E;border-radius:6px;padding:10px;color:#fff;font-size:13px;resize:vertical;"></textarea>

        <div style="display:flex;align-items:center;gap:10px;font-size:12px;color:#8888AA;">
          <span style="min-width:120px;">Geschwindigkeit:</span>
          <button onclick="adjustPreviewSpeed(-0.1)" class="btn secondary" style="padding:4px 10px;font-size:12px;">−0.1</button>
          <span id="voice-preview-speed-value" style="min-width:52px;text-align:center;color:#fff;font-weight:600;">1.0 x</span>
          <button onclick="adjustPreviewSpeed(0.1)" class="btn secondary" style="padding:4px 10px;font-size:12px;">+0.1</button>
          <span style="color:#555570;font-size:11px;">(nur fuer dieses Modal, wird nicht gespeichert)</span>
        </div>

        <div style="display:flex;gap:8px;align-items:center;">
          <button id="voice-preview-play" onclick="playVoicePreview()" class="btn primary" style="padding:8px 16px;">
            ▶ Abspielen
          </button>
          <span id="voice-preview-status" style="color:#8888AA;font-size:11px;flex:1;"></span>
        </div>
        <audio id="voice-preview-audio" controls style="width:100%;display:none;"></audio>
      </div>
    </div>

    <!-- Disk space warning (set dynamically) -->
    <div id="disk-banner" style="display:none;position:sticky;top:0;z-index:500;padding:10px 14px;border-radius:0;margin:-16px -16px 12px -16px;font-size:13px;">
      <div style="display:flex;align-items:center;gap:10px;flex-wrap:wrap;">

@@ -469,23 +497,25 @@
          Hardcoded Defaults: F5TTS_v1_Base, cfg_strength=2.5, nfe_step=32.
        </div>

        <label style="color:#8888AA;font-size:12px;">Modell-ID:</label>
        <label style="color:#8888AA;font-size:12px;">
          Modell-Architektur (F5TTS_v1_Base = Default multilingual, F5TTS_Base = fuer die meisten Fine-Tunes):
        </label>
        <input type="text" id="diag-f5tts-model"
               placeholder="F5TTS_v1_Base"
               style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">

        <label style="color:#8888AA;font-size:12px;">
          Custom Checkpoint (HF-Repo "user/repo" oder Container-Pfad, leer = Default):
          Custom Checkpoint — HF-Pfad (hf://user/repo/file) oder lokaler Container-Pfad. Leer = Default.
        </label>
        <input type="text" id="diag-f5tts-ckpt"
               placeholder="z.B. aoxo/F5-TTS-German"
               placeholder="z.B. hf://aihpi/F5-TTS-German/F5TTS_Base/model_365000.safetensors"
               style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">

        <label style="color:#8888AA;font-size:12px;">
          Custom Vocab (passend zum Checkpoint, optional):
          Custom Vocab — muss zum Checkpoint passen. Leer = Default.
        </label>
        <input type="text" id="diag-f5tts-vocab"
               placeholder="leer = Default"
               placeholder="z.B. hf://aihpi/F5-TTS-German/vocab.txt"
               style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">

        <div style="display:flex;gap:12px;">

@@ -928,6 +958,24 @@
        return;
      }

      if (msg.type === 'voice_preview_audio') {
        const statusEl = document.getElementById('voice-preview-status');
        const audio = document.getElementById('voice-preview-audio');
        const playBtn = document.getElementById('voice-preview-play');
        if (playBtn) playBtn.disabled = false;
        if (msg.error) {
          if (statusEl) statusEl.textContent = '❌ Fehler: ' + msg.error;
          return;
        }
        if (msg.base64 && audio) {
          audio.src = 'data:audio/wav;base64,' + msg.base64;
          audio.style.display = 'block';
          audio.play().catch(() => {});
          if (statusEl) statusEl.textContent = '✅ fertig';
        }
        return;
      }

      if (msg.type === 'voice_ready') {
        const v = msg.payload?.voice || '';
        const err = msg.payload?.error;

@@ -1577,16 +1625,75 @@
      html += '<div style="display:flex;flex-direction:column;gap:4px;">';
      for (const v of voices) {
        const esc = (s) => String(s).replace(/[&<>"']/g, c => ({ "&":"&amp;", "<":"&lt;", ">":"&gt;", '"':"&quot;", "'":"&#39;" }[c]));
        const jsName = esc(v.name).replace(/'/g, "\\'");
        html += `<div style="display:flex;align-items:center;gap:8px;background:#1E1E2E;border-radius:4px;padding:4px 8px;font-size:12px;">`
          + `<span style="flex:1;color:#E0E0F0;">${esc(v.name)}</span>`
          + `<span style="color:#555570;font-size:10px;">${(v.size/1024).toFixed(0)}KB</span>`
          + `<button class="btn secondary" onclick="deleteXttsVoice('${esc(v.name).replace(/'/g, "\\'")}')" style="padding:2px 8px;font-size:10px;color:#FF6B6B;" title="Stimme loeschen">X</button>`
          + `<button class="btn secondary" onclick="openVoicePreview('${jsName}')" style="padding:2px 8px;font-size:12px;" title="Stimme anhoeren">▶</button>`
          + `<button class="btn secondary" onclick="deleteXttsVoice('${jsName}')" style="padding:2px 8px;font-size:10px;color:#FF6B6B;" title="Stimme loeschen">X</button>`
          + `</div>`;
      }
      html += '</div>';
      box.innerHTML = html;
    }

    // ── Voice Preview Modal ─────────────────────────
    const VOICE_PREVIEW_DEFAULT = 'Hallo, ich bin ARIA. Das hier ist ein kleiner Test damit du meine Stimme beurteilen kannst.';
    const PREVIEW_SPEED_DEFAULT = 1.0;
    const PREVIEW_SPEED_MIN = 0.1;
    const PREVIEW_SPEED_MAX = 5.0;
    let currentPreviewVoice = '';
    let currentPreviewSpeed = PREVIEW_SPEED_DEFAULT;

    function _refreshPreviewSpeedLabel() {
      const el = document.getElementById('voice-preview-speed-value');
      if (el) el.textContent = currentPreviewSpeed.toFixed(1) + ' x';
    }

    function adjustPreviewSpeed(delta) {
      const next = Math.round((currentPreviewSpeed + delta) * 10) / 10;
      if (next < PREVIEW_SPEED_MIN || next > PREVIEW_SPEED_MAX) return;
      currentPreviewSpeed = next;
      _refreshPreviewSpeedLabel();
    }

    function openVoicePreview(name) {
      currentPreviewVoice = name;
      // Reset the speed on every open — deliberately not persisted
      currentPreviewSpeed = PREVIEW_SPEED_DEFAULT;
      _refreshPreviewSpeedLabel();
      document.getElementById('voice-preview-name').textContent = name;
      // Reset the text on every open
      document.getElementById('voice-preview-text').value = VOICE_PREVIEW_DEFAULT;
      document.getElementById('voice-preview-status').textContent = '';
      const audio = document.getElementById('voice-preview-audio');
      audio.style.display = 'none';
      audio.src = '';
      document.getElementById('voice-preview-modal').style.display = 'flex';
    }

    function closeVoicePreview() {
      document.getElementById('voice-preview-modal').style.display = 'none';
      const audio = document.getElementById('voice-preview-audio');
      try { audio.pause(); } catch {}
    }

    function playVoicePreview() {
      const text = (document.getElementById('voice-preview-text').value || '').trim();
      if (!text) {
        document.getElementById('voice-preview-status').textContent = 'Text leer';
        return;
      }
      document.getElementById('voice-preview-status').textContent = '⏳ Rendere...';
      document.getElementById('voice-preview-play').disabled = true;
      send({
        action: 'preview_voice',
        voice: currentPreviewVoice,
        text,
        speed: currentPreviewSpeed,
      });
    }

    function deleteXttsVoice(name) {
      if (!confirm(`Stimme "${name}" endgueltig loeschen?`)) return;
      send({ action: 'xtts_delete_voice', name });

@@ -653,6 +653,9 @@ function connectRVS(forcePlain) {
        log("info", "rvs", `service_status ${svc} ${state}${model ? ` (${model})` : ""}`);
      }
      broadcast({ type: "service_status", payload: msg.payload });
    } else if (msg.type === "audio_pcm" && msg.payload && _previewPending.size > 0) {
      // PCM chunks of a running voice preview — collect them + build a WAV
      _handlePreviewChunk(msg.payload);
    } else {
      log("debug", "rvs", `Nachricht: ${JSON.stringify(msg).slice(0, 150)}`);
    }

@@ -1439,19 +1442,14 @@ wss.on("connection", (ws) => {
        xttsVoice: msg.xttsVoice || "",
      };
      if (msg.whisperModel !== undefined) voiceConfig.whisperModel = msg.whisperModel;
      // F5-TTS tuning fields — drop empty strings so the default applies
      if (msg.f5ttsModel !== undefined) {
        if (msg.f5ttsModel) voiceConfig.f5ttsModel = msg.f5ttsModel;
        else delete voiceConfig.f5ttsModel;
      }
      if (msg.f5ttsCkptFile !== undefined) {
        if (msg.f5ttsCkptFile) voiceConfig.f5ttsCkptFile = msg.f5ttsCkptFile;
        else delete voiceConfig.f5ttsCkptFile;
      }
      if (msg.f5ttsVocabFile !== undefined) {
        if (msg.f5ttsVocabFile) voiceConfig.f5ttsVocabFile = msg.f5ttsVocabFile;
        else delete voiceConfig.f5ttsVocabFile;
      }
      // F5-TTS tuning fields — always set to the value the user sent, even
      // an empty string. Empty = "reset to the hard default". Otherwise the
      // bridge never learns that the user wanted to clear the value (an
      // absent key used to mean 'keep current' → BigVGAN stayed active even
      // though the user had entered an empty value).
      if (msg.f5ttsModel !== undefined) voiceConfig.f5ttsModel = msg.f5ttsModel || "";
      if (msg.f5ttsCkptFile !== undefined) voiceConfig.f5ttsCkptFile = msg.f5ttsCkptFile || "";
      if (msg.f5ttsVocabFile !== undefined) voiceConfig.f5ttsVocabFile = msg.f5ttsVocabFile || "";
      if (msg.f5ttsCfgStrength !== undefined && !isNaN(msg.f5ttsCfgStrength)) {
        voiceConfig.f5ttsCfgStrength = msg.f5ttsCfgStrength;
      }

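The three-way key semantics this hunk establishes (key absent = keep the current value, non-empty = set it, empty string = reset to the default) is easy to get wrong when spread over several if-blocks. A compact way to express the rule, sketched in Python because the f5tts side applies the same semantics — `resolve` and its arguments are illustrative, not code from the repo:

```python
def resolve(payload: dict, key: str, current: str, default: str) -> str:
    """absent -> keep current; empty string -> reset to default; else -> new value."""
    if key not in payload:
        return current
    value = (payload.get(key) or "").strip()
    return value if value else default

# e.g. the user cleared the checkpoint field in Diagnostic and hit Apply:
# resolve({"f5ttsCkptFile": ""}, "f5ttsCkptFile", current="hf://...", default="") == ""
```
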
@@ -1470,6 +1468,8 @@ wss.on("connection", (ws) => {
      handleSaveTriggers(ws, msg.triggers || []);
    } else if (msg.action === "test_tts") {
      handleTestTTS(ws, msg.text || "Test");
    } else if (msg.action === "preview_voice") {
      handleVoicePreview(ws, msg.voice || "", msg.text || "Hallo.", msg.speed);
    } else if (msg.action === "check_tts") {
      handleCheckTTS(ws);
    } else if (msg.action === "check_desktop") {

@@ -1642,6 +1642,98 @@ async function handleSaveTriggers(clientWs, triggers) {
}

// ── TTS diagnostics (XTTS) ───────────────────────────────
// ── Voice Preview ────────────────────────────────────────
// Collects the audio_pcm chunks of one preview request, builds a WAV at
// the end and sends it base64-encoded to the browser client.
//
// Map requestId → { clientWs, chunks: [Buffer], sampleRate, channels }
const _previewPending = new Map();

function _buildWavFromPcm(pcmBuf, sampleRate, channels) {
  const bitsPerSample = 16;
  const byteRate = sampleRate * channels * bitsPerSample / 8;
  const blockAlign = channels * bitsPerSample / 8;
  const dataSize = pcmBuf.length;
  const header = Buffer.alloc(44);
  header.write("RIFF", 0);
  header.writeUInt32LE(36 + dataSize, 4);
  header.write("WAVE", 8);
  header.write("fmt ", 12);
  header.writeUInt32LE(16, 16); // subchunk1 size
  header.writeUInt16LE(1, 20);  // PCM
  header.writeUInt16LE(channels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(byteRate, 28);
  header.writeUInt16LE(blockAlign, 32);
  header.writeUInt16LE(bitsPerSample, 34);
  header.write("data", 36);
  header.writeUInt32LE(dataSize, 40);
  return Buffer.concat([header, pcmBuf]);
}

function _handlePreviewChunk(payload) {
  const reqId = payload?.requestId || "";
  const entry = _previewPending.get(reqId);
  if (!entry) return;
  if (payload.base64) {
    try { entry.chunks.push(Buffer.from(payload.base64, "base64")); } catch {}
  }
  if (!entry.sampleRate && payload.sampleRate) entry.sampleRate = payload.sampleRate;
  if (!entry.channels && payload.channels) entry.channels = payload.channels;
  if (payload.final) {
    _previewPending.delete(reqId);
    try {
      const pcm = Buffer.concat(entry.chunks);
      const wav = _buildWavFromPcm(pcm, entry.sampleRate || 24000, entry.channels || 1);
      const b64 = wav.toString("base64");
      if (entry.clientWs && entry.clientWs.readyState === 1) {
        entry.clientWs.send(JSON.stringify({
          type: "voice_preview_audio",
          base64: b64,
          size: wav.length,
        }));
      }
    } catch (err) {
      if (entry.clientWs && entry.clientWs.readyState === 1) {
        entry.clientWs.send(JSON.stringify({
          type: "voice_preview_audio",
          error: err.message,
        }));
      }
    }
  }
}

async function handleVoicePreview(clientWs, voice, text, speed) {
  try {
    // Clamp the speed — the browser slider is 0.1-5.0
    let spd = parseFloat(speed);
    if (!isFinite(spd) || spd < 0.1 || spd > 5.0) spd = 1.0;
    const requestId = crypto.randomUUID();
    _previewPending.set(requestId, { clientWs, chunks: [], sampleRate: 0, channels: 0 });
    // Timeout safety net
    setTimeout(() => {
      if (_previewPending.has(requestId)) {
        _previewPending.delete(requestId);
        if (clientWs && clientWs.readyState === 1) {
          clientWs.send(JSON.stringify({
            type: "voice_preview_audio",
            error: "Timeout (60s) — keine Antwort vom f5tts-bridge",
          }));
        }
      }
    }, 60000);
    log("info", "server", `Voice-Preview: voice="${voice}" speed=${spd.toFixed(1)}x text="${text.slice(0, 60)}"`);
    sendToRVS_raw({
      type: "xtts_request",
      payload: { text, language: "de", requestId, voice, speed: spd },
      timestamp: Date.now(),
    });
  } catch (err) {
    clientWs.send(JSON.stringify({ type: "voice_preview_audio", error: err.message }));
  }
}

async function handleTestTTS(clientWs, text) {
  try {
    log("info", "server", `TTS-Test via XTTS: "${text}"`);

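The 44-byte header `_buildWavFromPcm` writes follows the canonical PCM-WAV layout (byteRate = sampleRate × channels × 2 and blockAlign = channels × 2 for 16-bit samples). A standalone Python check — not repo code — that builds the same header and lets the stdlib parse it back, assuming plain PCM with no extension chunks:

```python
import io
import struct
import wave

def build_wav(pcm: bytes, sample_rate: int = 24000, channels: int = 1) -> bytes:
    """Same 44-byte header as _buildWavFromPcm: RIFF/WAVE + fmt + data."""
    bits = 16
    byte_rate = sample_rate * channels * bits // 8
    block_align = channels * bits // 8
    header = struct.pack(
        "<4sI4s4sIHHIIHH4sI",
        b"RIFF", 36 + len(pcm), b"WAVE",
        b"fmt ", 16, 1, channels, sample_rate, byte_rate, block_align, bits,
        b"data", len(pcm),
    )
    return header + pcm

wav = build_wav(b"\x00\x00" * 24000)           # 1s of silence @ 24kHz mono
with wave.open(io.BytesIO(wav)) as w:          # the stdlib accepts the header
    assert (w.getframerate(), w.getnchannels(), w.getnframes()) == (24000, 1, 24000)
```
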
26 issue.md

@@ -70,22 +70,34 @@
- [x] VAD silence configurable in the app settings (1.0-8.0s, default 2.8s)
- [x] MAX_RECORDING raised to 120s — longer explanations possible
- [x] App: audio output no longer stops mid-sentence (playbackHeadPosition wait + stop-race fix)
- [x] F5-TTS: reference-WAV preprocessing — loudness normalisation to -16 LUFS + silence trim + 10s clip for consistent cloning quality
- [x] F5-TTS: German fine-tune (aihpi/F5-TTS-German, Vocos variant) configurable in Diagnostic via hf:// path
- [x] Whisper no longer transcribes voice uploads with a hardcoded "small" — the current model is kept, no needless model swap
- [x] RVS/WebSocket maxPayload 50MB: voice_upload with a WAV as base64 no longer blows the frame limit
- [x] Dynamic STT timeout in aria-bridge: 300s while whisper-bridge is 'loading', 45s when 'ready'
- [x] service_status broadcasts: f5tts/whisper report their load state, banner in Diagnostic (bottom right) + app (top)
- [x] config_request pattern: bridges ask for the current voice config on connect, aria-bridge answers
- [x] F5-TTS tuning via Diagnostic (model ID, checkpoint, cfg_strength, nfe_step) instead of ENV vars — hot reload on model change
- [x] Conversation window: conversation mode ends after X seconds of silence (1.0-20.0s, default 8s, adjustable in settings)
- [x] Porcupine wake-word integration in the app (built-in keywords + custom later, configurable per device)
- [x] HF cache as a bind mount instead of a Docker volume — no .vhdx bloat on Docker Desktop / Windows
- [x] cleanup-windows.ps1 / .bat: VHDX cleanup via diskpart (without Hyper-V) with self-elevation
- [x] App mute/auto-playback bug: closure bug solved (ttsCanPlayRef mirrored live, no longer stale)
- [x] App zombie recording: ear-off kills a running recording so the record button keeps working
- [x] App text rendering: messages selectable + autolink for URLs/e-mails/phone numbers (browser/mail/dialer)
- [x] TTS playback speed adjustable per device (Settings → 0.5-2.0x in 0.1 steps, default 1.0)
- [x] Diagnostic: voice-preview modal (play icon before the delete X, text field with default, WAV playable in the browser)

## Open

### Bugs
- [ ] NO_REPLY is shown as "NO" in the chat — should be discarded silently (token not cleaned up)
- [ ] App: wake word "jarvis" does not trigger reliably (Porcupine debugging via ADB logcat pending)
- [ ] App: crashes while listening, possibly on background noise (Porcupine + mic race, errorCallback now holds it back — long-run test pending)

### App Features
- [ ] Wake word on-device (Porcupine "ARIA" keyword, phase 2 — passive listening)
- [ ] Load chat history more reliably (AsyncStorage race condition)
- [ ] Background audio service (TTS even when the app is minimised)

### TTS / Audio
- [ ] Audio normalisation (match loudness between sentences/chunks)
- [ ] F5-TTS: test streaming inference (native instead of sentence-by-sentence) once a suitable backend appears
- [ ] F5-TTS: optionally evaluate Deepspeed acceleration

### Architecture
- [ ] Images: use Claude Vision directly (currently only the file path goes to ARIA)
- [ ] Auto-compacting and memory/brain management (SQLite?)

@@ -22,6 +22,7 @@ const ALLOWED_TYPES = new Set([
  "voice_preload", "voice_ready",
  "stt_request", "stt_response",
  "service_status",
  "config_request",
]);

// Token rooms: token -> { clients: Set<ws> }

@@ -54,7 +55,10 @@ function cleanupRooms() {

// ── Start the WebSocket server ────────────────────────────────────────

const wss = new WebSocketServer({ port: PORT });
// maxPayload 50MB: TTS streaming + voice upload (WAV as base64) +
// audio_pcm chunks can exceed the ws library's 1MB default.
// The default limit was the killer for the voice_upload pipeline.
const wss = new WebSocketServer({ port: PORT, maxPayload: 50 * 1024 * 1024 });

wss.on("listening", () => {
  log(`RVS läuft auf Port ${PORT} | Max Sessions: ${MAX_SESSIONS}`);

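The 50MB figure is generous on purpose. A 120s recording at 24kHz mono s16 is 120 × 24000 × 2 ≈ 5.8MB of raw PCM, roughly 7.7MB once base64-encoded and wrapped in JSON — already well past the ws default of 1MB, and voice-cloning uploads plus batched audio_pcm frames stack on top. Back-of-envelope numbers (illustrative only):

```python
# Worst-case voice_upload frame, rough estimate:
seconds, sample_rate, bytes_per_sample = 120, 24000, 2
pcm_bytes = seconds * sample_rate * bytes_per_sample   # 5_760_000 (~5.8 MB)
b64_bytes = (pcm_bytes + 2) // 3 * 4                   # 7_680_000 (~7.7 MB)
assert b64_bytes > 1 * 1024 * 1024                     # over the ws default of 1MB
assert b64_bytes < 50 * 1024 * 1024                    # comfortably under maxPayload
```
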
@@ -1,3 +1,7 @@
# HuggingFace model cache (Whisper + F5-TTS, shared between the two
# bridges via bind mount, can grow to several GB)
hf-cache/

# Voice samples (local, does not belong in the repo)
voices/

@@ -31,11 +31,11 @@ services:
              capabilities: [gpu]
    volumes:
      - ./voices:/voices              # WAV + TXT reference
      # NO HF cache mount any more —
      # the model is re-pulled on
      # startup. Diagnostic shows a
      # "TTS laedt..." banner until
      # service_status: ready arrives.
      - ./hf-cache:/root/.cache/huggingface  # HF cache as a bind mount.
                                             # Directly visible in xtts/hf-cache/,
                                             # trivial to delete manually, no
                                             # Docker Desktop .vhdx bloat.
                                             # Shared with whisper-bridge.
    environment:
      # Bootstrap-only — all other F5-TTS settings (model, cfg_strength,
      # nfe_step, custom checkpoint) arrive through Diagnostic via RVS config.

@@ -77,6 +77,9 @@ services:
      - WHISPER_DEVICE=${WHISPER_DEVICE:-cuda}
      - WHISPER_COMPUTE_TYPE=${WHISPER_COMPUTE_TYPE:-float16}
      - WHISPER_LANGUAGE=${WHISPER_LANGUAGE:-de}
    # NO HF cache mount — the Whisper model is re-pulled on startup.
    # Switching via Diagnostic likewise triggers a re-download.
    volumes:
      - ./hf-cache:/root/.cache/huggingface  # same cache as f5tts-bridge —
                                             # a model only has to be loaded
                                             # once per machine, no re-download
                                             # on container restart.
    restart: unless-stopped

@@ -73,6 +73,12 @@ VOICES_DIR = Path(os.getenv("VOICES_DIR", "/voices"))

PCM_CHUNK_BYTES = 8192  # ~170ms @ 24kHz mono s16
TARGET_SR = 24000       # F5-TTS native
# F5-TTS has a 12s hard limit for reference audio. Longer WAVs are
# silently truncated by the model — but our ref_text stays long and then
# no longer matches the shortened audio (quality suffers, the warmup
# render takes needlessly long). We clip explicitly to 10s + re-transcribe
# the text so both stay in sync.
REF_MAX_SECONDS = 10.0

# Recognised as an "invalid reference" during a transition phase (old voices
# uploaded before the whisper-bridge was online). On detection

@@ -93,6 +99,33 @@ def _get_f5tts_cls():
    return _F5TTS_cls


def _resolve_hf_path(p: str) -> str:
    """If p starts with 'hf://' → download from the HuggingFace Hub and
    return the local path. Otherwise return it unchanged.

    Format:  hf://user/repo/path/to/file.ext
    Example: hf://aihpi/F5-TTS-German/F5TTS_Base/model_365000.safetensors
    """
    if not p or not p.startswith("hf://"):
        return p
    try:
        from huggingface_hub import hf_hub_download
        rest = p[5:]
        parts = rest.split("/", 2)
        if len(parts) < 3:
            logger.warning("Ungueltiges hf:// Format: %s (erwarte hf://user/repo/path)", p)
            return p
        repo_id = f"{parts[0]}/{parts[1]}"
        filename = parts[2]
        logger.info("HF-Download: %s aus %s", filename, repo_id)
        local = hf_hub_download(repo_id=repo_id, filename=filename)
        logger.info("HF-Download fertig: %s", local)
        return local
    except Exception as e:
        logger.exception("HF-Download fehlgeschlagen fuer %s: %s", p, e)
        return p


class F5Runner:
    """Holds the F5-TTS model. Synthesis runs in the executor (blocking).

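Usage is then just a path swap — a sketch assuming `_resolve_hf_path` from the hunk above is in scope; the paths are the ones the Diagnostic placeholders suggest, and the download lands in the shared HF cache (bind-mounted at ./hf-cache), once per machine:

```python
ckpt = _resolve_hf_path("hf://aihpi/F5-TTS-German/F5TTS_Base/model_365000.safetensors")
vocab = _resolve_hf_path("hf://aihpi/F5-TTS-German/vocab.txt")
# Non-hf:// strings pass through untouched:
assert _resolve_hf_path("/models/custom.safetensors") == "/models/custom.safetensors"
```
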
@@ -116,14 +149,16 @@ class F5Runner:

    def _load_blocking(self) -> None:
        cls = _get_f5tts_cls()
        ckpt_resolved = _resolve_hf_path(self.ckpt_file) if self.ckpt_file else ""
        vocab_resolved = _resolve_hf_path(self.vocab_file) if self.vocab_file else ""
        logger.info("Lade F5-TTS '%s' (device=%s, ckpt=%s)...",
                    self.model_id, F5TTS_DEVICE, self.ckpt_file or "default")
                    self.model_id, F5TTS_DEVICE, ckpt_resolved or "default")
        self._load_started_at = time.time()
        kwargs = {"model": self.model_id, "device": F5TTS_DEVICE}
        if self.ckpt_file:
            kwargs["ckpt_file"] = self.ckpt_file
        if self.vocab_file:
            kwargs["vocab_file"] = self.vocab_file
        if ckpt_resolved:
            kwargs["ckpt_file"] = ckpt_resolved
        if vocab_resolved:
            kwargs["vocab_file"] = vocab_resolved
        self.model = cls(**kwargs)
        elapsed = time.time() - self._load_started_at
        logger.info("F5-TTS geladen in %.1fs (cfg_strength=%.1f, nfe=%d)",

@@ -140,10 +175,31 @@ class F5Runner:

    async def update_config(self, payload: dict) -> None:
        """Reads the f5tts* fields from a config broadcast.
        Reloads on any model-relevant change."""
        new_model = (payload.get("f5ttsModel") or "").strip() or self.model_id
        new_ckpt = payload.get("f5ttsCkptFile", self.ckpt_file) or ""
        new_vocab = payload.get("f5ttsVocabFile", self.vocab_file) or ""
        Reloads on any model-relevant change.

        Semantics:
          - key missing from payload → keep the current value
          - key present, non-empty str → take that value
          - key present, empty string → RESET to the hard default (the user
            cleared the field in Diagnostic and clicked Apply)
        """
        if "f5ttsModel" in payload:
            v = (payload.get("f5ttsModel") or "").strip()
            new_model = v if v else DEFAULT_F5TTS_MODEL
        else:
            new_model = self.model_id

        if "f5ttsCkptFile" in payload:
            v = payload.get("f5ttsCkptFile") or ""
            new_ckpt = v.strip() if isinstance(v, str) else ""
        else:
            new_ckpt = self.ckpt_file

        if "f5ttsVocabFile" in payload:
            v = payload.get("f5ttsVocabFile") or ""
            new_vocab = v.strip() if isinstance(v, str) else ""
        else:
            new_vocab = self.vocab_file
        try:
            new_cfg = float(payload.get("f5ttsCfgStrength", self.cfg_strength))
        except (TypeError, ValueError):

@@ -181,7 +237,10 @@ class F5Runner:
        else:
            logger.info("F5-TTS Live-Config: cfg_strength=%.2f nfe=%d", new_cfg, new_nfe)

    def _infer_blocking(self, gen_text: str, ref_wav: str, ref_text: str) -> tuple[np.ndarray, int]:
    def _infer_blocking(self, gen_text: str, ref_wav: str, ref_text: str,
                        speed: float = 1.0) -> tuple[np.ndarray, int]:
        logger.info("infer() text=%d chars, speed=%.2f, cfg=%.2f, nfe=%d",
                    len(gen_text), speed, self.cfg_strength, self.nfe_step)
        wav, sr, _ = self.model.infer(
            ref_file=ref_wav,
            ref_text=ref_text,

@@ -190,6 +249,7 @@ class F5Runner:
            seed=-1,
            cfg_strength=self.cfg_strength,
            nfe_step=self.nfe_step,
            speed=speed,
        )
        # F5-TTS returns a float32 1D array — 24kHz sample rate is standard
        if not isinstance(wav, np.ndarray):

@@ -198,10 +258,11 @@ class F5Runner:
        wav = wav.squeeze()
        return wav.astype(np.float32), int(sr)

    async def synthesize(self, gen_text: str, ref_wav: str, ref_text: str) -> tuple[np.ndarray, int]:
    async def synthesize(self, gen_text: str, ref_wav: str, ref_text: str,
                         speed: float = 1.0) -> tuple[np.ndarray, int]:
        await self.ensure_loaded()
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(None, self._infer_blocking, gen_text, ref_wav, ref_text)
        return await loop.run_in_executor(None, self._infer_blocking, gen_text, ref_wav, ref_text, speed)


# ── Helpers ─────────────────────────────────────────────────

@@ -233,7 +294,15 @@ def split_sentences(text: str, max_len: int = 350) -> list[str]:


def float_to_pcm16(wav: np.ndarray) -> bytes:
    """Float32 (-1..+1) → int16 little-endian bytes."""
    """Float32 (-1..+1) → int16 little-endian bytes.

    F5-TTS occasionally produces NaN/Inf on instabilities — without the
    sanitising, the cast to int16 would be undefined (RuntimeWarning +
    broken sound).
    """
    nan_count = int(np.isnan(wav).sum() + np.isinf(wav).sum())
    if nan_count > 0:
        logger.warning("F5-TTS Output enthaelt %d NaN/Inf samples — ersetze mit 0", nan_count)
        wav = np.nan_to_num(wav, nan=0.0, posinf=1.0, neginf=-1.0)
    wav = np.clip(wav, -1.0, 1.0)
    pcm = (wav * 32767.0).astype(np.int16)
    return pcm.tobytes()

@@ -248,32 +317,51 @@ def voice_paths(name: str) -> tuple[Path, Path]:
    return VOICES_DIR / f"{safe}.wav", VOICES_DIR / f"{safe}.txt"


def ensure_24k_mono_wav(src_wav: Path) -> Path:
    """F5-TTS wants 24kHz mono as reference — ffmpeg converts in place.
def normalize_ref_wav(src_wav: Path, max_seconds: float = REF_MAX_SECONDS) -> tuple[Path, bool]:
    """Brings the reference WAV into an F5-TTS-friendly shape:

    If the file already fits, nothing is changed. Otherwise it is
    rewritten in place (the original is overwritten).
      * 24kHz mono
      * at most max_seconds long
      * silence at the start + end trimmed (silenceremove filter)
      * loudness normalised to -16 LUFS (loudnorm filter) so the
        model sees consistent amplitudes

    F5-TTS is sensitive to quiet / noisy / choppy references.
    Consistent, clean input loudness helps the quality.

    Returns:
        (path, was_modified) — was_modified=True if the file was actually
        changed (the caller should then invalidate the matching .txt).
    """
    try:
        info = sf.info(str(src_wav))
        if info.samplerate == TARGET_SR and info.channels == 1:
            return src_wav
    except Exception:
        pass
    tmp_out = src_wav.with_suffix(".conv.wav")
    # silenceremove at the start: until speech above -50dB
    # silenceremove at the end: speech above -50dB, then 0.5s of silence as cutoff
    # loudnorm: EBU R128, target -16 LUFS
    af = ("silenceremove=start_periods=1:start_duration=0.05:start_threshold=-50dB,"
          "silenceremove=stop_periods=1:stop_duration=0.5:stop_threshold=-50dB,"
          "loudnorm=I=-16:TP=-1.5:LRA=11")
    cmd = ["ffmpeg", "-y", "-i", str(src_wav),
           "-ar", str(TARGET_SR), "-ac", "1", "-f", "wav", str(tmp_out)]
           "-af", af,
           "-ar", str(TARGET_SR), "-ac", "1",
           "-t", str(max_seconds),
           "-f", "wav", str(tmp_out)]
    r = subprocess.run(cmd, capture_output=True, timeout=30)
    if r.returncode != 0:
        logger.warning("ffmpeg-Konvertierung von %s fehlgeschlagen: %s",
                       src_wav, r.stderr.decode(errors="replace")[:200])
        logger.warning("ffmpeg-Normalisierung von %s fehlgeschlagen: %s",
                       src_wav, r.stderr.decode(errors="replace")[:300])
        try:
            tmp_out.unlink()
        except OSError:
            pass
        return src_wav
        return src_wav, False
    os.replace(tmp_out, src_wav)
    return src_wav
    try:
        info = sf.info(str(src_wav))
        logger.info("Referenz-WAV normalisiert: %s (%.1fs, %dHz mono, -16 LUFS, silence getrimmt)",
                    src_wav.name, info.duration, info.samplerate)
    except Exception:
        logger.info("Referenz-WAV normalisiert: %s", src_wav.name)
    return src_wav, True


async def _send(ws, mtype: str, payload: dict) -> None:

@@ -312,7 +400,10 @@ async def request_transcription(ws, wav_path: Path, language: str = "de") -> Opt
        "requestId": request_id,
        "audio": audio_b64,
        "mimeType": "audio/wav",
        "model": "small",  # small is enough for a voice reference
        # NO hardcoded model — the whisper-bridge takes the one already
        # loaded. Otherwise a swap to 'small' would happen here, after
        # which the model configured in Diagnostic (e.g. large-v3) would
        # have to be loaded again → double download.
        "language": language,
    })
    return await asyncio.wait_for(fut, timeout=_STT_TIMEOUT_S)

@@ -335,9 +426,9 @@ _tts_queue: asyncio.Queue[tuple] = asyncio.Queue()
async def _tts_worker(ws, runner: F5Runner) -> None:
    """Serialises syntheses — the GPU can go OOM otherwise."""
    while True:
        text, voice, request_id, message_id, language = await _tts_queue.get()
        text, voice, request_id, message_id, language, speed = await _tts_queue.get()
        try:
            await _do_tts(ws, runner, text, voice, request_id, message_id, language)
            await _do_tts(ws, runner, text, voice, request_id, message_id, language, speed)
        except Exception:
            logger.exception("TTS-Worker Fehler")
        finally:

@@ -345,10 +436,26 @@ async def _tts_worker(ws, runner: F5Runner) -> None:


async def _do_tts(ws, runner: F5Runner, text: str, voice: str,
                  request_id: str, message_id: str, language: str) -> None:
                  request_id: str, message_id: str, language: str,
                  speed: float = 1.0) -> None:
    t0 = time.time()
    ref_wav_path, ref_txt_path = voice_paths(voice) if voice else (None, None)

    # WAV too long? F5-TTS limits internally to 12s, after which the txt no
    # longer matches the audio. We clip explicitly to 10s and invalidate the
    # txt so it is re-transcribed on the fly to match the shortened audio.
    if voice and ref_wav_path and ref_wav_path.exists():
        try:
            info = sf.info(str(ref_wav_path))
            if info.duration > REF_MAX_SECONDS + 0.5:
                logger.info("Voice '%s' WAV ist %.1fs (>%.0fs) → clippen + txt neu",
                            voice, info.duration, REF_MAX_SECONDS)
                _, modified = normalize_ref_wav(ref_wav_path)
                if modified and ref_txt_path and ref_txt_path.exists():
                    ref_txt_path.unlink()
        except Exception as e:
            logger.warning("Konnte WAV-Dauer nicht pruefen: %s", e)

    # Detect legacy placeholders → treat as "no txt" and re-transcribe
    if voice and ref_txt_path and ref_txt_path.exists():
        try:

@@ -402,13 +509,14 @@ async def _do_tts(ws, runner: F5Runner, text: str, voice: str,
        ref_wav_str, ref_text = str(pair[0]), pair[1].read_text(encoding="utf-8").strip()

    sentences = split_sentences(text)
    logger.info("F5-TTS: %d Satz(e), voice=%s (%s)", len(sentences), voice or "default", ref_wav_str)
    logger.info("F5-TTS: %d Satz(e), voice=%s, speed=%.2fx (%s)",
                len(sentences), voice or "default", speed, ref_wav_str)

    chunk_index = 0
    pcm_sr = TARGET_SR
    for i, sent in enumerate(sentences):
        try:
            wav, sr = await runner.synthesize(sent, ref_wav_str, ref_text)
            wav, sr = await runner.synthesize(sent, ref_wav_str, ref_text, speed)
            pcm_sr = sr
            pcm_bytes = float_to_pcm16(wav)
            # The very first PCM chunk of the first sentence gets a fade-in (masks

@@ -491,8 +599,9 @@ async def handle_voice_upload(ws, payload: dict) -> None:
    size_kb = wav_path.stat().st_size / 1024
    logger.info("Voice WAV gespeichert: %s (%.0fKB)", wav_path, size_kb)

    # Normalise to 24kHz mono (in case the app delivers another format)
    ensure_24k_mono_wav(wav_path)
    # Normalise to 24kHz mono, clip to 10s (the F5-TTS hard limit is 12s;
    # shorter = faster warmup + text and audio stay aligned)
    normalize_ref_wav(wav_path)

    # Request transcription through the whisper-bridge
    logger.info("Transkribiere '%s' via whisper-bridge...", name)

@@ -613,22 +722,30 @@ async def run_loop(runner: F5Runner) -> None:
            tls_fallback_tried = False

            # Status broadcast: loading first, then ready after a successful load.
            # The model load is started here (not outside the loop) so that
            # the loading state can actually be transmitted.
            # Plus: config_request so we receive the persisted Diagnostic
            # config in case aria-bridge does not send it on its own.
            async def _load_with_status():
                if runner.model is not None:
                    await _broadcast_status(ws, "ready",
                                            model=runner.model_id,
                                            loadSeconds=runner.last_load_seconds)
                    return
                await _broadcast_status(ws, "loading", model=runner.model_id)
                try:
                    await runner.ensure_loaded()
                    await _broadcast_status(ws, "ready",
                                            model=runner.model_id,
                                            loadSeconds=runner.last_load_seconds)
                    if runner.model is not None:
                        logger.info("Initial: broadcaste ready (Modell schon im RAM: %s)", runner.model_id)
                        await _broadcast_status(ws, "ready",
                                                model=runner.model_id,
                                                loadSeconds=runner.last_load_seconds)
                    else:
                        logger.info("Initial: broadcaste loading + lade Modell '%s'", runner.model_id)
                        await _broadcast_status(ws, "loading", model=runner.model_id)
                        await runner.ensure_loaded()
                        await _broadcast_status(ws, "ready",
                                                model=runner.model_id,
                                                loadSeconds=runner.last_load_seconds)
                    logger.info("Initial: sende config_request an aria-bridge")
                    await _send(ws, "config_request", {"service": "f5tts"})
                except Exception as e:
                    await _broadcast_status(ws, "error", error=str(e)[:200])
                    logger.exception("Initial-Load crashed: %s", e)
                    try:
                        await _broadcast_status(ws, "error", error=str(e)[:200])
                    except Exception:
                        pass
            asyncio.create_task(_load_with_status())

            # Start the TTS worker for this connection

@@ -644,12 +761,19 @@ async def run_loop(runner: F5Runner) -> None:
                payload = msg.get("payload", {}) or {}

                if mtype == "xtts_request":
                    try:
                        speed = float(payload.get("speed") or 1.0)
                    except (TypeError, ValueError):
                        speed = 1.0
                    if not (0.1 <= speed <= 5.0):
                        speed = 1.0
                    await _tts_queue.put((
                        payload.get("text", ""),
                        payload.get("voice", "") or "",
                        payload.get("requestId", ""),
                        payload.get("messageId", ""),
                        payload.get("language", "de"),
                        speed,
                    ))
                elif mtype == "voice_upload":
                    asyncio.create_task(handle_voice_upload(ws, payload))

@@ -143,7 +143,11 @@ async def handle_stt_request(ws, payload: dict, runner: WhisperRunner) -> None:
    request_id = payload.get("requestId", "")
    audio_b64 = payload.get("audio", "")
    mime_type = payload.get("mimeType", "audio/mp4")
    model = payload.get("model") or WHISPER_MODEL
    # Model selection:
    #   payload.model set   → take it (aria-bridge sends it based on the config)
    #   else + model loaded → keep the current one (no pointless swap)
    #   else                → fall back to the ENV default
    model = payload.get("model") or (runner.model_size if runner.model is not None else WHISPER_MODEL)
    language = payload.get("language") or WHISPER_LANGUAGE

    if not audio_b64:

@@ -152,8 +156,17 @@ async def handle_stt_request(ws, payload: dict, runner: WhisperRunner) -> None:

    try:
        t_load = time.time()
        # If the model is not loaded yet (race condition: stt_request before
        # config) → broadcast loading→ready so the app banner pops up
        needs_load = runner.model is None or runner.model_size != model
        if needs_load:
            await _broadcast_status(ws, "loading", model=model)
        await runner.ensure_loaded(model)
        load_ms = int((time.time() - t_load) * 1000)
        if needs_load:
            await _broadcast_status(ws, "ready",
                                    model=runner.model_size,
                                    loadSeconds=load_ms / 1000.0)

        audio = ffmpeg_to_float32(audio_b64, mime_type)
        if audio.size == 0:

@@ -203,27 +216,34 @@ async def run_loop(runner: WhisperRunner) -> None:
    masked = url.replace(RVS_TOKEN, "***") if RVS_TOKEN else url
    try:
        logger.info("Verbinde zu RVS: %s", masked)
        async with websockets.connect(url, ping_interval=20, ping_timeout=10) as ws:
        # max_size 50MB so that large stt_request frames (voice-cloning WAVs
        # as base64 can reach several MB) do not blow the frame limit and
        # kill the connection with 1009 'message too big'.
        async with websockets.connect(url, ping_interval=20, ping_timeout=10, max_size=50 * 1024 * 1024) as ws:
            logger.info("RVS verbunden")
            retry_s = 2
            tls_fallback_tried = False

            # Load the model, broadcasting loading→ready along the way
            async def _load_with_status():
                if runner.model is not None:
                    await _broadcast_status(ws, "ready", model=runner.model_size)
                    return
                await _broadcast_status(ws, "loading", model=WHISPER_MODEL)
            # Initial status broadcast — overrides a stale "ready" state in
            # the app/Diagnostic banner (otherwise the user still believes
            # everything is fine from before). If the model is already loaded
            # → ready, otherwise loading with the current (default) name.
            # Plus: config_request to aria-bridge — we do not know whether
            # it also just reconnected or has been online for a while.
            async def _initial_handshake():
                try:
                    t0 = time.time()
                    await runner.ensure_loaded(WHISPER_MODEL)
                    elapsed = time.time() - t0
                    await _broadcast_status(ws, "ready",
                                            model=runner.model_size,
                                            loadSeconds=elapsed)
                    if runner.model is not None:
                        logger.info("Initial: broadcaste ready (Modell schon im RAM: %s)", runner.model_size)
                        await _broadcast_status(ws, "ready", model=runner.model_size)
                    else:
                        init_model = runner.model_size or WHISPER_MODEL
                        logger.info("Initial: broadcaste loading (model=%s)", init_model)
                        await _broadcast_status(ws, "loading", model=init_model)
                    logger.info("Initial: sende config_request an aria-bridge")
                    await _send(ws, "config_request", {"service": "whisper"})
                except Exception as e:
                    await _broadcast_status(ws, "error", error=str(e)[:200])
            asyncio.create_task(_load_with_status())
                    logger.exception("Initial-Handshake crashed: %s", e)
            asyncio.create_task(_initial_handshake())

            async for raw in ws:
                try:

@@ -240,9 +260,13 @@ async def run_loop(runner: WhisperRunner) -> None:
                                    req_id[:8] if req_id != "?" else "?", audio_len // 1365)
                        asyncio.create_task(handle_stt_request(ws, payload, runner))
                    elif mtype == "config":
                        new_model = payload.get("whisperModel")
                        if new_model and new_model != runner.model_size:
                            logger.info("Config-Broadcast: Whisper-Modell -> %s", new_model)
                        new_model = payload.get("whisperModel") or WHISPER_MODEL
                        # Load if (a) nothing is loaded yet, or (b) the model changes
                        needs_load = (runner.model is None) or (new_model != runner.model_size)
                        if needs_load:
                            logger.info("Config-Broadcast: Whisper-Modell -> %s%s",
                                        new_model,
                                        " (initial)" if runner.model is None else " (Wechsel)")
                            async def _swap_with_status(target):
                                await _broadcast_status(ws, "loading", model=target)
                                try: