Compare commits (48 commits)
@@ -378,9 +378,13 @@ API endpoint for other services: `GET http://localhost:3001/api/session`
 ### Features
 - Text chat with ARIA
-- **Voice recording**: push-to-talk (hold) or tap-to-talk (tap, auto-stop on silence)
+- **Voice recording**: tap-to-talk (tap to start, tap to stop, auto-stop on silence via VAD)
 - **Conversation mode** (ear button): after every ARIA reply, recording starts automatically, back and forth like a natural conversation
-- **VAD (Voice Activity Detection)**: configurable silence tolerance (1.0–8.0 s, default 2.8 s) before auto-stop kicks in. Max recording 120 s.
+- **Wake word** (on-device, openWakeWord ONNX): "Hey Jarvis", "Alexa", "Hey Mycroft", "Hey Rhasspy"; the microphone listens passively and a conversation starts on the keyword. Entirely on-device via ONNX Runtime: no API key, no cloud round trip, the audio never leaves the device.
+- **VAD (Voice Activity Detection)**: adaptive threshold (baseline from the first 500 ms of mic level + 6 dB offset). Configurable silence tolerance (1.0–8.0 s, default 2.8 s) before auto-stop kicks in. Max recording configurable (1–30 min, default 5 min)
+- **Barge-in**: if you send a new voice or text message while ARIA is answering, she is interrupted and gets the hint "this is a correction"
+- **Wake word during TTS**: you can say "Computer" while ARIA is still talking; the AcousticEchoCanceler prevents ARIA's own voice from triggering the wake word
+- **Call pause**: TTS goes silent automatically when the phone rings (READ_PHONE_STATE permission)
 - **Speech gate**: the recording is discarded when no speech is detected
 - **STT (Speech-to-Text)**: 16 kHz mono → Bridge → Gamebox Whisper (CUDA) → text in the chat. Near real time.
 - **"ARIA is thinking…" indicator**: shows the Core's live status (thinking, tool, writing) + a cancel button
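The adaptive VAD threshold described above (a noise baseline taken from the first 500 ms of microphone level, plus a 6 dB offset) can be sketched roughly like this. The function names and frame layout are illustrative assumptions, not the app's actual VAD code:

```typescript
// Sketch of an adaptive VAD threshold: calibrate a noise baseline from the
// first ~500 ms of microphone frames, then treat anything more than 6 dB
// above that baseline as speech. Names and frame layout are hypothetical.
function rmsDb(frame: Float32Array): number {
  let sum = 0;
  for (const s of frame) sum += s * s;
  const rms = Math.sqrt(sum / frame.length);
  return 20 * Math.log10(Math.max(rms, 1e-10)); // dBFS, floored to avoid -Infinity
}

function adaptiveThresholdDb(calibrationFrames: Float32Array[], offsetDb = 6): number {
  // Baseline = mean level over the calibration window (e.g. the first 500 ms).
  const levels = calibrationFrames.map(rmsDb);
  const baseline = levels.reduce((a, b) => a + b, 0) / levels.length;
  return baseline + offsetDb;
}

function isSpeech(frame: Float32Array, thresholdDb: number): boolean {
  return rmsDb(frame) > thresholdDb;
}
```

With this scheme a quiet room yields a low baseline (sensitive detection), while a noisy room raises the threshold automatically instead of relying on one fixed level.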
@@ -398,6 +402,45 @@ API endpoint for other services: `GET http://localhost:3001/api/session`
 - GPS position (optional)
 - QR code scanner for token pairing
 
+### Wake word (openWakeWord, on-device)
+
+Wake-word detection runs entirely **on-device** via [openWakeWord](https://github.com/dscripka/openWakeWord)
+with ONNX Runtime: no API key, no cloud round trip, no licensing fees,
+and the audio never leaves the device.
+
+**Bundled wake words** (ONNX files in `android/android/app/src/main/assets/openwakeword/`):
+- `Hey Jarvis` (default, openWakeWord original)
+- `Computer` (Star Trek style, community model)
+- `Alexa`, `Hey Mycroft`, `Hey Rhasspy` (openWakeWord originals)
+
+Community models come from [fwartner/home-assistant-wakewords-collection](https://github.com/fwartner/home-assistant-wakewords-collection).
+
+**Usage:**
+- App → **Settings** → **Wake word** → pick the desired keyword → **Save + activate**
+- Tap the **ear button (👂)** in the status bar → the wake word is armed and the app listens passively
+- Say the wake word → the icon switches to 🎙️, and as soon as the mic is actually open you get the **ready sound** (ding-dong, optional in settings) plus a "🎤 speak now" toast
+- After every ARIA reply the mic opens once more; on silence it goes back to 👂
+- Tap again → ear off (🔇)
+
+**Training your own wake words** (free, ~30 min):
+
+1. Open the openWakeWord training notebook on Colab (linked in the
+   [openWakeWord repo](https://github.com/dscripka/openWakeWord) under "Training Custom Models")
+2. Enter the wake-word phrase (e.g. "ARIA", "Hey Stefan") and run the notebook;
+   it generates synthetic training samples and trains the model.
+3. Download the resulting `.onnx` file
+4. Place the file in `android/android/app/src/main/assets/openwakeword/`
+5. In `android/src/services/wakeword.ts`, add the file name (without `.onnx`) to the
+   `WAKE_KEYWORDS` list
+6. Rebuild the APK
+
+*(Diagnostic upload for custom `.onnx` files without a rebuild is planned.)*
+
+**Tuning** (in [wakeword.ts](android/src/services/wakeword.ts)):
+- `DEFAULT_THRESHOLD = 0.5`: score threshold (raise to 0.6–0.7 on false positives)
+- `DEFAULT_PATIENCE = 2`: how many frames above the threshold are required
+- `DEFAULT_DEBOUNCE_MS = 1500`: minimum gap between two triggers
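The three tuning parameters above interact as follows. This is a minimal sketch of the threshold/patience/debounce logic under stated assumptions, not the actual `wakeword.ts` implementation:

```typescript
// Minimal sketch of the trigger logic behind THRESHOLD / PATIENCE / DEBOUNCE_MS:
// a detection fires only after `patience` consecutive per-frame scores above
// `threshold`, and repeat triggers within `debounceMs` are suppressed.
class TriggerGate {
  private consecutive = 0;
  private lastFireMs = -Infinity;

  constructor(
    private threshold = 0.5,   // cf. DEFAULT_THRESHOLD
    private patience = 2,      // cf. DEFAULT_PATIENCE
    private debounceMs = 1500, // cf. DEFAULT_DEBOUNCE_MS
  ) {}

  /** Feed one per-frame score (sigmoid output); returns true when the wake word fires. */
  push(score: number, nowMs: number): boolean {
    if (score < this.threshold) {
      this.consecutive = 0; // any sub-threshold frame resets the streak
      return false;
    }
    this.consecutive++;
    if (this.consecutive >= this.patience && nowMs - this.lastFireMs >= this.debounceMs) {
      this.lastFireMs = nowMs;
      this.consecutive = 0;
      return true;
    }
    return false;
  }
}
```

Raising `threshold` or `patience` trades a little extra latency (one 80 ms frame per patience step) for fewer false positives; `debounceMs` only suppresses back-to-back retriggers.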
 ### Initial setup (dev machine, one-time)
 
 ```bash
@@ -525,8 +568,7 @@ aria-data/
 │   └── diag-state/             ← Diagnostic persistent state
 │
 │   (in the shared volume /shared/config/):
-│   ├── voice_config.json       ← TTS settings (voice, speed, engine)
+│   ├── voice_config.json       ← TTS settings (voice, speed, F5-TTS tuning)
-│   ├── highlight_triggers.json ← highlight trigger words
 │   └── chat_backup.jsonl       ← message backup (on the fly)
 │
 └── ssh/                        ← SSH keys for VM access
@@ -744,8 +786,10 @@ docker exec aria-core ssh aria-wohnung hostname
 - **Proxy cold start**: every message spawns a new `claude --print` process.
   This makes ARIA slower than the direct Claude CLI. The timeout is 900 s (15 min).
 - **No streaming to the app**: the app only shows the finished reply, no streaming tokens.
-- **Wake word only on the VM**: the Bridge listens for "ARIA" via the VM's local microphone.
-  The app has energy-based detection (phase 1); the on-device "ARIA" keyword (Porcupine) is phase 2.
+- **In-app wake word limited to the built-in keywords**: `Hey Jarvis`, `Alexa`, `Hey Mycroft`,
+  `Hey Rhasspy` work out of the box; custom wake words currently still have to be placed as
+  `.onnx` files in the app bundle and added to the list in `wakeword.ts`.
+  The Diagnostic upload UI is phase 2.
 - **Audio format**: the app records AAC/MP4; the Bridge converts it via FFmpeg to 16 kHz PCM.
 - **RVS zombie connections**: WebSocket connections occasionally die without an error message.
   The Bridge has a ping check (5 s); Diagnostic uses a fresh connection per request.
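A ping check like the Bridge's 5 s one can be sketched as follows. The socket interface is a hypothetical stand-in (a real `ws` socket exposes `ping()`, `terminate()`, and a `pong` event); the watchdog itself is shown as a testable `tick()` that a 5 s `setInterval` would drive:

```typescript
// Sketch of a zombie-connection watchdog: each tick sends a ping, and if the
// previous ping was never answered by a pong, the connection is declared dead
// and terminated. The PingableSocket interface is an illustrative assumption.
interface PingableSocket {
  ping(): void;
  terminate(): void;
  onPong(cb: () => void): void;
}

class PingWatchdog {
  private awaitingPong = false;

  constructor(private sock: PingableSocket) {
    sock.onPong(() => { this.awaitingPong = false; });
  }

  /** Call once per interval (e.g. every 5 s). Returns false once the socket was terminated. */
  tick(): boolean {
    if (this.awaitingPong) {
      this.sock.terminate(); // no pong since the last ping: zombie connection
      return false;
    }
    this.awaitingPong = true;
    this.sock.ping();
    return true;
  }
}
```

Terminating (rather than gracefully closing) matters for zombies: a half-dead TCP connection will never complete a close handshake.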
@@ -771,7 +815,7 @@ docker exec aria-core ssh aria-wohnung hostname
 - [x] SSH access to the VM (aria-wohnung)
 - [x] Diagnostic web UI + settings
 - [x] Session management + chat history
-- [x] Voice settings (Ramona/Thorsten, speed, highlight triggers); replaced by XTTS v2 voice cloning
+- [x] Voice settings (formerly Piper Ramona/Thorsten, highlight triggers); replaced by XTTS, then F5-TTS voice cloning
 - [x] Piper removed completely; XTTS v2 is the only TTS (gaming PC)
 - [x] Streaming TTS: PCM chunks straight into AudioTrack, seamless playback
 - [x] Sentence-by-sentence TTS for long texts
@@ -798,8 +842,19 @@ docker exec aria-core ssh aria-wohnung hostname
 - [x] Whisper STT offloaded to the Gamebox (CUDA float16, near real time)
 - [x] **F5-TTS replaces XTTS**: better voice-cloning quality, reference text auto-transcribed by Whisper
 - [x] Audio pause instead of ducking (TRANSIENT instead of MAY_DUCK) + release-timing fix
-- [x] VAD silence tolerance and max recording configurable (1–8 s, 120 s)
+- [x] VAD silence tolerance configurable (1–8 s) + adaptive mic baseline + max recording configurable (1–30 min)
+- [x] Barge-in: the user can interrupt ARIA mid-reply; aria-core gets a context hint
+- [x] Call pause: TTS goes silent on an incoming call (PhoneStateListener)
+- [x] Settings sub-screens: 8 categories instead of one long list
+- [x] APK ABI split arm64-v8a: 35 MB instead of 136 MB
+- [x] Voice-message bubble: audioRequestId instead of substring match; no more swapped bubbles with parallel recordings
+- [x] Ready sound (airplane ding-dong) when the mic opens after the wake word; acoustic confirmation, can be disabled in settings
+- [x] Wake word in parallel with TTS via AcousticEchoCanceler: saying "Computer" while ARIA is speaking stops her and opens the mic
+- [x] Send GPS position with messages (toggle in settings); ARIA only uses it for location-related questions, and in the chat it is visible only in her reply
+- [x] Voice messages without an STT result are removed automatically after a timeout (scaled to the recording length)
+- [x] Background audio service: TTS, wake-word listening and recording keep running while the app is minimized (foreground service with mediaPlayback|microphone, dynamic notification)
 - [x] Disk-full banner in Diagnostic with copyable cleanup commands
+- [x] Wake word on-device via openWakeWord (ONNX Runtime, no API key) + state icon
 
 ### Phase 2: ARIA becomes productive
 
@@ -815,5 +870,5 @@ docker exec aria-core ssh aria-wohnung hostname
 - [ ] STARFACE telephony skill
 - [ ] Desktop client (Tauri)
 - [ ] bKVM remote IT support
-- [ ] Porcupine wake word (on-device "ARIA" in the app)
+- [ ] Custom `.onnx` upload for wake words via Diagnostic (without an app rebuild)
 - [ ] Claude Vision directly (image analysis without the file-path detour)
@@ -79,8 +79,8 @@ android {
         applicationId "com.ariacockpit"
         minSdkVersion rootProject.ext.minSdkVersion
         targetSdkVersion rootProject.ext.targetSdkVersion
-        versionCode 604
-        versionName "0.0.6.4"
+        versionCode 802
+        versionName "0.0.8.2"
         // Fallback for libraries with product flavors
         missingDimensionStrategy 'react-native-camera', 'general'
     }
@@ -104,6 +104,19 @@ android {
             proguardFiles getDefaultProguardFile("proguard-android.txt"), "proguard-rules.pro"
         }
     }
+
+    // ABI split: arm64-v8a only (every Android phone since ~2017). Brings the
+    // APK from ~136 MB down to ~35 MB: relevant because ONNX Runtime and the
+    // other native libs would otherwise be bundled once per architecture.
+    // If you need 32-bit or an emulator, add "armeabi-v7a", "x86_64", etc. here.
+    splits {
+        abi {
+            enable true
+            reset()
+            include "arm64-v8a"
+            universalApk false
+        }
+    }
 }
 
 dependencies {
@@ -111,6 +124,9 @@ dependencies {
     implementation("com.facebook.react:react-android")
     implementation("com.facebook.react:flipper-integration")
+
+    // ONNX Runtime for the on-device wake word (openWakeWord ONNX models in assets/openwakeword/)
+    implementation("com.microsoft.onnxruntime:onnxruntime-android:1.17.1")
+
     if (hermesEnabled.toBoolean()) {
         implementation("com.facebook.react:hermes-android")
     } else {
@@ -4,6 +4,16 @@
     <uses-permission android:name="android.permission.CAMERA" />
     <uses-permission android:name="android.permission.RECORD_AUDIO" />
     <uses-permission android:name="android.permission.REQUEST_INSTALL_PACKAGES" />
+    <!-- Read the call state so TTS pauses when the phone rings -->
+    <uses-permission android:name="android.permission.READ_PHONE_STATE" />
+    <!-- Foreground service so TTS keeps running while the app is minimized.
+         FOREGROUND_SERVICE_MICROPHONE is mandatory from Android 14 on when the
+         service accesses the microphone in the background (wake word,
+         recording in conversation mode). -->
+    <uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
+    <uses-permission android:name="android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK" />
+    <uses-permission android:name="android.permission.FOREGROUND_SERVICE_MICROPHONE" />
+    <uses-permission android:name="android.permission.POST_NOTIFICATIONS" />
+
     <application
         android:name=".MainApplication"
@@ -35,5 +45,10 @@
                 android:name="android.support.FILE_PROVIDER_PATHS"
                 android:resource="@xml/file_paths" />
         </provider>
+
+        <service
+            android:name=".AriaPlaybackService"
+            android:exported="false"
+            android:foregroundServiceType="mediaPlayback|microphone" />
     </application>
 </manifest>
7 binary files not shown.
@@ -0,0 +1,108 @@

```kotlin
package com.ariacockpit

import android.app.Notification
import android.app.NotificationChannel
import android.app.NotificationManager
import android.app.PendingIntent
import android.app.Service
import android.content.Intent
import android.os.Build
import android.os.IBinder
import android.util.Log
import androidx.core.app.NotificationCompat

/**
 * Foreground service that keeps the app process alive during TTS playback:
 * otherwise Android kills the process as soon as the app goes to the
 * background and ARIA falls silent mid-sentence.
 *
 * The notification is persistent (ongoing) while the service runs.
 * Tapping it brings MainActivity back to the front.
 *
 * foregroundServiceType="mediaPlayback" is mandatory from Android 14 on,
 * otherwise startForeground() throws a SecurityException.
 */
class AriaPlaybackService : Service() {
    companion object {
        private const val TAG = "AriaPlaybackService"
        private const val CHANNEL_ID = "aria_playback"
        private const val NOTIFICATION_ID = 1042
        const val EXTRA_REASON = "reason" // "tts" | "wake" | "rec" | ""
    }

    private var currentReason: String = ""

    override fun onCreate() {
        super.onCreate()
        ensureNotificationChannel()
    }

    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        val reason = intent?.getStringExtra(EXTRA_REASON) ?: ""
        currentReason = reason
        Log.i(TAG, "Foreground service start/update (reason=$reason)")
        try {
            startForeground(NOTIFICATION_ID, buildNotification(reason))
        } catch (e: Exception) {
            Log.e(TAG, "startForeground failed", e)
            stopSelf()
        }
        // START_NOT_STICKY: if Android kills the service, do NOT restart it
        // automatically; the app decides when the service is needed.
        return START_NOT_STICKY
    }

    override fun onDestroy() {
        Log.i(TAG, "Foreground service stopped")
        super.onDestroy()
    }

    override fun onBind(intent: Intent?): IBinder? = null

    private fun ensureNotificationChannel() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            val nm = getSystemService(NotificationManager::class.java) ?: return
            if (nm.getNotificationChannel(CHANNEL_ID) == null) {
                val channel = NotificationChannel(
                    CHANNEL_ID,
                    "ARIA audio playback",
                    NotificationManager.IMPORTANCE_LOW,
                ).apply {
                    description = "Notification while ARIA is speaking (keeps the app alive in the background)"
                    setShowBadge(false)
                }
                nm.createNotificationChannel(channel)
            }
        }
    }

    private fun buildNotification(reason: String): Notification {
        val launchIntent = Intent(this, MainActivity::class.java).apply {
            flags = Intent.FLAG_ACTIVITY_NEW_TASK or Intent.FLAG_ACTIVITY_CLEAR_TOP
        }
        val pendingFlags = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M)
            PendingIntent.FLAG_IMMUTABLE or PendingIntent.FLAG_UPDATE_CURRENT
        else
            PendingIntent.FLAG_UPDATE_CURRENT
        val pendingIntent = PendingIntent.getActivity(this, 0, launchIntent, pendingFlags)

        val (title, body) = when (reason) {
            "tts" -> "ARIA is speaking" to "Playing the reply; tap to open the app"
            "rec" -> "ARIA is listening" to "Voice recording in progress; tap to open the app"
            "wake" -> "ARIA ready" to "Wake word listening passively; tap to open the app"
            else -> "ARIA active" to "Background mode; tap to open the app"
        }

        return NotificationCompat.Builder(this, CHANNEL_ID)
            .setContentTitle(title)
            .setContentText(body)
            .setSmallIcon(R.mipmap.ic_launcher)
            .setContentIntent(pendingIntent)
            .setOngoing(true)
            .setShowWhen(false)
            .setPriority(NotificationCompat.PRIORITY_LOW)
            .setCategory(NotificationCompat.CATEGORY_SERVICE)
            .setVisibility(NotificationCompat.VISIBILITY_PUBLIC)
            .build()
    }
}
```
@@ -0,0 +1,59 @@

```kotlin
package com.ariacockpit

import android.content.Intent
import android.os.Build
import android.util.Log
import com.facebook.react.bridge.Promise
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod

/**
 * RN bridge for the AriaPlaybackService.
 *
 * Started from JS during TTS playback so that Android does not kill the
 * app process while the app is in the background (ARIA keeps speaking
 * even when Stefan has minimized the app).
 *
 * The service stops either explicitly via stop() or together with the
 * process, which for a foreground service only happens when the user
 * force-stops the app.
 */
class BackgroundAudioModule(reactContext: ReactApplicationContext) : ReactContextBaseJavaModule(reactContext) {
    override fun getName() = "BackgroundAudio"

    companion object { private const val TAG = "BackgroundAudio" }

    @ReactMethod
    fun start(reason: String, promise: Promise) {
        try {
            val ctx = reactApplicationContext
            val intent = Intent(ctx, AriaPlaybackService::class.java)
            intent.putExtra(AriaPlaybackService.EXTRA_REASON, reason)
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
                ctx.startForegroundService(intent)
            } else {
                ctx.startService(intent)
            }
            promise.resolve(true)
        } catch (e: Exception) {
            Log.w(TAG, "start failed: ${e.message}")
            promise.reject("START_FAILED", e.message ?: "Unknown error", e)
        }
    }

    @ReactMethod
    fun stop(promise: Promise) {
        try {
            val ctx = reactApplicationContext
            ctx.stopService(Intent(ctx, AriaPlaybackService::class.java))
            promise.resolve(true)
        } catch (e: Exception) {
            Log.w(TAG, "stop failed: ${e.message}")
            promise.reject("STOP_FAILED", e.message ?: "Unknown error", e)
        }
    }

    @ReactMethod fun addListener(eventName: String) {}
    @ReactMethod fun removeListeners(count: Int) {}
}
```
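On the JS side, this native module would typically be reached via `NativeModules.BackgroundAudio`. A plausible wrapper, sketched with the native side injected so the bookkeeping is testable on its own (the reference-counting scheme is an illustrative assumption, not the app's actual service code):

```typescript
// Hypothetical JS-side wrapper around the BackgroundAudio native module:
// TTS, wake-word listening and recording can each hold the foreground
// service independently; the service is only stopped when the last
// holder releases it.
type BackgroundAudioNative = {
  start(reason: string): void; // maps to the native @ReactMethod start()
  stop(): void;                // maps to the native @ReactMethod stop()
};

type Reason = "tts" | "wake" | "rec";

function createBackgroundAudio(native: BackgroundAudioNative) {
  const holders = new Set<Reason>();
  return {
    acquire(reason: Reason): number {
      holders.add(reason);
      native.start(reason); // restarting an active service just updates the notification
      return holders.size;
    },
    release(reason: Reason): number {
      holders.delete(reason);
      if (holders.size === 0) native.stop();
      return holders.size;
    },
  };
}
```

Without some such bookkeeping, stopping the service when TTS finishes would also kill wake-word listening that still needs the microphone in the background.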
@@ -0,0 +1,16 @@

```kotlin
package com.ariacockpit

import com.facebook.react.ReactPackage
import com.facebook.react.bridge.NativeModule
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.uimanager.ViewManager

class BackgroundAudioPackage : ReactPackage {
    override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
        return listOf(BackgroundAudioModule(reactContext))
    }

    override fun createViewManagers(reactContext: ReactApplicationContext): List<ViewManager<*, *>> {
        return emptyList()
    }
}
```
@@ -21,6 +21,9 @@ class MainApplication : Application(), ReactApplication {
             add(ApkInstallerPackage())
             add(AudioFocusPackage())
             add(PcmStreamPlayerPackage())
+            add(OpenWakeWordPackage())
+            add(PhoneCallPackage())
+            add(BackgroundAudioPackage())
         }
 
     override fun getJSMainModuleName(): String = "index"
@@ -0,0 +1,413 @@

```kotlin
package com.ariacockpit

import ai.onnxruntime.OnnxTensor
import ai.onnxruntime.OrtEnvironment
import ai.onnxruntime.OrtSession
import android.Manifest
import android.content.pm.PackageManager
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder
import android.media.audiofx.AcousticEchoCanceler
import android.media.audiofx.AutomaticGainControl
import android.media.audiofx.NoiseSuppressor
import android.util.Log
import androidx.core.content.ContextCompat
import com.facebook.react.bridge.Promise
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod
import com.facebook.react.modules.core.DeviceEventManagerModule
import java.nio.FloatBuffer
import java.util.concurrent.atomic.AtomicBoolean

/**
 * On-device wake-word detection via openWakeWord (https://github.com/dscripka/openWakeWord).
 *
 * Three-stage ONNX pipeline:
 * 1. Audio (16 kHz mono int16, 1280-sample chunks) → melspectrogram → 32-mel frames
 * 2. Sliding window of 76 mel frames (stride 8) → speech embedding → 96-dim vector
 * 3. Last 16 embeddings (~1.28 s of context) → wake-word classifier → sigmoid score
 *
 * Models live in assets/openwakeword/ (mel + embedding are shared, plus one
 * .onnx per keyword). A detection fires after `patience` consecutive frames
 * above `threshold` and suppresses repeats for `debounceMs`.
 *
 * Emits "WakeWordDetected" as an RN event when a trigger is detected.
 */
class OpenWakeWordModule(reactContext: ReactApplicationContext) : ReactContextBaseJavaModule(reactContext) {
    override fun getName() = "OpenWakeWord"

    companion object {
        private const val TAG = "OpenWakeWord"
        private const val SAMPLE_RATE = 16000
        private const val CHUNK_SAMPLES = 1280            // 80 ms @ 16 kHz
        private const val MEL_FRAMES_PER_EMBEDDING = 76   // embedding window
        private const val EMBEDDING_STRIDE = 8            // slide by 8 mel frames
        private const val EMBEDDING_DIM = 96
        private const val MEL_BINS = 32
        private const val DEFAULT_WW_INPUT_FRAMES = 16    // fallback when model metadata is missing
    }

    private val env: OrtEnvironment = OrtEnvironment.getEnvironment()
    private var melSession: OrtSession? = null
    private var embSession: OrtSession? = null
    private var wwSession: OrtSession? = null

    private var melInputName: String = "input"
    private var embInputName: String = "input_1"
    private var wwInputName: String = "input"
    // Number of embedding frames the wake-word classifier expects per inference:
    // hey_jarvis uses 16, other community models may differ (e.g. 28).
    // Read from the model metadata in init().
    private var wwInputFrames: Int = DEFAULT_WW_INPUT_FRAMES

    // Configuration
    private var threshold: Float = 0.5f
    private var patience: Int = 2
    private var debounceMs: Long = 1500
    private var modelName: String = "hey_jarvis"

    // Audio capture thread
    private var audioRecord: AudioRecord? = null
    private val running = AtomicBoolean(false)
    private var captureThread: Thread? = null

    // Audio effects: echo cancellation (against ARIA's own TTS voice, which
    // would otherwise trigger the wake word) + noise suppression. Usually
    // already active via the VOICE_COMMUNICATION audio source, but enabling
    // them explicitly is more robust.
    private var aec: AcousticEchoCanceler? = null
    private var ns: NoiseSuppressor? = null
    private var agc: AutomaticGainControl? = null

    // Inference state
    private val melBuffer: ArrayList<FloatArray> = ArrayList(256)  // list of 32-dim frames
    private var melProcessedIdx: Int = 0
    private val embBuffer: ArrayDeque<FloatArray> = ArrayDeque(32) // ring buffer of recent embeddings
    private var consecutiveAboveThreshold: Int = 0
    private var lastDetectionMs: Long = 0L

    /**
     * Initializes the ONNX sessions for a given wake word.
     * modelName: file name without suffix (e.g. "hey_jarvis", "alexa", "hey_mycroft", "hey_rhasspy")
     */
    @ReactMethod
    fun init(modelName: String, threshold: Double, patience: Int, debounceMs: Int, promise: Promise) {
        try {
            disposeSessions()
            this.modelName = modelName
            this.threshold = threshold.toFloat()
            this.patience = patience.coerceAtLeast(1)
            this.debounceMs = debounceMs.toLong()

            val ctx = reactApplicationContext
            val melBytes = ctx.assets.open("openwakeword/melspectrogram.onnx").use { it.readBytes() }
            val embBytes = ctx.assets.open("openwakeword/embedding_model.onnx").use { it.readBytes() }
            val wwBytes = ctx.assets.open("openwakeword/$modelName.onnx").use { it.readBytes() }

            val opts = OrtSession.SessionOptions()
            melSession = env.createSession(melBytes, opts)
            embSession = env.createSession(embBytes, opts)
            wwSession = env.createSession(wwBytes, opts)

            melInputName = melSession!!.inputNames.first()
            embInputName = embSession!!.inputNames.first()
            wwInputName = wwSession!!.inputNames.first()

            // Read the classifier's input frame count from the model; it varies per keyword.
            // Expected shape: (1, N, 96), with N stored in the model metadata.
            val wwInputInfo = wwSession!!.inputInfo[wwInputName]
            val wwShape = (wwInputInfo?.info as? ai.onnxruntime.TensorInfo)?.shape
            wwInputFrames = wwShape?.getOrNull(1)?.toInt()?.takeIf { it > 0 } ?: DEFAULT_WW_INPUT_FRAMES

            Log.i(TAG, "Init OK: model=$modelName wwFrames=$wwInputFrames threshold=$threshold patience=$patience " +
                "debounce=${debounceMs}ms (inputs: mel=$melInputName emb=$embInputName ww=$wwInputName)")
            promise.resolve(true)
        } catch (e: Exception) {
            Log.e(TAG, "Init failed: ${e.message}", e)
            disposeSessions()
            promise.reject("INIT_FAILED", e.message ?: "Unknown error", e)
        }
    }

    @ReactMethod
    fun start(promise: Promise) {
        if (running.get()) {
            promise.resolve(true)
            return
        }
        if (melSession == null || embSession == null || wwSession == null) {
            promise.reject("NOT_INITIALIZED", "init() must be called before start()")
            return
        }
        // Check the permission: the app code usually requests it beforehand, but
        // we insist on it explicitly here so AudioRecord does not fail silently.
        val perm = ContextCompat.checkSelfPermission(reactApplicationContext, Manifest.permission.RECORD_AUDIO)
        if (perm != PackageManager.PERMISSION_GRANTED) {
            promise.reject("NO_MIC_PERMISSION", "RECORD_AUDIO permission missing")
            return
        }

        try {
            val minBuf = AudioRecord.getMinBufferSize(
                SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
            ).coerceAtLeast(CHUNK_SAMPLES * 2 * 4)

            // VOICE_COMMUNICATION source: on most Android devices this
            // automatically enables echo cancellation + noise suppression.
            // Important so that ARIA's own voice does not trigger the wake
            // word while listening in parallel with TTS playback.
```
val record = AudioRecord(
|
||||||
|
MediaRecorder.AudioSource.VOICE_COMMUNICATION,
|
||||||
|
SAMPLE_RATE,
|
||||||
|
AudioFormat.CHANNEL_IN_MONO,
|
||||||
|
AudioFormat.ENCODING_PCM_16BIT,
|
||||||
|
minBuf,
|
||||||
|
)
|
||||||
|
if (record.state != AudioRecord.STATE_INITIALIZED) {
|
||||||
|
record.release()
|
||||||
|
promise.reject("AUDIO_INIT", "AudioRecord nicht initialisiert (Mikro belegt?)")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
audioRecord = record
|
||||||
|
|
||||||
|
// Audio-Effects ZUSAETZLICH explizit aktivieren — manche Geraete
|
||||||
|
// benoetigen das, obwohl VOICE_COMMUNICATION es eigentlich schon
|
||||||
|
// mitbringt. Failure ist nicht kritisch (continue ohne Effects).
|
||||||
|
try {
|
||||||
|
if (AcousticEchoCanceler.isAvailable()) {
|
||||||
|
aec = AcousticEchoCanceler.create(record.audioSessionId)?.apply { enabled = true }
|
||||||
|
Log.i(TAG, "AEC aktiviert (enabled=${aec?.enabled})")
|
||||||
|
}
|
||||||
|
} catch (e: Exception) { Log.w(TAG, "AEC failed: ${e.message}") }
|
||||||
|
try {
|
||||||
|
if (NoiseSuppressor.isAvailable()) {
|
||||||
|
ns = NoiseSuppressor.create(record.audioSessionId)?.apply { enabled = true }
|
||||||
|
}
|
||||||
|
} catch (e: Exception) { Log.w(TAG, "NS failed: ${e.message}") }
|
||||||
|
try {
|
||||||
|
if (AutomaticGainControl.isAvailable()) {
|
||||||
|
agc = AutomaticGainControl.create(record.audioSessionId)?.apply { enabled = true }
|
||||||
|
}
|
||||||
|
} catch (e: Exception) { Log.w(TAG, "AGC failed: ${e.message}") }
|
||||||
|
|
||||||
|
resetInferenceState()
|
||||||
|
running.set(true)
|
||||||
|
record.startRecording()
|
||||||
|
|
||||||
|
captureThread = Thread({ captureLoop() }, "OpenWakeWordCapture").apply {
|
||||||
|
isDaemon = true
|
||||||
|
start()
|
||||||
|
}
|
||||||
|
|
||||||
|
Log.i(TAG, "Lauschen gestartet (model=$modelName)")
|
||||||
|
promise.resolve(true)
|
||||||
|
} catch (e: Exception) {
|
||||||
|
Log.e(TAG, "start fehlgeschlagen", e)
|
||||||
|
running.set(false)
|
||||||
|
audioRecord?.release()
|
||||||
|
audioRecord = null
|
||||||
|
promise.reject("START_FAILED", e.message ?: "Unbekannter Fehler", e)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
private fun releaseAudioEffects() {
|
||||||
|
try { aec?.release() } catch (_: Exception) {}
|
||||||
|
try { ns?.release() } catch (_: Exception) {}
|
||||||
|
try { agc?.release() } catch (_: Exception) {}
|
||||||
|
aec = null; ns = null; agc = null
|
||||||
|
}
|
||||||
|
|
||||||
|
@ReactMethod
|
||||||
|
fun stop(promise: Promise) {
|
||||||
|
running.set(false)
|
||||||
|
try {
|
||||||
|
captureThread?.join(1500)
|
||||||
|
} catch (_: InterruptedException) {}
|
||||||
|
captureThread = null
|
||||||
|
try { audioRecord?.stop() } catch (_: Exception) {}
|
||||||
|
try { audioRecord?.release() } catch (_: Exception) {}
|
||||||
|
audioRecord = null
|
||||||
|
releaseAudioEffects()
|
||||||
|
Log.i(TAG, "Lauschen gestoppt")
|
||||||
|
promise.resolve(true)
|
||||||
|
}
|
||||||
|
|
||||||
|
@ReactMethod
|
||||||
|
fun dispose(promise: Promise) {
|
||||||
|
running.set(false)
|
||||||
|
try { captureThread?.join(1000) } catch (_: InterruptedException) {}
|
||||||
|
captureThread = null
|
||||||
|
try { audioRecord?.stop() } catch (_: Exception) {}
|
||||||
|
try { audioRecord?.release() } catch (_: Exception) {}
|
||||||
|
audioRecord = null
|
||||||
|
releaseAudioEffects()
|
||||||
|
disposeSessions()
|
||||||
|
promise.resolve(true)
|
||||||
|
}
|
||||||
|
|
||||||
|
@ReactMethod
|
||||||
|
fun isAvailable(promise: Promise) {
|
||||||
|
// Wake-Word ist immer verfuegbar (kein API-Key, alles on-device)
|
||||||
|
promise.resolve(true)
|
||||||
|
}
|
||||||
|
|
||||||
|
// RN-Event-Subscriptions — RN-Konvention, sonst Warnung im Debug-Build
|
||||||
|
@ReactMethod fun addListener(eventName: String) {}
|
||||||
|
@ReactMethod fun removeListeners(count: Int) {}
|
||||||
|
|
||||||
|
private fun disposeSessions() {
|
||||||
|
try { melSession?.close() } catch (_: Exception) {}
|
||||||
|
try { embSession?.close() } catch (_: Exception) {}
|
||||||
|
try { wwSession?.close() } catch (_: Exception) {}
|
||||||
|
melSession = null
|
||||||
|
embSession = null
|
||||||
|
wwSession = null
|
||||||
|
}
|
||||||
|
|
||||||
|
private fun resetInferenceState() {
|
||||||
|
melBuffer.clear()
|
||||||
|
melProcessedIdx = 0
|
||||||
|
embBuffer.clear()
|
||||||
|
consecutiveAboveThreshold = 0
|
||||||
|
lastDetectionMs = 0L
|
||||||
|
}
|
||||||
|
|
||||||
|
private fun emitDetected() {
|
||||||
|
val params = com.facebook.react.bridge.Arguments.createMap().apply {
|
||||||
|
putString("model", modelName)
|
||||||
|
}
|
||||||
|
try {
|
||||||
|
reactApplicationContext
|
||||||
|
.getJSModule(DeviceEventManagerModule.RCTDeviceEventEmitter::class.java)
|
||||||
|
.emit("WakeWordDetected", params)
|
||||||
|
} catch (e: Exception) {
|
||||||
|
Log.w(TAG, "emit fehlgeschlagen: ${e.message}")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
private fun captureLoop() {
|
||||||
|
val buf = ShortArray(CHUNK_SAMPLES)
|
||||||
|
val record = audioRecord ?: return
|
||||||
|
Log.i(TAG, "Capture-Loop gestartet")
|
||||||
|
while (running.get()) {
|
||||||
|
var read = 0
|
||||||
|
while (read < CHUNK_SAMPLES && running.get()) {
|
||||||
|
val n = record.read(buf, read, CHUNK_SAMPLES - read)
|
||||||
|
if (n <= 0) {
|
||||||
|
Log.w(TAG, "AudioRecord.read returned $n — Loop ende")
|
||||||
|
running.set(false)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
read += n
|
||||||
|
}
|
||||||
|
if (!running.get()) break
|
||||||
|
try {
|
||||||
|
processChunk(buf)
|
||||||
|
} catch (e: Exception) {
|
||||||
|
Log.w(TAG, "processChunk: ${e.message}")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
Log.i(TAG, "Capture-Loop beendet")
|
||||||
|
}
|
||||||
|
|
||||||
|
/** Verarbeitet einen 1280-Sample int16 Audio-Chunk. */
|
||||||
|
private fun processChunk(audio: ShortArray) {
|
||||||
|
// 1) Audio → mel (output (1, 1, frames, 32))
|
||||||
|
val floats = FloatArray(audio.size) { audio[it].toFloat() }
|
||||||
|
val melTensor = OnnxTensor.createTensor(
|
||||||
|
env,
|
||||||
|
FloatBuffer.wrap(floats),
|
||||||
|
longArrayOf(1L, audio.size.toLong()),
|
||||||
|
)
|
||||||
|
val melResult = melSession!!.run(mapOf(melInputName to melTensor))
|
||||||
|
val melOut = melResult.get(0).value
|
||||||
|
melTensor.close()
|
||||||
|
@Suppress("UNCHECKED_CAST")
|
||||||
|
val mel4 = melOut as Array<Array<Array<FloatArray>>>
|
||||||
|
val frames = mel4[0][0]
|
||||||
|
// openWakeWord wendet `mel/10 + 2` an, bevor es ans Embedding-Modell geht
|
||||||
|
for (frame in frames) {
|
||||||
|
val scaled = FloatArray(frame.size) { frame[it] / 10f + 2f }
|
||||||
|
melBuffer.add(scaled)
|
||||||
|
}
|
||||||
|
melResult.close()
|
||||||
|
|
||||||
|
// 2) Sliding window: alle vollstaendigen 76-Frame-Fenster verarbeiten
|
||||||
|
while (melBuffer.size >= melProcessedIdx + MEL_FRAMES_PER_EMBEDDING) {
|
||||||
|
val flat = FloatArray(MEL_FRAMES_PER_EMBEDDING * MEL_BINS)
|
||||||
|
var pos = 0
|
||||||
|
for (i in 0 until MEL_FRAMES_PER_EMBEDDING) {
|
||||||
|
val src = melBuffer[melProcessedIdx + i]
|
||||||
|
System.arraycopy(src, 0, flat, pos, MEL_BINS)
|
||||||
|
pos += MEL_BINS
|
||||||
|
}
|
||||||
|
val embIn = OnnxTensor.createTensor(
|
||||||
|
env,
|
||||||
|
FloatBuffer.wrap(flat),
|
||||||
|
longArrayOf(1L, MEL_FRAMES_PER_EMBEDDING.toLong(), MEL_BINS.toLong(), 1L),
|
||||||
|
)
|
||||||
|
val embRes = embSession!!.run(mapOf(embInputName to embIn))
|
||||||
|
val embOut = embRes.get(0).value
|
||||||
|
embIn.close()
|
||||||
|
// Erwartete Output-Form: (1, 1, 1, 96) — rank-4, NICHT (1, 96).
|
||||||
|
// Die Google-Embedding-Pipeline behaelt extra Dimensionen.
|
||||||
|
@Suppress("UNCHECKED_CAST")
|
||||||
|
val embArr = embOut as Array<Array<Array<FloatArray>>>
|
||||||
|
embBuffer.addLast(embArr[0][0][0].copyOf())
|
||||||
|
while (embBuffer.size > wwInputFrames) embBuffer.removeFirst()
|
||||||
|
embRes.close()
|
||||||
|
|
||||||
|
melProcessedIdx += EMBEDDING_STRIDE
|
||||||
|
}
|
||||||
|
// Mel-Buffer trimmen — verhindert Memory-Wachstum
|
||||||
|
if (melProcessedIdx > MEL_FRAMES_PER_EMBEDDING) {
|
||||||
|
val keepFrom = melProcessedIdx - MEL_FRAMES_PER_EMBEDDING
|
||||||
|
val newList = ArrayList<FloatArray>(melBuffer.size - keepFrom)
|
||||||
|
for (i in keepFrom until melBuffer.size) newList.add(melBuffer[i])
|
||||||
|
melBuffer.clear()
|
||||||
|
melBuffer.addAll(newList)
|
||||||
|
melProcessedIdx = MEL_FRAMES_PER_EMBEDDING
|
||||||
|
}
|
||||||
|
|
||||||
|
// 3) Klassifikation — sobald wir 16 Embeddings haben
|
||||||
|
if (embBuffer.size < wwInputFrames) return
|
||||||
|
val flatEmb = FloatArray(wwInputFrames * EMBEDDING_DIM)
|
||||||
|
var p = 0
|
||||||
|
// Letzte wwInputFrames Embeddings nehmen (embBuffer ist auf wwInputFrames begrenzt)
|
||||||
|
for (e in embBuffer) {
|
||||||
|
System.arraycopy(e, 0, flatEmb, p, EMBEDDING_DIM)
|
||||||
|
p += EMBEDDING_DIM
|
||||||
|
}
|
||||||
|
val wwIn = OnnxTensor.createTensor(
|
||||||
|
env,
|
||||||
|
FloatBuffer.wrap(flatEmb),
|
||||||
|
longArrayOf(1L, wwInputFrames.toLong(), EMBEDDING_DIM.toLong()),
|
||||||
|
)
|
||||||
|
val wwRes = wwSession!!.run(mapOf(wwInputName to wwIn))
|
||||||
|
val wwOut = wwRes.get(0).value
|
||||||
|
wwIn.close()
|
||||||
|
// Erwartete Output-Form: (1, 1) → Array<FloatArray>
|
||||||
|
@Suppress("UNCHECKED_CAST")
|
||||||
|
val score = (wwOut as Array<FloatArray>)[0][0]
|
||||||
|
wwRes.close()
|
||||||
|
|
||||||
|
if (score >= threshold) {
|
||||||
|
consecutiveAboveThreshold++
|
||||||
|
if (consecutiveAboveThreshold >= patience) {
|
||||||
|
val now = System.currentTimeMillis()
|
||||||
|
if (now - lastDetectionMs >= debounceMs) {
|
||||||
|
lastDetectionMs = now
|
||||||
|
consecutiveAboveThreshold = 0
|
||||||
|
Log.i(TAG, "Wake-Word erkannt! score=$score model=$modelName")
|
||||||
|
emitDetected()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
consecutiveAboveThreshold = 0
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
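The detection gate at the end of processChunk is easiest to see in isolation. Below is a minimal Python model of the patience + debounce logic; the threshold, timings, and the 80 ms chunk spacing are demo values, not the app's defaults:

```python
class WakeGate:
    """Model of the gate: `patience` consecutive scores above `threshold`
    fire a detection, then `debounce_ms` suppresses repeat firings."""

    def __init__(self, threshold, patience, debounce_ms):
        self.threshold = threshold
        self.patience = patience
        self.debounce_ms = debounce_ms
        self.consecutive = 0
        self.last_detection_ms = 0

    def feed(self, score, now_ms):
        """Return True when a detection event should be emitted."""
        if score < self.threshold:
            self.consecutive = 0            # any dip resets the streak
            return False
        self.consecutive += 1
        if self.consecutive < self.patience:
            return False
        if now_ms - self.last_detection_ms < self.debounce_ms:
            return False                    # inside the debounce window
        self.last_detection_ms = now_ms
        self.consecutive = 0
        return True


# One score per 80 ms chunk; threshold/patience/debounce are demo values.
gate = WakeGate(threshold=0.5, patience=2, debounce_ms=500)
scores = [0.9, 0.9, 0.9, 0.2, 0.9, 0.9]
times = [1000, 1080, 1160, 1240, 1320, 1400]
hits = [gate.feed(s, t) for s, t in zip(scores, times)]
# Only the second chunk fires: patience needs two high scores in a row,
# and the second burst at 1320/1400 is swallowed by the 500 ms debounce.
```

The reset of `consecutive` on a successful detection means a sustained high score cannot re-fire before both patience and debounce are satisfied again.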
@@ -0,0 +1,16 @@
package com.ariacockpit

import com.facebook.react.ReactPackage
import com.facebook.react.bridge.NativeModule
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.uimanager.ViewManager

class OpenWakeWordPackage : ReactPackage {
    override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
        return listOf(OpenWakeWordModule(reactContext))
    }

    override fun createViewManagers(reactContext: ReactApplicationContext): List<ViewManager<*, *>> {
        return emptyList()
    }
}
@@ -137,6 +137,17 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
                 Log.w(TAG, "play() failed immediately: ${e.message}")
             }
         }
+        // Idle cutoff: if endRequested did NOT arrive but nothing more comes
+        // in for 30s, we abort (bridge crash, lost final chunk).
+        var idleMs = 0L
+        val maxIdleMs = 30_000L
+        // Target buffer fill — below this watermark we feed silence so that
+        // AudioTrack does not underrun while the bridge renders the next
+        // sentence. Otherwise Spotify/YouTube react with an unsolicited
+        // resume after ~10s of silence.
+        val underrunGuardFrames = sampleRate / 10 // ~100ms
+        val silenceFillFrames = sampleRate / 20 // ~50ms per refill
+
         mainLoop@ while (!writerShouldStop) {
             val data = queue.poll(50, java.util.concurrent.TimeUnit.MILLISECONDS)
             if (data == null) {
@@ -153,8 +164,33 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
                     }
                     break@mainLoop
                 }
+                // Underrun guard: feed silence when the AudioTrack buffer is
+                // about to drain. Otherwise Spotify resumes on its own after
+                // ~10s of pause, even though we hold the focus.
+                if (playbackStarted) {
+                    val framesWritten = bytesBuffered / streamBytesPerFrame
+                    val framesPlayed = t.playbackHeadPosition.toLong()
+                    val framesInBuffer = framesWritten - framesPlayed
+                    if (framesInBuffer < underrunGuardFrames) {
+                        val fillBytes = silenceFillFrames * streamBytesPerFrame
+                        val silence = ByteArray(fillBytes)
+                        var silOff = 0
+                        while (silOff < silence.size && !writerShouldStop) {
+                            val w = t.write(silence, silOff, silence.size - silOff)
+                            if (w <= 0) break
+                            silOff += w
+                        }
+                        bytesBuffered += silence.size
+                    }
+                }
+                idleMs += 50L
+                if (idleMs >= maxIdleMs) {
+                    Log.w(TAG, "Idle cutoff: ${maxIdleMs}ms without data — ending stream")
+                    break@mainLoop
+                }
                 continue@mainLoop
             }
+            idleMs = 0L
 
             // Pre-roll check: play() only once enough is buffered
             if (!playbackStarted && bytesBuffered + data.size >= prerollBytes) {
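The underrun-guard arithmetic above is plain frame bookkeeping. A small Python sketch, assuming 24 kHz mono 16-bit PCM purely as an example format (the module's real rate comes from `sampleRate`):

```python
def silence_refill_bytes(sample_rate, bytes_per_frame,
                         bytes_buffered, playback_head_frames):
    """How many bytes of silence to append, or 0 while the AudioTrack
    backlog is still above the ~100 ms watermark. Constants mirror the
    diff: guard = rate/10 frames, refill = rate/20 frames."""
    guard_frames = sample_rate // 10      # ~100 ms watermark
    fill_frames = sample_rate // 20       # ~50 ms per refill
    frames_written = bytes_buffered // bytes_per_frame
    frames_in_buffer = frames_written - playback_head_frames
    if frames_in_buffer >= guard_frames:
        return 0
    return fill_frames * bytes_per_frame


# 24 kHz mono 16-bit PCM (assumed example): 2 bytes per frame,
# watermark = 2400 frames, one refill = 1200 frames = 2400 bytes.
low = silence_refill_bytes(24000, 2, 10000, 3000)   # backlog 2000 frames
ok = silence_refill_bytes(24000, 2, 20000, 3000)    # backlog 7000 frames
```

Feeding fixed ~50 ms refills rather than topping the buffer up to the watermark keeps added latency bounded while the next sentence is still being rendered.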
@@ -0,0 +1,126 @@
package com.ariacockpit

import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.os.Build
import android.telephony.PhoneStateListener
import android.telephony.TelephonyCallback
import android.telephony.TelephonyManager
import android.util.Log
import androidx.core.content.ContextCompat
import com.facebook.react.bridge.Arguments
import com.facebook.react.bridge.Promise
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod
import com.facebook.react.modules.core.DeviceEventManagerModule

/**
 * Listens for call state changes — when the phone rings or a call is in
 * progress, the module sends a "PhoneCallStateChanged" event to JS.
 *
 * The JS side then stops TTS playback so ARIA does not keep talking into
 * the middle of a conversation. Without the READ_PHONE_STATE permission,
 * start() fails quietly — the rest of the app works as before.
 *
 * State strings: "idle" | "ringing" | "offhook"
 */
class PhoneCallModule(reactContext: ReactApplicationContext) : ReactContextBaseJavaModule(reactContext) {
    override fun getName() = "PhoneCall"

    companion object { private const val TAG = "PhoneCall" }

    private var telephonyManager: TelephonyManager? = null
    private var legacyListener: PhoneStateListener? = null
    private var modernCallback: Any? = null // TelephonyCallback as of API 31
    private var lastState: Int = TelephonyManager.CALL_STATE_IDLE

    @ReactMethod
    fun start(promise: Promise) {
        try {
            val perm = ContextCompat.checkSelfPermission(reactApplicationContext, Manifest.permission.READ_PHONE_STATE)
            if (perm != PackageManager.PERMISSION_GRANTED) {
                Log.w(TAG, "READ_PHONE_STATE permission missing — call detection inactive")
                promise.resolve(false)
                return
            }
            val tm = reactApplicationContext.getSystemService(Context.TELEPHONY_SERVICE) as? TelephonyManager
            if (tm == null) {
                Log.w(TAG, "TelephonyManager not available")
                promise.resolve(false)
                return
            }
            telephonyManager = tm

            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
                val cb = object : TelephonyCallback(), TelephonyCallback.CallStateListener {
                    override fun onCallStateChanged(state: Int) {
                        handleStateChange(state)
                    }
                }
                tm.registerTelephonyCallback(reactApplicationContext.mainExecutor, cb)
                modernCallback = cb
            } else {
                @Suppress("DEPRECATION")
                val l = object : PhoneStateListener() {
                    override fun onCallStateChanged(state: Int, phoneNumber: String?) {
                        handleStateChange(state)
                    }
                }
                @Suppress("DEPRECATION")
                tm.listen(l, PhoneStateListener.LISTEN_CALL_STATE)
                legacyListener = l
            }
            Log.i(TAG, "PhoneCall listener active")
            promise.resolve(true)
        } catch (e: Exception) {
            Log.e(TAG, "start failed", e)
            promise.reject("START_FAILED", e.message ?: "Unknown error", e)
        }
    }

    @ReactMethod
    fun stop(promise: Promise) {
        try {
            val tm = telephonyManager
            if (tm != null) {
                if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
                    (modernCallback as? TelephonyCallback)?.let { tm.unregisterTelephonyCallback(it) }
                    modernCallback = null
                } else {
                    @Suppress("DEPRECATION")
                    legacyListener?.let { tm.listen(it, PhoneStateListener.LISTEN_NONE) }
                    legacyListener = null
                }
            }
            telephonyManager = null
            lastState = TelephonyManager.CALL_STATE_IDLE
            promise.resolve(true)
        } catch (e: Exception) {
            promise.reject("STOP_FAILED", e.message ?: "")
        }
    }

    private fun handleStateChange(state: Int) {
        if (state == lastState) return
        lastState = state
        val name = when (state) {
            TelephonyManager.CALL_STATE_RINGING -> "ringing"
            TelephonyManager.CALL_STATE_OFFHOOK -> "offhook"
            TelephonyManager.CALL_STATE_IDLE -> "idle"
            else -> return
        }
        Log.i(TAG, "Phone state: $name")
        val params = Arguments.createMap().apply { putString("state", name) }
        try {
            reactApplicationContext.getJSModule(DeviceEventManagerModule.RCTDeviceEventEmitter::class.java)
                .emit("PhoneCallStateChanged", params)
        } catch (e: Exception) {
            Log.w(TAG, "Event emit failed: ${e.message}")
        }
    }

    @ReactMethod fun addListener(eventName: String) {}
    @ReactMethod fun removeListeners(count: Int) {}
}
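The dedup-and-map behaviour of handleStateChange can be modelled without any Android APIs. A Python sketch (the 0/1/2 constants match TelephonyManager's CALL_STATE_IDLE/RINGING/OFFHOOK values; `emit` stands in for the RN event bridge and is an assumption of this sketch):

```python
CALL_STATE_IDLE, CALL_STATE_RINGING, CALL_STATE_OFFHOOK = 0, 1, 2
STATE_NAMES = {CALL_STATE_IDLE: "idle",
               CALL_STATE_RINGING: "ringing",
               CALL_STATE_OFFHOOK: "offhook"}


class CallStateRelay:
    """Mirrors handleStateChange: drop repeated callbacks and map the int
    states to the "idle" | "ringing" | "offhook" strings sent to JS."""

    def __init__(self, emit):
        self.last = CALL_STATE_IDLE
        self.emit = emit

    def on_state(self, state):
        if state == self.last:
            return                      # duplicate callback, ignore
        self.last = state
        name = STATE_NAMES.get(state)
        if name is None:
            return                      # unknown state, swallow silently
        self.emit({"state": name})


events = []
relay = CallStateRelay(events.append)
# incoming call, answered, hung up (with duplicate callbacks mixed in)
for s in [0, 1, 1, 2, 0, 0]:
    relay.on_state(s)
```

Starting `last` at idle means a spurious initial "idle" callback after registration never reaches JS.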
@@ -0,0 +1,16 @@
package com.ariacockpit

import com.facebook.react.ReactPackage
import com.facebook.react.bridge.NativeModule
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.uimanager.ViewManager

class PhoneCallPackage : ReactPackage {
    override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
        return listOf(PhoneCallModule(reactContext))
    }

    override fun createViewManagers(reactContext: ReactApplicationContext): List<ViewManager<*, *>> {
        return emptyList()
    }
}
Binary file not shown.
@@ -167,10 +167,23 @@ export CI=true
 
 if [ "$MODE" = "debug" ]; then
     ./gradlew assembleDebug
-    APK_PATH="app/build/outputs/apk/debug/app-debug.apk"
+    OUT_DIR="app/build/outputs/apk/debug"
 else
     ./gradlew assembleRelease
-    APK_PATH="app/build/outputs/apk/release/app-release.apk"
+    OUT_DIR="app/build/outputs/apk/release"
+fi
+
+# With ABI splits the APK is called e.g. app-arm64-v8a-release.apk instead of
+# app-release.apk. Try the arm64-v8a variant first (that is our default),
+# with the universal APK as fallback in case splits are disabled.
+if [ -f "$OUT_DIR/app-arm64-v8a-${MODE}.apk" ]; then
+    APK_PATH="$OUT_DIR/app-arm64-v8a-${MODE}.apk"
+elif [ -f "$OUT_DIR/app-${MODE}.apk" ]; then
+    APK_PATH="$OUT_DIR/app-${MODE}.apk"
+else
+    echo -e "${RED}No matching APK found in $OUT_DIR${NC}"
+    cd ..
+    exit 1
 fi
 
 cd ..
|||||||
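The fallback order above reduces to a two-candidate lookup. A Python sketch with an injected `exists` predicate so it runs without a real build tree (the helper name `resolve_apk` is made up for this sketch; the paths mirror the script):

```python
def resolve_apk(out_dir, mode, exists):
    """Mirror of the shell fallback: ABI-split arm64-v8a APK first,
    universal APK second, None if neither exists."""
    for name in (f"app-arm64-v8a-{mode}.apk", f"app-{mode}.apk"):
        path = f"{out_dir}/{name}"
        if exists(path):
            return path
    return None  # the script aborts with an error message at this point


out = "app/build/outputs/apk/release"
# Splits disabled: only the universal APK exists.
universal_only = {f"{out}/app-release.apk"}
# Splits enabled: both exist; the arm64-v8a variant must win.
both = {f"{out}/app-arm64-v8a-release.apk", f"{out}/app-release.apk"}
pick_universal = resolve_apk(out, "release", universal_only.__contains__)
pick_split = resolve_apk(out, "release", both.__contains__)
```

Checking the split name first matters: with splits enabled both files can coexist, and the universal APK would silently win otherwise.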
@@ -1,6 +1,6 @@
 {
   "name": "aria-cockpit",
-  "version": "0.0.6.4",
+  "version": "0.0.8.2",
   "private": true,
   "scripts": {
     "android": "react-native run-android",
@@ -24,9 +24,7 @@
     "react-native-camera-kit": "^13.0.0",
     "@react-native-async-storage/async-storage": "^1.21.0",
     "react-native-fs": "^2.20.0",
-    "react-native-audio-recorder-player": "^3.6.7",
-    "@picovoice/porcupine-react-native": "3.0.5",
-    "@picovoice/react-native-voice-processor": "1.2.3"
+    "react-native-audio-recorder-player": "^3.6.7"
   },
   "devDependencies": {
     "typescript": "^5.3.3",
Binary file not shown.
@@ -1,68 +1,14 @@
 /**
- * MessageText — renders chat text with auto-linkification:
- * - http(s)://... → tappable, opens in the browser
- * - mailto: or plain e-mail → tappable, opens the mail app
- * - phone numbers → tappable, opens the Android dialer
+ * MessageText — selectable chat text with Android auto-linkification.
  *
- * Text is selectable/copyable throughout (selectable).
+ * We use Android's dataDetectorType="all" (the system makes phone/URL/email
+ * clickable automatically) and a single <Text selectable> without nested
+ * <Text> with its own onPress. Nested Text with onPress intercepted the
+ * long-press gesture, which broke select+copy.
  */
 
 import React from 'react';
-import { Text, Linking, TextStyle, StyleProp } from 'react-native';
+import { Text, TextStyle, StyleProp } from 'react-native';
 
-// Regex combining URL | email | phone number.
-// Group order matters for the detection below.
-//
-// URL: http://... or https://... up to the first whitespace / quotation mark.
-// Email: simple standard match (not RFC-compliant, but good enough).
-// Phone: international form (+49..., 0049..., 0176...), may contain spaces
-// / hyphens / slashes / parentheses, at least 7 digits in total.
-// Avoids trivial numbers (times of day, dates).
-const LINK_REGEX = new RegExp(
-  '(https?:\\/\\/[^\\s<>"]+)' + // 1: URL
-  '|([A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,})' + // 2: Email
-  '|((?:\\+|00)\\d[\\d\\s()\\-\\/]{6,}\\d|0\\d{2,4}[\\s\\/\\-]?[\\d\\s\\-\\/]{5,}\\d)', // 3: Phone
-  'g',
-);
-
-const LINK_STYLE = { color: '#0096FF', textDecorationLine: 'underline' } as TextStyle;
-
-interface Segment {
-  text: string;
-  kind: 'text' | 'url' | 'email' | 'phone';
-}
-
-function tokenize(raw: string): Segment[] {
-  const out: Segment[] = [];
-  let lastEnd = 0;
-  LINK_REGEX.lastIndex = 0;
-  let m: RegExpExecArray | null;
-  while ((m = LINK_REGEX.exec(raw)) !== null) {
-    if (m.index > lastEnd) {
-      out.push({ text: raw.slice(lastEnd, m.index), kind: 'text' });
-    }
-    if (m[1]) out.push({ text: m[1], kind: 'url' });
-    else if (m[2]) out.push({ text: m[2], kind: 'email' });
-    else if (m[3]) out.push({ text: m[3], kind: 'phone' });
-    lastEnd = LINK_REGEX.lastIndex;
-  }
-  if (lastEnd < raw.length) out.push({ text: raw.slice(lastEnd), kind: 'text' });
-  return out;
-}
-
-function onPress(seg: Segment) {
-  try {
-    if (seg.kind === 'url') {
-      Linking.openURL(seg.text);
-    } else if (seg.kind === 'email') {
-      Linking.openURL(`mailto:${seg.text}`);
-    } else if (seg.kind === 'phone') {
-      // The Android dialer expects the tel: scheme without spaces/hyphens
-      const clean = seg.text.replace(/[\s\-\/()]/g, '');
-      Linking.openURL(`tel:${clean}`);
-    }
-  } catch {}
-}
-
 interface Props {
   text: string;
@@ -70,34 +16,9 @@ interface Props {
 }
 
 const MessageText: React.FC<Props> = ({ text, style }) => {
-  const segments = React.useMemo(() => tokenize(text), [text]);
   return (
-    <Text
-      style={style}
-      selectable
-      // dataDetectorType is Android-only and additionally makes phone/URL/email
-      // clickable via system detection — as a fallback in case our regex
-      // tokens do not match.
-      dataDetectorType="all"
-    >
-      {segments.map((seg, i) => {
-        if (seg.kind === 'text') {
-          return <Text key={i} selectable>{seg.text}</Text>;
-        }
-        return (
-          <Text
-            key={i}
-            selectable
-            style={LINK_STYLE}
-            onPress={() => onPress(seg)}
-            // Long-press should pass through to the parent for selection
-            onLongPress={undefined}
-            suppressHighlighting={false}
-          >
-            {seg.text}
-          </Text>
-        );
-      })}
-    </Text>
+    <Text style={style} selectable dataDetectorType="all">
+      {text}
+    </Text>
   );
 };
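For reference, the removed tokenizer is a straightforward regex split. This Python sketch re-implements it with the same pattern and group order (transcribed from the deleted TypeScript with the JS string escaping removed, not a new design):

```python
import re

# Pattern transcribed from the deleted LINK_REGEX.
LINK_RE = re.compile(
    r'(https?://[^\s<>"]+)'                                             # 1: URL
    r'|([A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,})'                # 2: email
    r'|((?:\+|00)\d[\d\s()\-/]{6,}\d|0\d{2,4}[\s/\-]?[\d\s\-/]{5,}\d)'  # 3: phone
)
KINDS = ('url', 'email', 'phone')


def tokenize(raw):
    """Split text into ('text' | 'url' | 'email' | 'phone', substring)
    pairs, like the removed tokenize() did."""
    out, last = [], 0
    for m in LINK_RE.finditer(raw):
        if m.start() > last:
            out.append(('text', raw[last:m.start()]))
        # exactly one alternation group matches, so lastindex names the kind
        out.append((KINDS[m.lastindex - 1], m.group(m.lastindex)))
        last = m.end()
    if last < len(raw):
        out.append(('text', raw[last:]))
    return out


segments = tokenize("Mail foo@bar.de or visit https://example.com")
```

The phone alternative requires at least 7 digits overall, which is how the original avoided matching bare times and dates.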
@@ -44,7 +44,6 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
|
|||||||
const [meterDb, setMeterDb] = useState(-160);
|
const [meterDb, setMeterDb] = useState(-160);
|
||||||
const pulseAnim = useRef(new Animated.Value(1)).current;
|
const pulseAnim = useRef(new Animated.Value(1)).current;
|
||||||
const durationTimer = useRef<ReturnType<typeof setInterval> | null>(null);
|
const durationTimer = useRef<ReturnType<typeof setInterval> | null>(null);
|
||||||
const isLongPress = useRef(false);
|
|
||||||
|
|
||||||
// Puls-Animation starten/stoppen
|
// Puls-Animation starten/stoppen
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
@@ -117,31 +116,10 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
|
|||||||
if (disabled || isRecording) return;
|
if (disabled || isRecording) return;
|
||||||
const started = await audioService.startRecording(true); // autoStop = true
|
const started = await audioService.startRecording(true); // autoStop = true
|
||||||
if (started) {
|
if (started) {
|
||||||
isLongPress.current = false;
|
|
||||||
setIsRecording(true);
|
setIsRecording(true);
|
||||||
}
|
}
|
||||||
}, [disabled, isRecording]);
|
}, [disabled, isRecording]);
|
||||||
|
|
||||||
- // Push-to-Talk: press and hold
- const handlePressIn = async () => {
-   if (disabled || isRecording) return;
-   isLongPress.current = true;
-   const started = await audioService.startRecording(false); // no autoStop
-   if (started) {
-     setIsRecording(true);
-   }
- };
-
- const handlePressOut = async () => {
-   if (!isRecording || !isLongPress.current) return;
-   isLongPress.current = false;
-   setIsRecording(false);
-   const result = await audioService.stopRecording();
-   if (result && result.durationMs > 300) {
-     onRecordingComplete(result);
-   }
- };
-
  // Tap-to-Talk: a single tap starts recording with auto-stop.
  // Guard against double-taps during the async start/stop.
  const tapBusy = useRef(false);
@@ -162,7 +140,6 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
      // Start recording with auto-stop
      const started = await audioService.startRecording(true);
      if (started) {
-       isLongPress.current = false;
        setIsRecording(true);
      }
    }
@@ -201,10 +178,6 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
          isRecording && styles.buttonOuterRecording,
          { transform: [{ scale: pulseAnim }] },
        ]}
-       onStartShouldSetResponder={() => true}
-       onResponderGrant={handlePressIn}
-       onResponderRelease={handlePressOut}
-       onResponderTerminate={handlePressOut}
      >
        <TouchableOpacity
          activeOpacity={0.8}
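The surviving tap-to-talk path guards against double-taps with a busy flag around the asynchronous start/stop. The same pattern can be sketched outside React Native; the `Recorder` shape and all names below are illustrative, not the app's actual `audioService` API:

```typescript
// Minimal sketch of the tap-to-talk re-entrancy guard, assuming a
// simplified async recorder. Names are illustrative, not the app's API.
type Recorder = {
  start: () => Promise<boolean>;
  stop: () => Promise<{ durationMs: number } | null>;
};

function makeTapToggle(
  recorder: Recorder,
  onDone: (r: { durationMs: number }) => void,
) {
  let busy = false;      // guards against taps while start/stop is in flight
  let recording = false;
  return async function tap(): Promise<void> {
    if (busy) return;    // double-tap: ignore, a transition is pending
    busy = true;
    try {
      if (!recording) {
        recording = await recorder.start();
      } else {
        recording = false;
        const result = await recorder.stop();
        // Discard ultra-short accidental recordings, as the diff does
        if (result && result.durationMs > 300) onDone(result);
      }
    } finally {
      busy = false;
    }
  };
}
```

Because `busy` is set synchronously before the first `await`, a second tap that arrives while the recorder is still starting is dropped instead of racing the first one.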
@@ -19,12 +19,19 @@ import {
   ScrollView,
   Modal,
   ToastAndroid,
+  AppState,
 } from 'react-native';
 import AsyncStorage from '@react-native-async-storage/async-storage';
 import RNFS from 'react-native-fs';
 import rvs, { RVSMessage, ConnectionState } from '../services/rvs';
 import audioService from '../services/audio';
 import wakeWordService from '../services/wakeword';
+import phoneCallService from '../services/phoneCall';
+import { playWakeReadySound } from '../services/wakeReadySound';
+import {
+  acquireBackgroundAudio,
+  releaseBackgroundAudio,
+} from '../services/backgroundAudio';
 import updateService from '../services/updater';
 import VoiceButton from '../components/VoiceButton';
 import FileUpload, { FileData } from '../components/FileUpload';
@@ -54,6 +61,10 @@ interface ChatMessage {
   messageId?: string;
   /** Local path to the cached TTS audio file (file://...) */
   audioPath?: string;
+  /** Correlation ID for voice messages, mirrored back with the STT result
+   * so we replace EXACTLY the right placeholder bubble, even when several
+   * recordings are open in parallel. */
+  audioRequestId?: string;
 }

 // --- Constants ---
@@ -136,9 +147,10 @@ const ChatScreen: React.FC = () => {
     return `msg_${Date.now()}_${messageIdCounter.current}`;
   };

-  // Reload TTS settings on mount + on screen focus (so the settings toggle takes effect immediately)
+  // Reload TTS + GPS settings on mount + every 2s (so the settings toggle
+  // takes effect immediately, without a context or event system)
   useEffect(() => {
-    const loadTtsSettings = async () => {
+    const loadSettings = async () => {
       const enabled = await AsyncStorage.getItem('aria_tts_enabled');
       setTtsDeviceEnabled(enabled !== 'false'); // default true
       const muted = await AsyncStorage.getItem('aria_tts_muted');
@@ -146,10 +158,11 @@ const ChatScreen: React.FC = () => {
       const voice = await AsyncStorage.getItem('aria_xtts_voice');
       localXttsVoiceRef.current = voice || '';
       ttsSpeedRef.current = await loadTtsSpeed();
+      const gps = await AsyncStorage.getItem('aria_gps_enabled');
+      setGpsEnabled(gps === 'true');
     };
-    loadTtsSettings();
-    // Poll every 2s to pick up settings changes (simple solution without a context)
-    const interval = setInterval(loadTtsSettings, 2000);
+    loadSettings();
+    const interval = setInterval(loadSettings, 2000);
     return () => clearInterval(interval);
   }, []);

@@ -159,6 +172,49 @@ const ChatScreen: React.FC = () => {
     const unsub = wakeWordService.onStateChange((s) => {
       setWakeWordState(s);
       setWakeWordActive(s !== 'off');
+      // Couple conversation focus to the wake-word state: while we are in
+      // an active dialog, Spotify should stay paused (across render pauses
+      // and between answers). As soon as we fall back to 'armed' or 'off',
+      // Spotify may play again.
+      if (s === 'conversing') audioService.acquireConversationFocus();
+      else audioService.releaseConversationFocus();
+      // Foreground-service slot 'wake': as long as the ear is active at all
+      // (armed or conversing), keep the app process alive in the background
+      // so mic listening + recording keep running.
+      if (s !== 'off') acquireBackgroundAudio('wake').catch(() => {});
+      else releaseBackgroundAudio('wake').catch(() => {});
+    });
+    return () => unsub();
+  }, []);
+
+  // Call detection: pause TTS when the phone rings
+  useEffect(() => {
+    phoneCallService.start().catch(err =>
+      console.warn('[Chat] phoneCall.start fehlgeschlagen', err));
+    return () => { phoneCallService.stop().catch(() => {}); };
+  }, []);
+
+  // App resume: short wake-word cooldown. Background→foreground switches
+  // often produce audio-level spikes (AudioFocus switch, AudioTrack
+  // re-route) that openWakeWord would otherwise misread as a wake word.
+  useEffect(() => {
+    let lastState: string = AppState.currentState;
+    const sub = AppState.addEventListener('change', (next) => {
+      if (lastState !== 'active' && next === 'active') {
+        wakeWordService.setResumeCooldown(1500);
+      }
+      lastState = next;
+    });
+    return () => sub.remove();
+  }, []);
+
+  // Couple recording state to background-service slot 'rec', so the mic
+  // may keep recording in the background (otherwise Android kills the app
+  // process and the recording aborts).
+  useEffect(() => {
+    const unsub = audioService.onStateChange((s) => {
+      if (s === 'recording') acquireBackgroundAudio('rec').catch(() => {});
+      else releaseBackgroundAudio('rec').catch(() => {});
     });
     return () => unsub();
   }, []);
@@ -269,6 +325,8 @@ const ChatScreen: React.FC = () => {

     if (message.type === 'chat') {
       const sender = (message.payload.sender as string) || '';
+      const dbgText = ((message.payload.text as string) || '').slice(0, 60);
+      console.log('[Chat] chat-event sender=%s text=%s', sender || '(none)', dbgText);

       // STT result: write the transcribed text into the voice bubble.
       // IMPORTANT: match only the FIRST still-unresolved recording; otherwise
@@ -276,17 +334,42 @@ const ChatScreen: React.FC = () => {
       // both get the same text (bug: the second answer overwrites the first).
       if (sender === 'stt') {
         const sttText = (message.payload.text as string) || '';
-        if (sttText) {
-          setMessages(prev => {
-            const idx = prev.findIndex(m =>
-              m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')
-            );
-            if (idx < 0) return prev;
-            const next = prev.slice();
-            next[idx] = { ...next[idx], text: `\uD83C\uDFA4 ${sttText}` };
-            return next;
-          });
+        const sttAudioReqId = (message.payload.audioRequestId as string) || '';
+        if (!sttText) {
+          return;
         }
+        setMessages(prev => {
+          const newText = `\uD83C\uDFA4 ${sttText}`;
+          // Primary: match via audioRequestId (unique per recording).
+          // That way nothing gets mixed up when two audios were sent in
+          // quick succession and their STT results overlap.
+          if (sttAudioReqId) {
+            const idxById = prev.findIndex(m => m.audioRequestId === sttAudioReqId);
+            if (idxById >= 0) {
+              const next = prev.slice();
+              next[idxById] = { ...next[idxById], text: newText };
+              return next;
+            }
+          }
+          // Fallback: old bridge version without audioRequestId: match by
+          // substring, taking the FIRST still-unresolved placeholder.
+          const idx = prev.findIndex(m =>
+            m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')
+          );
+          if (idx >= 0) {
+            const next = prev.slice();
+            next[idx] = { ...next[idx], text: newText };
+            return next;
+          }
+          // Last fallback: no placeholder at all → insert a new bubble
+          return capMessages([...prev, {
+            id: nextId(),
+            sender: 'user',
+            text: newText,
+            timestamp: message.timestamp,
+            attachments: [{ type: 'audio', name: 'Sprachaufnahme' }],
+          }]);
+        });
         return;
       }

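The replacement logic resolves STT results in three stages: an exact match on `audioRequestId`, a substring match on the first unresolved placeholder (for older bridge versions), and finally a fresh bubble. That ordering can be sketched as a pure function; the message shape is simplified and the helper name is illustrative:

```typescript
// Sketch of the three-stage STT placeholder resolution from the diff,
// as a pure function over a simplified message shape.
interface Msg {
  id: string;
  sender: string;
  text: string;
  audioRequestId?: string;
}

const PLACEHOLDER = 'Spracheingabe wird verarbeitet';

function resolveSttResult(msgs: Msg[], text: string, reqId?: string): Msg[] {
  const newText = `\u{1F3A4} ${text}`;
  // 1) Exact correlation via audioRequestId (unique per recording)
  if (reqId) {
    const byId = msgs.findIndex(m => m.audioRequestId === reqId);
    if (byId >= 0) {
      return msgs.map((m, i) => (i === byId ? { ...m, text: newText } : m));
    }
  }
  // 2) Fallback for older bridges: first still-unresolved placeholder
  const byText = msgs.findIndex(
    m => m.sender === 'user' && m.text.includes(PLACEHOLDER),
  );
  if (byText >= 0) {
    return msgs.map((m, i) => (i === byText ? { ...m, text: newText } : m));
  }
  // 3) No placeholder at all: append a fresh bubble
  return [...msgs, { id: `msg_${msgs.length}`, sender: 'user', text: newText }];
}
```

The ID match is what prevents the bug named in the diff comment: with two recordings in flight, each STT result lands in its own bubble instead of both overwriting the first placeholder.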
@@ -448,7 +531,14 @@ const ChatScreen: React.FC = () => {
       // Conversation window: the user has X seconds to start talking, otherwise the conversation ends
       const windowMs = await loadConvWindowMs();
       const started = await audioService.startRecording(true, windowMs);
-      if (!started) {
+      if (started) {
+        // Only NOW signal that the mic is really open; before this point it
+        // was still in its init phase. That way the user knows exactly when
+        // they can start talking. The "ready" sound (ding-dong) can be
+        // turned off under Settings → Wake-Word.
+        ToastAndroid.show('🎤 Mikro offen — sprich jetzt', ToastAndroid.SHORT);
+        playWakeReadySound().catch(() => {});
+      } else {
         // Microphone not available, wait for the next attempt
         wakeWordService.resume();
       }
@@ -459,13 +549,17 @@ const ChatScreen: React.FC = () => {
       const result = await audioService.stopRecording();
       if (result && result.durationMs > 500) {
         // The user spoke within the window → send the voice message
+        // Barge-in: cancel any running ARIA activity if there is one.
+        const wasInterrupted = interruptAriaIfBusy();
         const location = await getCurrentLocation();
+        const audioRequestId = `audio_${Date.now()}_${Math.floor(Math.random() * 100000)}`;
         const userMsg: ChatMessage = {
           id: nextId(),
           sender: 'user',
           text: '🎙 Spracheingabe wird verarbeitet...',
           timestamp: Date.now(),
           attachments: [{ type: 'audio', name: 'Sprachaufnahme' }],
+          audioRequestId,
         };
         setMessages(prev => capMessages([...prev, userMsg]));
         rvs.send('audio', {
@@ -474,8 +568,11 @@ const ChatScreen: React.FC = () => {
           mimeType: result.mimeType,
           voice: localXttsVoiceRef.current,
           speed: ttsSpeedRef.current,
+          interrupted: wasInterrupted,
+          audioRequestId,
           ...(location && { location }),
         });
+        scheduleStaleAudioCleanup(audioRequestId, result.durationMs);
         // resume() is triggered by onPlaybackFinished after ARIA's answer.
       } else {
         // No speech within the window → end the conversation (the ear turns off or
@@ -486,9 +583,47 @@ const ChatScreen: React.FC = () => {
       }
     });

+    // Barge-in via wake word: the user says "Computer" while ARIA speaks.
+    // The wake-word service started listening in parallel at TTS start
+    // (with an AcousticEchoCanceler so ARIA's own voice doesn't trigger it).
+    const unsubBarge = wakeWordService.onBargeIn(async () => {
+      console.log('[Chat] Barge-In via Wake-Word — TTS abbrechen + neue Aufnahme');
+      audioService.haltAllPlayback('barge-in via wake-word');
+      setAgentActivity({ activity: 'idle', tool: '' });
+      rvs.send('cancel_request' as any, {});
+      // Short pause so the halt takes effect, then start a new recording
+      await new Promise(r => setTimeout(r, 150));
+      const windowMs = await loadConvWindowMs();
+      const started = await audioService.startRecording(true, windowMs);
+      if (started) {
+        ToastAndroid.show('🎤 Mikro offen — sprich jetzt', ToastAndroid.SHORT);
+        playWakeReadySound().catch(() => {});
+      }
+    });

+    // TTS lifecycle: while ARIA is speaking and a wake word is available,
+    // listen in parallel; the user can say "Computer" instead of tapping
+    // manually. PLUS: hold the foreground-service slot 'tts' so Android
+    // doesn't kill the app process while the app is in the background.
+    const unsubTtsStart = audioService.onPlaybackStarted(() => {
+      acquireBackgroundAudio('tts').catch(() => {});
+      if (wakeWordService.isConversing() && wakeWordService.hasWakeWord()) {
+        wakeWordService.startBargeListening().catch(() => {});
+      }
+    });
+    const unsubTtsEnd = audioService.onPlaybackFinished(() => {
+      releaseBackgroundAudio('tts').catch(() => {});
+      // Before the next recording: stop barge-listening so the
+      // AudioRecorder can grab the mic.
+      wakeWordService.stopBargeListening().catch(() => {});
+    });

     return () => {
       unsubWake();
       unsubSilence();
+      unsubBarge();
+      unsubTtsStart();
+      unsubTtsEnd();
     };
   }, [wakeWordActive]);

@@ -563,6 +698,29 @@ const ChatScreen: React.FC = () => {

   // --- Send message ---

+  // Clean up "processing" placeholders that never got an STT result (empty
+  // recording, wake-word echo, STT error, etc). The timeout scales with the
+  // recording length: Whisper on the gamebox takes roughly real-time/5,
+  // plus bridge round-trip + network. Formula: 60s buffer + 1x recording
+  // duration. A 5-minute recording waits 6 minutes, a 5-second one 65s.
+  // Safe enough that slow STT runs aren't cleaned up by accident.
+  const scheduleStaleAudioCleanup = useCallback((audioRequestId: string, recordingMs: number) => {
+    const timeoutMs = 60000 + recordingMs;
+    setTimeout(() => {
+      setMessages(prev => {
+        const idx = prev.findIndex(m =>
+          m.audioRequestId === audioRequestId &&
+          m.text.includes('Spracheingabe wird verarbeitet')
+        );
+        if (idx < 0) return prev;
+        console.log('[Chat] Sprachnachricht ohne STT-Result nach %dms entfernt: %s',
+          timeoutMs, audioRequestId);
+        ToastAndroid.show('Sprachnachricht nicht erkannt — entfernt', ToastAndroid.SHORT);
+        return prev.filter((_, i) => i !== idx);
+      });
+    }, timeoutMs);
+  }, []);

   const sendTextMessage = useCallback(async () => {
     const text = inputText.trim();

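The cleanup comment pins the timeout to a 60s buffer plus one times the recording duration, and a bubble counts as stale only while it still shows the processing placeholder for that exact request. Both rules as small helpers, with illustrative names, purely as a sketch of the logic above:

```typescript
// Stale-STT cleanup rules from the diff, factored into two helpers.
interface Bubble {
  audioRequestId?: string;
  text: string;
}

// Timeout: 60s fixed buffer + 1x recording duration.
function staleAudioTimeoutMs(recordingMs: number): number {
  return 60_000 + recordingMs;
}

// A bubble is stale only if it belongs to the given request AND still
// shows the processing placeholder when the timer fires.
function isStale(b: Bubble, audioRequestId: string): boolean {
  return (
    b.audioRequestId === audioRequestId &&
    b.text.includes('Spracheingabe wird verarbeitet')
  );
}
```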
@@ -576,6 +734,8 @@ const ChatScreen: React.FC = () => {

     setInputText('');

+    // Barge-in: cancel any running ARIA activity if there is one.
+    const wasInterrupted = interruptAriaIfBusy();
     const location = await getCurrentLocation();

     const userMsg: ChatMessage = {
@@ -586,16 +746,17 @@ const ChatScreen: React.FC = () => {
     };
     setMessages(prev => capMessages([...prev, userMsg]));

-    console.log('[Chat] sende mit voice=%s speed=%s',
-      localXttsVoiceRef.current || '(default)', ttsSpeedRef.current);
+    console.log('[Chat] sende mit voice=%s speed=%s interrupted=%s',
+      localXttsVoiceRef.current || '(default)', ttsSpeedRef.current, wasInterrupted);
     // Send to RVS, with the device-local voice (the bridge uses it for the answer)
     rvs.send('chat', {
       text,
       voice: localXttsVoiceRef.current,
       speed: ttsSpeedRef.current,
+      interrupted: wasInterrupted,
       ...(location && { location }),
     });
-  }, [inputText, getCurrentLocation, pendingAttachments, sendPendingAttachments]);
+  }, [inputText, getCurrentLocation, pendingAttachments, sendPendingAttachments, interruptAriaIfBusy]);

   // Cancel a request: remove the local indicator immediately, the bridge triggers doctor --fix
   const cancelRequest = useCallback(() => {
@@ -603,15 +764,37 @@ const ChatScreen: React.FC = () => {
     rvs.send('cancel_request' as any, {});
   }, []);

+  // Barge-in: if the user records a new voice message while ARIA is still
+  // working or speaking, cancel the old activity immediately: silence the
+  // TTS, abort the aria-core run via cancel_request. That way you can say
+  // "oh forget it, do X instead" like in a real conversation.
+  const interruptAriaIfBusy = useCallback(() => {
+    const speaking = audioService.isPlayingAudio();
+    const thinking = agentActivity.activity !== 'idle';
+    if (!speaking && !thinking) return false;
+    console.log('[Chat] Barge-In: speaking=%s thinking=%s — interrupting ARIA',
+      speaking, thinking);
+    if (speaking) audioService.haltAllPlayback('user spricht (barge-in)');
+    if (thinking) {
+      setAgentActivity({ activity: 'idle', tool: '' });
+      rvs.send('cancel_request' as any, {});
+    }
+    return true;
+  }, [agentActivity]);

   // Voice recording finished
   const handleVoiceRecording = useCallback(async (result: RecordingResult) => {
+    // Barge-in: cancel any running ARIA activity if active.
+    const wasInterrupted = interruptAriaIfBusy();
     const location = await getCurrentLocation();
+    const audioRequestId = `audio_${Date.now()}_${Math.floor(Math.random() * 100000)}`;

     const userMsg: ChatMessage = {
       id: nextId(),
       sender: 'user',
       text: '🎙 Spracheingabe wird verarbeitet...',
       timestamp: Date.now(),
+      audioRequestId,
     };
     setMessages(prev => capMessages([...prev, userMsg]));

@@ -619,9 +802,25 @@ const ChatScreen: React.FC = () => {
       base64: result.base64,
       durationMs: result.durationMs,
       mimeType: result.mimeType,
+      voice: localXttsVoiceRef.current,
+      speed: ttsSpeedRef.current,
+      interrupted: wasInterrupted,
+      audioRequestId,
       ...(location && { location }),
     });
-  }, [getCurrentLocation]);
+    scheduleStaleAudioCleanup(audioRequestId, result.durationMs);

+    // Manual mic stop during a wake-word conversation: the user explicitly
+    // pressed the button → they don't want the automatic multi-turn mode,
+    // but to go back to passive wake-word listening after ARIA's answer.
+    // VAD auto-stop (the wake-word path) runs through the silence callback
+    // and ends with resume(); the manual stop here is the "I'm done" button.
+    if (wakeWordService.isConversing()) {
+      console.log('[Chat] Manueller Stop in Konversation → endConversation, zurueck zu armed');
+      await wakeWordService.endConversation();
+    }
+  }, [getCurrentLocation, interruptAriaIfBusy, scheduleStaleAudioCleanup]);

   // File selected → add to the pending list
   const handleFileSelected = useCallback(async (file: FileData) => {
@@ -17,6 +17,7 @@ import {
   Platform,
   ToastAndroid,
   ActivityIndicator,
+  Modal,
 } from 'react-native';
 import AsyncStorage from '@react-native-async-storage/async-storage';
 import RNFS from 'react-native-fs';
@@ -35,15 +36,28 @@ import {
   CONV_WINDOW_MIN_SEC,
   CONV_WINDOW_MAX_SEC,
   CONV_WINDOW_STORAGE_KEY,
+  MAX_RECORDING_DEFAULT_SEC,
+  MAX_RECORDING_MIN_SEC,
+  MAX_RECORDING_MAX_SEC,
+  MAX_RECORDING_STORAGE_KEY,
+  VAD_SILENCE_DB_DEFAULT,
+  VAD_SILENCE_DB_MIN,
+  VAD_SILENCE_DB_MAX,
+  VAD_SILENCE_DB_OVERRIDE_KEY,
   TTS_SPEED_DEFAULT,
   TTS_SPEED_MIN,
   TTS_SPEED_MAX,
   TTS_SPEED_STORAGE_KEY,
 } from '../services/audio';
+import {
+  isWakeReadySoundEnabled,
+  setWakeReadySoundEnabled,
+  playWakeReadySound,
+} from '../services/wakeReadySound';
 import wakeWordService, {
-  BUILTIN_KEYWORDS,
+  WAKE_KEYWORDS,
+  KEYWORD_LABELS,
   DEFAULT_KEYWORD,
-  WAKE_ACCESS_KEY_STORAGE,
   WAKE_KEYWORD_STORAGE,
 } from '../services/wakeword';
 import ModeSelector from '../components/ModeSelector';
@@ -72,6 +86,18 @@ interface EventEntry {

 type LogTab = 'live' | 'events';

+// Settings sub-screens. Order as shown in the main menu.
+const SETTINGS_SECTIONS = [
+  { id: 'connection', icon: '🔌', label: 'Verbindung', desc: 'Server, Token, Status, Verbindungslog' },
+  { id: 'general', icon: '⚙️', label: 'Allgemein', desc: 'Betriebsmodus, GPS-Standort' },
+  { id: 'voice_input', icon: '🎙️', label: 'Spracheingabe', desc: 'Stille-Toleranz, Aufnahmedauer' },
+  { id: 'wake_word', icon: '👂', label: 'Wake-Word', desc: 'Wake-Word-Auswahl' },
+  { id: 'voice_output', icon: '🔊', label: 'Sprachausgabe', desc: 'Stimmen, Pre-Roll, Geschwindigkeit' },
+  { id: 'storage', icon: '📁', label: 'Speicher', desc: 'Anhang-Speicherort, Auto-Download' },
+  { id: 'protocol', icon: '📜', label: 'Protokoll', desc: 'Privatsphaere, Backup' },
+  { id: 'about', icon: 'ℹ️', label: 'Ueber', desc: 'App-Version, Update' },
+] as const;

 // Container colors for live logs
 const SOURCE_COLORS: Record<string, string> = {
   'aria-core': '#4A9EFF', // blue
@@ -102,17 +128,24 @@ const SettingsScreen: React.FC = () => {
   const [ttsPrerollSec, setTtsPrerollSec] = useState<number>(TTS_PREROLL_DEFAULT_SEC);
   const [vadSilenceSec, setVadSilenceSec] = useState<number>(VAD_SILENCE_DEFAULT_SEC);
   const [convWindowSec, setConvWindowSec] = useState<number>(CONV_WINDOW_DEFAULT_SEC);
+  const [maxRecordingSec, setMaxRecordingSec] = useState<number>(MAX_RECORDING_DEFAULT_SEC);
+  // null = automatic (adaptive baseline), otherwise a manual dB override
+  const [vadSilenceDb, setVadSilenceDb] = useState<number | null>(null);
+  const [showVadInfo, setShowVadInfo] = useState(false);
   const [ttsSpeed, setTtsSpeed] = useState<number>(TTS_SPEED_DEFAULT);
-  const [wakeAccessKey, setWakeAccessKey] = useState<string>('');
-  const [wakeAccessKeyVisible, setWakeAccessKeyVisible] = useState(false);
   const [wakeKeyword, setWakeKeyword] = useState<string>(DEFAULT_KEYWORD);
   const [wakeStatus, setWakeStatus] = useState<string>('');
+  const [wakeReadySound, setWakeReadySound] = useState<boolean>(true);
   const [editingPath, setEditingPath] = useState(false);
   const [xttsVoice, setXttsVoice] = useState('');
   const [loadingVoice, setLoadingVoice] = useState<string | null>(null);
   const [availableVoices, setAvailableVoices] = useState<Array<{name: string, size: number}>>([]);
   const [voiceCloneVisible, setVoiceCloneVisible] = useState(false);
   const [tempPath, setTempPath] = useState('');
+  // Sub-screen navigation: null = main menu, otherwise one of the section
+  // IDs. This keeps all shared state in the same component closure, so no
+  // react-navigation stack setup is needed.
+  const [currentSection, setCurrentSection] = useState<string | null>(null);

   let logIdCounter = 0;

@@ -134,6 +167,9 @@ const SettingsScreen: React.FC = () => {
   AsyncStorage.getItem('aria_tts_enabled').then(saved => {
     if (saved !== null) setTtsEnabled(saved === 'true');
   });
+  AsyncStorage.getItem('aria_gps_enabled').then(saved => {
+    if (saved !== null) setGpsEnabled(saved === 'true');
+  });
   AsyncStorage.getItem(TTS_PREROLL_STORAGE_KEY).then(saved => {
     if (saved != null) {
       const n = parseFloat(saved);
@@ -158,18 +194,32 @@ const SettingsScreen: React.FC = () => {
       }
     }
   });
+  AsyncStorage.getItem(MAX_RECORDING_STORAGE_KEY).then(saved => {
+    if (saved != null) {
+      const n = parseFloat(saved);
+      if (isFinite(n) && n >= MAX_RECORDING_MIN_SEC && n <= MAX_RECORDING_MAX_SEC) {
+        setMaxRecordingSec(n);
+      }
+    }
+  });
+  AsyncStorage.getItem(VAD_SILENCE_DB_OVERRIDE_KEY).then(saved => {
+    if (saved != null && saved !== '') {
+      const n = parseFloat(saved);
+      if (isFinite(n) && n >= VAD_SILENCE_DB_MIN && n <= VAD_SILENCE_DB_MAX) {
+        setVadSilenceDb(n);
+      }
+    }
+  });
   AsyncStorage.getItem(TTS_SPEED_STORAGE_KEY).then(saved => {
     if (saved != null) {
       const n = parseFloat(saved);
       if (isFinite(n) && n >= TTS_SPEED_MIN && n <= TTS_SPEED_MAX) setTtsSpeed(n);
     }
   });
-  AsyncStorage.getItem(WAKE_ACCESS_KEY_STORAGE).then(saved => {
-    if (saved) setWakeAccessKey(saved);
-  });
   AsyncStorage.getItem(WAKE_KEYWORD_STORAGE).then(saved => {
-    if (saved) setWakeKeyword(saved);
+    if (saved && (WAKE_KEYWORDS as readonly string[]).includes(saved)) setWakeKeyword(saved);
   });
+  isWakeReadySoundEnabled().then(setWakeReadySound);
   AsyncStorage.getItem('aria_xtts_voice').then(saved => {
     if (saved) setXttsVoice(saved);
   });
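Each of the AsyncStorage loads repeats the same parse-and-clamp validation: accept the persisted string only when it parses to a finite number inside the allowed range. Factored out as a helper, purely as a sketch (the diff keeps this inline per key, and the name is illustrative):

```typescript
// Parse a persisted numeric setting, accepting it only when finite and
// within [min, max]; otherwise fall back to the given default. Mirrors
// the validation repeated per AsyncStorage key in the diff above.
function parseBoundedSetting(
  saved: string | null,
  min: number,
  max: number,
  fallback: number,
): number {
  if (saved == null || saved === '') return fallback;
  const n = parseFloat(saved);
  return isFinite(n) && n >= min && n <= max ? n : fallback;
}
```

Rejecting out-of-range values instead of clamping them matters here: a stale or corrupted stored value silently reverts to the default rather than pinning the slider to an extreme.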
@@ -406,7 +456,7 @@ const SettingsScreen: React.FC = () => {

   const handleGPSToggle = useCallback((value: boolean) => {
     setGpsEnabled(value);
-    // In Produktion: Wert in AsyncStorage persistieren
+    AsyncStorage.setItem('aria_gps_enabled', String(value)).catch(() => {});
   }, []);

   // --- XTTS Voice ---
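The GPS handler above flips the UI state first and persists fire-and-forget. A dependency-free sketch of that pattern (names and the `Store` shape are ours, not part of the diff):

```typescript
// Hypothetical sketch of the fire-and-forget persistence in handleGPSToggle:
// the UI has already flipped; a failed write must never crash the handler.
type Store = { setItem(key: string, value: string): Promise<void> };

function persistToggle(store: Store, key: string, value: boolean): void {
  store.setItem(key, String(value)).catch(() => {});
}
```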
@@ -485,7 +535,39 @@ const SettingsScreen: React.FC = () => {
       />
       <ScrollView style={styles.container} contentContainerStyle={styles.content}>

+        {currentSection === null && (
+          <>
+            {SETTINGS_SECTIONS.map(s => (
+              <TouchableOpacity
+                key={s.id}
+                style={styles.menuItem}
+                onPress={() => setCurrentSection(s.id)}
+              >
+                <Text style={styles.menuItemIcon}>{s.icon}</Text>
+                <View style={styles.menuItemTextWrap}>
+                  <Text style={styles.menuItemLabel}>{s.label}</Text>
+                  <Text style={styles.menuItemDesc}>{s.desc}</Text>
+                </View>
+                <Text style={styles.menuItemChevron}>›</Text>
+              </TouchableOpacity>
+            ))}
+          </>
+        )}
+
+        {currentSection !== null && (
+          <TouchableOpacity
+            style={styles.subScreenHeader}
+            onPress={() => setCurrentSection(null)}
+          >
+            <Text style={styles.subScreenBack}>‹</Text>
+            <Text style={styles.subScreenTitle}>
+              {SETTINGS_SECTIONS.find(s => s.id === currentSection)?.label || ''}
+            </Text>
+          </TouchableOpacity>
+        )}
+
         {/* === Verbindung === */}
+        {currentSection === 'connection' && (<>
         <Text style={styles.sectionTitle}>Verbindung</Text>
         <View style={styles.card}>
           {/* Status-Anzeige */}
@@ -582,8 +664,10 @@ const SettingsScreen: React.FC = () => {
             <Text style={styles.clearButtonText}>Log l{'\u00F6'}schen</Text>
           </TouchableOpacity>
         </View>
+        </>)}

         {/* === Modus === */}
+        {currentSection === 'general' && (<>
         <Text style={styles.sectionTitle}>Betriebsmodus</Text>
         <View style={styles.card}>
           <ModeSelector currentModeId={currentMode} onModeChange={handleModeChange} />
@@ -596,7 +680,11 @@ const SettingsScreen: React.FC = () => {
             <View style={styles.toggleInfo}>
               <Text style={styles.toggleLabel}>GPS-Position mitsenden</Text>
               <Text style={styles.toggleHint}>
-                Standort wird automatisch an Nachrichten angehaengt
+                Position (lat/lon) wird mit jeder Nachricht an ARIA mitgeschickt.
+                Sie sieht's nur intern und nutzt es bei standortbezogenen Fragen
+                ("wo bin ich?", "Wetter hier?"), erwaehnt es sonst nicht.
+                Im Chat-Verlauf bleibt die Bubble unveraendert — nur ARIAs
+                Antwort kann darauf eingehen.
               </Text>
             </View>
             <Switch
@@ -607,8 +695,10 @@ const SettingsScreen: React.FC = () => {
             />
           </View>
         </View>
+        </>)}

         {/* === Spracheingabe (geraetelokal) === */}
+        {currentSection === 'voice_input' && (<>
         <Text style={styles.sectionTitle}>Spracheingabe</Text>
         <View style={styles.card}>
           <Text style={styles.toggleLabel}>Stille-Toleranz</Text>
@@ -676,46 +766,146 @@ const SettingsScreen: React.FC = () => {
               <Text style={styles.prerollButtonText}>+1</Text>
             </TouchableOpacity>
           </View>
-        </View>

-        {/* === Wake-Word (geraetelokal) === */}
-        <Text style={styles.sectionTitle}>Wake-Word</Text>
-        <View style={styles.card}>
+          <Text style={[styles.toggleLabel, {marginTop: 24}]}>Maximale Aufnahmedauer</Text>
           <Text style={styles.toggleHint}>
-            Wenn ein Picovoice-Access-Key eingetragen ist, hoert die App passiv
-            auf das gewaehlte Wake-Word — du kannst dich mit anderen unterhalten,
-            Musik laufen lassen und mit "{wakeKeyword}" eine Konversation mit
-            ARIA starten. Ohne Key oder bei Fehlschlag startet das Ohr direkt
-            eine Konversation (klassischer Modus).
+            Notbremse: nach so vielen Minuten wird die Aufnahme automatisch beendet,
+            auch wenn keine Stille erkannt wurde. Nuetzlich fuer lange Erklaerungen
+            oder Diktate. Default: {Math.round(MAX_RECORDING_DEFAULT_SEC / 60)} Min, max {Math.round(MAX_RECORDING_MAX_SEC / 60)} Min.
           </Text>
-          <Text style={[styles.toggleLabel, {marginTop: 16}]}>Picovoice Access Key</Text>
-          <View style={{flexDirection: 'row', alignItems: 'center', gap: 8, marginTop: 6}}>
-            <TextInput
-              style={[styles.input, {flex: 1}]}
-              value={wakeAccessKey}
-              onChangeText={setWakeAccessKey}
-              placeholder="kostenlos auf console.picovoice.ai"
-              placeholderTextColor="#666680"
-              secureTextEntry={!wakeAccessKeyVisible}
-              autoCapitalize="none"
-              autoCorrect={false}
-            />
+          <View style={styles.prerollRow}>
             <TouchableOpacity
-              onPress={() => setWakeAccessKeyVisible(v => !v)}
-              style={{padding: 8}}
+              style={styles.prerollButton}
+              onPress={() => {
+                const next = Math.max(MAX_RECORDING_MIN_SEC, maxRecordingSec - 60);
+                setMaxRecordingSec(next);
+                AsyncStorage.setItem(MAX_RECORDING_STORAGE_KEY, String(next));
+              }}
+              disabled={maxRecordingSec <= MAX_RECORDING_MIN_SEC}
             >
-              <Text style={{fontSize: 18}}>{wakeAccessKeyVisible ? '🙈' : '👁'}</Text>
+              <Text style={styles.prerollButtonText}>−1m</Text>
+            </TouchableOpacity>
+            <Text style={styles.prerollValue}>{Math.round(maxRecordingSec / 60)} min</Text>
+            <TouchableOpacity
+              style={styles.prerollButton}
+              onPress={() => {
+                const next = Math.min(MAX_RECORDING_MAX_SEC, maxRecordingSec + 60);
+                setMaxRecordingSec(next);
+                AsyncStorage.setItem(MAX_RECORDING_STORAGE_KEY, String(next));
+              }}
+              disabled={maxRecordingSec >= MAX_RECORDING_MAX_SEC}
+            >
+              <Text style={styles.prerollButtonText}>+1m</Text>
             </TouchableOpacity>
           </View>

+          <View style={{flexDirection: 'row', alignItems: 'center', marginTop: 24, gap: 8}}>
+            <Text style={styles.toggleLabel}>Stille-Pegel (dB)</Text>
+            <TouchableOpacity onPress={() => setShowVadInfo(true)} style={styles.infoBtn}>
+              <Text style={styles.infoBtnText}>i</Text>
+            </TouchableOpacity>
+          </View>
+          <Text style={styles.toggleHint}>
+            Welcher Mikro-Pegel als "Stille" gilt. Standard: automatisch (Baseline aus
+            den ersten 500ms). Manuell setzen wenn Auto nicht zuverlaessig greift.
+          </Text>
+          <View style={styles.prerollRow}>
+            <TouchableOpacity
+              style={styles.prerollButton}
+              onPress={() => {
+                const next = vadSilenceDb == null
+                  ? VAD_SILENCE_DB_DEFAULT - 1
+                  : Math.max(VAD_SILENCE_DB_MIN, vadSilenceDb - 1);
+                setVadSilenceDb(next);
+                AsyncStorage.setItem(VAD_SILENCE_DB_OVERRIDE_KEY, String(next));
+              }}
+            >
+              <Text style={styles.prerollButtonText}>−1</Text>
+            </TouchableOpacity>
+            <Text style={styles.prerollValue}>
+              {vadSilenceDb == null ? 'auto' : `${vadSilenceDb} dB`}
+            </Text>
+            <TouchableOpacity
+              style={styles.prerollButton}
+              onPress={() => {
+                const next = vadSilenceDb == null
+                  ? VAD_SILENCE_DB_DEFAULT + 1
+                  : Math.min(VAD_SILENCE_DB_MAX, vadSilenceDb + 1);
+                setVadSilenceDb(next);
+                AsyncStorage.setItem(VAD_SILENCE_DB_OVERRIDE_KEY, String(next));
+              }}
+            >
+              <Text style={styles.prerollButtonText}>+1</Text>
+            </TouchableOpacity>
+          </View>
+          {vadSilenceDb != null && (
+            <TouchableOpacity
+              onPress={() => {
+                setVadSilenceDb(null);
+                AsyncStorage.removeItem(VAD_SILENCE_DB_OVERRIDE_KEY);
+              }}
+              style={{alignSelf: 'center', marginTop: 8, paddingVertical: 6, paddingHorizontal: 12}}
+            >
+              <Text style={{color: '#0096FF', fontSize: 13}}>↻ Auf automatisch zuruecksetzen</Text>
+            </TouchableOpacity>
+          )}
+        </View>
+
+        <Modal
+          visible={showVadInfo}
+          transparent
+          animationType="fade"
+          onRequestClose={() => setShowVadInfo(false)}
+        >
+          <View style={styles.modalOverlay}>
+            <View style={styles.modalCard}>
+              <Text style={styles.modalTitle}>Stille-Pegel (dB)</Text>
+              <Text style={styles.modalText}>
+                Lautstaerken werden in Dezibel (dB) gemessen — negative Werte, je
+                hoeher (naeher an 0), desto lauter.{'\n\n'}
+                <Text style={{fontWeight: '700'}}>Standard:</Text> automatisch.
+                Die App misst die ersten 500ms Hintergrundpegel und setzt die
+                Stille-Schwelle auf Baseline + 6 dB. Funktioniert in den meisten
+                Umgebungen.{'\n\n'}
+                <Text style={{fontWeight: '700'}}>Manuell:</Text> Pegel unter dem
+                eingestellten Wert gilt als "Stille" → Aufnahme stoppt.{'\n\n'}
+                <Text style={{fontWeight: '700'}}>Faustregel:</Text>{'\n'}
+                • <Text style={{color: '#FFD60A'}}>−45 dB</Text> sehr empfindlich (stoppt schnell, auch bei Atmen){'\n'}
+                • <Text style={{color: '#34C759'}}>−38 dB</Text> ausgewogen (typische Bueroumgebung){'\n'}
+                • <Text style={{color: '#FF6B6B'}}>−25 dB</Text> unempfindlich (laute Umgebung, nur klare Sprache zaehlt){'\n\n'}
+                <Text style={{color: '#8888AA'}}>Niedrigere Zahl (z.B. −50) = sensibler.{'\n'}
+                Hoehere Zahl (z.B. −20) = robuster gegen Hintergrundlaerm,
+                braucht aber lautere Sprache.</Text>
+              </Text>
+              <TouchableOpacity
+                style={[styles.connectButton, {marginTop: 16, alignSelf: 'stretch'}]}
+                onPress={() => setShowVadInfo(false)}
+              >
+                <Text style={styles.connectButtonText}>OK</Text>
+              </TouchableOpacity>
+            </View>
+          </View>
+        </Modal>
+        </>)}
+
+        {/* === Wake-Word (komplett on-device, openWakeWord) === */}
+        {currentSection === 'wake_word' && (<>
+        <Text style={styles.sectionTitle}>Wake-Word</Text>
+        <View style={styles.card}>
+          <Text style={styles.toggleHint}>
+            Lokale Erkennung via openWakeWord (ONNX, on-device). Kein API-Key,
+            kein Cloud-Roundtrip — Audio verlaesst das Geraet nicht. Wenn das Ohr
+            aktiv ist, hoerst du normal mit; sagst du das Wake-Word, startet eine
+            Konversation mit ARIA.
+          </Text>

           <Text style={[styles.toggleLabel, {marginTop: 16}]}>Wake-Word</Text>
           <Text style={styles.toggleHint}>
-            Built-In: sofort verwendbar. "ARIA" als Custom-Keyword kommt spaeter
-            ueber Diagnostic-Upload.
+            Eigene Wake-Words via openWakeWord-Notebook trainierbar (gratis).
+            Custom-Upload ueber Diagnostic kommt in einer spaeteren Version.
           </Text>
           <View style={{flexDirection: 'row', flexWrap: 'wrap', gap: 6, marginTop: 8}}>
-            {BUILTIN_KEYWORDS.map(kw => (
+            {WAKE_KEYWORDS.map(kw => (
               <TouchableOpacity
                 key={kw}
                 style={[
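The dB stepper above treats `null` as "auto" and seeds the first manual step from the default. A pure, testable extraction of that logic (the standalone function is ours; the constants are the ones this diff exports):

```typescript
// Hypothetical pure version of the −1/+1 onPress handlers above.
// null = "auto"; the first manual step starts from the default threshold.
const VAD_SILENCE_DB_DEFAULT = -38;
const VAD_SILENCE_DB_MIN = -55;
const VAD_SILENCE_DB_MAX = -15;

function stepSilenceDb(current: number | null, delta: -1 | 1): number {
  if (current == null) return VAD_SILENCE_DB_DEFAULT + delta; // leave "auto"
  // clamp the step into the allowed override range
  return Math.min(VAD_SILENCE_DB_MAX, Math.max(VAD_SILENCE_DB_MIN, current + delta));
}
```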
@@ -728,7 +918,7 @@ const SettingsScreen: React.FC = () => {
                   styles.keywordChipText,
                   wakeKeyword === kw && styles.keywordChipTextActive,
                 ]}>
-                  {kw}
+                  {KEYWORD_LABELS[kw]}
                 </Text>
               </TouchableOpacity>
             ))}
@@ -740,8 +930,8 @@ const SettingsScreen: React.FC = () => {
             onPress={async () => {
               setWakeStatus('Initialisiere...');
               try {
-                const ok = await wakeWordService.configure(wakeAccessKey, wakeKeyword);
-                setWakeStatus(ok ? `✅ "${wakeKeyword}" bereit` : '❌ Fehlgeschlagen — Access Key pruefen');
+                const ok = await wakeWordService.configure(wakeKeyword);
+                setWakeStatus(ok ? `✅ "${KEYWORD_LABELS[wakeKeyword as keyof typeof KEYWORD_LABELS]}" bereit` : '❌ Init-Fehler — Logs pruefen');
               } catch (err: any) {
                 setWakeStatus('❌ ' + String(err?.message || err).slice(0, 80));
               }
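This commit also tightens loading of the persisted keyword: only values still present in `WAKE_KEYWORDS` are accepted, so a stale entry from the removed Picovoice build falls back safely. A sketch of that check — the concrete keyword ids and the fallback are our assumptions, not taken from the diff:

```typescript
// Hypothetical sketch of the allow-list check used when restoring
// WAKE_KEYWORD_STORAGE. The keyword ids below are assumed examples.
const WAKE_KEYWORDS = ['hey_jarvis', 'alexa', 'hey_mycroft', 'hey_rhasspy'] as const;
type WakeKeyword = (typeof WAKE_KEYWORDS)[number];

function sanitizeStoredKeyword(saved: string | null, fallback: WakeKeyword): WakeKeyword {
  // Unknown or empty values (e.g. from an older app version) fall back.
  if (saved && (WAKE_KEYWORDS as readonly string[]).includes(saved)) {
    return saved as WakeKeyword;
  }
  return fallback;
}
```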
@@ -754,9 +944,36 @@ const SettingsScreen: React.FC = () => {
           {!!wakeStatus && (
             <Text style={{marginTop: 8, fontSize: 12, color: '#8888AA'}}>{wakeStatus}</Text>
           )}

+          <View style={[styles.toggleRow, {marginTop: 20, borderTopWidth: 1, borderTopColor: '#1E1E2E', paddingTop: 16}]}>
+            <View style={styles.toggleInfo}>
+              <Text style={styles.toggleLabel}>Bereit-Sound abspielen</Text>
+              <Text style={styles.toggleHint}>
+                Kurzer Ding-Dong wenn das Mikro nach Wake-Word offen ist —
+                akustische Bestaetigung dass du jetzt sprechen darfst.
+              </Text>
+            </View>
+            <Switch
+              value={wakeReadySound}
+              onValueChange={async (val) => {
+                setWakeReadySound(val);
+                await setWakeReadySoundEnabled(val);
+                if (val) {
+                  // Direkt eine Vorschau abspielen damit der User weiss wie's klingt.
+                  // playWakeReadySound checked das gerade gesetzte Flag — wenn val=true,
+                  // wird abgespielt; bei false bleibt es still.
+                  setTimeout(() => playWakeReadySound().catch(() => {}), 150);
+                }
+              }}
+              trackColor={{ false: '#2A2A3E', true: '#0096FF' }}
+              thumbColor={wakeReadySound ? '#FFFFFF' : '#666680'}
+            />
+          </View>
         </View>
+        </>)}

         {/* === Sprachausgabe (geraetelokal) === */}
+        {currentSection === 'voice_output' && (<>
         <Text style={styles.sectionTitle}>Sprachausgabe</Text>
         <View style={styles.card}>
           <View style={styles.toggleRow}>
@@ -899,7 +1116,10 @@ const SettingsScreen: React.FC = () => {
           )}
         </View>

+        </>)}

         {/* === Speicher === */}
+        {currentSection === 'storage' && (<>
         <Text style={styles.sectionTitle}>Anhang-Speicher</Text>
         <View style={styles.card}>
           <View style={styles.toggleRow}>
@@ -974,7 +1194,10 @@ const SettingsScreen: React.FC = () => {
           )}
         </View>

+        </>)}

         {/* === Logs === */}
+        {currentSection === 'protocol' && (<>
         <Text style={styles.sectionTitle}>Protokoll</Text>
         <View style={styles.card}>
           {/* Tab-Umschalter */}
@@ -1053,8 +1276,10 @@ const SettingsScreen: React.FC = () => {
             <Text style={styles.clearButtonText}>Protokoll l{'\u00F6'}schen</Text>
           </TouchableOpacity>
         </View>
+        </>)}

         {/* === About === */}
+        {currentSection === 'about' && (<>
         <Text style={styles.sectionTitle}>{'\u00DC'}ber</Text>
         <View style={styles.card}>
           <Text style={styles.aboutTitle}>ARIA Cockpit</Text>
@@ -1074,6 +1299,7 @@ const SettingsScreen: React.FC = () => {
             <Text style={styles.connectButtonText}>Auf Updates pr{'\u00FC'}fen</Text>
           </TouchableOpacity>
         </View>
+        </>)}

         {/* Platz am Ende */}
         <View style={styles.bottomSpacer} />
@@ -1102,6 +1328,58 @@ const styles = StyleSheet.create({
     marginBottom: 8,
     marginLeft: 4,
   },
+  menuItem: {
+    flexDirection: 'row',
+    alignItems: 'center',
+    backgroundColor: '#1E1E2E',
+    borderRadius: 10,
+    paddingVertical: 14,
+    paddingHorizontal: 14,
+    marginBottom: 8,
+  },
+  menuItemIcon: {
+    fontSize: 22,
+    marginRight: 14,
+    width: 28,
+    textAlign: 'center',
+  },
+  menuItemTextWrap: {
+    flex: 1,
+  },
+  menuItemLabel: {
+    color: '#FFFFFF',
+    fontSize: 16,
+    fontWeight: '600',
+  },
+  menuItemDesc: {
+    color: '#8888AA',
+    fontSize: 12,
+    marginTop: 2,
+  },
+  menuItemChevron: {
+    color: '#8888AA',
+    fontSize: 24,
+    fontWeight: '300',
+    marginLeft: 8,
+  },
+  subScreenHeader: {
+    flexDirection: 'row',
+    alignItems: 'center',
+    paddingVertical: 8,
+    marginBottom: 8,
+  },
+  subScreenBack: {
+    color: '#0096FF',
+    fontSize: 32,
+    fontWeight: '300',
+    marginRight: 12,
+    lineHeight: 36,
+  },
+  subScreenTitle: {
+    color: '#FFFFFF',
+    fontSize: 20,
+    fontWeight: '700',
+  },
   card: {
     backgroundColor: '#12122A',
     borderRadius: 14,
@@ -1459,6 +1737,48 @@ const styles = StyleSheet.create({
     textAlign: 'center',
   },

+  infoBtn: {
+    width: 22,
+    height: 22,
+    borderRadius: 11,
+    borderWidth: 1.5,
+    borderColor: '#0096FF',
+    alignItems: 'center',
+    justifyContent: 'center',
+  },
+  infoBtnText: {
+    color: '#0096FF',
+    fontSize: 13,
+    fontWeight: '700',
+    fontStyle: 'italic',
+    lineHeight: 16,
+  },
+  modalOverlay: {
+    flex: 1,
+    backgroundColor: 'rgba(0,0,0,0.7)',
+    justifyContent: 'center',
+    alignItems: 'center',
+    padding: 20,
+  },
+  modalCard: {
+    backgroundColor: '#1E1E2E',
+    borderRadius: 14,
+    padding: 20,
+    maxWidth: 460,
+    width: '100%',
+  },
+  modalTitle: {
+    color: '#FFFFFF',
+    fontSize: 18,
+    fontWeight: '700',
+    marginBottom: 12,
+  },
+  modalText: {
+    color: '#E0E0F0',
+    fontSize: 14,
+    lineHeight: 20,
+  },
+
   keywordChip: {
     backgroundColor: '#1E1E2E',
     borderWidth: 1,
+230 −21
@@ -6,10 +6,11 @@
  * Nutzt react-native-audio-recorder-player fuer Aufnahme.
  */

-import { Platform, PermissionsAndroid, NativeModules } from 'react-native';
+import { Platform, PermissionsAndroid, NativeModules, ToastAndroid } from 'react-native';
 import Sound from 'react-native-sound';
 import RNFS from 'react-native-fs';
 import AsyncStorage from '@react-native-async-storage/async-storage';
+import { acquireBackgroundAudio, releaseBackgroundAudio, stopBackgroundAudio } from './backgroundAudio';
 import AudioRecorderPlayer, {
   AudioEncoderAndroidType,
   AudioSourceAndroidType,
@@ -72,11 +73,41 @@ const AUDIO_SAMPLE_RATE = 16000;
 const AUDIO_CHANNELS = 1;
 const AUDIO_ENCODING = 'audio/wav';

-// VAD (Voice Activity Detection) — Stille-Erkennung
-const VAD_SILENCE_THRESHOLD_DB = -45; // dB unter dem als "Stille" gilt
-const VAD_SPEECH_THRESHOLD_DB = -28; // dB ueber dem als "Sprache" gilt (Sprach-Gate) — hoeher = weniger Umgebungsgeraeusche
+// VAD (Voice Activity Detection) — Stille-Erkennung.
+// Fallback-Werte falls die adaptive Baseline-Messung fehlschlaegt (z.B. weil
+// das Mikro keine metering-Updates liefert). Adaptive Werte werden zur
+// Laufzeit aus den ersten BASELINE_SAMPLES gemessen und auf baseline+offset
+// gesetzt — funktioniert in lauten wie leisen Umgebungen.
+const VAD_SILENCE_FALLBACK_DB = -38; // Fallback Stille-Schwelle
+const VAD_SPEECH_FALLBACK_DB = -22;  // Fallback Sprach-Schwelle
+const VAD_SILENCE_OFFSET_DB = 6;     // Stille-Schwelle = Baseline + 6dB
+const VAD_SPEECH_OFFSET_DB = 12;     // sicheres Speech = Baseline + 12dB
+const VAD_BASELINE_SAMPLES = 5;      // 5 × 100ms = 500ms Baseline
 const VAD_SPEECH_MIN_MS = 500; // ms Sprache bevor Aufnahme zaehlt — laenger = keine Huestler/Klopfer mehr
+
+// Override fuer die Stille-Schwelle — wenn gesetzt, wird die adaptive Baseline
+// ignoriert. Nuetzlich wenn die adaptive Logik in spezifischen Umgebungen
+// nicht zuverlaessig greift. Range -55..-15 dB. Speech-Schwelle wird auf
+// override+10 dB gesetzt (Speech muss klar lauter als Stille sein).
+export const VAD_SILENCE_DB_DEFAULT = -38; // wenn User Manuell-Modus waehlt
+export const VAD_SILENCE_DB_MIN = -55; // sehr empfindlich, fast jeder Pegel ist "Sprache"
+export const VAD_SILENCE_DB_MAX = -15; // sehr unempfindlich, nur lautes Reden gilt
+export const VAD_SILENCE_DB_OVERRIDE_KEY = 'aria_vad_silence_db_override';
+
+/** Liefert den manuellen Override-Wert oder null wenn "automatisch". */
+export async function loadVadSilenceDbOverride(): Promise<number | null> {
+  try {
+    const raw = await AsyncStorage.getItem(VAD_SILENCE_DB_OVERRIDE_KEY);
+    if (raw == null || raw === '') return null;
+    const n = parseFloat(raw);
+    if (!isFinite(n)) return null;
+    if (n < VAD_SILENCE_DB_MIN || n > VAD_SILENCE_DB_MAX) return null;
+    return n;
+  } catch {
+    return null;
+  }
+}

 // VAD-Stille (in Sekunden) — wie lange Sprechpause toleriert wird, bevor
 // die Aufnahme automatisch beendet wird. Einstellbar in den App-Settings.
 export const VAD_SILENCE_DEFAULT_SEC = 2.8;
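Putting the override and the adaptive baseline together, the effective thresholds can be sketched as one pure function (the function itself is our illustration; offsets, fallbacks and the "+10 dB" manual speech gate are taken from the constants and comments in this diff):

```typescript
// Hypothetical sketch of how the silence/speech thresholds are resolved.
const VAD_SILENCE_FALLBACK_DB = -38;
const VAD_SPEECH_FALLBACK_DB = -22;
const VAD_SILENCE_OFFSET_DB = 6;
const VAD_SPEECH_OFFSET_DB = 12;

function adaptiveThresholds(
  baselineSamples: number[],        // first ~500ms of metering values (dB)
  overrideSilenceDb: number | null, // manual override from settings, or null
): { silenceDb: number; speechDb: number } {
  if (overrideSilenceDb != null) {
    // Manual mode: speech must be clearly louder than "silence" (+10 dB).
    return { silenceDb: overrideSilenceDb, speechDb: overrideSilenceDb + 10 };
  }
  if (baselineSamples.length === 0) {
    // No metering updates yet → static fallbacks.
    return { silenceDb: VAD_SILENCE_FALLBACK_DB, speechDb: VAD_SPEECH_FALLBACK_DB };
  }
  const baseline =
    baselineSamples.reduce((a, b) => a + b, 0) / baselineSamples.length;
  return {
    silenceDb: baseline + VAD_SILENCE_OFFSET_DB,
    speechDb: baseline + VAD_SPEECH_OFFSET_DB,
  };
}
```

Because the thresholds float on the measured ambient level, the same offsets work in a quiet office (low baseline) and a loud cafe (high baseline).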
@@ -138,7 +169,24 @@ async function loadVadSilenceMs(): Promise<number> {

 // Max-Dauer einer Aufnahme (Notbremse gegen Runaway-Loops). Auf 2 Minuten
 // hochgezogen damit auch laengere Erklaerungen durchgehen.
-const MAX_RECORDING_MS = 120000;
+// Default 5 Minuten — konfigurierbar in den App-Settings (1-30 Minuten).
+export const MAX_RECORDING_DEFAULT_SEC = 300;
+export const MAX_RECORDING_MIN_SEC = 60;
+export const MAX_RECORDING_MAX_SEC = 1800;
+export const MAX_RECORDING_STORAGE_KEY = 'aria_max_recording_sec';
+
+export async function loadMaxRecordingMs(): Promise<number> {
+  try {
+    const raw = await AsyncStorage.getItem(MAX_RECORDING_STORAGE_KEY);
+    if (raw != null) {
+      const n = parseFloat(raw);
+      if (isFinite(n) && n >= MAX_RECORDING_MIN_SEC && n <= MAX_RECORDING_MAX_SEC) {
+        return Math.round(n * 1000);
+      }
+    }
+  } catch {}
+  return MAX_RECORDING_DEFAULT_SEC * 1000;
+}

 // Pre-Roll: Wie lange Audio im AudioTrack-Buffer liegt bevor play() startet.
 // Einstellbar via Diagnostic/Settings (Key: aria_tts_preroll_sec).
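The seconds→milliseconds clamping in `loadMaxRecordingMs` can be isolated into a pure helper for testing (hypothetical; the real function additionally reads AsyncStorage):

```typescript
// Pure sketch of the parse/clamp/convert step in loadMaxRecordingMs above.
const MAX_RECORDING_DEFAULT_SEC = 300;
const MAX_RECORDING_MIN_SEC = 60;
const MAX_RECORDING_MAX_SEC = 1800;

function maxRecordingMsFromRaw(raw: string | null): number {
  if (raw != null) {
    const n = parseFloat(raw);
    if (isFinite(n) && n >= MAX_RECORDING_MIN_SEC && n <= MAX_RECORDING_MAX_SEC) {
      return Math.round(n * 1000);
    }
  }
  return MAX_RECORDING_DEFAULT_SEC * 1000; // fall back to 5 minutes
}
```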
@@ -191,6 +239,19 @@ class AudioService {
   private pcmBytesCollected: number = 0;
   private readonly PCM_MAX_CACHE_BYTES = 30 * 1024 * 1024; // 30MB

+  // AudioFocus wird verzoegert freigegeben — wenn ARIA eine zweite Antwort
+  // direkt hinterherschickt (oder ein neuer Stream startet), bleibt Spotify
+  // pausiert. Ohne diese Verzoegerung springt Spotify im Mikro-Sekunden-Gap
+  // zwischen zwei Streams kurz wieder an.
+  private focusReleaseTimer: ReturnType<typeof setTimeout> | null = null;
+  private readonly FOCUS_RELEASE_DELAY_MS = 800;
+
+  // Conversation-Mode: solange aktiv (Wake-Word Status 'conversing' ODER
+  // wir wissen "ARIA spricht gerade in einem Multi-Turn-Dialog"), halten wir
+  // den AudioFocus DAUERHAFT. Der per-Stream-Release wird unterdrueckt,
+  // damit Spotify nicht in Render-Pausen oder zwischen Antworten zurueckkehrt.
+  private _conversationFocusActive: boolean = false;
+
   // VAD State
   private vadEnabled: boolean = false;
   private lastSpeechTime: number = 0;
@@ -199,12 +260,80 @@ class AudioService {
   // Latch damit der Silence-Callback pro Aufnahme genau einmal feuert
   private silenceFired: boolean = false;
   private noSpeechTimer: ReturnType<typeof setTimeout> | null = null;
+  // Adaptive Schwellen — werden in den ersten 500ms aus dem Mikro-Pegel
+  // gemessen. baseline = avg dB der ersten 5 Samples, dann:
+  // silence = baseline + VAD_SILENCE_OFFSET_DB (6dB ueber ambient)
+  // speech  = baseline + VAD_SPEECH_OFFSET_DB (12dB ueber ambient = klares Reden)
+  // Funktioniert sowohl im stillen Buero als auch im lauten Cafe.
+  private vadBaselineSamples: number[] = [];
+  private vadAdaptiveSilenceDb: number = VAD_SILENCE_FALLBACK_DB;
+  private vadAdaptiveSpeechDb: number = VAD_SPEECH_FALLBACK_DB;

   constructor() {
     this.recorder = new AudioRecorderPlayer();
     this.recorder.setSubscriptionDuration(0.1); // 100ms Metering-Updates
   }

+  /** AudioFocus mit kleiner Verzoegerung freigeben — Spotify/YouTube
+   * springen sonst im Gap zwischen zwei TTS-Streams (oder wenn ARIA
+   * eine zweite Antwort direkt hinterherschickt) kurz wieder an.
+   * Im Conversation-Mode (Wake-Word conversing) wird das Release komplett
+   * unterdrueckt — der Focus bleibt fuer die ganze Konversation gehalten. */
+  private _releaseFocusDeferred(): void {
+    if (this._conversationFocusActive) {
+      this._cancelDeferredFocusRelease();
+      return;
+    }
+    this._cancelDeferredFocusRelease();
+    this.focusReleaseTimer = setTimeout(() => {
+      this.focusReleaseTimer = null;
+      if (this._conversationFocusActive) return;
+      AudioFocus?.release().catch(() => {});
+    }, this.FOCUS_RELEASE_DELAY_MS);
+  }
+
+  private _cancelDeferredFocusRelease(): void {
+    if (this.focusReleaseTimer) {
+      clearTimeout(this.focusReleaseTimer);
+      this.focusReleaseTimer = null;
+    }
+  }
+
+  /** Conversation-Mode beginnt → AudioFocus dauerhaft halten (Spotify bleibt
+   * pausiert). Idempotent: mehrfaches Aufrufen ist sicher. */
+  acquireConversationFocus(): void {
+    if (this._conversationFocusActive) return;
+    this._conversationFocusActive = true;
+    this._cancelDeferredFocusRelease();
+    console.log('[Audio] Conversation-Focus aktiv (Spotify bleibt gepaust)');
+    AudioFocus?.requestDuck().catch(() => {});
+  }
+
+  /** Conversation-Mode endet → Focus darf wieder freigegeben werden
+   * (verzoegert, damit eine direkt folgende Antwort nichts kaputtmacht). */
+  releaseConversationFocus(): void {
+    if (!this._conversationFocusActive) return;
+    this._conversationFocusActive = false;
+    console.log('[Audio] Conversation-Focus inaktiv');
+    this._releaseFocusDeferred();
+  }
+
+  /** TTS-Wiedergabe hart stoppen — z.B. wenn ein Anruf reinkommt.
+   * Released auch sofort den AudioFocus damit der Anruf-Klingelton hoerbar ist. */
+  haltAllPlayback(reason: string = ''): void {
+    console.log('[Audio] haltAllPlayback: %s', reason || '(no reason)');
+    this._conversationFocusActive = false;
+    this.stopPlayback();
+  }
+
+  /** True wenn ARIA gerade was abspielt — egal ob WAV-Queue oder PCM-Stream.
+   * Nuetzlich fuer "Barge-In": wenn der User spricht waehrend ARIA spricht,
+   * soll die ARIA-Wiedergabe abgebrochen + die neue User-Message verarbeitet
+   * werden ("ach vergiss es, mach lieber X"). */
+  isPlayingAudio(): boolean {
+    return this.isPlaying || this.pcmStreamActive;
+  }
+
   // --- Berechtigungen ---

   async requestMicrophonePermission(): Promise<boolean> {
@@ -262,6 +391,12 @@ class AudioService {
     this.recordingPath = `${RNFS.CachesDirectoryPath}/aria_recording_${Date.now()}.mp4`;

+    // Start the foreground service BEFORE the AudioRecord, otherwise Android
+    // blocks background mic access (foregroundServiceType=microphone must
+    // already be active at the moment startRecorder() runs, or the
+    // background-mic restrictions from Android 11+ kick in).
+    await acquireBackgroundAudio('rec');
+
     // Start recording with metering
     await this.recorder.startRecorder(this.recordingPath, {
       AudioEncoderAndroid: AudioEncoderAndroidType.AAC,
@@ -276,8 +411,36 @@ class AudioService {
       const db = e.currentMetering ?? -160;
       this.meterListeners.forEach(cb => cb(db));

+      // Adaptive baseline: collect the first 5 samples (~500ms), then adapt
+      // the thresholds. Ignore -160 (no metering); otherwise the baseline
+      // ends up pointlessly low.
+      if (this.vadBaselineSamples.length < VAD_BASELINE_SAMPLES) {
+        if (db > -100) {
+          this.vadBaselineSamples.push(db);
+          if (this.vadBaselineSamples.length === VAD_BASELINE_SAMPLES) {
+            // Minimum instead of mean: robust against spike samples (e.g. when
+            // the user speaks right after the wake word, or the wake-word echo
+            // is still in the mic). The minimum is the quietest moment.
+            const lowest = Math.min(...this.vadBaselineSamples);
+            const rawSilence = lowest + VAD_SILENCE_OFFSET_DB;
+            const rawSpeech = lowest + VAD_SPEECH_OFFSET_DB;
+            // Cap to a sensible range:
+            // - silence threshold not above -28dB (otherwise background noise
+            //   permanently counts as "speech" → the VAD never fires)
+            // - silence threshold not below -50dB (otherwise too strict)
+            this.vadAdaptiveSilenceDb = Math.max(-50, Math.min(rawSilence, -28));
+            this.vadAdaptiveSpeechDb = Math.max(-40, Math.min(rawSpeech, -18));
+            const msg = `VAD: ambient=${lowest.toFixed(0)}dB stille>${this.vadAdaptiveSilenceDb.toFixed(0)}dB`;
+            console.log('[Audio] %s speech>%s (raw silence=%s speech=%s)',
+              msg, this.vadAdaptiveSpeechDb.toFixed(1),
+              rawSilence.toFixed(1), rawSpeech.toFixed(1));
+            try { ToastAndroid.show(msg, ToastAndroid.SHORT); } catch {}
+          }
+        }
+      }
+
       // Speech gate: detect whether the user is actually speaking
-      if (db > VAD_SPEECH_THRESHOLD_DB) {
+      if (db > this.vadAdaptiveSpeechDb) {
         if (!this.speechDetected && this.speechStartTime === 0) {
           this.speechStartTime = Date.now();
         }
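The adaptive-threshold hunk above boils down to a small pure function. The following is an editor's sketch for illustration, not code from this repo: the two offset constants are assumptions (the real `VAD_SILENCE_OFFSET_DB` / `VAD_SPEECH_OFFSET_DB` are defined outside this diff); only the min-of-baseline idea and the clamping ranges mirror the hunk.

```typescript
// Hypothetical offsets above the ambient floor; the real constants are not
// visible in this diff.
const SILENCE_OFFSET_DB = 12;
const SPEECH_OFFSET_DB = 20;

function computeAdaptiveThresholds(
  baselineSamples: number[],
): { silenceDb: number; speechDb: number } {
  // Quietest sample = ambient noise floor (robust against speech spikes).
  const lowest = Math.min(...baselineSamples);
  const rawSilence = lowest + SILENCE_OFFSET_DB;
  const rawSpeech = lowest + SPEECH_OFFSET_DB;
  return {
    silenceDb: Math.max(-50, Math.min(rawSilence, -28)), // clamp to [-50, -28]
    speechDb: Math.max(-40, Math.min(rawSpeech, -18)),   // clamp to [-40, -18]
  };
}
```

In a very quiet room both raw values fall below the caps and the floors (-50/-40) win; in a noisy room the ceilings (-28/-18) win, so background noise can never push the silence threshold so high that the VAD stops firing.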
@@ -292,7 +455,7 @@ class AudioService {

       // VAD: detect silence (only once speech has been detected)
       if (this.vadEnabled) {
-        if (db > VAD_SILENCE_THRESHOLD_DB) {
+        if (db > this.vadAdaptiveSilenceDb) {
           this.lastSpeechTime = Date.now();
         }
       }
@@ -302,9 +465,27 @@ class AudioService {
     this.lastSpeechTime = Date.now();
     this.speechDetected = false;
     this.speechStartTime = 0;
+    // Adaptive VAD reset: the baseline is measured afresh during the first
+    // 500ms. Until then the fallback thresholds apply.
+    this.vadBaselineSamples = [];
+    this.vadAdaptiveSilenceDb = VAD_SILENCE_FALLBACK_DB;
+    this.vadAdaptiveSpeechDb = VAD_SPEECH_FALLBACK_DB;
+
+    // Manual override from the settings: if set, it overrules the adaptive
+    // baseline measurement. The user's choice wins over auto-magic.
+    const dbOverride = await loadVadSilenceDbOverride();
+    if (dbOverride != null) {
+      this.vadAdaptiveSilenceDb = dbOverride;
+      this.vadAdaptiveSpeechDb = dbOverride + 10; // speech clearly above silence
+      this.vadBaselineSamples = new Array(VAD_BASELINE_SAMPLES).fill(0); // disable baseline collection
+      const msg = `VAD: manuell stille>${dbOverride}dB`;
+      console.log('[Audio] %s', msg);
+      try { ToastAndroid.show(msg, ToastAndroid.SHORT); } catch {}
+    }
     this.setState('recording');

     // Pause other apps while recording (music, videos etc.)
+    this._cancelDeferredFocusRelease();
     AudioFocus?.requestExclusive().catch(() => {});

     // Enable VAD; silence duration comes from AsyncStorage (configurable in Settings).
@@ -328,18 +509,19 @@ class AudioService {
     };
     if (autoStop) {
       const vadSilenceMs = await loadVadSilenceMs();
+      const maxRecordingMs = await loadMaxRecordingMs();
       console.log('[Audio] startRecording: autoStop=true, VAD-Stille=%dms, MAX=%dms',
-        vadSilenceMs, MAX_RECORDING_MS);
+        vadSilenceMs, maxRecordingMs);
       this.vadTimer = setInterval(() => {
         const silenceDuration = Date.now() - this.lastSpeechTime;
         if (silenceDuration >= vadSilenceMs) {
           fireSilenceOnce(`VAD ${silenceDuration}ms Stille (Schwelle=${vadSilenceMs}ms)`);
         }
       }, 200);
-      // Emergency brake: force-stop after MAX_RECORDING_MS
-      this.maxDurationTimer = setTimeout(() => {
-        fireSilenceOnce(`Max-Dauer ${MAX_RECORDING_MS}ms`);
-      }, MAX_RECORDING_MS);
+      // Emergency brake: force-stop after maxRecordingMs
+      this.maxDurationTimer = setTimeout(() => {
+        fireSilenceOnce(`Max-Dauer ${maxRecordingMs}ms`);
+      }, maxRecordingMs);
     }

     // Conversation window: if within noSpeechTimeoutMs the user does not
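The two timers above (a 200ms silence poll plus a max-duration emergency brake) can be sketched as one pure decision function. This is an editor's illustration; the names are hypothetical and the real code uses two separate timers, but the comparisons mirror the hunk: stop on enough silence, or unconditionally once the (now settings-loaded) maximum recording length is hit.

```typescript
function shouldAutoStop(
  nowMs: number,
  lastSpeechMs: number,      // last moment the level was above the silence threshold
  recordingStartMs: number,
  vadSilenceMs: number,      // configured silence tolerance
  maxRecordingMs: number,    // hard cap, loaded per recording
): 'silence' | 'max' | null {
  // The emergency brake wins regardless of speech activity.
  if (nowMs - recordingStartMs >= maxRecordingMs) return 'max';
  // Silence-based stop: enough quiet time since the last speech sample.
  if (nowMs - lastSpeechMs >= vadSilenceMs) return 'silence';
  return null;
}
```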
@@ -387,8 +569,9 @@ class AudioService {
     await this.recorder.stopRecorder();
     this.recorder.removeRecordBackListener();

-    // Release audio focus: other apps may play again
-    AudioFocus?.release().catch(() => {});
+    // Release audio focus deferred: the TTS reply is about to arrive, and
+    // Spotify should not come up in the gap.
+    this._releaseFocusDeferred();

     const durationMs = Date.now() - this.recordingStartTime;
     const hadSpeech = this.speechDetected;
@@ -535,7 +718,9 @@ class AudioService {
         this.pcmStreamActive = false;
         return '';
       }
+      this._cancelDeferredFocusRelease();
       AudioFocus?.requestDuck().catch(() => {});
+      this._firePlaybackStarted();
     }
   }

@@ -553,11 +738,12 @@ class AudioService {
     if (isFinal) {
       if (!silent) {
         // end() now only resolves once the native writer thread is done
-        // (all samples played out); only then release the AudioFocus, so
-        // that Spotify/YouTube don't turn back up during the pre-roll tail.
+        // (all samples played out); then release the AudioFocus deferred,
+        // so Spotify/YouTube don't turn back up in the mic gap between two
+        // ARIA replies. If a new stream starts within FOCUS_RELEASE_DELAY_MS,
+        // the release is cancelled.
         try { await PcmStreamPlayer!.end(); } catch {}
-        AudioFocus?.release().catch(() => {});
+        this._releaseFocusDeferred();
       }
       this.pcmStreamActive = false;

@@ -649,6 +835,7 @@ class AudioService {

   // Callback once all audio parts have been played
   private playbackFinishedListeners: (() => void)[] = [];
+  private playbackStartedListeners: (() => void)[] = [];

   onPlaybackFinished(callback: () => void): () => void {
     this.playbackFinishedListeners.push(callback);
@@ -657,20 +844,38 @@ class AudioService {
     };
   }

+  /** Callback when ARIA's TTS playback starts: for wake-word parallel
+   * listening while ARIA speaks (barge-in by saying "Computer"). */
+  onPlaybackStarted(callback: () => void): () => void {
+    this.playbackStartedListeners.push(callback);
+    return () => {
+      this.playbackStartedListeners = this.playbackStartedListeners.filter(cb => cb !== callback);
+    };
+  }
+
+  private _firePlaybackStarted(): void {
+    this.playbackStartedListeners.forEach(cb => {
+      try { cb(); } catch (e) { console.warn('[Audio] playbackStarted listener err:', e); }
+    });
+  }
+
   /** Play the next audio item from the queue */
   private async _playNext(): Promise<void> {
     if (this.audioQueue.length === 0) {
       this.isPlaying = false;
-      // Give up audio focus → other apps back to full volume
-      AudioFocus?.release().catch(() => {});
+      // Release audio focus deferred → if another reply arrives right away,
+      // Spotify stays paused.
+      this._releaseFocusDeferred();
       // All audio parts played → notify listeners
       this.playbackFinishedListeners.forEach(cb => cb());
       return;
     }

-    // On first playback start: duck other apps
+    // On first playback start: duck other apps + inform listeners
     if (!this.isPlaying) {
+      this._cancelDeferredFocusRelease();
       AudioFocus?.requestDuck().catch(() => {});
+      this._firePlaybackStarted();
     }
     this.isPlaying = true;

@@ -734,6 +939,9 @@ class AudioService {

   /** Stop running playback + clear the queue */
   stopPlayback(): void {
+    // Also stop the foreground service, otherwise the notification sticks
+    // around when playback is aborted (phone call, cancel, barge-in).
+    stopBackgroundAudio().catch(() => {});
     this.audioQueue = [];
     this.isPlaying = false;
     if (this.currentSound) {
@@ -755,7 +963,8 @@ class AudioService {
       this.pcmBytesCollected = 0;
       this.pcmMessageId = '';
     }
-    // Release audio focus
+    // Release audio focus immediately: the user aborted explicitly
+    this._cancelDeferredFocusRelease();
     AudioFocus?.release().catch(() => {});
   }

@@ -0,0 +1,76 @@
+/**
+ * Background audio: ARIA's TTS, mic recording and wake-word listening should
+ * keep running while the app is minimized. For that we start a foreground
+ * service with foregroundServiceType=mediaPlayback|microphone, which shows a
+ * persistent notification while any audio slot is active.
+ *
+ * Several components can "hold" the service independently of each other:
+ * - 'tts'  : ARIA is speaking
+ * - 'rec'  : a recording is running
+ * - 'wake' : the wake word is listening passively (ear active)
+ *
+ * As long as at least one slot is active, the service runs. Once all slots
+ * are empty, it is stopped. The notification text adapts to the
+ * highest-priority slot (tts > rec > wake).
+ */
+
+import { NativeModules } from 'react-native';
+
+interface BackgroundAudioNative {
+  start(reason: string): Promise<boolean>;
+  stop(): Promise<boolean>;
+}
+
+const { BackgroundAudio } = NativeModules as { BackgroundAudio?: BackgroundAudioNative };
+
+type Slot = 'tts' | 'rec' | 'wake';
+
+const slots = new Set<Slot>();
+
+// Priority for the notification text, highest first.
+const PRIORITY: Slot[] = ['tts', 'rec', 'wake'];
+
+function topReason(): string {
+  for (const s of PRIORITY) {
+    if (slots.has(s)) return s;
+  }
+  return '';
+}
+
+async function applyState(): Promise<void> {
+  if (!BackgroundAudio) return;
+  if (slots.size === 0) {
+    try { await BackgroundAudio.stop(); } catch {}
+    console.log('[BackgroundAudio] Service gestoppt (keine Slots)');
+    return;
+  }
+  const reason = topReason();
+  try {
+    await BackgroundAudio.start(reason);
+    console.log('[BackgroundAudio] Service aktiv (slot=%s, slots=%s)',
+      reason, [...slots].join('+'));
+  } catch (err: any) {
+    console.warn('[BackgroundAudio] start fehlgeschlagen:', err?.message || err);
+  }
+}
+
+export async function acquireBackgroundAudio(slot: Slot): Promise<void> {
+  if (slots.has(slot)) return;
+  slots.add(slot);
+  await applyState();
+}
+
+export async function releaseBackgroundAudio(slot: Slot): Promise<void> {
+  if (!slots.has(slot)) return;
+  slots.delete(slot);
+  await applyState();
+}
+
+export function backgroundAudioActive(): boolean {
+  return slots.size > 0;
+}
+
+// --- Legacy API (tts slot only): for call sites that don't know about the
+// slot system yet. Maps onto the 'tts' slot. ---
+export const startBackgroundAudio = () => acquireBackgroundAudio('tts');
+export const stopBackgroundAudio = () => releaseBackgroundAudio('tts');
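The slot bookkeeping in the new file above reduces to one pure question: given the currently held slots, should the service run, and which reason wins the notification text? An editor's sketch of just that decision (the set mutation and native calls are left out):

```typescript
type Slot = 'tts' | 'rec' | 'wake';

// Mirrors PRIORITY in the module above: tts > rec > wake.
const PRIORITY: Slot[] = ['tts', 'rec', 'wake'];

function serviceState(slots: ReadonlySet<Slot>): { running: boolean; reason: string } {
  // No slot held → the foreground service must stop.
  if (slots.size === 0) return { running: false, reason: '' };
  // Otherwise the highest-priority held slot names the notification.
  const reason = PRIORITY.find(s => slots.has(s)) ?? '';
  return { running: true, reason };
}
```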
@@ -0,0 +1,118 @@
+/**
+ * PhoneCall service: pauses TTS playback when the phone rings or a call is
+ * active. Native binding to PhoneCallModule.kt.
+ *
+ * On "ringing" or "offhook", audioService.haltAllPlayback() is called and
+ * ARIA goes silent immediately. After hanging up, nothing happens
+ * automatically (the audio does not come back); the user would have to
+ * request the reply again manually (play button on the message).
+ *
+ * The READ_PHONE_STATE permission must be granted once by the user; if it
+ * isn't, start() fails silently and everything else works as before.
+ */
+
+import {
+  NativeEventEmitter,
+  NativeModules,
+  PermissionsAndroid,
+  Platform,
+  ToastAndroid,
+} from 'react-native';
+import audioService from './audio';
+import wakeWordService from './wakeword';
+
+interface PhoneCallNative {
+  start(): Promise<boolean>;
+  stop(): Promise<boolean>;
+}
+
+const { PhoneCall } = NativeModules as { PhoneCall?: PhoneCallNative };
+
+type PhoneState = 'idle' | 'ringing' | 'offhook';
+
+class PhoneCallService {
+  private started: boolean = false;
+  private subscription: { remove: () => void } | null = null;
+  private lastState: PhoneState = 'idle';
+
+  async start(): Promise<boolean> {
+    if (this.started || !PhoneCall) return false;
+    if (Platform.OS !== 'android') return false;
+
+    // Request the runtime permission (only needed once)
+    try {
+      const granted = await PermissionsAndroid.request(
+        PermissionsAndroid.PERMISSIONS.READ_PHONE_STATE,
+        {
+          title: 'ARIA Cockpit — Anruf-Erkennung',
+          message: 'Damit ARIA bei einem eingehenden Anruf nicht weiterredet, '
+            + 'darf die App den Anruf-Status sehen (Klingeln/Aktiv/Aufgelegt). '
+            + 'Es werden keine Anrufdaten gelesen oder gespeichert.',
+          buttonPositive: 'Erlauben',
+          buttonNegative: 'Spaeter',
+        },
+      );
+      if (granted !== PermissionsAndroid.RESULTS.GRANTED) {
+        console.warn('[PhoneCall] READ_PHONE_STATE Permission abgelehnt');
+        return false;
+      }
+    } catch (err) {
+      console.warn('[PhoneCall] Permission-Anfrage gescheitert', err);
+    }
+
+    try {
+      const ok = await PhoneCall.start();
+      if (!ok) {
+        console.warn('[PhoneCall] Native start() lieferte false (Permission?)');
+        return false;
+      }
+      const emitter = new NativeEventEmitter(NativeModules.PhoneCall as any);
+      this.subscription = emitter.addListener('PhoneCallStateChanged', (e: { state: PhoneState }) => {
+        this._onStateChanged(e.state);
+      });
+      this.started = true;
+      console.log('[PhoneCall] Listener aktiv');
+      return true;
+    } catch (err: any) {
+      console.warn('[PhoneCall] start gescheitert:', err?.message || err);
+      return false;
+    }
+  }
+
+  async stop(): Promise<void> {
+    if (!this.started || !PhoneCall) return;
+    try {
+      this.subscription?.remove();
+      this.subscription = null;
+      await PhoneCall.stop();
+    } catch {}
+    this.started = false;
+    this.lastState = 'idle';
+  }
+
+  private _onStateChanged(state: PhoneState): void {
+    if (state === this.lastState) return;
+    const prev = this.lastState;
+    console.log('[PhoneCall] State: %s → %s', prev, state);
+    this.lastState = state;
+    if (state === 'ringing' || state === 'offhook') {
+      audioService.haltAllPlayback(`Telefon-State: ${state}`);
+      // Pause wake word + recording: the telephony app holds the mic during
+      // the call, and ARIA must not listen in on the conversation.
+      wakeWordService.pauseForCall().catch(() => {});
+      ToastAndroid.show(
+        state === 'ringing' ? 'Anruf — ARIA pausiert' : 'Im Gespraech — ARIA pausiert',
+        ToastAndroid.SHORT,
+      );
+    } else if (state === 'idle' && prev !== 'idle') {
+      // Hang-up: re-enable the wake word if it was active before the call.
+      // TTS does not come back automatically (stream gone); the user can
+      // replay ARIA's last reply via the play button.
+      wakeWordService.resumeFromCall().catch(() => {});
+      ToastAndroid.show('Anruf beendet — ARIA wieder aktiv', ToastAndroid.SHORT);
+    }
+  }
+}
+
+const phoneCallService = new PhoneCallService();
+export default phoneCallService;
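The `_onStateChanged` handler above is a small state machine: duplicate events are ignored, `ringing`/`offhook` pause ARIA, and only a genuine transition back to `idle` resumes the wake word. An editor's sketch of that transition logic as a pure function (names hypothetical, side effects elided):

```typescript
type PhoneState = 'idle' | 'ringing' | 'offhook';
type Action = 'pause' | 'resume' | null;

function transition(prev: PhoneState, next: PhoneState): Action {
  if (next === prev) return null;                        // duplicate event: no-op
  if (next === 'ringing' || next === 'offhook') return 'pause'; // call activity: silence ARIA
  if (next === 'idle' && prev !== 'idle') return 'resume';      // hang-up: re-arm wake word
  return null;
}
```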
@@ -0,0 +1,71 @@
+/**
+ * Plays a short "ready" sound (airplane ding-dong) once the microphone is
+ * really open after wake-word detection. The file lives in
+ * android/app/src/main/res/raw/wake_ready_sound.mp3 and is played through
+ * Android's resource system via react-native-sound.
+ *
+ * Toggle: AsyncStorage key 'aria_wake_ready_sound_enabled' (default true).
+ */
+
+import Sound from 'react-native-sound';
+import AsyncStorage from '@react-native-async-storage/async-storage';
+
+export const WAKE_READY_SOUND_STORAGE_KEY = 'aria_wake_ready_sound_enabled';
+
+Sound.setCategory('Playback', false);
+
+let cachedSound: Sound | null = null;
+let cachedFailed = false;
+
+function getSound(): Promise<Sound | null> {
+  if (cachedFailed) return Promise.resolve(null);
+  if (cachedSound) return Promise.resolve(cachedSound);
+  return new Promise(resolve => {
+    const s = new Sound('wake_ready_sound', Sound.MAIN_BUNDLE, (err) => {
+      if (err) {
+        console.warn('[WakeReadySound] Konnte nicht geladen werden:', err);
+        cachedFailed = true;
+        resolve(null);
+        return;
+      }
+      cachedSound = s;
+      resolve(s);
+    });
+  });
+}
+
+/** True if the user has enabled the "ready" sound. Default: true. */
+export async function isWakeReadySoundEnabled(): Promise<boolean> {
+  try {
+    const raw = await AsyncStorage.getItem(WAKE_READY_SOUND_STORAGE_KEY);
+    if (raw === null) return true; // default on
+    return raw === 'true';
+  } catch {
+    return true;
+  }
+}
+
+export async function setWakeReadySoundEnabled(enabled: boolean): Promise<void> {
+  try {
+    await AsyncStorage.setItem(WAKE_READY_SOUND_STORAGE_KEY, String(enabled));
+  } catch {}
+}
+
+/** Plays the ready sound once, non-blocking. If the user has disabled it in
+ * the settings or the file cannot be loaded, simply nothing happens. */
+export async function playWakeReadySound(): Promise<void> {
+  if (!(await isWakeReadySoundEnabled())) return;
+  const s = await getSound();
+  if (!s) return;
+  try {
+    s.stop(() => {
+      s.setCurrentTime(0);
+      s.play((success) => {
+        if (!success) console.warn('[WakeReadySound] Wiedergabe fehlgeschlagen');
+      });
+    });
+  } catch (e) {
+    console.warn('[WakeReadySound] play() Exception:', e);
+  }
+}
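`isWakeReadySoundEnabled` above uses a "string-encoded boolean, default true" pattern: an unset key and a broken store both fail open. A sketch with an injected key-value store in place of AsyncStorage, so the parsing rule is visible in isolation (editor's illustration, names hypothetical):

```typescript
type Store = { getItem(key: string): Promise<string | null> };

async function readEnabled(store: Store, key: string): Promise<boolean> {
  try {
    const raw = await store.getItem(key);
    if (raw === null) return true; // never written → default on
    return raw === 'true';         // only the literal string 'true' enables
  } catch {
    return true;                   // storage broken → fail open
  }
}
```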
(+229, -113)
@@ -1,142 +1,154 @@
 /**
  * Conversation mode / wake word service
  *
+ * Wake-word engine: openWakeWord (https://github.com/dscripka/openWakeWord),
+ * fully on-device via ONNX Runtime in native Kotlin (see
+ * OpenWakeWordModule.kt + assets/openwakeword/). No API key, no cloud
+ * round-trip, not a cent in license fees.
+ *
  * Three states:
  *   off        - ear off, nothing running
- *   armed      - ear active, Porcupine listens passively for the wake word.
- *                The mic is held by Porcupine; the AudioRecorder is off.
- *   conversing - wake word triggered (or ear tap without wake word):
- *                active conversation. Porcupine pauses (frees the mic),
+ *   armed      - ear active, openWakeWord listens passively for the wake word.
+ *                The mic is held by OpenWakeWord; the AudioRecorder is off.
+ *   conversing - wake word triggered (or manual ear tap):
+ *                active conversation. OpenWakeWord pauses (frees the mic),
  *                the AudioRecorder takes over for the recording.
  *                After each ARIA reply the mic opens for X seconds
  *                (conversation window). Silence in the window → back to armed.
  *
- * Wake-word fallback: if no Picovoice access key is set, 'start' goes
- * straight to 'conversing' (classic conversation mode). 'endConversation'
- * then goes to 'off' instead of 'armed'.
+ * If the native module drops out (old app version, ONNX init error), 'start'
+ * goes straight to 'conversing' (classic direct-recording mode).
  */

+import { NativeEventEmitter, NativeModules, ToastAndroid } from 'react-native';
 import AsyncStorage from '@react-native-async-storage/async-storage';
-import { ToastAndroid } from 'react-native';
+import { acquireBackgroundAudio } from './backgroundAudio';

 type WakeWordCallback = () => void;
 type StateCallback = (state: WakeWordState) => void;

 export type WakeWordState = 'off' | 'armed' | 'conversing';

-export const WAKE_ACCESS_KEY_STORAGE = 'aria_wake_access_key';
 export const WAKE_KEYWORD_STORAGE = 'aria_wake_keyword';

-/** Built-in keywords from Picovoice: pre-trained, usable right away.
- * Custom keywords (e.g. "ARIA") need a .ppn file from the Picovoice
- * Console; will be uploadable via Diagnostic later. */
-export const BUILTIN_KEYWORDS = [
-  'jarvis',
+/** Available wake words; they correspond to the .onnx files in
+ * android/app/src/main/assets/openwakeword/. Custom keywords (own training
+ * via the openwakeword notebook) currently have to be bundled as an asset;
+ * upload via Diagnostic is phase 2. */
+export const WAKE_KEYWORDS = [
+  'hey_jarvis',
   'computer',
-  'picovoice',
-  'porcupine',
-  'bumblebee',
-  'terminator',
   'alexa',
-  'hey google',
-  'ok google',
-  'hey siri',
+  'hey_mycroft',
+  'hey_rhasspy',
 ] as const;
-export type BuiltinKeyword = typeof BUILTIN_KEYWORDS[number];
-export const DEFAULT_KEYWORD: BuiltinKeyword = 'jarvis';
+export type WakeKeyword = typeof WAKE_KEYWORDS[number];
+export const DEFAULT_KEYWORD: WakeKeyword = 'hey_jarvis';

+/** Helper mapping for the display in the UI. */
+export const KEYWORD_LABELS: Record<WakeKeyword, string> = {
+  hey_jarvis: 'Hey Jarvis',
+  computer: 'Computer',
+  alexa: 'Alexa',
+  hey_mycroft: 'Hey Mycroft',
+  hey_rhasspy: 'Hey Rhasspy',
+};
+
+// Detection tuning; can become configurable in Settings later.
+const DEFAULT_THRESHOLD = 0.5;
+const DEFAULT_PATIENCE = 2;
+const DEFAULT_DEBOUNCE_MS = 1500;
+
+interface OpenWakeWordModule {
+  init(modelName: string, threshold: number, patience: number, debounceMs: number): Promise<boolean>;
+  start(): Promise<boolean>;
+  stop(): Promise<boolean>;
+  dispose(): Promise<boolean>;
+  isAvailable(): Promise<boolean>;
+}
+
+const { OpenWakeWord } = NativeModules as { OpenWakeWord?: OpenWakeWordModule };
+
 class WakeWordService {
   private state: WakeWordState = 'off';
   private wakeCallbacks: WakeWordCallback[] = [];
   private stateCallbacks: StateCallback[] = [];
+  /** Barge-in callbacks: fire when the wake word is detected WHILE ARIA is
+   * speaking. ChatScreen reacts with a TTS stop + a new recording. */
+  private bargeCallbacks: WakeWordCallback[] = [];
+  /** True while the wake word is active in parallel with TTS. */
+  private bargeListening: boolean = false;
+  /** Call pause: the state is remembered so it can be restored after hang-up. */
+  private callPaused: boolean = false;
+  private preCallState: WakeWordState = 'off';
+  /** Cooldown after app resume: a short phase in which wake-word detections
+   * are ignored. On the switch from background to foreground there is often
+   * an audio level spike (AudioFocus switch, AudioTrack re-route) that can
+   * falsely trigger openWakeWord. */
+  private cooldownUntilMs: number = 0;

-  // Picovoice manager (lazy, since the native module is not available in every build)
-  private porcupine: any = null;
-  private accessKey: string = '';
-  private keyword: string = DEFAULT_KEYWORD;
+  private keyword: WakeKeyword = DEFAULT_KEYWORD;
+  private nativeReady: boolean = false;
   private initInProgress: Promise<boolean> | null = null;
+  private eventSub: { remove: () => void } | null = null;

-  /** Call on app start: loads settings, builds Porcupine if a key is set. */
+  /** Call on app start: loads settings, builds the native module. */
   async loadFromStorage(): Promise<void> {
     try {
-      const k = await AsyncStorage.getItem(WAKE_ACCESS_KEY_STORAGE);
       const w = await AsyncStorage.getItem(WAKE_KEYWORD_STORAGE);
-      this.accessKey = (k || '').trim();
-      this.keyword = (w || DEFAULT_KEYWORD).trim();
-      if (this.accessKey) {
-        // Pre-initialize; does not blow up if something is missing
-        await this.initPorcupine();
-      }
+      const wt = (w || DEFAULT_KEYWORD).trim() as WakeKeyword;
+      this.keyword = (WAKE_KEYWORDS as readonly string[]).includes(wt) ? wt : DEFAULT_KEYWORD;
+      await this.initNative();
     } catch (err) {
       console.warn('[WakeWord] loadFromStorage', err);
     }
   }

-  /** Settings change: new key or keyword. Re-init Porcupine. */
-  async configure(accessKey: string, keyword: string): Promise<boolean> {
-    this.accessKey = (accessKey || '').trim();
-    this.keyword = (keyword || DEFAULT_KEYWORD).trim();
-    await AsyncStorage.setItem(WAKE_ACCESS_KEY_STORAGE, this.accessKey);
-    await AsyncStorage.setItem(WAKE_KEYWORD_STORAGE, this.keyword);
+  /** Settings change: a different wake word. Re-init of the native module. */
+  async configure(keyword: string): Promise<boolean> {
+    const next: WakeKeyword = (WAKE_KEYWORDS as readonly string[]).includes(keyword)
+      ? (keyword as WakeKeyword)
+      : DEFAULT_KEYWORD;
+    this.keyword = next;
+    await AsyncStorage.setItem(WAKE_KEYWORD_STORAGE, next);

-    // Stop the running instance
+    // Stop the running instance + re-initialize
     await this.disposePorcupine();
|
await this.disposeNative();
|
||||||
if (!this.accessKey) {
|
const ok = await this.initNative();
|
||||||
console.warn('[WakeWord] configure: kein Access Key gesetzt');
|
|
||||||
return false;
|
|
||||||
}
|
|
||||||
|
|
||||||
// Neu initialisieren
|
|
||||||
const ok = await this.initPorcupine();
|
|
||||||
if (!ok) {
|
if (!ok) {
|
||||||
ToastAndroid.show(
|
ToastAndroid.show(
|
||||||
`Wake-Word "${this.keyword}" konnte nicht initialisiert werden — Logs pruefen`,
|
`Wake-Word "${KEYWORD_LABELS[next]}" konnte nicht initialisiert werden — Logs pruefen`,
|
||||||
ToastAndroid.LONG,
|
ToastAndroid.LONG,
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
return ok;
|
return ok;
|
||||||
}
|
}
|
||||||
|
|
||||||
private async initPorcupine(): Promise<boolean> {
|
private async initNative(): Promise<boolean> {
|
||||||
|
if (!OpenWakeWord) {
|
||||||
|
console.warn('[WakeWord] OpenWakeWord Native-Modul nicht verfuegbar — Direkt-Aufnahme-Fallback aktiv');
|
||||||
|
this.nativeReady = false;
|
||||||
|
return false;
|
||||||
|
}
|
||||||
if (this.initInProgress) return this.initInProgress;
|
if (this.initInProgress) return this.initInProgress;
|
||||||
this.initInProgress = (async () => {
|
this.initInProgress = (async () => {
|
||||||
try {
|
try {
|
||||||
const porcupineRN = require('@picovoice/porcupine-react-native');
|
await OpenWakeWord.init(this.keyword, DEFAULT_THRESHOLD, DEFAULT_PATIENCE, DEFAULT_DEBOUNCE_MS);
|
||||||
const { PorcupineManager, BuiltInKeywords } = porcupineRN;
|
// Subscribe nur einmal
|
||||||
// Manche Porcupine-Versionen wollen das BuiltInKeywords-Enum (Objekt
|
if (!this.eventSub) {
|
||||||
// mit keys wie JARVIS, COMPUTER, HEY_GOOGLE), andere akzeptieren
|
const emitter = new NativeEventEmitter(NativeModules.OpenWakeWord);
|
||||||
// den String direkt. Mappen mit Fallback auf String:
|
this.eventSub = emitter.addListener('WakeWordDetected', () => {
|
||||||
const enumKey = this.keyword.toUpperCase().replace(/\s+/g, '_');
|
console.log('[WakeWord] Native Detection-Event empfangen');
|
||||||
const kw = (BuiltInKeywords && BuiltInKeywords[enumKey]) || this.keyword;
|
|
||||||
console.log('[WakeWord] Porcupine init: keyword=%s (resolved=%s)',
|
|
||||||
this.keyword, typeof kw === 'string' ? kw : '[enum]');
|
|
||||||
this.porcupine = await PorcupineManager.fromBuiltInKeywords(
|
|
||||||
this.accessKey,
|
|
||||||
[kw],
|
|
||||||
(keywordIndex: number) => {
|
|
||||||
console.log('[WakeWord] Porcupine callback fired (index=%d)', keywordIndex);
|
|
||||||
this.onWakeDetected().catch(err =>
|
this.onWakeDetected().catch(err =>
|
||||||
console.warn('[WakeWord] onWakeDetected crashed:', err));
|
console.warn('[WakeWord] onWakeDetected crashed:', err));
|
||||||
},
|
});
|
||||||
// Error handler (wenn Porcupine im Background-Thread crashed,
|
}
|
||||||
// z.B. beim Audio-Engine-Konflikt mit audio-recorder-player)
|
this.nativeReady = true;
|
||||||
(error: any) => {
|
console.log('[WakeWord] Init OK (model=%s)', this.keyword);
|
||||||
console.warn('[WakeWord] Porcupine runtime error:', error?.message || error);
|
|
||||||
// Nicht in Loop crashen — state zurueck auf off damit der User
|
|
||||||
// mit dem Aufnahme-Button wieder normal arbeiten kann
|
|
||||||
this.setState('off');
|
|
||||||
this.disposePorcupine().catch(() => {});
|
|
||||||
},
|
|
||||||
);
|
|
||||||
console.log('[WakeWord] Porcupine init OK (keyword=%s, manager=%s)',
|
|
||||||
this.keyword, this.porcupine ? 'created' : 'NULL');
|
|
||||||
return true;
|
return true;
|
||||||
} catch (err: any) {
|
} catch (err: any) {
|
||||||
console.warn('[WakeWord] Porcupine init fehlgeschlagen:', err?.message || err);
|
console.warn('[WakeWord] Init fehlgeschlagen:', err?.message || err);
|
||||||
console.warn('[WakeWord] err details:', JSON.stringify({
|
this.nativeReady = false;
|
||||||
name: err?.name, code: err?.code, stack: err?.stack?.slice(0, 200),
|
|
||||||
}));
|
|
||||||
this.porcupine = null;
|
|
||||||
return false;
|
return false;
|
||||||
} finally {
|
} finally {
|
||||||
this.initInProgress = null;
|
this.initInProgress = null;
|
||||||
```diff
@@ -145,27 +157,28 @@ class WakeWordService {
     return this.initInProgress;
   }

-  private async disposePorcupine() {
-    if (this.porcupine) {
-      try { await this.porcupine.stop(); } catch {}
-      try { await this.porcupine.delete(); } catch {}
-      this.porcupine = null;
-    }
+  private async disposeNative(): Promise<void> {
+    if (!OpenWakeWord) return;
+    try { await OpenWakeWord.dispose(); } catch {}
+    this.nativeReady = false;
   }

   /** Ear button pressed — starts passive listening or a conversation directly. */
   async start(): Promise<boolean> {
     if (this.state !== 'off') return true;
-    if (this.porcupine) {
-      // Passive listening via Porcupine
+    // Bring the foreground service up BEFORE the mic access so that background
+    // listening works (Android needs foregroundServiceType=microphone to be
+    // active at the moment of AudioRecord.startRecording).
+    await acquireBackgroundAudio('wake');
+    if (this.nativeReady && OpenWakeWord) {
       try {
-        await this.porcupine.start();
-        console.log('[WakeWord] armed — warte auf Wake Word "%s"', this.keyword);
-        ToastAndroid.show(`Lausche auf "${this.keyword}"`, ToastAndroid.SHORT);
+        await OpenWakeWord.start();
+        console.log('[WakeWord] armed — warte auf "%s"', this.keyword);
+        ToastAndroid.show(`Lausche auf "${KEYWORD_LABELS[this.keyword]}"`, ToastAndroid.SHORT);
         this.setState('armed');
         return true;
       } catch (err: any) {
-        console.warn('[WakeWord] Porcupine start fehlgeschlagen — Fallback Direkt-Konversation:',
+        console.warn('[WakeWord] start fehlgeschlagen — Fallback Direkt-Aufnahme:',
           err?.message || err);
         ToastAndroid.show(
           `Wake-Word-Start failed: ${err?.message || err}`,
```
```diff
@@ -173,14 +186,13 @@ class WakeWordService {
         );
       }
     } else {
-      // No Porcupine init → inform the user explicitly
-      console.warn('[WakeWord] Porcupine nicht initialisiert — Access Key fehlt? Fallback Direkt-Aufnahme');
+      console.warn('[WakeWord] Native-Modul nicht bereit — Direkt-Aufnahme-Fallback');
       ToastAndroid.show(
         'Wake-Word nicht aktiv — direkte Aufnahme startet (Mikro hoert mit)',
         ToastAndroid.LONG,
       );
     }
-    // Fallback: directly into the conversation (mic ACTIVE, not passive)
+    // Fallback: directly into a conversation
     console.log('[WakeWord] Direkt-Aufnahme startet (kein Wake-Word)');
     this.setState('conversing');
     setTimeout(() => {
```
```diff
@@ -194,21 +206,46 @@ class WakeWordService {
   /** Switch off completely (ear off) */
   async stop(): Promise<void> {
     console.log('[WakeWord] Ohr deaktiviert');
-    if (this.porcupine) {
-      try { await this.porcupine.stop(); } catch {}
+    if (this.nativeReady && OpenWakeWord) {
+      try { await OpenWakeWord.stop(); } catch {}
     }
+    this.bargeListening = false;
     this.setState('off');
   }

+  /** Set a cooldown — ignore all wake-word detections for the next ms.
+   * Called on app resume because AppState transitions produce audio spikes
+   * that openWakeWord misinterprets as triggers. */
+  setResumeCooldown(ms: number = 1500): void {
+    this.cooldownUntilMs = Date.now() + ms;
+    console.log('[WakeWord] Cooldown aktiv fuer %dms', ms);
+  }
+
-  /** Wake word triggered: pause Porcupine, start the conversation. */
+  /** Wake word triggered: pause the native module, start the conversation. */
   private async onWakeDetected(): Promise<void> {
-    console.log('[WakeWord] Wake-Word "%s" erkannt!', this.keyword);
-    ToastAndroid.show(`Wake-Word "${this.keyword}" erkannt — sprich jetzt`, ToastAndroid.SHORT);
-    if (this.porcupine) {
-      try { await this.porcupine.stop(); } catch {}
+    const now = Date.now();
+    if (now < this.cooldownUntilMs) {
+      const left = this.cooldownUntilMs - now;
+      console.log('[WakeWord] Trigger ignoriert (Cooldown noch %dms aktiv — wahrscheinlich App-Resume-Spike)', left);
+      return;
+    }
+    console.log('[WakeWord] Wake-Word "%s" erkannt! (state=%s, barge=%s)',
+      this.keyword, this.state, this.bargeListening);
+    if (this.nativeReady && OpenWakeWord) {
+      try { await OpenWakeWord.stop(); } catch {}
+    }
+    this.bargeListening = false;
+    // If we are already in 'conversing' and the trigger came during ARIA's TTS
+    // (barge-in via wake word), fire a separate callback so that ChatScreen
+    // can cancel the TTS + start a new recording. Otherwise proceed normally.
+    if (this.state === 'conversing') {
+      this.bargeCallbacks.forEach(cb => {
+        try { cb(); } catch (e) { console.warn('[WakeWord] barge cb err:', e); }
+      });
+      // No new setState — we stay in 'conversing'.
+      return;
     }
     this.setState('conversing');
-    // wait briefly so the microphone is free
     setTimeout(() => {
       if (this.state === 'conversing') {
         this.wakeCallbacks.forEach(cb => cb());
```
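The resume-cooldown gate in the hunk above is just a timestamp comparison. A minimal standalone sketch of the same idea (Python, hypothetical names — the real implementation is the TypeScript service above):

```python
import time

class CooldownGate:
    """Ignore trigger events for a short window after a resume spike."""

    def __init__(self) -> None:
        self.cooldown_until = 0.0  # monotonic deadline in seconds

    def set_cooldown(self, ms: int = 1500) -> None:
        # Mirrors setResumeCooldown(): remember an "ignore until" deadline.
        self.cooldown_until = time.monotonic() + ms / 1000.0

    def should_handle(self) -> bool:
        # Mirrors the guard at the top of onWakeDetected(): a detection
        # inside the window is treated as a false trigger and dropped.
        return time.monotonic() >= self.cooldown_until

gate = CooldownGate()
assert gate.should_handle()       # no cooldown yet -> handle the trigger
gate.set_cooldown(1500)
assert not gate.should_handle()   # inside the window -> ignore
```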
```diff
@@ -216,17 +253,83 @@ class WakeWordService {
     }, 200);
   }

+  /** Let wake-word detection listen IN PARALLEL with TTS playback — the user
+   * can say "Computer" while ARIA is still talking; the AcousticEchoCanceler
+   * in the native module keeps ARIA's own voice from triggering.
+   * Precondition: the AudioRecorder must be free (recording off). If the
+   * AudioRecorder is currently running, it has priority — no wake word. */
+  async startBargeListening(): Promise<void> {
+    if (!this.nativeReady || !OpenWakeWord) return;
+    if (this.state !== 'conversing') return;
+    if (this.bargeListening) return;
+    try {
+      await OpenWakeWord.start();
+      this.bargeListening = true;
+      console.log('[WakeWord] Barge-Listening aktiv (parallel zu TTS)');
+    } catch (err) {
+      console.warn('[WakeWord] Barge-Listening start fehlgeschlagen:', err);
+    }
+  }
+
+  /** Barge listening off again — e.g. when the AudioRecorder needs the mic
+   * for the next recording. */
+  async stopBargeListening(): Promise<void> {
+    if (!this.bargeListening) return;
+    if (this.nativeReady && OpenWakeWord) {
+      try { await OpenWakeWord.stop(); } catch {}
+    }
+    this.bargeListening = false;
+    console.log('[WakeWord] Barge-Listening aus');
+  }
+
+  /** On an incoming call: stop wake word + recording, remember the pre-call
+   * state. The telephony app occupies the mic during the call, and ARIA
+   * should not listen in on running phone calls either. */
+  async pauseForCall(): Promise<void> {
+    if (this.callPaused) return;
+    this.preCallState = this.state;
+    if (this.state === 'off') {
+      this.callPaused = true; // remember that we were paused
+      return;
+    }
+    this.callPaused = true;
+    if (this.nativeReady && OpenWakeWord) {
+      try { await OpenWakeWord.stop(); } catch {}
+    }
+    this.bargeListening = false;
+    console.log('[WakeWord] Anruf — Wake-Word pausiert (war: %s)', this.preCallState);
+  }
+
+  /** After hang-up: restore the pre-call state. An active conversation goes
+   * back to armed (the user should not jump into a half-finished dialog). */
+  async resumeFromCall(): Promise<void> {
+    if (!this.callPaused) return;
+    const restoreTo = this.preCallState;
+    this.callPaused = false;
+    this.preCallState = 'off';
+    console.log('[WakeWord] Anruf zu Ende — restore state=%s', restoreTo);
+    if (restoreTo === 'off') return;
+    // The active conversation was probably cancelled by haltAllPlayback anyway;
+    // safest to degrade to armed.
+    if (restoreTo === 'conversing') this.setState('armed');
+    if (this.nativeReady && OpenWakeWord) {
+      try { await OpenWakeWord.start(); } catch (err) {
+        console.warn('[WakeWord] Restore-Start fehlgeschlagen:', err);
+      }
+    }
+  }
+
   /** End the conversation — the user said nothing inside the window.
-   * With wake word: back to 'armed' (Porcupine on again).
+   * With wake word: back to 'armed' (listener on again).
    * Without: back to 'off'.
    */
   async endConversation(): Promise<void> {
     if (this.state !== 'conversing') return;
-    if (this.porcupine && this.accessKey) {
+    if (this.nativeReady && OpenWakeWord) {
       try {
-        await this.porcupine.start();
+        await OpenWakeWord.start();
         console.log('[WakeWord] Konversation zu Ende — zurueck zu armed');
-        ToastAndroid.show(`Lausche wieder auf "${this.keyword}"`, ToastAndroid.SHORT);
+        ToastAndroid.show(`Lausche wieder auf "${KEYWORD_LABELS[this.keyword]}"`, ToastAndroid.SHORT);
         this.setState('armed');
         return;
       } catch (err) {
```
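The pause/restore rule in `resumeFromCall` above ('off' stays off, an active conversation degrades to 'armed' instead of resuming mid-dialog) is small enough to state as a pure function — a sketch with hypothetical names:

```python
def restore_after_call(pre_call_state: str) -> str:
    """Target state after hang-up, given the state before the call."""
    if pre_call_state == "off":
        return "off"      # the ear was off; the call changes nothing
    return "armed"        # 'armed' stays armed, 'conversing' degrades

assert restore_after_call("off") == "off"
assert restore_after_call("armed") == "armed"
assert restore_after_call("conversing") == "armed"
```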
```diff
@@ -259,10 +362,10 @@ class WakeWordService {
     }
   }

   hasWakeWord(): boolean {
-    return !!this.porcupine;
+    return this.nativeReady;
   }

-  getKeyword(): string {
+  getKeyword(): WakeKeyword {
     return this.keyword;
   }

@@ -275,6 +378,19 @@ class WakeWordService {
     };
   }

+  /** Subscribe to barge-in events: wake word detected while ARIA is still
+   * speaking. ChatScreen should then cancel the TTS + start a new recording. */
+  onBargeIn(callback: WakeWordCallback): () => void {
+    this.bargeCallbacks.push(callback);
+    return () => {
+      this.bargeCallbacks = this.bargeCallbacks.filter(cb => cb !== callback);
+    };
+  }
+
+  isBargeListening(): boolean {
+    return this.bargeListening;
+  }
+
   onStateChange(callback: StateCallback): () => void {
     this.stateCallbacks.push(callback);
     return () => {
```
```diff
@@ -54,13 +54,6 @@ Fuer Web-Anfragen: **WebFetch** oder **Bash mit curl**. Niemals sagen "ich habe

 ## Voice

-| Voice | Model | When |
-|-------|-------|------|
-| **Ramona** (female) | `de_DE-ramona-low` | Everyday use, replies, conversations (default) |
-| **Thorsten** (male, deep) | `de_DE-thorsten-high` | Epic moments, alarms, special events |
-
-**Thorsten speaks on:**
-- Build successfully deployed
-- Ticket solved / task completed
-- Critical alarm (server down, security warning)
-- When Stefan says "So soll es sein"
+TTS runs through F5-TTS (voice cloning, gaming PC). Stefan can clone his own
+voices from audio samples (Diagnostic → Voices → Clone voice) and select them
+in the app + Diagnostic.
```
```diff
@@ -80,10 +80,8 @@ Wenn ein Tool nicht klappt, probiere die Alternative. Niemals sagen "ich habe ke

 ## Voice

-| Voice | Model | When |
-|-------|-------|------|
-| **Ramona** (female) | `de_DE-ramona-low` | Everyday use, replies, conversations (default) |
-| **Thorsten** (male, deep) | `de_DE-thorsten-high` | Epic moments, alarms, special events |
+TTS runs through F5-TTS on the Gamebox (voice cloning). Stefan can clone his
+own voices from audio samples and select them in the app/Diagnostic.

 ## Memory

@@ -147,4 +145,4 @@ Danach den Eintrag in `memory/MEMORY.md` (Index) verlinken.
 ### Network
 - **aria-net:** internal Docker network (proxy, aria-core)
 - **RVS:** rendezvous server in the data center — relay for the Android app
-- **Bridge:** Voice Bridge (Whisper STT + Piper TTS) — shares its network with aria-core
+- **Bridge:** Voice Bridge (orchestrates STT/TTS via the Gamebox bridges) — shares its network with aria-core
```
+206 -102

```diff
@@ -1,17 +1,13 @@
 """
 ARIA Voice Bridge — main module.

-Connects the Android app (via RVS) with ARIA-Core and provides local speech
-input (wake word + Whisper STT) and speech output (Piper TTS).
+Connects the Android app (via RVS) with ARIA-Core. Speech input runs through
+the whisper-bridge (Gamebox, faster-whisper on CUDA), speech output through
+the f5tts-bridge (voice cloning, sentence-by-sentence PCM streaming).

 Message flow:
   App → RVS → Bridge → aria-core
-  aria-core → Bridge → RVS → App
-             → speaker (TTS)
-
-Voices:
-- Ramona (de_DE-ramona-low) — everyday use, conversations
-- Thorsten (de_DE-thorsten-high) — epic moments, alarms
+  aria-core → Bridge → f5tts-bridge → PCM → RVS → App
 """

 from __future__ import annotations
```
```diff
@@ -493,7 +489,7 @@ class ARIABridge:
         self.current_mode = self._load_persisted_mode()
         self.running = False

-        # Components (TTS: always XTTS remote, Piper was removed)
+        # Components (TTS: F5-TTS remote on the Gamebox, local TTS was removed)
         self.tts_enabled = True
         self.xtts_voice = ""
         self._f5tts_config: dict = {}
@@ -551,6 +547,15 @@ class ARIABridge:
         # Affects the timeout for stt_request — while "loading" we wait longer,
        # because the model may still have to download for ~1-2 min on the first request.
         self._remote_stt_ready: bool = False
+        # Pending files: when the app sends an image + text at the same time,
+        # two separate RVS events arrive ('file' and 'chat') — we buffer the
+        # files briefly and merge them with the following chat text into a
+        # single request to aria-core. Otherwise ARIA answers twice (once
+        # "waiting for instructions" on the file, once on the chat text).
+        # List of tuples: (file_path, name, file_type, size_kb, width, height)
+        self._pending_files: list[tuple[str, str, str, int, int, int]] = []
+        self._pending_files_flush_task: Optional[asyncio.Task] = None
+        self._PENDING_FILES_WINDOW_SEC: float = 0.8

     def initialize(self) -> None:
         """Initializes all components.
```
```diff
@@ -907,18 +912,13 @@ class ARIABridge:
             logger.info("[core] TTS unterdrueckt (Modus: %s)", self.current_mode.config.name)
             return

-        # Determine the voice: per-request app override > global default voice
+        # Determine the voice: app override (set by the last chat event) >
+        # global default voice. The override is NOT consumed per reply —
+        # otherwise a multi-turn reply from ARIA (tool use + final answer)
+        # would use the old default voice again from the second TTS call on.
+        # The override stays valid until the next chat event, where it is
+        # either replaced or cleared.
         xtts_voice = self._next_voice_override or getattr(self, 'xtts_voice', '')
-        # Consume the override (applies only to exactly this next reply)
-        if self._next_voice_override:
-            logger.info("[core] Nutze Voice-Override: %s", self._next_voice_override)
-            self._next_voice_override = None
-
-        # Take the speed from the app override as well (fallback 1.0)
         xtts_speed = self._next_speed_override or 1.0
-        if self._next_speed_override:
-            logger.info("[core] Nutze Speed-Override: %.2fx", self._next_speed_override)
-            self._next_speed_override = None

         tts_text = tts_text_preview or text
         if not tts_text:
```
```diff
@@ -1024,6 +1024,76 @@ class ARIABridge:
         except Exception as e:
             logger.debug("[session] Diagnostic nicht erreichbar (%s) — nutze '%s'", e, self._session_key)

+    def _build_core_text(self, text: str, interrupted: bool = False,
+                         location: Optional[dict] = None) -> str:
+        """Builds the text for aria-core with all relevant hints (barge-in,
+        GPS position). Hints go in square brackets; the actual user text
+        follows unchanged."""
+        parts: list[str] = []
+        if interrupted:
+            parts.append(
+                "[Hinweis: Stefan hat dich gerade unterbrochen waehrend du noch "
+                "gesprochen oder gearbeitet hast. Folgendes ist eine Korrektur, "
+                "Ergaenzung oder ein Themenwechsel zu deiner letzten Antwort.]"
+            )
+        if location and isinstance(location, dict):
+            lat = location.get("lat")
+            lon = location.get("lon") or location.get("lng")
+            if lat is not None and lon is not None:
+                parts.append(
+                    f"[Stefans aktuelle GPS-Position: {float(lat):.6f}, {float(lon):.6f}. "
+                    f"Nutze die nur wenn die Frage sich auf seinen Standort bezieht. "
+                    f"Erwaehne sie nicht von dir aus, ausser er fragt explizit danach.]"
+                )
+        if parts:
+            return " ".join(parts) + " " + text
+        return text
+
+    def _build_pending_files_message(self, user_text: str) -> str:
+        """Builds an instruction for aria-core from the buffered files plus the
+        optional user text. Empty user_text → the 'waiting for instructions'
+        variant."""
+        parts: list[str] = []
+        for fp, name, ftype, kb, w, h in self._pending_files:
+            dim = f" {w}x{h}px" if (w and h) else ""
+            kind = "Bild" if ftype.startswith("image/") else "Datei"
+            parts.append(f"- {kind}: {name}{dim} ({ftype}, {kb}KB) liegt unter {fp}")
+        files_summary = "\n".join(parts)
+        n = len(self._pending_files)
+        anhang = "Anhang" if n == 1 else "Anhaenge"
+        if user_text:
+            return (f"Stefan hat dir {n} {anhang} geschickt:\n{files_summary}\n\n"
+                    f"Er sagt dazu: \"{user_text}\"")
+        return (f"Stefan hat dir {n} {anhang} geschickt:\n{files_summary}\n\n"
+                f"Warte auf seine Anweisung was du damit tun sollst.")
+
+    async def _flush_pending_files_after(self, delay: float) -> None:
+        """If no chat text has arrived after `delay` seconds: send the files to
+        aria-core on their own (the 'waiting for instructions' variant)."""
+        try:
+            await asyncio.sleep(delay)
+        except asyncio.CancelledError:
+            return
+        if not self._pending_files:
+            return
+        text = self._build_pending_files_message("")
+        self._pending_files = []
+        self._pending_files_flush_task = None
+        await self.send_to_core(text, source="app-file")
+
+    async def _flush_pending_files_with_text(self, user_text: str) -> bool:
+        """If a chat text comes in while files are buffered: merge files + text
+        into a single aria-core message.
+        Returns True if a merge happened (the caller must not send again)."""
+        if not self._pending_files:
+            return False
+        if self._pending_files_flush_task and not self._pending_files_flush_task.done():
+            self._pending_files_flush_task.cancel()
+            self._pending_files_flush_task = None
+        text = self._build_pending_files_message(user_text)
+        self._pending_files = []
+        await self.send_to_core(text, source="app-file+chat")
+        return True
+
     async def send_to_core(self, text: str, source: str = "bridge") -> None:
         """Sends text to aria-core (OpenClaw chat.send protocol)."""
         if self.ws_core is None:
```
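The pending-files logic added above is a debounce-and-merge: a `file` event opens a short window; a `chat` event inside the window cancels the delayed flush and merges, otherwise the files are flushed alone. A self-contained toy version of that pattern (hypothetical names, a 50 ms window instead of the 0.8 s used in the bridge):

```python
import asyncio
from typing import Optional

class FileMerger:
    def __init__(self, window: float = 0.05) -> None:
        self.window = window
        self.pending: list[str] = []
        self.sent: list[str] = []
        self._task: Optional[asyncio.Task] = None

    def on_file(self, name: str) -> None:
        # 'file' event: buffer the file and arm the delayed flush.
        self.pending.append(name)
        if self._task is None:
            self._task = asyncio.ensure_future(self._flush_later())

    async def _flush_later(self) -> None:
        # No chat text arrived inside the window: flush the files alone.
        await asyncio.sleep(self.window)
        if self.pending:
            self.sent.append(f"files:{','.join(self.pending)} (no text)")
            self.pending = []
        self._task = None

    def on_chat(self, text: str) -> None:
        # 'chat' event: merge with buffered files, cancel the pending flush.
        if self.pending:
            if self._task is not None:
                self._task.cancel()
                self._task = None
            self.sent.append(f"files:{','.join(self.pending)} + text:{text}")
            self.pending = []
        else:
            self.sent.append(f"text:{text}")

async def demo() -> list[str]:
    m = FileMerger()
    m.on_file("img.jpg")
    m.on_chat("was ist das?")  # arrives inside the window -> one merged message
    m.on_file("doc.pdf")
    await asyncio.sleep(0.2)   # window elapses -> files flushed on their own
    return m.sent

out = asyncio.run(demo())
assert out == ["files:img.jpg + text:was ist das?", "files:doc.pdf (no text)"]
```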
```diff
@@ -1169,21 +1239,38 @@ class ARIABridge:
         if sender in ("aria", "stt"):
             return
         text = payload.get("text", "")
-        # Remember the voice override for the next ARIA reply
-        voice_override = payload.get("voice", "")
-        if voice_override:
-            self._next_voice_override = voice_override
-            logger.info("[rvs] Voice-Override fuer naechste Antwort: %s", voice_override)
+        # Set the voice override for subsequent messages — valid until the next
+        # chat event. Empty string "" = explicitly the default voice (clear the
+        # override). Field not sent = leave the previous override untouched
+        # (e.g. when cancel_request or another service bypasses the app).
+        if "voice" in payload:
+            voice_override = payload.get("voice", "") or ""
+            self._next_voice_override = voice_override or None
+            logger.info("[rvs] Voice fuer Antworten: %s",
+                        self._next_voice_override or "(Default)")
         # Speed override (TTS playback speed, per device)
-        try:
-            speed = float(payload.get("speed", 0) or 0)
-            if 0.1 <= speed <= 5.0:
-                self._next_speed_override = speed
-        except (TypeError, ValueError):
-            pass
+        if "speed" in payload:
+            try:
+                speed = float(payload.get("speed", 0) or 0)
+                self._next_speed_override = speed if 0.1 <= speed <= 5.0 else None
+            except (TypeError, ValueError):
+                self._next_speed_override = None
         if text:
-            logger.info("[rvs] App-Chat: '%s'", text[:80])
-            await self.send_to_core(text, source="app")
+            interrupted = bool(payload.get("interrupted", False))
+            location = payload.get("location") or None
+            # If files are currently buffered (image + text sent at the same
+            # time), merge them into a single request instead of two separate
+            # send_to_core calls.
+            merged = await self._flush_pending_files_with_text(text)
+            if merged:
+                logger.info("[rvs] App-Chat (mit Anhaengen): '%s'", text[:80])
+            else:
+                core_text = self._build_core_text(text, interrupted, location)
+                logger.info("[rvs] App-Chat%s%s: '%s'",
+                            " [BARGE-IN]" if interrupted else "",
+                            " [GPS]" if location else "",
+                            text[:80])
+                await self.send_to_core(core_text, source="app" + (" [barge-in]" if interrupted else ""))
             return

         if msg_type == "cancel_request":
```
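The `"voice" in payload` check above gives the field three states: absent (keep the previous override), empty string (reset to the default voice), and non-empty (set a new override). A sketch of that contract in isolation (hypothetical helper name):

```python
from typing import Optional

def apply_voice_field(payload: dict, current: Optional[str]) -> Optional[str]:
    """New override after a chat event; None means 'use the default voice'."""
    if "voice" not in payload:
        return current                    # field not sent: leave it untouched
    return payload.get("voice") or None   # "" clears, non-empty sets

assert apply_voice_field({}, "stefan") == "stefan"          # absent -> unchanged
assert apply_voice_field({"voice": ""}, "stefan") is None   # "" -> default voice
assert apply_voice_field({"voice": "ramona"}, None) == "ramona"
```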
@@ -1342,70 +1429,54 @@ class ARIABridge:
             await self.ws_core.send(raw_message)

         elif msg_type == "file":
-            # Datei von der App → als Text-Nachricht an aria-core
+            # Datei von der App: speichern + zu Pending-Queue hinzufuegen.
+            # Wird mit dem nachfolgenden chat-Event (innerhalb PENDING_FILES_WINDOW)
+            # zu einer einzigen aria-core-Anfrage gemerged. Sonst antwortet ARIA
+            # zweimal: einmal "warte auf Anweisung" beim file, einmal auf den Chat.
             file_name = payload.get("name", "unbekannt")
             file_type = payload.get("type", "")
             file_b64 = payload.get("base64", "")
-            file_size = payload.get("size", 0)
             width = payload.get("width", 0)
             height = payload.get("height", 0)
             logger.info("[rvs] Datei empfangen: %s (%s, %dKB)",
                         file_name, file_type, len(file_b64) // 1365 if file_b64 else 0)

-            # Shared Volume: /shared/ ist in Bridge UND aria-core gemountet
             SHARED_DIR = "/shared/uploads"
             os.makedirs(SHARED_DIR, exist_ok=True)

-            if file_b64 and file_type.startswith("image/"):
-                # Bild in Shared Volume speichern
+            if not file_b64:
+                text = f"Stefan hat eine Datei gesendet ({file_name}, {file_type}) aber die Daten sind leer angekommen."
+                await self.send_to_core(text, source="app-file")
+                return
+
+            if file_type.startswith("image/"):
                 ext = ".jpg" if "jpeg" in file_type or "jpg" in file_type else ".png"
                 safe_name = f"img_{int(asyncio.get_event_loop().time())}_{file_name.replace('/', '_')}"
                 file_path = os.path.join(SHARED_DIR, safe_name if safe_name.endswith(ext) else safe_name + ext)
-                with open(file_path, "wb") as f:
-                    f.write(base64.b64decode(file_b64))
-                size_kb = len(file_b64) // 1365
-                logger.info("[rvs] Bild gespeichert: %s (%dKB)", file_path, size_kb)
-                # ERST an aria-core senden (wichtigster Schritt)
-                text = (f"Stefan hat dir ein Bild geschickt: {file_name}"
-                        f"{f' ({width}x{height}px)' if width else ''}"
-                        f", {size_kb}KB."
-                        f" Das Bild liegt unter: {file_path}"
-                        f" Warte auf Stefans Anweisung was du damit tun sollst.")
-                await self.send_to_core(text, source="app-file")
-                # Dann App informieren (optional, darf nicht crashen)
-                try:
-                    await self._send_to_rvs({
-                        "type": "file_saved",
-                        "payload": {"name": file_name, "serverPath": file_path, "mimeType": file_type},
-                        "timestamp": int(asyncio.get_event_loop().time() * 1000),
-                    })
-                except Exception as e:
-                    logger.warning("[rvs] file_saved konnte nicht an App gesendet werden: %s", e)
-            elif file_b64:
-                # Andere Datei in Shared Volume speichern
+            else:
                 safe_name = f"file_{int(asyncio.get_event_loop().time())}_{file_name.replace('/', '_')}"
                 file_path = os.path.join(SHARED_DIR, safe_name)
-                with open(file_path, "wb") as f:
-                    f.write(base64.b64decode(file_b64))
-                size_kb = len(file_b64) // 1365
-                logger.info("[rvs] Datei gespeichert: %s (%dKB)", file_path, size_kb)
-                # ERST an aria-core senden
-                text = (f"Stefan hat dir eine Datei geschickt: {file_name}"
-                        f" ({file_type}, {size_kb}KB)."
-                        f" Die Datei liegt unter: {file_path}"
-                        f" Warte auf Stefans Anweisung was du damit tun sollst.")
-                await self.send_to_core(text, source="app-file")
-                try:
-                    await self._send_to_rvs({
-                        "type": "file_saved",
-                        "payload": {"name": file_name, "serverPath": file_path, "mimeType": file_type},
-                        "timestamp": int(asyncio.get_event_loop().time() * 1000),
-                    })
-                except Exception as e:
-                    logger.warning("[rvs] file_saved konnte nicht an App gesendet werden: %s", e)
-            else:
-                text = f"Stefan hat eine Datei gesendet ({file_name}, {file_type}) aber die Daten sind leer angekommen."
-                await self.send_to_core(text, source="app-file")
+            with open(file_path, "wb") as f:
+                f.write(base64.b64decode(file_b64))
+            size_kb = len(file_b64) // 1365
+            logger.info("[rvs] Datei gespeichert: %s (%dKB)", file_path, size_kb)
+            # In Pending-Queue + Flush-Timer (anti-spam Buffering)
+            self._pending_files.append((file_path, file_name, file_type, size_kb, int(width or 0), int(height or 0)))
+            if self._pending_files_flush_task and not self._pending_files_flush_task.done():
+                self._pending_files_flush_task.cancel()
+            self._pending_files_flush_task = asyncio.create_task(
+                self._flush_pending_files_after(self._PENDING_FILES_WINDOW_SEC)
+            )
+            try:
+                await self._send_to_rvs({
+                    "type": "file_saved",
+                    "payload": {"name": file_name, "serverPath": file_path, "mimeType": file_type},
+                    "timestamp": int(asyncio.get_event_loop().time() * 1000),
+                })
+            except Exception as e:
+                logger.warning("[rvs] file_saved konnte nicht an App gesendet werden: %s", e)

         elif msg_type == "file_request":
             # App fordert eine Datei an (Re-Download nach Cache-Leerung)
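The pending-queue logic above (cancel the still-running flush task, restart the window on every new file) is a debounce. A self-contained sketch with illustrative names; the window here is shortened for the demo, the bridge's `_PENDING_FILES_WINDOW_SEC` is its own constant:

```python
import asyncio

class PendingFileBuffer:
    """Debounce sketch of the pending-files flush above (illustrative names).

    Every add() restarts a short window; only when the window elapses without
    a newer file does the buffered batch get flushed as one unit."""

    WINDOW_SEC = 0.2  # stand-in for the bridge's _PENDING_FILES_WINDOW_SEC

    def __init__(self):
        self.pending = []
        self.flushed = []      # list of flushed batches (for demonstration)
        self._task = None

    def add(self, file_path: str) -> None:
        self.pending.append(file_path)
        # Restart the flush window: cancel a still-running timer first.
        if self._task and not self._task.done():
            self._task.cancel()
        self._task = asyncio.ensure_future(self._flush_after(self.WINDOW_SEC))

    async def _flush_after(self, delay: float) -> None:
        try:
            await asyncio.sleep(delay)
        except asyncio.CancelledError:
            return             # superseded by a newer add()
        self.flushed.append(list(self.pending))
        self.pending.clear()
```

Two files arriving back-to-back therefore land in one batch, which is exactly why an image plus its caption become a single aria-core request instead of two.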
@@ -1444,20 +1515,28 @@ class ARIABridge:
             if not audio_b64:
                 logger.warning("[rvs] Audio ohne Daten empfangen")
                 return
-            # Voice-Override fuer die kommende ARIA-Antwort (App-lokal gewaehlt)
-            voice_override = payload.get("voice", "")
-            if voice_override:
-                self._next_voice_override = voice_override
-                logger.info("[rvs] Voice-Override (via Audio): %s", voice_override)
-            try:
-                speed = float(payload.get("speed", 0) or 0)
-                if 0.1 <= speed <= 5.0:
-                    self._next_speed_override = speed
-            except (TypeError, ValueError):
-                pass
-            logger.info("[rvs] Audio empfangen: %s, %dms, %dKB",
-                        mime_type, duration_ms, len(audio_b64) // 1365)
-            asyncio.create_task(self._process_app_audio(audio_b64, mime_type))
+            # Voice-Override fuer Folgenachrichten — gleiche Semantik wie beim chat-Event.
+            if "voice" in payload:
+                voice_override = payload.get("voice", "") or ""
+                self._next_voice_override = voice_override or None
+                logger.info("[rvs] Voice fuer Antworten (via Audio): %s",
+                            self._next_voice_override or "(Default)")
+            if "speed" in payload:
+                try:
+                    speed = float(payload.get("speed", 0) or 0)
+                    self._next_speed_override = speed if 0.1 <= speed <= 5.0 else None
+                except (TypeError, ValueError):
+                    self._next_speed_override = None
+            interrupted = bool(payload.get("interrupted", False))
+            audio_request_id = payload.get("audioRequestId", "") or ""
+            location = payload.get("location") or None
+            logger.info("[rvs] Audio empfangen: %s, %dms, %dKB%s%s%s",
+                        mime_type, duration_ms, len(audio_b64) // 1365,
+                        " [BARGE-IN]" if interrupted else "",
+                        " [GPS]" if location else "",
+                        f" reqId={audio_request_id[:16]}" if audio_request_id else "")
+            asyncio.create_task(self._process_app_audio(
+                audio_b64, mime_type, interrupted, audio_request_id, location))

         elif msg_type == "stt_response":
             # Antwort der whisper-bridge auf unseren stt_request
@@ -1513,8 +1592,23 @@ class ARIABridge:
     _STT_REMOTE_TIMEOUT_READY_S = 45.0
     _STT_REMOTE_TIMEOUT_LOADING_S = 300.0

-    async def _process_app_audio(self, audio_b64: str, mime_type: str) -> None:
-        """App-Audio → STT → aria-core. Primaer via whisper-bridge (RVS), Fallback lokal."""
+    async def _process_app_audio(self, audio_b64: str, mime_type: str,
+                                 interrupted: bool = False,
+                                 audio_request_id: str = "",
+                                 location: Optional[dict] = None) -> None:
+        """App-Audio → STT → aria-core. Primaer via whisper-bridge (RVS), Fallback lokal.
+
+        interrupted=True wenn der User waehrend ARIA noch sprach/dachte aufgenommen hat
+        (Barge-In). Wird als Hinweis-Praefix an aria-core mitgegeben damit ARIA die
+        Korrektur/Unterbrechung in den Kontext einordnen kann statt als reine
+        Folgefrage zu behandeln.
+
+        audio_request_id: Korrelations-ID die die App im audio-Event mitschickt — wird
+        unveraendert ans STT-Result zurueckgegeben damit die App die EXAKT richtige
+        'wird verarbeitet'-Bubble ersetzen kann (auch bei mehreren parallelen Aufnahmen).
+
+        location: Optional GPS-Position {lat, lon} — wird als Hinweis-Praefix mitgegeben
+        damit ARIA bei standortbezogenen Fragen sie nutzen kann."""
         # Erst Remote versuchen
         text = await self._stt_remote(audio_b64, mime_type)
         if text is None:
@@ -1526,19 +1620,29 @@ class ARIABridge:

         if text.strip():
             logger.info("[rvs] STT Ergebnis: '%s'", text[:80])
+            # Hints (Barge-In, GPS) als Praefix vorschalten — gemeinsamer Helper
+            # mit dem chat-Pfad damit das Verhalten konsistent ist.
+            core_text = self._build_core_text(text, interrupted, location)
             # ERST an aria-core senden (wichtigster Schritt)
-            await self.send_to_core(text, source="app-voice")
+            await self.send_to_core(core_text, source="app-voice" + (" [barge-in]" if interrupted else ""))
             # STT-Text an RVS senden (fuer Anzeige in App + Diagnostic)
             # sender="stt" damit Bridge es ignoriert (kein Loop)
             try:
-                await self._send_to_rvs({
+                stt_payload = {
+                    "text": text,
+                    "sender": "stt",
+                }
+                if audio_request_id:
+                    stt_payload["audioRequestId"] = audio_request_id
+                ok = await self._send_to_rvs({
                     "type": "chat",
-                    "payload": {
-                        "text": text,
-                        "sender": "stt",
-                    },
+                    "payload": stt_payload,
                     "timestamp": int(asyncio.get_event_loop().time() * 1000),
                 })
+                if ok:
+                    logger.info("[rvs] STT-Text an RVS broadcastet (sender=stt)")
+                else:
+                    logger.warning("[rvs] STT-Text NICHT broadcastet — _send_to_rvs lieferte False")
             except Exception as e:
                 logger.warning("[rvs] STT-Text konnte nicht an RVS gesendet werden: %s", e)
         else:
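The `audioRequestId` round-trip above exists so the app can replace exactly the right "wird verarbeitet" placeholder, even with several recordings in flight. A sketch of the client-side matching idea, with illustrative names rather than the app's real classes:

```python
import uuid

class BubbleStore:
    """Sketch of correlation-ID matching (illustrative). The recorder creates
    a placeholder bubble under a fresh audioRequestId; the STT result echoes
    the ID back unchanged and replaces exactly that bubble."""

    def __init__(self):
        self.bubbles = {}  # audio_request_id -> bubble text

    def add_placeholder(self) -> str:
        audio_request_id = uuid.uuid4().hex
        self.bubbles[audio_request_id] = "wird verarbeitet..."
        return audio_request_id

    def resolve(self, audio_request_id: str, text: str) -> bool:
        if audio_request_id in self.bubbles:
            self.bubbles[audio_request_id] = text
            return True
        # Placeholder already gone (race): caller appends a fresh bubble,
        # matching the defensive behavior listed in the changelog below.
        return False
```

Substring matching, the old approach, cannot distinguish two identical placeholders; keying by ID can.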
+41 -58
@@ -63,24 +63,35 @@
 .log-entry.pipeline-sep { color: #333; margin: 6px 0 2px; }

 .chat-box { background: #080810; border: 1px solid #1E1E2E; border-radius: 6px;
-    min-height: 120px; max-height: 250px; overflow-y: auto; padding: 8px; margin-bottom: 8px; }
-.chat-msg { margin-bottom: 6px; padding: 6px 10px; border-radius: 6px; font-size: 13px; line-height: 1.5; word-wrap: break-word; }
-.chat-msg.sent { background: #0096FF; color: #fff; margin-left: 20%; text-align: right; }
-.chat-msg.received { background: #1E1E2E; margin-right: 20%; }
-.chat-msg.error { background: #3B1010; color: #FF6B6B; }
-.chat-msg .meta { font-size: 10px; color: rgba(255,255,255,0.4); margin-top: 2px; }
+    min-height: 120px; max-height: 250px; overflow-y: auto;
+    padding: 12px; margin-bottom: 8px; display: flex; flex-direction: column; gap: 8px; }
+.chat-msg { padding: 10px 14px; border-radius: 14px; font-size: 14px; line-height: 1.5;
+    word-wrap: break-word; max-width: 80%; white-space: pre-wrap;
+    box-shadow: 0 1px 2px rgba(0,0,0,0.4); }
+.chat-msg.sent { background: #0096FF; color: #fff; align-self: flex-end;
+    border-bottom-right-radius: 4px; }
+.chat-msg.received { background: #1E1E2E; color: #E8E8F0; align-self: flex-start;
+    border-bottom-left-radius: 4px; }
+.chat-msg.error { background: #3B1010; color: #FF6B6B; align-self: flex-start; }
+.chat-msg .meta { font-size: 10px; color: rgba(255,255,255,0.4); margin-top: 4px;
+    display: block; }
 .chat-msg a { color: #66BBFF; text-decoration: underline; }
 .chat-msg.sent a { color: #CCEEFF; }
-.chat-msg .chat-media { max-width: 100%; max-height: 200px; border-radius: 4px; margin-top: 4px; cursor: pointer; display: block; }
+.chat-msg .chat-media { max-width: 100%; max-height: 200px; border-radius: 8px;
+    margin-top: 6px; cursor: pointer; display: block; }
 .chat-msg .chat-media:hover { opacity: 0.85; }
 .lightbox-overlay { display:none; position:fixed; top:0; left:0; right:0; bottom:0; background:rgba(0,0,0,0.92);
     z-index:2000; justify-content:center; align-items:center; cursor:pointer; }
 .lightbox-overlay.open { display:flex; }
 .lightbox-overlay img, .lightbox-overlay video { max-width:95vw; max-height:95vh; border-radius:8px; }

-.input-row { display: flex; gap: 6px; }
-.input-row input { flex: 1; background: #1E1E2E; border: 1px solid #333; border-radius: 6px;
-    padding: 8px 12px; color: #E0E0F0; font-family: inherit; font-size: 13px; }
+.input-row { display: flex; gap: 6px; align-items: flex-end; }
+.input-row input, .input-row textarea {
+    flex: 1; background: #1E1E2E; border: 1px solid #333; border-radius: 6px;
+    padding: 8px 12px; color: #E0E0F0; font-family: inherit; font-size: 13px;
+}
+.input-row textarea { resize: none; min-height: 38px; max-height: 200px; line-height: 1.4;
+    overflow-y: auto; }

 /* Terminal Modal */
 .modal-overlay { display:none; position:fixed; top:0; left:0; right:0; bottom:0; background:rgba(0,0,0,0.85);
@@ -282,7 +293,7 @@
             📎
             <input type="file" id="diag-file-input" multiple accept="image/*,application/pdf,.doc,.docx,.txt" style="display:none;" onchange="handleDiagFileSelect(this.files)">
         </label>
-        <input type="text" id="chat-input" placeholder="Nachricht an ARIA..." onpaste="handleDiagPaste(event)">
+        <textarea id="chat-input" placeholder="Nachricht an ARIA... (Enter sendet, Shift+Enter neue Zeile)" rows="2" onpaste="handleDiagPaste(event)" oninput="autoResizeTextarea(this)"></textarea>
         <button class="btn" id="btn-gw" onclick="testGateway()">Gateway senden</button>
         <button class="btn" id="btn-rvs" onclick="testRVS()">Via RVS senden</button>
     </div>
@@ -300,7 +311,7 @@
             <span style="animation:pulse 1s infinite;">💭</span> <span id="thinking-text-fs">ARIA denkt...</span>
         </div>
         <div class="input-row" style="margin-top:8px;">
-            <input type="text" id="chat-input-fs" placeholder="Nachricht an ARIA..." onkeydown="if(event.key==='Enter'){testRVSFS();event.preventDefault();}">
+            <textarea id="chat-input-fs" placeholder="Nachricht an ARIA... (Enter sendet, Shift+Enter neue Zeile)" rows="2" oninput="autoResizeTextarea(this)"></textarea>
             <button class="btn" onclick="testGatewayFS()">Gateway senden</button>
             <button class="btn" onclick="testRVSFS()">Via RVS senden</button>
         </div>
@@ -654,24 +665,6 @@
             </div>
         </div>

-        <!-- Highlight-Trigger -->
-        <div class="settings-section">
-            <h2>Highlight-Trigger</h2>
-            <div style="font-size:11px;color:#8888AA;margin-bottom:8px;">
-                Woerter die automatisch die Highlight-Stimme (Thorsten) ausloesen.
-                Eines pro Zeile. Aenderungen werden in der Bridge gespeichert.
-            </div>
-            <div class="card" style="max-width:500px;">
-                <textarea id="highlight-triggers" rows="8" style="width:100%;box-sizing:border-box;background:#1E1E2E;border:1px solid #2A2A3E;border-radius:6px;padding:8px;color:#fff;font-size:13px;font-family:monospace;resize:vertical;"
-                          placeholder="Lade..."></textarea>
-                <div style="display:flex;gap:8px;margin-top:8px;">
-                    <button class="btn" onclick="saveHighlightTriggers()" style="flex:1;">Speichern</button>
-                    <button class="btn secondary" onclick="loadHighlightTriggers()" style="flex:1;">Neu laden</button>
-                </div>
-                <div id="trigger-status" style="font-size:11px;color:#555570;margin-top:6px;"></div>
-            </div>
-        </div>
-
         <!-- Tool-Berechtigungen -->
         <div class="settings-section">
             <h2>Tool-Berechtigungen</h2>
@@ -945,14 +938,6 @@
                 return;
             }

-            if (msg.type === 'trigger_list') {
-                const textarea = document.getElementById('highlight-triggers');
-                textarea.value = (msg.triggers || []).join('\n');
-                document.getElementById('trigger-status').textContent = msg.triggers.length + ' Trigger geladen';
-                document.getElementById('trigger-status').style.color = '#8888AA';
-                return;
-            }
-
             if (msg.type === 'service_status') {
                 updateServiceStatus(msg.payload || {});
                 return;
@@ -1947,20 +1932,6 @@
            }
        }

-        // ── Highlight-Trigger ────────────────────────
-        function loadHighlightTriggers() {
-            send({ action: 'get_triggers' });
-        }
-        function saveHighlightTriggers() {
-            const text = document.getElementById('highlight-triggers').value;
-            const triggers = text.split('\n').map(t => t.trim()).filter(t => t.length > 0);
-            send({ action: 'save_triggers', triggers });
-            document.getElementById('trigger-status').textContent = 'Gespeichert (' + triggers.length + ' Trigger)';
-            document.getElementById('trigger-status').style.color = '#34C759';
-        }
-        // Beim Tab-Wechsel zu Einstellungen: Trigger laden
-        const origSwitchMainTab = typeof switchMainTab === 'function' ? switchMainTab : null;
-
        // ── Modus-Wechsel ────────────────────────────
        // Kanonische IDs (matchen bridge/modes.py canonical_id + android ModeSelector)
        const MODE_LABELS = { normal: 'Normal', nicht_stoeren: 'Nicht stoeren', fluester: 'Fluestern', hangar: 'Hangar', gaming: 'Gaming' };
@@ -2069,10 +2040,23 @@
            return str.replace(/&/g,'&amp;').replace(/</g,'&lt;').replace(/>/g,'&gt;');
        }

-        // Enter-Taste sendet via Gateway
-        document.getElementById('chat-input').addEventListener('keydown', (e) => {
-            if (e.key === 'Enter') testRVS();
-        });
+        // Auto-Resize fuer Textarea — wuchst mit dem Inhalt bis zum max-height
+        function autoResizeTextarea(el) {
+            el.style.height = 'auto';
+            el.style.height = Math.min(el.scrollHeight, 200) + 'px';
+        }
+
+        // Enter sendet, Shift+Enter macht neue Zeile (chat-Standard).
+        function chatInputKeydown(e, sendFn) {
+            if (e.key === 'Enter' && !e.shiftKey) {
+                e.preventDefault();
+                sendFn();
+                // Textarea zurueck auf 2 rows setzen
+                e.target.style.height = 'auto';
+            }
+        }
+        document.getElementById('chat-input').addEventListener('keydown', (e) => chatInputKeydown(e, testRVS));
+        document.getElementById('chat-input-fs').addEventListener('keydown', (e) => chatInputKeydown(e, testRVSFS));

        // Escape schliesst Lightbox
        document.addEventListener('keydown', (e) => {
@@ -2432,9 +2416,8 @@
            document.querySelectorAll('.main-nav-btn').forEach(b => {
                if (b.textContent.trim().toLowerCase().includes(tab === 'main' ? 'main' : 'einstellung')) b.classList.add('active');
            });
-            // Einstellungen: Config + Trigger + QR laden
+            // Einstellungen: Config + QR laden
            if (tab === 'settings') {
-                loadHighlightTriggers();
                send({ action: 'get_voice_config' });
                loadRuntimeConfig();
                loadOnboardingQR();
+14 -30
@@ -716,11 +716,24 @@ function sendToRVS_withResponse(sendType, sendPayload, expectType, clientWs) {

 function sendToRVS_raw(msgObj) {
   if (!RVS_HOST || !RVS_TOKEN) return;
+  const payload = JSON.stringify(msgObj);
+  // Persistente Connection bevorzugen — die ist garantiert connected
+  // und wird vom RVS direkt an alle anderen Clients (App, Bridge) broadcastet.
+  // Frische Connections hatten Race-Probleme: die WS war nach dem send manchmal
+  // schon zu, bevor RVS broadcasten konnte → App-Nachrichten verloren.
+  if (rvsWs && rvsWs.readyState === WebSocket.OPEN) {
+    try {
+      rvsWs.send(payload);
+      return;
+    } catch (err) {
+      log("warn", "rvs", `persistente Verbindung send failed (${err.message}) — Fallback frische WS`);
+    }
+  }
   const proto = RVS_TLS === "true" ? "wss" : "ws";
   const url = `${proto}://${RVS_HOST}:${RVS_PORT}?token=${RVS_TOKEN}`;
   const freshWs = new WebSocket(url);
   freshWs.on("open", () => {
-    freshWs.send(JSON.stringify(msgObj));
+    freshWs.send(payload);
     setTimeout(() => { try { freshWs.close(); } catch (_) {} }, 5000);
   });
   freshWs.on("error", () => {});
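The `sendToRVS_raw` change prefers the already-open persistent socket and only falls back to a throwaway connection, which historically raced with the broadcast and could close too early. The same send order, sketched in Python with illustrative names to make the fallback explicit:

```python
class RvsSender:
    """Sketch of the send strategy above (names are illustrative): try the
    persistent connection first; only open a fresh, short-lived socket when
    the persistent one is missing, closed, or fails mid-send."""

    def __init__(self, persistent):
        self.persistent = persistent   # any object with .open and .send()

    def send(self, payload: str, open_fresh) -> str:
        if self.persistent is not None and self.persistent.open:
            try:
                self.persistent.send(payload)
                return "persistent"
            except OSError:
                pass                   # fall through to a fresh connection
        ws = open_fresh()              # factory for a throwaway connection
        ws.send(payload)
        ws.close()                     # fresh sockets are not kept around
        return "fresh"
```

Keeping the serialized `payload` in one place also guarantees both paths send byte-identical messages.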
@@ -1462,10 +1475,6 @@ wss.on("connection", (ws) => {
       } catch {}
       sendToRVS_raw({ type: "config", payload: voiceConfig, timestamp: Date.now() });
       log("info", "server", `Voice-Config gespeichert: xttsVoice=${voiceConfig.xttsVoice || "default"}, whisper=${voiceConfig.whisperModel || "-"}`);
-    } else if (msg.action === "get_triggers") {
-      handleGetTriggers(ws);
-    } else if (msg.action === "save_triggers") {
-      handleSaveTriggers(ws, msg.triggers || []);
     } else if (msg.action === "test_tts") {
       handleTestTTS(ws, msg.text || "Test");
     } else if (msg.action === "preview_voice") {
@@ -1616,31 +1625,6 @@ function handleGetVoiceConfig(clientWs) {
   }
 }

-// ── Highlight-Trigger (legacy UI — wird nicht mehr ausgewertet seit Piper raus) ─
-const TRIGGERS_FILE = "/shared/config/highlight_triggers.json";
-
-async function handleGetTriggers(clientWs) {
-  try {
-    const triggers = fs.existsSync(TRIGGERS_FILE)
-      ? JSON.parse(fs.readFileSync(TRIGGERS_FILE, "utf-8"))
-      : [];
-    clientWs.send(JSON.stringify({ type: "trigger_list", triggers }));
-  } catch (err) {
-    clientWs.send(JSON.stringify({ type: "trigger_list", triggers: [], error: err.message }));
-  }
-}
-
-async function handleSaveTriggers(clientWs, triggers) {
-  try {
-    fs.mkdirSync("/shared/config", { recursive: true });
-    fs.writeFileSync(TRIGGERS_FILE, JSON.stringify(triggers, null, 2));
-    log("info", "server", `${triggers.length} Highlight-Trigger gespeichert`);
-    clientWs.send(JSON.stringify({ type: "trigger_list", triggers }));
-  } catch (err) {
-    log("error", "server", `Trigger speichern fehlgeschlagen: ${err.message}`);
-  }
-}
-
 // ── TTS Diagnose (XTTS) ───────────────────────────────
 // ── Voice Preview ────────────────────────────────────────
 // Sammelt audio_pcm Chunks einer Preview-Anfrage, baut am Ende eine WAV
+1 -1
@@ -9,7 +9,7 @@ services:
     command: >-
       sh -c "apk add --no-cache openssh-client bash curl &&
       npm install -g @anthropic-ai/claude-code claude-max-api-proxy &&
-      DIST=$(find /usr/local/lib -path '*/claude-max-api-proxy/dist' -type d | head -1) &&
+      DIST=$$(find /usr/local/lib -path '*/claude-max-api-proxy/dist' -type d | head -1) &&
       sed -i 's/startServer({ port })/startServer({ port, host: process.env.HOST || \"127.0.0.1\" })/' $$DIST/server/standalone.js &&
       sed -i 's/if (model\.includes/if ((model||\"claude-sonnet-4\").includes/g' $$DIST/adapter/cli-to-openai.js &&
       sed -i '1i\\function _t(c){return typeof c===\"string\"?c:Array.isArray(c)?c.filter(function(b){return b.type===\"text\"}).map(function(b){return b.text||\"\"}).join(\"\"):String(c)}' $$DIST/adapter/openai-to-cli.js &&
|
|||||||
@@ -2,6 +2,41 @@
|
|||||||
|
|
||||||
## Erledigt
|
## Erledigt
|
||||||
|
|
||||||
|
### Bugs / Fixes
|
||||||
|
|
||||||
|
- [x] Diagnostic: "ARIA denkt..." bleibt nicht mehr stehen
|
||||||
|
- [x] App: "ARIA denkt..." Indicator + Abbrechen-Button (Bridge spiegelt agent_activity via RVS)
|
||||||
|
- [x] Text messages are answered by ARIA (bridge chat handler fix)
- [x] Voice selection works again: speaker_wav as a basename instead of a path for the daswer123 local mode
- [x] A voice switch in Diagnostic resets all app-local voice overrides via type "config"
- [x] Streaming TTS stop race: the writer waits for playbackHeadPosition before stop()/release() — no more cut-off sentences
- [x] App: audio output no longer stops mid-sentence (playbackHeadPosition wait + stop-race fix)
- [x] AudioFocus.release waits for the real end of playback — no more volume ramping back up mid-answer
- [x] App mute/auto-playback bug: closure bug fixed (ttsCanPlayRef mirrored live instead of going stale)
- [x] App zombie recording: ear-off kills a running recording so the record button keeps working
- [x] Whisper no longer transcribes voice uploads with a hardcoded "small" — the current model is kept, no needless model swap
- [x] RVS/WebSocket maxPayload 50 MB: voice_upload with a WAV as base64 no longer blows the frame limit
- [x] Wake-word embedding rank-4 fix (a pipeline bug that prevented triggering) + frame count read from the model metadata
- [x] PCM underrun protection: silence fill during render pauses prevents Spotify auto-resume after 10 s of stall
- [x] Conversation focus lifecycle: AudioFocus is tied to the wake-word state 'conversing' instead of individual streams — Spotify stays paused throughout, even between multiple answers
- [x] Voice override keeps the voice across all TTS calls of an answer (before: back to the default after the first TTS call)
- [x] Voice-message bubble made defensive: the STT result adds a new bubble if the placeholder is missing (race protection)
- [x] Image + text as ONE request: the bridge buffers files for 800 ms and merges them with the following chat text into a single send_to_core (instead of two separate ARIA answers)
- [x] Diagnostic→App: persistent RVS connection instead of a fresh one per send (race problems with zombie WS solved)
- [x] Text selection in bubbles works again (nested Text+onPress removed; dataDetectorType="all" makes links clickable automatically)
- [x] **Placeholder race with parallel voice messages solved**: every recording gets a unique audioRequestId and the bridge returns it with the STT result — the app now matches exactly the right bubble instead of matching by substring
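The id-based matching above can be sketched as follows. This is an illustrative model, not the app's actual code; all names (`startVoiceMessage`, `onSttResult`, the bubble shape) are assumptions:

```typescript
// Sketch: each recording registers a placeholder bubble under a unique
// audioRequestId; the STT result carries the id back, so the correct
// bubble is updated even when several voice messages are in flight.
type Bubble = { id: string; text: string };

const bubbles: Bubble[] = [];
const pending = new Map<string, string>(); // audioRequestId -> bubble id

function startVoiceMessage(audioRequestId: string): void {
  const bubble = { id: `bubble-${audioRequestId}`, text: "🎤 …" };
  bubbles.push(bubble);
  pending.set(audioRequestId, bubble.id);
}

function onSttResult(audioRequestId: string, transcript: string): void {
  const bubbleId = pending.get(audioRequestId);
  pending.delete(audioRequestId);
  const bubble = bubbles.find((b) => b.id === bubbleId);
  if (bubble) {
    bubble.text = transcript; // exact match via id, no substring guessing
  } else {
    // defensive fallback from the bubble fix above: add a fresh bubble
    bubbles.push({ id: `bubble-${audioRequestId}`, text: transcript });
  }
}
```

Even when results arrive out of order, each transcript lands in the bubble that belongs to its recording.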
- [x] The mic-open toast "🎤 speak now" only appears once audioService.startRecording has actually succeeded (instead of ~400 ms earlier, at wake-word detection)
- [x] Voice messages without an STT result are removed automatically after 60 s + recording duration (safe enough for 5–30 min recordings, fast enough for empty wake-word echoes)
- [x] Adaptive VAD baseline made more robust: minimum instead of average, plus caps of -50 dB to -28 dB (silence) / -40 dB to -18 dB (speech) — no more "dead" VAD configuration in loud environments or on wake-word echo
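A minimal sketch of the capped baseline logic, under the numbers stated above (min of the calibration levels, +6 dB/+12 dB offsets, fixed clamp ranges); the function name and shape are assumptions, not the app's code:

```typescript
// Sketch: baseline = MINIMUM level seen during calibration (echoes and
// noise bursts cannot drag it up the way an average can), and both
// derived thresholds are clamped so a loud room cannot produce a
// "dead" VAD configuration.
const clamp = (v: number, lo: number, hi: number) =>
  Math.min(hi, Math.max(lo, v));

function vadThresholds(calibrationDb: number[]): { silenceDb: number; speechDb: number } {
  const baseline = Math.min(...calibrationDb);
  return {
    silenceDb: clamp(baseline + 6, -50, -28), // silence cap: -50…-28 dB
    speechDb: clamp(baseline + 12, -40, -18), // speech cap: -40…-18 dB
  };
}
```

In a very quiet room the thresholds bottom out at -50/-40 dB; in a loud one they top out at -28/-18 dB, so both ends stay usable.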
- [x] Push-to-talk removed, tap-to-talk only (prevented touch race problems)
- [x] A manual mic stop ends the wake-word conversation: tapping the mic button while conversing → audio discarded + back to armed (= the wake word listens again, no auto-mic after ARIA's answer). VAD auto-stop stays in place for multi-turn
- [x] **Wake word pauses during phone calls**: phoneCall calls pauseForCall (openWakeWord.stop) on RINGING/OFFHOOK and resumeFromCall on IDLE. The pre-call state is remembered — armed stays armed, conversing is degraded to armed (the user should not land in the middle of a half-finished dialog)
- [x] **App-resume cooldown**: switching from background to foreground no longer causes a false wake-word trigger. An AppState listener sets a 1.5 s cooldown during which onWakeDetected events are ignored (the audio-level spike on the AudioFocus switch was otherwise interpreted as a wake word)
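The cooldown idea reduces to a timestamp check; a sketch with a clock passed in for clarity (names and structure are illustrative, not the app's actual listener code):

```typescript
// Sketch: coming back to the foreground opens a 1.5 s window during
// which wake-word detections are dropped, so the audio-level spike
// from the AudioFocus switch is not mistaken for a wake word.
const COOLDOWN_MS = 1500;
let cooldownUntil = 0;

function onAppForeground(nowMs: number): void {
  cooldownUntil = nowMs + COOLDOWN_MS; // set by the AppState listener
}

function shouldAcceptWakeDetection(nowMs: number): boolean {
  return nowMs >= cooldownUntil; // events inside the window are ignored
}
```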
- [x] Background mic made robust: acquireBackgroundAudio('rec'/'wake') is now called BEFORE AudioRecord.startRecording — the foreground service with foregroundServiceType=microphone must be active before the mic is grabbed, otherwise Android 11+ blocks background access
- [x] **Silence level can be set manually** (Settings → Voice input): override value in dB from -55 to -15, default "automatic". An info button with a modal explains the scale (lower = more sensitive, higher = more robust against background noise). With a manually set value the adaptive baseline is ignored
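The override precedence described above fits in one small function. A hedged sketch with assumed names; `null` stands for the "automatic" setting:

```typescript
// Sketch: a manually configured silence level (dB) wins over the
// adaptive baseline; "automatic" (null) falls back to the adaptive
// value, and manual input is kept inside the UI range of -55…-15 dB.
function effectiveSilenceDb(manualDb: number | null, adaptiveDb: number): number {
  if (manualDb === null) return adaptiveDb;      // "automatic"
  return Math.min(-15, Math.max(-55, manualDb)); // clamp to settings range
}
```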

### App Features

- [x] Image upload works (shared volume /shared/uploads/)
- [x] Voice messages are shown as text (STT → chat bubble)
- [x] Clear cache + auto-download of attachments
- [x] Ear button → conversation mode (auto-record after each ARIA answer)
- [x] Play button in ARIA messages for voice playback
- [x] Chat search in the app (magnifier in the status bar)
- [x] Cancel button in the Diagnostic chat
- [x] Voice settings (Ramona/Thorsten, speed per voice — replaced by XTTS/F5-TTS)
- [x] Highlight triggers configurable in Diagnostic (later removed entirely — was a Piper relic)
- [x] XTTS v2 integration (gaming PC, GPU, voice cloning) — replaced by F5-TTS
- [x] XTTS voice cloning (upload audio samples, use your own voice)
- [x] TTS engine selectable (Piper/XTTS) — Piper out, XTTS out, now F5-TTS only
- [x] Auto-update: APK installation via FileProvider
- [x] Auto-update: "Check for updates" button in the app settings
- [x] Audio queue (sequential playback, no overlapping)
- [x] Several attachments + text before sending (pending preview)
- [x] Paste support for images in the Diagnostic chat
- [x] Markdown cleanup for TTS (bold, italic, code, links, etc.)
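A minimal sketch of such a markdown scrub, assuming a regex-based approach; the actual rules in the bridge may differ, and the function name is illustrative:

```typescript
// Sketch: strip the markdown markers TTS should not read aloud.
// Links keep only their label; bold/italic markers and inline code
// backticks are removed, the inner text survives.
function stripMarkdownForTts(text: string): string {
  return text
    .replace(/\[([^\]]+)\]\([^)]*\)/g, "$1") // [label](url) -> label
    .replace(/`([^`]*)`/g, "$1")             // `code` -> code
    .replace(/(\*\*|__)(.*?)\1/g, "$2")      // **bold** / __bold__
    .replace(/(\*|_)(.*?)\1/g, "$2");        // *italic* / _italic_
}
```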
- [x] Diagnostic: export sessions as Markdown (download button)
- [x] Speech gate: a recording is discarded when no speech is detected
- [x] Session persistence: the selected session survives container restarts
- [x] Whisper STT: model selection in Diagnostic (tiny/base/small/medium/large-v3), hot reload
- [x] App: audio recording explicitly 16 kHz mono (saves a resample, optimal for Whisper)
- [x] Streaming TTS: PCM stream → AudioTrack MODE_STREAM, no WAV gaps
- [x] Disk-full banner in Diagnostic: red overlay + copyable cleanup commands (safe + aggressive)
- [x] cleanup.sh: combined Docker cleanup command (safe / --full)
- [x] Streaming TTS pre-roll: AudioTrack play() only starts once 2.5 s are buffered
- [x] Leading silence (200 ms) at the start of the stream — AudioTrack spins up cleanly
- [x] Pre-roll buffer adjustable in the app settings (1.0–6.0 s, default 3.5 s)
- [x] Fade-in on the first PCM chunk (120 ms) — hides XTTS/F5-TTS warm-up glitches
- [x] Decimal-to-words for TTS (0.1 → "null komma eins", with an IP-protection lookahead)
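A sketch of the decimal-to-words rule with the IP guard. This is an assumed, simplified implementation (single digits only, German number words since the TTS speaks German); lookarounds refuse to touch dotted sequences such as 192.168.0.1:

```typescript
// Sketch: a lone decimal like "0.1" becomes spoken German
// ("null komma eins"); the lookbehind/lookahead guard keeps dotted
// quads (IPs) and longer decimals untouched.
const DIGITS = ["null", "eins", "zwei", "drei", "vier",
                "fuenf", "sechs", "sieben", "acht", "neun"];

function decimalsToWords(text: string): string {
  // (?<![\d.]) and (?![\d.]) block matches inside x.y.z.w sequences
  return text.replace(/(?<![\d.])(\d)\.(\d)(?![\d.])/g,
    (_m, a, b) => `${DIGITS[+a]} komma ${DIGITS[+b]}`);
}
```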
- [x] Generic acronym spelling (XTTS → X T T S, USB → U S B, on top of the explicit list)
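The generic rule can be sketched in one line; the heuristic here (any run of two or more uppercase letters is spelled out) is an assumption about how the bridge does it:

```typescript
// Sketch: spell out uppercase runs letter by letter so the TTS reads
// "XTTS" as "X T T S" and "USB" as "U S B".
function spellAcronyms(text: string): string {
  return text.replace(/\b[A-Z]{2,}\b/g, (m) => m.split("").join(" "));
}
```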
- [x] voice_preload/voice_ready: silent mini-render on voice switch + toast/status "ready"
- [x] Whisper STT offloaded to the Gamebox (faster-whisper CUDA, float16) — new aria-whisper-bridge container
- [x] aria-bridge: STT primarily remote (Gamebox), local fallback after a 45 s timeout
- [x] **F5-TTS replaces XTTS completely** — new aria-f5tts-bridge container, voice cloning, sentence-by-sentence streaming
- [x] Voice upload with Whisper auto-transcription — the user no longer has to type a reference text
- [x] Audio pause instead of ducking: Spotify/YouTube pause completely during TTS (TRANSIENT instead of MAY_DUCK)
- [x] VAD silence adjustable in the app settings (1.0–8.0 s, default 2.8 s)
- [x] MAX_RECORDING raised to 120 s — longer explanations possible
- [x] F5-TTS: reference-WAV preprocessing — loudness normalization to -16 LUFS + silence trim + 10 s clip for consistent cloning quality
- [x] F5-TTS: German fine-tune (aihpi/F5-TTS-German, Vocos variant) configurable via an hf:// path in Diagnostic
- [x] Dynamic STT timeout in aria-bridge: 300 s while the whisper-bridge is 'loading', 45 s when 'ready'
- [x] service_status broadcasts: f5tts/whisper report their loading state, banner in Diagnostic (bottom right) + app (top)
- [x] config_request pattern: bridges ask for the current voice config on connect, aria-bridge answers
- [x] F5-TTS tuning via Diagnostic (model ID, checkpoint, cfg_strength, nfe_step) instead of ENV vars — hot reload on model switch
- [x] Conversation window: conversation mode ends after X seconds of silence (1.0–20.0 s, default 8 s, adjustable in Settings)
- [x] Porcupine wake-word integration in the app (replaced by openWakeWord)
- [x] HF cache as a bind mount instead of a Docker volume — no .vhdx bloat on Docker Desktop / Windows
- [x] cleanup-windows.ps1 / .bat: VHDX cleanup via diskpart (without Hyper-V) with self-elevation
- [x] App text rendering: messages selectable + autolink for URLs/e-mails/phone numbers (browser/mail/dialer)
- [x] TTS playback speed adjustable per device (Settings → 0.5–2.0x in 0.1 steps, default 1.0)
- [x] Diagnostic: voice preview modal (play icon in front of the delete X, text field with a default, play the WAV in the browser)
- [x] **Wake word fully on-device via openWakeWord (ONNX Runtime)** — Porcupine out, no more API key or license fees. Bundled keywords: hey_jarvis, computer, alexa, hey_mycroft, hey_rhasspy
- [x] APK ABI split to arm64-v8a — from ~136 MB down to ~35 MB, much smaller auto-update downloads to the phone
- [x] PhoneStateListener: TTS pauses on an incoming call (READ_PHONE_STATE permission)
- [x] Diagnostic chat: bubble-style formatting, multi-line input field (textarea, Enter sends, Shift+Enter inserts a newline)
- [x] Adaptive VAD threshold: baseline from the first 500 ms of mic level, silence = baseline+6 dB / speech = baseline+12 dB
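For the baseline to mean anything, the mic level has to be expressed in dB first. A sketch of one common way to do that from raw 16-bit PCM (an assumption about the measurement, not the app's exact code): RMS relative to full scale, in dBFS. At 16 kHz mono, the first 500 ms are 8000 samples, from which the baseline above would be taken:

```typescript
// Sketch: level in dBFS from a frame of 16-bit PCM samples.
// 0 dB = full scale; pure silence gets a fixed floor.
function levelDb(samples: Int16Array): number {
  let sum = 0;
  for (const s of samples) sum += s * s;
  const rms = Math.sqrt(sum / samples.length) / 32768; // normalize to full scale
  return rms > 0 ? 20 * Math.log10(rms) : -96;         // -96 dB floor for silence
}
```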
- [x] Maximum recording duration configurable in Settings (1–30 min, default 5 min) — longer dictations possible
- [x] Barge-in: the user can interrupt ARIA during an answer or tool use, the old activity is aborted, and the bridge gives aria-core a context hint that this is a correction
- [x] Settings sub-screens: 8 categories (Connection, General, Voice input, Wake word, Voice output, Storage, Log, About) instead of one long list
- [x] **Ready sound (airplane ding-dong) when the mic opens after a wake word** — acoustic confirmation instead of just a toast. Toggle in Settings → Wake word, on by default
- [x] **Wake word in parallel with TTS** using AcousticEchoCanceler: the user says "Computer" while ARIA is speaking → TTS goes silent immediately, a new recording starts
- [x] **Send GPS position**: toggle in Settings → General → Location, persisted in AsyncStorage. When active, lat/lon are attached to every chat/audio message. The bridge prefixes the text for aria-core with a GPS hint (including the instruction to mention the position only when relevant)
- [x] **Background audio service**: TTS, wake-word listening AND recording keep running while the app is minimized. Foreground service with foregroundServiceType=mediaPlayback|microphone, persistent notification with dynamic text ("ARIA is speaking" / "ARIA is listening" / "ARIA ready")

### Infrastructure

- [x] Watchdog with container restart (2 min warning → 5 min doctor --fix → 8 min restart)
- [x] On-the-fly message backup (/shared/config/chat_backup.jsonl)
- [x] RVS messages from the smartphone go through
- [x] SSH volume read-write for the proxy (no more -F workaround)

## Open

### Bugs

### App Features

- [ ] Load chat history more reliably (AsyncStorage race condition)
- [ ] Custom wake-word upload via Diagnostic (own .onnx files without an app rebuild)
- [ ] Pause + resume on phone calls: currently the TTS stream is hard-stopped when the phone rings; nicer would be pause + resume after hanging up

### Architecture

- [ ] Images: use Claude Vision directly (currently ARIA only gets a file path)