# Compare commits

103 commits:

```text
087e91dca1 2de4cbc00f 03fc465057 b696b47feb 6aae565541
214bd218a0 2afeee29ee c8dee4c416 f49f3c3b08 c4bbb06710
4411cc4fff 24a91887ef 4e62b2919f fa774156fe 3b19f05c5b
fc3ecaacca 08857093b5 62018b3e51 89e3a195a3 f023ba0ac5
a0570ef8f7 facde1fef7 38106a2096 a476afb311 db4c7b9b72
3bc490b485 dd6d70c46e b1eaf42fef fb9e5dcd10 f95e71463f
1088bff43d cad68db2a2 50b10c8ac0 a8b586ec92 632e1e4fa1
7e12816ebd 8f64f8fb30 b3ff3991c4 a4ea387c98 68fbf74a23
b857f778e9 31aa82b68c de8eeb69e2 f5970ce700 ef1a4436ca
981779cd9e 3dcd2ae0b4 2750b867a3 f6424add6c 2dfd21d1d0
9d9ddc730b 77ccee8331 175dcdf225 1549e9cd4f 910e74b497
160c5c34b6 a6638c0108 43c21d3ddc b73c6c346e b91ddc5bdf
7d08c06720 f066a2a555 b55b0e7c42 70f806ef80 0773d9496d
1a4857ed62 962d814318 9276a92c83 d16896c4b4 20050d4077
79760d1b2e 13f1103604 73b7a76ea8 17f3d8870e 4feaacc7e4
af7b2674f3 97442198ec e3e841f2ab 33185de42b dbe547d4ea
1a982c0d45 dfba5ceb1f 1a6f633836 7f7db100af d646e9d58e
bef59ba134 dbebfd44ff 4d0b9e0d78 0c43a18402 5bdcc3c65b
52795530f9 2eb0b4df90 0c18090351 d6b54d3247 ead28cf09a
f682aad4ff e0c1a4bcd5 a648dad96d da5579038e 4ba48940b9
568ef9ed10 7682a0ce58 3ca834e633
```
```diff
@@ -13,6 +13,10 @@ aria-data/config/*.env
 !aria-data/config/*.env.example
 !aria-data/config/openclaw.env
 
+# Privater User-Profile-Snippet (Tool-Stack, interne URLs)
+aria-data/config/USER.md
+!aria-data/config/USER.md.example
+
 # ── ARIAs Gedächtnis (nur per tar gesichert) ────
 aria-data/brain/
 
```
```diff
@@ -384,7 +384,7 @@ API-Endpoint fuer andere Services: `GET http://localhost:3001/api/session`
 - **VAD (Voice Activity Detection)**: Adaptive Schwelle (Baseline aus ersten 500ms Mic-Pegel + 6dB Offset). Konfigurierbare Stille-Toleranz (1.0–8.0s, Default 2.8s) bevor Auto-Stop greift. Max-Aufnahme einstellbar (1–30 min, Default 5 min)
 - **Barge-In**: Wenn du waehrend ARIAs Antwort eine neue Sprach-/Text-Nachricht reinschickst, wird sie unterbrochen + bekommt den Hint "das ist eine Korrektur"
 - **Wake-Word waehrend TTS**: Du kannst "Computer" sagen waehrend ARIA noch redet — AcousticEchoCanceler verhindert dass ARIAs eigene Stimme das Wake-Word triggert
-- **Anruf-Pause**: TTS verstummt automatisch wenn das Telefon klingelt (READ_PHONE_STATE Permission)
+- **Anruf-Pause + Auto-Resume**: TTS verstummt bei klassischem Anruf oder VoIP-Call (WhatsApp/Signal/Discord). Nach dem Auflegen geht ARIA von der **genauen Stelle** weiter wo sie unterbrochen wurde — die App misst die Position vom Wiedergabe-Anfang und nutzt den WAV-Cache der Antwort
 - **Speech Gate**: Aufnahme wird verworfen wenn keine Sprache erkannt
 - **STT (Speech-to-Text)**: 16kHz mono → Bridge → Gamebox-Whisper (CUDA) → Text im Chat. Fast in Echtzeit.
 - **"ARIA denkt..." Indicator**: Zeigt live den Status vom Core (Denken, Tool, Schreiben) + Abbrechen-Button
```
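The adaptive VAD rule described in the hunk above (baseline from the first 500 ms of mic level plus a 6 dB offset, configurable silence tolerance before auto-stop, capped recording length) reduces to two pure functions. This is an illustrative sketch only: the function names and the simple averaging are assumptions, not the app's actual implementation.

```typescript
// Sketch of the adaptive VAD rule from the README diff above.
// Names and the plain mean are illustrative assumptions.

/** Threshold = mean mic level (dB) of the first 500 ms + 6 dB offset. */
function adaptiveThresholdDb(baselineSamplesDb: number[], offsetDb = 6): number {
  const mean =
    baselineSamplesDb.reduce((a, b) => a + b, 0) / baselineSamplesDb.length;
  return mean + offsetDb;
}

/** Auto-stop fires once the level has stayed below the threshold for longer
 *  than the silence tolerance (default 2.8 s), or at the max recording time. */
function shouldAutoStop(
  silentForMs: number,
  recordedMs: number,
  silenceToleranceMs = 2800,
  maxRecordingMs = 5 * 60 * 1000,
): boolean {
  return silentForMs >= silenceToleranceMs || recordedMs >= maxRecordingMs;
}
```

Both knobs map directly onto the settings mentioned in the diff (silence tolerance 1.0 to 8.0 s, max recording 1 to 30 min).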
````diff
@@ -510,10 +510,36 @@ Der Update-Flow:
 App (Mikrofon) → AAC/MP4 Aufnahme → Base64 → RVS → Bridge
 Bridge: FFmpeg (16kHz PCM) → Whisper STT → Text → aria-core
 Bridge: STT-Ergebnis → RVS → App (Placeholder wird durch transkribierten Text ersetzt)
-aria-core → Antwort → Bridge → XTTS (Gaming-PC) → PCM-Stream → RVS → App
+aria-core → Antwort → Bridge → F5-TTS (Gaming-PC) → PCM-Stream → RVS → App
 App: AudioTrack MODE_STREAM (nahtlos), Cache als WAV pro Message
 ```
 
+### Audio-Verhalten in der App
+
+| Phase                         | Andere App (Spotify) | ARIA-Mikro              |
+|-------------------------------|----------------------|-------------------------|
+| Idle / Ohr aus                | spielt frei          | aus                     |
+| Wake-Word lauscht (armed)     | spielt frei          | passiv (openWakeWord)   |
+| User-Aufnahme laeuft          | pausiert (EXCLUSIVE) | Recording               |
+| Aufnahme zu Ende              | resumed              | aus                     |
+| ARIA denkt/schreibt (~20s)    | spielt frei          | aus                     |
+| TTS startet                   | pausiert (DUCK)      | aus (oder barge)        |
+| TTS spielt (auch GPU-Pausen)  | bleibt pausiert      | barge wenn Wake-Word    |
+| TTS zu Ende                   | nach 800ms resumed   | (Conversation-Window)   |
+| Eingehender Anruf (auch VoIP) | —                    | Mikro pausiert          |
+| Anruf vorbei (Auto-Resume)    | pausiert wieder      | aus                     |
+| Neue Frage waehrend Anruf     | —                    | (Resume verworfen)      |
+
+Mechanismen: Underrun-Schutz im PcmStreamPlayer (Stille-Fill in Render-
+Pausen), Conversation-Focus bei Wake-Word, Foreground-Service mit
+mediaPlayback|microphone, Anruf-Erkennung ueber TelephonyManager +
+AudioFocus-Loss-Listener mit Polling-Fallback (VoIP). Bei Anruf wird
+die Wiedergabe-Position gemerkt — nach dem Auflegen spielt ARIA ab
+der genauen Stelle weiter (oder verwirft das wenn der User waehrend
+des Telefonats per Text eine neue Frage gestellt hat). PcmPlayback-
+Finished-Event vom Native sorgt dafuer dass Spotify erst pausiert
+bleibt bis ARIA wirklich verstummt ist.
+
 ### Datei-Pipeline (Bilder & Anhaenge)
 
 ```
````
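The pause/auto-resume bookkeeping described above (remember the playback position when a call arrives, resume the cached WAV from that offset after hang-up, discard the pending resume if the user asks a new question mid-call) is a small state machine. A hedged sketch; the real app measures elapsed time from playback start and all names here are assumptions:

```typescript
// Illustrative sketch of the call-pause / auto-resume logic described above.

interface PendingResume {
  messageId: string;  // which cached answer WAV to resume
  positionMs: number; // offset measured from playback start
}

class ResumeTracker {
  private startedAt: number | null = null;
  private pending: PendingResume | null = null;

  onPlaybackStart(now: number): void {
    this.startedAt = now;
  }

  /** Incoming call: remember how far into the answer we were. */
  onCallStart(messageId: string, now: number): void {
    if (this.startedAt !== null) {
      this.pending = { messageId, positionMs: now - this.startedAt };
      this.startedAt = null;
    }
  }

  /** A new question during the call invalidates the pending resume. */
  onNewUserMessage(): void {
    this.pending = null;
  }

  /** Call ended: returns the resume point, or null if it was discarded. */
  onCallEnd(): PendingResume | null {
    const p = this.pending;
    this.pending = null;
    return p;
  }
}
```

The "neueste Antwort gewinnt" edge case from the changelog corresponds to `onNewUserMessage()` clearing the pending resume.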
```diff
@@ -568,8 +594,7 @@ aria-data/
 │   └── diag-state/             ← Diagnostic persistenter State
 │
 │   (im Shared Volume /shared/config/):
-│   ├── voice_config.json       ← TTS-Einstellungen (Stimme, Speed, Engine)
-│   ├── highlight_triggers.json ← Highlight-Trigger Woerter
+│   ├── voice_config.json       ← TTS-Einstellungen (Stimme, Speed, F5-TTS-Tuning)
 │   └── chat_backup.jsonl       ← Nachrichten-Backup (on-the-fly)
 │
 └── ssh/                        ← SSH Keys fuer VM-Zugriff
```
```diff
@@ -816,7 +841,7 @@ docker exec aria-core ssh aria-wohnung hostname
 - [x] SSH-Zugriff auf VM (aria-wohnung)
 - [x] Diagnostic Web-UI + Einstellungen
 - [x] Session-Verwaltung + Chat-History
-- [x] Stimmen-Einstellungen (Ramona/Thorsten, Speed, Highlight-Trigger) — durch XTTS v2 Voice Cloning ersetzt
+- [x] Stimmen-Einstellungen (frueher Piper Ramona/Thorsten, Highlight-Trigger) — durch XTTS, dann F5-TTS Voice Cloning ersetzt
 - [x] Piper komplett entfernt — nur noch XTTS v2 als TTS (Gaming-PC)
 - [x] Streaming TTS: PCM-Chunks direkt in AudioTrack, nahtlose Wiedergabe
 - [x] TTS satzweise fuer lange Texte
```
```diff
@@ -845,12 +870,17 @@ docker exec aria-core ssh aria-wohnung hostname
 - [x] Audio-Pause statt Ducking (TRANSIENT statt MAY_DUCK) + release-Timing fix
 - [x] VAD-Stille-Toleranz einstellbar (1-8s) + adaptive Mikro-Baseline + Max-Aufnahme einstellbar (1-30 min)
 - [x] Barge-In: User kann ARIA waehrend Antwort unterbrechen, aria-core bekommt Kontext-Hint
-- [x] Anruf-Pause: TTS verstummt bei eingehendem Anruf (PhoneStateListener)
+- [x] Anruf-Pause + Auto-Resume: TTS verstummt bei Anruf, faehrt nach Auflegen ab der gemerkten Position fort (Date.now()-Tracking + WAV-Cache der Antwort)
+- [x] PcmPlaybackFinished-Event: AudioFocus wird erst released wenn AudioTrack wirklich durch ist — kein Spotify-mid-TTS mehr
+- [x] Edge-Case: neue Frage waehrend Telefonat verwirft pending Auto-Resume, neueste Antwort gewinnt
 - [x] Settings-Sub-Screens: 8 Kategorien statt langer Liste
 - [x] APK ABI-Split arm64-v8a: 35 MB statt 136 MB
 - [x] Sprachnachrichten-Bubble: audioRequestId statt Substring-Match — keine vertauschten Bubbles mehr bei parallelen Aufnahmen
 - [x] Bereit-Sound (Airplane Ding-Dong) wenn Mikro nach Wake-Word offen ist — akustische Bestaetigung, in Settings abschaltbar
 - [x] Wake-Word parallel zu TTS mit AcousticEchoCanceler — "Computer" sagen waehrend ARIA spricht stoppt sie und oeffnet Mikro
+- [x] GPS-Position mit Nachrichten mitsenden (Toggle in Settings) — ARIA nutzt sie nur bei standortbezogenen Fragen, im Chat sichtbar nur in ihrer Antwort
+- [x] Sprachnachrichten ohne STT-Result werden nach Timeout automatisch entfernt (skaliert mit Aufnahmedauer)
+- [x] Background Audio Service: TTS, Wake-Word-Lauschen + Aufnahme laufen auch bei minimierter App weiter (Foreground-Service mit mediaPlayback|microphone, dynamische Notification)
 - [x] Disk-Voll Banner in Diagnostic mit copy-baren Cleanup-Befehlen
 - [x] Wake-Word on-device via openWakeWord (ONNX Runtime, kein API-Key) + State-Icon
 
```
```diff
@@ -13,6 +13,7 @@ import { createBottomTabNavigator } from '@react-navigation/bottom-tabs';
 import ChatScreen from './src/screens/ChatScreen';
 import SettingsScreen from './src/screens/SettingsScreen';
 import rvs from './src/services/rvs';
+import { initLogger } from './src/services/logger';
 
 // --- Navigation ---
 
```
```diff
@@ -44,6 +45,10 @@ const TAB_ICONS: Record<string, { active: string; inactive: string }> = {
 const App: React.FC = () => {
   // Beim Start: gespeicherte RVS-Konfiguration laden und verbinden
   useEffect(() => {
+    // Verbose-Logging-Setting laden BEVOR andere Module loslegen.
+    // initLogger ist async aber blockt nichts — solange er noch laueft,
+    // loggen wir normal (Default an), danach respektiert console.log das Setting.
+    initLogger().catch(() => {});
    const initConnection = async () => {
      const config = await rvs.loadConfig();
      if (config) {
```
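The comment in the hunk above describes a logger that stays verbose until an async-loaded setting says otherwise, and never blocks startup. A minimal standalone sketch of that pattern; the real `initLogger` patches `console.log` and reads the setting from app storage, so the injectable sink and loader here are assumptions:

```typescript
// Sketch: messages pass through while the persisted verbose setting is still
// loading (default ON); afterwards the setting is respected. Storage access
// is injected instead of read from React Native settings.

class GatedLogger {
  private verbose = true; // default ON while the setting loads
  readonly lines: string[] = [];

  log(msg: string): void {
    if (this.verbose) this.lines.push(msg);
  }

  /** Mirrors initLogger(): async, keeps the default if loading fails. */
  async init(loadSetting: () => Promise<boolean>): Promise<void> {
    try {
      this.verbose = await loadSetting();
    } catch {
      /* keep default */
    }
  }
}
```

The `initLogger().catch(() => {})` call in the diff matches this contract: a failed load must never take the app down.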
```diff
@@ -79,8 +79,8 @@ android {
         applicationId "com.ariacockpit"
         minSdkVersion rootProject.ext.minSdkVersion
         targetSdkVersion rootProject.ext.targetSdkVersion
-        versionCode 707
-        versionName "0.0.7.7"
+        versionCode 10102
+        versionName "0.1.1.2"
         // Fallback fuer Libraries mit Product Flavors
         missingDimensionStrategy 'react-native-camera', 'general'
     }
```
```diff
@@ -6,6 +6,17 @@
     <uses-permission android:name="android.permission.REQUEST_INSTALL_PACKAGES" />
     <!-- Anruf-State lesen damit TTS bei klingelndem Telefon pausiert -->
     <uses-permission android:name="android.permission.READ_PHONE_STATE" />
+    <!-- Optional: GPS-Position der Frage anhaengen (nur wenn User in Settings aktiviert) -->
+    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
+    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
+    <!-- Foreground-Service damit TTS auch bei minimierter App weiterlaeuft.
+         FOREGROUND_SERVICE_MICROPHONE ist Pflicht ab Android 14 wenn der
+         Service waehrend des Backgrounds aufs Mikro zugreift (Wake-Word,
+         Aufnahme im Gespraechsmodus). -->
+    <uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
+    <uses-permission android:name="android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK" />
+    <uses-permission android:name="android.permission.FOREGROUND_SERVICE_MICROPHONE" />
+    <uses-permission android:name="android.permission.POST_NOTIFICATIONS" />
 
     <application
       android:name=".MainApplication"
```
```diff
@@ -37,5 +48,10 @@
           android:name="android.support.FILE_PROVIDER_PATHS"
           android:resource="@xml/file_paths" />
       </provider>
+
+      <service
+        android:name=".AriaPlaybackService"
+        android:exported="false"
+        android:foregroundServiceType="mediaPlayback|microphone" />
     </application>
 </manifest>
```
```diff
@@ -7,7 +7,7 @@ import com.facebook.react.uimanager.ViewManager
 
 class ApkInstallerPackage : ReactPackage {
     override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
-        return listOf(ApkInstallerModule(reactContext))
+        return listOf(ApkInstallerModule(reactContext), FileOpenerModule(reactContext))
     }
 
     override fun createViewManagers(reactContext: ReactApplicationContext): List<ViewManager<*, *>> {
```
@@ -0,0 +1,108 @@

```kotlin
package com.ariacockpit

import android.app.Notification
import android.app.NotificationChannel
import android.app.NotificationManager
import android.app.PendingIntent
import android.app.Service
import android.content.Intent
import android.os.Build
import android.os.IBinder
import android.util.Log
import androidx.core.app.NotificationCompat

/**
 * Foreground-Service der den App-Prozess waehrend TTS-Wiedergabe am Leben
 * haelt — Android killt sonst den Prozess sobald die App im Hintergrund ist
 * und ARIA verstummt mitten im Satz.
 *
 * Notification ist persistent (ongoing) waehrend der Service laeuft.
 * Tap auf die Notification bringt MainActivity zurueck nach vorne.
 *
 * foregroundServiceType="mediaPlayback" ist Pflicht ab Android 14, sonst
 * wirft startForeground() eine SecurityException.
 */
class AriaPlaybackService : Service() {
    companion object {
        private const val TAG = "AriaPlaybackService"
        private const val CHANNEL_ID = "aria_playback"
        private const val NOTIFICATION_ID = 1042
        const val EXTRA_REASON = "reason" // "tts" | "wake" | "rec" | ""
    }

    private var currentReason: String = ""

    override fun onCreate() {
        super.onCreate()
        ensureNotificationChannel()
    }

    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        val reason = intent?.getStringExtra(EXTRA_REASON) ?: ""
        currentReason = reason
        Log.i(TAG, "Foreground-Service start/update (reason=$reason)")
        try {
            startForeground(NOTIFICATION_ID, buildNotification(reason))
        } catch (e: Exception) {
            Log.e(TAG, "startForeground fehlgeschlagen", e)
            stopSelf()
        }
        // START_NOT_STICKY: wenn Android den Service killt, NICHT automatisch
        // wieder starten — die App entscheidet wann der Service noetig ist.
        return START_NOT_STICKY
    }

    override fun onDestroy() {
        Log.i(TAG, "Foreground-Service gestoppt")
        super.onDestroy()
    }

    override fun onBind(intent: Intent?): IBinder? = null

    private fun ensureNotificationChannel() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            val nm = getSystemService(NotificationManager::class.java) ?: return
            if (nm.getNotificationChannel(CHANNEL_ID) == null) {
                val channel = NotificationChannel(
                    CHANNEL_ID,
                    "ARIA Audio-Wiedergabe",
                    NotificationManager.IMPORTANCE_LOW,
                ).apply {
                    description = "Notification waehrend ARIA spricht (haelt die App im Hintergrund am Leben)"
                    setShowBadge(false)
                }
                nm.createNotificationChannel(channel)
            }
        }
    }

    private fun buildNotification(reason: String): Notification {
        val launchIntent = Intent(this, MainActivity::class.java).apply {
            flags = Intent.FLAG_ACTIVITY_NEW_TASK or Intent.FLAG_ACTIVITY_CLEAR_TOP
        }
        val pendingFlags = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M)
            PendingIntent.FLAG_IMMUTABLE or PendingIntent.FLAG_UPDATE_CURRENT
        else
            PendingIntent.FLAG_UPDATE_CURRENT
        val pendingIntent = PendingIntent.getActivity(this, 0, launchIntent, pendingFlags)

        val (title, body) = when (reason) {
            "tts" -> "ARIA spricht" to "Antwort wird abgespielt — antippen oeffnet die App"
            "rec" -> "ARIA hoert zu" to "Sprachaufnahme laeuft — antippen oeffnet die App"
            "wake" -> "ARIA bereit" to "Wake-Word lauscht passiv — antippen oeffnet die App"
            else -> "ARIA aktiv" to "Hintergrund-Modus — antippen oeffnet die App"
        }

        return NotificationCompat.Builder(this, CHANNEL_ID)
            .setContentTitle(title)
            .setContentText(body)
            .setSmallIcon(R.mipmap.ic_launcher)
            .setContentIntent(pendingIntent)
            .setOngoing(true)
            .setShowWhen(false)
            .setPriority(NotificationCompat.PRIORITY_LOW)
            .setCategory(NotificationCompat.CATEGORY_SERVICE)
            .setVisibility(NotificationCompat.VISIBILITY_PUBLIC)
            .build()
    }
}
```
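The `buildNotification` when-block above maps the service start reason to a notification title/body. Mirrored as a small TypeScript table for reference; this is purely illustrative and the Kotlin code remains the authoritative mapping:

```typescript
// Mirrors the reason -> (title, body) mapping from AriaPlaybackService's
// buildNotification(); strings copied verbatim from the Kotlin when-block.
function notificationText(reason: string): { title: string; body: string } {
  switch (reason) {
    case "tts":
      return { title: "ARIA spricht", body: "Antwort wird abgespielt — antippen oeffnet die App" };
    case "rec":
      return { title: "ARIA hoert zu", body: "Sprachaufnahme laeuft — antippen oeffnet die App" };
    case "wake":
      return { title: "ARIA bereit", body: "Wake-Word lauscht passiv — antippen oeffnet die App" };
    default:
      return { title: "ARIA aktiv", body: "Hintergrund-Modus — antippen oeffnet die App" };
  }
}
```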
```diff
@@ -5,26 +5,71 @@ import android.media.AudioAttributes
 import android.media.AudioFocusRequest
 import android.media.AudioManager
 import android.os.Build
+import android.util.Log
+import com.facebook.react.bridge.Arguments
 import com.facebook.react.bridge.Promise
 import com.facebook.react.bridge.ReactApplicationContext
 import com.facebook.react.bridge.ReactContextBaseJavaModule
 import com.facebook.react.bridge.ReactMethod
+import com.facebook.react.modules.core.DeviceEventManagerModule
 
 /**
- * Steuert Audio-Focus fuer Ducking/Muten anderer Apps.
+ * Steuert Audio-Focus fuer Ducking/Muten anderer Apps + emittiert Loss-Events
+ * an JS damit ARIA bei VoIP-Anrufen (WhatsApp/Signal/Discord/...) aufhoert
+ * zu sprechen — diese Anrufe gehen nicht ueber TelephonyManager, sondern
+ * requestn AudioFocus_GAIN_TRANSIENT_EXCLUSIVE was wir hier mitbekommen.
  *
  * - requestDuck()      → andere Apps werden leiser (ARIA spricht TTS)
  * - requestExclusive() → andere Apps werden pausiert (Mikrofon-Aufnahme)
  * - release()          → Focus abgeben, andere Apps duerfen wieder
+ *
+ * Events:
+ * - "AudioFocusChanged" mit type:
+ *     "loss"           — endgueltiger Verlust (Anruf, andere App permanent)
+ *     "loss_transient" — vorruebergehender Verlust (kurze Unterbrechung)
+ *     "gain"           — Fokus zurueck
  */
 class AudioFocusModule(reactContext: ReactApplicationContext) : ReactContextBaseJavaModule(reactContext) {
     override fun getName() = "AudioFocus"
 
+    companion object { private const val TAG = "AudioFocus" }
+
     private var currentRequest: AudioFocusRequest? = null
 
     private fun audioManager(): AudioManager? =
         reactApplicationContext.getSystemService(Context.AUDIO_SERVICE) as? AudioManager
 
+    private fun emitFocusChange(type: String) {
+        try {
+            val params = Arguments.createMap().apply { putString("type", type) }
+            reactApplicationContext.getJSModule(DeviceEventManagerModule.RCTDeviceEventEmitter::class.java)
+                .emit("AudioFocusChanged", params)
+        } catch (e: Exception) {
+            Log.w(TAG, "emit failed: ${e.message}")
+        }
+    }
+
+    private val focusListener = AudioManager.OnAudioFocusChangeListener { focusChange ->
+        when (focusChange) {
+            AudioManager.AUDIOFOCUS_LOSS -> {
+                Log.i(TAG, "AUDIOFOCUS_LOSS (z.B. Anruf, anderer Player permanent)")
+                emitFocusChange("loss")
+            }
+            AudioManager.AUDIOFOCUS_LOSS_TRANSIENT -> {
+                Log.i(TAG, "AUDIOFOCUS_LOSS_TRANSIENT (kurze Unterbrechung)")
+                emitFocusChange("loss_transient")
+            }
+            AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK -> {
+                // Notification-Sound o.ae. — wir ignorieren das, ARIA macht weiter
+                Log.d(TAG, "AUDIOFOCUS_LOSS_CAN_DUCK ignoriert")
+            }
+            AudioManager.AUDIOFOCUS_GAIN -> {
+                Log.i(TAG, "AUDIOFOCUS_GAIN")
+                emitFocusChange("gain")
+            }
+        }
+    }
+
     private fun requestFocus(durationHint: Int, usage: Int, promise: Promise) {
         val am = audioManager()
         if (am == null) {
@@ -41,13 +86,13 @@ class AudioFocusModule(reactContext: ReactApplicationContext) : ReactContextBase
                 .build()
             val req = AudioFocusRequest.Builder(durationHint)
                 .setAudioAttributes(attrs)
-                .setOnAudioFocusChangeListener { /* kein Callback noetig */ }
+                .setOnAudioFocusChangeListener(focusListener)
                 .build()
             currentRequest = req
             am.requestAudioFocus(req)
         } else {
             @Suppress("DEPRECATION")
-            am.requestAudioFocus(null, AudioManager.STREAM_MUSIC, durationHint)
+            am.requestAudioFocus(focusListener, AudioManager.STREAM_MUSIC, durationHint)
         }
 
         promise.resolve(result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED)
@@ -86,14 +131,82 @@ class AudioFocusModule(reactContext: ReactApplicationContext) : ReactContextBase
         promise.resolve(true)
     }
 
+    /** Den USAGE_MEDIA-Focus-Stack im System aufmischen, damit Spotify/YouTube
+     * resumen wenn ein anderer Player (z.B. react-native-sound) seinen Focus
+     * nicht ordnungsgemaess released hat. Strategie: kurz selbst USAGE_MEDIA
+     * GAIN beanspruchen — das System invalidiert dabei den haengenden Stack-
+     * Eintrag des anderen Players — und sofort wieder abandonen. Spotify
+     * bekommt den Focus-Gain und resumed.
+     *
+     * Workaround fuer das react-native-sound-Bug: Sound.stop()/release()
+     * laesst den AudioFocusRequest haengen.
+     */
+    @ReactMethod
+    fun kickReleaseMedia(promise: Promise) {
+        val am = audioManager()
+        if (am == null) {
+            promise.resolve(false)
+            return
+        }
+        // Async laufen lassen — wir wollen einen request, Pause, dann abandon.
+        // Ohne Pause merkt das System (und damit Spotify) die kurze Owner-
+        // Wechsel oft gar nicht. 250ms reicht erfahrungsgemaess fuer den
+        // Focus-Stack-Refresh.
+        Thread {
+            try {
+                if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
+                    val attrs = AudioAttributes.Builder()
+                        .setUsage(AudioAttributes.USAGE_MEDIA)
+                        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
+                        .build()
+                    val kickListener = AudioManager.OnAudioFocusChangeListener { /* ignorieren */ }
+                    val kickReq = AudioFocusRequest.Builder(AudioManager.AUDIOFOCUS_GAIN)
+                        .setAudioAttributes(attrs)
+                        .setOnAudioFocusChangeListener(kickListener)
+                        .build()
+                    am.requestAudioFocus(kickReq)
+                    Thread.sleep(250)
+                    am.abandonAudioFocusRequest(kickReq)
+                } else {
+                    val kickListener = AudioManager.OnAudioFocusChangeListener { /* ignorieren */ }
+                    @Suppress("DEPRECATION")
+                    am.requestAudioFocus(kickListener, AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN)
+                    Thread.sleep(250)
+                    @Suppress("DEPRECATION")
+                    am.abandonAudioFocus(kickListener)
+                }
+                Log.i(TAG, "kickReleaseMedia: USAGE_MEDIA-Stack aufgemischt (250ms Pause)")
+            } catch (e: Exception) {
+                Log.w(TAG, "kickReleaseMedia failed: ${e.message}")
+            }
+        }.start()
+        promise.resolve(true)
+    }
+
     private fun release() {
         val am = audioManager() ?: return
         if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
             currentRequest?.let { am.abandonAudioFocusRequest(it) }
         } else {
             @Suppress("DEPRECATION")
-            am.abandonAudioFocus(null)
+            am.abandonAudioFocus(focusListener)
         }
         currentRequest = null
     }
+
+    /** Aktueller Audio-Mode: NORMAL=0, IN_CALL=2, IN_COMMUNICATION=3, CALL_SCREENING=4.
+     * IN_COMMUNICATION ist der typische VoIP-Anruf-Mode (WhatsApp, Signal, etc.) —
+     * kann gepollt werden um zu erkennen wann der Anruf vorbei ist (zurueck NORMAL). */
+    @ReactMethod
+    fun getMode(promise: Promise) {
+        val am = audioManager()
+        if (am == null) {
+            promise.resolve(0)
+            return
+        }
+        promise.resolve(am.mode)
+    }
+
+    @ReactMethod fun addListener(eventName: String) {}
+    @ReactMethod fun removeListeners(count: Int) {}
 }
```
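On the JS side, the `AudioFocusChanged` events and the `getMode()` poll boil down to two pure mappings. A sketch using Android's documented constant values (AUDIOFOCUS_GAIN = 1, AUDIOFOCUS_LOSS = -1, AUDIOFOCUS_LOSS_TRANSIENT = -2, AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK = -3; audio modes MODE_NORMAL = 0, MODE_IN_CALL = 2, MODE_IN_COMMUNICATION = 3); the helper names are assumptions:

```typescript
// Mirrors the Kotlin focusListener mapping and the getMode() polling idea.
// Constant values match the Android AudioManager documentation.

const AUDIOFOCUS_GAIN = 1;
const AUDIOFOCUS_LOSS = -1;
const AUDIOFOCUS_LOSS_TRANSIENT = -2;
const AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK = -3;

type FocusEvent = "gain" | "loss" | "loss_transient" | null;

function focusChangeToEvent(code: number): FocusEvent {
  switch (code) {
    case AUDIOFOCUS_GAIN:
      return "gain";
    case AUDIOFOCUS_LOSS:
      return "loss";
    case AUDIOFOCUS_LOSS_TRANSIENT:
      return "loss_transient";
    case AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:
      return null; // notification blip; ARIA keeps talking
    default:
      return null;
  }
}

// Audio modes: 0=NORMAL, 2=IN_CALL (classic), 3=IN_COMMUNICATION (VoIP).
// Polling this until it returns to NORMAL detects the end of a VoIP call.
function isCallActive(mode: number): boolean {
  return mode === 2 || mode === 3;
}
```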
@@ -0,0 +1,59 @@

```kotlin
package com.ariacockpit

import android.content.Intent
import android.os.Build
import android.util.Log
import com.facebook.react.bridge.Promise
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod

/**
 * RN-Bridge fuer den AriaPlaybackService.
 *
 * Wird vom JS waehrend einer TTS-Wiedergabe gestartet damit Android den
 * App-Prozess nicht killt wenn die App im Hintergrund ist (= ARIA spricht
 * weiter, auch wenn Stefan die App minimiert hat).
 *
 * Service stoppt entweder explizit per stop() oder wird von Android
 * mitgekillt wenn der Prozess weg ist (was bei Foreground-Service nur
 * passiert wenn der User die App force-stopped).
 */
class BackgroundAudioModule(reactContext: ReactApplicationContext) : ReactContextBaseJavaModule(reactContext) {
    override fun getName() = "BackgroundAudio"

    companion object { private const val TAG = "BackgroundAudio" }

    @ReactMethod
    fun start(reason: String, promise: Promise) {
        try {
            val ctx = reactApplicationContext
            val intent = Intent(ctx, AriaPlaybackService::class.java)
            intent.putExtra(AriaPlaybackService.EXTRA_REASON, reason ?: "")
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
                ctx.startForegroundService(intent)
            } else {
                ctx.startService(intent)
            }
            promise.resolve(true)
        } catch (e: Exception) {
            Log.w(TAG, "start fehlgeschlagen: ${e.message}")
            promise.reject("START_FAILED", e.message ?: "Unbekannter Fehler", e)
        }
    }

    @ReactMethod
    fun stop(promise: Promise) {
        try {
            val ctx = reactApplicationContext
            ctx.stopService(Intent(ctx, AriaPlaybackService::class.java))
            promise.resolve(true)
        } catch (e: Exception) {
            Log.w(TAG, "stop fehlgeschlagen: ${e.message}")
            promise.reject("STOP_FAILED", e.message ?: "Unbekannter Fehler", e)
        }
    }

    @ReactMethod fun addListener(eventName: String) {}
    @ReactMethod fun removeListeners(count: Int) {}
}
```
@@ -0,0 +1,16 @@
+package com.ariacockpit
+
+import com.facebook.react.ReactPackage
+import com.facebook.react.bridge.NativeModule
+import com.facebook.react.bridge.ReactApplicationContext
+import com.facebook.react.uimanager.ViewManager
+
+class BackgroundAudioPackage : ReactPackage {
+  override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
+    return listOf(BackgroundAudioModule(reactContext))
+  }
+
+  override fun createViewManagers(reactContext: ReactApplicationContext): List<ViewManager<*, *>> {
+    return emptyList()
+  }
+}
@@ -0,0 +1,55 @@
+package com.ariacockpit
+
+import android.content.Intent
+import android.net.Uri
+import android.os.Build
+import androidx.core.content.FileProvider
+import com.facebook.react.bridge.Promise
+import com.facebook.react.bridge.ReactApplicationContext
+import com.facebook.react.bridge.ReactContextBaseJavaModule
+import com.facebook.react.bridge.ReactMethod
+import java.io.File
+
+/**
+ * Opens an arbitrary file (PDF, image, Office doc, ...) with the app the
+ * user picks via the Android intent chooser. Uses FileProvider so that
+ * Android 7+ (content:// instead of file://) is allowed to read the URI.
+ *
+ * The MIME type is determined by the caller; the app selection depends on
+ * it (PDF goes to a PDF viewer, image/jpeg to the gallery, etc.).
+ */
+class FileOpenerModule(reactContext: ReactApplicationContext) : ReactContextBaseJavaModule(reactContext) {
+  override fun getName() = "FileOpener"
+
+  @ReactMethod
+  fun open(filePath: String, mimeType: String, promise: Promise) {
+    try {
+      val cleanPath = filePath.removePrefix("file://")
+      val file = File(cleanPath)
+      if (!file.exists()) {
+        promise.reject("FILE_NOT_FOUND", "File not found: $cleanPath")
+        return
+      }
+      val context = reactApplicationContext
+      val uri: Uri = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
+        FileProvider.getUriForFile(context, "${context.packageName}.fileprovider", file)
+      } else {
+        Uri.fromFile(file)
+      }
+      val safeMime = if (mimeType.isBlank()) "application/octet-stream" else mimeType
+      val intent = Intent(Intent.ACTION_VIEW).apply {
+        setDataAndType(uri, safeMime)
+        addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
+        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
+      }
+      // The chooser shows the Android picker if several apps can open the MIME type.
+      val chooser = Intent.createChooser(intent, "Open with").apply {
+        addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
+      }
+      context.startActivity(chooser)
+      promise.resolve(true)
+    } catch (e: Exception) {
+      promise.reject("OPEN_ERROR", e.message, e)
+    }
+  }
+}
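The path and MIME normalization in the hunk above is simple enough to check in isolation. A minimal TypeScript sketch (the helper name `normalizeOpenRequest` is mine, not from the repo): strip a `file://` prefix and fall back to `application/octet-stream` for a blank MIME type.

```typescript
// Hypothetical helper mirroring the module's normalization; not repo code.
function normalizeOpenRequest(filePath: string, mimeType: string): { path: string; mime: string } {
  // Strip a file:// URI prefix so java.io.File-style paths work.
  const path = filePath.startsWith('file://') ? filePath.slice('file://'.length) : filePath;
  // Blank MIME type falls back to the generic binary type.
  const mime = mimeType.trim() === '' ? 'application/octet-stream' : mimeType;
  return { path, mime };
}
```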
@@ -23,6 +23,7 @@ class MainApplication : Application(), ReactApplication {
       add(PcmStreamPlayerPackage())
       add(OpenWakeWordPackage())
       add(PhoneCallPackage())
+      add(BackgroundAudioPackage())
     }

   override fun getJSMainModuleName(): String = "index"
@@ -4,12 +4,15 @@ import android.media.AudioAttributes
 import android.media.AudioFormat
 import android.media.AudioManager
 import android.media.AudioTrack
+import android.os.Build
 import android.util.Base64
 import android.util.Log
+import com.facebook.react.bridge.Arguments
 import com.facebook.react.bridge.Promise
 import com.facebook.react.bridge.ReactApplicationContext
 import com.facebook.react.bridge.ReactContextBaseJavaModule
 import com.facebook.react.bridge.ReactMethod
+import com.facebook.react.modules.core.DeviceEventManagerModule
 import java.util.concurrent.LinkedBlockingQueue

 /**
@@ -76,9 +79,12 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
         val encoding = AudioFormat.ENCODING_PCM_16BIT
         val minBuf = AudioTrack.getMinBufferSize(sampleRate, channelConfig, encoding)
         val bytesPerSecond = sampleRate * channels * 2 // 16-bit = 2 bytes
-        // Buffer must hold at least PREROLL plus some headroom.
         val prerollTarget = (bytesPerSecond * prerollSec).toInt()
-        val bufferSize = (minBuf * 32).coerceAtLeast(prerollTarget * 2)
+        // Buffer size decoupled from the preroll: a fixed ~4s buffer. On a
+        // OnePlus A12 with USAGE_ASSISTANT, AudioTrack only starts once ~3s
+        // of data are buffered. We pad short texts to 3s before play() (see
+        // the block after mainLoop); the buffer needs ~1s of headroom
+        // because write() blocks.
+        val bufferSize = (bytesPerSecond * 4).coerceAtLeast(minBuf * 8)
        prerollBytes = prerollTarget
        bytesBuffered = 0
        playbackStarted = false
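The buffer arithmetic in the hunk above can be checked standalone. A TypeScript sketch (helper names are mine, not from the repo): 16-bit PCM means 2 bytes per sample, and the track buffer is a fixed 4 seconds, clamped to at least 8x the platform minimum.

```typescript
// Hypothetical helpers restating the diff's arithmetic; not repo code.
function pcmBytesPerSecond(sampleRate: number, channels: number): number {
  return sampleRate * channels * 2; // 16-bit PCM = 2 bytes per sample
}

function trackBufferSize(sampleRate: number, channels: number, minBuf: number): number {
  // Fixed ~4s buffer, but never below 8x the platform minimum buffer size.
  return Math.max(pcmBytesPerSecond(sampleRate, channels) * 4, minBuf * 8);
}
```

For 24 kHz mono TTS audio this yields 48000 bytes/s and a 192000-byte buffer unless the platform minimum dominates.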
@@ -102,7 +108,20 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
            .setTransferMode(AudioTrack.MODE_STREAM)
            .build()

-        // Create the AudioTrack; play() is only called once the pre-roll is reached.
+        // Lower the start threshold: the default is bufferSize/2 (= 2s with
+        // a 4s buffer), so AudioTrack would not start before 2s are buffered.
+        // With short TTS replies (3 words ~ 1.4s) the position then stays at 0.
+        // 0.1s is enough for AudioTrack to start right away with the first chunk.
+        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
+            try {
+                val startFrames = (sampleRate / 10).coerceAtLeast(1) // 100ms
+                newTrack.setStartThresholdInFrames(startFrames)
+                Log.i(TAG, "Start threshold set: ${startFrames} frames (~100ms)")
+            } catch (e: Exception) {
+                Log.w(TAG, "setStartThresholdInFrames failed: ${e.message}")
+            }
+        }
+
        track = newTrack
        queue.clear()
        writerShouldStop = false
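The 100ms threshold above is expressed in frames. A one-line TypeScript restatement (function name is mine, not from the repo):

```typescript
// Hypothetical helper: ~100ms of audio expressed in frames, clamped to >= 1.
function startThresholdFrames(sampleRate: number): number {
  return Math.max(1, Math.floor(sampleRate / 10));
}
```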
@@ -137,10 +156,12 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
                    Log.w(TAG, "immediate play() failed: ${e.message}")
                }
            }
-            // Idle cutoff: if endRequested did NOT arrive but nothing comes
+            // Idle cutoff: if endRequested did NOT arrive and for a long time nothing
             // comes in anymore, we abort (bridge crash, lost final).
+            // 120s so that long F5-TTS render pauses between sentences (e.g.
+            // on a model switch or a cold GPU) do not tear the stream down.
            var idleMs = 0L
-            val maxIdleMs = 30_000L
+            val maxIdleMs = 120_000L
            // Target buffer fill level: below this watermark we feed in
            // silence so the AudioTrack does not underrun while the
            // bridge renders the next sentence. Spotify/YouTube react
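The idle cutoff above is accumulated over empty queue polls. A small TypeScript sketch of the bookkeeping (names are mine, not from the repo): the writer polls with a 50ms timeout and gives up after `maxIdleMs` of consecutive empty polls.

```typescript
// Hypothetical helper: how many consecutive empty 50ms polls until the
// idle cutoff fires.
function pollsUntilIdleCutoff(maxIdleMs: number, pollMs: number): number {
  return Math.ceil(maxIdleMs / pollMs);
}
```

With the new 120s limit that is 2400 empty polls instead of 600 at the old 30s limit.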
@@ -152,15 +173,11 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
                val data = queue.poll(50, java.util.concurrent.TimeUnit.MILLISECONDS)
                if (data == null) {
                    if (endRequested) {
-                        // If we end before the pre-roll (short text): play anyway
+                        // If play() never ran at all (stream without any data
+                        // whatsoever, a very rare edge case): kick it off now
+                        // so the finally{} wait does not block forever.
                        if (!playbackStarted) {
-                            try {
-                                t.play()
-                                playbackStarted = true
-                                Log.i(TAG, "Playback started BEFORE pre-roll (short text, ${bytesBuffered}B buffered)")
-                            } catch (e: Exception) {
-                                Log.w(TAG, "play() fallback failed: ${e.message}")
-                            }
+                            try { t.play(); playbackStarted = true } catch (_: Exception) {}
                        }
                        break@mainLoop
                    }
@@ -192,12 +209,16 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
                }
                idleMs = 0L

-                // Pre-roll check: call play() only once enough is buffered
-                if (!playbackStarted && bytesBuffered + data.size >= prerollBytes) {
+                // Call play() on the VERY FIRST data chunk, no matter how
+                // little data there is. Otherwise AudioTrack stalls on the
+                // OnePlus A12 when play() is only called after the buffer is
+                // completely filled. Building up the pre-roll "reserve" then
+                // happens while the track is already playing; the underrun
+                // guard feeds silence if needed.
+                if (!playbackStarted) {
                    try {
                        t.play()
                        playbackStarted = true
-                        Log.i(TAG, "Playback started after pre-roll ${bytesBuffered + data.size} bytes")
+                        Log.i(TAG, "Playback started on 1st chunk (${bytesBuffered}B leading + ${data.size}B data)")
                    } catch (e: Exception) {
                        Log.w(TAG, "play() failed: ${e.message}")
                    }
@@ -233,12 +254,21 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
                val totalFrames = (bytesBuffered / streamBytesPerFrame).toInt()
                var lastPos = -1
                var stalledCount = 0
+                var retried = false
                while (!writerShouldStop) {
                    val pos = t.playbackHeadPosition
                    if (pos >= totalFrames) break
-                    // Safety: if the position stops advancing for 2s, the AudioTrack hung
                    if (pos == lastPos) {
                        stalledCount++
+                        // After 500ms of standstill: AudioTrack quirk on some
+                        // devices (OnePlus A12); poke play() once more.
+                        if (stalledCount == 10 && pos == 0 && !retried) {
+                            retried = true
+                            Log.w(TAG, "playback did not start, retrying play()")
+                            try { t.play() } catch (e: Exception) {
+                                Log.w(TAG, "retry play() failed: ${e.message}")
+                            }
+                        }
                        if (stalledCount > 40) {
                            Log.w(TAG, "playback stalled at $pos/$totalFrames — give up")
                            break
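The stall logic above polls the head position roughly every 50ms: 10 identical readings (~500ms) at position 0 trigger a single retry, more than 40 (~2s) give up. A TypeScript sketch of just the decision (names are mine, not from the repo):

```typescript
// Hypothetical restatement of the stall decision from the diff; not repo code.
type StallAction = 'none' | 'retry' | 'giveUp';

function stallAction(stalledCount: number, pos: number, retried: boolean): StallAction {
  if (stalledCount > 40) return 'giveUp';            // ~2s without progress
  if (stalledCount === 10 && pos === 0 && !retried)  // ~500ms stuck at frame 0
    return 'retry';
  return 'none';
}
```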
@@ -253,6 +283,17 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
                } catch (_: Exception) {}
                try { t.stop() } catch (_: Exception) {}
                try { t.release() } catch (_: Exception) {}
+                // RN event: the AudioTrack is truly done (all samples played).
+                // JS releases the AudioFocus only NOW; otherwise Spotify starts
+                // playing at the end() cap while ARIA is still talking (15s+
+                // depending on the buffer).
+                try {
+                    val params = Arguments.createMap()
+                    reactApplicationContext
+                        .getJSModule(DeviceEventManagerModule.RCTDeviceEventEmitter::class.java)
+                        .emit("PcmPlaybackFinished", params)
+                } catch (e: Exception) {
+                    Log.w(TAG, "PlaybackFinished emit failed: ${e.message}")
+                }
            }
        }, "PcmStreamWriter").apply { start() }
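On the JS side this pattern means: subscribe to `PcmPlaybackFinished` and release audio focus only then. The repo's JS wiring is not shown in this diff, so here is a framework-free TypeScript sketch (`TinyEmitter` and `wireFocusRelease` are stand-ins for react-native's `DeviceEventEmitter` and whatever the app actually does):

```typescript
// Hypothetical, dependency-free sketch; the real app would use
// DeviceEventEmitter.addListener('PcmPlaybackFinished', ...).
type Listener = () => void;

class TinyEmitter {
  private listeners = new Map<string, Set<Listener>>();
  addListener(event: string, fn: Listener): () => void {
    if (!this.listeners.has(event)) this.listeners.set(event, new Set());
    this.listeners.get(event)!.add(fn);
    return () => { this.listeners.get(event)!.delete(fn); };
  }
  emit(event: string): void {
    this.listeners.get(event)?.forEach(fn => fn());
  }
}

// Release audio focus only when the native side reports playback is done.
function wireFocusRelease(emitter: TinyEmitter, releaseFocus: Listener): () => void {
  return emitter.addListener('PcmPlaybackFinished', releaseFocus);
}
```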
@@ -309,6 +350,9 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
        promise.resolve(true)
    }

+    @ReactMethod fun addListener(eventName: String) {}
+    @ReactMethod fun removeListeners(count: Int) {}
+
    private fun stopInternal() {
        writerShouldStop = true
        endRequested = true
@@ -1,4 +1,8 @@
 <?xml version="1.0" encoding="utf-8"?>
 <paths>
     <cache-path name="cache" path="." />
+    <files-path name="files" path="." />
+    <external-path name="external" path="." />
+    <external-files-path name="external_files" path="." />
+    <external-cache-path name="external_cache" path="." />
 </paths>
@@ -1,6 +1,6 @@
 {
   "name": "aria-cockpit",
-  "version": "0.0.7.7",
+  "version": "0.1.1.2",
   "private": true,
   "scripts": {
     "android": "react-native run-android",
@@ -10,31 +10,32 @@
     "build:apk": "cd android && ./gradlew assembleRelease"
   },
   "dependencies": {
+    "@react-native-async-storage/async-storage": "^1.21.0",
+    "@react-native-community/geolocation": "^3.2.1",
+    "@react-navigation/bottom-tabs": "^6.5.11",
+    "@react-navigation/native": "^6.1.9",
     "react": "18.2.0",
     "react-native": "0.73.4",
-    "@react-navigation/native": "^6.1.9",
-    "@react-navigation/bottom-tabs": "^6.5.11",
-    "react-native-screens": "3.27.0",
-    "react-native-safe-area-context": "^4.8.2",
+    "react-native-audio-recorder-player": "^3.6.7",
+    "react-native-camera-kit": "^13.0.0",
     "react-native-document-picker": "^9.1.1",
-    "react-native-sound": "^0.11.2",
-    "@react-native-community/geolocation": "^3.2.1",
+    "react-native-fs": "^2.20.0",
     "react-native-image-picker": "^7.1.0",
     "react-native-permissions": "^4.1.4",
-    "react-native-camera-kit": "^13.0.0",
-    "@react-native-async-storage/async-storage": "^1.21.0",
-    "react-native-fs": "^2.20.0",
-    "react-native-audio-recorder-player": "^3.6.7"
+    "react-native-safe-area-context": "^4.8.2",
+    "react-native-screens": "3.27.0",
+    "react-native-sound": "^0.11.2",
+    "react-native-svg": "^14.1.0"
   },
   "devDependencies": {
-    "typescript": "^5.3.3",
+    "@react-native/eslint-config": "^0.73.2",
+    "@react-native/metro-config": "^0.73.5",
+    "@react-native/typescript-config": "^0.73.1",
+    "@types/jest": "^29.5.11",
     "@types/react": "^18.2.48",
     "@types/react-native": "^0.73.0",
-    "@react-native/eslint-config": "^0.73.2",
-    "@react-native/typescript-config": "^0.73.1",
-    "@react-native/metro-config": "^0.73.5",
-    "metro-react-native-babel-preset": "^0.77.0",
     "jest": "^29.7.0",
-    "@types/jest": "^29.5.11"
+    "metro-react-native-babel-preset": "^0.77.0",
+    "typescript": "^5.3.3"
   }
 }
@@ -1,25 +1,87 @@
 /**
- * MessageText: selectable chat text with Android auto-linkification.
+ * MessageText: selectable chat text with Android auto-linkification,
+ * plus inline image rendering when the text contains image URLs.
  *
- * We use Android's dataDetectorType="all" (the system makes phone/URL/email
- * clickable automatically) and a single <Text selectable> without nested
- * <Text> with its own onPress. Nested Text with onPress caught the long-press
- * gesture, which broke select+copy.
+ * - Markdown syntax `` and plain `https://...image.png` are
+ *   recognized; the URL stays visible in the text (clickable via Linkify),
+ *   and the image is additionally rendered below it as <Image> or <SvgUri>.
+ * - We use Android's dataDetectorType="all" (the system makes phone/URL/email
+ *   clickable automatically) and a single <Text selectable> without nested
+ *   <Text> with its own onPress; nested Text with onPress caught the
+ *   long-press gesture, which broke select+copy.
  */

-import React from 'react';
-import { Text, TextStyle, StyleProp } from 'react-native';
+import React, { useEffect, useState } from 'react';
+import { View, Text, Image, TextStyle, StyleProp } from 'react-native';
+import { SvgUri } from 'react-native-svg';

 interface Props {
   text: string;
   style?: StyleProp<TextStyle>;
 }

-const MessageText: React.FC<Props> = ({ text, style }) => {
+// Image URL pattern: http(s)://... ending in a common image extension.
+const IMG_URL_RE = /https?:\/\/[^\s)<"']+\.(?:jpe?g|png|gif|webp|bmp|ico|svg)(?:\?[^\s)<"']*)?/gi;
+
+function extractImageUrls(text: string): string[] {
+  const urls = new Set<string>();
+  const matches = text.match(IMG_URL_RE);
+  if (matches) matches.forEach(u => urls.add(u));
+  return Array.from(urls);
+}
+
+const SVG_RE = /\.svg(?:\?|$)/i;
+
+/** Image with dynamic aspect ratio from the real image dimensions.
+ * SVGs are rendered via react-native-svg (no Image.getSize). */
+const InlineImage: React.FC<{ uri: string }> = ({ uri }) => {
+  const isSvg = SVG_RE.test(uri);
+  const [aspectRatio, setAspectRatio] = useState<number>(1);
+  const [failed, setFailed] = useState(false);
+  useEffect(() => {
+    if (isSvg) return; // Image.getSize does not work for SVG
+    let cancelled = false;
+    Image.getSize(
+      uri,
+      (w, h) => { if (!cancelled && w > 0 && h > 0) setAspectRatio(Math.max(0.5, Math.min(2.5, w / h))); },
+      () => { if (!cancelled) setFailed(true); },
+    );
+    return () => { cancelled = true; };
+  }, [uri, isSvg]);
+  if (failed) return null;
+  if (isSvg) {
+    return (
+      <View style={{ marginTop: 8, width: 260, height: 260, backgroundColor: '#0D0D1A', borderRadius: 8, alignItems: 'center', justifyContent: 'center' }}>
+        <SvgUri uri={uri} width="100%" height="100%" onError={() => setFailed(true)} />
+      </View>
+    );
+  }
   return (
-    <Text style={style} selectable dataDetectorType="all">
-      {text}
-    </Text>
+    <Image
+      source={{ uri }}
+      style={{ width: 260, aspectRatio, borderRadius: 8, marginTop: 8, backgroundColor: '#0D0D1A' }}
+      resizeMode="cover"
+      onError={() => setFailed(true)}
+    />
   );
 };
+
+const MessageText: React.FC<Props> = ({ text, style }) => {
+  const imageUrls = extractImageUrls(text || '');
+  if (imageUrls.length === 0) {
+    return (
+      <Text style={style} selectable dataDetectorType="all">
+        {text}
+      </Text>
+    );
+  }
+  return (
+    <View>
+      <Text style={style} selectable dataDetectorType="all">
+        {text}
+      </Text>
+      {imageUrls.map(u => <InlineImage key={u} uri={u} />)}
+    </View>
+  );
+};
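The URL extraction added above can be exercised standalone; the regex and the Set-based dedup below are taken directly from the diff.

```typescript
// Image URL pattern and extraction, as introduced in MessageText.tsx above.
const IMG_URL_RE = /https?:\/\/[^\s)<"']+\.(?:jpe?g|png|gif|webp|bmp|ico|svg)(?:\?[^\s)<"']*)?/gi;

function extractImageUrls(text: string): string[] {
  const urls = new Set<string>();        // Set deduplicates repeated URLs
  const matches = text.match(IMG_URL_RE);
  if (matches) matches.forEach(u => urls.add(u));
  return Array.from(urls);
}
```

Note that query strings are kept (so signed CDN URLs still load) and a repeated URL renders only one inline image.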
@@ -19,14 +19,21 @@ import {
   ScrollView,
   Modal,
   ToastAndroid,
+  AppState,
+  NativeModules,
 } from 'react-native';
 import AsyncStorage from '@react-native-async-storage/async-storage';
 import RNFS from 'react-native-fs';
+import { SvgUri } from 'react-native-svg';
 import rvs, { RVSMessage, ConnectionState } from '../services/rvs';
 import audioService from '../services/audio';
 import wakeWordService from '../services/wakeword';
 import phoneCallService from '../services/phoneCall';
 import { playWakeReadySound } from '../services/wakeReadySound';
+import {
+  acquireBackgroundAudio,
+  releaseBackgroundAudio,
+} from '../services/backgroundAudio';
 import updateService from '../services/updater';
 import VoiceButton from '../components/VoiceButton';
 import FileUpload, { FileData } from '../components/FileUpload';
@@ -75,6 +82,73 @@ const capMessages = (msgs: ChatMessage[]): ChatMessage[] =>
 const DEFAULT_ATTACHMENT_DIR = `${RNFS.DocumentDirectoryPath}/chat_attachments`;
 const STORAGE_PATH_KEY = 'aria_attachment_storage_path';

+const { FileOpener } = NativeModules as {
+  FileOpener?: { open: (filePath: string, mimeType: string) => Promise<boolean> };
+};
+
+/** Open a file with the Android intent picker (the system picks the app by MIME). */
+async function openFileWithIntent(filePath: string, mimeType: string): Promise<void> {
+  if (!FileOpener) {
+    ToastAndroid.show('FileOpener native module missing', ToastAndroid.SHORT);
+    return;
+  }
+  try {
+    await FileOpener.open(filePath, mimeType || 'application/octet-stream');
+  } catch (err: any) {
+    ToastAndroid.show(`Opening failed: ${err?.message || err}`, ToastAndroid.LONG);
+  }
+}
+
+/** Image preview in the chat bubble. Measures the real image dimensions via
+ * Image.getSize and sets aspectRatio dynamically, so the bubble adapts to
+ * the image (no more "sliver" for very wide or very tall images). */
+const CHAT_IMAGE_STYLE = {
+  width: 260,
+  borderRadius: 8,
+  marginBottom: 6,
+  backgroundColor: '#0D0D1A',
+} as const;
+const ChatImage: React.FC<{
+  uri: string;
+  onPress: () => void;
+  onError: () => void;
+}> = ({ uri, onPress, onError }) => {
+  const [aspectRatio, setAspectRatio] = useState<number>(4 / 3);
+  const isSvg = /\.svg(?:\?|$)/i.test(uri);
+  useEffect(() => {
+    if (isSvg) return; // SvgUri has no getSize
+    let cancelled = false;
+    Image.getSize(uri, (w, h) => {
+      if (!cancelled && w > 0 && h > 0) {
+        // Cap the aspect ratio so very long panorama images or tall
+        // screenshot strips do not blow up the bubble
+        const r = Math.max(0.5, Math.min(2.5, w / h));
+        setAspectRatio(r);
+      }
+    }, () => {});
+    return () => { cancelled = true; };
+  }, [uri, isSvg]);
+  if (isSvg) {
+    return (
+      <TouchableOpacity onPress={onPress} activeOpacity={0.8}>
+        <View style={[CHAT_IMAGE_STYLE, { height: 260, alignItems: 'center', justifyContent: 'center' }]}>
+          <SvgUri uri={uri} width="100%" height="100%" onError={onError} />
+        </View>
+      </TouchableOpacity>
+    );
+  }
+  return (
+    <TouchableOpacity onPress={onPress} activeOpacity={0.8}>
+      <Image
+        source={{ uri }}
+        style={[CHAT_IMAGE_STYLE, { aspectRatio }]}
+        resizeMode="cover"
+        onError={onError}
+      />
+    </TouchableOpacity>
+  );
+};
+
 async function getAttachmentDir(): Promise<string> {
   try {
     const saved = await AsyncStorage.getItem(STORAGE_PATH_KEY);
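The aspect-ratio cap used in ChatImage (and in InlineImage above) is a plain clamp. Factored out as a TypeScript helper for illustration (the function name is mine, not from the repo): ratios are confined to [0.5, 2.5] so extreme images still render as readable bubbles.

```typescript
// Hypothetical helper restating the diff's clamp; not repo code.
function clampAspectRatio(w: number, h: number): number {
  return Math.max(0.5, Math.min(2.5, w / h));
}
```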
@@ -135,6 +209,10 @@ const ChatScreen: React.FC = () => {

   const flatListRef = useRef<FlatList>(null);
   const messageIdCounter = useRef(0);
+  // Server paths the user clicked "open" for; on file_response the file is
+  // opened with the system intent right after saving (PDF viewer, gallery, etc.).
+  const autoOpenPaths = useRef<Set<string>>(new Set());

   // Generate a unique message ID
   const nextId = (): string => {
@@ -142,20 +220,24 @@ const ChatScreen: React.FC = () => {
     return `msg_${Date.now()}_${messageIdCounter.current}`;
   };

-  // Reload TTS settings on mount + on screen focus (so a settings toggle takes effect immediately)
+  // Reload TTS + GPS settings on mount + every 2s (so a settings toggle
+  // takes effect immediately, without a context or event system)
   useEffect(() => {
-    const loadTtsSettings = async () => {
+    const loadSettings = async () => {
       const enabled = await AsyncStorage.getItem('aria_tts_enabled');
       setTtsDeviceEnabled(enabled !== 'false'); // default true
       const muted = await AsyncStorage.getItem('aria_tts_muted');
-      setTtsMuted(muted === 'true'); // default false
+      const isMuted = muted === 'true';
+      setTtsMuted(isMuted); // default false
+      audioService.setMuted(isMuted); // sync the service-internal flag
       const voice = await AsyncStorage.getItem('aria_xtts_voice');
       localXttsVoiceRef.current = voice || '';
       ttsSpeedRef.current = await loadTtsSpeed();
+      const gps = await AsyncStorage.getItem('aria_gps_enabled');
+      setGpsEnabled(gps === 'true');
     };
-    loadTtsSettings();
-    // Poll every 2s to pick up settings changes (simple solution without Context)
-    const interval = setInterval(loadTtsSettings, 2000);
+    loadSettings();
+    const interval = setInterval(loadSettings, 2000);
     return () => clearInterval(interval);
   }, []);
@@ -171,6 +253,11 @@ const ChatScreen: React.FC = () => {
       // drops back to 'armed' or 'off', Spotify is allowed again.
       if (s === 'conversing') audioService.acquireConversationFocus();
       else audioService.releaseConversationFocus();
+      // Foreground-service slot 'wake' — as long as the ear is active at all
+      // (armed or conversing), keep the app process alive in the background
+      // so that mic listening + recording keep running.
+      if (s !== 'off') acquireBackgroundAudio('wake').catch(() => {});
+      else releaseBackgroundAudio('wake').catch(() => {});
     });
     return () => unsub();
   }, []);
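The slot names ('wake', 'rec', 'tts') used across these hunks suggest a set-based keepalive: the foreground service runs while any slot is held. Below is a minimal sketch of that idea; the real acquireBackgroundAudio/releaseBackgroundAudio implementation is not part of this diff, so the Set semantics here are an assumption.

```typescript
// Hypothetical sketch — the actual acquireBackgroundAudio/releaseBackgroundAudio
// native module is not shown in this diff.
type Slot = 'wake' | 'rec' | 'tts';

const activeSlots = new Set<Slot>();

// The foreground service should run while at least one slot is held.
function acquire(slot: Slot): boolean {
  activeSlots.add(slot);
  return activeSlots.size > 0; // service must be running now
}

function release(slot: Slot): boolean {
  activeSlots.delete(slot);
  return activeSlots.size > 0; // once false, the service may stop
}
```

With this shape, releasing 'wake' while a recording still holds 'rec' keeps the process alive, which is the behavior the comments in the hunk describe.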
@@ -182,6 +269,31 @@ const ChatScreen: React.FC = () => {
     return () => { phoneCallService.stop().catch(() => {}); };
   }, []);
 
+  // App resume: short wake-word cooldown — on the background→foreground
+  // switch there are often audio level spikes (AudioFocus switch, AudioTrack
+  // re-route) that openWakeWord would otherwise misread as the wake word.
+  useEffect(() => {
+    let lastState: string = AppState.currentState;
+    const sub = AppState.addEventListener('change', (next) => {
+      if (lastState !== 'active' && next === 'active') {
+        wakeWordService.setResumeCooldown(1500);
+      }
+      lastState = next;
+    });
+    return () => sub.remove();
+  }, []);
+
+  // Couple recording state to background-service slot 'rec' — so the mic may
+  // keep recording in the background too (otherwise Android kills the app
+  // process and the recording breaks off).
+  useEffect(() => {
+    const unsub = audioService.onStateChange((s) => {
+      if (s === 'recording') acquireBackgroundAudio('rec').catch(() => {});
+      else releaseBackgroundAudio('rec').catch(() => {});
+    });
+    return () => unsub();
+  }, []);
+
   // Keep ttsCanPlayRef up to date — the closure in onMessage below reads
   // through it instead of ttsDeviceEnabled/ttsMuted directly (otherwise stale).
   useEffect(() => {
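The hunk above calls wakeWordService.setResumeCooldown(1500); its internals are not shown. One plausible model is a simple deadline that detections are checked against (the names and logic below are assumptions, not the real service code):

```typescript
// Assumed internal model of a resume cooldown: store a deadline, and drop
// wake-word detections that arrive before it.
let cooldownUntil = 0;

function setResumeCooldown(ms: number, now: number = Date.now()): void {
  cooldownUntil = now + ms;
}

// Detections during the cooldown are ignored, suppressing the resume spikes.
function shouldIgnoreDetection(now: number = Date.now()): boolean {
  return now < cooldownUntil;
}
```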
@@ -192,11 +304,15 @@ const ChatScreen: React.FC = () => {
     setTtsMuted(prev => {
       const next = !prev;
       AsyncStorage.setItem('aria_tts_muted', String(next));
-      // On mute, immediately stop any running playback
-      if (next) audioService.stopPlayback();
+      // Update the ref synchronously — otherwise chunks in the same tick
+      // still get through with canPlay=true (race before the useEffect update).
+      ttsCanPlayRef.current = ttsDeviceEnabled && !next;
+      // Set the global mute flag in audioService — it also overrides
+      // payload.silent in handlePcmChunk and stops running playback.
+      audioService.setMuted(next);
       return next;
     });
-  }, []);
+  }, [ttsDeviceEnabled]);
 
   // Load the chat history from AsyncStorage
   const isInitialLoad = useRef(true);
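The hunk above moves the ref update into the state updater so it happens in the same tick as the toggle. A stripped-down model of the race it fixes; handlePcmChunk and the ref shape are simplified stand-ins for the real handlers:

```typescript
// Model of the race: a chunk handler reads a ref, so the toggle must flip
// that ref synchronously, not in a later useEffect pass.
const ttsCanPlayRef = { current: true };
const played: string[] = [];

function handlePcmChunk(chunk: string): void {
  if (ttsCanPlayRef.current) played.push(chunk);
}

function toggleMute(ttsDeviceEnabled: boolean, nextMuted: boolean): void {
  // Synchronous ref update — any chunk arriving after this call, even in
  // the same tick, already sees the new value.
  ttsCanPlayRef.current = ttsDeviceEnabled && !nextMuted;
}

handlePcmChunk('a');     // plays
toggleMute(true, true);  // mute
handlePcmChunk('b');     // dropped in the same tick
```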
@@ -267,11 +383,32 @@ const ChatScreen: React.FC = () => {
         return;
       }
 
+      // file_from_aria: ARIA produced a file → display it as an ARIA bubble
+      if (message.type === 'file_from_aria') {
+        const p = message.payload || {};
+        const ariaMsg: ChatMessage = {
+          id: nextId(),
+          sender: 'aria',
+          text: '',
+          timestamp: Date.now(),
+          attachments: [{
+            type: (typeof p.mimeType === 'string' && p.mimeType.startsWith('image/')) ? 'image' : 'file',
+            name: (p.name as string) || 'datei',
+            size: (p.size as number) || 0,
+            mimeType: (p.mimeType as string) || '',
+            serverPath: (p.serverPath as string) || '',
+          }],
+        };
+        setMessages(prev => capMessages([...prev, ariaMsg]));
+        return;
+      }
+
       // file_response: re-download from the server — store it locally
       if (message.type === 'file_response') {
         const reqId = (message.payload.requestId as string) || '';
         const b64 = (message.payload.base64 as string) || '';
         const serverPath = (message.payload.serverPath as string) || '';
+        const mimeType = (message.payload.mimeType as string) || '';
         if (b64 && reqId) {
           const fileName = (message.payload.name as string) || 'download';
           persistAttachment(b64, reqId, fileName).then(filePath => {
@@ -281,6 +418,11 @@ const ChatScreen: React.FC = () => {
                 a.serverPath === serverPath ? { ...a, uri: filePath } : a
               ),
             })));
+            // If the user explicitly wanted to open this file → intent picker
+            if (serverPath && autoOpenPaths.current.has(serverPath)) {
+              autoOpenPaths.current.delete(serverPath);
+              openFileWithIntent(filePath.replace(/^file:\/\//, ''), mimeType);
+            }
           }).catch(() => {});
         }
         return;
@@ -413,6 +555,8 @@ const ChatScreen: React.FC = () => {
         const activity = (message.payload.activity as string) || 'idle';
         const tool = (message.payload.tool as string) || '';
         setAgentActivity({ activity, tool });
+        // Spotify may keep playing while "ARIA is thinking/typing" — it only
+        // pauses when TTS starts (then _firePlaybackStarted acquires the focus).
       }
 
       // Voice config from diagnostic — sets the local app voice to the
@@ -535,6 +679,7 @@ const ChatScreen: React.FC = () => {
           audioRequestId,
           ...(location && { location }),
         });
+        scheduleStaleAudioCleanup(audioRequestId, result.durationMs);
         // resume() is triggered by onPlaybackFinished after ARIA's reply.
       } else {
         // No speech in the window → end the conversation (the ear turns off or
@@ -565,12 +710,16 @@ const ChatScreen: React.FC = () => {
 
     // TTS lifecycle: while ARIA is speaking and a wake word is available,
     // listen in parallel — the user can say "Computer" instead of tapping.
+    // PLUS: hold foreground-service slot 'tts' so Android does not kill the
+    // app process while the app is in the background.
     const unsubTtsStart = audioService.onPlaybackStarted(() => {
+      acquireBackgroundAudio('tts').catch(() => {});
       if (wakeWordService.isConversing() && wakeWordService.hasWakeWord()) {
         wakeWordService.startBargeListening().catch(() => {});
       }
     });
     const unsubTtsEnd = audioService.onPlaybackFinished(() => {
+      releaseBackgroundAudio('tts').catch(() => {});
       // Before the next recording: barge-listening off so the AudioRecorder
       // can grab the mic.
       wakeWordService.stopBargeListening().catch(() => {});
@@ -636,17 +785,23 @@ const ChatScreen: React.FC = () => {
 
   // Fetch the GPS position (optional)
   const getCurrentLocation = useCallback((): Promise<{ lat: number; lon: number } | null> => {
-    if (!gpsEnabled) return Promise.resolve(null);
+    if (!gpsEnabled) {
+      console.log('[GPS] gpsEnabled=false → kein Standort');
+      return Promise.resolve(null);
+    }
+
     return new Promise((resolve) => {
       Geolocation.getCurrentPosition(
         (position) => {
-          resolve({
+          const loc = {
             lat: position.coords.latitude,
             lon: position.coords.longitude,
-          });
+          };
+          console.log('[GPS] Position: lat=%s lon=%s', loc.lat, loc.lon);
+          resolve(loc);
         },
-        (_error) => {
+        (error) => {
+          console.warn('[GPS] getCurrentPosition Fehler:', error?.code, error?.message);
           resolve(null);
         },
         { enableHighAccuracy: false, timeout: 5000 },
@@ -656,6 +811,29 @@ const ChatScreen: React.FC = () => {
 
   // --- Send a message ---
 
+  // Clean up "processing" placeholders that never received an STT result
+  // (empty recording, wake-word echo, STT error, etc.). The timeout scales
+  // with the recording duration — Whisper on the Gamebox needs roughly
+  // real-time/5, plus bridge round trip + network. Formula: 60s buffer +
+  // 1x recording duration. A 5 min recording = 6 min wait, a 5 s recording
+  // = 65 s. Safe enough that slow STT runs are not cleaned up by accident.
+  const scheduleStaleAudioCleanup = useCallback((audioRequestId: string, recordingMs: number) => {
+    const timeoutMs = 60000 + recordingMs;
+    setTimeout(() => {
+      setMessages(prev => {
+        const idx = prev.findIndex(m =>
+          m.audioRequestId === audioRequestId &&
+          m.text.includes('Spracheingabe wird verarbeitet')
+        );
+        if (idx < 0) return prev;
+        console.log('[Chat] Sprachnachricht ohne STT-Result nach %dms entfernt: %s',
+          timeoutMs, audioRequestId);
+        ToastAndroid.show('Sprachnachricht nicht erkannt — entfernt', ToastAndroid.SHORT);
+        return prev.filter((_, i) => i !== idx);
+      });
+    }, timeoutMs);
+  }, []);
+
   const sendTextMessage = useCallback(async () => {
     const text = inputText.trim();
 
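The timeout formula stated in the comment above (60 s buffer plus one recording duration) as a checkable function:

```typescript
// Timeout formula from the hunk above: 60s buffer + 1x recording duration.
// A 5 min recording waits 6 min; a 5 s recording waits 65 s.
function staleTimeoutMs(recordingMs: number): number {
  return 60_000 + recordingMs;
}
```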
@@ -743,7 +921,19 @@ const ChatScreen: React.FC = () => {
       audioRequestId,
       ...(location && { location }),
     });
-  }, [getCurrentLocation, interruptAriaIfBusy]);
+    scheduleStaleAudioCleanup(audioRequestId, result.durationMs);
+
+    // Manual mic stop during a wake-word conversation: the user explicitly
+    // pressed the button → they do not want the automatic multi-turn
+    // mode, but passive wake-word listening again after ARIA's reply.
+    // With VAD auto-stop (wake-word path) this runs via the silence callback
+    // and ends with resume() — the manual stop here is the "I'm done"
+    // button.
+    if (wakeWordService.isConversing()) {
+      console.log('[Chat] Manueller Stop in Konversation → endConversation, zurueck zu armed');
+      await wakeWordService.endConversation();
+    }
+  }, [getCurrentLocation, interruptAriaIfBusy, scheduleStaleAudioCleanup]);
 
   // Select a file → add it to the pending list
   const handleFileSelected = useCallback(async (file: FileData) => {
@@ -760,6 +950,7 @@ const ChatScreen: React.FC = () => {
   // Send all pending attachments + text
   const sendPendingAttachments = useCallback(async (messageText: string) => {
     if (pendingAttachments.length === 0) return;
+    console.log('[Chat] sendPendingAttachments: %d Anhang/Anhaenge', pendingAttachments.length);
     const location = await getCurrentLocation();
     const msgId = nextId();
 
@@ -809,6 +1000,8 @@ const ChatScreen: React.FC = () => {
       }
 
       // Send to RVS
+      console.log('[Chat] sende file: name=%s mime=%s size=%s b64Bytes=%s',
+        name, mimeType, file.size, base64.length);
       rvs.send('file', {
         name,
         type: mimeType,
@@ -848,11 +1041,9 @@ const ChatScreen: React.FC = () => {
           {item.attachments?.map((att, idx) => (
             <View key={idx}>
               {att.type === 'image' && att.uri ? (
-                <TouchableOpacity onPress={() => setFullscreenImage(att.uri || null)} activeOpacity={0.8}>
-                  <Image
-                    source={{ uri: att.uri }}
-                    style={styles.attachmentImage}
-                    resizeMode="cover"
+                <ChatImage
+                  uri={att.uri}
+                  onPress={() => setFullscreenImage(att.uri || null)}
                   onError={() => {
                     setMessages(prev => prev.map(m =>
                       m.id === item.id ? { ...m, attachments: m.attachments?.map((a, i) =>
@@ -861,7 +1052,6 @@ const ChatScreen: React.FC = () => {
                     ));
                   }}
                 />
-                </TouchableOpacity>
               ) : att.type === 'image' && !att.uri ? (
                 <TouchableOpacity
                   style={styles.attachmentFile}
@@ -878,7 +1068,22 @@ const ChatScreen: React.FC = () => {
                   </Text>
                 </TouchableOpacity>
               ) : (
-                <View style={styles.attachmentFile}>
+                <TouchableOpacity
+                  style={styles.attachmentFile}
+                  onPress={() => {
+                    // Available locally → open directly with a system intent
+                    if (att.uri) {
+                      openFileWithIntent(att.uri.replace(/^file:\/\//, ''), att.mimeType || '');
+                      return;
+                    }
+                    // Otherwise: file_request → on file_response the file is
+                    // stored AND opened (autoOpenPaths tracking).
+                    if (att.serverPath) {
+                      autoOpenPaths.current.add(att.serverPath);
+                      rvs.send('file_request' as any, { serverPath: att.serverPath, requestId: item.id });
+                    }
+                  }}
+                >
                   <Text style={styles.attachmentFileIcon}>
                     {att.mimeType?.includes('pdf') ? '\uD83D\uDCC4' :
                      att.mimeType?.includes('word') || att.mimeType?.includes('document') ? '\uD83D\uDCC3' :
@@ -888,12 +1093,10 @@ const ChatScreen: React.FC = () => {
                   <Text style={styles.attachmentFileName} numberOfLines={1}>{att.name}</Text>
                   {att.size ? <Text style={styles.attachmentFileSize}>{Math.round(att.size / 1024)}KB</Text> : null}
                   {!att.uri && att.serverPath && (
-                    <TouchableOpacity onPress={() => rvs.send('file_request' as any, { serverPath: att.serverPath, requestId: item.id })}>
-                      <Text style={[styles.attachmentFileSize, {color: '#0096FF'}]}>(laden)</Text>
-                    </TouchableOpacity>
+                    <Text style={[styles.attachmentFileSize, {color: '#0096FF'}]}>(tippen zum oeffnen)</Text>
                   )}
                   {!att.uri && !att.serverPath && <Text style={styles.attachmentFileSize}>(nicht verfuegbar)</Text>}
-                </View>
+                </TouchableOpacity>
               )}
             </View>
           ))}
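The tap handler above and the file_response handler earlier cooperate through autoOpenPaths: the tap remembers the serverPath in a Set, and the response handler opens the file exactly once if the path is present. A self-contained sketch of that one-shot tracking; the paths and the `opened` list below are illustrative stand-ins, and openFileWithIntent is stubbed by pushing onto an array:

```typescript
// One-shot "open after download" tracking, matching the autoOpenPaths usage
// in the hunks above. Paths here are hypothetical examples.
const autoOpenPaths = new Set<string>();
const opened: string[] = [];

function requestAndOpen(serverPath: string): void {
  autoOpenPaths.add(serverPath); // remember that the user wants this opened
}

function onFileResponse(serverPath: string, localPath: string): void {
  if (autoOpenPaths.has(serverPath)) {
    autoOpenPaths.delete(serverPath); // consume the marker — open only once
    opened.push(localPath);           // stands in for openFileWithIntent(...)
  }
}

requestAndOpen('/files/report.pdf');
onFileResponse('/files/report.pdf', '/data/chat_attachments/report.pdf');
onFileResponse('/files/report.pdf', '/data/chat_attachments/report.pdf'); // duplicate: ignored
```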
@@ -908,19 +1111,24 @@ const ChatScreen: React.FC = () => {
           {!isUser && item.text.length > 0 && (
             <TouchableOpacity
               style={styles.playButton}
-              onPress={() => {
-                if (item.audioPath) {
-                  audioService.playFromPath(item.audioPath);
-                } else {
-                  // Send messageId along so the bridge links the generated audio
-                  // back to the message (for the next replay from cache)
-                  rvs.send('tts_request' as any, {
-                    text: item.text,
-                    voice: localXttsVoiceRef.current,
-                    speed: ttsSpeedRef.current,
-                    messageId: item.messageId || '',
-                  });
-                }
+              onPress={async () => {
+                // Check the local cache first — audioPath can point at a
+                // deleted file (TTS cache cleared or auto-cleanup). In that
+                // case re-render via RVS instead of staying silent.
+                const cachePath = item.audioPath?.replace(/^file:\/\//, '') || '';
+                const cached = cachePath ? await RNFS.exists(cachePath).catch(() => false) : false;
+                if (cached) {
+                  audioService.playFromPath(item.audioPath!);
+                  return;
                 }
+                // Send messageId along so the bridge links the generated audio
+                // back to the message (for the next replay from cache)
+                rvs.send('tts_request' as any, {
+                  text: item.text,
+                  voice: localXttsVoiceRef.current,
+                  speed: ttsSpeedRef.current,
+                  messageId: item.messageId || '',
+                });
               }}
             >
               <Text style={styles.playButtonText}>{'\uD83D\uDD0A'}</Text>
@@ -1264,9 +1472,11 @@ const styles = StyleSheet.create({
     color: '#E0E0F0',
   },
   attachmentImage: {
-    width: '100%',
-    minHeight: 200,
-    maxHeight: 400,
+    // Fixed width + dynamic aspectRatio (set in ChatImage) so the bubble
+    // adapts to the image. With width: '100%' and no explicit parent width
+    // RN would shrink the image to 0px → a "line".
+    width: 260,
+    aspectRatio: 4 / 3,
     borderRadius: 8,
     marginBottom: 6,
     backgroundColor: '#0D0D1A',
@@ -17,6 +17,8 @@ import {
   Platform,
   ToastAndroid,
   ActivityIndicator,
+  Modal,
+  PermissionsAndroid,
 } from 'react-native';
 import AsyncStorage from '@react-native-async-storage/async-storage';
 import RNFS from 'react-native-fs';
@@ -39,11 +41,17 @@ import {
   MAX_RECORDING_MIN_SEC,
   MAX_RECORDING_MAX_SEC,
   MAX_RECORDING_STORAGE_KEY,
+  VAD_SILENCE_DB_DEFAULT,
+  VAD_SILENCE_DB_MIN,
+  VAD_SILENCE_DB_MAX,
+  VAD_SILENCE_DB_OVERRIDE_KEY,
   TTS_SPEED_DEFAULT,
   TTS_SPEED_MIN,
   TTS_SPEED_MAX,
   TTS_SPEED_STORAGE_KEY,
 } from '../services/audio';
+import audioService from '../services/audio';
+import { isVerboseLogging, setVerboseLogging } from '../services/logger';
 import {
   isWakeReadySoundEnabled,
   setWakeReadySoundEnabled,
@@ -58,6 +66,7 @@ import wakeWordService, {
 import ModeSelector from '../components/ModeSelector';
 import QRScanner from '../components/QRScanner';
 import VoiceCloneModal from '../components/VoiceCloneModal';
+import updateService from '../services/updater';
 
 const STORAGE_PATH_KEY = 'aria_attachment_storage_path';
 const DEFAULT_STORAGE_PATH = `${RNFS.DocumentDirectoryPath}/chat_attachments`;
@@ -124,6 +133,12 @@ const SettingsScreen: React.FC = () => {
   const [vadSilenceSec, setVadSilenceSec] = useState<number>(VAD_SILENCE_DEFAULT_SEC);
   const [convWindowSec, setConvWindowSec] = useState<number>(CONV_WINDOW_DEFAULT_SEC);
   const [maxRecordingSec, setMaxRecordingSec] = useState<number>(MAX_RECORDING_DEFAULT_SEC);
+  // null = automatic (adaptive baseline), otherwise a manual dB override
+  const [vadSilenceDb, setVadSilenceDb] = useState<number | null>(null);
+  const [showVadInfo, setShowVadInfo] = useState(false);
+  const [apkCacheInfo, setApkCacheInfo] = useState<{count: number, totalMB: number} | null>(null);
+  const [ttsCacheInfo, setTtsCacheInfo] = useState<{count: number, totalMB: number} | null>(null);
+  const [verboseLogging, setVerboseLoggingState] = useState<boolean>(isVerboseLogging());
   const [ttsSpeed, setTtsSpeed] = useState<number>(TTS_SPEED_DEFAULT);
   const [wakeKeyword, setWakeKeyword] = useState<string>(DEFAULT_KEYWORD);
   const [wakeStatus, setWakeStatus] = useState<string>('');
@@ -159,6 +174,9 @@ const SettingsScreen: React.FC = () => {
     AsyncStorage.getItem('aria_tts_enabled').then(saved => {
       if (saved !== null) setTtsEnabled(saved === 'true');
     });
+    AsyncStorage.getItem('aria_gps_enabled').then(saved => {
+      if (saved !== null) setGpsEnabled(saved === 'true');
+    });
     AsyncStorage.getItem(TTS_PREROLL_STORAGE_KEY).then(saved => {
       if (saved != null) {
         const n = parseFloat(saved);
@@ -191,6 +209,14 @@ const SettingsScreen: React.FC = () => {
         }
       }
     });
+    AsyncStorage.getItem(VAD_SILENCE_DB_OVERRIDE_KEY).then(saved => {
+      if (saved != null && saved !== '') {
+        const n = parseFloat(saved);
+        if (isFinite(n) && n >= VAD_SILENCE_DB_MIN && n <= VAD_SILENCE_DB_MAX) {
+          setVadSilenceDb(n);
+        }
+      }
+    });
     AsyncStorage.getItem(TTS_SPEED_STORAGE_KEY).then(saved => {
       if (saved != null) {
         const n = parseFloat(saved);
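The loader above accepts a stored override only when it parses to a finite number inside [VAD_SILENCE_DB_MIN, VAD_SILENCE_DB_MAX]. The same validation as a pure function; the bound values below are assumptions, since the real constants live in ../services/audio and are not visible in this diff:

```typescript
// Validation mirroring the VAD_SILENCE_DB_OVERRIDE_KEY loader above.
const VAD_SILENCE_DB_MIN = -60; // assumed — real value defined in ../services/audio
const VAD_SILENCE_DB_MAX = -10; // assumed — real value defined in ../services/audio

function parseDbOverride(saved: string | null): number | null {
  if (saved == null || saved === '') return null; // null = automatic baseline
  const n = parseFloat(saved);
  if (isFinite(n) && n >= VAD_SILENCE_DB_MIN && n <= VAD_SILENCE_DB_MAX) return n;
  return null; // NaN or out of range: fall back to automatic
}
```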
@@ -201,6 +227,8 @@ const SettingsScreen: React.FC = () => {
       if (saved && (WAKE_KEYWORDS as readonly string[]).includes(saved)) setWakeKeyword(saved);
     });
     isWakeReadySoundEnabled().then(setWakeReadySound);
+    updateService.getApkCacheSize().then(setApkCacheInfo).catch(() => {});
+    audioService.getTtsCacheSize().then(setTtsCacheInfo).catch(() => {});
     AsyncStorage.getItem('aria_xtts_voice').then(saved => {
       if (saved) setXttsVoice(saved);
     });
@@ -435,9 +463,31 @@ const SettingsScreen: React.FC = () => {
 
   // --- GPS toggle ---
 
-  const handleGPSToggle = useCallback((value: boolean) => {
+  const handleGPSToggle = useCallback(async (value: boolean) => {
+    if (value && Platform.OS === 'android') {
+      try {
+        const granted = await PermissionsAndroid.request(
+          PermissionsAndroid.PERMISSIONS.ACCESS_COARSE_LOCATION,
+          {
+            title: 'ARIA — Standort an Anfragen anhaengen',
+            message: 'Damit ARIA bei Anfragen wie "Wo ist der naechste...?" den '
+              + 'Standort kennt, darf die App den ungefaehren Standort lesen. '
+              + 'Wird nur bei jeder Anfrage einmal abgerufen, nicht im Hintergrund.',
+            buttonPositive: 'Erlauben',
+            buttonNegative: 'Abbrechen',
+          },
+        );
+        if (granted !== PermissionsAndroid.RESULTS.GRANTED) {
+          ToastAndroid.show('Standort-Berechtigung abgelehnt', ToastAndroid.SHORT);
+          return;
+        }
+      } catch (err) {
+        console.warn('[Settings] GPS-Permission Request gescheitert:', err);
+        return;
+      }
+    }
     setGpsEnabled(value);
-    // In production: persist the value in AsyncStorage
+    AsyncStorage.setItem('aria_gps_enabled', String(value)).catch(() => {});
   }, []);
 
   // --- XTTS voice ---
@@ -661,7 +711,11 @@ const SettingsScreen: React.FC = () => {
             <View style={styles.toggleInfo}>
               <Text style={styles.toggleLabel}>GPS-Position mitsenden</Text>
               <Text style={styles.toggleHint}>
-                Standort wird automatisch an Nachrichten angehaengt
+                Position (lat/lon) wird mit jeder Nachricht an ARIA mitgeschickt.
+                Sie sieht's nur intern und nutzt es bei standortbezogenen Fragen
+                ("wo bin ich?", "Wetter hier?"), erwaehnt es sonst nicht.
+                Im Chat-Verlauf bleibt die Bubble unveraendert — nur ARIAs
+                Antwort kann darauf eingehen.
               </Text>
             </View>
             <Switch
@@ -775,8 +829,94 @@ const SettingsScreen: React.FC = () => {
|
|||||||
<Text style={styles.prerollButtonText}>+1m</Text>
|
<Text style={styles.prerollButtonText}>+1m</Text>
|
||||||
</TouchableOpacity>
|
</TouchableOpacity>
|
||||||
</View>
|
</View>
|
||||||
|
|
||||||
|
<View style={{flexDirection: 'row', alignItems: 'center', marginTop: 24, gap: 8}}>
|
||||||
|
<Text style={styles.toggleLabel}>Stille-Pegel (dB)</Text>
|
||||||
|
<TouchableOpacity onPress={() => setShowVadInfo(true)} style={styles.infoBtn}>
|
||||||
|
<Text style={styles.infoBtnText}>i</Text>
|
||||||
|
</TouchableOpacity>
|
||||||
|
</View>
|
||||||
|
<Text style={styles.toggleHint}>
|
||||||
|
Welcher Mikro-Pegel als "Stille" gilt. Standard: automatisch (Baseline aus
|
||||||
|
den ersten 500ms). Manuell setzen wenn Auto nicht zuverlaessig greift.
|
||||||
|
</Text>
|
||||||
|
<View style={styles.prerollRow}>
|
||||||
|
<TouchableOpacity
|
||||||
|
style={styles.prerollButton}
|
||||||
|
onPress={() => {
|
||||||
|
const next = vadSilenceDb == null
|
||||||
|
? VAD_SILENCE_DB_DEFAULT - 1
|
||||||
|
: Math.max(VAD_SILENCE_DB_MIN, vadSilenceDb - 1);
|
||||||
|
setVadSilenceDb(next);
|
||||||
|
AsyncStorage.setItem(VAD_SILENCE_DB_OVERRIDE_KEY, String(next));
|
||||||
|
}}
|
||||||
|
>
|
||||||
|
<Text style={styles.prerollButtonText}>−1</Text>
|
||||||
|
</TouchableOpacity>
|
||||||
|
<Text style={styles.prerollValue}>
|
||||||
|
{vadSilenceDb == null ? 'auto' : `${vadSilenceDb} dB`}
|
||||||
|
</Text>
|
||||||
|
<TouchableOpacity
|
||||||
|
style={styles.prerollButton}
|
||||||
|
+                onPress={() => {
+                  const next = vadSilenceDb == null
+                    ? VAD_SILENCE_DB_DEFAULT + 1
+                    : Math.min(VAD_SILENCE_DB_MAX, vadSilenceDb + 1);
+                  setVadSilenceDb(next);
+                  AsyncStorage.setItem(VAD_SILENCE_DB_OVERRIDE_KEY, String(next));
+                }}
+              >
+                <Text style={styles.prerollButtonText}>+1</Text>
+              </TouchableOpacity>
+            </View>
+            {vadSilenceDb != null && (
+              <TouchableOpacity
+                onPress={() => {
+                  setVadSilenceDb(null);
+                  AsyncStorage.removeItem(VAD_SILENCE_DB_OVERRIDE_KEY);
+                }}
+                style={{alignSelf: 'center', marginTop: 8, paddingVertical: 6, paddingHorizontal: 12}}
+              >
+                <Text style={{color: '#0096FF', fontSize: 13}}>↻ Auf automatisch zuruecksetzen</Text>
+              </TouchableOpacity>
+            )}
           </View>
         </View>
+
+        <Modal
+          visible={showVadInfo}
+          transparent
+          animationType="fade"
+          onRequestClose={() => setShowVadInfo(false)}
+        >
+          <View style={styles.modalOverlay}>
+            <View style={styles.modalCard}>
+              <Text style={styles.modalTitle}>Stille-Pegel (dB)</Text>
+              <Text style={styles.modalText}>
+                Lautstaerken werden in Dezibel (dB) gemessen — negative Werte, je
+                hoeher (naeher an 0), desto lauter.{'\n\n'}
+                <Text style={{fontWeight: '700'}}>Standard:</Text> automatisch.
+                Die App misst die ersten 500ms Hintergrundpegel und setzt die
+                Stille-Schwelle auf Baseline + 6 dB. Funktioniert in den meisten
+                Umgebungen.{'\n\n'}
+                <Text style={{fontWeight: '700'}}>Manuell:</Text> Pegel unter dem
+                eingestellten Wert gilt als "Stille" → Aufnahme stoppt.{'\n\n'}
+                <Text style={{fontWeight: '700'}}>Faustregel:</Text>{'\n'}
+                • <Text style={{color: '#FFD60A'}}>−45 dB</Text> sehr empfindlich (stoppt schnell, auch bei Atmen){'\n'}
+                • <Text style={{color: '#34C759'}}>−38 dB</Text> ausgewogen (typische Bueroumgebung){'\n'}
+                • <Text style={{color: '#FF6B6B'}}>−25 dB</Text> unempfindlich (laute Umgebung, nur klare Sprache zaehlt){'\n\n'}
+                <Text style={{color: '#8888AA'}}>Niedrigere Zahl (z.B. −50) = sensibler.{'\n'}
+                Hoehere Zahl (z.B. −20) = robuster gegen Hintergrundlaerm,
+                braucht aber lautere Sprache.</Text>
+              </Text>
+              <TouchableOpacity
+                style={[styles.connectButton, {marginTop: 16, alignSelf: 'stretch'}]}
+                onPress={() => setShowVadInfo(false)}
+              >
+                <Text style={styles.connectButtonText}>OK</Text>
+              </TouchableOpacity>
+            </View>
+          </View>
+        </Modal>
         </>)}

         {/* === Wake-Word (komplett on-device, openWakeWord) === */}
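The +1 stepper above seeds the override one step above the manual default when none is set, and otherwise clamps the increment at `VAD_SILENCE_DB_MAX`. A minimal sketch of that update rule — `stepUp` is a hypothetical helper, the constants mirror the values exported elsewhere in this diff:

```typescript
// Sketch of the stepper's update rule (stepUp is not part of the diff;
// the VAD_* values are copied from the audioService constants below).
const VAD_SILENCE_DB_DEFAULT = -38;
const VAD_SILENCE_DB_MAX = -15;

function stepUp(current: number | null): number {
  // No override yet → start one step above the manual default;
  // otherwise increment, but never past the maximum.
  return current == null
    ? VAD_SILENCE_DB_DEFAULT + 1
    : Math.min(VAD_SILENCE_DB_MAX, current + 1);
}

console.log(stepUp(null)); // -37
console.log(stepUp(-15));  // -15 (already at the ceiling)
```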
@@ -1085,11 +1225,96 @@ const SettingsScreen: React.FC = () => {
         )}
       </View>

+      {/* === Update-Cache === */}
+      <Text style={[styles.sectionTitle, {marginTop: 16}]}>Update-Cache</Text>
+      <View style={styles.card}>
+        <Text style={styles.toggleHint}>
+          Heruntergeladene APK-Dateien fuer App-Updates. Werden automatisch
+          beim App-Start und vor jedem neuen Download geloescht — der Button
+          ist fuer den Notfall (z.B. wenn ein Download haengen geblieben ist).
+        </Text>
+        <Text style={[styles.storageSizeText, {marginTop: 8}]}>
+          {apkCacheInfo === null ? '...' :
+            apkCacheInfo.count === 0 ? 'leer' :
+            `${apkCacheInfo.count} APK${apkCacheInfo.count === 1 ? '' : 's'} · ${apkCacheInfo.totalMB.toFixed(1)}MB`}
+        </Text>
+        <TouchableOpacity
+          style={[styles.clearButton, {marginTop: 8, backgroundColor: 'rgba(255,59,48,0.15)'}]}
+          onPress={async () => {
+            const res = await updateService.cleanupOldApks();
+            ToastAndroid.show(
+              res.removed === 0
+                ? 'Update-Cache war schon leer'
+                : `${res.removed} APK${res.removed === 1 ? '' : 's'} geloescht (${res.freedMB.toFixed(1)}MB frei)`,
+              ToastAndroid.SHORT,
+            );
+            const info = await updateService.getApkCacheSize();
+            setApkCacheInfo(info);
+          }}
+        >
+          <Text style={[styles.clearButtonText, {color: '#FF3B30'}]}>Update-Cache leeren</Text>
+        </TouchableOpacity>
+      </View>
+
+      {/* === TTS-Cache === */}
+      <Text style={[styles.sectionTitle, {marginTop: 16}]}>TTS-Cache</Text>
+      <View style={styles.card}>
+        <Text style={styles.toggleHint}>
+          Gespeicherte Sprachausgaben (WAV pro Antwort) — werden fuer den
+          Play-Button und Auto-Resume nach Anrufen genutzt. Loeschen
+          unterbricht keine laufende Wiedergabe, alte Antworten lassen sich
+          danach nur nicht mehr abspielen.
+        </Text>
+        <Text style={[styles.storageSizeText, {marginTop: 8}]}>
+          {ttsCacheInfo === null ? '...' :
+            ttsCacheInfo.count === 0 ? 'leer' :
+            `${ttsCacheInfo.count} WAV${ttsCacheInfo.count === 1 ? '' : 's'} · ${ttsCacheInfo.totalMB.toFixed(1)}MB`}
+        </Text>
+        <TouchableOpacity
+          style={[styles.clearButton, {marginTop: 8, backgroundColor: 'rgba(255,59,48,0.15)'}]}
+          onPress={async () => {
+            const res = await audioService.clearTtsCache();
+            ToastAndroid.show(
+              res.removed === 0
+                ? 'TTS-Cache war schon leer'
+                : `${res.removed} WAV${res.removed === 1 ? '' : 's'} geloescht (${res.freedMB.toFixed(1)}MB frei)`,
+              ToastAndroid.SHORT,
+            );
+            const info = await audioService.getTtsCacheSize();
+            setTtsCacheInfo(info);
+          }}
+        >
+          <Text style={[styles.clearButtonText, {color: '#FF3B30'}]}>TTS-Cache leeren</Text>
+        </TouchableOpacity>
+      </View>
+
       </>)}

       {/* === Logs === */}
       {currentSection === 'protocol' && (<>
       <Text style={styles.sectionTitle}>Protokoll</Text>

+      {/* Verbose-Logging-Toggle */}
+      <View style={styles.card}>
+        <View style={styles.toggleRow}>
+          <Text style={styles.toggleLabel}>Verbose Logging</Text>
+          <Switch
+            value={verboseLogging}
+            onValueChange={(v) => {
+              setVerboseLogging(v);
+              setVerboseLoggingState(v);
+            }}
+            trackColor={{ false: '#3A3A52', true: '#0096FF' }}
+            thumbColor={verboseLogging ? '#FFFFFF' : '#666680'}
+          />
+        </View>
+        <Text style={styles.toggleHint}>
+          Wenn aus: console.log wird global stummgeschaltet (Speicher schonen).
+          Warnungen und Fehler bleiben immer aktiv. Bei Bedarf einschalten zum
+          Debuggen via adb logcat.
+        </Text>
+      </View>

       <View style={styles.card}>
         {/* Tab-Umschalter */}
         <View style={styles.tabRow}>
@@ -1628,6 +1853,48 @@ const styles = StyleSheet.create({
     textAlign: 'center',
   },
+
+  infoBtn: {
+    width: 22,
+    height: 22,
+    borderRadius: 11,
+    borderWidth: 1.5,
+    borderColor: '#0096FF',
+    alignItems: 'center',
+    justifyContent: 'center',
+  },
+  infoBtnText: {
+    color: '#0096FF',
+    fontSize: 13,
+    fontWeight: '700',
+    fontStyle: 'italic',
+    lineHeight: 16,
+  },
+  modalOverlay: {
+    flex: 1,
+    backgroundColor: 'rgba(0,0,0,0.7)',
+    justifyContent: 'center',
+    alignItems: 'center',
+    padding: 20,
+  },
+  modalCard: {
+    backgroundColor: '#1E1E2E',
+    borderRadius: 14,
+    padding: 20,
+    maxWidth: 460,
+    width: '100%',
+  },
+  modalTitle: {
+    color: '#FFFFFF',
+    fontSize: 18,
+    fontWeight: '700',
+    marginBottom: 12,
+  },
+  modalText: {
+    color: '#E0E0F0',
+    fontSize: 14,
+    lineHeight: 20,
+  },
+
   keywordChip: {
     backgroundColor: '#1E1E2E',
     borderWidth: 1,

+431 −32
@@ -6,10 +6,11 @@
  * Nutzt react-native-audio-recorder-player fuer Aufnahme.
  */

-import { Platform, PermissionsAndroid, NativeModules, ToastAndroid } from 'react-native';
+import { Platform, PermissionsAndroid, NativeModules, ToastAndroid, NativeEventEmitter } from 'react-native';
 import Sound from 'react-native-sound';
 import RNFS from 'react-native-fs';
 import AsyncStorage from '@react-native-async-storage/async-storage';
+import { acquireBackgroundAudio, releaseBackgroundAudio, stopBackgroundAudio } from './backgroundAudio';
 import AudioRecorderPlayer, {
   AudioEncoderAndroidType,
   AudioSourceAndroidType,
@@ -40,6 +41,8 @@ const { AudioFocus, PcmStreamPlayer } = NativeModules as {
     requestDuck: () => Promise<boolean>;
     requestExclusive: () => Promise<boolean>;
     release: () => Promise<boolean>;
+    kickReleaseMedia: () => Promise<boolean>;
+    getMode?: () => Promise<number>;
   };
   PcmStreamPlayer?: {
     start: (sampleRate: number, channels: number, prerollSeconds: number) => Promise<boolean>;
@@ -84,6 +87,29 @@ const VAD_SPEECH_OFFSET_DB = 12; // sicheres Speech = Baseline + 12dB
 const VAD_BASELINE_SAMPLES = 5; // 5 × 100ms = 500ms Baseline
 const VAD_SPEECH_MIN_MS = 500; // ms Sprache bevor Aufnahme zaehlt — laenger = keine Huestler/Klopfer mehr
+
+// Override fuer die Stille-Schwelle — wenn gesetzt, wird die adaptive Baseline
+// ignoriert. Nuetzlich wenn die adaptive Logik in spezifischen Umgebungen
+// nicht zuverlaessig greift. Range -85..-15 dB. Speech-Schwelle wird auf
+// override+10 dB gesetzt (Speech muss klar lauter als Stille sein).
+export const VAD_SILENCE_DB_DEFAULT = -38; // wenn User Manuell-Modus waehlt
+export const VAD_SILENCE_DB_MIN = -85; // extrem empfindlich, praktisch alles gilt als Sprache
+export const VAD_SILENCE_DB_MAX = -15; // sehr unempfindlich, nur lautes Reden gilt
+export const VAD_SILENCE_DB_OVERRIDE_KEY = 'aria_vad_silence_db_override';
+
+/** Liefert den manuellen Override-Wert oder null wenn "automatisch". */
+export async function loadVadSilenceDbOverride(): Promise<number | null> {
+  try {
+    const raw = await AsyncStorage.getItem(VAD_SILENCE_DB_OVERRIDE_KEY);
+    if (raw == null || raw === '') return null;
+    const n = parseFloat(raw);
+    if (!isFinite(n)) return null;
+    if (n < VAD_SILENCE_DB_MIN || n > VAD_SILENCE_DB_MAX) return null;
+    return n;
+  } catch {
+    return null;
+  }
+}
+
 // VAD-Stille (in Sekunden) — wie lange Sprechpause toleriert wird, bevor
 // die Aufnahme automatisch beendet wird. Einstellbar in den App-Settings.
 export const VAD_SILENCE_DEFAULT_SEC = 2.8;
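The loader above treats anything unparsable, non-finite, or outside `[VAD_SILENCE_DB_MIN, VAD_SILENCE_DB_MAX]` as "automatic". Its parse-and-validate core can be isolated as a pure function; a sketch — `parseOverride` is not part of the diff, only the constants are:

```typescript
const VAD_SILENCE_DB_MIN = -85;
const VAD_SILENCE_DB_MAX = -15;

// Pure core of loadVadSilenceDbOverride: anything missing, unparsable,
// or out of range means "automatic" (null) instead of a broken threshold.
function parseOverride(raw: string | null): number | null {
  if (raw == null || raw === '') return null;
  const n = parseFloat(raw);
  if (!isFinite(n)) return null;
  if (n < VAD_SILENCE_DB_MIN || n > VAD_SILENCE_DB_MAX) return null;
  return n;
}

console.log(parseOverride('-38')); // -38
console.log(parseOverride('abc')); // null (NaN)
console.log(parseOverride('-10')); // null (above MAX, i.e. too loud)
```

Falling back to `null` on bad data matters here because the value comes from AsyncStorage and may have been written by an older app version.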
@@ -245,9 +271,44 @@ class AudioService {
   private vadAdaptiveSilenceDb: number = VAD_SILENCE_FALLBACK_DB;
   private vadAdaptiveSpeechDb: number = VAD_SPEECH_FALLBACK_DB;
+
+  // Interruption-Tracking fuer Auto-Resume nach Anruf:
+  // - playbackStartTime: ms-Timestamp wenn AudioTrack tatsaechlich anfing
+  //   abzuspielen (= _firePlaybackStarted)
+  // - currentPlaybackMsgId: welche Antwort lief gerade
+  // - pausedPosition / pausedMessageId: bei captureInterruption gemerkt
+  private playbackStartTime: number = 0;
+  private currentPlaybackMsgId: string = '';
+  private pausedPosition: number = 0; // Sekunden in der Audio-Datei
+  private pausedMessageId: string = '';
+  private resumeSound: Sound | null = null; // halten damit GC nicht zuschlaegt
+  // Leading-Silence wird im Native vor den Chunks geschrieben — beim
+  // Position-Berechnen vom playbackStarted abziehen
+  private readonly LEADING_SILENCE_SEC = 0.3;
+
   constructor() {
     this.recorder = new AudioRecorderPlayer();
     this.recorder.setSubscriptionDuration(0.1); // 100ms Metering-Updates
+    // Native Event: AudioTrack hat alle Samples wirklich durchgespielt (nach
+    // dem finally{}-Block im Writer-Thread). ERST jetzt darf AudioFocus
+    // freigegeben werden — sonst spielt Spotify schon waehrend ARIA noch
+    // redet (PcmStreamPlayer.end() returnt mit 15s-Cap viel zu frueh).
+    if (PcmStreamPlayer) {
+      try {
+        const emitter = new NativeEventEmitter(NativeModules.PcmStreamPlayer as any);
+        emitter.addListener('PcmPlaybackFinished', () => {
+          console.log('[Audio] PcmPlaybackFinished — Focus jetzt freigeben');
+          this._releaseFocusDeferred();
+        });
+      } catch (err) {
+        console.warn('[Audio] PcmPlaybackFinished-Subscription fehlgeschlagen:', err);
+      }
+    }
+    // App-Start: orphaned aria_tts_*.wav / aria_recording_*.mp4 aus dem Cache
+    // wegraeumen. Sammeln sich an wenn Sound mid-playback gestoppt wird (Anruf,
+    // Mute, Barge-In) — der completion-callback feuert dann nicht und die Datei
+    // bleibt liegen. 5min-Threshold damit gerade aktiv geschriebene Files sicher
+    // sind. _cleanupStaleCacheFiles ist async, blockt den Constructor nicht.
+    this._cleanupStaleCacheFiles(5 * 60 * 1000).catch(() => {});
   }

   /** AudioFocus mit kleiner Verzoegerung freigeben — Spotify/YouTube
@@ -257,13 +318,19 @@ class AudioService {
    * unterdrueckt — der Focus bleibt fuer die ganze Konversation gehalten. */
   private _releaseFocusDeferred(): void {
     if (this._conversationFocusActive) {
+      console.log('[Audio] _releaseFocusDeferred: Conversation aktiv → kein Release');
       this._cancelDeferredFocusRelease();
       return;
     }
     this._cancelDeferredFocusRelease();
+    console.log('[Audio] _releaseFocusDeferred: in %dms', this.FOCUS_RELEASE_DELAY_MS);
     this.focusReleaseTimer = setTimeout(() => {
       this.focusReleaseTimer = null;
-      if (this._conversationFocusActive) return;
+      if (this._conversationFocusActive) {
+        console.log('[Audio] Focus-Release abgebrochen (Conversation jetzt aktiv)');
+        return;
+      }
+      console.log('[Audio] AudioFocus jetzt released');
       AudioFocus?.release().catch(() => {});
     }, this.FOCUS_RELEASE_DELAY_MS);
   }
@@ -294,14 +361,155 @@ class AudioService {
     this._releaseFocusDeferred();
   }

-  /** TTS-Wiedergabe hart stoppen — z.B. wenn ein Anruf reinkommt.
-   * Released auch sofort den AudioFocus damit der Anruf-Klingelton hoerbar ist. */
+  /** TTS-Wiedergabe hart stoppen — z.B. fuer Barge-In. Buffer wird geleert,
+   * kein Auto-Resume. Released auch sofort den AudioFocus. */
   haltAllPlayback(reason: string = ''): void {
     console.log('[Audio] haltAllPlayback: %s', reason || '(no reason)');
     this._conversationFocusActive = false;
     this.stopPlayback();
   }
+
+  /** Speziell fuer Anrufe: AudioTrack stoppen + Focus releasen, ABER pcm-
+   * Buffer + messageId behalten damit weitere Chunks der unterbrochenen
+   * Antwort weiter gesammelt werden. isFinal schreibt dann die WAV trotz
+   * Anruf — und resumeFromInterruption findet sie. */
+  pauseForCall(reason: string = ''): void {
+    console.log('[Audio] pauseForCall: %s', reason || '(no reason)');
+    this._conversationFocusActive = false;
+    this._pausedForCall = true;
+    // Queue + isPlaying ruecksetzen — sonst klemmt der naechste Play-Button
+    // (playAudio sieht isPlaying=true und ruft _playNext nicht mehr auf).
+    this.audioQueue = [];
+    this.isPlaying = false;
+    // Foreground-Service stoppen — Notification waere sonst irrefuehrend
+    stopBackgroundAudio().catch(() => {});
+    // SoundPool/RNSound (Resume-Sound, Play-Button) stoppen — nicht relevant fuer Auto-Resume
+    if (this.currentSound) {
+      try { this.currentSound.stop(); this.currentSound.release(); } catch {}
+      this.currentSound = null;
+    }
+    if (this.resumeSound) {
+      try { this.resumeSound.stop(); this.resumeSound.release(); } catch {}
+      this.resumeSound = null;
+    }
+    // AudioTrack hart stoppen damit nichts mehr aus dem Lautsprecher kommt.
+    // pcmStreamActive bleibt true, pcmBuffer/pcmMessageId BLEIBEN — damit
+    // weitere Chunks gesammelt werden und isFinal die WAV schreiben kann.
+    PcmStreamPlayer?.stop().catch(() => {});
+    this._cancelDeferredFocusRelease();
+    AudioFocus?.release().catch(() => {});
+  }
+
+  /** Anruf vorbei → weitere Chunks duerfen wieder abgespielt werden.
+   * resumeFromInterruption uebernimmt die Wiedergabe ab gemerkter Position. */
+  endCallPause(): void {
+    if (!this._pausedForCall) return;
+    this._pausedForCall = false;
+    console.log('[Audio] endCallPause');
+  }
+
+  /** Bei Anruf: aktuelle Wiedergabe-Position merken damit wir nach dem
+   * Auflegen von dort weitermachen koennen. Returnt Position in Sekunden
+   * oder 0 wenn nichts spielte.
+   *
+   * Idempotent: bei mehrfachem Aufruf (ringing → offhook) wird die Position
+   * vom ersten Mal NICHT ueberschrieben. playbackStartTime laeuft stumpf
+   * weiter obwohl das Audio gestoppt ist — der erste Halt ist der echte. */
+  captureInterruption(): number {
+    if (this.pausedMessageId) {
+      console.log('[Audio] captureInterruption: bereits erfasst (msgId=%s pos=%ss) — skip',
+        this.pausedMessageId, this.pausedPosition.toFixed(2));
+      return this.pausedPosition;
+    }
+    if (!this.playbackStartTime || !this.currentPlaybackMsgId) {
+      console.log('[Audio] captureInterruption: nichts spielte (startTime=%s, msgId=%s)',
+        this.playbackStartTime, this.currentPlaybackMsgId || '(leer)');
+      this.pausedPosition = 0;
+      this.pausedMessageId = '';
+      return 0;
+    }
+    const elapsedMs = Date.now() - this.playbackStartTime;
+    const positionSec = Math.max(0, elapsedMs / 1000 - this.LEADING_SILENCE_SEC);
+    this.pausedPosition = positionSec;
+    this.pausedMessageId = this.currentPlaybackMsgId;
+    console.log('[Audio] captureInterruption: msgId=%s pos=%ss',
+      this.pausedMessageId, positionSec.toFixed(2));
+    return positionSec;
+  }
+
+  /** Nach Anruf-Ende: ab gemerkter Position weiterspielen. Wenn Cache noch
+   * nicht geschrieben (final kam waehrend Anruf vielleicht doch nicht),
+   * warten bis maxWaitMs und dann probieren. Returnt true wenn gestartet. */
+  async resumeFromInterruption(maxWaitMs: number = 30000): Promise<boolean> {
+    const msgId = this.pausedMessageId;
+    const position = this.pausedPosition;
+    if (!msgId) {
+      console.log('[Audio] resumeFromInterruption: kein gemerkter Stand — skip');
+      return false;
+    }
+    console.log('[Audio] resumeFromInterruption: starte fuer msgId=%s pos=%ss',
+      msgId, position.toFixed(2));
+    this.pausedMessageId = ''; // konsumieren
+    const cachePath = `${RNFS.DocumentDirectoryPath}/tts_cache/${msgId}.wav`;
+    const startTime = Date.now();
+    while (Date.now() - startTime < maxWaitMs) {
+      try {
+        if (await RNFS.exists(cachePath)) {
+          return await this._playFromPathAtPosition(cachePath, position);
+        }
+      } catch {}
+      await new Promise(r => setTimeout(r, 500));
+    }
+    console.warn('[Audio] resumeFromInterruption: WAV %s nicht binnen %dms verfuegbar',
+      msgId, maxWaitMs);
+    return false;
+  }
+
+  private async _playFromPathAtPosition(path: string, positionSec: number): Promise<boolean> {
+    try {
+      // Bestehende laufende Wiedergabe abbrechen damit wir sauber starten
+      if (this.resumeSound) {
+        try { this.resumeSound.stop(); this.resumeSound.release(); } catch {}
+        this.resumeSound = null;
+      }
+      const sound = await new Promise<Sound>((resolve, reject) => {
+        const s = new Sound(path.replace(/^file:\/\//, ''), '', (err) =>
+          err ? reject(err) : resolve(s));
+      });
+      // Audio-Focus anfordern damit Spotify pausiert
+      this._cancelDeferredFocusRelease();
+      AudioFocus?.requestDuck().catch(() => {});
+      this._firePlaybackStarted();
+      this.isPlaying = true;
+      this.resumeSound = sound;
+      // Tracking auch fuer den Resume-Sound aktualisieren — sonst kann
+      // captureInterruption bei einem zweiten Anruf die Position nicht
+      // mehr ermitteln (playbackStartTime waere von der ersten Wiedergabe).
+      const msgIdMatch = path.match(/([^/\\]+)\.wav$/i);
+      if (msgIdMatch) this.currentPlaybackMsgId = msgIdMatch[1];
+      // Virtuelle Start-Zeit so setzen, dass captureInterruption (das den
+      // Leading-Silence-Offset wieder abzieht) die korrekte Position liefert.
+      this.playbackStartTime = Date.now() - (positionSec + this.LEADING_SILENCE_SEC) * 1000;
+      console.log('[Audio] Resume von Position %ss aus %s',
+        positionSec.toFixed(2), path);
+      sound.setCurrentTime(Math.max(0, positionSec));
+      sound.play((success) => {
+        if (!success) console.warn('[Audio] Resume-Wiedergabe fehlgeschlagen');
+        try { sound.release(); } catch {}
+        if (this.resumeSound === sound) this.resumeSound = null;
+        this.isPlaying = false;
+        this.playbackFinishedListeners.forEach(cb => {
+          try { cb(); } catch (e) { console.warn('[Audio] cb err:', e); }
+        });
+        this._releaseFocusDeferred();
+      });
+      return true;
+    } catch (err: any) {
+      console.warn('[Audio] _playFromPathAtPosition fehlgeschlagen:', err?.message || err);
+      return false;
+    }
+  }
+
   /** True wenn ARIA gerade was abspielt — egal ob WAV-Queue oder PCM-Stream.
    * Nuetzlich fuer "Barge-In": wenn der User spricht waehrend ARIA spricht,
    * soll die ARIA-Wiedergabe abgebrochen + die neue User-Message verarbeitet
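The hunk above pairs two computations: `captureInterruption` derives the position from wall-clock time minus the native leading silence, and `_playFromPathAtPosition` back-dates `playbackStartTime` so that a *second* interruption still lands at the right spot. That round trip can be checked in isolation; a sketch with hypothetical free functions standing in for the class methods:

```typescript
const LEADING_SILENCE_SEC = 0.3; // mirrors the class constant in the diff

// Position (seconds) when a call interrupts playback that started at startMs.
function capturedPosition(startMs: number, nowMs: number): number {
  return Math.max(0, (nowMs - startMs) / 1000 - LEADING_SILENCE_SEC);
}

// Virtual start time used when resuming from positionSec — chosen so that
// capturedPosition() of a later, second interruption stays consistent.
function virtualStart(nowMs: number, positionSec: number): number {
  return nowMs - (positionSec + LEADING_SILENCE_SEC) * 1000;
}

// Round trip: resume at 12.5s, second call interrupts 4s later → 16.5s.
const resumeAt = 1_000_000;
const start = virtualStart(resumeAt, 12.5);
console.log(capturedPosition(start, resumeAt + 4000)); // 16.5
```

The `Math.max(0, …)` clamp also covers the edge where the interruption fires during the leading silence itself, before any real audio has played.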
@@ -367,6 +575,12 @@ class AudioService {

     this.recordingPath = `${RNFS.CachesDirectoryPath}/aria_recording_${Date.now()}.mp4`;

+    // Foreground-Service VOR dem AudioRecord starten — sonst blockt Android
+    // den Background-Mic-Zugriff (foregroundServiceType=microphone muss zum
+    // Zeitpunkt des startRecorder() schon aktiv sein, sonst greifen die
+    // Background-Mic-Restrictions ab Android 11+).
+    await acquireBackgroundAudio('rec');
+
     // Aufnahme mit Metering starten
     await this.recorder.startRecorder(this.recordingPath, {
       AudioEncoderAndroid: AudioEncoderAndroidType.AAC,
@@ -388,11 +602,22 @@ class AudioService {
       if (db > -100) {
         this.vadBaselineSamples.push(db);
         if (this.vadBaselineSamples.length === VAD_BASELINE_SAMPLES) {
-          const avg = this.vadBaselineSamples.reduce((a, b) => a + b, 0) / VAD_BASELINE_SAMPLES;
-          this.vadAdaptiveSilenceDb = avg + VAD_SILENCE_OFFSET_DB;
-          this.vadAdaptiveSpeechDb = avg + VAD_SPEECH_OFFSET_DB;
-          const msg = `VAD: ambient=${avg.toFixed(0)}dB stille>${this.vadAdaptiveSilenceDb.toFixed(0)}dB`;
-          console.log('[Audio] %s speech>%s', msg, this.vadAdaptiveSpeechDb.toFixed(1));
+          // Minimum statt Mittelwert: robust gegen Spike-Samples (z.B. wenn
+          // der User direkt nach Wake-Word sofort spricht oder das Wake-Word-
+          // Echo noch im Mikro ist). Min ist der ruhigste Moment.
+          const lowest = Math.min(...this.vadBaselineSamples);
+          const rawSilence = lowest + VAD_SILENCE_OFFSET_DB;
+          const rawSpeech = lowest + VAD_SPEECH_OFFSET_DB;
+          // Cap auf einen vernuenftigen Bereich:
+          // - Silence-Schwelle nicht ueber -28dB (sonst zaehlt Hintergrund-
+          //   geraeusch dauerhaft als "Sprache" → VAD feuert nie)
+          // - Silence-Schwelle nicht unter -50dB (sonst zu strikt)
+          this.vadAdaptiveSilenceDb = Math.max(-50, Math.min(rawSilence, -28));
+          this.vadAdaptiveSpeechDb = Math.max(-40, Math.min(rawSpeech, -18));
+          const msg = `VAD: ambient=${lowest.toFixed(0)}dB stille>${this.vadAdaptiveSilenceDb.toFixed(0)}dB`;
+          console.log('[Audio] %s speech>%s (raw silence=%s speech=%s)',
+            msg, this.vadAdaptiveSpeechDb.toFixed(1),
+            rawSilence.toFixed(1), rawSpeech.toFixed(1));
           try { ToastAndroid.show(msg, ToastAndroid.SHORT); } catch {}
         }
       }
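The replacement above takes the *quietest* of the five 100ms baseline samples and then clamps both derived thresholds into fixed windows. The clamp is just a nested min/max; a standalone sketch using the offsets from the surrounding code (`VAD_SILENCE_OFFSET_DB = 6`, `VAD_SPEECH_OFFSET_DB = 12` — the `thresholds` helper itself is hypothetical):

```typescript
// Clamp helper mirroring the hunk: value limited to [lo, hi].
const clamp = (v: number, lo: number, hi: number) => Math.max(lo, Math.min(v, hi));

function thresholds(samples: number[]) {
  const lowest = Math.min(...samples); // quietest moment, not the average
  return {
    silenceDb: clamp(lowest + 6, -50, -28),  // + VAD_SILENCE_OFFSET_DB
    speechDb: clamp(lowest + 12, -40, -18),  // + VAD_SPEECH_OFFSET_DB
  };
}

// Quiet room with one loud spike (-20): the min ignores the spike,
// whereas the old average would have been dragged up by ~8 dB.
console.log(thresholds([-62, -60, -20, -61, -63])); // { silenceDb: -50, speechDb: -40 }
```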
@@ -425,11 +650,22 @@ class AudioService {
     this.speechDetected = false;
     this.speechStartTime = 0;
     // VAD-Adaptive zurueckgesetzt: Baseline wird in den ersten 500ms neu
-    // gemessen. Bis dahin gelten die Fallback-Schwellen — die sind etwas
-    // empfindlicher als die alten Werte (-38 statt -45 fuer Stille).
+    // gemessen. Bis dahin gelten die Fallback-Schwellen.
     this.vadBaselineSamples = [];
     this.vadAdaptiveSilenceDb = VAD_SILENCE_FALLBACK_DB;
     this.vadAdaptiveSpeechDb = VAD_SPEECH_FALLBACK_DB;
+
+    // Manueller Override aus Settings — wenn gesetzt, wird die adaptive
+    // Baseline-Messung uebersteuert. User-Wahl gewinnt vor Auto-Magic.
+    const dbOverride = await loadVadSilenceDbOverride();
+    if (dbOverride != null) {
+      this.vadAdaptiveSilenceDb = dbOverride;
+      this.vadAdaptiveSpeechDb = dbOverride + 10; // Speech klar ueber Stille
+      this.vadBaselineSamples = new Array(VAD_BASELINE_SAMPLES).fill(0); // Baseline-Sammeln deaktivieren
+      const msg = `VAD: manuell stille>${dbOverride}dB`;
+      console.log('[Audio] %s', msg);
+      try { ToastAndroid.show(msg, ToastAndroid.SHORT); } catch {}
+    }
     this.setState('recording');

     // Andere Apps waehrend der Aufnahme pausieren (Musik, Videos etc.)
@@ -558,8 +794,15 @@ class AudioService {
   /** Base64-kodiertes Audio in die Queue stellen und abspielen */
   async playAudio(base64Data: string): Promise<void> {
     if (!base64Data) return;
+    // Mute-Flag respektieren — robust gegen Race-Conditions zwischen User-
+    // Klick auf Mute und einem TTS-Chunk der im selben Tick eintrifft.
+    if (this._muted) {
+      console.log('[Audio] playAudio: muted=true → skip');
+      return;
+    }
     this.audioQueue.push(base64Data);
+    console.log('[Audio] playAudio: queued (queue=%d isPlaying=%s pausedForCall=%s)',
+      this.audioQueue.length, this.isPlaying, this._pausedForCall);
     if (!this.isPlaying) {
       this._playNext();
     }
@@ -625,7 +868,16 @@ class AudioService {
     final?: boolean;
     silent?: boolean;
   }): Promise<string> {
-    const silent = !!payload.silent;
+    // _stoppedMessageId: User hat diese Antwort mid-Wiedergabe gestoppt
+    // (Mute geklickt). Auch wenn Mute jetzt wieder aus ist, soll diese
+    // Antwort nicht weiterspielen. Erst eine neue messageId resetted das.
+    const incomingMsgId = payload.messageId || '';
+    const stoppedByUser = !!this._stoppedMessageId && incomingMsgId === this._stoppedMessageId;
+    // Globaler Mute-Flag uebersteuert das per-Call silent — verhindert
+    // Race-Conditions wenn der User zwischen Chunks den Mute-Knopf drueckt.
+    // _pausedForCall: AudioTrack ist gestoppt waehrend Anruf — Chunks weiter
+    // sammeln (fuer WAV-Cache), aber NICHT in den Player schicken.
+    const silent = !!payload.silent || this._muted || this._pausedForCall || stoppedByUser;
     if (!silent && !PcmStreamPlayer) {
       console.warn('[Audio] PcmStreamPlayer Native Module nicht verfuegbar');
       return '';
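The effective `silent` flag above is an OR of four independent reasons; any one of them keeps PCM out of the player while chunks still accumulate for the WAV cache. A tiny sketch of that decision (the `isSilent` helper and its option names are illustrative, not part of the diff):

```typescript
// Mirrors the silent computation in the hunk above: one true reason
// suffices to suppress playback without discarding the incoming chunks.
function isSilent(opts: {
  requested: boolean;      // payload.silent from the caller
  muted: boolean;          // global mute toggle (_muted)
  pausedForCall: boolean;  // AudioTrack stopped during a phone call
  stoppedByUser: boolean;  // this messageId was stopped mid-playback
}): boolean {
  return opts.requested || opts.muted || opts.pausedForCall || opts.stoppedByUser;
}

console.log(isSilent({ requested: false, muted: false, pausedForCall: true, stoppedByUser: false })); // true
console.log(isSilent({ requested: false, muted: false, pausedForCall: false, stoppedByUser: false })); // false
```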
@@ -651,6 +903,28 @@ class AudioService {
       this.pcmBuffer = [];
       this.pcmBytesCollected = 0;
     }
+    // Resume-Sound stoppen falls noch aktiv (User hat nach Anruf eine
+    // neue Frage gestellt — die alte interruptierte Antwort ist obsolet).
+    if (this.resumeSound) {
+      try { this.resumeSound.stop(); this.resumeSound.release(); } catch {}
+      this.resumeSound = null;
+    }
+    // Pending Auto-Resume verwerfen wenn die neue Antwort eine andere
+    // messageId hat. Sonst spielt nach 30s-Wartezeit der Resume die
+    // ueberholte Antwort ab.
+    if (this.pausedMessageId && this.pausedMessageId !== messageId) {
+      console.log('[Audio] Neue TTS-Antwort (msgId=%s) — Auto-Resume fuer %s verworfen',
+        messageId, this.pausedMessageId);
+      this.pausedMessageId = '';
+      this.pausedPosition = 0;
+    }
+    // Stop-Marker zuruecksetzen wenn neue messageId — neue Antwort darf
+    // wieder normal abspielen, egal ob Mute zwischendurch aktiv war.
+    if (this._stoppedMessageId && this._stoppedMessageId !== messageId) {
+      console.log('[Audio] Neue Antwort (msgId=%s) — Stop-Marker fuer %s zurueckgesetzt',
+        messageId, this._stoppedMessageId);
+      this._stoppedMessageId = '';
+    }
     this.pcmStreamActive = true;
     this.pcmMessageId = messageId;
     this.pcmSampleRate = sampleRate;
@@ -685,13 +959,16 @@ class AudioService {

     if (isFinal) {
       if (!silent) {
-        // end() now only resolves once the native writer thread is done
-        // (all samples played out) — after that, release the AudioFocus
-        // with a delay so Spotify/YouTube do not ramp back up in the
-        // micro-gap between two ARIA replies. If a new stream starts
-        // within FOCUS_RELEASE_DELAY_MS, the release is cancelled.
+        // end() tells the writer "no more chunks". But WE do NOT release
+        // the AudioFocus here — the writer may need another 30+ seconds
+        // until the buffer has really been played out. The release is
+        // triggered by the native "PcmPlaybackFinished" event once the
+        // AudioTrack is truly at the end (see ensurePlaybackFinishedListener).
         try { await PcmStreamPlayer!.end(); } catch {}
-        this._releaseFocusDeferred();
+        // Notify the playbackFinished listeners (UI logic)
+        this.playbackFinishedListeners.forEach(cb => {
+          try { cb(); } catch (e) { console.warn('[Audio] playbackFinished cb err:', e); }
+        });
       }
       this.pcmStreamActive = false;

@@ -765,7 +1042,10 @@ class AudioService {
     }
   }

-  /** Queue and play audio from a local file (file:// path). */
+  /** Queue and play audio from a local file (file:// path).
+   * Additionally sets playbackStartTime + currentPlaybackMsgId so that a
+   * call during this playback is captured correctly (without this
+   * tracking, captureInterruption yields nothing → no auto-resume). */
   async playFromPath(filePath: string): Promise<void> {
     if (!filePath) return;
     try {
@@ -774,6 +1054,14 @@ class AudioService {
         console.warn('[Audio] Cache-Datei existiert nicht mehr:', cleanPath);
         return;
       }
+      // Use the file name without .wav as the messageId (whether a UUID or some other ID)
+      const fileMatch = cleanPath.match(/([^/\\]+)\.wav$/i);
+      const msgId = fileMatch ? fileMatch[1] : '';
+      console.log('[Audio] playFromPath: cleanPath=%s → msgId=%s', cleanPath, msgId || '(leer)');
+      if (msgId) {
+        this.currentPlaybackMsgId = msgId;
+        this.playbackStartTime = Date.now() - this.LEADING_SILENCE_SEC * 1000;
+      }
       const b64 = await RNFS.readFile(cleanPath, 'base64');
       this.playAudio(b64);
     } catch (err) {
@@ -802,6 +1090,15 @@ class AudioService {
   }

   private _firePlaybackStarted(): void {
+    // Tracking for auto-resume after a call pause: ONLY set this while a
+    // PCM stream is running (live TTS). For the play button / resume
+    // sound, the caller (playFromPath / _playFromPathAtPosition) has
+    // already set the tracking correctly with the msgId from the path —
+    // otherwise we would overwrite it here with an empty pcmMessageId.
+    if (this.pcmMessageId) {
+      this.playbackStartTime = Date.now();
+      this.currentPlaybackMsgId = this.pcmMessageId;
+    }
     this.playbackStartedListeners.forEach(cb => {
       try { cb(); } catch (e) { console.warn('[Audio] playbackStarted listener err:', e); }
     });
@@ -854,11 +1151,13 @@ class AudioService {
     }

     this.currentSound = sound;
+    console.log('[Audio] Sound.play startet (path=%s)', soundPath);

     // Prepare the next audio already while this one is playing
     this._preloadNext();

     sound.play((success) => {
+      console.log('[Audio] Sound.play callback: success=%s queue=%d', success, this.audioQueue.length);
       if (!success) console.warn('[Audio] Wiedergabe fehlgeschlagen');
       sound.release();
       this.currentSound = null;
@@ -885,8 +1184,51 @@ class AudioService {
     }
   }

+  /** Mute: all incoming TTS chunks/WAVs are ignored until unmuted again.
+   * More robust than a React ref because there is no re-render race here
+   * — the bridge can deliver a chunk in the same JS tick in which the
+   * user clicked mute. */
+  private _muted: boolean = false;
+  /** A call is running → chunks are only pushed into the cache buffer,
+   * not played. Set in pauseForCall, cleared in endCallPause/resumeFrom-
+   * Interruption. */
+  private _pausedForCall: boolean = false;
+  /** When the user presses mute mid-playback: remember the messageId of
+   * the ABORTED reply. Subsequent chunks of that msgId are silently
+   * ignored, even when the user switches mute back off — no "resume
+   * mid-reply". A NEW messageId resets this, then playback is normal again. */
+  private _stoppedMessageId: string = '';
+
+  setMuted(muted: boolean): void {
+    console.log('[Audio] setMuted: %s (currentSound=%s pcmStreamActive=%s)',
+      muted, this.currentSound ? 'aktiv' : 'null', this.pcmStreamActive);
+    this._muted = muted;
+    if (muted) {
+      // Mark the currently running reply as "discarded" — subsequent
+      // chunks of this msgId stay silent even if the user switches mute
+      // right back off. Only a NEW reply may speak again.
+      const activeMsgId = this.pcmMessageId || this.currentPlaybackMsgId;
+      if (activeMsgId) {
+        this._stoppedMessageId = activeMsgId;
+        console.log('[Audio] Antwort %s als gestoppt markiert', activeMsgId);
+      }
+      this.stopPlayback();
+    }
+  }
+
+  isMuted(): boolean { return this._muted; }

   /** Stop the running playback + clear the queue */
   stopPlayback(): void {
+    // Idempotent: when nothing is active anymore, do NOT kick off another
+    // focus-release/kick cycle — re-renders often trigger setMuted several
+    // times in a row, and every extra kick makes Spotify pause again briefly.
+    const hasAnything = !!(this.currentSound || this.resumeSound || this.preloadedSound
+      || this.pcmStreamActive || this.audioQueue.length || this.isPlaying);
+    if (!hasAnything) return;
+    console.log('[Audio] stopPlayback: currentSound=%s queue=%d pcm=%s',
+      this.currentSound ? 'aktiv' : 'null', this.audioQueue.length, this.pcmStreamActive);
+    // Also stop the foreground service — otherwise the notification hangs
+    // around when playback is aborted (call, cancel, barge-in).
+    stopBackgroundAudio().catch(() => {});
     this.audioQueue = [];
     this.isPlaying = false;
     if (this.currentSound) {
@@ -894,21 +1236,31 @@ class AudioService {
       this.currentSound.release();
       this.currentSound = null;
     }
+    if (this.resumeSound) {
+      this.resumeSound.stop();
+      this.resumeSound.release();
+      this.resumeSound = null;
+    }
     if (this.preloadedSound) {
       this.preloadedSound.release();
       this.preloadedSound = null;
       if (this.preloadedPath) RNFS.unlink(this.preloadedPath).catch(() => {});
       this.preloadedPath = '';
     }
-    // Also hard-stop the PCM stream (cancel/abort)
-    if (this.pcmStreamActive) {
-      PcmStreamPlayer?.stop().catch(() => {});
-      this.pcmStreamActive = false;
-      this.pcmBuffer = [];
-      this.pcmBytesCollected = 0;
-      this.pcmMessageId = '';
-    }
-    // Release the audio focus immediately — the user aborted explicitly
+    // Also hard-stop the PCM stream (cancel/abort).
+    // pcmStreamActive is already set to false on the isFinal chunk — but
+    // the AudioTrack keeps playing out of its buffer for seconds. So
+    // ALWAYS call stop(), without checking the flag (it is idempotent).
+    PcmStreamPlayer?.stop().catch(() => {});
+    this.pcmStreamActive = false;
+    this.pcmBuffer = [];
+    this.pcmBytesCollected = 0;
+    this.pcmMessageId = '';
+    // Release the audio focus immediately — the user aborted explicitly.
+    // Our focus was TRANSIENT, so Spotify resumes automatically on
+    // abandon. We removed the earlier kickReleaseMedia: it requested
+    // USAGE_MEDIA with GAIN (permanent), which Spotify interpreted as a
+    // "user-action stop", and that prevented its auto-resume.
     this._cancelDeferredFocusRelease();
     AudioFocus?.release().catch(() => {});
   }
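The silent-gating introduced in these hunks combines four independent conditions (per-call `silent`, global mute, call pause, stop marker) plus a reset on a new messageId. As a standalone sketch — names (`GateState`, `shouldPlayChunk`, `onNewMessage`) are illustrative, not from the actual service:

```typescript
// Hypothetical, self-contained model of the chunk-gating decision above.
interface GateState {
  muted: boolean;           // global mute flag (_muted)
  pausedForCall: boolean;   // a phone call is in progress (_pausedForCall)
  stoppedMessageId: string; // reply aborted mid-playback by mute (_stoppedMessageId)
}

// A chunk is played only when no global condition silences it and its
// messageId is not the one the user explicitly stopped.
function shouldPlayChunk(state: GateState, messageId: string, perCallSilent: boolean): boolean {
  const stoppedByUser = !!state.stoppedMessageId && messageId === state.stoppedMessageId;
  return !perCallSilent && !state.muted && !state.pausedForCall && !stoppedByUser;
}

// A new messageId clears the stop marker — a fresh reply may speak again.
function onNewMessage(state: GateState, messageId: string): GateState {
  if (state.stoppedMessageId && state.stoppedMessageId !== messageId) {
    return { ...state, stoppedMessageId: '' };
  }
  return state;
}
```

The point of the marker: unmuting does not revive the aborted reply; only a different messageId does.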
@@ -950,19 +1302,29 @@ class AudioService {
     }
   }

-  /** Delete old recording and TTS files from the cache (>30s old). */
-  private async _cleanupStaleCacheFiles(): Promise<void> {
+  /** Delete old recording and TTS files from the cache.
+   * Default 30s — used on mic start (a short lifetime is enough).
+   * App start uses 5min so files that are still in active use are not hit. */
+  private async _cleanupStaleCacheFiles(maxAgeMs: number = 30000): Promise<void> {
     try {
       const files = await RNFS.readDir(RNFS.CachesDirectoryPath);
       const now = Date.now();
+      let removed = 0;
+      let freedBytes = 0;
       for (const f of files) {
         if (!f.isFile()) continue;
         if (!f.name.startsWith('aria_recording_') && !f.name.startsWith('aria_tts_')) continue;
         const age = now - (f.mtime ? f.mtime.getTime() : 0);
-        if (age > 30000) {
+        if (age > maxAgeMs) {
+          freedBytes += parseInt(f.size as any, 10) || 0;
           await RNFS.unlink(f.path).catch(() => {});
+          removed += 1;
         }
       }
+      if (removed > 0) {
+        console.log('[Audio] Cache-Cleanup: %d Files entfernt, %.1fMB freigegeben',
+          removed, freedBytes / 1024 / 1024);
+      }
     } catch {
       // silent — cleanup is best-effort
     }
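The age-based cleanup can be exercised outside React Native with plain Node — a sketch assuming the same prefix and age rules, with RNFS swapped for `node:fs` (the function name is illustrative):

```typescript
import * as fs from 'node:fs';
import * as os from 'node:os';
import * as path from 'node:path';

// Sketch of the same age-based cleanup using node:fs instead of RNFS.
// Prefixes and the 30s default mirror the hunk above; the rest is illustrative.
function cleanupStaleFiles(dir: string, maxAgeMs: number = 30000): number {
  let removed = 0;
  const now = Date.now();
  for (const name of fs.readdirSync(dir)) {
    // only the app's own recording/TTS artifacts are candidates
    if (!name.startsWith('aria_recording_') && !name.startsWith('aria_tts_')) continue;
    const full = path.join(dir, name);
    const st = fs.statSync(full);
    if (!st.isFile()) continue;
    if (now - st.mtimeMs > maxAgeMs) {
      fs.unlinkSync(full); // best-effort in the real service; strict here for clarity
      removed += 1;
    }
  }
  return removed;
}
```

Filtering by a fixed prefix is what makes the aggressive 30s default safe: unrelated cache files are never touched.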
@@ -989,6 +1351,43 @@ class AudioService {
       // silent
     }
   }

+  /** Current size of the TTS cache. */
+  async getTtsCacheSize(): Promise<{ count: number; totalMB: number }> {
+    let count = 0;
+    let total = 0;
+    try {
+      const dir = `${RNFS.DocumentDirectoryPath}/tts_cache`;
+      if (await RNFS.exists(dir)) {
+        const files = await RNFS.readDir(dir);
+        for (const f of files) {
+          if (!f.isFile() || !f.name.endsWith('.wav')) continue;
+          count += 1;
+          total += parseInt(f.size as any, 10) || 0;
+        }
+      }
+    } catch {}
+    return { count, totalMB: total / 1024 / 1024 };
+  }
+
+  /** Empty the TTS cache completely (settings button). */
+  async clearTtsCache(): Promise<{ removed: number; freedMB: number }> {
+    let removed = 0;
+    let freed = 0;
+    try {
+      const dir = `${RNFS.DocumentDirectoryPath}/tts_cache`;
+      if (!(await RNFS.exists(dir))) return { removed: 0, freedMB: 0 };
+      const files = await RNFS.readDir(dir);
+      for (const f of files) {
+        if (!f.isFile() || !f.name.endsWith('.wav')) continue;
+        const size = parseInt(f.size as any, 10) || 0;
+        await RNFS.unlink(f.path).catch(() => {});
+        removed += 1;
+        freed += size;
+      }
+    } catch {}
+    return { removed, freedMB: freed / 1024 / 1024 };
+  }
 }

 // Singleton
@@ -0,0 +1,76 @@
+/**
+ * Background audio: ARIA's TTS, mic recording, and wake-word listening
+ * should keep running even with the app minimized. For that we start a
+ * foreground service with foregroundServiceType=mediaPlayback|microphone
+ * that shows a persistent notification while any audio slot is active.
+ *
+ * Several components can "hold" the service independently:
+ *   - 'tts'  : ARIA is speaking
+ *   - 'rec'  : a recording is running
+ *   - 'wake' : the wake word is listening passively (ear active)
+ *
+ * As long as at least one slot is active, the service runs. When all
+ * slots are empty, it is stopped. The notification text adapts to the
+ * highest-priority slot (tts > rec > wake).
+ */
+
+import { NativeModules } from 'react-native';
+
+interface BackgroundAudioNative {
+  start(reason: string): Promise<boolean>;
+  stop(): Promise<boolean>;
+}
+
+const { BackgroundAudio } = NativeModules as { BackgroundAudio?: BackgroundAudioNative };
+
+type Slot = 'tts' | 'rec' | 'wake';
+
+const slots = new Set<Slot>();
+
+// Priority for the notification text — highest first.
+const PRIORITY: Slot[] = ['tts', 'rec', 'wake'];
+
+function topReason(): string {
+  for (const s of PRIORITY) {
+    if (slots.has(s)) return s;
+  }
+  return '';
+}
+
+async function applyState(): Promise<void> {
+  if (!BackgroundAudio) return;
+  if (slots.size === 0) {
+    try { await BackgroundAudio.stop(); } catch {}
+    console.log('[BackgroundAudio] Service gestoppt (keine Slots)');
+    return;
+  }
+  const reason = topReason();
+  try {
+    await BackgroundAudio.start(reason);
+    console.log('[BackgroundAudio] Service aktiv (slot=%s, slots=%s)',
+      reason, [...slots].join('+'));
+  } catch (err: any) {
+    console.warn('[BackgroundAudio] start fehlgeschlagen:', err?.message || err);
+  }
+}
+
+export async function acquireBackgroundAudio(slot: Slot): Promise<void> {
+  if (slots.has(slot)) return;
+  slots.add(slot);
+  await applyState();
+}
+
+export async function releaseBackgroundAudio(slot: Slot): Promise<void> {
+  if (!slots.has(slot)) return;
+  slots.delete(slot);
+  await applyState();
+}
+
+export function backgroundAudioActive(): boolean {
+  return slots.size > 0;
+}
+
+// --- Legacy API (tts slot only) — for call sites that do not know about
+// the slot system yet. Maps onto the 'tts' slot. ---
+export const startBackgroundAudio = () => acquireBackgroundAudio('tts');
+export const stopBackgroundAudio = () => releaseBackgroundAudio('tts');
@@ -0,0 +1,41 @@
+/**
+ * Verbose-logging toggle: console.log can be muted globally.
+ * console.warn/console.error always stay on — you always want to see errors.
+ *
+ * Default: on (true). Toggled via Settings → Protokoll → Verbose Logging.
+ * On startup the stored value is loaded; until then we log normally.
+ */
+
+import AsyncStorage from '@react-native-async-storage/async-storage';
+
+export const VERBOSE_LOGGING_KEY = 'aria_verbose_logging';
+
+// Save the original console.log so we can re-arm the wrapper at any time
+// (otherwise toggling back on after off would be dead).
+const originalLog = console.log.bind(console);
+const noop = () => {};
+
+let _verbose = true;
+
+function applyState(): void {
+  console.log = _verbose ? originalLog : noop;
+}
+
+/** Load the value from AsyncStorage and apply it. Call on app start. */
+export async function initLogger(): Promise<void> {
+  try {
+    const v = await AsyncStorage.getItem(VERBOSE_LOGGING_KEY);
+    _verbose = v !== 'false'; // default: true
+  } catch {}
+  applyState();
+}
+
+export function isVerboseLogging(): boolean {
+  return _verbose;
+}
+
+export function setVerboseLogging(verbose: boolean): void {
+  _verbose = verbose;
+  applyState();
+  AsyncStorage.setItem(VERBOSE_LOGGING_KEY, String(verbose)).catch(() => {});
+}
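The save-and-restore trick in the logger above is the whole mechanism: capture the original `console.log` binding once, then swap between it and a no-op. A minimal sketch without AsyncStorage (names are illustrative):

```typescript
// Capture the original binding ONCE, before any swap — this is what makes
// toggling back on possible.
const originalLog = console.log.bind(console);
const noop = () => {};
let verbose = true;

function setVerbose(v: boolean): void {
  verbose = v;
  console.log = verbose ? originalLog : noop;
}
```

If the original binding were captured lazily inside the toggle, turning verbose off and on again could capture the no-op itself and leave logging permanently dead.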
@@ -1,14 +1,19 @@
 /**
- * PhoneCall service — pauses TTS playback when the phone rings or a call
- * is in progress. Native binding to PhoneCallModule.kt.
+ * PhoneCall service — pauses ARIA during phone calls:
  *
- * On "ringing" or "offhook", audioService.haltAllPlayback() is called —
- * ARIA goes silent immediately. After hang-up nothing happens automatically
- * (the audio does not come back); the user would have to request the
- * reply again manually (play button on the message).
+ * 1. Classic cellular call via TelephonyManager (PhoneCallModule.kt)
+ *    Status: idle / ringing / offhook
  *
- * The READ_PHONE_STATE permission must be granted once by the user —
- * if not, start() fails silently and everything else works as before.
+ * 2. VoIP calls (WhatsApp, Signal, Discord, Telegram, Teams, ...) via the
+ *    AudioFocus loss event (AudioFocusModule.kt). Those apps request
+ *    AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE when a call comes in — we get a
+ *    "loss" event and react exactly as on RINGING.
+ *
+ * In both cases audioService.haltAllPlayback() + wakeWordService.
+ * pauseForCall() are called. On call end (idle / focus gain) → resumeFromCall.
+ *
+ * The READ_PHONE_STATE permission is only needed for path 1 — path 2 needs
+ * no extra permission because our own AudioFocus listener fires.
 */

 import {
@@ -19,6 +24,7 @@ import {
   ToastAndroid,
 } from 'react-native';
 import audioService from './audio';
+import wakeWordService from './wakeword';

 interface PhoneCallNative {
   start(): Promise<boolean>;
@@ -32,75 +38,183 @@ type PhoneState = 'idle' | 'ringing' | 'offhook';
 class PhoneCallService {
   private started: boolean = false;
   private subscription: { remove: () => void } | null = null;
+  private focusSubscription: { remove: () => void } | null = null;
   private lastState: PhoneState = 'idle';
+  /** So that the resume after a VoIP loss does not fire twice when the
+   * TelephonyManager IDLE event also arrives. */
+  private interruptedByFocus: boolean = false;

   async start(): Promise<boolean> {
-    if (this.started || !PhoneCall) return false;
-    if (Platform.OS !== 'android') return false;
+    if (this.started || Platform.OS !== 'android') return false;

-    // Get the runtime permission (only needed once)
+    // 1. ALWAYS register the AudioFocus listener — covers VoIP calls
+    // (WhatsApp, Signal, Discord etc.), needs no permission.
     try {
-      const granted = await PermissionsAndroid.request(
-        PermissionsAndroid.PERMISSIONS.READ_PHONE_STATE,
-        {
-          title: 'ARIA Cockpit — Anruf-Erkennung',
-          message: 'Damit ARIA bei einem eingehenden Anruf nicht weiterredet, '
-            + 'darf die App den Anruf-Status sehen (Klingeln/Aktiv/Aufgelegt). '
-            + 'Es werden keine Anrufdaten gelesen oder gespeichert.',
-          buttonPositive: 'Erlauben',
-          buttonNegative: 'Spaeter',
-        },
+      const focusEmitter = new NativeEventEmitter(NativeModules.AudioFocus as any);
+      this.focusSubscription = focusEmitter.addListener(
+        'AudioFocusChanged',
+        (e: { type: 'loss' | 'loss_transient' | 'gain' }) => this._onFocusChanged(e.type),
       );
-      if (granted !== PermissionsAndroid.RESULTS.GRANTED) {
-        console.warn('[PhoneCall] READ_PHONE_STATE Permission abgelehnt');
-        return false;
-      }
-    } catch (err) {
-      console.warn('[PhoneCall] Permission-Anfrage gescheitert', err);
+      console.log('[PhoneCall] AudioFocus-Listener aktiv (fuer VoIP-Calls)');
+    } catch (err: any) {
+      console.warn('[PhoneCall] AudioFocus-Subscription gescheitert', err?.message || err);
     }

-    try {
-      const ok = await PhoneCall.start();
-      if (!ok) {
-        console.warn('[PhoneCall] Native start() lieferte false (Permission?)');
-        return false;
-      }
-      const emitter = new NativeEventEmitter(NativeModules.PhoneCall as any);
-      this.subscription = emitter.addListener('PhoneCallStateChanged', (e: { state: PhoneState }) => {
-        this._onStateChanged(e.state);
-      });
-      this.started = true;
-      console.log('[PhoneCall] Listener aktiv');
-      return true;
-    } catch (err: any) {
-      console.warn('[PhoneCall] start gescheitert:', err?.message || err);
-      return false;
-    }
+    // 2. TelephonyManager listener — for classic cellular calls
+    if (PhoneCall) {
+      try {
+        const granted = await PermissionsAndroid.request(
+          PermissionsAndroid.PERMISSIONS.READ_PHONE_STATE,
+          {
+            title: 'ARIA Cockpit — Anruf-Erkennung',
+            message: 'Damit ARIA bei einem eingehenden Anruf nicht weiterredet, '
+              + 'darf die App den Anruf-Status sehen (Klingeln/Aktiv/Aufgelegt). '
+              + 'Es werden keine Anrufdaten gelesen oder gespeichert.',
+            buttonPositive: 'Erlauben',
+            buttonNegative: 'Spaeter',
+          },
+        );
+        if (granted === PermissionsAndroid.RESULTS.GRANTED) {
+          const ok = await PhoneCall.start();
+          if (ok) {
+            const emitter = new NativeEventEmitter(NativeModules.PhoneCall as any);
+            this.subscription = emitter.addListener(
+              'PhoneCallStateChanged',
+              (e: { state: PhoneState }) => this._onStateChanged(e.state),
+            );
+            console.log('[PhoneCall] TelephonyManager-Listener aktiv');
+          }
+        } else {
+          console.warn('[PhoneCall] READ_PHONE_STATE abgelehnt — VoIP-Calls werden trotzdem ueber AudioFocus erkannt');
+        }
+      } catch (err: any) {
+        console.warn('[PhoneCall] TelephonyManager-Setup gescheitert:', err?.message || err);
+      }
+    }
+
+    this.started = true;
+    return true;
   }

   async stop(): Promise<void> {
-    if (!this.started || !PhoneCall) return;
-    try {
-      this.subscription?.remove();
+    if (!this.started) return;
+    try { this.subscription?.remove(); } catch {}
+    try { this.focusSubscription?.remove(); } catch {}
     this.subscription = null;
-      await PhoneCall.stop();
-    } catch {}
+    this.focusSubscription = null;
+    if (PhoneCall) {
+      try { await PhoneCall.stop(); } catch {}
+    }
     this.started = false;
     this.lastState = 'idle';
+    this.interruptedByFocus = false;
   }

   private _onStateChanged(state: PhoneState): void {
     if (state === this.lastState) return;
-    console.log('[PhoneCall] State: %s → %s', this.lastState, state);
+    const prev = this.lastState;
+    console.log('[PhoneCall] State: %s → %s', prev, state);
     this.lastState = state;
     if (state === 'ringing' || state === 'offhook') {
-      audioService.haltAllPlayback(`Telefon-State: ${state}`);
-      ToastAndroid.show(
-        state === 'ringing' ? 'Anruf — ARIA pausiert' : 'Im Gespraech — ARIA pausiert',
-        ToastAndroid.SHORT,
-      );
+      this._haltForCall(state === 'ringing' ? 'Anruf — ARIA pausiert' : 'Im Gespraech — ARIA pausiert');
+    } else if (state === 'idle' && prev !== 'idle') {
+      // If we already paused due to an AudioFocus loss, do NOT resume
+      // twice. The focus gain event triggers the resume.
+      if (!this.interruptedByFocus) {
+        this._resumeAfterCall('Anruf beendet — ARIA wieder aktiv');
+      }
     }
-    // idle: nothing automatic — the user should not re-trigger anything unintentionally
   }

+  /** AudioFocus loss = some other app has taken over the focus. That
+   * happens on VoIP calls (what we want) BUT also with normal audio
+   * players (another player starting, a notification sound, even our own
+   * Sound calls on the play button). So we check the AudioMode — only
+   * IN_CALL (2) or IN_COMMUNICATION (3) counts as a call. */
+  private async _onFocusChanged(type: 'loss' | 'loss_transient' | 'gain'): Promise<void> {
+    if (type === 'loss' || type === 'loss_transient') {
+      // Already paused by the classic TelephonyManager? Then do not double up.
+      if (this.lastState === 'ringing' || this.lastState === 'offhook') return;
+      // Check the mode — only handle real calls.
+      let mode = -1;
+      try { mode = await (NativeModules.AudioFocus as any)?.getMode?.(); } catch {}
+      if (mode !== 2 && mode !== 3) {
+        // NORMAL mode → no call (e.g. Stefan pressed the play button or
+        // Spotify pushed its way back in). No toasts.
+        console.log('[PhoneCall] FOCUS_LOSS ignoriert (AudioMode=%d, kein Call)', mode);
+        return;
+      }
+      this.interruptedByFocus = true;
+      this._haltForCall('Anruf erkannt (VoIP) — ARIA pausiert');
+      // Poll, because GAIN does not arrive reliably (we release the focus
+      // ourselves on halt → no automatic GAIN). AudioMode != IN_COMMUNICATION
+      // = call over.
+      this._startVoipResumePoll();
+    } else if (type === 'gain') {
+      if (this.interruptedByFocus) {
+        this.interruptedByFocus = false;
+        this._stopVoipResumePoll();
+        this._resumeAfterCall('Audio frei — ARIA wieder aktiv');
+      }
+    }
+  }

+  /** Polling fallback: every 3s, check whether the AudioMode is NORMAL again. */
+  private voipPollTimer: ReturnType<typeof setInterval> | null = null;
+  private _startVoipResumePoll(): void {
+    if (this.voipPollTimer) return;
+    this.voipPollTimer = setInterval(async () => {
+      if (!this.interruptedByFocus) {
+        this._stopVoipResumePoll();
+        return;
+      }
+      try {
+        const mode = await (NativeModules.AudioFocus as any)?.getMode?.();
+        // 0 = MODE_NORMAL — the call is over
+        if (typeof mode === 'number' && mode === 0) {
+          this.interruptedByFocus = false;
+          this._stopVoipResumePoll();
+          this._resumeAfterCall('Anruf beendet — ARIA wieder aktiv');
+        }
+      } catch {}
+    }, 3000);
+  }
+  private _stopVoipResumePoll(): void {
+    if (this.voipPollTimer) {
+      clearInterval(this.voipPollTimer);
+      this.voipPollTimer = null;
+    }
+  }

+  private _haltForCall(toast: string): void {
+    // Remember the position before we kill the stream — for auto-resume.
+    audioService.captureInterruption();
+    // pauseForCall (instead of haltAllPlayback): pcmBuffer + messageId
+    // stay, further chunks keep being collected so that isFinal writes the WAV.
+    audioService.pauseForCall(toast);
+    wakeWordService.pauseForCall().catch(() => {});
+    ToastAndroid.show(toast, ToastAndroid.SHORT);
+  }

+  private _resumeAfterCall(toast: string): void {
+    // Lift the call pause — new chunks may be played directly again
+    // (in case the bridge has not sent isFinal mid-call yet).
+    audioService.endCallPause();
+    wakeWordService.resumeFromCall().catch(() => {});
+    ToastAndroid.show(toast, ToastAndroid.SHORT);
+    // Wait 800ms before the auto-resume — otherwise ARIA's new focus
+    // request collides with Spotify's auto-resume after call end. After
+    // hang-up the system is still transitioning out of IN_CALL mode;
+    // Spotify watches for focus gain and would immediately see LOSS again
+    // → stays paused. With the delay: Spotify resumes briefly, then ARIA
+    // pauses it again properly. If ARIA has nothing pending, Spotify
+    // simply stays on.
+    setTimeout(() => {
+      audioService.resumeFromInterruption(30000).then(ok => {
+        if (ok) {
+          console.log('[PhoneCall] Auto-Resume von gemerkter Position gestartet');
+        }
+      }).catch(() => {});
+    }, 800);
+  }
 }
|
|
||||||
|
|||||||
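The 800 ms delayed resume above is the core of the call-handoff trick: wait out the IN_CALL mode transition, then resume only if something is actually pending. A minimal Python sketch of that scheduling pattern (names and return values are illustrative, not from the app):

```python
import asyncio

async def resume_after_call(has_pending: bool, delay_s: float = 0.8) -> str:
    # Wait out the IN_CALL -> NORMAL transition so the other player's
    # auto-resume wins the first audio-focus exchange.
    await asyncio.sleep(delay_s)
    if not has_pending:
        # Nothing to resume: leave the other app's playback alone.
        return "left-alone"
    # Now request focus again and resume from the remembered position.
    return "resumed"

print(asyncio.run(resume_after_call(True, delay_s=0.01)))   # resumed
print(asyncio.run(resume_after_call(False, delay_s=0.01)))  # left-alone
```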
@@ -50,28 +50,69 @@ class UpdateService {
     });
   }

-  /** Cleans up old downloaded APK files from the cache. */
-  private async cleanupOldApks(): Promise<void> {
-    try {
-      const files = await RNFS.readDir(RNFS.CachesDirectoryPath);
-      const apks = files.filter(f => /\.apk$/i.test(f.name));
-      let freed = 0;
-      for (const f of apks) {
-        try {
-          const size = parseInt(f.size as any, 10) || 0;
-          await RNFS.unlink(f.path);
-          freed += size;
-          console.log(`[Update] Alte APK geloescht: ${f.name} (${(size / 1024 / 1024).toFixed(1)}MB)`);
-        } catch (err: any) {
-          console.warn(`[Update] APK-Loeschen fehlgeschlagen: ${f.name} (${err?.message || err})`);
-        }
-      }
-      if (apks.length > 0) {
-        console.log(`[Update] Cleanup fertig: ${apks.length} APKs entfernt, ${(freed / 1024 / 1024).toFixed(1)}MB freigegeben`);
-      }
-    } catch (err: any) {
-      console.warn(`[Update] Cleanup-Fehler: ${err?.message || err}`);
-    }
-  }
+  /** Searches everywhere .apk files could be lying around. */
+  private async _apkSearchDirs(): Promise<string[]> {
+    const dirs = [RNFS.CachesDirectoryPath, RNFS.DocumentDirectoryPath];
+    if ((RNFS as any).ExternalCachesDirectoryPath) {
+      dirs.push((RNFS as any).ExternalCachesDirectoryPath);
+    }
+    if (RNFS.ExternalDirectoryPath) {
+      dirs.push(RNFS.ExternalDirectoryPath);
+    }
+    return dirs;
+  }
+
+  /** Cleans up old downloaded APK files from the app directories.
+   * Public so Settings can use the "Clear update cache" button. */
+  async cleanupOldApks(keepCurrentName?: string): Promise<{ removed: number; freedMB: number }> {
+    const dirs = await this._apkSearchDirs();
+    let removed = 0;
+    let freed = 0;
+    for (const dir of dirs) {
+      try {
+        if (!(await RNFS.exists(dir))) continue;
+        const files = await RNFS.readDir(dir);
+        const apks = files.filter(f => /\.apk$/i.test(f.name));
+        for (const f of apks) {
+          if (keepCurrentName && f.name === keepCurrentName) continue;
+          try {
+            const size = parseInt(f.size as any, 10) || 0;
+            await RNFS.unlink(f.path);
+            removed += 1;
+            freed += size;
+            console.log(`[Update] APK geloescht: ${f.path} (${(size / 1024 / 1024).toFixed(1)}MB)`);
+          } catch (err: any) {
+            console.warn(`[Update] APK-Loeschen fehlgeschlagen: ${f.path} (${err?.message || err})`);
+          }
+        }
+      } catch (err: any) {
+        console.warn(`[Update] Cleanup-Fehler in ${dir}: ${err?.message || err}`);
+      }
+    }
+    const freedMB = freed / 1024 / 1024;
+    if (removed > 0) {
+      console.log(`[Update] Cleanup fertig: ${removed} APK${removed === 1 ? '' : 's'} entfernt, ${freedMB.toFixed(1)}MB freigegeben`);
+    }
+    return { removed, freedMB };
+  }
+
+  /** Current total size of all APK files in the app directories (in MB). */
+  async getApkCacheSize(): Promise<{ count: number; totalMB: number }> {
+    const dirs = await this._apkSearchDirs();
+    let count = 0;
+    let total = 0;
+    for (const dir of dirs) {
+      try {
+        if (!(await RNFS.exists(dir))) continue;
+        const files = await RNFS.readDir(dir);
+        for (const f of files) {
+          if (!f.isFile() || !/\.apk$/i.test(f.name)) continue;
+          count += 1;
+          total += parseInt(f.size as any, 10) || 0;
+        }
+      } catch {}
+    }
+    return { count, totalMB: total / 1024 / 1024 };
+  }
 }

 /** Check for updates at app start */
@@ -22,6 +22,7 @@
 import { NativeEventEmitter, NativeModules, ToastAndroid } from 'react-native';
 import AsyncStorage from '@react-native-async-storage/async-storage';
+import { acquireBackgroundAudio } from './backgroundAudio';

 type WakeWordCallback = () => void;
 type StateCallback = (state: WakeWordState) => void;
@@ -77,6 +78,14 @@ class WakeWordService {
   private bargeCallbacks: WakeWordCallback[] = [];
   /** True while the wake word runs in parallel with TTS. */
   private bargeListening: boolean = false;
+  /** Call pause: the state is remembered so it can be restored after hang-up. */
+  private callPaused: boolean = false;
+  private preCallState: WakeWordState = 'off';
+  /** Cooldown after app resume: a short window in which wake-word detections
+   * are ignored. Switching from background to foreground often produces an
+   * audio-level spike (AudioFocus switch, AudioTrack re-route) that can
+   * falsely trigger openWakeWord. */
+  private cooldownUntilMs: number = 0;

   private keyword: WakeKeyword = DEFAULT_KEYWORD;
   private nativeReady: boolean = false;
@@ -157,6 +166,10 @@ class WakeWordService {
   /** Ear button pressed — starts passive listening or a conversation directly. */
   async start(): Promise<boolean> {
     if (this.state !== 'off') return true;
+    // Bring up the foreground service BEFORE accessing the mic so background
+    // listening works (Android needs foregroundServiceType=microphone to be
+    // active at the moment of AudioRecord.startRecording).
+    await acquireBackgroundAudio('wake');
     if (this.nativeReady && OpenWakeWord) {
       try {
         await OpenWakeWord.start();
@@ -200,8 +213,22 @@ class WakeWordService {
     this.setState('off');
   }

+  /** Set a cooldown — ignore all wake-word detections for the next ms.
+   * Called on app resume because AppState transitions produce audio spikes
+   * that openWakeWord misinterprets as triggers. */
+  setResumeCooldown(ms: number = 1500): void {
+    this.cooldownUntilMs = Date.now() + ms;
+    console.log('[WakeWord] Cooldown aktiv fuer %dms', ms);
+  }
+
   /** Wake word triggered: pause the native module, start the conversation. */
   private async onWakeDetected(): Promise<void> {
+    const now = Date.now();
+    if (now < this.cooldownUntilMs) {
+      const left = this.cooldownUntilMs - now;
+      console.log('[WakeWord] Trigger ignoriert (Cooldown noch %dms aktiv — wahrscheinlich App-Resume-Spike)', left);
+      return;
+    }
     console.log('[WakeWord] Wake-Word "%s" erkannt! (state=%s, barge=%s)',
       this.keyword, this.state, this.bargeListening);
     if (this.nativeReady && OpenWakeWord) {
@@ -255,6 +282,43 @@ class WakeWordService {
     console.log('[WakeWord] Barge-Listening aus');
   }

+  /** On an incoming call: stop wake word + recording, remember the pre-call
+   * state. The telephony app occupies the mic during the call, and ARIA
+   * should not listen in on ongoing phone conversations. */
+  async pauseForCall(): Promise<void> {
+    if (this.callPaused) return;
+    this.preCallState = this.state;
+    if (this.state === 'off') {
+      this.callPaused = true; // remember that we were paused
+      return;
+    }
+    this.callPaused = true;
+    if (this.nativeReady && OpenWakeWord) {
+      try { await OpenWakeWord.stop(); } catch {}
+    }
+    this.bargeListening = false;
+    console.log('[WakeWord] Anruf — Wake-Word pausiert (war: %s)', this.preCallState);
+  }
+
+  /** After hang-up: restore the pre-call state. An active conversation
+   * degrades to armed (the user should not jump into a half-finished dialog). */
+  async resumeFromCall(): Promise<void> {
+    if (!this.callPaused) return;
+    const restoreTo = this.preCallState;
+    this.callPaused = false;
+    this.preCallState = 'off';
+    console.log('[WakeWord] Anruf zu Ende — restore state=%s', restoreTo);
+    if (restoreTo === 'off') return;
+    // The active conversation was probably aborted by haltAllPlayback anyway;
+    // safe to degrade to armed.
+    if (restoreTo === 'conversing') this.setState('armed');
+    if (this.nativeReady && OpenWakeWord) {
+      try { await OpenWakeWord.start(); } catch (err) {
+        console.warn('[WakeWord] Restore-Start fehlgeschlagen:', err);
+      }
+    }
+  }
+
   /** End the conversation — the user said nothing within the window.
    * With wake word: back to 'armed' (listener on again).
    * Without: back to 'off'.
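The resume cooldown added above reduces to a tiny gate: one timestamp plus one comparison at trigger time. A deterministic Python sketch of the same logic with a fake clock (function names are illustrative, the app implements this in TypeScript):

```python
def make_cooldown_gate(clock):
    """Trigger gate with a resume cooldown (sketch mirroring the TS logic)."""
    state = {"until": 0.0}

    def set_cooldown(ms: float) -> None:
        state["until"] = clock() + ms / 1000.0

    def on_trigger() -> bool:
        # Detections inside the cooldown window are treated as audio spikes
        # from the background/foreground switch and dropped.
        return clock() >= state["until"]

    return set_cooldown, on_trigger

now = {"t": 0.0}                      # fake clock, deterministic
set_cooldown, on_trigger = make_cooldown_gate(lambda: now["t"])
set_cooldown(1500)                    # app resumed: ignore triggers for 1.5 s
now["t"] = 0.5
print(on_trigger())                   # False: spike ignored
now["t"] = 2.0
print(on_trigger())                   # True: cooldown elapsed
```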
+56 -10
@@ -52,15 +52,61 @@ For web requests: **WebFetch** or **Bash with curl**. Never say "I have
 4. **Commit regularly** — with meaningful commit messages.
 5. **Keep a daily log** — what was done, what is still open.
+
+## Returning files to Stefan — CRITICAL
+
+**This is the ONLY way Stefan can get files. Without these
+steps he does NOT see or receive the file.**
+
+### Rule 1 — Storage location
+
+Save files for Stefan EXCLUSIVELY under `/shared/uploads/`.
+
+NEVER in:
+- `/home/node/.openclaw/workspace/...` (that is ONLY your working
+  directory, Stefan has no access to it)
+- `/tmp/...`, `/root/...`, or anywhere else
+
+File names carry the `aria_` prefix so cleanup scripts can attribute them:
+
+```
+/shared/uploads/aria_<descriptive_name>.<ext>
+```
+
+Examples: `aria_termin_zusage.pdf`, `aria_einkaufsliste.md`,
+`aria_logs_2026-05-10.zip`.
+
+### Rule 2 — Marker in the reply text
+
+At the end of your reply, set the marker EXACTLY ONCE:
+
+```
+[FILE: /shared/uploads/aria_<name>.<ext>]
+```
+
+WITHOUT this marker the file does NOT appear in the app / Diagnostic.
+
+Multiple files: multiple `[FILE: ...]` markers at the end, each on
+its own line.
+
+### Example — complete workflow
+
+User: "Write me a lasagna recipe as an md file"
+
+1. You write the file: `Write` tool with path `/shared/uploads/aria_lasagne.md`
+2. Reply to Stefan:
+
+```
+Here is your lasagna recipe — ragù made the day before, real parmesan,
+don't skip the resting time. Béchamel on every layer when assembling.
+
+[FILE: /shared/uploads/aria_lasagne.md]
+```
+
+The marker is automatically removed from the visible text and shown
+as an attachment bubble. Stefan taps it → the file opens.
+
 ## Voice

-| Voice | Model | When |
-|--------|--------|------|
-| **Ramona** (female) | `de_DE-ramona-low` | Everyday use, replies, conversations (default) |
-| **Thorsten** (male, deep) | `de_DE-thorsten-high` | Epic moments, alarms, special events |
-
-**Thorsten speaks on:**
-- Build successfully deployed
-- Ticket solved / task completed
-- Critical alarm (server down, security warning)
-- When Stefan says "So soll es sein"
+TTS runs via F5-TTS (voice cloning, gaming PC). Stefan can clone his own
+voices from audio samples (Diagnostic → Voices → Clone voice) and select
+them in the app + Diagnostic.
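The marker contract described above is easy to process mechanically. A hedged Python sketch of the receiving side (the regex mirrors the one the bridge itself uses elsewhere in this changeset; `split_reply` is an illustrative name):

```python
import re

# Marker shape from the rules above: [FILE: /shared/uploads/<name>]
FILE_MARKER_RE = re.compile(r"\[FILE:\s*(/shared/uploads/[^\]]+?)\s*\]", re.IGNORECASE)

def split_reply(text: str) -> tuple[str, list[str]]:
    """Return (visible_text, attachment_paths) for an assistant reply."""
    paths = [m.group(1).strip() for m in FILE_MARKER_RE.finditer(text)]
    cleaned = FILE_MARKER_RE.sub("", text).strip()
    return cleaned, paths

reply = "Here is your recipe.\n\n[FILE: /shared/uploads/aria_lasagne.md]"
text, files = split_reply(reply)
print(files)   # ['/shared/uploads/aria_lasagne.md']
print(text)    # Here is your recipe.
```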
@@ -78,12 +78,93 @@ If a tool does not work, try the alternative. Never say "I have no
 - Destructive operations (deleting files, dropping databases)
 - Push to main
+
+## Returning files to Stefan — CRITICAL
+
+**This is the ONLY way Stefan can get files. Without these
+steps he does NOT see or receive the file.**
+
+### Rule 1 — Storage location
+
+Save files for Stefan EXCLUSIVELY under `/shared/uploads/`.
+
+NEVER in:
+- `/home/node/.openclaw/workspace/...` (ONLY your working directory,
+  Stefan has no access)
+- `/tmp/...`, `/root/...`, or anywhere else
+
+File names carry the `aria_` prefix:
+
+```
+/shared/uploads/aria_<descriptive_name>.<ext>
+```
+
+Examples: `aria_termin_zusage.pdf`, `aria_einkaufsliste.md`,
+`aria_logs_2026-05-10.zip`.
+
+### Rule 2 — Marker in the reply text
+
+At the end of your reply, set the marker EXACTLY ONCE:
+
+```
+[FILE: /shared/uploads/aria_<name>.<ext>]
+```
+
+WITHOUT this marker the file does NOT appear in the app / Diagnostic.
+
+Multiple files: multiple `[FILE: ...]` markers at the end, each on
+its own line.
+
+### Example — complete workflow
+
+User: "Write me a lasagna recipe as an md file"
+
+1. You write: `Write` tool with path `/shared/uploads/aria_lasagne.md`
+2. Reply to Stefan:
+
+```
+Here is your lasagna recipe — ragù made the day before, real parmesan,
+don't skip the resting time. Béchamel on every layer when assembling.
+
+[FILE: /shared/uploads/aria_lasagne.md]
+```
+
+The marker is automatically removed from the visible text and shown
+as an attachment bubble. Stefan taps it → the file opens in its
+default program.
+
+### External images/files — ALWAYS download, never just link
+
+When Stefan wants an image or a file from the web (Wikipedia,
+Wiki Commons, a sample PDF, etc.):
+
+Do NOT just put the URL into the reply — the image then stays visible
+only as long as the external server is alive.
+
+INSTEAD:
+1. Download it with `Bash` + curl/wget to `/shared/uploads/aria_<name>.<ext>`
+2. Deliver it as an attachment via the `[FILE: ...]` marker
+
+Example — user: "Show me a picture of Mickey Mouse"
+
+```bash
+curl -sL "https://upload.wikimedia.org/wikipedia/commons/7/7f/Mickey_Mouse.svg" \
+  -o /shared/uploads/aria_mickey_mouse.svg
+```
+
+Reply:
+```
+Here is Mickey Mouse — the official SVG from Wikimedia Commons (public domain).
+
+[FILE: /shared/uploads/aria_mickey_mouse.svg]
+```
+
+This way the image stays in the chat history permanently, even if the
+wiki URL later goes offline or moves.
+
 ## Voice

-| Voice | Model | When |
-|--------|--------|------|
-| **Ramona** (female) | `de_DE-ramona-low` | Everyday use, replies, conversations (default) |
-| **Thorsten** (male, deep) | `de_DE-thorsten-high` | Epic moments, alarms, special events |
+TTS runs via F5-TTS on the Gamebox (voice cloning). Stefan can clone his
+own voices from audio samples and select them in the app/Diagnostic.

 ## Memory

@@ -147,4 +228,4 @@ Then link the entry in `memory/MEMORY.md` (the index).
 ### Network
 - **aria-net:** Internal Docker network (proxy, aria-core)
 - **RVS:** Rendezvous server in the data center — relay for the Android app
-- **Bridge:** Voice Bridge (Whisper STT + Piper TTS) — shares its network with aria-core
+- **Bridge:** Voice Bridge (orchestrates STT/TTS via the Gamebox bridges) — shares its network with aria-core
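The download-then-attach flow above can be condensed into one helper. A Python sketch under the assumption that a temp directory stands in for `/shared/uploads/` (`deliver_external_file` is illustrative; the real flow shells out to curl/wget):

```python
import pathlib
import tempfile

def deliver_external_file(data: bytes, name: str, uploads_dir: str) -> str:
    """Persist downloaded bytes under the uploads dir, return the reply marker."""
    target = pathlib.Path(uploads_dir) / f"aria_{name}"
    target.write_bytes(data)          # in production this is the curl/wget output
    return f"[FILE: {target}]"

uploads = tempfile.mkdtemp()          # stand-in for /shared/uploads/
marker = deliver_external_file(b"<svg/>", "mickey_mouse.svg", uploads)
print(marker.endswith("aria_mickey_mouse.svg]"))   # True
```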
@@ -1,10 +1,10 @@
-# Stefan — user preferences
+# <Username> — user preferences

 ## General

-- **Language:** German
-- **Communication:** Direct, no bullshit, humor welcome
-- **Role:** Boss, client, developer at HackerSoft Oldenburg
+- **Language:** <e.g. German>
+- **Communication:** <e.g. direct, no bullshit, humor welcome>
+- **Role:** <e.g. boss, client, developer at XYZ>

 ## Confirmation required for

@@ -12,7 +12,6 @@
 - Push to main
 - Changes to customer systems
 - Server commands that cannot be undone
-- Reinstalling Windows (back up data first!)

 ## Autonomous work OK for

@@ -28,8 +27,10 @@

 | Tool | Purpose |
 |------|-------|
-| **Proxmox** | VM infrastructure (ARIA's home) |
-| **Gitea** | Code hosting (gitea.hackersoft.de) |
-| **OpenCRM** | Customer management |
-| **STARFACE** | Telephony |
-| **RustDesk** | Remote IT support at customers |
+| **<Example tool>** | <Purpose> |
+
+<!--
+This file is a template. Copy it locally as USER.md and fill it with
+your own preferences + tool stack. USER.md itself is excluded from the
+repo via .gitignore.
+-->
+163 -35
@@ -1,17 +1,13 @@
 """
 ARIA Voice Bridge — main module.

-Connects the Android app (via RVS) with ARIA-Core and provides local
-speech input (wake word + Whisper STT) and speech output (Piper TTS).
+Connects the Android app (via RVS) with ARIA-Core. Speech input runs
+through the whisper-bridge (Gamebox, faster-whisper on CUDA), speech
+output through the f5tts-bridge (voice cloning, sentence-wise PCM streaming).

 Message flow:
     App → RVS → Bridge → aria-core
-    aria-core → Bridge → RVS → App
-                       → speaker (TTS)
-
-Voices:
-- Ramona (de_DE-ramona-low) — everyday use, conversations
-- Thorsten (de_DE-thorsten-high) — epic moments, alarms
+    aria-core → Bridge → f5tts-bridge → PCM → RVS → App
 """

 from __future__ import annotations
@@ -20,7 +16,9 @@ import asyncio
 import base64
 import json
 import logging
+import mimetypes
 import os
+import re
 import signal
 import ssl
 import sys
@@ -493,7 +491,7 @@ class ARIABridge:
         self.current_mode = self._load_persisted_mode()
         self.running = False

-        # Components (TTS: always remote XTTS, Piper was removed)
+        # Components (TTS: F5-TTS remote on the Gamebox, local TTS was removed)
         self.tts_enabled = True
         self.xtts_voice = ""
         self._f5tts_config: dict = {}
@@ -681,7 +679,10 @@ class ARIABridge:
         while self.running:
             try:
                 logger.info("[core] Verbinde: %s", self.ws_url)
-                async with websockets.connect(self.ws_url) as ws:
+                # max_size=50MB so large image/voice uploads get through.
+                # The python-websockets default is only 1 MiB → a 5MB JPEG
+                # blows the limit and the connection is silently dropped.
+                async with websockets.connect(self.ws_url, max_size=50 * 1024 * 1024) as ws:
                     # Perform the OpenClaw handshake
                     if not await self._openclaw_handshake(ws):
                         logger.error("[core] Handshake fehlgeschlagen — Reconnect")
@@ -787,13 +788,29 @@ class ARIABridge:
             await self._emit_activity("idle", "")
             if not text:
                 logger.warning("[core] chat final ohne Text: %s", json.dumps(payload)[:200])
+                # Inform app + Diagnostic instead of staying silent — otherwise
+                # the UI waits forever for a reply that never comes. Happens
+                # e.g. when Claude Vision rejects the image (empty reply) or
+                # the reply consisted only of tool calls without final text.
+                await self._send_to_rvs({
+                    "type": "chat",
+                    "payload": {
+                        "text": "[Hinweis] Antwort ohne Text — moeglicherweise Bild zu gross fuer Vision-API oder reine Tool-Ausfuehrung.",
+                        "sender": "aria",
+                    },
+                    "timestamp": int(asyncio.get_event_loop().time() * 1000),
+                })
                 return
             logger.info("[core] Antwort: '%s'", text[:80])
             await self._process_core_response(text, payload)
             return

         if state == "error":
-            error = payload.get("error", "Unbekannt")
+            # OpenClaw uses errorMessage instead of error on state=error.
+            error = (payload.get("error")
+                     or payload.get("errorMessage")
+                     or payload.get("message")
+                     or "Unbekannt")
             logger.error("[core] Chat-Fehler: %s", error)
             self._last_chat_final_at = asyncio.get_event_loop().time()
             await self._emit_activity("idle", "")
@@ -829,7 +846,12 @@ class ARIABridge:
             return

         if event_name == "chat:error":
-            error = payload.get("error", payload.get("message", "Unbekannt"))
+            # OpenClaw sometimes puts the real text in errorMessage
+            # (state=error). Before, only error/message were checked → "Unbekannt".
+            error = (payload.get("error")
+                     or payload.get("errorMessage")
+                     or payload.get("message")
+                     or "Unbekannt")
             logger.error("[core] Chat-Fehler (legacy): %s", error)
             await self._send_to_rvs({
                 "type": "chat",
@@ -862,6 +884,48 @@ class ARIABridge:
                 pass
         return payload.get("text", "")

+    # File marker pattern: `[FILE: /path/to/file.ext]` (the path may contain
+    # spaces, any extension). May occur multiple times in the text.
+    _FILE_MARKER_RE = re.compile(r"\[FILE:\s*(/shared/uploads/[^\]]+?)\s*\]", re.IGNORECASE)
+
+    def _extract_file_markers(self, text: str) -> tuple[str, list[dict]]:
+        """Finds [FILE: /shared/uploads/...] markers, returns (cleaned_text, file_list)."""
+        files: list[dict] = []
+        for m in self._FILE_MARKER_RE.finditer(text):
+            path = m.group(1).strip()
+            if not path.startswith("/shared/uploads/"):
+                logger.warning("[core] FILE-Marker mit unerlaubtem Pfad ignoriert: %s", path)
+                continue
+            if not os.path.isfile(path):
+                logger.warning("[core] FILE-Marker zeigt auf nicht existente Datei: %s", path)
+                continue
+            name = os.path.basename(path)
+            mime, _ = mimetypes.guess_type(path)
+            size = os.path.getsize(path)
+            files.append({
+                "serverPath": path,
+                "name": name,
+                "mimeType": mime or "application/octet-stream",
+                "size": size,
+            })
+        cleaned = self._FILE_MARKER_RE.sub("", text).strip()
+        # Collapse runs of blank lines into one
+        cleaned = re.sub(r"\n{3,}", "\n\n", cleaned)
+        return cleaned, files
+
+    async def _broadcast_aria_file(self, file_info: dict) -> None:
+        """ARIA created a file for the user — inform app + Diagnostic."""
+        logger.info("[rvs] ARIA-Datei rausgeben: %s (%s, %dKB)",
+                    file_info["name"], file_info["mimeType"], file_info["size"] // 1024)
+        try:
+            await self._send_to_rvs({
+                "type": "file_from_aria",
+                "payload": file_info,
+                "timestamp": int(asyncio.get_event_loop().time() * 1000),
+            })
+        except Exception as e:
+            logger.warning("[rvs] file_from_aria broadcast fehlgeschlagen: %s", e)
+
     async def _process_core_response(self, text: str, payload: dict) -> None:
         """Processes a finished reply from aria-core.

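The `files` entries built in `_extract_file_markers` lean on `mimetypes.guess_type`, which returns `None` for unknown extensions, hence the `application/octet-stream` fallback. A standalone sketch of just that metadata step (`file_info_for` is an illustrative name):

```python
import mimetypes
import os

def file_info_for(path: str, size: int) -> dict:
    """Metadata dict in the shape the bridge broadcasts (sketch)."""
    mime, _ = mimetypes.guess_type(path)
    return {
        "serverPath": path,
        "name": os.path.basename(path),
        "mimeType": mime or "application/octet-stream",  # fallback for unknown extensions
        "size": size,
    }

print(file_info_for("/shared/uploads/aria_doc.pdf", 2048)["mimeType"])   # application/pdf
```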
@@ -876,6 +940,14 @@ class ARIABridge:
             logger.info("[core] NO_REPLY empfangen — Antwort still verworfen")
             return

+        # Extract file markers `[FILE: /shared/uploads/aria_xyz.pdf]` —
+        # ARIA uses them to hand files to the user (images, PDFs, etc.).
+        # The marker is stripped from the reply text (TTS must not read it
+        # aloud) and sent in parallel as a file_from_aria event.
+        text, aria_files = self._extract_file_markers(text)
+        for f in aria_files:
+            await self._broadcast_aria_file(f)
+
         metadata = payload.get("metadata", {})
         is_critical = metadata.get("critical", False)
         requested_voice = metadata.get("voice")
@@ -1028,6 +1100,31 @@ class ARIABridge:
         except Exception as e:
             logger.debug("[session] Diagnostic nicht erreichbar (%s) — nutze '%s'", e, self._session_key)

+    def _build_core_text(self, text: str, interrupted: bool = False,
+                         location: Optional[dict] = None) -> str:
+        """Builds the text for aria-core with all relevant hints (barge-in,
+        GPS position). Hints go in square brackets; the actual user text
+        follows unchanged."""
+        parts: list[str] = []
+        if interrupted:
+            parts.append(
+                "[Hinweis: Stefan hat dich gerade unterbrochen waehrend du noch "
+                "gesprochen oder gearbeitet hast. Folgendes ist eine Korrektur, "
+                "Ergaenzung oder ein Themenwechsel zu deiner letzten Antwort.]"
+            )
+        if location and isinstance(location, dict):
+            lat = location.get("lat")
+            lon = location.get("lon") or location.get("lng")
+            if lat is not None and lon is not None:
+                parts.append(
+                    f"[Stefans aktuelle GPS-Position: {float(lat):.6f}, {float(lon):.6f}. "
+                    f"Nutze die nur wenn die Frage sich auf seinen Standort bezieht. "
+                    f"Erwaehne sie nicht von dir aus, ausser er fragt explizit danach.]"
+                )
+        if parts:
+            return " ".join(parts) + " " + text
+        return text
+
     def _build_pending_files_message(self, user_text: str) -> str:
         """Builds an instruction for aria-core from the buffered files plus an
         optional user text. Empty user_text → the 'waiting for instruction' variant."""
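The hint-prefixing in `_build_core_text` above can be exercised in isolation. A condensed standalone sketch (hint strings shortened for brevity; `build_core_text` is illustrative, and like the original the `or` fallback from `lon` to `lng` would also trip on a literal 0.0 longitude):

```python
from typing import Optional

def build_core_text(text: str, interrupted: bool = False,
                    location: Optional[dict] = None) -> str:
    """Condensed sketch of the bracketed-hint helper."""
    parts: list[str] = []
    if interrupted:
        parts.append("[Hinweis: Unterbrechung]")
    if location and isinstance(location, dict):
        lat = location.get("lat")
        lon = location.get("lon") or location.get("lng")  # accepts either key
        if lat is not None and lon is not None:
            parts.append(f"[GPS: {float(lat):.6f}, {float(lon):.6f}]")
    return (" ".join(parts) + " " + text) if parts else text

print(build_core_text("Wo bin ich?", location={"lat": 53.14, "lng": 8.21}))
# [GPS: 53.140000, 8.210000] Wo bin ich?
```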
@@ -1120,7 +1217,8 @@ class ARIABridge:
             try:
                 url = f"{current_url}?token={self.rvs_token}"
                 logger.info("[rvs] Verbinde: %s", current_url)
-                async with websockets.connect(url) as ws:
+                # max_size=50MB (see the core connect above — same reason).
+                async with websockets.connect(url, max_size=50 * 1024 * 1024) as ws:
                     self.ws_rvs = ws
                     retry_delay = 2
                     logger.info("[rvs] Verbunden — warte auf App-Nachrichten")
@@ -1236,6 +1334,7 @@ class ARIABridge:
         self._next_speed_override = None
         if text:
             interrupted = bool(payload.get("interrupted", False))
+            location = payload.get("location") or None
             # If files are buffered right now (image + text sent at the same
             # time), we merge them into a single request instead of two
             # separate send_to_core calls.
@@ -1243,15 +1342,11 @@ class ARIABridge:
|
|||||||
if merged:
|
if merged:
|
||||||
logger.info("[rvs] App-Chat (mit Anhaengen): '%s'", text[:80])
|
logger.info("[rvs] App-Chat (mit Anhaengen): '%s'", text[:80])
|
||||||
else:
|
else:
|
||||||
core_text = (
|
core_text = self._build_core_text(text, interrupted, location)
|
||||||
f"[Hinweis: Stefan hat dich gerade unterbrochen waehrend du noch "
|
logger.info("[rvs] App-Chat%s%s: '%s'",
|
||||||
f"gesprochen oder gearbeitet hast. Folgendes ist eine Korrektur, "
|
" [BARGE-IN]" if interrupted else "",
|
||||||
f"Ergaenzung oder ein Themenwechsel zu deiner letzten Antwort.] "
|
" [GPS]" if location else "",
|
||||||
f"{text}"
|
text[:80])
|
||||||
if interrupted else text
|
|
||||||
)
|
|
||||||
logger.info("[rvs] App-Chat%s: '%s'",
|
|
||||||
" [BARGE-IN]" if interrupted else "", text[:80])
|
|
||||||
await self.send_to_core(core_text, source="app" + (" [barge-in]" if interrupted else ""))
|
await self.send_to_core(core_text, source="app" + (" [barge-in]" if interrupted else ""))
|
||||||
return
|
return
|
||||||
|
|
||||||
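The hunks above route both the chat and the voice path through a shared `self._build_core_text(...)` helper instead of duplicating the inline barge-in prefix. The helper's body is not fully visible in this excerpt; the following is a minimal standalone sketch inferred from the removed inline code and the added GPS branch (the free-function form and the `BARGE_IN_HINT` constant name are assumptions, not the repo's actual code):

```python
from typing import Optional

# Assumed constant name; the wording is taken verbatim from the removed inline code.
BARGE_IN_HINT = (
    "[Hinweis: Stefan hat dich gerade unterbrochen waehrend du noch "
    "gesprochen oder gearbeitet hast. Folgendes ist eine Korrektur, "
    "Ergaenzung oder ein Themenwechsel zu deiner letzten Antwort.]"
)


def build_core_text(text: str, interrupted: bool, location: Optional[dict] = None) -> str:
    """Prepend hint prefixes (barge-in, GPS) to the user text, mirroring what
    the refactored chat and voice paths both do via self._build_core_text()."""
    parts = []
    if interrupted:
        parts.append(BARGE_IN_HINT)
    if location and isinstance(location, dict):
        lat = location.get("lat")
        lon = location.get("lon") or location.get("lng")  # app may send lng instead of lon
        if lat is not None and lon is not None:
            parts.append(
                f"[Stefans aktuelle GPS-Position: {float(lat):.6f}, {float(lon):.6f}. "
                f"Nutze die nur wenn die Frage sich auf seinen Standort bezieht. "
                f"Erwaehne sie nicht von dir aus, ausser er fragt explizit danach.]"
            )
    if parts:
        return " ".join(parts) + " " + text
    return text
```

Centralizing the prefix logic is what keeps the `[BARGE-IN]`/`[GPS]` behavior identical between typed chat and STT input.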
@@ -1443,6 +1538,31 @@ class ARIABridge:
             size_kb = len(file_b64) // 1365
             logger.info("[rvs] Datei gespeichert: %s (%dKB)", file_path, size_kb)
+
+            # Pixel-Bilder fuer Claude-Vision shrinken wenn > 2 MB. SVG/PDF/ZIP
+            # bleiben unangetastet (Vision laeuft eh nur auf Raster-Formaten).
+            CLAUDE_VISION_FORMATS = ("image/jpeg", "image/jpg", "image/png", "image/webp", "image/gif")
+            if file_type.lower() in CLAUDE_VISION_FORMATS:
+                file_size_bytes = os.path.getsize(file_path)
+                if file_size_bytes > 2 * 1024 * 1024:
+                    try:
+                        from PIL import Image
+                        with Image.open(file_path) as img:
+                            orig_w, orig_h = img.size
+                            # Anthropic-Empfehlung: max 1568px lange Seite. RGB-Konvertierung
+                            # falls RGBA/Palette (JPEG braucht RGB).
+                            img.thumbnail((1568, 1568), Image.Resampling.LANCZOS)
+                            if img.mode in ("RGBA", "P"):
+                                img = img.convert("RGB")
+                            img.save(file_path, "JPEG", quality=85, optimize=True)
+                            new_size_bytes = os.path.getsize(file_path)
+                            logger.info("[rvs] Bild verkleinert: %dx%d → %dx%d, %.1fMB → %.1fMB",
+                                        orig_w, orig_h, img.size[0], img.size[1],
+                                        file_size_bytes / 1024 / 1024,
+                                        new_size_bytes / 1024 / 1024)
+                    except Exception as e:
+                        logger.warning("[rvs] Bild-Resize fehlgeschlagen (%s) — Original wird genutzt: %s",
+                                       file_name, e)
+
             # In Pending-Queue + Flush-Timer (anti-spam Buffering)
             self._pending_files.append((file_path, file_name, file_type, size_kb, int(width or 0), int(height or 0)))
             if self._pending_files_flush_task and not self._pending_files_flush_task.done():
@@ -1477,6 +1597,7 @@ class ARIABridge:
                 return
             with open(server_path, "rb") as f:
                 file_b64 = base64.b64encode(f.read()).decode("ascii")
+            mime, _ = mimetypes.guess_type(server_path)
             logger.info("[rvs] Re-Download: %s (%dKB)", server_path, len(file_b64) // 1365)
             await self._send_to_rvs({
                 "type": "file_response",
@@ -1485,6 +1606,7 @@ class ARIABridge:
                     "serverPath": server_path,
                     "base64": file_b64,
                     "name": os.path.basename(server_path),
+                    "mimeType": mime or "application/octet-stream",
                 },
                 "timestamp": int(asyncio.get_event_loop().time() * 1000),
             })
@@ -1511,11 +1633,14 @@ class ARIABridge:
                 self._next_speed_override = None
             interrupted = bool(payload.get("interrupted", False))
             audio_request_id = payload.get("audioRequestId", "") or ""
-            logger.info("[rvs] Audio empfangen: %s, %dms, %dKB%s%s",
+            location = payload.get("location") or None
+            logger.info("[rvs] Audio empfangen: %s, %dms, %dKB%s%s%s",
                         mime_type, duration_ms, len(audio_b64) // 1365,
                         " [BARGE-IN]" if interrupted else "",
+                        " [GPS]" if location else "",
                         f" reqId={audio_request_id[:16]}" if audio_request_id else "")
-            asyncio.create_task(self._process_app_audio(audio_b64, mime_type, interrupted, audio_request_id))
+            asyncio.create_task(self._process_app_audio(
+                audio_b64, mime_type, interrupted, audio_request_id, location))
 
         elif msg_type == "stt_response":
             # Antwort der whisper-bridge auf unseren stt_request
@@ -1573,7 +1698,8 @@ class ARIABridge:
 
     async def _process_app_audio(self, audio_b64: str, mime_type: str,
                                  interrupted: bool = False,
-                                 audio_request_id: str = "") -> None:
+                                 audio_request_id: str = "",
+                                 location: Optional[dict] = None) -> None:
         """App-Audio → STT → aria-core. Primaer via whisper-bridge (RVS), Fallback lokal.
 
         interrupted=True wenn der User waehrend ARIA noch sprach/dachte aufgenommen hat
@@ -1583,7 +1709,10 @@ class ARIABridge:
 
         audio_request_id: Korrelations-ID die die App im audio-Event mitschickt — wird
         unveraendert ans STT-Result zurueckgegeben damit die App die EXAKT richtige
-        'wird verarbeitet'-Bubble ersetzen kann (auch bei mehreren parallelen Aufnahmen)."""
+        'wird verarbeitet'-Bubble ersetzen kann (auch bei mehreren parallelen Aufnahmen).
+
+        location: Optional GPS-Position {lat, lon} — wird als Hinweis-Praefix mitgegeben
+        damit ARIA bei standortbezogenen Fragen sie nutzen kann."""
         # Erst Remote versuchen
         text = await self._stt_remote(audio_b64, mime_type)
         if text is None:
@@ -1595,15 +1724,9 @@ class ARIABridge:
 
         if text.strip():
             logger.info("[rvs] STT Ergebnis: '%s'", text[:80])
-            # Barge-In-Hinweis: gibt ARIA den Kontext dass sie unterbrochen wurde
-            # und dies eine Korrektur/Aenderung der vorherigen Anweisung sein kann.
-            core_text = (
-                f"[Hinweis: Stefan hat dich gerade unterbrochen waehrend du noch "
-                f"gesprochen oder gearbeitet hast. Folgendes ist eine Korrektur, "
-                f"Ergaenzung oder ein Themenwechsel zu deiner letzten Antwort.] "
-                f"{text}"
-                if interrupted else text
-            )
+            # Hints (Barge-In, GPS) als Praefix vorschalten — gemeinsamer Helper
+            # mit dem chat-Pfad damit das Verhalten konsistent ist.
+            core_text = self._build_core_text(text, interrupted, location)
             # ERST an aria-core senden (wichtigster Schritt)
             await self.send_to_core(core_text, source="app-voice" + (" [barge-in]" if interrupted else ""))
             # STT-Text an RVS senden (fuer Anzeige in App + Diagnostic)
@@ -1615,6 +1738,11 @@ class ARIABridge:
             }
             if audio_request_id:
                 stt_payload["audioRequestId"] = audio_request_id
+            # GPS aus dem Original-Audio-Payload mitgeben — Diagnostic
+            # zeigt sie sonst nicht an (App sendet location nur einmal,
+            # die im audio-Payload). Reine Anzeige-Information.
+            if location:
+                stt_payload["location"] = location
             ok = await self._send_to_rvs({
                 "type": "chat",
                 "payload": stt_payload,
@@ -16,3 +16,6 @@ sounddevice
 
 # Wake-Word Erkennung
 openwakeword
+
+# Bild-Resizing (zu grosse Pixel-Bilder shrinken bevor Claude-Vision sie sieht — 5MB-Limit)
+Pillow
+76 −45
@@ -278,6 +278,10 @@
             <input type="checkbox" id="tts-debug-toggle" onchange="toggleTtsDebug()" style="margin-right:4px;vertical-align:middle;">
             TTS-Text einblenden
           </label>
+          <label style="color:#8888AA;font-size:11px;cursor:pointer;">
+            <input type="checkbox" id="gps-debug-toggle" onchange="toggleGpsDebug()" style="margin-right:4px;vertical-align:middle;">
+            GPS-Position einblenden
+          </label>
           <button class="btn secondary" onclick="toggleChatFullscreen()" id="btn-chat-fs" style="padding:4px 10px;font-size:11px;">Vollbild</button>
         </div>
       </div>
@@ -665,24 +669,6 @@
       </div>
     </div>
 
-    <!-- Highlight-Trigger -->
-    <div class="settings-section">
-      <h2>Highlight-Trigger</h2>
-      <div style="font-size:11px;color:#8888AA;margin-bottom:8px;">
-        Woerter die automatisch die Highlight-Stimme (Thorsten) ausloesen.
-        Eines pro Zeile. Aenderungen werden in der Bridge gespeichert.
-      </div>
-      <div class="card" style="max-width:500px;">
-        <textarea id="highlight-triggers" rows="8" style="width:100%;box-sizing:border-box;background:#1E1E2E;border:1px solid #2A2A3E;border-radius:6px;padding:8px;color:#fff;font-size:13px;font-family:monospace;resize:vertical;"
-                  placeholder="Lade..."></textarea>
-        <div style="display:flex;gap:8px;margin-top:8px;">
-          <button class="btn" onclick="saveHighlightTriggers()" style="flex:1;">Speichern</button>
-          <button class="btn secondary" onclick="loadHighlightTriggers()" style="flex:1;">Neu laden</button>
-        </div>
-        <div id="trigger-status" style="font-size:11px;color:#555570;margin-top:6px;"></div>
-      </div>
-    </div>
-
     <!-- Tool-Berechtigungen -->
     <div class="settings-section">
       <h2>Tool-Berechtigungen</h2>
@@ -956,14 +942,6 @@
         return;
       }
 
-      if (msg.type === 'trigger_list') {
-        const textarea = document.getElementById('highlight-triggers');
-        textarea.value = (msg.triggers || []).join('\n');
-        document.getElementById('trigger-status').textContent = msg.triggers.length + ' Trigger geladen';
-        document.getElementById('trigger-status').style.color = '#8888AA';
-        return;
-      }
-
       if (msg.type === 'service_status') {
         updateServiceStatus(msg.payload || {});
         return;
@@ -1015,7 +993,17 @@
       }
 
       if (msg.type === 'chat_final') {
-        addChat('received', msg.text, 'chat:final');
+        // [FILE: /shared/uploads/aria_xxx.ext]-Marker aus dem Antworttext
+        // entfernen — die Datei kommt separat via file_from_aria.
+        // (Diagnostic empfaengt chat_final direkt vom Gateway, Bridge
+        // hat darum nicht filtern koennen.)
+        const cleaned = (msg.text || '').replace(/\[FILE:\s*\/shared\/uploads\/[^\]]+\]/gi, '').replace(/\n{3,}/g, '\n\n').trim();
+        addChat('received', cleaned, 'chat:final');
+        return;
+      }
+      if (msg.type === 'file_from_aria') {
+        const p = msg.payload || {};
+        addAriaFile(p);
         return;
       }
       if (msg.type === 'chat_delta') { return; }
@@ -1030,7 +1018,7 @@
        if (sender === 'aria') return;
        const chatType = 'sent';
        const label = sender === 'stt' ? '\uD83C\uDFA4 Spracheingabe' : `via RVS (${sender})`;
-       addChat(chatType, p.text || '?', label);
+       addChat(chatType, p.text || '?', label, { location: p.location });
        return;
       }
       if (msg.type === 'proxy_result') {
@@ -1421,6 +1409,16 @@
        if (el) el.checked = showTtsDebug;
      }
 
+     // Debug-Toggle: GPS-Position unter User-Nachrichten einblenden (nur Diagnostic).
+     // App zeigt's bewusst nicht — die Position geht nur an aria-core.
+     let showGpsDebug = localStorage.getItem('aria-show-gps-debug') === '1';
+     function toggleGpsDebug() {
+       showGpsDebug = !showGpsDebug;
+       localStorage.setItem('aria-show-gps-debug', showGpsDebug ? '1' : '0');
+       const el = document.getElementById('gps-debug-toggle');
+       if (el) el.checked = showGpsDebug;
+     }
+
      // Minimal-JS-Port von clean_text_for_tts() (Bridge) — reine Anzeige
      function previewTtsText(text) {
        if (!text) return '';
@@ -1460,7 +1458,18 @@
          ttsBlock = `<div style="margin-top:6px;padding:4px 8px;background:rgba(0,150,255,0.08);border-left:2px solid #0096FF;font-size:11px;color:#88AACC;"><span style="color:#0096FF;font-weight:bold;">TTS:</span> ${escapeHtml(ttsText)}</div>`;
        }
      }
-     const html = `${linked}${ttsBlock}<div class="meta">${escapeHtml(meta)} — ${new Date().toLocaleTimeString('de-DE')}</div>`;
+     // Optional: GPS-Position als Block unter User-Nachrichten (nur Diagnostic)
+     let gpsBlock = '';
+     if (showGpsDebug && options && options.location) {
+       const loc = options.location;
+       const lat = typeof loc.lat === 'number' ? loc.lat.toFixed(6) : '?';
+       const lon = typeof loc.lon === 'number' ? loc.lon.toFixed(6) : (typeof loc.lng === 'number' ? loc.lng.toFixed(6) : '?');
+       if (lat !== '?' && lon !== '?') {
+         const mapLink = `https://www.openstreetmap.org/?mlat=${lat}&mlon=${lon}#map=16/${lat}/${lon}`;
+         gpsBlock = `<div style="margin-top:6px;padding:4px 8px;background:rgba(52,199,89,0.08);border-left:2px solid #34C759;font-size:11px;color:#88BB99;"><span style="color:#34C759;font-weight:bold;">📍 GPS:</span> <a href="${mapLink}" target="_blank" rel="noopener" style="color:#88BB99;text-decoration:underline;">${lat}, ${lon}</a></div>`;
+       }
+     }
+     const html = `${linked}${ttsBlock}${gpsBlock}<div class="meta">${escapeHtml(meta)} — ${new Date().toLocaleTimeString('de-DE')}</div>`;
 
      // Thinking-Indikator ausblenden bei neuer Nachricht
      updateThinkingIndicator({ activity: 'idle' });
@@ -1476,6 +1485,41 @@
        }
      }
 
+     /** ARIA hat eine Datei rausgegeben — als eigene Bubble mit Klick-Handler. */
+     function addAriaFile(p) {
+       const name = p.name || 'datei';
+       const serverPath = p.serverPath || '';
+       const mimeType = p.mimeType || '';
+       const sizeKB = p.size ? Math.round(p.size / 1024) : 0;
+       const isImage = mimeType.startsWith('image/');
+       const isPdf = mimeType === 'application/pdf';
+       const url = serverPath; // Diagnostic-Server liefert /shared/* aus
+       const sizeStr = sizeKB > 1024 ? `${(sizeKB/1024).toFixed(1)}MB` : `${sizeKB}KB`;
+       const icon = isImage ? '🖼️' : isPdf ? '📄' : '📎';
+       // PDFs/Bilder: target=_blank → neuer Tab. Andere: download-Attribut.
+       const linkAttrs = (isImage || isPdf)
+         ? `href="${url}" target="_blank" rel="noopener"`
+         : `href="${url}" download="${escapeHtml(name)}"`;
+       let preview = '';
+       if (isImage) {
+         preview = `<img src="${url}" class="chat-media" onclick="openLightbox('image','${url}')" onerror="this.style.display='none'" style="margin-top:6px;">`;
+       }
+       const html = `<div style="font-weight:bold;">${icon} ARIA hat eine Datei erstellt</div>` +
+         `<a ${linkAttrs} style="color:#0096FF;text-decoration:underline;">${escapeHtml(name)}</a>` +
+         ` <span style="color:#888;font-size:11px;">(${escapeHtml(mimeType)}, ${sizeStr})</span>` +
+         preview +
+         `<div style="margin-top:4px;font-size:10px;color:#666;font-family:monospace;">${escapeHtml(serverPath)}</div>` +
+         `<div class="meta">ARIA-Datei — ${new Date().toLocaleTimeString('de-DE')}</div>`;
+       for (const box of [chatBox, document.getElementById('chat-box-fs')]) {
+         if (!box) continue;
+         const el = document.createElement('div');
+         el.className = 'chat-msg received';
+         el.innerHTML = html;
+         box.appendChild(el);
+         box.scrollTop = box.scrollHeight;
+       }
+     }
+
      let chatFullscreen = false;
      function toggleChatFullscreen() {
        const modal = document.getElementById('chat-fullscreen');
@@ -1958,20 +2002,6 @@
        }
      }
 
-     // ── Highlight-Trigger ────────────────────────
-     function loadHighlightTriggers() {
-       send({ action: 'get_triggers' });
-     }
-     function saveHighlightTriggers() {
-       const text = document.getElementById('highlight-triggers').value;
-       const triggers = text.split('\n').map(t => t.trim()).filter(t => t.length > 0);
-       send({ action: 'save_triggers', triggers });
-       document.getElementById('trigger-status').textContent = 'Gespeichert (' + triggers.length + ' Trigger)';
-       document.getElementById('trigger-status').style.color = '#34C759';
-     }
-     // Beim Tab-Wechsel zu Einstellungen: Trigger laden
-     const origSwitchMainTab = typeof switchMainTab === 'function' ? switchMainTab : null;
-
      // ── Modus-Wechsel ────────────────────────────
      // Kanonische IDs (matchen bridge/modes.py canonical_id + android ModeSelector)
      const MODE_LABELS = { normal: 'Normal', nicht_stoeren: 'Nicht stoeren', fluester: 'Fluestern', hangar: 'Hangar', gaming: 'Gaming' };
@@ -2456,9 +2486,8 @@
      document.querySelectorAll('.main-nav-btn').forEach(b => {
        if (b.textContent.trim().toLowerCase().includes(tab === 'main' ? 'main' : 'einstellung')) b.classList.add('active');
      });
-     // Einstellungen: Config + Trigger + QR laden
+     // Einstellungen: Config + QR laden
      if (tab === 'settings') {
-       loadHighlightTriggers();
        send({ action: 'get_voice_config' });
        loadRuntimeConfig();
        loadOnboardingQR();
@@ -2492,6 +2521,8 @@
      // Toggle-Checkbox initial korrekt setzen
      const ttsToggleEl = document.getElementById('tts-debug-toggle');
      if (ttsToggleEl) ttsToggleEl.checked = showTtsDebug;
+     const gpsToggleEl = document.getElementById('gps-debug-toggle');
+     if (gpsToggleEl) gpsToggleEl.checked = showGpsDebug;
 
      // Disk-Space Banner aktualisieren (wird vom Server via disk_status gepusht)
      function updateDiskBanner(status) {
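The `chat_final` handler above strips `[FILE: /shared/uploads/...]` markers with a regex before rendering, because the file arrives separately via `file_from_aria`. That cleanup can be exercised outside the browser; here is a Python port for illustration (the function name is ours, the two regexes mirror the JS hunk):

```python
import re

# Mirrors the JS: /\[FILE:\s*\/shared\/uploads\/[^\]]+\]/gi
_FILE_MARKER = re.compile(r"\[FILE:\s*/shared/uploads/[^\]]+\]", re.IGNORECASE)


def strip_file_markers(text: str) -> str:
    """Drop [FILE: /shared/uploads/...] markers from a chat_final text and
    collapse runs of 3+ newlines left behind, then trim, like the JS handler."""
    cleaned = _FILE_MARKER.sub("", text or "")
    cleaned = re.sub(r"\n{3,}", "\n\n", cleaned)
    return cleaned.strip()
```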
+5 −29
@@ -620,6 +620,11 @@ function connectRVS(forcePlain) {
           type: "chat",
           payload: { text: `Anhang: ${name}\n${serverPath}`, sender: "user" }
         }});
+      } else if (msg.type === "file_from_aria" && msg.payload) {
+        // ARIA hat eine Datei fuer den User erstellt — im Chat als Anhang anzeigen
+        const p = msg.payload;
+        log("info", "rvs", `ARIA-Datei: ${p.name} (${p.mimeType}, ${(p.size||0)/1024|0}KB)`);
+        broadcast({ type: "file_from_aria", payload: p });
       } else if (msg.type === "heartbeat") {
         // ignorieren
       } else if (msg.type === "mode") {
@@ -1475,10 +1480,6 @@ wss.on("connection", (ws) => {
       } catch {}
       sendToRVS_raw({ type: "config", payload: voiceConfig, timestamp: Date.now() });
       log("info", "server", `Voice-Config gespeichert: xttsVoice=${voiceConfig.xttsVoice || "default"}, whisper=${voiceConfig.whisperModel || "-"}`);
-    } else if (msg.action === "get_triggers") {
-      handleGetTriggers(ws);
-    } else if (msg.action === "save_triggers") {
-      handleSaveTriggers(ws, msg.triggers || []);
     } else if (msg.action === "test_tts") {
       handleTestTTS(ws, msg.text || "Test");
     } else if (msg.action === "preview_voice") {
@@ -1629,31 +1630,6 @@ function handleGetVoiceConfig(clientWs) {
   }
 }
 
-// ── Highlight-Trigger (legacy UI — wird nicht mehr ausgewertet seit Piper raus) ─
-const TRIGGERS_FILE = "/shared/config/highlight_triggers.json";
-
-async function handleGetTriggers(clientWs) {
-  try {
-    const triggers = fs.existsSync(TRIGGERS_FILE)
-      ? JSON.parse(fs.readFileSync(TRIGGERS_FILE, "utf-8"))
-      : [];
-    clientWs.send(JSON.stringify({ type: "trigger_list", triggers }));
-  } catch (err) {
-    clientWs.send(JSON.stringify({ type: "trigger_list", triggers: [], error: err.message }));
-  }
-}
-
-async function handleSaveTriggers(clientWs, triggers) {
-  try {
-    fs.mkdirSync("/shared/config", { recursive: true });
-    fs.writeFileSync(TRIGGERS_FILE, JSON.stringify(triggers, null, 2));
-    log("info", "server", `${triggers.length} Highlight-Trigger gespeichert`);
-    clientWs.send(JSON.stringify({ type: "trigger_list", triggers }));
-  } catch (err) {
-    log("error", "server", `Trigger speichern fehlgeschlagen: ${err.message}`);
-  }
-}
-
 // ── TTS Diagnose (XTTS) ───────────────────────────────
 // ── Voice Preview ────────────────────────────────────────
 // Sammelt audio_pcm Chunks einer Preview-Anfrage, baut am Ende eine WAV
@@ -0,0 +1,43 @@
+#!/bin/bash
+# ════════════════════════════════════════════════════════════
+# ARIA — Setup-Script
+#
+# Materialisiert Config-Dateien aus *.example-Vorlagen wenn
+# das Original fehlt. Wird einmalig nach git clone und nach
+# jedem git pull empfohlen — schadet auch sonst nichts (idempotent,
+# ueberschreibt nichts Bestehendes).
+#
+# Beispiele:
+#   aria-data/config/USER.md.example  → USER.md  (wenn nicht vorhanden)
+#   aria-data/config/aria.env.example → aria.env (wenn nicht vorhanden)
+#
+# Diese Files sind via .gitignore vom Repo ausgeschlossen — die
+# Vorlagen liegen aber im Repo damit ein frisches Setup ohne lange
+# Anleitung lauffaehig ist.
+# ════════════════════════════════════════════════════════════
+
+set -e
+cd "$(dirname "$0")"
+
+created=0
+skipped=0
+
+for example in aria-data/config/*.example; do
+  [ -f "$example" ] || continue
+  target="${example%.example}"
+  if [ -e "$target" ]; then
+    skipped=$((skipped + 1))
+  else
+    cp "$example" "$target"
+    echo "✓ $target erstellt aus $(basename "$example")"
+    created=$((created + 1))
+  fi
+done
+
+if [ $created -eq 0 ]; then
+  echo "Alle Config-Dateien vorhanden ($skipped uebersprungen)."
+else
+  echo ""
+  echo "$created Datei(en) angelegt, $skipped uebersprungen."
+  echo "Falls noetig anpassen: aria-data/config/"
+fi
@@ -1,7 +1,109 @@
|
|||||||
# ARIA Issues & Features
|
# ARIA Issues & Features
|
||||||
|
|
||||||
|
## Audio-Verhalten in der App
|
||||||
|
|
||||||
|
So sollte die App in den verschiedenen Phasen mit fremden Audio-Apps
|
||||||
|
(Spotify, YouTube, Podcasts etc.) und dem eigenen Mikro umgehen.
|
||||||
|
Wenn was anders ist, ist's ein Bug.
|
||||||
|
|
||||||
|
| Phase | Andere App (Spotify) | ARIA-Mikro | Hintergrund-Service |
|
||||||
|
|------------------------------|----------------------|---------------------|---------------------|
|
||||||
|
| Idle / Ohr aus | spielt frei | aus | aus |
|
||||||
|
| Wake-Word lauscht (armed) | spielt frei | passiv (openWakeWord) | aktiv ('wake') |
|
||||||
|
| User-Aufnahme laeuft | pausiert (EXCLUSIVE) | Recording | aktiv ('rec') |
|
||||||
|
| Aufnahme zu Ende | resumed | aus | (rec released) |
|
||||||
|
| ARIA denkt/schreibt (~20s) | spielt frei | aus | (kein Slot) |
|
||||||
|
| TTS startet | pausiert (DUCK) | aus (oder barge) | aktiv ('tts') |
|
||||||
|
| TTS spielt (auch GPU-Pausen) | bleibt pausiert | barge wenn Wake-Word| aktiv |
|
||||||
|
| TTS zu Ende | nach 800ms resumed | (Conversation-Window)| (tts released) |
|
||||||
|
| Eingehender Anruf (auch VoIP)| — | Mikro pausiert | aus |
|
||||||
|
| Anruf vorbei | — | Mikro wieder armed | aktiv ('wake') |
|
||||||
|
| Anruf vorbei (Auto-Resume) | pausiert wieder | aus | aktiv ('tts') |
|
||||||
|
| Neue Frage waehrend Anruf | — | Mikro pausiert | (rec waehrend Anruf nicht) |
|
||||||
|
| Anruf vorbei nach neuer Frage | (siehe TTS-Phasen) | (siehe TTS-Phasen) | (tts gewinnt, alter Resume verworfen) |
|
||||||
|
|
||||||
|
Wichtige Mechanismen:
|
||||||
|
- **Underrun-Schutz** im PcmStreamPlayer fuettert Stille rein wenn die
|
||||||
|
Bridge in Render-Pausen liefert — Spotify bleibt durchgehend pausiert,
|
||||||
|
auch zwischen den Saetzen einer langen Antwort.
|
||||||
|
- **Conversation-Focus** (nur bei Wake-Word 'conversing') haelt den
|
||||||
|
AudioFocus dauerhaft. Bei reinem Tap-to-Talk oder Text-Chat greift's
|
||||||
|
nicht — Spotify darf in der Denk-Phase ruhig weiterspielen.
|
||||||
|
- **Foreground-Service** (mediaPlayback|microphone) haelt App-Prozess
|
||||||
|
am Leben damit TTS/Mikro/Wake-Word auch bei minimierter App weiter-
|
||||||
|
laufen. Notification zeigt aktuellen Status ("ARIA spricht/hoert
|
||||||
|
zu/bereit").
|
||||||
|
- **Anruf-Erkennung** ueber TelephonyManager (klassisch) + AudioFocus-
|
||||||
|
Loss-Listener mit Polling-Fallback (VoIP wie WhatsApp/Signal/Discord).
|
||||||
|
- **Auto-Resume nach Anruf**: beim Halt wird die Wiedergabe-Position
|
||||||
|
gemerkt (Date.now() - playbackStart - leadingSilence). Nach Auflegen
|
||||||
|
wartet die App bis zu 30s auf den WAV-Cache und spielt dann ab der
|
||||||
|
gemerkten Position weiter. Wenn das Telefonat länger als die Antwort
|
||||||
|
dauerte, ist der Cache schon fertig — instant Resume.
|
||||||
|
- **Neue Frage waehrend Anruf** (Text-Chat geht trotz Telefonat): die
|
||||||
|
neue Antwort ueberschreibt den pending Resume. _handlePcmChunkImpl
|
||||||
|
stoppt einen ggf. laufenden resumeSound und setzt pausedMessageId
|
||||||
|
zurueck wenn die neue Stream-messageId abweicht. Die letzte Antwort
|
||||||
|
gewinnt immer.
|
||||||
|
- **Audio output during an active phone call**: ARIA also answers over the speaker during a call (the call audio reaches the other party on a separate stream). haltAllPlayback is only invoked on the state CHANGE to ringing/offhook; if the call is already in progress (offhook→offhook), a new question no longer triggers a halt.
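The resume arithmetic and the last-answer-wins rule above can be sketched as follows. The class and method names are hypothetical; only the position formula (now - playbackStart - leadingSilence), the idempotency on ringing→offhook, and the messageId comparison come from the text:

```typescript
// Hypothetical sketch of the auto-resume bookkeeping described above.
class ResumeTracker {
  private playbackStartMs = 0;
  private leadingSilenceMs = 200; // silence prepended to each PCM stream
  private currentMessageId: string | null = null;
  pausedMessageId: string | null = null;
  pausedPositionMs = 0;

  onPlaybackStarted(messageId: string, nowMs: number): void {
    this.currentMessageId = messageId;
    this.playbackStartMs = nowMs;
  }

  // Called when a phone call halts playback: remember where we were.
  captureInterruption(nowMs: number): void {
    if (this.pausedMessageId !== null) return; // idempotent: ringing→offhook must not overwrite
    if (this.currentMessageId === null) return;
    this.pausedMessageId = this.currentMessageId;
    this.pausedPositionMs = nowMs - this.playbackStartMs - this.leadingSilenceMs;
  }

  // Called for every incoming PCM chunk: a different messageId means a new
  // answer was asked during the call, so the pending resume is discarded.
  onChunk(messageId: string): void {
    if (this.pausedMessageId !== null && messageId !== this.pausedMessageId) {
      this.pausedMessageId = null; // latest answer wins
      this.pausedPositionMs = 0;
    }
  }
}
```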
## Done

### Bugs / Fixes
- [x] Diagnostic: "ARIA is thinking..." no longer gets stuck
- [x] App: "ARIA is thinking..." indicator + cancel button (bridge mirrors agent_activity via RVS)
- [x] Text messages are answered by ARIA (bridge chat handler fix)
- [x] Voice selection works again: speaker_wav as basename instead of a path for the daswer123 local mode
- [x] A voice change in Diagnostic resets all app-local voice overrides via type "config"
- [x] Streaming TTS stop race: the writer waits for playbackHeadPosition before stop()/release(); no more clipped sentences
- [x] App: audio output no longer stops mid-sentence (playbackHeadPosition wait + stop-race fix)
- [x] AudioFocus.release waits for the real end of playback; no more volume ramp-up mid-answer
- [x] App mute/auto-playback bug: closure bug solved (ttsCanPlayRef mirrored live, no longer stale)
- [x] App zombie recording: turning the ear off kills a running recording so the record button keeps working
- [x] Whisper no longer transcribes voice uploads with a hardcoded "small"; the current model is kept, no unnecessary model swap
- [x] RVS/WebSocket maxPayload 50 MB: voice_upload with a base64 WAV no longer blows the frame limit
- [x] Wake-word embedding rank-4 fix (pipeline bug that prevented triggering) + read the frame count from the model metadata
- [x] PCM underrun protection: silence fill during render pauses prevents Spotify's auto-resume after 10 s of stall
- [x] Conversation-focus lifecycle: AudioFocus is tied to the wake-word state 'conversing' instead of individual streams; Spotify stays paused throughout, even between multiple answers
- [x] Voice override keeps the voice across all TTS calls of one answer (before: back to default after the first TTS call)
- [x] Voice-message bubble made defensive: the STT result adds a new bubble if the placeholder is missing (race protection)
- [x] Image + text as ONE request: the bridge buffers files for 800 ms and merges them with the following chat text into a single send_to_core (instead of two separate ARIA answers)
- [x] Diagnostic→App: persistent RVS connection instead of a fresh one per send (race problems with zombie WS solved)
- [x] Text selection in bubbles works again (nested Text+onPress removed, dataDetectorType="all" makes links clickable automatically)
- [x] **Placeholder race with parallel voice messages solved**: every recording gets a unique audioRequestId, the bridge returns it with the STT result; the app now matches exactly the right bubble instead of matching by substring
- [x] The mic-open toast "🎤 speak now" only appears once audioService.startRecording has really succeeded (instead of ~400 ms earlier, at wake-word detect)
- [x] Voice messages without an STT result are removed automatically after 60 s plus the recording duration (safe enough for 5-30 min recordings, fast enough for empty wake-word echoes)
- [x] Adaptive VAD baseline made more robust: minimum instead of average + cap to -50 dB..-28 dB (silence) / -40 dB..-18 dB (speech); no more "dead" VAD configuration in loud environments or on wake-word echo
- [x] Push-to-talk removed, tap-to-talk only (prevented touch race problems)
- [x] A manual mic stop ends the wake-word conversation: tapping the mic button while conversing → audio out + back to armed (= the wake word listens again, no auto-mic after ARIA's answer). VAD auto-stop stays for multi-turn
- [x] **Wake word pauses during a call**: phoneCall calls pauseForCall (openWakeWord.stop) on RINGING/OFFHOOK, resumeFromCall on IDLE. The pre-call state is remembered; armed stays armed, conversing is degraded to armed (the user should not land mid-dialog)
- [x] **App resume cooldown**: switching from background to foreground no longer causes a false wake-word trigger. An AppState listener sets a 1.5 s cooldown during which onWakeDetected events are ignored (the audio-level spike on the AudioFocus switch was otherwise interpreted as a wake word)
- [x] Background mic made robust: acquireBackgroundAudio('rec'/'wake') is now called BEFORE AudioRecord.startRecording; the foreground service with foregroundServiceType=microphone must be active before the mic is opened, otherwise Android 11+ blocks background access
- [x] **Silence level settable manually** (Settings → Speech Input): override value in dB from -55 to -15, default "automatic". An info button with a modal explains the scale (lower = more sensitive, higher = more robust against background noise). With a manually set value the adaptive baseline is ignored
- [x] **Short TTS texts (1-3 words) now play**: on the OnePlus A12, AudioTrack stalled at `pos=0` because the default start threshold of `bufferSize/2` (= 2 s) was never crossed for short streams. Fix: `setStartThresholdInFrames(100ms)` right after the track build (API 31+). The 4 s buffer is decoupled from the pre-roll, and `play()` is called on the very first data chunk
- [x] **The mute button now also stops a running PCM stream**: `pcmStreamActive` was already set to false on the isFinal chunk, but the AudioTrack kept playing from its buffer for seconds. `stopPlayback()` therefore skipped `PcmStreamPlayer.stop()`. Fix: always call stop() (it is idempotent), no flag check anymore
- [x] **GPS permission in the manifest + runtime request** on the settings toggle; before, ACCESS_COARSE_LOCATION / ACCESS_FINE_LOCATION were missing entirely. `Geolocation.getCurrentPosition` failed silently and the app never sent a location field
- [x] **GPS position also in the STT payload to Diagnostic**: the app sends location once in the audio payload. The bridge used it (it went into aria-core's context) but did not pass it through in the STT broadcast to Diagnostic. Diagnostic therefore never showed the GPS block for speech input, even though the "show GPS" toggle was active
- [x] **Auto-resume after a call: pcmBuffer survives**: `haltAllPlayback` cleared the pcmBuffer mid-call, so isFinal then wrote an empty WAV. New `pauseForCall` method instead of `haltAllPlayback`: the AudioTrack stops and the focus is released, but `pcmBuffer` and `pcmMessageId` remain; chunks keep being collected so isFinal writes the WAV and resumeFromInterruption finds it. Plus `captureInterruption` made idempotent (ringing → offhook does not overwrite)
- [x] **Replay resume after a call**: `_firePlaybackStarted` overwrote `currentPlaybackMsgId` with an empty pcmMessageId, so captureInterruption had nothing to record. Plus the regex `[0-9a-f-]+\.wav` did not match all file names. Plus `_playFromPathAtPosition` now updates the tracking so a second call during the same answer also works
- [x] **`pauseForCall` resets `isPlaying`**: before, further play-button taps hung after a call because `playAudio` skips the `_playNext` path when `isPlaying=true`
- [x] **The play button re-renders when the cache file is gone**: before, the button only checked `if (item.audioPath)`, which silently pointed at nothing once the cache file was deleted. Now an RNFS.exists check with a fallback to a `tts_request` to the bridge → F5-TTS renders again and the WAV goes back into the cache
- [x] **Bridge WebSocket max_size 50 MB**: Python's `websockets.connect` has a 1 MiB default; Stefan's 4 MB JPEG (5.78 MB as Base64) blew it and the bridge connection was silently dropped. The f5tts/whisper bridges already had max_size, only aria_bridge had been forgotten
- [x] **The bridge resizes images >2 MB server-side to 1568 px**: the Claude Vision API has a ~5 MB Base64 limit. Gallery images via `react-native-image-picker` are already small client-side, but the paperclip/DocumentPicker passed the raw file through and Claude returned an empty answer. Pillow in the bridge container, only for JPEG/PNG/WebP/GIF (PDFs/ZIPs/SVGs untouched)
- [x] **The bridge's `chat:error` also reads `errorMessage`**: OpenClaw puts the text there instead of in `error` when state=error → the bridge reported a generic "[Error] Unknown" and the real error appeared only in the container logs. Plus: a `chat:final` without text is now reported to the app with a hint bubble (instead of silently), e.g. when Vision silently rejects the image
- [x] **Cache cleanup on app start**: orphaned `aria_tts_*.wav` files (>5 min old) in CachesDirectoryPath are cleared away; they otherwise pile up when a sound is stopped mid-playback (call, mute, barge-in) and the completion callback never fires. Plus a new settings button "Clear TTS cache" with a live size display
- [x] Verbose-logging toggle in Settings → Log: `console.log` can be muted globally (warn/error stay active); saves adb-logcat storage when everything is running fine
- [x] **800 ms delay before the post-call auto-resume**: ARIA's new focus request otherwise collided with Spotify's own auto-resume after the call ended. The system is still in the IN_CALL→NORMAL mode transition, so Spotify sees loss → loss and stays paused. With the delay Spotify completes its resume step, then ARIA pauses it again properly
- [x] **Mute button = stop for the current answer**: before, a NEW PCM chunk sequence after un-muting continued the old answer where it left off (worked twice, then no more because isFinal had already arrived). Now with `_stoppedMessageId` tracking: on mute the active msgId is recorded and all further chunks of that msgId stay silent, even after the mute is lifted. Reset on a new msgId; new answers play normally
- [x] **Spotify resumes after a mute stop**: `stopPlayback` cleanly releases its TRANSIENT focus (USAGE_ASSISTANT) → Spotify gets a GAIN event and resumes automatically. A temporarily added `kickReleaseMedia` (USAGE_MEDIA + GAIN) actually prevented the auto-resume (Spotify interpreted it as a "user-action stop"); removed again
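The adaptive-VAD fix above boils down to clamping the derived thresholds into fixed windows. A sketch; the function name is an assumption, while the dB windows and the +6/+12 dB offsets come from the entries above:

```typescript
// Sketch of the clamped adaptive VAD thresholds described above.
// silence = baseline + 6 dB, capped to [-50, -28] dB
// speech  = baseline + 12 dB, capped to [-40, -18] dB
function vadThresholds(baselineDb: number): { silenceDb: number; speechDb: number } {
  const clamp = (v: number, lo: number, hi: number) => Math.min(hi, Math.max(lo, v));
  return {
    silenceDb: clamp(baselineDb + 6, -50, -28),
    speechDb: clamp(baselineDb + 12, -40, -18),
  };
}
```

A very quiet room (baseline -70 dB) is lifted to the -50/-40 dB floor, and a loud environment or wake-word echo can no longer push the thresholds above -28/-18 dB, which is what previously produced a "dead" configuration.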
### App Features
- [x] Image upload works (shared volume /shared/uploads/)
- [x] Voice messages are shown as text (STT → chat bubble)
- [x] Clear cache + auto-download of attachments
- [x] Ear button → conversation mode (auto-record after ARIA's answer)
- [x] Play button on ARIA messages for speech playback
- [x] Chat search in the app (magnifier in the status bar)
- [x] Cancel button in the Diagnostic chat
- [x] Split large messages into sentences for TTS
- [x] Voice settings (Ramona/Thorsten, per-voice speed; later replaced by XTTS/F5-TTS)
- [x] Highlight triggers configurable in Diagnostic (later removed entirely; it was a Piper relic)
- [x] XTTS v2 integration (gaming PC, GPU, voice cloning); replaced by F5-TTS
- [x] XTTS voice cloning (upload audio samples, use your own voice)
- [x] TTS engine selectable (Piper/XTTS); Piper dropped, XTTS dropped, now F5-TTS only
- [x] Auto-update: APK installation via FileProvider
- [x] Auto-update: "Check for updates" button in the app settings
- [x] Audio queue (sequential playback, no overlapping)
- [x] Multiple attachments + text before sending (pending preview)
- [x] Paste support for images in the Diagnostic chat
- [x] Markdown cleanup for TTS (bold, italic, code, links, etc.)
- [x] Diagnostic: export sessions as Markdown (download button)
- [x] Speech gate: a recording is discarded when no speech is detected
- [x] Session persistence: the selected session survives container restarts
- [x] Whisper STT: model selection in Diagnostic (tiny/base/small/medium/large-v3), hot reload
- [x] App: audio recording explicitly 16 kHz mono (saves a resample, optimal for Whisper)
- [x] Streaming TTS: PCM stream → AudioTrack MODE_STREAM, no WAV gaps
- [x] Disk-full banner in Diagnostic: red overlay + copyable cleanup commands (safe + aggressive)
- [x] cleanup.sh: combined Docker cleanup command (safe / --full)
- [x] Streaming TTS pre-roll: AudioTrack play() starts only once 2.5 s are buffered
- [x] Leading silence (200 ms) at stream start; AudioTrack spins up cleanly
- [x] Pre-roll buffer adjustable in the app settings (1.0-6.0 s, default 3.5 s)
- [x] Fade-in on the first PCM chunk (120 ms); hides XTTS/F5-TTS warm-up glitches
- [x] Decimals to words for TTS (0.1 → "null komma eins", with an IP-protection lookahead)
- [x] Generic acronym spell-out (XTTS → X T T S, USB → U S B, beyond the explicit list)
- [x] voice_preload/voice_ready: silent mini-render on voice change + toast/status "ready"
- [x] Whisper STT offloaded to the Gamebox (faster-whisper CUDA, float16); new aria-whisper-bridge container
- [x] aria-bridge: STT primarily remote (Gamebox), local fallback after a 45 s timeout
- [x] **F5-TTS fully replaces XTTS**: new aria-f5tts-bridge container, voice cloning, sentence-by-sentence streaming
- [x] Voice upload with automatic Whisper transcription; the user no longer has to type a reference text
- [x] Audio pause instead of ducking: Spotify/YouTube pause completely during TTS (TRANSIENT instead of MAY_DUCK)
- [x] VAD silence adjustable in the app settings (1.0-8.0 s, default 2.8 s)
- [x] MAX_RECORDING raised to 120 s; longer explanations possible
- [x] F5-TTS: reference-WAV preprocessing (loudness normalization to -16 LUFS, silence trim, 10 s clip) for consistent cloning quality
- [x] F5-TTS: German fine-tune (aihpi/F5-TTS-German, Vocos variant) configurable via hf:// path in Diagnostic
- [x] Dynamic STT timeout in aria-bridge: 300 s while the whisper-bridge is 'loading', 45 s when 'ready'
- [x] service_status broadcasts: f5tts/whisper report their load status; banner in Diagnostic (bottom right) + app (top)
- [x] config_request pattern: bridges request the current voice config on connect, aria-bridge answers
- [x] F5-TTS tuning via Diagnostic (model ID, checkpoint, cfg_strength, nfe_step) instead of ENV vars; hot reload on model change
- [x] Conversation window: conversation mode ends after X seconds of silence (1.0-20.0 s, default 8 s, adjustable in settings)
- [x] Porcupine wake-word integration in the app (replaced by openWakeWord)
- [x] HF cache as a bind mount instead of a Docker volume; no .vhdx bloat on Docker Desktop / Windows
- [x] cleanup-windows.ps1 / .bat: VHDX cleanup via diskpart (without Hyper-V), with self-elevation
- [x] App text rendering: messages selectable + autolink for URLs/e-mails/phone numbers (browser/mail/dialer)
- [x] TTS playback speed adjustable per device (Settings → 0.5-2.0x in 0.1 steps, default 1.0)
- [x] Diagnostic: voice-preview modal (play icon next to the delete X, text field with a default, play the WAV in the browser)
- [x] **Wake word fully on-device via openWakeWord (ONNX Runtime)**: Porcupine is out, no API key or license fees anymore. Bundled keywords: hey_jarvis, computer, alexa, hey_mycroft, hey_rhasspy
- [x] APK ABI split to arm64-v8a: from ~136 MB down to ~35 MB, much smaller auto-update downloads to the phone
- [x] PhoneStateListener: TTS pauses on an incoming call (READ_PHONE_STATE permission)
- [x] **VoIP calls** (WhatsApp/Signal/Discord/Teams) detected via AudioFocus loss listener + getMode polling fallback (every 3 s)
- [x] **Auto-resume after a call**: ARIA's interrupted answer continues after hang-up from the recorded position (Date.now() tracking + WAV cache, 30 s wait for the final marker after a short call)
- [x] **New question during a phone call** overrides the pending auto-resume; the latest answer wins, the old resumeSound is stopped
- [x] **Audio output during an active phone call** works (haltAllPlayback only on the state change idle→ringing/offhook, not on offhook→offhook)
- [x] **PcmPlaybackFinished event** in the native code: AudioFocus is only released once the AudioTrack is really done (before: end() cap after 0.5 s → Spotify played 32 s in parallel with ARIA)
- [x] **APK cache cleanup made more robust**: now scans CachesDirectoryPath + DocumentDirectoryPath + ExternalCachesDirectoryPath + ExternalDirectoryPath instead of Caches only. Plus a manual "Clear update cache" button in Settings → Storage with a live size display
- [x] Diagnostic chat: bubble-style formatting, multi-line input field (textarea, Enter sends, Shift+Enter inserts a newline)
- [x] Adaptive VAD threshold: baseline from the first 500 ms of mic level, silence = baseline+6 dB / speech = baseline+12 dB. Works in loud and quiet environments alike
- [x] Max recording duration configurable in settings (1-30 min, default 5 min); longer dictations possible
- [x] Barge-in: the user can interrupt ARIA during an answer/tool use, the old activity is cancelled, and the bridge gives aria-core a context hint that this is a correction
- [x] Push-to-talk removed, tap-to-talk only (prevented touch race problems)
- [x] Settings sub-screens: 8 categories (Connection, General, Speech Input, Wake Word, Speech Output, Storage, Log, About) instead of one long list
- [x] **Ready sound (airplane ding-dong) when the mic opens after the wake word**: acoustic confirmation instead of a toast only. Toggle in Settings → Wake Word, on by default
- [x] **Wake word in parallel with TTS** using AcousticEchoCanceler: the user says "Computer" while ARIA is speaking → TTS goes silent immediately, a new recording starts
- [x] **Send GPS position**: toggle in Settings → General → Location, persisted in AsyncStorage. When active, lat/lon is attached to every chat/audio message. The bridge prefixes the text for aria-core with a GPS hint (including the instruction to mention the position only when relevant)
- [x] **Background audio service**: TTS, wake-word listening AND recording keep running while the app is minimized. Foreground service with foregroundServiceType=mediaPlayback|microphone, persistent notification with dynamic text ("ARIA speaking" / "ARIA listening" / "ARIA ready")
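The TTS text-normalization features above (acronym spell-out, decimals to words) are simple text passes. A sketch of the generic acronym spell-out, assuming a plain regex approach; the function name and regex are illustrative, not the app's actual code:

```typescript
// Illustrative acronym spell-out for TTS: runs of 2+ capital letters are
// split into spaced single letters so the synthesizer reads "X T T S"
// instead of trying to pronounce "XTTS" as a word.
function spellOutAcronyms(text: string): string {
  return text.replace(/\b[A-Z]{2,}\b/g, (m) => m.split("").join(" "));
}
```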

### Infrastructure

- [x] Watchdog with container restart (2 min warning → 5 min doctor --fix → 8 min restart)
- [x] On-the-fly message backup (/shared/config/chat_backup.jsonl)
- [x] RVS messages from the smartphone get through
- [x] SSH volume read-write for the proxy (no -F workaround anymore)
## Open

### App Features

- [ ] Load chat history more reliably (AsyncStorage race condition)
- [ ] Custom wake-word upload via Diagnostic (own .onnx files without an app rebuild)
### Architecture

- [ ] Images: use Claude Vision directly (currently only a file path is passed to ARIA)
```diff
@@ -18,6 +18,7 @@ const ALLOWED_TYPES = new Set([
   "update_check", "update_available", "update_download", "update_data",
   "agent_activity", "cancel_request",
   "audio_pcm",
+  "file_from_aria",
   "xtts_delete_voice",
   "voice_preload", "voice_ready",
   "stt_request", "stt_response",
```
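The diff above adds `file_from_aria` to the bridge's message allow-list. A sketch of how such a gate is typically applied to incoming messages; the handler shape is an assumption, only the type names come from the diff:

```typescript
// Sketch of gating incoming bridge messages on the ALLOWED_TYPES set from
// the diff above (a subset of the entries shown; handler is illustrative).
const ALLOWED_TYPES = new Set([
  "agent_activity", "cancel_request",
  "audio_pcm",
  "file_from_aria",
  "stt_request", "stt_response",
]);

function acceptMessage(raw: string): { type: string } | null {
  const msg = JSON.parse(raw) as { type?: string };
  return typeof msg.type === "string" && ALLOWED_TYPES.has(msg.type)
    ? { type: msg.type }
    : null; // unknown types are dropped, not forwarded
}
```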