Compare commits
14 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 55ef207454 | |
| | 6651f5937d | |
| | e9e7dd804f | |
| | ec9530f17f | |
| | 97cb7be313 | |
| | 77e927ffcd | |
| | a9a87f12df | |
| | 2a56ac0290 | |
| | edc65ce645 | |
| | d7efaf93b3 | |
| | 31ff20c846 | |
| | 406f4cb3cc | |
| | fa0667088a | |
| | f55329706e | |
@@ -378,10 +378,13 @@ API-Endpoint fuer andere Services: `GET http://localhost:3001/api/session`

### Features

- Text chat with ARIA
-- **Voice recording**: push-to-talk (hold) or tap-to-talk (tap, auto-stop on silence)
+- **Voice recording**: tap-to-talk (tap to start, tap to stop, auto-stop on silence via VAD)
- **Conversation mode** (ear button): after every ARIA reply, recording starts automatically, back and forth like a natural conversation
- **Wake word** (on-device, openWakeWord ONNX): "Hey Jarvis", "Alexa", "Hey Mycroft", "Hey Rhasspy". The microphone listens passively and a conversation starts on the keyword. Fully on-device via ONNX Runtime: no API key, no cloud round-trip, audio never leaves the device.
-- **VAD (Voice Activity Detection)**: configurable silence tolerance (1.0-8.0 s, default 2.8 s) before auto-stop kicks in. Max recording 120 s.
+- **VAD (Voice Activity Detection)**: adaptive threshold (baseline from the first 500 ms of mic level + 6 dB offset). Configurable silence tolerance (1.0-8.0 s, default 2.8 s) before auto-stop kicks in. Max recording duration configurable (1-30 min, default 5 min); see the sketch after this list.
+- **Barge-in**: if you send a new voice/text message while ARIA is answering, she is interrupted and gets the hint "this is a correction"
+- **Wake word during TTS**: you can say "Computer" while ARIA is still talking; an AcousticEchoCanceler keeps ARIA's own voice from triggering the wake word
+- **Call pause**: TTS goes silent automatically when the phone rings (READ_PHONE_STATE permission)
- **Speech gate**: the recording is discarded if no speech was detected
- **STT (speech-to-text)**: 16 kHz mono → bridge → Gamebox Whisper (CUDA) → text in the chat, near real time
- **"ARIA is thinking..." indicator**: shows the core's live status (thinking, tool, writing) plus a cancel button
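A minimal TypeScript sketch of the adaptive VAD thresholding described above, assuming metering callbacks roughly every 100 ms. All names and the class itself are illustrative; the actual logic lives in `src/services/audio.ts` further down in this diff.

```ts
// Sketch: derive adaptive VAD thresholds from the first ~500 ms of mic metering.
// The offsets mirror the feature description above; everything else is illustrative.
const BASELINE_SAMPLES = 5;      // 5 x 100 ms metering updates = 500 ms
const SILENCE_OFFSET_DB = 6;     // "still speaking" = ambient + 6 dB
const SPEECH_OFFSET_DB = 12;     // "clearly speech" = ambient + 12 dB

class AdaptiveVadThresholds {
  private samples: number[] = [];
  silenceDb = -38;               // fallback until the baseline is measured
  speechDb = -22;

  /** Feed one metering sample (dBFS). -160 means "no metering", so skip it. */
  onMeter(db: number): void {
    if (this.samples.length >= BASELINE_SAMPLES || db <= -100) return;
    this.samples.push(db);
    if (this.samples.length === BASELINE_SAMPLES) {
      const ambient = this.samples.reduce((a, b) => a + b, 0) / BASELINE_SAMPLES;
      this.silenceDb = ambient + SILENCE_OFFSET_DB;
      this.speechDb = ambient + SPEECH_OFFSET_DB;
    }
  }
}
```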
@@ -415,7 +418,7 @@ Community-Modelle stammen aus [fwartner/home-assistant-wakewords-collection](htt

**Usage:**

- App → **Settings** → **Wake word** → pick the desired keyword → **Save + Activate**
- Tap the **ear button (👂)** in the status bar → the wake word is armed, the app listens passively
-- Say the wake word → the icon switches to 🎙️ and the conversation runs
+- Say the wake word → the icon switches to 🎙️, a **ready sound** plays (ding-dong, optional in Settings) and a "🎤 sprich jetzt" toast appears as soon as the mic is actually open
- After every ARIA reply the mic opens once more; on silence it goes back to 👂
- Tap again → ear off (🔇)
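The ear-button flow above is essentially a three-state loop. A compact sketch of those transitions with hypothetical names (the actual service is `src/services/wakeword.ts` later in this diff):

```ts
// Sketch: off -> armed (passive wake-word listening) ->
// conversing (recording + ARIA answering) -> back to armed on silence.
type WakeState = 'off' | 'armed' | 'conversing';

class WakeLoop {
  state: WakeState = 'off';

  tapEarButton(): void { this.state = this.state === 'off' ? 'armed' : 'off'; }
  onWakeWordDetected(): void { if (this.state === 'armed') this.state = 'conversing'; }
  onSilenceWindowElapsed(): void { if (this.state === 'conversing') this.state = 'armed'; }
}
```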
@@ -840,7 +843,14 @@ docker exec aria-core ssh aria-wohnung hostname

- [x] Whisper STT offloaded to the Gamebox (CUDA float16, near real time)
- [x] **F5-TTS replaces XTTS**: better voice-cloning quality, reference text auto-transcribed by Whisper
- [x] Audio pause instead of ducking (TRANSIENT instead of MAY_DUCK) + release-timing fix
-- [x] VAD silence tolerance and max recording configurable (1-8 s, 120 s)
+- [x] VAD silence tolerance configurable (1-8 s) + adaptive mic baseline + max recording configurable (1-30 min)
+- [x] Barge-in: the user can interrupt ARIA mid-answer, aria-core gets a context hint (see the sketch after this list)
+- [x] Call pause: TTS goes silent on an incoming call (PhoneStateListener)
+- [x] Settings sub-screens: 8 categories instead of one long list
+- [x] APK ABI split arm64-v8a: 35 MB instead of 136 MB
+- [x] Voice-message bubble: audioRequestId instead of substring match, no more swapped bubbles with parallel recordings
+- [x] Ready sound (airplane ding-dong) when the mic opens after the wake word: acoustic confirmation, can be disabled in Settings
+- [x] Wake word in parallel with TTS via AcousticEchoCanceler: saying "Computer" while ARIA speaks stops her and opens the mic
- [x] Disk-full banner in Diagnostic with copy-able cleanup commands
- [x] Wake word on-device via openWakeWord (ONNX Runtime, no API key) + state icon
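How the barge-in item above hangs together, as a rough sketch. The real `interruptAriaIfBusy` appears in the ChatScreen diff below; `halt` and `cancel` here are stand-ins for the actual service calls:

```ts
// Sketch: interrupt ARIA when the user sends a new message while she is
// busy, and report back whether the outgoing message should carry the
// `interrupted: true` correction hint for aria-core.
function interruptIfBusy(
  speaking: boolean,          // TTS playback running?
  thinking: boolean,          // aria-core request in flight?
  halt: () => void,           // stand-in: stop TTS playback immediately
  cancel: () => void,         // stand-in: abort the running aria-core request
): boolean {
  if (!speaking && !thinking) return false;
  if (speaking) halt();
  if (thinking) cancel();
  return true;                // sent as `interrupted: true` with the message
}
```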
@@ -79,8 +79,8 @@ android {
        applicationId "com.ariacockpit"
        minSdkVersion rootProject.ext.minSdkVersion
        targetSdkVersion rootProject.ext.targetSdkVersion
-        versionCode 703
-        versionName "0.0.7.3"
+        versionCode 707
+        versionName "0.0.7.7"
        // fallback for libraries with product flavors
        missingDimensionStrategy 'react-native-camera', 'general'
    }
@@ -8,6 +8,9 @@ import android.content.pm.PackageManager
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder
+import android.media.audiofx.AcousticEchoCanceler
+import android.media.audiofx.AutomaticGainControl
+import android.media.audiofx.NoiseSuppressor
import android.util.Log
import androidx.core.content.ContextCompat
import com.facebook.react.bridge.Promise

@@ -70,6 +73,13 @@ class OpenWakeWordModule(reactContext: ReactApplicationContext) : ReactContextBa
    private val running = AtomicBoolean(false)
    private var captureThread: Thread? = null

+    // Audio effects: echo cancellation (against ARIA's own TTS voice, which
+    // would otherwise trigger the wake word) + noise suppression. Already
+    // implied by the VOICE_COMMUNICATION audio source, but enabling them
+    // explicitly is more robust.
+    private var aec: AcousticEchoCanceler? = null
+    private var ns: NoiseSuppressor? = null
+    private var agc: AutomaticGainControl? = null

    // inference state
    private val melBuffer: ArrayList<FloatArray> = ArrayList(256) // list of 32-dim frames
    private var melProcessedIdx: Int = 0

@@ -146,8 +156,12 @@ class OpenWakeWordModule(reactContext: ReactApplicationContext) : ReactContextBa
        AudioFormat.ENCODING_PCM_16BIT,
    ).coerceAtLeast(CHUNK_SAMPLES * 2 * 4)

+    // VOICE_COMMUNICATION source: on most Android devices this enables
+    // echo cancellation + noise suppression automatically. Important so
+    // that ARIA's own voice does not trigger the wake word while we
+    // listen in parallel with TTS playback.
    val record = AudioRecord(
-        MediaRecorder.AudioSource.MIC,
+        MediaRecorder.AudioSource.VOICE_COMMUNICATION,
        SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,

@@ -159,6 +173,27 @@ class OpenWakeWordModule(reactContext: ReactApplicationContext) : ReactContextBa
        return
    }
    audioRecord = record

+    // Additionally enable the audio effects explicitly: some devices need
+    // this even though VOICE_COMMUNICATION is supposed to bring them along.
+    // Failure is not critical (continue without effects).
+    try {
+        if (AcousticEchoCanceler.isAvailable()) {
+            aec = AcousticEchoCanceler.create(record.audioSessionId)?.apply { enabled = true }
+            Log.i(TAG, "AEC aktiviert (enabled=${aec?.enabled})")
+        }
+    } catch (e: Exception) { Log.w(TAG, "AEC failed: ${e.message}") }
+    try {
+        if (NoiseSuppressor.isAvailable()) {
+            ns = NoiseSuppressor.create(record.audioSessionId)?.apply { enabled = true }
+        }
+    } catch (e: Exception) { Log.w(TAG, "NS failed: ${e.message}") }
+    try {
+        if (AutomaticGainControl.isAvailable()) {
+            agc = AutomaticGainControl.create(record.audioSessionId)?.apply { enabled = true }
+        }
+    } catch (e: Exception) { Log.w(TAG, "AGC failed: ${e.message}") }

    resetInferenceState()
    running.set(true)
    record.startRecording()

@@ -179,6 +214,13 @@ class OpenWakeWordModule(reactContext: ReactApplicationContext) : ReactContextBa
        }
    }

+    private fun releaseAudioEffects() {
+        try { aec?.release() } catch (_: Exception) {}
+        try { ns?.release() } catch (_: Exception) {}
+        try { agc?.release() } catch (_: Exception) {}
+        aec = null; ns = null; agc = null
+    }

    @ReactMethod
    fun stop(promise: Promise) {
        running.set(false)

@@ -189,6 +231,7 @@ class OpenWakeWordModule(reactContext: ReactApplicationContext) : ReactContextBa
        try { audioRecord?.stop() } catch (_: Exception) {}
        try { audioRecord?.release() } catch (_: Exception) {}
        audioRecord = null
+        releaseAudioEffects()
        Log.i(TAG, "Lauschen gestoppt")
        promise.resolve(true)
    }

@@ -201,6 +244,7 @@ class OpenWakeWordModule(reactContext: ReactApplicationContext) : ReactContextBa
        try { audioRecord?.stop() } catch (_: Exception) {}
        try { audioRecord?.release() } catch (_: Exception) {}
        audioRecord = null
+        releaseAudioEffects()
        disposeSessions()
        promise.resolve(true)
    }
Binary file not shown.
@@ -1,6 +1,6 @@
{
  "name": "aria-cockpit",
-  "version": "0.0.7.3",
+  "version": "0.0.7.7",
  "private": true,
  "scripts": {
    "android": "react-native run-android",
Binary file not shown.
@@ -1,68 +1,14 @@
/**
- * MessageText: renders chat text with auto-linkification:
- * - http(s)://... → tappable, opens in the browser
- * - mailto: or plain e-mail → tappable, opens the mail app
- * - phone numbers → tappable, opens the Android dialer
+ * MessageText: selectable chat text with Android auto-linkification.
+ *
+ * The text is selectable/copyable throughout (selectable).
+ * We use Android's dataDetectorType="all" (the system makes phone/URL/email
+ * tappable automatically) and a single <Text selectable> without nested
+ * <Text> elements carrying their own onPress. Nested Text with onPress
+ * swallowed the long-press gesture, which broke select + copy.
 */

import React from 'react';
-import { Text, Linking, TextStyle, StyleProp } from 'react-native';
-
-// Regex combining URL | email | phone number.
-// The group order matters for the detection below.
-//
-// URL: http://... or https://... up to the first whitespace / quote.
-// Email: simple standard match (not RFC-compliant but good enough).
-// Phone: international form (+49..., 0049..., 0176...), may contain spaces
-//        / hyphens / slashes / parentheses, at least 7 digits in total.
-//        Avoids trivial numbers (times, dates).
-const LINK_REGEX = new RegExp(
-  '(https?:\\/\\/[^\\s<>"]+)' + // 1: URL
-  '|([A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,})' + // 2: email
-  '|((?:\\+|00)\\d[\\d\\s()\\-\\/]{6,}\\d|0\\d{2,4}[\\s\\/\\-]?[\\d\\s\\-\\/]{5,}\\d)', // 3: phone
-  'g',
-);
-
-const LINK_STYLE = { color: '#0096FF', textDecorationLine: 'underline' } as TextStyle;
-
-interface Segment {
-  text: string;
-  kind: 'text' | 'url' | 'email' | 'phone';
-}
-
-function tokenize(raw: string): Segment[] {
-  const out: Segment[] = [];
-  let lastEnd = 0;
-  LINK_REGEX.lastIndex = 0;
-  let m: RegExpExecArray | null;
-  while ((m = LINK_REGEX.exec(raw)) !== null) {
-    if (m.index > lastEnd) {
-      out.push({ text: raw.slice(lastEnd, m.index), kind: 'text' });
-    }
-    if (m[1]) out.push({ text: m[1], kind: 'url' });
-    else if (m[2]) out.push({ text: m[2], kind: 'email' });
-    else if (m[3]) out.push({ text: m[3], kind: 'phone' });
-    lastEnd = LINK_REGEX.lastIndex;
-  }
-  if (lastEnd < raw.length) out.push({ text: raw.slice(lastEnd), kind: 'text' });
-  return out;
-}
-
-function onPress(seg: Segment) {
-  try {
-    if (seg.kind === 'url') {
-      Linking.openURL(seg.text);
-    } else if (seg.kind === 'email') {
-      Linking.openURL(`mailto:${seg.text}`);
-    } else if (seg.kind === 'phone') {
-      // the Android dialer expects the tel: scheme without spaces/hyphens
-      const clean = seg.text.replace(/[\s\-\/()]/g, '');
-      Linking.openURL(`tel:${clean}`);
-    }
-  } catch {}
-}
+import { Text, TextStyle, StyleProp } from 'react-native';

interface Props {
  text: string;

@@ -70,34 +16,9 @@ interface Props {
}

const MessageText: React.FC<Props> = ({ text, style }) => {
-  const segments = React.useMemo(() => tokenize(text), [text]);
  return (
-    <Text
-      style={style}
-      selectable
-      // dataDetectorType is Android-only and additionally makes phone/URL/
-      // email tappable via system detection, as a fallback in case our
-      // regex tokens do not match.
-      dataDetectorType="all"
-    >
-      {segments.map((seg, i) => {
-        if (seg.kind === 'text') {
-          return <Text key={i} selectable>{seg.text}</Text>;
-        }
-        return (
-          <Text
-            key={i}
-            selectable
-            style={LINK_STYLE}
-            onPress={() => onPress(seg)}
-            // let long-press pass through to the parent for selection
-            onLongPress={undefined}
-            suppressHighlighting={false}
-          >
-            {seg.text}
-          </Text>
-        );
-      })}
+    <Text style={style} selectable dataDetectorType="all">
+      {text}
    </Text>
  );
};
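For reference, a hypothetical usage of the slimmed-down component. Link, phone and e-mail detection now comes entirely from Android's `dataDetectorType`, so the caller just passes plain text:

```tsx
import React from 'react';
import MessageText from './MessageText';

// Sketch: plain text in; selection plus tappable links/phone numbers come
// from <Text selectable dataDetectorType="all"> on Android.
const Example = () => (
  <MessageText
    text="Call +49 176 1234567 or write to mail@example.com"
    style={{ color: '#FFFFFF', fontSize: 15 }}
  />
);

export default Example;
```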
@@ -44,7 +44,6 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
  const [meterDb, setMeterDb] = useState(-160);
  const pulseAnim = useRef(new Animated.Value(1)).current;
  const durationTimer = useRef<ReturnType<typeof setInterval> | null>(null);
-  const isLongPress = useRef(false);

  // start/stop the pulse animation
  useEffect(() => {

@@ -117,31 +116,10 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
    if (disabled || isRecording) return;
    const started = await audioService.startRecording(true); // autoStop = true
    if (started) {
-      isLongPress.current = false;
      setIsRecording(true);
    }
  }, [disabled, isRecording]);

-  // push-to-talk: long press
-  const handlePressIn = async () => {
-    if (disabled || isRecording) return;
-    isLongPress.current = true;
-    const started = await audioService.startRecording(false); // no autoStop
-    if (started) {
-      setIsRecording(true);
-    }
-  };
-
-  const handlePressOut = async () => {
-    if (!isRecording || !isLongPress.current) return;
-    isLongPress.current = false;
-    setIsRecording(false);
-    const result = await audioService.stopRecording();
-    if (result && result.durationMs > 300) {
-      onRecordingComplete(result);
-    }
-  };
-
  // Tap-to-talk: a single tap starts recording with auto-stop.
  // Guard against double taps during the async start/stop.
  const tapBusy = useRef(false);

@@ -162,7 +140,6 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
    // start recording with auto-stop
    const started = await audioService.startRecording(true);
    if (started) {
-      isLongPress.current = false;
      setIsRecording(true);
    }
  }

@@ -201,10 +178,6 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
        isRecording && styles.buttonOuterRecording,
        { transform: [{ scale: pulseAnim }] },
      ]}
-      onStartShouldSetResponder={() => true}
-      onResponderGrant={handlePressIn}
-      onResponderRelease={handlePressOut}
-      onResponderTerminate={handlePressOut}
    >
      <TouchableOpacity
        activeOpacity={0.8}
@@ -26,6 +26,7 @@ import rvs, { RVSMessage, ConnectionState } from '../services/rvs';
import audioService from '../services/audio';
import wakeWordService from '../services/wakeword';
import phoneCallService from '../services/phoneCall';
+import { playWakeReadySound } from '../services/wakeReadySound';
import updateService from '../services/updater';
import VoiceButton from '../components/VoiceButton';
import FileUpload, { FileData } from '../components/FileUpload';

@@ -55,6 +56,10 @@ interface ChatMessage {
  messageId?: string;
  /** Local path to the cached TTS audio file (file://...) */
  audioPath?: string;
+  /** Correlation ID for voice messages; echoed back with the STT result so
+   * we replace EXACTLY the right placeholder bubble, even when several
+   * recordings are open in parallel. */
+  audioRequestId?: string;
}

// --- constants ---

@@ -292,33 +297,42 @@ const ChatScreen: React.FC = () => {
    // ...would get the same text (bug: the second answer overwrites the first).
    if (sender === 'stt') {
      const sttText = (message.payload.text as string) || '';
-      if (sttText) {
-        setMessages(prev => {
-          const idx = prev.findIndex(m =>
-            m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')
-          );
-          console.log('[Chat] STT-Result: idx=%d text="%s" placeholders=%d',
-            idx, sttText.slice(0, 60),
-            prev.filter(m => m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')).length);
-          const newText = `\uD83C\uDFA4 ${sttText}`;
-          if (idx < 0) {
-            // Defensive: if no placeholder is in state (e.g. because it was
-            // never added or was already lost through another update), insert
-            // the voice message as a new bubble anyway. Otherwise ARIA's
-            // answer arrives without a visible user message.
-            return capMessages([...prev, {
-              id: nextId(),
-              sender: 'user',
-              text: newText,
-              timestamp: message.timestamp,
-              attachments: [{ type: 'audio', name: 'Sprachaufnahme' }],
-            }]);
-          }
-        });
-      }
+      const sttAudioReqId = (message.payload.audioRequestId as string) || '';
+      if (!sttText) {
+        return;
+      }
+      setMessages(prev => {
+        const newText = `\uD83C\uDFA4 ${sttText}`;
+        // Primary: match by audioRequestId (unique per recording). That way
+        // nothing gets mixed up when two audios were sent shortly after one
+        // another and their STT results overlap.
+        if (sttAudioReqId) {
+          const idxById = prev.findIndex(m => m.audioRequestId === sttAudioReqId);
+          if (idxById >= 0) {
+            const next = prev.slice();
+            next[idxById] = { ...next[idxById], text: newText };
+            return next;
+          }
+        }
+        // Fallback: old bridge version without audioRequestId; match by
+        // substring and take the FIRST still-unresolved placeholder.
+        const idx = prev.findIndex(m =>
+          m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')
+        );
+        if (idx >= 0) {
+          const next = prev.slice();
+          next[idx] = { ...next[idx], text: newText };
+          return next;
+        }
+        // last fallback: no placeholder at all → insert a new bubble
+        return capMessages([...prev, {
+          id: nextId(),
+          sender: 'user',
+          text: newText,
+          timestamp: message.timestamp,
+          attachments: [{ type: 'audio', name: 'Sprachaufnahme' }],
+        }]);
+      });
      return;
    }

@@ -480,7 +494,14 @@ const ChatScreen: React.FC = () => {
    // conversation window: the user has X seconds to start talking, otherwise the conversation ends
    const windowMs = await loadConvWindowMs();
    const started = await audioService.startRecording(true, windowMs);
-    if (!started) {
+    if (started) {
+      // Only NOW signal that the mic is really open; before this it was
+      // still in the init phase. This way the user knows exactly when they
+      // can start talking. The "ready" sound (ding-dong) is optional and
+      // can be disabled under Settings → Wake word.
+      ToastAndroid.show('🎤 Mikro offen — sprich jetzt', ToastAndroid.SHORT);
+      playWakeReadySound().catch(() => {});
+    } else {
      // microphone not available, try again next time
      wakeWordService.resume();
    }

@@ -491,13 +512,17 @@ const ChatScreen: React.FC = () => {
      const result = await audioService.stopRecording();
      if (result && result.durationMs > 500) {
        // the user spoke within the window → send the voice message.
+        // Barge-in: cancel any running ARIA activity if there is one.
+        const wasInterrupted = interruptAriaIfBusy();
        const location = await getCurrentLocation();
+        const audioRequestId = `audio_${Date.now()}_${Math.floor(Math.random() * 100000)}`;
        const userMsg: ChatMessage = {
          id: nextId(),
          sender: 'user',
          text: '🎙 Spracheingabe wird verarbeitet...',
          timestamp: Date.now(),
          attachments: [{ type: 'audio', name: 'Sprachaufnahme' }],
+          audioRequestId,
        };
        setMessages(prev => capMessages([...prev, userMsg]));
        rvs.send('audio', {

@@ -506,6 +531,8 @@ const ChatScreen: React.FC = () => {
          mimeType: result.mimeType,
          voice: localXttsVoiceRef.current,
          speed: ttsSpeedRef.current,
+          interrupted: wasInterrupted,
+          audioRequestId,
          ...(location && { location }),
        });
        // resume() is triggered by onPlaybackFinished after ARIA's answer.

@@ -518,9 +545,43 @@ const ChatScreen: React.FC = () => {
      }
    });

+    // Barge-in via wake word: the user says "Computer" while ARIA speaks.
+    // The wake-word service started listening in parallel when TTS began
+    // (with AcousticEchoCanceler so ARIA's own voice does not trigger it).
+    const unsubBarge = wakeWordService.onBargeIn(async () => {
+      console.log('[Chat] Barge-In via Wake-Word — TTS abbrechen + neue Aufnahme');
+      audioService.haltAllPlayback('barge-in via wake-word');
+      setAgentActivity({ activity: 'idle', tool: '' });
+      rvs.send('cancel_request' as any, {});
+      // short pause so the halt takes effect, then start a new recording
+      await new Promise(r => setTimeout(r, 150));
+      const windowMs = await loadConvWindowMs();
+      const started = await audioService.startRecording(true, windowMs);
+      if (started) {
+        ToastAndroid.show('🎤 Mikro offen — sprich jetzt', ToastAndroid.SHORT);
+        playWakeReadySound().catch(() => {});
+      }
+    });
+
+    // TTS lifecycle: while ARIA is speaking and a wake word is available,
+    // listen in parallel; the user can say "Computer" instead of tapping.
+    const unsubTtsStart = audioService.onPlaybackStarted(() => {
+      if (wakeWordService.isConversing() && wakeWordService.hasWakeWord()) {
+        wakeWordService.startBargeListening().catch(() => {});
+      }
+    });
+    const unsubTtsEnd = audioService.onPlaybackFinished(() => {
+      // before the next recording: stop barge listening so the AudioRecorder
+      // can grab the mic.
+      wakeWordService.stopBargeListening().catch(() => {});
+    });
+
    return () => {
      unsubWake();
      unsubSilence();
+      unsubBarge();
+      unsubTtsStart();
+      unsubTtsEnd();
    };
  }, [wakeWordActive]);

@@ -608,6 +669,8 @@ const ChatScreen: React.FC = () => {

    setInputText('');

+    // Barge-in: cancel any running ARIA activity if there is one.
+    const wasInterrupted = interruptAriaIfBusy();
    const location = await getCurrentLocation();

    const userMsg: ChatMessage = {

@@ -618,16 +681,17 @@ const ChatScreen: React.FC = () => {
    };
    setMessages(prev => capMessages([...prev, userMsg]));

-    console.log('[Chat] sende mit voice=%s speed=%s',
-      localXttsVoiceRef.current || '(default)', ttsSpeedRef.current);
+    console.log('[Chat] sende mit voice=%s speed=%s interrupted=%s',
+      localXttsVoiceRef.current || '(default)', ttsSpeedRef.current, wasInterrupted);
    // send via RVS, with the device-local voice (the bridge uses it for the reply)
    rvs.send('chat', {
      text,
      voice: localXttsVoiceRef.current,
      speed: ttsSpeedRef.current,
+      interrupted: wasInterrupted,
      ...(location && { location }),
    });
-  }, [inputText, getCurrentLocation, pendingAttachments, sendPendingAttachments]);
+  }, [inputText, getCurrentLocation, pendingAttachments, sendPendingAttachments, interruptAriaIfBusy]);

  // cancel the request: hide the local indicator immediately, the bridge triggers doctor --fix
  const cancelRequest = useCallback(() => {

@@ -635,15 +699,37 @@ const ChatScreen: React.FC = () => {
    rvs.send('cancel_request' as any, {});
  }, []);

+  // Barge-in: if the user records/sends a new voice message while ARIA is
+  // working/speaking, cancel the old activity immediately: silence the TTS
+  // and abort the aria-core run via cancel_request. That way you can say
+  // "oh forget it, do X instead" like in a real conversation.
+  const interruptAriaIfBusy = useCallback(() => {
+    const speaking = audioService.isPlayingAudio();
+    const thinking = agentActivity.activity !== 'idle';
+    if (!speaking && !thinking) return false;
+    console.log('[Chat] Barge-In: speaking=%s thinking=%s — interrupting ARIA',
+      speaking, thinking);
+    if (speaking) audioService.haltAllPlayback('user spricht (barge-in)');
+    if (thinking) {
+      setAgentActivity({ activity: 'idle', tool: '' });
+      rvs.send('cancel_request' as any, {});
+    }
+    return true;
+  }, [agentActivity]);
+
  // voice recording finished
  const handleVoiceRecording = useCallback(async (result: RecordingResult) => {
+    // barge-in: cancel any running ARIA activity if active
+    const wasInterrupted = interruptAriaIfBusy();
    const location = await getCurrentLocation();
+    const audioRequestId = `audio_${Date.now()}_${Math.floor(Math.random() * 100000)}`;

    const userMsg: ChatMessage = {
      id: nextId(),
      sender: 'user',
      text: '🎙 Spracheingabe wird verarbeitet...',
      timestamp: Date.now(),
+      audioRequestId,
    };
    setMessages(prev => capMessages([...prev, userMsg]));

@@ -653,9 +739,11 @@ const ChatScreen: React.FC = () => {
      mimeType: result.mimeType,
      voice: localXttsVoiceRef.current,
      speed: ttsSpeedRef.current,
+      interrupted: wasInterrupted,
+      audioRequestId,
      ...(location && { location }),
    });
-  }, [getCurrentLocation]);
+  }, [getCurrentLocation, interruptAriaIfBusy]);

  // file selected → add to the pending list
  const handleFileSelected = useCallback(async (file: FileData) => {
@@ -35,11 +35,20 @@ import {
  CONV_WINDOW_MIN_SEC,
  CONV_WINDOW_MAX_SEC,
  CONV_WINDOW_STORAGE_KEY,
+  MAX_RECORDING_DEFAULT_SEC,
+  MAX_RECORDING_MIN_SEC,
+  MAX_RECORDING_MAX_SEC,
+  MAX_RECORDING_STORAGE_KEY,
  TTS_SPEED_DEFAULT,
  TTS_SPEED_MIN,
  TTS_SPEED_MAX,
  TTS_SPEED_STORAGE_KEY,
} from '../services/audio';
+import {
+  isWakeReadySoundEnabled,
+  setWakeReadySoundEnabled,
+  playWakeReadySound,
+} from '../services/wakeReadySound';
import wakeWordService, {
  WAKE_KEYWORDS,
  KEYWORD_LABELS,

@@ -72,6 +81,18 @@ interface EventEntry {

type LogTab = 'live' | 'events';

+// Settings sub-screens, in main-menu order.
+const SETTINGS_SECTIONS = [
+  { id: 'connection', icon: '🔌', label: 'Verbindung', desc: 'Server, Token, Status, Verbindungslog' },
+  { id: 'general', icon: '⚙️', label: 'Allgemein', desc: 'Betriebsmodus, GPS-Standort' },
+  { id: 'voice_input', icon: '🎙️', label: 'Spracheingabe', desc: 'Stille-Toleranz, Aufnahmedauer' },
+  { id: 'wake_word', icon: '👂', label: 'Wake-Word', desc: 'Wake-Word-Auswahl' },
+  { id: 'voice_output', icon: '🔊', label: 'Sprachausgabe', desc: 'Stimmen, Pre-Roll, Geschwindigkeit' },
+  { id: 'storage', icon: '📁', label: 'Speicher', desc: 'Anhang-Speicherort, Auto-Download' },
+  { id: 'protocol', icon: '📜', label: 'Protokoll', desc: 'Privatsphaere, Backup' },
+  { id: 'about', icon: 'ℹ️', label: 'Ueber', desc: 'App-Version, Update' },
+] as const;
+
// container colors for the live logs
const SOURCE_COLORS: Record<string, string> = {
  'aria-core': '#4A9EFF', // blue

@@ -102,15 +123,21 @@ const SettingsScreen: React.FC = () => {
  const [ttsPrerollSec, setTtsPrerollSec] = useState<number>(TTS_PREROLL_DEFAULT_SEC);
  const [vadSilenceSec, setVadSilenceSec] = useState<number>(VAD_SILENCE_DEFAULT_SEC);
  const [convWindowSec, setConvWindowSec] = useState<number>(CONV_WINDOW_DEFAULT_SEC);
+  const [maxRecordingSec, setMaxRecordingSec] = useState<number>(MAX_RECORDING_DEFAULT_SEC);
  const [ttsSpeed, setTtsSpeed] = useState<number>(TTS_SPEED_DEFAULT);
  const [wakeKeyword, setWakeKeyword] = useState<string>(DEFAULT_KEYWORD);
  const [wakeStatus, setWakeStatus] = useState<string>('');
+  const [wakeReadySound, setWakeReadySound] = useState<boolean>(true);
  const [editingPath, setEditingPath] = useState(false);
  const [xttsVoice, setXttsVoice] = useState('');
  const [loadingVoice, setLoadingVoice] = useState<string | null>(null);
  const [availableVoices, setAvailableVoices] = useState<Array<{name: string, size: number}>>([]);
  const [voiceCloneVisible, setVoiceCloneVisible] = useState(false);
  const [tempPath, setTempPath] = useState('');
+  // Sub-screen navigation: null = main menu, otherwise one of the section
+  // IDs. This keeps all shared state in the same component closure, so no
+  // react-navigation stack setup is needed.
+  const [currentSection, setCurrentSection] = useState<string | null>(null);

  let logIdCounter = 0;

@@ -156,6 +183,14 @@ const SettingsScreen: React.FC = () => {
      }
    }
  });
+  AsyncStorage.getItem(MAX_RECORDING_STORAGE_KEY).then(saved => {
+    if (saved != null) {
+      const n = parseFloat(saved);
+      if (isFinite(n) && n >= MAX_RECORDING_MIN_SEC && n <= MAX_RECORDING_MAX_SEC) {
+        setMaxRecordingSec(n);
+      }
+    }
+  });
  AsyncStorage.getItem(TTS_SPEED_STORAGE_KEY).then(saved => {
    if (saved != null) {
      const n = parseFloat(saved);

@@ -165,6 +200,7 @@ const SettingsScreen: React.FC = () => {
  AsyncStorage.getItem(WAKE_KEYWORD_STORAGE).then(saved => {
    if (saved && (WAKE_KEYWORDS as readonly string[]).includes(saved)) setWakeKeyword(saved);
  });
+  isWakeReadySoundEnabled().then(setWakeReadySound);
  AsyncStorage.getItem('aria_xtts_voice').then(saved => {
    if (saved) setXttsVoice(saved);
  });

@@ -480,7 +516,39 @@ const SettingsScreen: React.FC = () => {
      />
      <ScrollView style={styles.container} contentContainerStyle={styles.content}>

+        {currentSection === null && (
+          <>
+            {SETTINGS_SECTIONS.map(s => (
+              <TouchableOpacity
+                key={s.id}
+                style={styles.menuItem}
+                onPress={() => setCurrentSection(s.id)}
+              >
+                <Text style={styles.menuItemIcon}>{s.icon}</Text>
+                <View style={styles.menuItemTextWrap}>
+                  <Text style={styles.menuItemLabel}>{s.label}</Text>
+                  <Text style={styles.menuItemDesc}>{s.desc}</Text>
+                </View>
+                <Text style={styles.menuItemChevron}>›</Text>
+              </TouchableOpacity>
+            ))}
+          </>
+        )}
+
+        {currentSection !== null && (
+          <TouchableOpacity
+            style={styles.subScreenHeader}
+            onPress={() => setCurrentSection(null)}
+          >
+            <Text style={styles.subScreenBack}>‹</Text>
+            <Text style={styles.subScreenTitle}>
+              {SETTINGS_SECTIONS.find(s => s.id === currentSection)?.label || ''}
+            </Text>
+          </TouchableOpacity>
+        )}
+
        {/* === Connection === */}
+        {currentSection === 'connection' && (<>
        <Text style={styles.sectionTitle}>Verbindung</Text>
        <View style={styles.card}>
          {/* status display */}

@@ -577,8 +645,10 @@ const SettingsScreen: React.FC = () => {
            <Text style={styles.clearButtonText}>Log l{'\u00F6'}schen</Text>
          </TouchableOpacity>
        </View>
+        </>)}

        {/* === Mode === */}
+        {currentSection === 'general' && (<>
        <Text style={styles.sectionTitle}>Betriebsmodus</Text>
        <View style={styles.card}>
          <ModeSelector currentModeId={currentMode} onModeChange={handleModeChange} />

@@ -602,8 +672,10 @@ const SettingsScreen: React.FC = () => {
            />
          </View>
        </View>
+        </>)}

        {/* === Voice input (device-local) === */}
+        {currentSection === 'voice_input' && (<>
        <Text style={styles.sectionTitle}>Spracheingabe</Text>
        <View style={styles.card}>
          <Text style={styles.toggleLabel}>Stille-Toleranz</Text>

@@ -671,9 +743,44 @@ const SettingsScreen: React.FC = () => {
              <Text style={styles.prerollButtonText}>+1</Text>
            </TouchableOpacity>
          </View>

+          <Text style={[styles.toggleLabel, {marginTop: 24}]}>Maximale Aufnahmedauer</Text>
+          <Text style={styles.toggleHint}>
+            Notbremse: nach so vielen Minuten wird die Aufnahme automatisch beendet,
+            auch wenn keine Stille erkannt wurde. Nuetzlich fuer lange Erklaerungen
+            oder Diktate. Default: {Math.round(MAX_RECORDING_DEFAULT_SEC / 60)} Min, max {Math.round(MAX_RECORDING_MAX_SEC / 60)} Min.
+          </Text>
+          <View style={styles.prerollRow}>
+            <TouchableOpacity
+              style={styles.prerollButton}
+              onPress={() => {
+                const next = Math.max(MAX_RECORDING_MIN_SEC, maxRecordingSec - 60);
+                setMaxRecordingSec(next);
+                AsyncStorage.setItem(MAX_RECORDING_STORAGE_KEY, String(next));
+              }}
+              disabled={maxRecordingSec <= MAX_RECORDING_MIN_SEC}
+            >
+              <Text style={styles.prerollButtonText}>−1m</Text>
+            </TouchableOpacity>
+            <Text style={styles.prerollValue}>{Math.round(maxRecordingSec / 60)} min</Text>
+            <TouchableOpacity
+              style={styles.prerollButton}
+              onPress={() => {
+                const next = Math.min(MAX_RECORDING_MAX_SEC, maxRecordingSec + 60);
+                setMaxRecordingSec(next);
+                AsyncStorage.setItem(MAX_RECORDING_STORAGE_KEY, String(next));
+              }}
+              disabled={maxRecordingSec >= MAX_RECORDING_MAX_SEC}
+            >
+              <Text style={styles.prerollButtonText}>+1m</Text>
+            </TouchableOpacity>
+          </View>
        </View>
+        </>)}

        {/* === Wake word (fully on-device, openWakeWord) === */}
+        {currentSection === 'wake_word' && (<>
        <Text style={styles.sectionTitle}>Wake-Word</Text>
        <View style={styles.card}>
          <Text style={styles.toggleHint}>

@@ -728,9 +835,36 @@ const SettingsScreen: React.FC = () => {
          {!!wakeStatus && (
            <Text style={{marginTop: 8, fontSize: 12, color: '#8888AA'}}>{wakeStatus}</Text>
          )}

+          <View style={[styles.toggleRow, {marginTop: 20, borderTopWidth: 1, borderTopColor: '#1E1E2E', paddingTop: 16}]}>
+            <View style={styles.toggleInfo}>
+              <Text style={styles.toggleLabel}>Bereit-Sound abspielen</Text>
+              <Text style={styles.toggleHint}>
+                Kurzer Ding-Dong wenn das Mikro nach Wake-Word offen ist —
+                akustische Bestaetigung dass du jetzt sprechen darfst.
+              </Text>
+            </View>
+            <Switch
+              value={wakeReadySound}
+              onValueChange={async (val) => {
+                setWakeReadySound(val);
+                await setWakeReadySoundEnabled(val);
+                if (val) {
+                  // Play a preview right away so the user knows what it
+                  // sounds like. playWakeReadySound checks the flag that was
+                  // just set: with val=true it plays, with false it stays silent.
+                  setTimeout(() => playWakeReadySound().catch(() => {}), 150);
+                }
+              }}
+              trackColor={{ false: '#2A2A3E', true: '#0096FF' }}
+              thumbColor={wakeReadySound ? '#FFFFFF' : '#666680'}
+            />
+          </View>
        </View>
+        </>)}

        {/* === Voice output (device-local) === */}
+        {currentSection === 'voice_output' && (<>
        <Text style={styles.sectionTitle}>Sprachausgabe</Text>
        <View style={styles.card}>
          <View style={styles.toggleRow}>

@@ -873,7 +1007,10 @@ const SettingsScreen: React.FC = () => {
          )}
        </View>
+        </>)}

        {/* === Storage === */}
+        {currentSection === 'storage' && (<>
        <Text style={styles.sectionTitle}>Anhang-Speicher</Text>
        <View style={styles.card}>
          <View style={styles.toggleRow}>

@@ -948,7 +1085,10 @@ const SettingsScreen: React.FC = () => {
          )}
        </View>
+        </>)}

        {/* === Logs === */}
+        {currentSection === 'protocol' && (<>
        <Text style={styles.sectionTitle}>Protokoll</Text>
        <View style={styles.card}>
          {/* tab switcher */}

@@ -1027,8 +1167,10 @@ const SettingsScreen: React.FC = () => {
            <Text style={styles.clearButtonText}>Protokoll l{'\u00F6'}schen</Text>
          </TouchableOpacity>
        </View>
+        </>)}

        {/* === About === */}
+        {currentSection === 'about' && (<>
        <Text style={styles.sectionTitle}>{'\u00DC'}ber</Text>
        <View style={styles.card}>
          <Text style={styles.aboutTitle}>ARIA Cockpit</Text>

@@ -1048,6 +1190,7 @@ const SettingsScreen: React.FC = () => {
          <Text style={styles.connectButtonText}>Auf Updates pr{'\u00FC'}fen</Text>
        </TouchableOpacity>
      </View>
+      </>)}

      {/* space at the end */}
      <View style={styles.bottomSpacer} />

@@ -1076,6 +1219,58 @@ const styles = StyleSheet.create({
    marginBottom: 8,
    marginLeft: 4,
  },
+  menuItem: {
+    flexDirection: 'row',
+    alignItems: 'center',
+    backgroundColor: '#1E1E2E',
+    borderRadius: 10,
+    paddingVertical: 14,
+    paddingHorizontal: 14,
+    marginBottom: 8,
+  },
+  menuItemIcon: {
+    fontSize: 22,
+    marginRight: 14,
+    width: 28,
+    textAlign: 'center',
+  },
+  menuItemTextWrap: {
+    flex: 1,
+  },
+  menuItemLabel: {
+    color: '#FFFFFF',
+    fontSize: 16,
+    fontWeight: '600',
+  },
+  menuItemDesc: {
+    color: '#8888AA',
+    fontSize: 12,
+    marginTop: 2,
+  },
+  menuItemChevron: {
+    color: '#8888AA',
+    fontSize: 24,
+    fontWeight: '300',
+    marginLeft: 8,
+  },
+  subScreenHeader: {
+    flexDirection: 'row',
+    alignItems: 'center',
+    paddingVertical: 8,
+    marginBottom: 8,
+  },
+  subScreenBack: {
+    color: '#0096FF',
+    fontSize: 32,
+    fontWeight: '300',
+    marginRight: 12,
+    lineHeight: 36,
+  },
+  subScreenTitle: {
+    color: '#FFFFFF',
+    fontSize: 20,
+    fontWeight: '700',
+  },
  card: {
    backgroundColor: '#12122A',
    borderRadius: 14,
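The load-and-clamp pattern repeated above (VAD silence, max recording duration, TTS speed) could be captured in one helper. A sketch with a hypothetical name, not part of the actual diff:

```ts
import AsyncStorage from '@react-native-async-storage/async-storage';

// Sketch: load a persisted numeric setting, rejecting values outside
// [min, max] and falling back to a default on missing/invalid data.
export async function loadClampedNumber(
  key: string, def: number, min: number, max: number): Promise<number> {
  try {
    const raw = await AsyncStorage.getItem(key);
    if (raw != null) {
      const n = parseFloat(raw);
      if (isFinite(n) && n >= min && n <= max) return n;
    }
  } catch {}
  return def;
}
```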
@@ -6,7 +6,7 @@
 * Uses react-native-audio-recorder-player for recording.
 */

-import { Platform, PermissionsAndroid, NativeModules } from 'react-native';
+import { Platform, PermissionsAndroid, NativeModules, ToastAndroid } from 'react-native';
import Sound from 'react-native-sound';
import RNFS from 'react-native-fs';
import AsyncStorage from '@react-native-async-storage/async-storage';

@@ -72,9 +72,16 @@ const AUDIO_SAMPLE_RATE = 16000;
const AUDIO_CHANNELS = 1;
const AUDIO_ENCODING = 'audio/wav';

-// VAD (Voice Activity Detection): silence detection
-const VAD_SILENCE_THRESHOLD_DB = -45; // below this counts as "silence"
-const VAD_SPEECH_THRESHOLD_DB = -28; // above this counts as "speech" (speech gate); higher = less ambient noise
+// VAD (Voice Activity Detection): silence detection.
+// Fallback values in case the adaptive baseline measurement fails (e.g.
+// because the mic delivers no metering updates). The adaptive values are
+// measured at runtime from the first BASELINE_SAMPLES and set to
+// baseline + offset, which works in loud and quiet environments alike.
+const VAD_SILENCE_FALLBACK_DB = -38; // fallback silence threshold
+const VAD_SPEECH_FALLBACK_DB = -22; // fallback speech threshold
+const VAD_SILENCE_OFFSET_DB = 6; // speech = baseline + 6 dB
+const VAD_SPEECH_OFFSET_DB = 12; // confident speech = baseline + 12 dB
+const VAD_BASELINE_SAMPLES = 5; // 5 x 100 ms = 500 ms baseline
const VAD_SPEECH_MIN_MS = 500; // ms of speech before the recording counts; longer = no more coughs/knocks

// VAD silence (in seconds): how long a speech pause is tolerated before

@@ -138,7 +145,24 @@ async function loadVadSilenceMs(): Promise<number> {

// Max duration of a recording (emergency brake against runaway loops).
-// Raised to 2 minutes so that longer explanations get through.
-const MAX_RECORDING_MS = 120000;
+// Default 5 minutes; configurable in the app settings (1-30 minutes).
+export const MAX_RECORDING_DEFAULT_SEC = 300;
+export const MAX_RECORDING_MIN_SEC = 60;
+export const MAX_RECORDING_MAX_SEC = 1800;
+export const MAX_RECORDING_STORAGE_KEY = 'aria_max_recording_sec';
+
+export async function loadMaxRecordingMs(): Promise<number> {
+  try {
+    const raw = await AsyncStorage.getItem(MAX_RECORDING_STORAGE_KEY);
+    if (raw != null) {
+      const n = parseFloat(raw);
+      if (isFinite(n) && n >= MAX_RECORDING_MIN_SEC && n <= MAX_RECORDING_MAX_SEC) {
+        return Math.round(n * 1000);
+      }
+    }
+  } catch {}
+  return MAX_RECORDING_DEFAULT_SEC * 1000;
+}

// Pre-roll: how long audio sits in the AudioTrack buffer before play() starts.
// Adjustable via Diagnostic/Settings (key: aria_tts_preroll_sec).

@@ -212,6 +236,14 @@ class AudioService {
  // latch so the silence callback fires exactly once per recording
  private silenceFired: boolean = false;
  private noSpeechTimer: ReturnType<typeof setTimeout> | null = null;
+  // Adaptive thresholds, measured from the mic level during the first
+  // 500 ms. baseline = avg dB of the first 5 samples, then:
+  //   silence = baseline + VAD_SILENCE_OFFSET_DB (6 dB above ambient)
+  //   speech  = baseline + VAD_SPEECH_OFFSET_DB (12 dB above ambient = clear talking)
+  // Works in a quiet office as well as a noisy cafe.
+  private vadBaselineSamples: number[] = [];
+  private vadAdaptiveSilenceDb: number = VAD_SILENCE_FALLBACK_DB;
+  private vadAdaptiveSpeechDb: number = VAD_SPEECH_FALLBACK_DB;

  constructor() {
    this.recorder = new AudioRecorderPlayer();

@@ -270,6 +302,14 @@ class AudioService {
    this.stopPlayback();
  }

+  /** True while ARIA is playing something, whether WAV queue or PCM stream.
+   * Useful for barge-in: if the user speaks while ARIA speaks, ARIA's
+   * playback should be aborted and the new user message processed
+   * ("oh forget it, do X instead"). */
+  isPlayingAudio(): boolean {
+    return this.isPlaying || this.pcmStreamActive;
+  }
+
  // --- permissions ---

  async requestMicrophonePermission(): Promise<boolean> {

@@ -341,8 +381,25 @@ class AudioService {
      const db = e.currentMetering ?? -160;
      this.meterListeners.forEach(cb => cb(db));

+      // Adaptive baseline: collect the first 5 samples (~500 ms), then
+      // adjust the thresholds. Ignore -160 (no metering), otherwise the
+      // baseline would be uselessly low.
+      if (this.vadBaselineSamples.length < VAD_BASELINE_SAMPLES) {
+        if (db > -100) {
+          this.vadBaselineSamples.push(db);
+          if (this.vadBaselineSamples.length === VAD_BASELINE_SAMPLES) {
+            const avg = this.vadBaselineSamples.reduce((a, b) => a + b, 0) / VAD_BASELINE_SAMPLES;
+            this.vadAdaptiveSilenceDb = avg + VAD_SILENCE_OFFSET_DB;
+            this.vadAdaptiveSpeechDb = avg + VAD_SPEECH_OFFSET_DB;
+            const msg = `VAD: ambient=${avg.toFixed(0)}dB stille>${this.vadAdaptiveSilenceDb.toFixed(0)}dB`;
+            console.log('[Audio] %s speech>%s', msg, this.vadAdaptiveSpeechDb.toFixed(1));
+            try { ToastAndroid.show(msg, ToastAndroid.SHORT); } catch {}
+          }
+        }
+      }
+
      // speech gate: detect whether someone is actually talking
-      if (db > VAD_SPEECH_THRESHOLD_DB) {
+      if (db > this.vadAdaptiveSpeechDb) {
        if (!this.speechDetected && this.speechStartTime === 0) {
          this.speechStartTime = Date.now();
        }

@@ -357,7 +414,7 @@ class AudioService {

      // VAD: detect silence (only once speech has been detected)
      if (this.vadEnabled) {
-        if (db > VAD_SILENCE_THRESHOLD_DB) {
+        if (db > this.vadAdaptiveSilenceDb) {
          this.lastSpeechTime = Date.now();
        }
      }

@@ -367,6 +424,12 @@ class AudioService {
    this.lastSpeechTime = Date.now();
    this.speechDetected = false;
    this.speechStartTime = 0;
+    // Reset the adaptive VAD: the baseline is re-measured during the first
+    // 500 ms. Until then the fallback thresholds apply; they are a bit more
+    // sensitive than the old values (-38 instead of -45 for silence).
+    this.vadBaselineSamples = [];
+    this.vadAdaptiveSilenceDb = VAD_SILENCE_FALLBACK_DB;
+    this.vadAdaptiveSpeechDb = VAD_SPEECH_FALLBACK_DB;
    this.setState('recording');

    // pause other apps during the recording (music, videos etc.)

@@ -394,18 +457,19 @@ class AudioService {
    };
    if (autoStop) {
      const vadSilenceMs = await loadVadSilenceMs();
+      const maxRecordingMs = await loadMaxRecordingMs();
      console.log('[Audio] startRecording: autoStop=true, VAD-Stille=%dms, MAX=%dms',
-        vadSilenceMs, MAX_RECORDING_MS);
+        vadSilenceMs, maxRecordingMs);
      this.vadTimer = setInterval(() => {
        const silenceDuration = Date.now() - this.lastSpeechTime;
        if (silenceDuration >= vadSilenceMs) {
          fireSilenceOnce(`VAD ${silenceDuration}ms Stille (Schwelle=${vadSilenceMs}ms)`);
        }
      }, 200);
-      // emergency brake: force-stop after MAX_RECORDING_MS
+      // emergency brake: force-stop after maxRecordingMs
      this.maxDurationTimer = setTimeout(() => {
-        fireSilenceOnce(`Max-Dauer ${MAX_RECORDING_MS}ms`);
-      }, MAX_RECORDING_MS);
+        fireSilenceOnce(`Max-Dauer ${maxRecordingMs}ms`);
+      }, maxRecordingMs);
    }

    // conversation window: if the user does not start talking within

@@ -604,6 +668,7 @@ class AudioService {
      }
      this._cancelDeferredFocusRelease();
      AudioFocus?.requestDuck().catch(() => {});
+      this._firePlaybackStarted();
    }
  }

@@ -718,6 +783,7 @@ class AudioService {

  // callback once all audio parts have been played
  private playbackFinishedListeners: (() => void)[] = [];
+  private playbackStartedListeners: (() => void)[] = [];

  onPlaybackFinished(callback: () => void): () => void {
    this.playbackFinishedListeners.push(callback);

@@ -726,6 +792,21 @@ class AudioService {
    };
  }

+  /** Callback when ARIA's TTS playback starts; used for wake-word parallel
+   * listening while ARIA speaks (barge-in by saying "Computer"). */
+  onPlaybackStarted(callback: () => void): () => void {
+    this.playbackStartedListeners.push(callback);
+    return () => {
+      this.playbackStartedListeners = this.playbackStartedListeners.filter(cb => cb !== callback);
+    };
+  }
+
+  private _firePlaybackStarted(): void {
+    this.playbackStartedListeners.forEach(cb => {
+      try { cb(); } catch (e) { console.warn('[Audio] playbackStarted listener err:', e); }
+    });
+  }
+
  /** play the next audio from the queue */
  private async _playNext(): Promise<void> {
    if (this.audioQueue.length === 0) {

@@ -738,10 +819,11 @@ class AudioService {
      return;
    }

-    // on first playback start: duck other apps
+    // on first playback start: duck other apps + notify listeners
    if (!this.isPlaying) {
      this._cancelDeferredFocusRelease();
      AudioFocus?.requestDuck().catch(() => {});
+      this._firePlaybackStarted();
    }
    this.isPlaying = true;
@@ -0,0 +1,71 @@
/**
 * Plays a short "ready" sound (airplane ding-dong) when the microphone is
 * really open after wake-word detection. The file lives in
 * android/app/src/main/res/raw/wake_ready_sound.mp3 and is played through
 * Android's resource system via react-native-sound.
 *
 * Toggle: AsyncStorage key 'aria_wake_ready_sound_enabled' (default true).
 */

import Sound from 'react-native-sound';
import AsyncStorage from '@react-native-async-storage/async-storage';

export const WAKE_READY_SOUND_STORAGE_KEY = 'aria_wake_ready_sound_enabled';

Sound.setCategory('Playback', false);

let cachedSound: Sound | null = null;
let cachedFailed = false;

function getSound(): Promise<Sound | null> {
  if (cachedFailed) return Promise.resolve(null);
  if (cachedSound) return Promise.resolve(cachedSound);
  return new Promise(resolve => {
    const s = new Sound('wake_ready_sound', Sound.MAIN_BUNDLE, (err) => {
      if (err) {
        console.warn('[WakeReadySound] Konnte nicht geladen werden:', err);
        cachedFailed = true;
        resolve(null);
        return;
      }
      cachedSound = s;
      resolve(s);
    });
  });
}

/** True if the user has enabled the "ready" sound. Default: true. */
export async function isWakeReadySoundEnabled(): Promise<boolean> {
  try {
    const raw = await AsyncStorage.getItem(WAKE_READY_SOUND_STORAGE_KEY);
    if (raw === null) return true; // default on
    return raw === 'true';
  } catch {
    return true;
  }
}

export async function setWakeReadySoundEnabled(enabled: boolean): Promise<void> {
  try {
    await AsyncStorage.setItem(WAKE_READY_SOUND_STORAGE_KEY, String(enabled));
  } catch {}
}

/** Plays the ready sound once, non-blocking. If the user disabled it in
 * the settings or the file cannot be loaded, nothing happens. */
export async function playWakeReadySound(): Promise<void> {
  if (!(await isWakeReadySoundEnabled())) return;
  const s = await getSound();
  if (!s) return;
  try {
    s.stop(() => {
      s.setCurrentTime(0);
      s.play((success) => {
        if (!success) console.warn('[WakeReadySound] Wiedergabe fehlgeschlagen');
      });
    });
  } catch (e) {
    console.warn('[WakeReadySound] play() Exception:', e);
  }
}
@@ -72,6 +72,11 @@ class WakeWordService {
  private state: WakeWordState = 'off';
  private wakeCallbacks: WakeWordCallback[] = [];
  private stateCallbacks: StateCallback[] = [];
+  /** Barge-in callbacks: fire when the wake word is detected WHILE ARIA is
+   * speaking. ChatScreen reacts by stopping TTS + starting a new recording. */
+  private bargeCallbacks: WakeWordCallback[] = [];
+  /** True while wake-word listening runs in parallel with TTS. */
+  private bargeListening: boolean = false;

  private keyword: WakeKeyword = DEFAULT_KEYWORD;
  private nativeReady: boolean = false;

@@ -191,16 +196,28 @@ class WakeWordService {
    if (this.nativeReady && OpenWakeWord) {
      try { await OpenWakeWord.stop(); } catch {}
    }
+    this.bargeListening = false;
    this.setState('off');
  }

  /** Wake word triggered: pause the native module, start the conversation. */
  private async onWakeDetected(): Promise<void> {
-    console.log('[WakeWord] Wake-Word "%s" erkannt!', this.keyword);
    ToastAndroid.show(`Wake-Word "${KEYWORD_LABELS[this.keyword]}" erkannt — sprich jetzt`, ToastAndroid.SHORT);
+    console.log('[WakeWord] Wake-Word "%s" erkannt! (state=%s, barge=%s)',
+      this.keyword, this.state, this.bargeListening);
    if (this.nativeReady && OpenWakeWord) {
      try { await OpenWakeWord.stop(); } catch {}
    }
+    this.bargeListening = false;
+    // If we are already in 'conversing' and the trigger came during ARIA's
+    // TTS (barge-in via wake word), fire a separate callback so ChatScreen
+    // can abort the TTS + start a new recording. Otherwise proceed normally.
+    if (this.state === 'conversing') {
+      this.bargeCallbacks.forEach(cb => {
+        try { cb(); } catch (e) { console.warn('[WakeWord] barge cb err:', e); }
+      });
+      // no new setState; we stay in 'conversing'.
+      return;
+    }
    this.setState('conversing');
    setTimeout(() => {
      if (this.state === 'conversing') {

@@ -209,6 +226,35 @@ class WakeWordService {
    }, 200);
  }

+  /** Listen for the wake word IN PARALLEL with TTS playback: the user can
+   * say "Computer" while ARIA is still talking; the AcousticEchoCanceler in
+   * the native module keeps ARIA's own voice from triggering it.
+   * Precondition: the AudioRecorder must be free (recording off). If the
+   * AudioRecorder is running it takes precedence and wake-word listening
+   * is not possible. */
+  async startBargeListening(): Promise<void> {
+    if (!this.nativeReady || !OpenWakeWord) return;
+    if (this.state !== 'conversing') return;
+    if (this.bargeListening) return;
+    try {
+      await OpenWakeWord.start();
+      this.bargeListening = true;
+      console.log('[WakeWord] Barge-Listening aktiv (parallel zu TTS)');
+    } catch (err) {
+      console.warn('[WakeWord] Barge-Listening start fehlgeschlagen:', err);
+    }
+  }
+
+  /** Stop barge listening again, e.g. when the AudioRecorder needs the mic
+   * for the next recording. */
+  async stopBargeListening(): Promise<void> {
+    if (!this.bargeListening) return;
+    if (this.nativeReady && OpenWakeWord) {
+      try { await OpenWakeWord.stop(); } catch {}
+    }
+    this.bargeListening = false;
+    console.log('[WakeWord] Barge-Listening aus');
+  }
+
  /** End the conversation: the user said nothing within the window.
   * With a wake word: back to 'armed' (listener on again).
   * Without: back to 'off'.

@@ -268,6 +314,19 @@ class WakeWordService {
    };
  }

+  /** Subscribe to barge-in events: wake word detected while ARIA is still
+   * speaking. ChatScreen should then abort TTS + start a new recording. */
+  onBargeIn(callback: WakeWordCallback): () => void {
+    this.bargeCallbacks.push(callback);
+    return () => {
+      this.bargeCallbacks = this.bargeCallbacks.filter(cb => cb !== callback);
+    };
+  }
+
+  isBargeListening(): boolean {
+    return this.bargeListening;
+  }
+
  onStateChange(callback: StateCallback): () => void {
    this.stateCallbacks.push(callback);
    return () => {
+53
-13
@@ -1235,6 +1235,7 @@ class ARIABridge:
|
||||
except (TypeError, ValueError):
|
||||
self._next_speed_override = None
|
||||
if text:
|
||||
interrupted = bool(payload.get("interrupted", False))
|
||||
# Wenn Files gerade gepuffert sind (Bild + Text gleichzeitig
|
||||
# gesendet), mergen wir sie zu einer einzigen Anfrage statt
|
||||
# zwei separater send_to_core-Calls.
|
||||
@@ -1242,8 +1243,16 @@ class ARIABridge:
                        if merged:
                            logger.info("[rvs] App-Chat (mit Anhaengen): '%s'", text[:80])
                        else:
                            logger.info("[rvs] App-Chat: '%s'", text[:80])
                        await self.send_to_core(text, source="app")
                        core_text = (
                            f"[Hinweis: Stefan hat dich gerade unterbrochen waehrend du noch "
                            f"gesprochen oder gearbeitet hast. Folgendes ist eine Korrektur, "
                            f"Ergaenzung oder ein Themenwechsel zu deiner letzten Antwort.] "
                            f"{text}"
                            if interrupted else text
                        )
                        logger.info("[rvs] App-Chat%s: '%s'",
                                    " [BARGE-IN]" if interrupted else "", text[:80])
                        await self.send_to_core(core_text, source="app" + (" [barge-in]" if interrupted else ""))
                        return
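
For reference, an incoming barge-in chat message as this handler consumes it might look like the sketch below. Only `payload.text` and `payload.interrupted` are confirmed by the handler code above; the envelope fields and everything else are assumptions:

```typescript
// Assumed shape of a barge-in chat message from the app.
// Only payload.text and payload.interrupted are read by the handler above.
const bargeInChat = {
  type: 'chat',
  payload: {
    text: 'nein, nimm bitte den zweiten Vorschlag',
    interrupted: true, // user sent this while ARIA was still answering
  },
  timestamp: Date.now(),
};
```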

        if msg_type == "cancel_request":
@@ -1500,9 +1509,13 @@ class ARIABridge:
                self._next_speed_override = speed if 0.1 <= speed <= 5.0 else None
            except (TypeError, ValueError):
                self._next_speed_override = None
            logger.info("[rvs] Audio empfangen: %s, %dms, %dKB",
                        mime_type, duration_ms, len(audio_b64) // 1365)
            asyncio.create_task(self._process_app_audio(audio_b64, mime_type))
            interrupted = bool(payload.get("interrupted", False))
            audio_request_id = payload.get("audioRequestId", "") or ""
            logger.info("[rvs] Audio empfangen: %s, %dms, %dKB%s%s",
                        mime_type, duration_ms, len(audio_b64) // 1365,
                        " [BARGE-IN]" if interrupted else "",
                        f" reqId={audio_request_id[:16]}" if audio_request_id else "")
            asyncio.create_task(self._process_app_audio(audio_b64, mime_type, interrupted, audio_request_id))

        elif msg_type == "stt_response":
            # reply from the whisper-bridge to our stt_request
@@ -1558,8 +1571,19 @@ class ARIABridge:
    _STT_REMOTE_TIMEOUT_READY_S = 45.0
    _STT_REMOTE_TIMEOUT_LOADING_S = 300.0

    async def _process_app_audio(self, audio_b64: str, mime_type: str) -> None:
        """App audio → STT → aria-core. Primarily via whisper-bridge (RVS), local fallback."""
    async def _process_app_audio(self, audio_b64: str, mime_type: str,
                                 interrupted: bool = False,
                                 audio_request_id: str = "") -> None:
        """App audio → STT → aria-core. Primarily via whisper-bridge (RVS), local fallback.

        interrupted=True if the user recorded while ARIA was still speaking or
        thinking (barge-in). Passed on to aria-core as a hint prefix so ARIA can
        place the correction/interruption in context instead of treating it as a
        plain follow-up question.

        audio_request_id: correlation ID the app sends along in the audio event.
        It is returned unchanged with the STT result so the app can replace
        EXACTLY the right 'processing' bubble (even with several recordings in
        flight in parallel)."""
        # Try remote first
        text = await self._stt_remote(audio_b64, mime_type)
        if text is None:
@@ -1571,19 +1595,35 @@ class ARIABridge:

        if text.strip():
            logger.info("[rvs] STT Ergebnis: '%s'", text[:80])
            # Barge-in hint: gives ARIA the context that she was interrupted and
            # that this may be a correction/change to the previous instruction.
            core_text = (
                f"[Hinweis: Stefan hat dich gerade unterbrochen waehrend du noch "
                f"gesprochen oder gearbeitet hast. Folgendes ist eine Korrektur, "
                f"Ergaenzung oder ein Themenwechsel zu deiner letzten Antwort.] "
                f"{text}"
                if interrupted else text
            )
            # Send to aria-core FIRST (the most important step)
            await self.send_to_core(text, source="app-voice")
            await self.send_to_core(core_text, source="app-voice" + (" [barge-in]" if interrupted else ""))
            # Send the STT text to RVS (for display in the app + Diagnostic);
            # sender="stt" so the bridge ignores it (no loop)
            try:
                await self._send_to_rvs({
                stt_payload = {
                    "text": text,
                    "sender": "stt",
                }
                if audio_request_id:
                    stt_payload["audioRequestId"] = audio_request_id
                ok = await self._send_to_rvs({
                    "type": "chat",
                    "payload": {
                        "text": text,
                        "sender": "stt",
                    },
                    "payload": stt_payload,
                    "timestamp": int(asyncio.get_event_loop().time() * 1000),
                })
                if ok:
                    logger.info("[rvs] STT-Text an RVS broadcastet (sender=stt)")
                else:
                    logger.warning("[rvs] STT-Text NICHT broadcastet — _send_to_rvs lieferte False")
            except Exception as e:
                logger.warning("[rvs] STT-Text konnte nicht an RVS gesendet werden: %s", e)
        else:

@@ -87,16 +87,37 @@
- [x] App text rendering: messages selectable + autolink for URLs/e-mails/phone numbers (browser/mail/dialer)
- [x] TTS playback speed configurable per device (Settings → 0.5-2.0x in 0.1 steps, default 1.0)
- [x] Diagnostic: voice preview modal (play icon before the delete X, text field with a default, play the WAV in the browser)
- [x] **Wake word fully on-device via openWakeWord (ONNX Runtime)**: Porcupine removed, no more API key or license fees. Bundled keywords: hey_jarvis, computer, alexa, hey_mycroft, hey_rhasspy
- [x] Wake-word embedding rank-4 fix (pipeline bug that prevented triggering) + read the frame count from the model metadata
- [x] APK ABI split to arm64-v8a: from ~136 MB down to ~35 MB, much smaller auto-update downloads to the phone
- [x] PCM underrun protection: silence fill during render pauses prevents Spotify auto-resume after 10s of stall
- [x] Conversation focus lifecycle: AudioFocus is tied to the wake-word state 'conversing' instead of to individual streams; Spotify stays paused throughout, even between several answers
- [x] PhoneStateListener: TTS pauses on an incoming call (READ_PHONE_STATE permission)
- [x] Voice override keeps the voice across all TTS calls of one answer (before: back to the default after the first TTS call)
- [x] Voice-message bubble made defensive: the STT result adds a new bubble if the placeholder is missing (race protection)
- [x] Image + text as ONE request: the bridge buffers files for 800ms and merges them with the following chat text into a single send_to_core (instead of two separate ARIA answers)
- [x] Diagnostic chat: bubble-style formatting, multi-line input field (textarea, Enter sends, Shift+Enter inserts a newline)
- [x] Diagnostic→App: persistent RVS connection instead of a fresh one per send (race problems with zombie WS solved)
- [x] Adaptive VAD threshold: baseline from the first 500ms of mic level, silence = baseline+6dB / speech = baseline+12dB. Works in loud and quiet environments alike (see the sketch after this list)
- [x] Max recording duration configurable in Settings (1-30 min, default 5 min): longer dictations possible
- [x] Barge-in: the user can interrupt ARIA during an answer/tool use, the old activity is cancelled, and the bridge gives aria-core a context hint that this is a correction
- [x] Push-to-talk removed, tap-to-talk only (this eliminated touch race problems)
- [x] Settings sub-screens: 8 categories (Verbindung, Allgemein, Spracheingabe, Wake-Word, Sprachausgabe, Speicher, Protokoll, Ueber) instead of one long list
- [x] Text selection in bubbles works again (nested Text+onPress removed; dataDetectorType="all" makes links clickable automatically)
- [x] **Placeholder race with parallel voice messages solved**: each recording gets a unique audioRequestId, the bridge returns it with the STT result, and the app now matches exactly the right bubble instead of substring-matching "Spracheingabe wird verarbeitet"
- [x] The mic-open toast "🎤 sprich jetzt" only appears once audioService.startRecording has actually succeeded (instead of ~400ms earlier at wake-word detect)
- [x] **Ready sound (airplane ding-dong) when the mic opens after the wake word**: acoustic confirmation instead of just a toast. Toggle in Settings → Wake-Word, on by default
- [x] **Wake word in parallel with TTS** via AcousticEchoCanceler: the user says "Computer" while ARIA is speaking → TTS goes silent immediately and a new recording starts. Native AEC prevents ARIA's own voice from triggering the wake word. The audio source is VOICE_COMMUNICATION, with AEC/NS/AGC effects additionally enabled
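
The adaptive VAD rule from the list above fits in a few lines. The 500ms calibration window and the +6dB/+12dB offsets come straight from the changelog entry; the metering callback, its units (dBFS), and the state handling are assumptions:

```typescript
// Adaptive VAD threshold: calibrate a baseline from the first 500ms of mic
// level, then classify frames relative to baseline+6dB / baseline+12dB.
const BASELINE_WINDOW_MS = 500;
const SILENCE_OFFSET_DB = 6;   // at or below: counts toward the silence timeout
const SPEECH_OFFSET_DB = 12;   // at or above: definitely speech

let baselineDb: number | null = null;
const calibration: number[] = [];

function classifyFrame(levelDb: number, elapsedMs: number): 'calibrating' | 'speech' | 'silence' | 'uncertain' {
  if (elapsedMs < BASELINE_WINDOW_MS) {
    calibration.push(levelDb); // collect the ambient level first
    return 'calibrating';
  }
  if (baselineDb === null) {
    // Baseline = average mic level over the calibration window.
    baselineDb = calibration.reduce((a, b) => a + b, 0) / Math.max(calibration.length, 1);
  }
  if (levelDb >= baselineDb + SPEECH_OFFSET_DB) return 'speech';
  if (levelDb <= baselineDb + SILENCE_OFFSET_DB) return 'silence';
  return 'uncertain'; // between the thresholds: neither speech nor counted silence
}
```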

## Open

### Bugs
- [ ] App: wake word "jarvis" does not trigger reliably (Porcupine debugging via ADB logcat still pending)
- [ ] App: crashes while listening, possibly on background noise (Porcupine + mic race; the errorCallback now contains it, long-run test pending)

### App Features
- [ ] Load chat history more reliably (AsyncStorage race condition)
- [ ] Background audio service (TTS even while the app is minimized)
- [ ] Custom wake-word upload via Diagnostic (own .onnx files without an app rebuild)
- [ ] Pause+resume on phone calls: currently the TTS stream is hard-stopped when the phone rings; nicer would be pause + resume after hanging up

### Architecture
- [ ] Images: use Claude Vision directly (currently only the file path is passed to ARIA)