Compare commits

9 Commits

| Author | SHA1 | Date |
|---|---|---|
| | a9a87f12df | |
| | 2a56ac0290 | |
| | edc65ce645 | |
| | d7efaf93b3 | |
| | 31ff20c846 | |
| | 406f4cb3cc | |
| | fa0667088a | |
| | f55329706e | |
| | 6c7fd1d0e3 | |
```diff
@@ -378,10 +378,12 @@ API-Endpoint fuer andere Services: `GET http://localhost:3001/api/session`
 ### Features
 
 - Text-Chat mit ARIA
-- **Sprachaufnahme**: Push-to-Talk (halten) oder Tap-to-Talk (tippen, Auto-Stop bei Stille)
+- **Sprachaufnahme**: Tap-to-Talk (tippen startet, tippen stoppt, Auto-Stop bei Stille via VAD)
 - **Gespraechsmodus** (Ohr-Button): Nach jeder ARIA-Antwort startet automatisch die Aufnahme — wie ein natuerliches Gespraech hin und her
 - **Wake-Word** (on-device, openWakeWord ONNX): "Hey Jarvis", "Alexa", "Hey Mycroft", "Hey Rhasspy" — Mikrofon hoert passiv mit, Konversation startet beim Schluesselwort. Komplett on-device via ONNX Runtime, kein API-Key, kein Cloud-Roundtrip, Audio verlaesst das Geraet nicht.
-- **VAD (Voice Activity Detection)**: Konfigurierbare Stille-Toleranz (1.0–8.0s, Default 2.8s) bevor Auto-Stop greift. Max-Aufnahme 120s.
+- **VAD (Voice Activity Detection)**: Adaptive Schwelle (Baseline aus ersten 500ms Mic-Pegel + 6dB Offset). Konfigurierbare Stille-Toleranz (1.0–8.0s, Default 2.8s) bevor Auto-Stop greift. Max-Aufnahme einstellbar (1–30 min, Default 5 min)
+- **Barge-In**: Wenn du waehrend ARIAs Antwort eine neue Sprach-/Text-Nachricht reinschickst, wird sie unterbrochen + bekommt den Hint "das ist eine Korrektur"
+- **Anruf-Pause**: TTS verstummt automatisch wenn das Telefon klingelt (READ_PHONE_STATE Permission)
 - **Speech Gate**: Aufnahme wird verworfen wenn keine Sprache erkannt
 - **STT (Speech-to-Text)**: 16kHz mono → Bridge → Gamebox-Whisper (CUDA) → Text im Chat. Fast in Echtzeit.
 - **"ARIA denkt..." Indicator**: Zeigt live den Status vom Core (Denken, Tool, Schreiben) + Abbrechen-Button
```
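The adaptive VAD threshold in the updated bullet (baseline from the first 500 ms of mic level, plus a 6 dB offset) can be sketched as a small state machine. This is a minimal illustration, not the app's implementation: the diff does not say which statistic the baseline uses, so a mean over the window is assumed here, and all names are hypothetical.

```typescript
// Hypothetical sketch of the adaptive VAD described in the README diff:
// baseline = mean dBFS over the first 500 ms, threshold = baseline + 6 dB.

interface VadState {
  baselineFrames: number[]; // dBFS readings collected during the first 500 ms
  threshold: number | null; // null until the baseline window is full
  silenceMs: number;        // consecutive milliseconds below threshold
}

const BASELINE_WINDOW_MS = 500;
const OFFSET_DB = 6;

function feedFrame(
  state: VadState,
  levelDb: number,
  frameMs: number,
  silenceToleranceMs: number,
): { state: VadState; autoStop: boolean } {
  const s = { ...state, baselineFrames: [...state.baselineFrames] };
  if (s.threshold === null) {
    // Still calibrating: collect ambient level, never auto-stop yet.
    s.baselineFrames.push(levelDb);
    if (s.baselineFrames.length * frameMs >= BASELINE_WINDOW_MS) {
      const mean =
        s.baselineFrames.reduce((a, b) => a + b, 0) / s.baselineFrames.length;
      s.threshold = mean + OFFSET_DB; // quieter than this counts as silence
    }
    return { state: s, autoStop: false };
  }
  // Speech resets the silence timer; silence accumulates toward auto-stop.
  s.silenceMs = levelDb < s.threshold ? s.silenceMs + frameMs : 0;
  return { state: s, autoStop: s.silenceMs >= silenceToleranceMs };
}
```

With a -60 dBFS room, the threshold lands at -54 dBFS, so ordinary speech (around -30 dBFS) keeps the recording alive while the configured silence tolerance (default 2.8 s) drives the auto-stop.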
```diff
@@ -840,7 +842,11 @@ docker exec aria-core ssh aria-wohnung hostname
 - [x] Whisper STT auf die Gamebox ausgelagert (CUDA float16, fast Echtzeit)
 - [x] **F5-TTS ersetzt XTTS** — bessere Voice-Cloning-Qualitaet, Whisper-auto-transkribierter Referenz-Text
 - [x] Audio-Pause statt Ducking (TRANSIENT statt MAY_DUCK) + release-Timing fix
-- [x] VAD-Stille-Toleranz und Max-Aufnahme einstellbar (1-8s, 120s)
+- [x] VAD-Stille-Toleranz einstellbar (1-8s) + adaptive Mikro-Baseline + Max-Aufnahme einstellbar (1-30 min)
+- [x] Barge-In: User kann ARIA waehrend Antwort unterbrechen, aria-core bekommt Kontext-Hint
+- [x] Anruf-Pause: TTS verstummt bei eingehendem Anruf (PhoneStateListener)
+- [x] Settings-Sub-Screens: 8 Kategorien statt langer Liste
+- [x] APK ABI-Split arm64-v8a: 35 MB statt 136 MB
 - [x] Disk-Voll Banner in Diagnostic mit copy-baren Cleanup-Befehlen
 - [x] Wake-Word on-device via openWakeWord (ONNX Runtime, kein API-Key) + State-Icon
 
```
```diff
@@ -79,8 +79,8 @@ android {
         applicationId "com.ariacockpit"
         minSdkVersion rootProject.ext.minSdkVersion
         targetSdkVersion rootProject.ext.targetSdkVersion
-        versionCode 702
-        versionName "0.0.7.2"
+        versionCode 705
+        versionName "0.0.7.5"
         // Fallback fuer Libraries mit Product Flavors
         missingDimensionStrategy 'react-native-camera', 'general'
     }
```
```diff
@@ -1,6 +1,6 @@
 {
   "name": "aria-cockpit",
-  "version": "0.0.7.2",
+  "version": "0.0.7.5",
   "private": true,
   "scripts": {
     "android": "react-native run-android",
```
```diff
@@ -1,68 +1,14 @@
 /**
- * MessageText — rendert Chat-Text mit Auto-Linkifizierung:
- * - http(s)://... → tippbar, oeffnet im Browser
- * - mailto: oder plain E-Mail → tippbar, oeffnet Mail-App
- * - Telefonnummern → tippbar, oeffnet Android-Dialer
+ * MessageText — selektierbarer Chat-Text mit Android-Auto-Linkifizierung.
  *
- * Text ist durchgaengig markierbar/kopierbar (selectable).
+ * Wir nutzen Androids dataDetectorType="all" (System macht Phone/URL/Email
+ * automatisch klickbar) und ein einzelnes <Text selectable> ohne nested
+ * <Text> mit eigenem onPress. Nested Text mit onPress fingen die Long-Press-
+ * Geste ab, damit war Markieren+Kopieren defekt.
  */
 
 import React from 'react';
-import { Text, Linking, TextStyle, StyleProp } from 'react-native';
+import { Text, TextStyle, StyleProp } from 'react-native';
 
-// Regex kombiniert URL | Email | Telefonnummer.
-// Gruppenreihenfolge ist wichtig fuer die Erkennung unten.
-//
-// URL: http://... oder https://... bis zum ersten Whitespace / Anfuehrungszeichen.
-// Email: simpler Standard-Match (kein RFC-kompatibel aber gut genug).
-// Telefon: internationale Form (+49..., 0049..., 0176...), darf Leerzeichen
-// / Bindestriche / Schraegstriche / Klammern enthalten, mindestens 7
-// Ziffern insgesamt. Vermeidet banale Zahlen (Uhrzeiten, Datum).
-const LINK_REGEX = new RegExp(
-  '(https?:\\/\\/[^\\s<>"]+)' + // 1: URL
-  '|([A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,})' + // 2: Email
-  '|((?:\\+|00)\\d[\\d\\s()\\-\\/]{6,}\\d|0\\d{2,4}[\\s\\/\\-]?[\\d\\s\\-\\/]{5,}\\d)', // 3: Telefon
-  'g',
-);
-
-const LINK_STYLE = { color: '#0096FF', textDecorationLine: 'underline' } as TextStyle;
-
-interface Segment {
-  text: string;
-  kind: 'text' | 'url' | 'email' | 'phone';
-}
-
-function tokenize(raw: string): Segment[] {
-  const out: Segment[] = [];
-  let lastEnd = 0;
-  LINK_REGEX.lastIndex = 0;
-  let m: RegExpExecArray | null;
-  while ((m = LINK_REGEX.exec(raw)) !== null) {
-    if (m.index > lastEnd) {
-      out.push({ text: raw.slice(lastEnd, m.index), kind: 'text' });
-    }
-    if (m[1]) out.push({ text: m[1], kind: 'url' });
-    else if (m[2]) out.push({ text: m[2], kind: 'email' });
-    else if (m[3]) out.push({ text: m[3], kind: 'phone' });
-    lastEnd = LINK_REGEX.lastIndex;
-  }
-  if (lastEnd < raw.length) out.push({ text: raw.slice(lastEnd), kind: 'text' });
-  return out;
-}
-
-function onPress(seg: Segment) {
-  try {
-    if (seg.kind === 'url') {
-      Linking.openURL(seg.text);
-    } else if (seg.kind === 'email') {
-      Linking.openURL(`mailto:${seg.text}`);
-    } else if (seg.kind === 'phone') {
-      // Android-Dialer erwartet tel:-Schema ohne Leerzeichen/Bindestriche
-      const clean = seg.text.replace(/[\s\-\/()]/g, '');
-      Linking.openURL(`tel:${clean}`);
-    }
-  } catch {}
-}
-
 interface Props {
   text: string;
@@ -70,34 +16,9 @@ interface Props {
 }
 
 const MessageText: React.FC<Props> = ({ text, style }) => {
-  const segments = React.useMemo(() => tokenize(text), [text]);
   return (
-    <Text
-      style={style}
-      selectable
-      // dataDetectorType ist Android-only und macht Phone/URL/Email zusaetzlich
-      // ueber System-Detection klickbar — als Fallback falls unsere Regex-
-      // Tokens nicht passen.
-      dataDetectorType="all"
-    >
-      {segments.map((seg, i) => {
-        if (seg.kind === 'text') {
-          return <Text key={i} selectable>{seg.text}</Text>;
-        }
-        return (
-          <Text
-            key={i}
-            selectable
-            style={LINK_STYLE}
-            onPress={() => onPress(seg)}
-            // Long-Press soll an den Parent durch fuer Selection
-            onLongPress={undefined}
-            suppressHighlighting={false}
-          >
-            {seg.text}
-          </Text>
-        );
-      })}
+    <Text style={style} selectable dataDetectorType="all">
+      {text}
     </Text>
   );
 };
```
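The removed `LINK_REGEX`/`tokenize` pair runs fine outside React Native, which makes it easy to see what link coverage is now delegated to Android's `dataDetectorType="all"`. Below is a standalone reproduction of the deleted code (logic copied from the diff, stripped of the JSX parts):

```typescript
// Standalone reproduction of the removed tokenizer from MessageText.

const LINK_REGEX = new RegExp(
  '(https?:\\/\\/[^\\s<>"]+)' + // 1: URL up to whitespace/quote
  '|([A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,})' + // 2: email
  '|((?:\\+|00)\\d[\\d\\s()\\-\\/]{6,}\\d|0\\d{2,4}[\\s\\/\\-]?[\\d\\s\\-\\/]{5,}\\d)', // 3: phone
  'g',
);

interface Segment {
  text: string;
  kind: 'text' | 'url' | 'email' | 'phone';
}

function tokenize(raw: string): Segment[] {
  const out: Segment[] = [];
  let lastEnd = 0;
  LINK_REGEX.lastIndex = 0; // global regexes keep match state between calls
  let m: RegExpExecArray | null;
  while ((m = LINK_REGEX.exec(raw)) !== null) {
    if (m.index > lastEnd) {
      out.push({ text: raw.slice(lastEnd, m.index), kind: 'text' });
    }
    if (m[1]) out.push({ text: m[1], kind: 'url' });
    else if (m[2]) out.push({ text: m[2], kind: 'email' });
    else if (m[3]) out.push({ text: m[3], kind: 'phone' });
    lastEnd = LINK_REGEX.lastIndex;
  }
  if (lastEnd < raw.length) out.push({ text: raw.slice(lastEnd), kind: 'text' });
  return out;
}
```

For example, `tokenize('Siehe https://example.com oder +49 170 1234567')` splits into four segments (text, url, text, phone). The trade-off in the diff: Android's system detection replaces all of this, at the cost of the custom phone-number heuristics.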
```diff
@@ -44,7 +44,6 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
   const [meterDb, setMeterDb] = useState(-160);
   const pulseAnim = useRef(new Animated.Value(1)).current;
   const durationTimer = useRef<ReturnType<typeof setInterval> | null>(null);
-  const isLongPress = useRef(false);
 
   // Puls-Animation starten/stoppen
   useEffect(() => {
@@ -117,31 +116,10 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
     if (disabled || isRecording) return;
     const started = await audioService.startRecording(true); // autoStop = true
     if (started) {
-      isLongPress.current = false;
       setIsRecording(true);
     }
   }, [disabled, isRecording]);
 
-  // Push-to-Talk: Lang druecken
-  const handlePressIn = async () => {
-    if (disabled || isRecording) return;
-    isLongPress.current = true;
-    const started = await audioService.startRecording(false); // kein autoStop
-    if (started) {
-      setIsRecording(true);
-    }
-  };
-
-  const handlePressOut = async () => {
-    if (!isRecording || !isLongPress.current) return;
-    isLongPress.current = false;
-    setIsRecording(false);
-    const result = await audioService.stopRecording();
-    if (result && result.durationMs > 300) {
-      onRecordingComplete(result);
-    }
-  };
-
   // Tap-to-Talk: Einmal tippen startet mit Auto-Stop.
   // Guard gegen Doppel-Tap während asyncer Start/Stop.
   const tapBusy = useRef(false);
@@ -162,7 +140,6 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
     // Aufnahme mit Auto-Stop starten
     const started = await audioService.startRecording(true);
     if (started) {
-      isLongPress.current = false;
       setIsRecording(true);
     }
   }
@@ -201,10 +178,6 @@ const VoiceButton: React.FC<VoiceButtonProps> = ({
         isRecording && styles.buttonOuterRecording,
         { transform: [{ scale: pulseAnim }] },
       ]}
-      onStartShouldSetResponder={() => true}
-      onResponderGrant={handlePressIn}
-      onResponderRelease={handlePressOut}
-      onResponderTerminate={handlePressOut}
    >
       <TouchableOpacity
         activeOpacity={0.8}
```
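The surviving `tapBusy` guard is the interesting part of this diff: `startRecording` is async, so a fast double-tap could otherwise start two recordings. A minimal sketch of that pattern, detached from React (the factory and its internals are assumed names, not the app's code):

```typescript
// Sketch of a ref-style busy flag that makes an async tap handler idempotent
// against rapid double-taps, like VoiceButton's tapBusy guard.

type StartFn = () => Promise<boolean>;

function makeTapHandler(start: StartFn) {
  const tapBusy = { current: false }; // stands in for useRef(false)
  let recording = false;
  let starts = 0;

  return {
    async onTap() {
      // A second tap that arrives while the first start() is still pending
      // sees tapBusy === true and is dropped.
      if (tapBusy.current || recording) return;
      tapBusy.current = true;
      try {
        if (await start()) {
          recording = true;
          starts += 1;
        }
      } finally {
        tapBusy.current = false;
      }
    },
    get starts() { return starts; },
  };
}
```

The flag is set synchronously before the first `await`, which is what closes the race window: both taps run their synchronous prefix before either start resolves.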
```diff
@@ -292,14 +292,27 @@ const ChatScreen: React.FC = () => {
       // den gleichen Text bekommen (Bug: zweite Antwort ueberschreibt erste).
       if (sender === 'stt') {
         const sttText = (message.payload.text as string) || '';
+        // Debug-Toast: visualisiert dass das STT-Event in der App angekommen ist.
+        // Wenn dieser Toast NICHT erscheint, kommt das Event nicht durch (Bridge
+        // oder RVS broadcastet es nicht), und der Bug liegt server-side.
+        ToastAndroid.show(`STT empfangen: "${sttText.slice(0, 40)}"`, ToastAndroid.SHORT);
         if (sttText) {
           setMessages(prev => {
             const idx = prev.findIndex(m =>
               m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')
             );
+            const placeholderCount = prev.filter(m =>
+              m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')
+            ).length;
             console.log('[Chat] STT-Result: idx=%d text="%s" placeholders=%d',
-              idx, sttText.slice(0, 60),
-              prev.filter(m => m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')).length);
+              idx, sttText.slice(0, 60), placeholderCount);
+            // Zweiter Toast: zeigt ob die Placeholder gefunden wurde.
+            ToastAndroid.show(
+              idx < 0
+                ? `STT: keine Placeholder (${placeholderCount}) \u2192 neue Bubble`
+                : `STT: Bubble #${idx} ersetzt`,
+              ToastAndroid.SHORT,
+            );
             const newText = `\uD83C\uDFA4 ${sttText}`;
             if (idx < 0) {
               // Defensiv: wenn keine Placeholder im State (z.B. weil sie nie
```
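The state update around the STT result is easiest to reason about as a pure function: replace the first "Spracheingabe wird verarbeitet" placeholder bubble with the transcribed text, or append a fresh bubble if no placeholder survived. A sketch under that reading (function and type names are illustrative, not from the app):

```typescript
// Pure-function sketch of the placeholder replacement in the STT handler.

interface Msg { sender: string; text: string; }

const PLACEHOLDER = 'Spracheingabe wird verarbeitet';

function applySttResult(prev: Msg[], sttText: string): Msg[] {
  const idx = prev.findIndex(
    m => m.sender === 'user' && m.text.includes(PLACEHOLDER),
  );
  const newText = `\uD83C\uDFA4 ${sttText}`; // 🎤 prefix, as in the diff
  if (idx < 0) {
    // Defensive path: no placeholder in state → append as a new bubble
    return [...prev, { sender: 'user', text: newText }];
  }
  // Replace only the first placeholder, leaving later ones for later results
  // (this is the fix for "second answer overwrites the first").
  return prev.map((m, i) => (i === idx ? { ...m, text: newText } : m));
}
```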
```diff
@@ -491,6 +504,8 @@ const ChatScreen: React.FC = () => {
     const result = await audioService.stopRecording();
     if (result && result.durationMs > 500) {
       // User hat im Fenster gesprochen → Sprachnachricht senden
+      // Barge-In: laufende ARIA-Aktivitaet abbrechen wenn welche da ist.
+      const wasInterrupted = interruptAriaIfBusy();
       const location = await getCurrentLocation();
       const userMsg: ChatMessage = {
         id: nextId(),
@@ -506,6 +521,7 @@ const ChatScreen: React.FC = () => {
         mimeType: result.mimeType,
         voice: localXttsVoiceRef.current,
         speed: ttsSpeedRef.current,
+        interrupted: wasInterrupted,
         ...(location && { location }),
       });
       // resume() wird durch onPlaybackFinished nach ARIAs Antwort getriggert.
@@ -608,6 +624,8 @@ const ChatScreen: React.FC = () => {
 
     setInputText('');
 
+    // Barge-In: laufende ARIA-Aktivitaet abbrechen wenn welche da ist.
+    const wasInterrupted = interruptAriaIfBusy();
     const location = await getCurrentLocation();
 
     const userMsg: ChatMessage = {
@@ -618,16 +636,17 @@ const ChatScreen: React.FC = () => {
     };
     setMessages(prev => capMessages([...prev, userMsg]));
 
-    console.log('[Chat] sende mit voice=%s speed=%s',
-      localXttsVoiceRef.current || '(default)', ttsSpeedRef.current);
+    console.log('[Chat] sende mit voice=%s speed=%s interrupted=%s',
+      localXttsVoiceRef.current || '(default)', ttsSpeedRef.current, wasInterrupted);
     // An RVS senden — mit geraetelokaler Voice (Bridge nutzt sie fuer die Antwort)
     rvs.send('chat', {
       text,
       voice: localXttsVoiceRef.current,
       speed: ttsSpeedRef.current,
+      interrupted: wasInterrupted,
       ...(location && { location }),
     });
-  }, [inputText, getCurrentLocation, pendingAttachments, sendPendingAttachments]);
+  }, [inputText, getCurrentLocation, pendingAttachments, sendPendingAttachments, interruptAriaIfBusy]);
 
   // Anfrage abbrechen — sofort lokalen Indicator weg, Bridge triggert doctor --fix
   const cancelRequest = useCallback(() => {
@@ -635,8 +654,28 @@ const ChatScreen: React.FC = () => {
     rvs.send('cancel_request' as any, {});
   }, []);
 
+  // Barge-In: wenn der User waehrend ARIA arbeitet/spricht eine neue Sprach-
+  // Nachricht aufnimmt, alte Aktivitaet sofort abbrechen — TTS verstummen,
+  // aria-core-Run via cancel_request abbrechen. So kann man "ach vergiss es,
+  // mach lieber X" sagen wie in einem echten Gespraech.
+  const interruptAriaIfBusy = useCallback(() => {
+    const speaking = audioService.isPlayingAudio();
+    const thinking = agentActivity.activity !== 'idle';
+    if (!speaking && !thinking) return false;
+    console.log('[Chat] Barge-In: speaking=%s thinking=%s — interrupting ARIA',
+      speaking, thinking);
+    if (speaking) audioService.haltAllPlayback('user spricht (barge-in)');
+    if (thinking) {
+      setAgentActivity({ activity: 'idle', tool: '' });
+      rvs.send('cancel_request' as any, {});
+    }
+    return true;
+  }, [agentActivity]);
+
   // Sprachaufnahme abgeschlossen
   const handleVoiceRecording = useCallback(async (result: RecordingResult) => {
+    // Barge-In: laufende ARIA-Aktivitaet abbrechen falls aktiv.
+    const wasInterrupted = interruptAriaIfBusy();
     const location = await getCurrentLocation();
 
     const userMsg: ChatMessage = {
@@ -653,9 +692,10 @@ const ChatScreen: React.FC = () => {
       mimeType: result.mimeType,
       voice: localXttsVoiceRef.current,
       speed: ttsSpeedRef.current,
+      interrupted: wasInterrupted,
       ...(location && { location }),
     });
-  }, [getCurrentLocation]);
+  }, [getCurrentLocation, interruptAriaIfBusy]);
 
   // Datei auswaehlen → zur Pending-Liste hinzufuegen
   const handleFileSelected = useCallback(async (file: FileData) => {
```
```diff
@@ -35,6 +35,10 @@ import {
   CONV_WINDOW_MIN_SEC,
   CONV_WINDOW_MAX_SEC,
   CONV_WINDOW_STORAGE_KEY,
+  MAX_RECORDING_DEFAULT_SEC,
+  MAX_RECORDING_MIN_SEC,
+  MAX_RECORDING_MAX_SEC,
+  MAX_RECORDING_STORAGE_KEY,
   TTS_SPEED_DEFAULT,
   TTS_SPEED_MIN,
   TTS_SPEED_MAX,
@@ -72,6 +76,18 @@ interface EventEntry {
 
 type LogTab = 'live' | 'events';
 
+// Settings-Sub-Screens. Reihenfolge im Hauptmenue.
+const SETTINGS_SECTIONS = [
+  { id: 'connection', icon: '🔌', label: 'Verbindung', desc: 'Server, Token, Status, Verbindungslog' },
+  { id: 'general', icon: '⚙️', label: 'Allgemein', desc: 'Betriebsmodus, GPS-Standort' },
+  { id: 'voice_input', icon: '🎙️', label: 'Spracheingabe', desc: 'Stille-Toleranz, Aufnahmedauer' },
+  { id: 'wake_word', icon: '👂', label: 'Wake-Word', desc: 'Wake-Word-Auswahl' },
+  { id: 'voice_output', icon: '🔊', label: 'Sprachausgabe', desc: 'Stimmen, Pre-Roll, Geschwindigkeit' },
+  { id: 'storage', icon: '📁', label: 'Speicher', desc: 'Anhang-Speicherort, Auto-Download' },
+  { id: 'protocol', icon: '📜', label: 'Protokoll', desc: 'Privatsphaere, Backup' },
+  { id: 'about', icon: 'ℹ️', label: 'Ueber', desc: 'App-Version, Update' },
+] as const;
+
 // Container-Farben fuer Live-Logs
 const SOURCE_COLORS: Record<string, string> = {
   'aria-core': '#4A9EFF', // Blau
```
```diff
@@ -102,6 +118,7 @@ const SettingsScreen: React.FC = () => {
   const [ttsPrerollSec, setTtsPrerollSec] = useState<number>(TTS_PREROLL_DEFAULT_SEC);
   const [vadSilenceSec, setVadSilenceSec] = useState<number>(VAD_SILENCE_DEFAULT_SEC);
   const [convWindowSec, setConvWindowSec] = useState<number>(CONV_WINDOW_DEFAULT_SEC);
+  const [maxRecordingSec, setMaxRecordingSec] = useState<number>(MAX_RECORDING_DEFAULT_SEC);
   const [ttsSpeed, setTtsSpeed] = useState<number>(TTS_SPEED_DEFAULT);
   const [wakeKeyword, setWakeKeyword] = useState<string>(DEFAULT_KEYWORD);
   const [wakeStatus, setWakeStatus] = useState<string>('');
@@ -111,6 +128,10 @@ const SettingsScreen: React.FC = () => {
   const [availableVoices, setAvailableVoices] = useState<Array<{name: string, size: number}>>([]);
   const [voiceCloneVisible, setVoiceCloneVisible] = useState(false);
   const [tempPath, setTempPath] = useState('');
+  // Sub-Screen Navigation: null = Hauptmenue, sonst eine der Section-IDs.
+  // So bleibt aller geteilte State im selben Component-Closure und wir
+  // brauchen keine react-navigation-Stack-Setup.
+  const [currentSection, setCurrentSection] = useState<string | null>(null);
 
   let logIdCounter = 0;
 
@@ -156,6 +177,14 @@ const SettingsScreen: React.FC = () => {
       }
     }
   });
+  AsyncStorage.getItem(MAX_RECORDING_STORAGE_KEY).then(saved => {
+    if (saved != null) {
+      const n = parseFloat(saved);
+      if (isFinite(n) && n >= MAX_RECORDING_MIN_SEC && n <= MAX_RECORDING_MAX_SEC) {
+        setMaxRecordingSec(n);
+      }
+    }
+  });
   AsyncStorage.getItem(TTS_SPEED_STORAGE_KEY).then(saved => {
     if (saved != null) {
       const n = parseFloat(saved);
```
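The load path above follows a consistent pattern for every stored setting: a persisted string only overrides the default when it parses to a finite number inside the allowed range. As a reusable helper it might look like this (the helper itself is a sketch, not app code; the constants in the usage note are illustrative):

```typescript
// Restore a numeric setting from persisted storage, rejecting stale or
// corrupt values (NaN, Infinity, out of range) in favour of the default.

function restoreSetting(
  saved: string | null,
  min: number,
  max: number,
  fallback: number,
): number {
  if (saved == null) return fallback;
  const n = parseFloat(saved);
  return isFinite(n) && n >= min && n <= max ? n : fallback;
}
```

For example, with a 1-30 min recording limit stored in seconds, `restoreSetting(saved, 60, 1800, 300)` keeps a saved `"600"` but falls back to 300 for `"9999"` or garbage input.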
```diff
@@ -480,7 +509,39 @@ const SettingsScreen: React.FC = () => {
       />
       <ScrollView style={styles.container} contentContainerStyle={styles.content}>
 
+        {currentSection === null && (
+          <>
+            {SETTINGS_SECTIONS.map(s => (
+              <TouchableOpacity
+                key={s.id}
+                style={styles.menuItem}
+                onPress={() => setCurrentSection(s.id)}
+              >
+                <Text style={styles.menuItemIcon}>{s.icon}</Text>
+                <View style={styles.menuItemTextWrap}>
+                  <Text style={styles.menuItemLabel}>{s.label}</Text>
+                  <Text style={styles.menuItemDesc}>{s.desc}</Text>
+                </View>
+                <Text style={styles.menuItemChevron}>›</Text>
+              </TouchableOpacity>
+            ))}
+          </>
+        )}
+
+        {currentSection !== null && (
+          <TouchableOpacity
+            style={styles.subScreenHeader}
+            onPress={() => setCurrentSection(null)}
+          >
+            <Text style={styles.subScreenBack}>‹</Text>
+            <Text style={styles.subScreenTitle}>
+              {SETTINGS_SECTIONS.find(s => s.id === currentSection)?.label || ''}
+            </Text>
+          </TouchableOpacity>
+        )}
+
         {/* === Verbindung === */}
+        {currentSection === 'connection' && (<>
         <Text style={styles.sectionTitle}>Verbindung</Text>
         <View style={styles.card}>
           {/* Status-Anzeige */}
@@ -577,8 +638,10 @@ const SettingsScreen: React.FC = () => {
             <Text style={styles.clearButtonText}>Log l{'\u00F6'}schen</Text>
           </TouchableOpacity>
         </View>
+        </>)}
 
         {/* === Modus === */}
+        {currentSection === 'general' && (<>
         <Text style={styles.sectionTitle}>Betriebsmodus</Text>
         <View style={styles.card}>
           <ModeSelector currentModeId={currentMode} onModeChange={handleModeChange} />
@@ -602,8 +665,10 @@ const SettingsScreen: React.FC = () => {
             />
           </View>
         </View>
+        </>)}
 
         {/* === Spracheingabe (geraetelokal) === */}
+        {currentSection === 'voice_input' && (<>
         <Text style={styles.sectionTitle}>Spracheingabe</Text>
         <View style={styles.card}>
           <Text style={styles.toggleLabel}>Stille-Toleranz</Text>
@@ -671,9 +736,44 @@ const SettingsScreen: React.FC = () => {
               <Text style={styles.prerollButtonText}>+1</Text>
             </TouchableOpacity>
           </View>
+
+          <Text style={[styles.toggleLabel, {marginTop: 24}]}>Maximale Aufnahmedauer</Text>
+          <Text style={styles.toggleHint}>
+            Notbremse: nach so vielen Minuten wird die Aufnahme automatisch beendet,
+            auch wenn keine Stille erkannt wurde. Nuetzlich fuer lange Erklaerungen
+            oder Diktate. Default: {Math.round(MAX_RECORDING_DEFAULT_SEC / 60)} Min, max {Math.round(MAX_RECORDING_MAX_SEC / 60)} Min.
+          </Text>
+          <View style={styles.prerollRow}>
+            <TouchableOpacity
+              style={styles.prerollButton}
+              onPress={() => {
+                const next = Math.max(MAX_RECORDING_MIN_SEC, maxRecordingSec - 60);
+                setMaxRecordingSec(next);
+                AsyncStorage.setItem(MAX_RECORDING_STORAGE_KEY, String(next));
+              }}
+              disabled={maxRecordingSec <= MAX_RECORDING_MIN_SEC}
+            >
+              <Text style={styles.prerollButtonText}>−1m</Text>
+            </TouchableOpacity>
+            <Text style={styles.prerollValue}>{Math.round(maxRecordingSec / 60)} min</Text>
+            <TouchableOpacity
+              style={styles.prerollButton}
+              onPress={() => {
+                const next = Math.min(MAX_RECORDING_MAX_SEC, maxRecordingSec + 60);
+                setMaxRecordingSec(next);
+                AsyncStorage.setItem(MAX_RECORDING_STORAGE_KEY, String(next));
+              }}
+              disabled={maxRecordingSec >= MAX_RECORDING_MAX_SEC}
+            >
+              <Text style={styles.prerollButtonText}>+1m</Text>
+            </TouchableOpacity>
+          </View>
         </View>
+
+        </>)}
 
         {/* === Wake-Word (komplett on-device, openWakeWord) === */}
+        {currentSection === 'wake_word' && (<>
         <Text style={styles.sectionTitle}>Wake-Word</Text>
         <View style={styles.card}>
           <Text style={styles.toggleHint}>
```
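The −1m/+1m stepper above reduces to clamped arithmetic in seconds; the `disabled` props on both buttons mirror the same bounds. A tiny sketch (the constants are illustrative stand-ins for `MAX_RECORDING_MIN_SEC`/`MAX_RECORDING_MAX_SEC`, whose real values the diff does not show):

```typescript
// Clamped ±1 min stepper for the max-recording limit, kept in seconds.

const MIN_SEC = 60;   // assumed: 1 min lower bound
const MAX_SEC = 1800; // assumed: 30 min upper bound

function stepRecordingLimit(currentSec: number, deltaMin: number): number {
  const next = currentSec + deltaMin * 60;
  return Math.min(MAX_SEC, Math.max(MIN_SEC, next));
}
```

Storing seconds while displaying `Math.round(sec / 60)` minutes keeps the persisted value in the same unit as the VAD timings.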
```diff
@@ -729,8 +829,10 @@ const SettingsScreen: React.FC = () => {
             <Text style={{marginTop: 8, fontSize: 12, color: '#8888AA'}}>{wakeStatus}</Text>
           )}
         </View>
+        </>)}
 
         {/* === Sprachausgabe (geraetelokal) === */}
+        {currentSection === 'voice_output' && (<>
         <Text style={styles.sectionTitle}>Sprachausgabe</Text>
         <View style={styles.card}>
           <View style={styles.toggleRow}>
@@ -873,7 +975,10 @@ const SettingsScreen: React.FC = () => {
           )}
         </View>
+
+        </>)}
 
         {/* === Speicher === */}
+        {currentSection === 'storage' && (<>
         <Text style={styles.sectionTitle}>Anhang-Speicher</Text>
         <View style={styles.card}>
           <View style={styles.toggleRow}>
@@ -948,7 +1053,10 @@ const SettingsScreen: React.FC = () => {
           )}
         </View>
+
+        </>)}
 
         {/* === Logs === */}
+        {currentSection === 'protocol' && (<>
         <Text style={styles.sectionTitle}>Protokoll</Text>
         <View style={styles.card}>
           {/* Tab-Umschalter */}
@@ -1027,8 +1135,10 @@ const SettingsScreen: React.FC = () => {
             <Text style={styles.clearButtonText}>Protokoll l{'\u00F6'}schen</Text>
           </TouchableOpacity>
         </View>
+        </>)}
 
         {/* === About === */}
+        {currentSection === 'about' && (<>
         <Text style={styles.sectionTitle}>{'\u00DC'}ber</Text>
         <View style={styles.card}>
           <Text style={styles.aboutTitle}>ARIA Cockpit</Text>
@@ -1048,6 +1158,7 @@ const SettingsScreen: React.FC = () => {
           <Text style={styles.connectButtonText}>Auf Updates pr{'\u00FC'}fen</Text>
```
|
||||||
</TouchableOpacity>
|
</TouchableOpacity>
|
||||||
</View>
|
</View>
|
||||||
|
</>)}
|
||||||
|
|
||||||
{/* Platz am Ende */}
|
{/* Platz am Ende */}
|
||||||
<View style={styles.bottomSpacer} />
|
<View style={styles.bottomSpacer} />
|
||||||
@@ -1076,6 +1187,58 @@ const styles = StyleSheet.create({
|
|||||||
marginBottom: 8,
|
marginBottom: 8,
|
||||||
marginLeft: 4,
|
marginLeft: 4,
|
||||||
},
|
},
|
||||||
|
menuItem: {
|
||||||
|
flexDirection: 'row',
|
||||||
|
alignItems: 'center',
|
||||||
|
backgroundColor: '#1E1E2E',
|
||||||
|
borderRadius: 10,
|
||||||
|
paddingVertical: 14,
|
||||||
|
paddingHorizontal: 14,
|
||||||
|
marginBottom: 8,
|
||||||
|
},
|
||||||
|
menuItemIcon: {
|
||||||
|
fontSize: 22,
|
||||||
|
marginRight: 14,
|
||||||
|
width: 28,
|
||||||
|
textAlign: 'center',
|
||||||
|
},
|
||||||
|
menuItemTextWrap: {
|
||||||
|
flex: 1,
|
||||||
|
},
|
||||||
|
menuItemLabel: {
|
||||||
|
color: '#FFFFFF',
|
||||||
|
fontSize: 16,
|
||||||
|
fontWeight: '600',
|
||||||
|
},
|
||||||
|
menuItemDesc: {
|
||||||
|
color: '#8888AA',
|
||||||
|
fontSize: 12,
|
||||||
|
marginTop: 2,
|
||||||
|
},
|
||||||
|
menuItemChevron: {
|
||||||
|
color: '#8888AA',
|
||||||
|
fontSize: 24,
|
||||||
|
fontWeight: '300',
|
||||||
|
marginLeft: 8,
|
||||||
|
},
|
||||||
|
subScreenHeader: {
|
||||||
|
flexDirection: 'row',
|
||||||
|
alignItems: 'center',
|
||||||
|
paddingVertical: 8,
|
||||||
|
marginBottom: 8,
|
||||||
|
},
|
||||||
|
subScreenBack: {
|
||||||
|
color: '#0096FF',
|
||||||
|
fontSize: 32,
|
||||||
|
fontWeight: '300',
|
||||||
|
marginRight: 12,
|
||||||
|
lineHeight: 36,
|
||||||
|
},
|
||||||
|
subScreenTitle: {
|
||||||
|
color: '#FFFFFF',
|
||||||
|
fontSize: 20,
|
||||||
|
fontWeight: '700',
|
||||||
|
},
|
||||||
card: {
|
card: {
|
||||||
backgroundColor: '#12122A',
|
backgroundColor: '#12122A',
|
||||||
borderRadius: 14,
|
borderRadius: 14,
|
||||||
|
|||||||
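The −1m/+1m handlers in the settings diff above clamp the new value into the allowed range before persisting it. As a minimal, framework-free sketch of that clamp-and-persist step (Python for illustration; the function name and the plain `dict` standing in for AsyncStorage are mine, not part of the app):

```python
MIN_SEC, MAX_SEC = 60, 1800  # 1-30 minutes, matching MAX_RECORDING_MIN_SEC/MAX_SEC above

def step_max_recording(current_sec: int, delta_sec: int, store: dict) -> int:
    """Clamp current+delta into [MIN_SEC, MAX_SEC] and persist the result,
    mirroring the onPress handlers above (illustrative names only)."""
    nxt = max(MIN_SEC, min(MAX_SEC, current_sec + delta_sec))
    store["aria_max_recording_sec"] = str(nxt)  # AsyncStorage stores strings
    return nxt

store: dict = {}
print(step_max_recording(300, 60, store))   # 360
print(step_max_recording(1790, 60, store))  # 1800 (clamped at the maximum)
```

The real UI additionally disables the buttons at the bounds; the clamp makes the handler safe even without that guard.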
@@ -6,7 +6,7 @@
  * Nutzt react-native-audio-recorder-player fuer Aufnahme.
  */

-import { Platform, PermissionsAndroid, NativeModules } from 'react-native';
+import { Platform, PermissionsAndroid, NativeModules, ToastAndroid } from 'react-native';
 import Sound from 'react-native-sound';
 import RNFS from 'react-native-fs';
 import AsyncStorage from '@react-native-async-storage/async-storage';
@@ -72,9 +72,16 @@ const AUDIO_SAMPLE_RATE = 16000;
 const AUDIO_CHANNELS = 1;
 const AUDIO_ENCODING = 'audio/wav';

-// VAD (Voice Activity Detection) — Stille-Erkennung
-const VAD_SILENCE_THRESHOLD_DB = -45; // dB unter dem als "Stille" gilt
-const VAD_SPEECH_THRESHOLD_DB = -28; // dB ueber dem als "Sprache" gilt (Sprach-Gate) — hoeher = weniger Umgebungsgeraeusche
+// VAD (Voice Activity Detection) — Stille-Erkennung.
+// Fallback-Werte falls die adaptive Baseline-Messung fehlschlaegt (z.B. weil
+// das Mikro keine metering-Updates liefert). Adaptive Werte werden zur
+// Laufzeit aus den ersten BASELINE_SAMPLES gemessen und auf baseline+offset
+// gesetzt — funktioniert in lauten wie leisen Umgebungen.
+const VAD_SILENCE_FALLBACK_DB = -38; // Fallback Stille-Schwelle
+const VAD_SPEECH_FALLBACK_DB = -22; // Fallback Sprach-Schwelle
+const VAD_SILENCE_OFFSET_DB = 6; // Stille-Schwelle = Baseline + 6dB
+const VAD_SPEECH_OFFSET_DB = 12; // sicheres Speech = Baseline + 12dB
+const VAD_BASELINE_SAMPLES = 5; // 5 × 100ms = 500ms Baseline
 const VAD_SPEECH_MIN_MS = 500; // ms Sprache bevor Aufnahme zaehlt — laenger = keine Huestler/Klopfer mehr

 // VAD-Stille (in Sekunden) — wie lange Sprechpause toleriert wird, bevor
@@ -138,7 +145,24 @@ async function loadVadSilenceMs(): Promise<number> {

 // Max-Dauer einer Aufnahme (Notbremse gegen Runaway-Loops). Auf 2 Minuten
 // hochgezogen damit auch laengere Erklaerungen durchgehen.
-const MAX_RECORDING_MS = 120000;
+// Default 5 Minuten — konfigurierbar in den App-Settings (1-30 Minuten).
+export const MAX_RECORDING_DEFAULT_SEC = 300;
+export const MAX_RECORDING_MIN_SEC = 60;
+export const MAX_RECORDING_MAX_SEC = 1800;
+export const MAX_RECORDING_STORAGE_KEY = 'aria_max_recording_sec';
+
+export async function loadMaxRecordingMs(): Promise<number> {
+  try {
+    const raw = await AsyncStorage.getItem(MAX_RECORDING_STORAGE_KEY);
+    if (raw != null) {
+      const n = parseFloat(raw);
+      if (isFinite(n) && n >= MAX_RECORDING_MIN_SEC && n <= MAX_RECORDING_MAX_SEC) {
+        return Math.round(n * 1000);
+      }
+    }
+  } catch {}
+  return MAX_RECORDING_DEFAULT_SEC * 1000;
+}

 // Pre-Roll: Wie lange Audio im AudioTrack-Buffer liegt bevor play() startet.
 // Einstellbar via Diagnostic/Settings (Key: aria_tts_preroll_sec).
@@ -212,6 +236,14 @@ class AudioService {
   // Latch damit der Silence-Callback pro Aufnahme genau einmal feuert
   private silenceFired: boolean = false;
   private noSpeechTimer: ReturnType<typeof setTimeout> | null = null;
+  // Adaptive Schwellen — werden in den ersten 500ms aus dem Mikro-Pegel
+  // gemessen. baseline = avg dB der ersten 5 Samples, dann:
+  // silence = baseline + VAD_SILENCE_OFFSET_DB (6dB ueber ambient)
+  // speech = baseline + VAD_SPEECH_OFFSET_DB (12dB ueber ambient = klares Reden)
+  // Funktioniert sowohl im stillen Buero als auch im lauten Cafe.
+  private vadBaselineSamples: number[] = [];
+  private vadAdaptiveSilenceDb: number = VAD_SILENCE_FALLBACK_DB;
+  private vadAdaptiveSpeechDb: number = VAD_SPEECH_FALLBACK_DB;

   constructor() {
     this.recorder = new AudioRecorderPlayer();
@@ -270,6 +302,14 @@ class AudioService {
     this.stopPlayback();
   }

+  /** True wenn ARIA gerade was abspielt — egal ob WAV-Queue oder PCM-Stream.
+   * Nuetzlich fuer "Barge-In": wenn der User spricht waehrend ARIA spricht,
+   * soll die ARIA-Wiedergabe abgebrochen + die neue User-Message verarbeitet
+   * werden ("ach vergiss es, mach lieber X"). */
+  isPlayingAudio(): boolean {
+    return this.isPlaying || this.pcmStreamActive;
+  }

   // --- Berechtigungen ---

   async requestMicrophonePermission(): Promise<boolean> {
@@ -341,8 +381,25 @@ class AudioService {
       const db = e.currentMetering ?? -160;
       this.meterListeners.forEach(cb => cb(db));

+      // Adaptive Baseline: erste 5 Samples (~500ms) sammeln, dann Schwellen
+      // anpassen. -160 (kein Metering) ignorieren — sonst wird die Baseline
+      // sinnlos niedrig.
+      if (this.vadBaselineSamples.length < VAD_BASELINE_SAMPLES) {
+        if (db > -100) {
+          this.vadBaselineSamples.push(db);
+          if (this.vadBaselineSamples.length === VAD_BASELINE_SAMPLES) {
+            const avg = this.vadBaselineSamples.reduce((a, b) => a + b, 0) / VAD_BASELINE_SAMPLES;
+            this.vadAdaptiveSilenceDb = avg + VAD_SILENCE_OFFSET_DB;
+            this.vadAdaptiveSpeechDb = avg + VAD_SPEECH_OFFSET_DB;
+            const msg = `VAD: ambient=${avg.toFixed(0)}dB stille>${this.vadAdaptiveSilenceDb.toFixed(0)}dB`;
+            console.log('[Audio] %s speech>%s', msg, this.vadAdaptiveSpeechDb.toFixed(1));
+            try { ToastAndroid.show(msg, ToastAndroid.SHORT); } catch {}
+          }
+        }
+      }

       // Sprach-Gate: Erkennen ob tatsaechlich gesprochen wird
-      if (db > VAD_SPEECH_THRESHOLD_DB) {
+      if (db > this.vadAdaptiveSpeechDb) {
         if (!this.speechDetected && this.speechStartTime === 0) {
           this.speechStartTime = Date.now();
         }
@@ -357,7 +414,7 @@ class AudioService {

       // VAD: Stille erkennen (nur wenn Sprache erkannt wurde)
       if (this.vadEnabled) {
-        if (db > VAD_SILENCE_THRESHOLD_DB) {
+        if (db > this.vadAdaptiveSilenceDb) {
           this.lastSpeechTime = Date.now();
         }
       }
@@ -367,6 +424,12 @@ class AudioService {
     this.lastSpeechTime = Date.now();
     this.speechDetected = false;
     this.speechStartTime = 0;
+    // VAD-Adaptive zurueckgesetzt: Baseline wird in den ersten 500ms neu
+    // gemessen. Bis dahin gelten die Fallback-Schwellen — die sind etwas
+    // empfindlicher als die alten Werte (-38 statt -45 fuer Stille).
+    this.vadBaselineSamples = [];
+    this.vadAdaptiveSilenceDb = VAD_SILENCE_FALLBACK_DB;
+    this.vadAdaptiveSpeechDb = VAD_SPEECH_FALLBACK_DB;
     this.setState('recording');

     // Andere Apps waehrend der Aufnahme pausieren (Musik, Videos etc.)
@@ -394,18 +457,19 @@ class AudioService {
     };
     if (autoStop) {
       const vadSilenceMs = await loadVadSilenceMs();
+      const maxRecordingMs = await loadMaxRecordingMs();
       console.log('[Audio] startRecording: autoStop=true, VAD-Stille=%dms, MAX=%dms',
-        vadSilenceMs, MAX_RECORDING_MS);
+        vadSilenceMs, maxRecordingMs);
       this.vadTimer = setInterval(() => {
         const silenceDuration = Date.now() - this.lastSpeechTime;
         if (silenceDuration >= vadSilenceMs) {
           fireSilenceOnce(`VAD ${silenceDuration}ms Stille (Schwelle=${vadSilenceMs}ms)`);
         }
       }, 200);
-      // Notbremse: Nach MAX_RECORDING_MS zwangsweise stoppen
+      // Notbremse: Nach maxRecordingMs zwangsweise stoppen
       this.maxDurationTimer = setTimeout(() => {
-        fireSilenceOnce(`Max-Dauer ${MAX_RECORDING_MS}ms`);
-      }, MAX_RECORDING_MS);
+        fireSilenceOnce(`Max-Dauer ${maxRecordingMs}ms`);
+      }, maxRecordingMs);
     }

     // Conversation-Window: Wenn der User innerhalb noSpeechTimeoutMs nicht
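The adaptive-threshold logic added in the metering callback above can be sketched in isolation. A minimal Python illustration (class and variable names are mine; the app implements this inside its TypeScript metering callback) of how the baseline is averaged from the first five ~100 ms samples and the two thresholds derived from it:

```python
SILENCE_OFFSET_DB = 6.0   # silence threshold = baseline + 6 dB
SPEECH_OFFSET_DB = 12.0   # speech threshold  = baseline + 12 dB
BASELINE_SAMPLES = 5      # 5 x ~100 ms = ~500 ms of ambient level
SILENCE_FALLBACK_DB = -38.0
SPEECH_FALLBACK_DB = -22.0

class AdaptiveVad:
    """Sketch of the adaptive VAD thresholding above (illustrative names)."""

    def __init__(self) -> None:
        self.samples: list[float] = []
        self.silence_db = SILENCE_FALLBACK_DB  # fallbacks until baseline is measured
        self.speech_db = SPEECH_FALLBACK_DB

    def feed(self, db: float) -> None:
        # Ignore "no metering" readings (e.g. -160) so the baseline is not
        # dragged down to a meaningless value.
        if len(self.samples) < BASELINE_SAMPLES and db > -100:
            self.samples.append(db)
            if len(self.samples) == BASELINE_SAMPLES:
                baseline = sum(self.samples) / BASELINE_SAMPLES
                self.silence_db = baseline + SILENCE_OFFSET_DB
                self.speech_db = baseline + SPEECH_OFFSET_DB

vad = AdaptiveVad()
for level in (-52.0, -50.0, -51.0, -49.0, -48.0):  # quiet room, ~-50 dB ambient
    vad.feed(level)
print(vad.silence_db, vad.speech_db)  # -44.0 -38.0
```

In a loud cafe the same five samples might average -30 dB, pushing the thresholds to -24/-18 dB, which is why the fixed -45/-28 dB constants could be dropped.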
+38 −9
@@ -1235,6 +1235,7 @@ class ARIABridge:
         except (TypeError, ValueError):
             self._next_speed_override = None
         if text:
+            interrupted = bool(payload.get("interrupted", False))
             # Wenn Files gerade gepuffert sind (Bild + Text gleichzeitig
             # gesendet), mergen wir sie zu einer einzigen Anfrage statt
             # zwei separater send_to_core-Calls.
@@ -1242,8 +1243,16 @@ class ARIABridge:
             if merged:
                 logger.info("[rvs] App-Chat (mit Anhaengen): '%s'", text[:80])
             else:
-                logger.info("[rvs] App-Chat: '%s'", text[:80])
-                await self.send_to_core(text, source="app")
+                core_text = (
+                    f"[Hinweis: Stefan hat dich gerade unterbrochen waehrend du noch "
+                    f"gesprochen oder gearbeitet hast. Folgendes ist eine Korrektur, "
+                    f"Ergaenzung oder ein Themenwechsel zu deiner letzten Antwort.] "
+                    f"{text}"
+                    if interrupted else text
+                )
+                logger.info("[rvs] App-Chat%s: '%s'",
+                            " [BARGE-IN]" if interrupted else "", text[:80])
+                await self.send_to_core(core_text, source="app" + (" [barge-in]" if interrupted else ""))
             return

         if msg_type == "cancel_request":
@@ -1500,9 +1509,11 @@ class ARIABridge:
                 self._next_speed_override = speed if 0.1 <= speed <= 5.0 else None
             except (TypeError, ValueError):
                 self._next_speed_override = None
-            logger.info("[rvs] Audio empfangen: %s, %dms, %dKB",
-                        mime_type, duration_ms, len(audio_b64) // 1365)
-            asyncio.create_task(self._process_app_audio(audio_b64, mime_type))
+            interrupted = bool(payload.get("interrupted", False))
+            logger.info("[rvs] Audio empfangen: %s, %dms, %dKB%s",
+                        mime_type, duration_ms, len(audio_b64) // 1365,
+                        " [BARGE-IN]" if interrupted else "")
+            asyncio.create_task(self._process_app_audio(audio_b64, mime_type, interrupted))

         elif msg_type == "stt_response":
             # Antwort der whisper-bridge auf unseren stt_request
@@ -1558,8 +1569,13 @@ class ARIABridge:
     _STT_REMOTE_TIMEOUT_READY_S = 45.0
     _STT_REMOTE_TIMEOUT_LOADING_S = 300.0

-    async def _process_app_audio(self, audio_b64: str, mime_type: str) -> None:
-        """App-Audio → STT → aria-core. Primaer via whisper-bridge (RVS), Fallback lokal."""
+    async def _process_app_audio(self, audio_b64: str, mime_type: str, interrupted: bool = False) -> None:
+        """App-Audio → STT → aria-core. Primaer via whisper-bridge (RVS), Fallback lokal.
+
+        interrupted=True wenn der User waehrend ARIA noch sprach/dachte aufgenommen hat
+        (Barge-In). Wird als Hinweis-Praefix an aria-core mitgegeben damit ARIA die
+        Korrektur/Unterbrechung in den Kontext einordnen kann statt als reine
+        Folgefrage zu behandeln."""
         # Erst Remote versuchen
         text = await self._stt_remote(audio_b64, mime_type)
         if text is None:
@@ -1571,12 +1587,21 @@ class ARIABridge:

         if text.strip():
             logger.info("[rvs] STT Ergebnis: '%s'", text[:80])
+            # Barge-In-Hinweis: gibt ARIA den Kontext dass sie unterbrochen wurde
+            # und dies eine Korrektur/Aenderung der vorherigen Anweisung sein kann.
+            core_text = (
+                f"[Hinweis: Stefan hat dich gerade unterbrochen waehrend du noch "
+                f"gesprochen oder gearbeitet hast. Folgendes ist eine Korrektur, "
+                f"Ergaenzung oder ein Themenwechsel zu deiner letzten Antwort.] "
+                f"{text}"
+                if interrupted else text
+            )
             # ERST an aria-core senden (wichtigster Schritt)
-            await self.send_to_core(text, source="app-voice")
+            await self.send_to_core(core_text, source="app-voice" + (" [barge-in]" if interrupted else ""))
             # STT-Text an RVS senden (fuer Anzeige in App + Diagnostic)
             # sender="stt" damit Bridge es ignoriert (kein Loop)
             try:
-                await self._send_to_rvs({
+                ok = await self._send_to_rvs({
                     "type": "chat",
                     "payload": {
                         "text": text,
@@ -1584,6 +1609,10 @@ class ARIABridge:
                     },
                     "timestamp": int(asyncio.get_event_loop().time() * 1000),
                 })
+                if ok:
+                    logger.info("[rvs] STT-Text an RVS broadcastet (sender=stt)")
+                else:
+                    logger.warning("[rvs] STT-Text NICHT broadcastet — _send_to_rvs lieferte False")
             except Exception as e:
                 logger.warning("[rvs] STT-Text konnte nicht an RVS gesendet werden: %s", e)
         else:
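The barge-in handling above reduces to one small, pure decision: prefix the user text with a context hint only when the `interrupted` flag is set, otherwise pass it through untouched. A standalone sketch of that step (the helper name is mine, not the bridge's API; the hint text matches the diff above):

```python
def with_barge_in_hint(text: str, interrupted: bool) -> str:
    """Prefix a context hint when the user interrupted ARIA mid-answer,
    mirroring the bridge logic above (illustrative helper)."""
    if not interrupted:
        return text
    hint = ("[Hinweis: Stefan hat dich gerade unterbrochen waehrend du noch "
            "gesprochen oder gearbeitet hast. Folgendes ist eine Korrektur, "
            "Ergaenzung oder ein Themenwechsel zu deiner letzten Antwort.] ")
    return hint + text

print(with_barge_in_hint("mach lieber X", False))                        # mach lieber X
print(with_barge_in_hint("mach lieber X", True).startswith("[Hinweis:")) # True
```

Keeping the hint in the message body (rather than a separate channel) means aria-core needs no protocol change to understand the correction.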
@@ -87,16 +87,34 @@
 - [x] App Text-Rendering: Nachrichten selektierbar + Autolink fuer URLs/E-Mails/Telefonnummern (Browser/Mail/Dialer)
 - [x] TTS-Wiedergabegeschwindigkeit pro Geraet einstellbar (Settings → 0.5-2.0x in 0.1-Schritten, Default 1.0)
 - [x] Diagnostic: Voice-Preview-Modal (Play-Icon vor Delete-X, Textfeld mit Default, WAV im Browser abspielen)
+- [x] **Wake-Word komplett on-device via openWakeWord (ONNX Runtime)** — Porcupine raus, kein API-Key/keine Lizenzgebuehren mehr. Mitgelieferte Keywords: hey_jarvis, computer, alexa, hey_mycroft, hey_rhasspy
+- [x] Wake-Word Embedding rank-4 Fix (Pipeline-Bug der das Triggern verhinderte) + Frame-Count aus Modell-Metadaten lesen
+- [x] APK ABI-Split auf arm64-v8a — von ~136 MB auf ~35 MB, Auto-Update-Downloads aufs Phone deutlich kleiner
+- [x] PCM-Underrun-Schutz: Stille-Fill in Render-Pausen verhindert Spotify-Auto-Resume nach 10s Stillstand
+- [x] Conversation-Focus-Lifecycle: AudioFocus haengt am Wake-Word-State 'conversing' statt an einzelnen Streams — Spotify bleibt durchgehend gepaust, auch zwischen mehreren Antworten
+- [x] PhoneStateListener: TTS pausiert bei eingehendem Anruf (READ_PHONE_STATE Permission)
+- [x] Voice-Override behaelt Stimme ueber alle TTS-Calls einer Antwort (vorher: nach erstem TTS-Call zurueck auf Default)
+- [x] Sprachnachricht-Bubble defensiv: STT-Result fuegt neue Bubble hinzu wenn Placeholder fehlt (Race-Schutz)
+- [x] Bild + Text als EINE Anfrage: Bridge buffert files 800ms, merged mit folgendem chat-Text zu einem send_to_core (statt zwei getrennten ARIA-Antworten)
+- [x] Diagnostic-Chat: bubblige Formatierung, mehrzeiliges Eingabefeld (textarea, Enter sendet, Shift+Enter neue Zeile)
+- [x] Diagnostic→App: persistente RVS-Connection statt frische pro Send (Race-Probleme mit Zombie-WS geloest)
+- [x] Adaptive VAD-Schwelle: Baseline aus den ersten 500ms Mic-Pegel, Stille = baseline+6dB / Sprache = baseline+12dB. Funktioniert in lauten wie leisen Umgebungen
+- [x] Max-Aufnahmedauer konfigurierbar in Settings (1-30 min, Default 5 min) — laengere Diktate moeglich
+- [x] Barge-In: User kann ARIA waehrend Antwort/Tool-Use unterbrechen, alte Aktivitaet wird abgebrochen, Bridge gibt aria-core einen Kontext-Hint dass es eine Korrektur ist
+- [x] Push-to-Talk raus, nur noch Tap-to-Talk (verhinderte Touch-Race-Probleme)
+- [x] Settings-Sub-Screens: 8 Kategorien (Verbindung, Allgemein, Spracheingabe, Wake-Word, Sprachausgabe, Speicher, Protokoll, Ueber) statt langer Liste
+- [x] Textauswahl in Bubbles wieder funktional (nested Text+onPress raus, dataDetectorType="all" macht Links automatisch klickbar)

 ## Offen

 ### Bugs
-- [ ] App: Wake-Word "jarvis" triggert nicht zuverlaessig (Porcupine-Debugging via ADB-Logcat ausstehend)
-- [ ] App: Stuerzt beim Lauschen ab, eventuell bei Nebengeraeuschen (Porcupine + Mic-Race, errorCallback haelt's jetzt zurueck — Dauertest ausstehend)
+- [ ] App: STT-Text ersetzt Placeholder nicht — Toast-Debug + Bridge-Log eingebaut, beim naechsten Test pruefen ob das chat-Event mit sender=stt in der App ankommt

 ### App Features
 - [ ] Chat-History zuverlaessiger laden (AsyncStorage Race Condition)
 - [ ] Background Audio Service (TTS auch bei minimierter App)
+- [ ] Custom-Wake-Word-Upload via Diagnostic (eigene .onnx-Files ohne App-Rebuild)
+- [ ] Pause+Resume bei Anruf: aktuell wird der TTS-Stream bei Klingeln hart gestoppt, schoener waere Pause + Resume nach Auflegen

 ### Architektur
 - [ ] Bilder: Claude Vision direkt nutzen (aktuell nur Dateipfad an ARIA)