Compare commits

14 commits:

a4d3449e3a, 44d2c6b4fe, 0309c95aa5, 2aa2cc70c9, 9d0776c819, f031fa159e,
be373466a3, bbf9aed3ba, 745b4a07c0, 23ca815cb2, cc3fac8142, cd89e36ec2,
f5b4285d15, 248e7c9ae4
```diff
@@ -380,6 +380,7 @@ API endpoint for other services: `GET http://localhost:3001/api/session`
 - Text chat with ARIA
 - **Voice recording**: push-to-talk (hold) or tap-to-talk (tap, auto-stop on silence)
 - **Conversation mode** (ear button): after each ARIA reply, recording starts automatically, back and forth like a natural conversation
+- **Wake word** (optional, Picovoice Porcupine on-device): "Jarvis", "Computer", etc. The microphone listens passively and a conversation starts on the keyword. Custom wake words are possible via the Picovoice Console. Without an API key, the ear button falls back to direct recording.
 - **VAD (Voice Activity Detection)**: configurable silence tolerance (1.0–8.0s, default 2.8s) before auto-stop kicks in. Max recording 120s.
 - **Speech gate**: the recording is discarded if no speech is detected
 - **STT (speech-to-text)**: 16kHz mono → Bridge → Gamebox Whisper (CUDA) → text in the chat. Near real-time.
```
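The VAD auto-stop rule described above (silence tolerance plus a hard recording cap) can be sketched as a small pure function. This is a sketch with assumed names; the app's real logic lives in its `AudioService` and reads the threshold from Settings:

```typescript
// Minimal sketch of the auto-stop decision (assumed constants; the app's
// real values are configurable via Settings).
const DEFAULT_VAD_SILENCE_MS = 2800; // default 2.8s silence tolerance
const MAX_RECORDING_MS = 120_000;    // hard cap: 120s

// Decide whether a recording should auto-stop, given when speech was last
// detected and when the recording started.
function shouldAutoStop(
  nowMs: number,
  lastSpeechMs: number,
  recordingStartMs: number,
  vadSilenceMs: number = DEFAULT_VAD_SILENCE_MS,
): boolean {
  const silence = nowMs - lastSpeechMs;
  const total = nowMs - recordingStartMs;
  return silence >= vadSilenceMs || total >= MAX_RECORDING_MS;
}
```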
````diff
@@ -398,6 +399,49 @@ API endpoint for other services: `GET http://localhost:3001/api/session`
 - GPS position (optional)
 - QR code scanner for token pairing

+### Setting up the wake word (Picovoice Porcupine)
+
+The wake word runs entirely **on-device** in the app; no audio leaves your phone
+for the detection. Picovoice currently offers a **7-day free trial** without a credit card
+and without auto-renewal; after that it is paid (see [picovoice.ai/pricing](https://picovoice.ai/pricing)).
+If you want to skip the wake word: the ear button also works without an AccessKey
+(direct recording instead of passive listening; see below).
+
+**1) Get an AccessKey** (one-time, ~2 minutes):
+
+1. Register at [console.picovoice.ai](https://console.picovoice.ai) (email + password, no credit card for the trial).
+2. After logging in, copy the **AccessKey** from the dashboard (a long Base64 string).
+
+**2) Enter the AccessKey in the app:**
+
+- App → **Settings** → **Wake Word** section
+- Paste the AccessKey, pick a **keyword** (default: `jarvis`)
+- Save; the app initializes Porcupine automatically
+
+**Built-in keywords** (available immediately, no training needed):
+`jarvis`, `computer`, `picovoice`, `porcupine`, `bumblebee`, `terminator`,
+`alexa`, `hey google`, `ok google`, `hey siri`
+
+**3) Create your own wake word** ("ARIA", "Hey Stefan", whatever you like):
+
+1. [console.picovoice.ai](https://console.picovoice.ai) → **Porcupine** → **Train Wake Word**
+2. Enter the word (e.g. `ARIA`), choose language `German` and platform `Android`
+3. Press **Train**; Picovoice trains the model in ~1–2 minutes
+4. Download the finished `.ppn` file
+5. *(Custom upload in the app is Phase 2; for now only built-in keywords work.
+   `.ppn` files can already be placed into the app bundle manually; the UI
+   for that comes with the next Diagnostic update.)*
+
+**Usage:**
+
+- Tap the **ear button (👂)** in the status bar; the wake word is armed and the app listens passively
+- Say the wake word; the icon switches to 🎙️ and a normal conversation runs
+- After each ARIA reply the mic opens once more; on silence it goes back to 👂
+- Tap again to turn the ear off (🔇)
+
+**Without an AccessKey:** the ear button starts direct recording instead (the mic
+is active immediately, no passive listening). Also a valid mode, just without
+hands-free activation via keyword.

 ### Initial setup (dev machine, one-time)

 ```bash
````
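The ear-button behaviour described under "Usage" amounts to a three-state machine (off → armed → conversing). A minimal sketch; the event names are assumptions, and the app's `wakeWordService` drives the real transitions:

```typescript
// Sketch of the ear-button state machine (hypothetical transition events).
type WakeWordState = 'off' | 'armed' | 'conversing';
type WakeWordEvent = 'tap' | 'keyword' | 'silence';

function nextState(state: WakeWordState, event: WakeWordEvent): WakeWordState {
  switch (event) {
    case 'tap':     // ear button toggles passive listening on/off
      return state === 'off' ? 'armed' : 'off';
    case 'keyword': // wake word heard while armed → conversation (🎙️)
      return state === 'armed' ? 'conversing' : state;
    case 'silence': // conversation went quiet → back to passive ear (👂)
      return state === 'conversing' ? 'armed' : state;
  }
}
```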
```diff
@@ -744,8 +788,9 @@ docker exec aria-core ssh aria-wohnung hostname
 - **Proxy cold start**: every message spawns a new `claude --print` process.
   This makes ARIA slower than the direct Claude CLI. The timeout is 900s (15 min).
 - **No streaming to the app**: the app only shows the finished reply, no streaming tokens.
-- **Wake word only on the VM**: the Bridge listens for "ARIA" via the VM's local microphone.
-  The app has energy-based detection (Phase 1). The on-device "ARIA" keyword (Porcupine) is Phase 2.
+- **App wake word limited to built-in keywords**: `jarvis`, `computer`, etc. work
+  immediately; custom wake words (`.ppn` from the Picovoice Console) currently still have
+  to go into the app bundle manually. The upload UI in Diagnostic is Phase 2.
 - **Audio format**: the app records AAC/MP4; the Bridge converts it to 16kHz PCM via FFmpeg.
 - **RVS zombie connections**: WebSocket connections occasionally die without an error.
   The Bridge has a ping check (5s); Diagnostic uses a fresh connection per request.
```
```diff
@@ -800,6 +845,7 @@ docker exec aria-core ssh aria-wohnung hostname
 - [x] Audio pause instead of ducking (TRANSIENT instead of MAY_DUCK) + release-timing fix
 - [x] VAD silence tolerance and max recording configurable (1-8s, 120s)
 - [x] Disk-full banner in Diagnostic with copyable cleanup commands
+- [x] Porcupine wake word on-device in the app (built-in keywords + state icon)

 ### Phase 2: ARIA becomes productive
```
```diff
@@ -815,5 +861,5 @@ docker exec aria-core ssh aria-wohnung hostname
 - [ ] STARFACE telephony skill
 - [ ] Desktop client (Tauri)
 - [ ] bKVM remote IT support
-- [ ] Porcupine wake word (on-device "ARIA" in the app)
+- [ ] Custom `.ppn` upload for the wake word via Diagnostic (custom trigger words)
 - [ ] Claude Vision directly (image analysis without the file-path detour)
```
```diff
@@ -79,8 +79,8 @@ android {
         applicationId "com.ariacockpit"
         minSdkVersion rootProject.ext.minSdkVersion
         targetSdkVersion rootProject.ext.targetSdkVersion
-        versionCode 509
-        versionName "0.0.5.9"
+        versionCode 605
+        versionName "0.0.6.5"
         // Fallback for libraries with product flavors
         missingDimensionStrategy 'react-native-camera', 'general'
     }
```
|||||||
@@ -39,7 +39,10 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
|
|||||||
private const val MAX_PREROLL_SECONDS = 10.0
|
private const val MAX_PREROLL_SECONDS = 10.0
|
||||||
// Stille am Stream-Anfang, damit AudioTrack sauber anfaehrt und die
|
// Stille am Stream-Anfang, damit AudioTrack sauber anfaehrt und die
|
||||||
// ersten Samples nicht abgeschnitten werden (XTTS-Warmup + play()-Latenz).
|
// ersten Samples nicht abgeschnitten werden (XTTS-Warmup + play()-Latenz).
|
||||||
private const val LEADING_SILENCE_SECONDS = 0.2
|
private const val LEADING_SILENCE_SECONDS = 0.3
|
||||||
|
// Stille am Ende — puffert das Hardware-Flushen damit die letzten
|
||||||
|
// echten Samples garantiert ausgespielt werden bevor stop() kommt.
|
||||||
|
private const val TRAILING_SILENCE_SECONDS = 0.3
|
||||||
}
|
}
|
||||||
|
|
||||||
override fun getName() = "PcmStreamPlayer"
|
override fun getName() = "PcmStreamPlayer"
|
||||||
```diff
@@ -109,9 +112,9 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
             val t = track ?: return@Thread
             try {
                 // Leading silence into the buffer: gives AudioTrack time to spin up.
-                val silenceBytes = ((sampleRate * channels * 2) * LEADING_SILENCE_SECONDS).toInt() and 0x7FFFFFFE
-                if (silenceBytes > 0) {
-                    val silence = ByteArray(silenceBytes)
+                val leadingBytes = ((sampleRate * channels * 2) * LEADING_SILENCE_SECONDS).toInt() and 0x7FFFFFFE
+                if (leadingBytes > 0) {
+                    val silence = ByteArray(leadingBytes)
                     var silOff = 0
                     while (silOff < silence.size && !writerShouldStop) {
                         val w = t.write(silence, silOff, silence.size - silOff)
```
```diff
@@ -120,8 +123,23 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
                     }
                     bytesBuffered += silence.size
                 }
-                while (!writerShouldStop) {
-                    val data = queue.poll(50, java.util.concurrent.TimeUnit.MILLISECONDS) ?: run {
+                // With preroll=0, call play() IMMEDIATELY after the leading silence,
+                // not only when the first real chunk arrives. Android's AudioTrack
+                // keeps the play state and waits for new samples, so it doesn't
+                // swallow words when the first chunk only arrives after play()'s
+                // startup latency.
+                if (prerollBytes == 0 && !playbackStarted) {
+                    try {
+                        t.play()
+                        playbackStarted = true
+                        Log.i(TAG, "Playback sofort gestartet (preroll=0, ${bytesBuffered}B silence)")
+                    } catch (e: Exception) {
+                        Log.w(TAG, "play() sofort failed: ${e.message}")
+                    }
+                }
+                mainLoop@ while (!writerShouldStop) {
+                    val data = queue.poll(50, java.util.concurrent.TimeUnit.MILLISECONDS)
+                    if (data == null) {
                         if (endRequested) {
                             // If we end before the pre-roll (short text): play anyway
                             if (!playbackStarted) {
```
```diff
@@ -133,10 +151,10 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
                                 Log.w(TAG, "play() fallback failed: ${e.message}")
                             }
                         }
-                        return@Thread
+                        break@mainLoop
                     }
-                    null
-                } ?: continue
+                    continue@mainLoop
+                }

                 // Pre-roll check: call play() only once enough is buffered
                 if (!playbackStarted && bytesBuffered + data.size >= prerollBytes) {
```
```diff
@@ -157,6 +175,19 @@ class PcmStreamPlayerModule(reactContext: ReactApplicationContext) : ReactContex
                     }
                     bytesBuffered += data.size
                 }
+                // Trailing silence so the last real samples are guaranteed to get
+                // through the hardware buffering before stop() cuts them off
+                val trailingBytes = ((sampleRate * channels * 2) * TRAILING_SILENCE_SECONDS).toInt() and 0x7FFFFFFE
+                if (trailingBytes > 0 && !writerShouldStop) {
+                    val silence = ByteArray(trailingBytes)
+                    var silOff = 0
+                    while (silOff < silence.size && !writerShouldStop) {
+                        val w = t.write(silence, silOff, silence.size - silOff)
+                        if (w <= 0) break
+                        silOff += w
+                    }
+                    bytesBuffered += silence.size
+                }
             } catch (e: Exception) {
                 Log.w(TAG, "Writer-Thread Fehler: ${e.message}")
             } finally {
```
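The silence-buffer sizing in the Kotlin hunks above multiplies sample rate × channels × 2 bytes (16-bit PCM) by the silence duration, then masks the result with `0x7FFFFFFE` to force an even byte count so 16-bit sample alignment holds. A TypeScript sketch of the same arithmetic:

```typescript
// Sketch of the silence-buffer sizing used in the native module:
// bytes = sampleRate * channels * 2 (16-bit PCM) * seconds, truncated,
// with the lowest bit cleared so the byte count is always even (one
// 16-bit sample = 2 bytes; an odd count would split a sample).
function silenceBytes(sampleRate: number, channels: number, seconds: number): number {
  return Math.trunc(sampleRate * channels * 2 * seconds) & 0x7ffffffe;
}
```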
```diff
@@ -1,6 +1,6 @@
 {
   "name": "aria-cockpit",
-  "version": "0.0.5.9",
+  "version": "0.0.6.5",
   "private": true,
   "scripts": {
     "android": "react-native run-android",
```
|||||||
@@ -72,13 +72,28 @@ interface Props {
|
|||||||
const MessageText: React.FC<Props> = ({ text, style }) => {
|
const MessageText: React.FC<Props> = ({ text, style }) => {
|
||||||
const segments = React.useMemo(() => tokenize(text), [text]);
|
const segments = React.useMemo(() => tokenize(text), [text]);
|
||||||
return (
|
return (
|
||||||
<Text style={style} selectable>
|
<Text
|
||||||
|
style={style}
|
||||||
|
selectable
|
||||||
|
// dataDetectorType ist Android-only und macht Phone/URL/Email zusaetzlich
|
||||||
|
// ueber System-Detection klickbar — als Fallback falls unsere Regex-
|
||||||
|
// Tokens nicht passen.
|
||||||
|
dataDetectorType="all"
|
||||||
|
>
|
||||||
{segments.map((seg, i) => {
|
{segments.map((seg, i) => {
|
||||||
if (seg.kind === 'text') {
|
if (seg.kind === 'text') {
|
||||||
return <Text key={i}>{seg.text}</Text>;
|
return <Text key={i} selectable>{seg.text}</Text>;
|
||||||
}
|
}
|
||||||
return (
|
return (
|
||||||
<Text key={i} style={LINK_STYLE} onPress={() => onPress(seg)}>
|
<Text
|
||||||
|
key={i}
|
||||||
|
selectable
|
||||||
|
style={LINK_STYLE}
|
||||||
|
onPress={() => onPress(seg)}
|
||||||
|
// Long-Press soll an den Parent durch fuer Selection
|
||||||
|
onLongPress={undefined}
|
||||||
|
suppressHighlighting={false}
|
||||||
|
>
|
||||||
{seg.text}
|
{seg.text}
|
||||||
</Text>
|
</Text>
|
||||||
);
|
);
|
||||||
|
|||||||
@@ -104,6 +104,8 @@ const ChatScreen: React.FC = () => {
|
|||||||
const [showCameraUpload, setShowCameraUpload] = useState(false);
|
const [showCameraUpload, setShowCameraUpload] = useState(false);
|
||||||
const [gpsEnabled, setGpsEnabled] = useState(false);
|
const [gpsEnabled, setGpsEnabled] = useState(false);
|
||||||
const [wakeWordActive, setWakeWordActive] = useState(false);
|
const [wakeWordActive, setWakeWordActive] = useState(false);
|
||||||
|
// Genauer State (off/armed/conversing) fuer UI-Feedback am Button
|
||||||
|
const [wakeWordState, setWakeWordState] = useState<'off' | 'armed' | 'conversing'>('off');
|
||||||
const [fullscreenImage, setFullscreenImage] = useState<string | null>(null);
|
const [fullscreenImage, setFullscreenImage] = useState<string | null>(null);
|
||||||
const [searchQuery, setSearchQuery] = useState('');
|
const [searchQuery, setSearchQuery] = useState('');
|
||||||
const [searchVisible, setSearchVisible] = useState(false);
|
const [searchVisible, setSearchVisible] = useState(false);
|
||||||
```diff
@@ -154,6 +156,11 @@ const ChatScreen: React.FC = () => {
   // Wake word: load once + prepare Porcupine (if an access key is set)
   useEffect(() => {
     wakeWordService.loadFromStorage().catch(() => {});
+    const unsub = wakeWordService.onStateChange((s) => {
+      setWakeWordState(s);
+      setWakeWordActive(s !== 'off');
+    });
+    return () => unsub();
   }, []);

   // keep ttsCanPlayRef live and current: the closure in onMessage below reads
```
```diff
@@ -263,15 +270,22 @@ const ChatScreen: React.FC = () => {
     if (message.type === 'chat') {
       const sender = (message.payload.sender as string) || '';

-      // STT result: write the transcribed text into the voice bubble
+      // STT result: write the transcribed text into the voice bubble.
+      // IMPORTANT: only match the FIRST still-unresolved recording; otherwise,
+      // with two audios sent shortly after each other, both bubbles would get
+      // the same text (bug: the second reply overwrites the first).
       if (sender === 'stt') {
         const sttText = (message.payload.text as string) || '';
         if (sttText) {
-          setMessages(prev => prev.map(m =>
-            m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')
-              ? { ...m, text: `\uD83C\uDFA4 ${sttText}` }
-              : m
-          ));
+          setMessages(prev => {
+            const idx = prev.findIndex(m =>
+              m.sender === 'user' && m.text.includes('Spracheingabe wird verarbeitet')
+            );
+            if (idx < 0) return prev;
+            const next = prev.slice();
+            next[idx] = { ...next[idx], text: `\uD83C\uDFA4 ${sttText}` };
+            return next;
+          });
         }
         return;
       }
```
||||||
@@ -572,6 +586,8 @@ const ChatScreen: React.FC = () => {
|
|||||||
};
|
};
|
||||||
setMessages(prev => capMessages([...prev, userMsg]));
|
setMessages(prev => capMessages([...prev, userMsg]));
|
||||||
|
|
||||||
|
console.log('[Chat] sende mit voice=%s speed=%s',
|
||||||
|
localXttsVoiceRef.current || '(default)', ttsSpeedRef.current);
|
||||||
// An RVS senden — mit geraetelokaler Voice (Bridge nutzt sie fuer die Antwort)
|
// An RVS senden — mit geraetelokaler Voice (Bridge nutzt sie fuer die Antwort)
|
||||||
rvs.send('chat', {
|
rvs.send('chat', {
|
||||||
text,
|
text,
|
||||||
```diff
@@ -1000,7 +1016,10 @@ const ChatScreen: React.FC = () => {
           style={[styles.wakeWordBtn, wakeWordActive && styles.wakeWordBtnActive]}
           onPress={toggleWakeWord}
         >
-          <Text style={styles.wakeWordIcon}>{wakeWordActive ? '👂' : '🔇'}</Text>
+          <Text style={styles.wakeWordIcon}>
+            {wakeWordState === 'conversing' ? '🎙️' :
+             wakeWordState === 'armed' ? '👂' : '🔇'}
+          </Text>
         </TouchableOpacity>
       </>
     )}
```
|||||||
@@ -191,6 +191,13 @@ class AudioService {
|
|||||||
private pcmBytesCollected: number = 0;
|
private pcmBytesCollected: number = 0;
|
||||||
private readonly PCM_MAX_CACHE_BYTES = 30 * 1024 * 1024; // 30MB
|
private readonly PCM_MAX_CACHE_BYTES = 30 * 1024 * 1024; // 30MB
|
||||||
|
|
||||||
|
// AudioFocus wird verzoegert freigegeben — wenn ARIA eine zweite Antwort
|
||||||
|
// direkt hinterherschickt (oder ein neuer Stream startet), bleibt Spotify
|
||||||
|
// pausiert. Ohne diese Verzoegerung springt Spotify im Mikro-Sekunden-Gap
|
||||||
|
// zwischen zwei Streams kurz wieder an.
|
||||||
|
private focusReleaseTimer: ReturnType<typeof setTimeout> | null = null;
|
||||||
|
private readonly FOCUS_RELEASE_DELAY_MS = 800;
|
||||||
|
|
||||||
// VAD State
|
// VAD State
|
||||||
private vadEnabled: boolean = false;
|
private vadEnabled: boolean = false;
|
||||||
private lastSpeechTime: number = 0;
|
private lastSpeechTime: number = 0;
|
||||||
@@ -205,6 +212,24 @@ class AudioService {
|
|||||||
this.recorder.setSubscriptionDuration(0.1); // 100ms Metering-Updates
|
this.recorder.setSubscriptionDuration(0.1); // 100ms Metering-Updates
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/** AudioFocus mit kleiner Verzoegerung freigeben — Spotify/YouTube
|
||||||
|
* springen sonst im Gap zwischen zwei TTS-Streams (oder wenn ARIA
|
||||||
|
* eine zweite Antwort direkt hinterherschickt) kurz wieder an. */
|
||||||
|
private _releaseFocusDeferred(): void {
|
||||||
|
this._cancelDeferredFocusRelease();
|
||||||
|
this.focusReleaseTimer = setTimeout(() => {
|
||||||
|
this.focusReleaseTimer = null;
|
||||||
|
AudioFocus?.release().catch(() => {});
|
||||||
|
}, this.FOCUS_RELEASE_DELAY_MS);
|
||||||
|
}
|
||||||
|
|
||||||
|
private _cancelDeferredFocusRelease(): void {
|
||||||
|
if (this.focusReleaseTimer) {
|
||||||
|
clearTimeout(this.focusReleaseTimer);
|
||||||
|
this.focusReleaseTimer = null;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
// --- Berechtigungen ---
|
// --- Berechtigungen ---
|
||||||
|
|
||||||
async requestMicrophonePermission(): Promise<boolean> {
|
async requestMicrophonePermission(): Promise<boolean> {
|
||||||
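The deferred-release pattern added to `AudioService` can be isolated into a small standalone helper. This is a sketch: the native `AudioFocus` call is replaced by an injected callback, and the 800ms default mirrors `FOCUS_RELEASE_DELAY_MS`:

```typescript
// Debounced release: scheduling again (or cancelling) before the delay
// elapses suppresses the pending release, so back-to-back streams never
// let the other app's audio pop up in between.
class DeferredRelease {
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private readonly release: () => void, // e.g. () => AudioFocus.release()
    private readonly delayMs = 800,
  ) {}

  schedule(): void {
    this.cancel(); // restart the countdown
    this.timer = setTimeout(() => {
      this.timer = null;
      this.release();
    }, this.delayMs);
  }

  cancel(): void {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
    }
  }
}
```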
```diff
@@ -305,6 +330,7 @@ class AudioService {
     this.setState('recording');

     // pause other apps while recording (music, videos, etc.)
+    this._cancelDeferredFocusRelease();
     AudioFocus?.requestExclusive().catch(() => {});

     // enable VAD: silence duration from AsyncStorage (configurable in Settings).
```
```diff
@@ -328,11 +354,12 @@ class AudioService {
     };
     if (autoStop) {
       const vadSilenceMs = await loadVadSilenceMs();
-      console.log('[Audio] VAD-Stille:', vadSilenceMs, 'ms');
+      console.log('[Audio] startRecording: autoStop=true, VAD-Stille=%dms, MAX=%dms',
+        vadSilenceMs, MAX_RECORDING_MS);
       this.vadTimer = setInterval(() => {
         const silenceDuration = Date.now() - this.lastSpeechTime;
         if (silenceDuration >= vadSilenceMs) {
-          fireSilenceOnce(`VAD ${silenceDuration}ms Stille`);
+          fireSilenceOnce(`VAD ${silenceDuration}ms Stille (Schwelle=${vadSilenceMs}ms)`);
         }
       }, 200);
       // emergency brake: force-stop after MAX_RECORDING_MS
```
```diff
@@ -386,8 +413,9 @@ class AudioService {
     await this.recorder.stopRecorder();
     this.recorder.removeRecordBackListener();

-    // release audio focus: other apps may resume
-    AudioFocus?.release().catch(() => {});
+    // release audio focus with a delay: the TTS reply is coming right up,
+    // and Spotify should not come up in the gap.
+    this._releaseFocusDeferred();

     const durationMs = Date.now() - this.recordingStartTime;
     const hadSpeech = this.speechDetected;
```
```diff
@@ -459,7 +487,13 @@ class AudioService {

   /** Receive a PCM chunk from an audio_pcm message.
    * silent=true → cache only, don't play (e.g. when TTS is muted device-locally).
-   * On final=true, returns the cache path (file://) or '' if not cached. */
+   * On final=true, returns the cache path (file://) or '' if not cached.
+   *
+   * The wrapper serializes consecutive chunk calls via a promise queue;
+   * otherwise short streams had a race: the final chunk could call `end()`
+   * BEFORE the previous `start()` in the native module was finished. The
+   * writer thread then saw endRequested=true without ever processing chunks. */
+  private _pcmChunkQueue: Promise<any> = Promise.resolve();
   async handlePcmChunk(payload: {
     base64: string;
     sampleRate?: number;
```
```diff
@@ -468,6 +502,24 @@ class AudioService {
     chunk?: number;
     final?: boolean;
     silent?: boolean;
+  }): Promise<string> {
+    const p = this._pcmChunkQueue.then(() => this._handlePcmChunkImpl(payload)).catch(err => {
+      console.warn('[Audio] handlePcmChunk queued err:', err);
+      return '';
+    });
+    // Chain only on the side effect: callers still get the per-call result
+    this._pcmChunkQueue = p;
+    return p;
+  }
+
+  private async _handlePcmChunkImpl(payload: {
+    base64: string;
+    sampleRate?: number;
+    channels?: number;
+    messageId?: string;
+    chunk?: number;
+    final?: boolean;
+    silent?: boolean;
   }): Promise<string> {
     const silent = !!payload.silent;
     if (!silent && !PcmStreamPlayer) {
```
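The promise-queue trick in `handlePcmChunk` generalizes to any async function whose calls must not overlap: each call is chained onto the previous one, yet every caller still receives its own result. A generic sketch with illustrative names:

```typescript
// Serialize calls to an async function: call N+1 only starts after call N
// settles, and each caller gets the promise for its own invocation.
function makeSerialized<A, R>(impl: (arg: A) => Promise<R>, fallback: R) {
  let queue: Promise<unknown> = Promise.resolve();
  return (arg: A): Promise<R> => {
    const p = queue.then(() => impl(arg)).catch(() => fallback);
    queue = p; // chain future calls behind this one
    return p;
  };
}
```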
```diff
@@ -510,6 +562,7 @@ class AudioService {
         this.pcmStreamActive = false;
         return '';
       }
+      this._cancelDeferredFocusRelease();
       AudioFocus?.requestDuck().catch(() => {});
     }
   }
```
||||||
@@ -528,11 +581,12 @@ class AudioService {
|
|||||||
if (isFinal) {
|
if (isFinal) {
|
||||||
if (!silent) {
|
if (!silent) {
|
||||||
// end() resolved jetzt erst wenn der native Writer-Thread fertig
|
// end() resolved jetzt erst wenn der native Writer-Thread fertig
|
||||||
// ist (alle Samples ausgespielt) — danach erst AudioFocus freigeben,
|
// ist (alle Samples ausgespielt) — danach AudioFocus verzoegert
|
||||||
// damit Spotify/YouTube nicht waehrend des Pre-Roll-Ausklangs
|
// freigeben, damit Spotify/YouTube nicht im Mikro-Gap zwischen zwei
|
||||||
// wieder aufdrehen.
|
// ARIA-Antworten wieder hochdrehen. Wenn ein neuer Stream innerhalb
|
||||||
|
// FOCUS_RELEASE_DELAY_MS startet, wird das Release abgebrochen.
|
||||||
try { await PcmStreamPlayer!.end(); } catch {}
|
try { await PcmStreamPlayer!.end(); } catch {}
|
||||||
AudioFocus?.release().catch(() => {});
|
this._releaseFocusDeferred();
|
||||||
}
|
}
|
||||||
this.pcmStreamActive = false;
|
this.pcmStreamActive = false;
|
||||||
|
|
||||||
```diff
@@ -636,8 +690,9 @@ class AudioService {
   private async _playNext(): Promise<void> {
     if (this.audioQueue.length === 0) {
       this.isPlaying = false;
-      // give up audio focus → other apps back to full volume
-      AudioFocus?.release().catch(() => {});
+      // give up audio focus with a delay → if another reply comes right up,
+      // Spotify stays paused.
+      this._releaseFocusDeferred();
       // all audio parts played → notify listeners
       this.playbackFinishedListeners.forEach(cb => cb());
       return;
```
```diff
@@ -645,6 +700,7 @@ class AudioService {

     // on first playback start: duck other apps
     if (!this.isPlaying) {
+      this._cancelDeferredFocusRelease();
       AudioFocus?.requestDuck().catch(() => {});
     }
     this.isPlaying = true;
```
```diff
@@ -730,7 +786,8 @@ class AudioService {
       this.pcmBytesCollected = 0;
       this.pcmMessageId = '';
     }
-    // release audio focus
+    // release audio focus immediately: the user cancelled explicitly
+    this._cancelDeferredFocusRelease();
     AudioFocus?.release().catch(() => {});
   }
```
|
|||||||
@@ -29,6 +29,11 @@ class UpdateService {
|
|||||||
private downloading = false;
|
private downloading = false;
|
||||||
|
|
||||||
constructor() {
|
constructor() {
|
||||||
|
// Beim Start alte APK-Reste aus dem Cache wegraeumen — wenn diese App
|
||||||
|
// laeuft, sind frueher heruntergeladene APKs entweder schon installiert
|
||||||
|
// oder unvollstaendig gewesen. Spart sonst pro Update 20-30MB auf dem Handy.
|
||||||
|
this.cleanupOldApks().catch(() => {});
|
||||||
|
|
||||||
// Auf update_available Nachrichten lauschen
|
// Auf update_available Nachrichten lauschen
|
||||||
rvs.onMessage((msg: RVSMessage) => {
|
rvs.onMessage((msg: RVSMessage) => {
|
||||||
if (msg.type === 'update_available' as any) {
|
if (msg.type === 'update_available' as any) {
|
||||||
```diff
@@ -45,6 +50,30 @@ class UpdateService {
     });
   }

+  /** Cleans old downloaded APK files out of the cache. */
+  private async cleanupOldApks(): Promise<void> {
+    try {
+      const files = await RNFS.readDir(RNFS.CachesDirectoryPath);
+      const apks = files.filter(f => /\.apk$/i.test(f.name));
+      let freed = 0;
+      for (const f of apks) {
+        try {
+          const size = parseInt(f.size as any, 10) || 0;
+          await RNFS.unlink(f.path);
+          freed += size;
+          console.log(`[Update] Alte APK geloescht: ${f.name} (${(size / 1024 / 1024).toFixed(1)}MB)`);
+        } catch (err: any) {
+          console.warn(`[Update] APK-Loeschen fehlgeschlagen: ${f.name} (${err?.message || err})`);
+        }
+      }
+      if (apks.length > 0) {
+        console.log(`[Update] Cleanup fertig: ${apks.length} APKs entfernt, ${(freed / 1024 / 1024).toFixed(1)}MB freigegeben`);
+      }
+    } catch (err: any) {
+      console.warn(`[Update] Cleanup-Fehler: ${err?.message || err}`);
+    }
+  }
+
   /** Check for an update at app start */
   checkForUpdate(): void {
     if (this.checking) return;
```
||||||
@@ -111,6 +140,10 @@ class UpdateService {
|
|||||||
});
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
|
// Vor dem Schreiben alte APKs im Cache wegraeumen — falls mehrere
|
||||||
|
// Updates in einer Session gezogen werden
|
||||||
|
await this.cleanupOldApks();
|
||||||
|
|
||||||
// Base64 als APK-Datei speichern
|
// Base64 als APK-Datei speichern
|
||||||
const destPath = `${RNFS.CachesDirectoryPath}/${apkData.fileName}`;
|
const destPath = `${RNFS.CachesDirectoryPath}/${apkData.fileName}`;
|
||||||
await RNFS.writeFile(destPath, apkData.base64, 'base64');
|
await RNFS.writeFile(destPath, apkData.base64, 'base64');
|
||||||
|
|||||||
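The cleanup added above boils down to "select `*.apk` entries from the cache directory, delete each one, and sum the freed bytes". A minimal stand-alone sketch of just the selection/summing step — `CacheEntry` is a hypothetical shape mirroring what `RNFS.readDir` entries look like, not the library's actual type:

```typescript
// Hypothetical minimal shape of a cache directory entry (illustrative only).
interface CacheEntry {
  name: string;
  size: number; // bytes
  path: string;
}

// Pick out APK files (case-insensitive suffix match, same regex as the diff)
// and total their size, so the caller can log how much space a cleanup frees.
function selectApks(entries: CacheEntry[]): { apks: CacheEntry[]; totalBytes: number } {
  const apks = entries.filter(e => /\.apk$/i.test(e.name));
  const totalBytes = apks.reduce((sum, e) => sum + (e.size || 0), 0);
  return { apks, totalBytes };
}
```

In the real service the loop additionally `unlink`s each match and tolerates per-file failures, so one locked file does not abort the whole cleanup.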
@@ -17,6 +17,7 @@
  */

 import AsyncStorage from '@react-native-async-storage/async-storage';
+import { ToastAndroid } from 'react-native';

 type WakeWordCallback = () => void;
 type StateCallback = (state: WakeWordState) => void;
@@ -80,10 +81,20 @@ class WakeWordService {

     // Laufende Instanz stoppen
     await this.disposePorcupine();
-    if (!this.accessKey) return false;
+    if (!this.accessKey) {
+      console.warn('[WakeWord] configure: kein Access Key gesetzt');
+      return false;
+    }

     // Neu initialisieren
-    return this.initPorcupine();
+    const ok = await this.initPorcupine();
+    if (!ok) {
+      ToastAndroid.show(
+        `Wake-Word "${this.keyword}" konnte nicht initialisiert werden — Logs pruefen`,
+        ToastAndroid.LONG,
+      );
+    }
+    return ok;
   }

   private async initPorcupine(): Promise<boolean> {
@@ -117,10 +128,14 @@ class WakeWordService {
           this.disposePorcupine().catch(() => {});
         },
       );
-      console.log('[WakeWord] Porcupine init OK (keyword=%s)', this.keyword);
+      console.log('[WakeWord] Porcupine init OK (keyword=%s, manager=%s)',
+        this.keyword, this.porcupine ? 'created' : 'NULL');
       return true;
-    } catch (err) {
-      console.warn('[WakeWord] Porcupine init fehlgeschlagen:', err);
+    } catch (err: any) {
+      console.warn('[WakeWord] Porcupine init fehlgeschlagen:', err?.message || err);
+      console.warn('[WakeWord] err details:', JSON.stringify({
+        name: err?.name, code: err?.code, stack: err?.stack?.slice(0, 200),
+      }));
       this.porcupine = null;
       return false;
     } finally {
@@ -146,14 +161,27 @@ class WakeWordService {
       try {
         await this.porcupine.start();
         console.log('[WakeWord] armed — warte auf Wake Word "%s"', this.keyword);
+        ToastAndroid.show(`Lausche auf "${this.keyword}"`, ToastAndroid.SHORT);
         this.setState('armed');
         return true;
-      } catch (err) {
-        console.warn('[WakeWord] Porcupine start fehlgeschlagen — Fallback Direkt-Konversation:', err);
+      } catch (err: any) {
+        console.warn('[WakeWord] Porcupine start fehlgeschlagen — Fallback Direkt-Konversation:',
+          err?.message || err);
+        ToastAndroid.show(
+          `Wake-Word-Start failed: ${err?.message || err}`,
+          ToastAndroid.LONG,
+        );
       }
+    } else {
+      // Kein Porcupine init → User explizit informieren
+      console.warn('[WakeWord] Porcupine nicht initialisiert — Access Key fehlt? Fallback Direkt-Aufnahme');
+      ToastAndroid.show(
+        'Wake-Word nicht aktiv — direkte Aufnahme startet (Mikro hoert mit)',
+        ToastAndroid.LONG,
+      );
     }
-    // Fallback: direkt in die Konversation
-    console.log('[WakeWord] Konversation startet sofort (kein Wake-Word)');
+    // Fallback: direkt in die Konversation (Mikro AKTIV, nicht passiv)
+    console.log('[WakeWord] Direkt-Aufnahme startet (kein Wake-Word)');
     this.setState('conversing');
     setTimeout(() => {
       if (this.state === 'conversing') {
@@ -175,6 +203,7 @@ class WakeWordService {
   /** Wake-Word getriggert: Porcupine pausieren, Konversation starten. */
   private async onWakeDetected(): Promise<void> {
     console.log('[WakeWord] Wake-Word "%s" erkannt!', this.keyword);
+    ToastAndroid.show(`Wake-Word "${this.keyword}" erkannt — sprich jetzt`, ToastAndroid.SHORT);
     if (this.porcupine) {
       try { await this.porcupine.stop(); } catch {}
     }
@@ -197,6 +226,7 @@ class WakeWordService {
       try {
         await this.porcupine.start();
         console.log('[WakeWord] Konversation zu Ende — zurueck zu armed');
+        ToastAndroid.show(`Lausche wieder auf "${this.keyword}"`, ToastAndroid.SHORT);
         this.setState('armed');
         return;
       } catch (err) {
@@ -204,6 +234,7 @@ class WakeWordService {
       }
     }
     console.log('[WakeWord] Konversation zu Ende — Ohr aus');
+    ToastAndroid.show('Mikro aus', ToastAndroid.SHORT);
     this.setState('off');
   }
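The hunks above attach a toast to every transition of the wake-word state machine (off → armed → conversing and back). As a reading aid, here is a hypothetical condensed model of those transitions — the event names are illustrative labels for the code paths in the diff, not the service's actual API:

```typescript
type WakeWordState = 'off' | 'armed' | 'conversing';

// Illustrative event names, one per code path in the diff above:
type WakeWordEvent =
  | 'ear_on_with_porcupine'    // configure + start OK → passively listening
  | 'ear_on_fallback'          // no access key / init failed → direct recording
  | 'wake_detected'            // Porcupine heard the keyword
  | 'conversation_end_rearm'   // Porcupine restart OK → listen again
  | 'conversation_end_off';    // no Porcupine → ear off

// Pure transition function: invalid events leave the state unchanged.
function nextState(state: WakeWordState, ev: WakeWordEvent): WakeWordState {
  switch (ev) {
    case 'ear_on_with_porcupine':
      return state === 'off' ? 'armed' : state;
    case 'ear_on_fallback':
      return state === 'off' ? 'conversing' : state;
    case 'wake_detected':
      return state === 'armed' ? 'conversing' : state;
    case 'conversation_end_rearm':
      return state === 'conversing' ? 'armed' : state;
    case 'conversation_end_off':
      return state === 'conversing' ? 'off' : state;
  }
  return state;
}
```

Modelling it this way makes the fallback behaviour explicit: without a working Porcupine instance the ear button jumps straight to `conversing` (microphone actively recording) instead of `armed` (passive keyword listening) — which is exactly why the diff adds a toast for that case.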
@@ -942,7 +942,8 @@ class ARIABridge:
                 },
                 "timestamp": int(asyncio.get_event_loop().time() * 1000),
             })
-            logger.info("[core] XTTS-Request gesendet (%s): '%s'", xtts_voice or "default", tts_text[:60])
+            logger.info("[core] XTTS-Request gesendet (voice=%s, speed=%.2fx): '%s'",
+                        xtts_voice or "default", xtts_speed, tts_text[:60])
         except Exception as e:
             logger.error("[core] XTTS-Request fehlgeschlagen: %s — kein Audio", e)

@@ -239,6 +239,8 @@ class F5Runner:

     def _infer_blocking(self, gen_text: str, ref_wav: str, ref_text: str,
                         speed: float = 1.0) -> tuple[np.ndarray, int]:
+        logger.info("infer() text=%d chars, speed=%.2f, cfg=%.2f, nfe=%d",
+                    len(gen_text), speed, self.cfg_strength, self.nfe_step)
         wav, sr, _ = self.model.infer(
             ref_file=ref_wav,
             ref_text=ref_text,
@@ -507,7 +509,8 @@ async def _do_tts(ws, runner: F5Runner, text: str, voice: str,
     ref_wav_str, ref_text = str(pair[0]), pair[1].read_text(encoding="utf-8").strip()

     sentences = split_sentences(text)
-    logger.info("F5-TTS: %d Satz(e), voice=%s (%s)", len(sentences), voice or "default", ref_wav_str)
+    logger.info("F5-TTS: %d Satz(e), voice=%s, speed=%.2fx (%s)",
+                len(sentences), voice or "default", speed, ref_wav_str)

     chunk_index = 0
     pcm_sr = TARGET_SR