Compare commits
47 Commits
| SHA1 |
|---|
| 31b0bfaac1 |
| 1d3c45fdda |
| 84a59d7b4f |
| 8ad3e39453 |
| afa96b1d44 |
| 0407c5bc3c |
| 2d348aeec7 |
| 7e53dcfed3 |
| 33d5be781f |
| 785f5d0805 |
| fac87474ec |
| 8227266aea |
| 5d24e01d4b |
| 4fe72cc4a8 |
| eeeb1d43f5 |
| 0044e222db |
| 048d231b60 |
| 2bac9c26ca |
| c758727345 |
| cb0e879118 |
| ce6f5b551e |
| b6a68b7658 |
| 03edee8881 |
| 7093ebaf0b |
| b4923bc221 |
| 7a66752655 |
| b510ccd93a |
| bbd51406a9 |
| 2cd436f6e9 |
| 22adc91c1e |
| 61cf8e3bcc |
| 3e38f1dad3 |
| 635944299e |
| b2ac013765 |
| 93db6a3156 |
| 579a466402 |
| 5133f0bc2d |
| a476a4b734 |
| 11b205ddaf |
| 71c60ade8a |
| bf3dc635d9 |
| 8ca899aaf5 |
| 15facf48eb |
| 71fc90fcb8 |
| 856701fb6f |
| 6037b62612 |
| 8f88cb0030 |
@@ -219,11 +219,15 @@ The proxy container (`node:22-alpine`) installs on every start:

After that the proxy is patched:

1. **Host binding** (sed): the server listens on `0.0.0.0` instead of localhost
2. **Tool permissions** (sed): inject the `--dangerously-skip-permissions` flag
3. **CLI timeout** (sed): `DEFAULT_TIMEOUT 300000 → 1200000` (5 → 20 min) in the subprocess manager. Multi-tool workflows with real Bash + curl + DB inserts often need 8–15 min; 5 min was chronically too short
4. **Tool-use adapter** (file overwrite from [`proxy-patches/`](proxy-patches/)):
   - `openai-to-cli.js` injects the OpenAI `tools` field as a `<system>` block with schema descriptions plus the instruction to answer in the format `<tool_call name="X">{json}</tool_call>`. `role=tool` messages are woven in as `<tool_result>` blocks. Multimodal content (an array of parts) stays string-compatible.
   - `cli-to-openai.js` parses `<tool_call>` blocks out of Claude's answer and returns them as real OpenAI `tool_calls` with `finish_reason="tool_calls"`. Pre-tool text stays in `content`. Multiple parallel calls are split correctly. The model name is null-safe.
   - `routes.js` hooks the subprocess's `assistant` events and fires one HTTP POST to the bridge (`/internal/agent-activity`) per `tool_use` block (Bash, Read, Edit, Grep, …). The bridge mirrors this as an RVS `agent_activity` event to app + diagnostic, so the thought stream shows live what ARIA is currently doing. Fire-and-forget and fail-open: the brain call does not abort if the bridge happens to be unavailable.
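The `<tool_call>` extraction that `cli-to-openai.js` performs can be sketched roughly like this (a minimal TypeScript sketch, not the actual adapter code; `splitToolCalls` and the `ToolCall` shape are invented for illustration):

```typescript
// Sketch of the <tool_call> extraction described above (hypothetical names,
// not the real cli-to-openai.js). Splits Claude's raw text into pre-tool
// content plus OpenAI-style tool_calls.
interface ToolCall {
  id: string;
  type: 'function';
  function: { name: string; arguments: string };
}

function splitToolCalls(raw: string): { content: string; toolCalls: ToolCall[] } {
  const re = /<tool_call name="([^"]+)">([\s\S]*?)<\/tool_call>/g;
  const toolCalls: ToolCall[] = [];
  let content = '';
  let last = 0;
  let i = 0;
  let m: RegExpExecArray | null;
  while ((m = re.exec(raw)) !== null) {
    content += raw.slice(last, m.index); // keep pre-tool text
    toolCalls.push({
      id: `call_${i++}`,
      type: 'function',
      function: { name: m[1], arguments: m[2].trim() },
    });
    last = re.lastIndex;
  }
  content += raw.slice(last);
  return { content: content.trim(), toolCalls };
}
```

Whenever `toolCalls` is non-empty, the response would be emitted with `finish_reason="tool_calls"`; otherwise the plain `content` passes through unchanged.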
**Why?** The npm version of the proxy ignores the `tools` field entirely and only forwards a prompt string to the CLI. Claude Code then uses its internal tools (Bash, Read, …) and "simulates" actions, e.g. `sleep 120` instead of `trigger_timer`. With the custom adapters, ARIA tools are back on the wire and side effects (creating triggers, calling skills, toggling GPS tracking) work. The tool hook in `routes.js` additionally makes Claude Code's internal tool activity visible to the user.

**Brain ↔ Bridge is async**: `_handle_rvs_message` calls `send_to_core` as an `asyncio.create_task` instead of `await`; otherwise the WS recv loop blocked for up to 20 min and the RVS server (mobil.hacker-net.de) dropped the bridge after its ~4 min idle timeout. The brain now runs in a background task, and the RVS connection stays active while ARIA works.

**Important environment variables in the proxy:**

- `HOST=0.0.0.0`: API reachable from outside (Docker network)
@@ -316,7 +320,7 @@ Reachable at `http://<VM-IP>:3001`. Shares the network with the bridge.

### Tabs

- **Main**: brain/RVS/proxy status, chat test, "ARIA is thinking..." indicator, **💭 thought stream** (central modal that shows all tool calls and phases live, with timestamps and separator lines after long pauses), end-to-end trace, container logs
- **Brain**: memory browser (vector DB), search with two modes (**📝 Literal** = substring match, the default, and **🧠 Semantic** with a score threshold), **advanced search** (collapsible panel, any number of AND/OR-linked fields, a + button for more rows), type and pinned filters (also applied within search), collapsible type categories (collapsed by default), add/edit/delete with category autosuggest, **📎 attachments** per memory (images/PDFs/...): upload, thumbnail preview, lightbox, delete button, a 📎N badge in the list, and automatic cleanup on memory delete. An ℹ info modal explains which types are baked into the prompt vs. which go to cold memory. **📄 Print view** (Ctrl+P → PDF). Conversation status with distillate trigger, **token/call metrics with subscription quota tracking**, bootstrap & migration (3 recovery paths), brain export/import (tar.gz)
- **Skills**: list of all skills with logs per run, activate/deactivate, export/import as tar.gz, a "by ARIA" badge for self-built skills
- **Triggers**: passive wake-up sources. **Timers** (one-shot, via an ISO timestamp or via `in_seconds`, computed server-side) and **watchers** (recurring, with condition and throttle). List of active triggers plus logs per fire event. Modal with a type dropdown and a live list of all available condition variables (`disk_free_gb`, `hour_of_day`, `current_lat/lon`, `last_user_message_ago_sec`, …). **Three GPS functions** with distinct semantics:
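The GPS predicates are easiest to picture via the distance check they share. A minimal TypeScript sketch (assumed haversine implementation; the real condition evaluation happens server-side in the brain):

```typescript
// Haversine great-circle distance in meters (standard formula).
function haversineMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6371000; // mean earth radius in meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// near(lat, lon, m): true while the current position is within m meters.
function near(curLat: number, curLon: number, lat: number, lon: number, m: number): boolean {
  return haversineMeters(curLat, curLon, lat, lon) <= m;
}
```

Presumably `entered_near()` / `left_near()` additionally compare the current result against the previous fix and fire only on the transition (edge-triggered rather than level-triggered); that detail is an assumption here.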
@@ -362,8 +366,11 @@ Reachable at `http://<VM-IP>:3001`. Shares the network with the bridge.

- **Per-device voice choice**: each device can use its own voice (in Settings). A switch made in Diagnostic overrides all app choices.
- **Voice-ready toast**: on a voice switch the app shows "Stimme X bereit (X.Ys)" as soon as preloading is done
- **Play button**: every ARIA message can be read aloud again (from cache if available, otherwise re-rendered)
- **Chat search**: magnifier in the status bar; highlight plus next/prev jumps to the match (the bubble lands with the start of its text at the top of the viewport). Order is **newest first** (as in WhatsApp), so "next" moves into the past. Item heights are cached via `onLayout` for precise pre-scrolling even in long lists
- **Jump-to-bottom button**: appears at the bottom right as soon as you scroll away from the newest message; one tap takes you back
- **Delivery status per user bubble** (WhatsApp style): `⏱` (queued, waiting for a connection) → `⏳` (sending) → `✓` (bridge sent an ACK) → `✓✓` (ARIA has processed it). On network loss, messages are kept locally as queued and flushed automatically on reconnect. After three ACK timeouts: `⚠ tippen f. Retry`. Idempotency on the bridge (an LRU over `clientMsgId`) prevents duplicates on retry
- **Trash can per bubble** (with confirm): delete one specific message; it is removed not just from the UI but also from `chat_backup.jsonl`, the brain's conversation window, and all other clients (RVS broadcast). Important so that ARIA no longer has that turn in context on the next prompt
- **💭 Thought stream**: a chronological log of what ARIA does internally, fed from `agent_activity` events (thinking / 🔧 tool name / writing / ✓ done). Live updates while the brain works: each tool call (Bash, Read, Edit, Grep, …) immediately produces an entry, relayed from claude-max-api-proxy via the `proxy-patches/routes.js` hook. Long pauses between thinking phases show up as a separator line with a minutes hint. App: an icon in the status bar opens a bottom sheet, persisted in AsyncStorage (capped at 500 entries). Diagnostic: the identical feature as a central modal in the chat-test header
- **🗂️ Notes inbox + memory editor**: next to the magnifier, `🗂️` opens a full-screen modal with all memory/trigger/skill special bubbles from the chat plus the full DB browser. Tapping a memory opens a **detail/edit modal**: edit fields, upload/download and delete attachments, delete the memory entirely. The identical editor also lives in Settings → 🧠 Gedaechtnis. Special bubbles are filtered out of the chat stream (no more note bubbles hanging at the bottom forever)
- **Dynamic bubble header**: "ARIA hat etwas gemerkt" / "Notiz geaendert" (yellow) / "Notiz geloescht" (red), depending on the action in the memory_saved event
- **App crash reporting**: uncaught JS errors and React render errors automatically land in `/shared/logs/app.log` via RVS; no ADB needed, fetch logs via `tools/fetch-app-logs.sh` or Diagnostic GET `/api/app-log`. An ErrorBoundary prevents the white screen and instead shows an error box in a modal with stack trace and a close button
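The delivery-status ladder described above (`⏱ → ⏳ → ✓ → ✓✓`, with a retry state after three ACK timeouts) can be sketched as a small state machine (hypothetical types and event names; the real app drives this from RVS events):

```typescript
// Hypothetical sketch of the per-bubble delivery status transitions.
// 'queued' = ⏱, 'sending' = ⏳, 'acked' = ✓, 'processed' = ✓✓, 'failed' = ⚠.
type DeliveryStatus = 'queued' | 'sending' | 'acked' | 'processed' | 'failed';
type DeliveryEvent = 'connected' | 'ack' | 'processed' | 'ack_timeout';

const MAX_ACK_TIMEOUTS = 3; // after the third timeout the user must tap to retry

function nextStatus(
  current: DeliveryStatus,
  event: DeliveryEvent,
  timeoutsSoFar: number,
): DeliveryStatus {
  switch (event) {
    case 'connected': // reconnect flushes the local queue
      return current === 'queued' ? 'sending' : current;
    case 'ack': // bridge confirmed receipt
      return current === 'sending' ? 'acked' : current;
    case 'processed': // ARIA handled the message
      return 'processed';
    case 'ack_timeout': // retry until the limit is reached
      return timeoutsSoFar + 1 >= MAX_ACK_TIMEOUTS ? 'failed' : 'sending';
  }
}
```

Idempotency on the bridge side (the LRU over `clientMsgId`) is what makes the automatic resends safe.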
@@ -373,7 +380,7 @@ Reachable at `http://<VM-IP>:3001`. Shares the network with the bridge.

- **Settings**: TTS enabled, F5-TTS voice, pre-roll buffer, silence tolerance, storage location, auto-download, GPS, verbose logging
- **Auto-update**: checks for a new version on start and via a button; download and installation over RVS (FileProvider)
- GPS position (optional, with a runtime permission request): included in every chat/audio payload and can be shown as a debug block in Diagnostic
- **GPS tracking (continuous)**: toggle in Settings → Standort. When active, the app pushes a `location_update` to the bridge after 30 m of movement; this is the precondition for watchers with `near(lat, lon, m)` (e.g. speed-trap warnings, arrival reminders) to fire at all. **Heartbeat every 60 s**: even without movement, the last known position is re-sent to the bridge so the brain state does not go stale after 5 min (NEAR_MAX_AGE_SEC); no extra GPS wakeup, battery-friendly. ARIA itself can toggle tracking via the `request_location_tracking` tool and does so automatically when it creates a GPS watcher
- QR code scanner for token pairing
- **Receiving files from ARIA**: when ARIA creates a PDF/image/markdown/ZIP for you (marker `[FILE: /shared/uploads/aria_*]` in the answer), it appears as its own attachment bubble. Tapping it loads the file via RVS and opens it with the Android intent picker (PDF viewer, image viewer, default app). Inline images from markdown `` syntax are rendered directly below the text (PNG/JPG via Image, SVG via react-native-svg)
- **Fullscreen with pinch zoom**: images in the fullscreen modal are pinch-zoomable (1x..5x), with one-finger pan when zoomed and a double tap that toggles 1x↔2.5x, all without an external lib
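The movement-gate-plus-heartbeat logic from the GPS-tracking bullet can be sketched like this (hypothetical helper using the 30 m and 60 s values stated above; the distance would come from a haversine over the last two fixes):

```typescript
// Sketch of the "send a location_update now?" decision (invented names;
// the real app wires this into the platform location callbacks).
interface Fix { lat: number; lon: number; ts: number } // ts in ms since epoch

const MOVE_THRESHOLD_M = 30;   // movement gate stated in the README
const HEARTBEAT_MS = 60_000;   // re-send even without movement

function shouldSendUpdate(last: Fix | null, cur: Fix, distanceM: number): boolean {
  if (!last) return true;                         // first fix: always send
  if (distanceM >= MOVE_THRESHOLD_M) return true; // moved far enough
  return cur.ts - last.ts >= HEARTBEAT_MS;        // heartbeat keeps brain state fresh
}
```

The heartbeat path re-uses the last known position, so no extra GPS wakeup is needed; only the timestamp check runs.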
@@ -6,7 +6,8 @@

```tsx
 */

import React, { useEffect } from 'react';
import { PermissionsAndroid, Platform, StatusBar, StyleSheet } from 'react-native';
import AsyncStorage from '@react-native-async-storage/async-storage';
import { NavigationContainer, DefaultTheme } from '@react-navigation/native';
import { createBottomTabNavigator } from '@react-navigation/bottom-tabs';
```

@@ -14,6 +15,7 @@ import ChatScreen from './src/screens/ChatScreen';

```tsx
import SettingsScreen from './src/screens/SettingsScreen';
import rvs from './src/services/rvs';
import { initLogger, installGlobalCrashReporter } from './src/services/logger';
import { acquireBackgroundAudio } from './src/services/backgroundAudio';

// --- Navigation ---
```

@@ -61,6 +63,42 @@ const App: React.FC = () => {

```tsx
    };
    initConnection();

    // Background mode: start a foreground service so the JS engine and the
    // WebSocket survive while the app is in the background.
    // Without it, trigger replies, reconnects and timer reminders never get
    // through, because Android pauses the JS engine after ~30 s.
    //
    // On by default; can be disabled in Settings → Hintergrund-Modus.
    // Needs the POST_NOTIFICATIONS permission from Android 13 on.
    const initBackground = async () => {
      const setting = await AsyncStorage.getItem('aria_background_mode');
      if (setting === 'false') {
        console.log('[App] Hintergrund-Modus deaktiviert (Settings)');
        return;
      }
      // Permission for the persistent notification
      if (Platform.OS === 'android' && Platform.Version >= 33) {
        try {
          await PermissionsAndroid.request(
            'android.permission.POST_NOTIFICATIONS' as any,
            {
              title: 'Hintergrund-Modus',
              message: 'ARIA zeigt eine Notification damit Trigger und Reconnects auch laufen wenn die App im Hintergrund ist.',
              buttonPositive: 'Erlauben',
              buttonNegative: 'Spaeter',
            },
          );
        } catch {}
      }
      try {
        await acquireBackgroundAudio('background');
        console.log('[App] Hintergrund-Modus aktiv');
      } catch (err: any) {
        console.warn('[App] Hintergrund-Modus konnte nicht starten:', err?.message || err);
      }
    };
    initBackground();

    // On teardown: disconnect cleanly
    return () => {
      rvs.disconnect();
```
@@ -79,8 +79,8 @@ android {

```groovy
        applicationId "com.ariacockpit"
        minSdkVersion rootProject.ext.minSdkVersion
        targetSdkVersion rootProject.ext.targetSdkVersion
        versionCode 10601
        versionName "0.1.6.1"
        // Fallback for libraries with product flavors
        missingDimensionStrategy 'react-native-camera', 'general'
    }
```
@@ -15,6 +15,7 @@ import com.facebook.react.bridge.ReactApplicationContext

```kotlin
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod
import com.facebook.react.modules.core.DeviceEventManagerModule
import java.util.concurrent.Executors

/**
 * Listens for call-state changes: when the phone rings or a
```

@@ -35,6 +36,11 @@ class PhoneCallModule(reactContext: ReactApplicationContext) : ReactContextBaseJ

```kotlin
    private var legacyListener: PhoneStateListener? = null
    private var modernCallback: Any? = null // TelephonyCallback from API 31 on
    private var lastState: Int = TelephonyManager.CALL_STATE_IDLE
    // Own single-thread executor instead of mainExecutor: the main executor is
    // delayed or not drained at all while the activity is paused, whereas our
    // own thread runs independently for as long as the app process lives
    // (which it does, the foreground service guarantees that).
    private val callbackExecutor = Executors.newSingleThreadExecutor()

    @ReactMethod
    fun start(promise: Promise) {
```

@@ -59,7 +65,7 @@ class PhoneCallModule(reactContext: ReactApplicationContext) : ReactContextBaseJ

```kotlin
                    handleStateChange(state)
                }
            }
            tm.registerTelephonyCallback(callbackExecutor, cb)
            modernCallback = cb
        } else {
            @Suppress("DEPRECATION")
```
@@ -1,6 +1,6 @@

```json
{
  "name": "aria-cockpit",
  "version": "0.1.6.1",
  "private": true,
  "scripts": {
    "android": "react-native run-android",
```
@@ -0,0 +1,583 @@

```tsx
/**
 * Trigger browser: list of all triggers (timer + watcher) with a toggle,
 * tap-to-edit, and a "+ Neu" button.
 *
 * Used by SettingsScreen → section "Trigger".
 *
 * Brain API via brainApi (RVS brain proxy).
 */

import React, { useCallback, useEffect, useState } from 'react';
import {
  ActivityIndicator,
  Alert,
  FlatList,
  Modal,
  ScrollView,
  StyleSheet,
  Switch,
  Text,
  TextInput,
  TouchableOpacity,
  View,
} from 'react-native';

import brainApi, { Trigger } from '../services/brainApi';

const COL_ACTIVE = '#34C759';
const COL_INACTIVE = '#555570';
const COL_TIMER = '#0096FF';
const COL_WATCHER = '#FFD60A';

function relTime(iso: string | null | undefined): string {
  if (!iso) return '—';
  const t = new Date(iso).getTime();
  if (!t) return '—';
  const diffSec = Math.floor((Date.now() - t) / 1000);
  if (diffSec < 60) return `vor ${diffSec}s`;
  if (diffSec < 3600) return `vor ${Math.floor(diffSec / 60)}min`;
  if (diffSec < 86400) return `vor ${Math.floor(diffSec / 3600)}h`;
  return `vor ${Math.floor(diffSec / 86400)}d`;
}

export const TriggerBrowser: React.FC = () => {
  const [items, setItems] = useState<Trigger[]>([]);
  const [loading, setLoading] = useState(false);
  const [err, setErr] = useState<string | null>(null);
  const [filter, setFilter] = useState<'all' | 'active' | 'inactive'>('all');
  const [editTrigger, setEditTrigger] = useState<Trigger | null>(null);
  const [showNew, setShowNew] = useState(false);

  const load = useCallback(() => {
    setLoading(true); setErr(null);
    brainApi.listTriggers()
      .then(t => {
        // Sort order: active first, then by name
        t.sort((a, b) => {
          if (a.active !== b.active) return a.active ? -1 : 1;
          return (a.name || '').localeCompare(b.name || '');
        });
        setItems(t);
      })
      .catch(e => setErr(String(e?.message || e)))
      .finally(() => setLoading(false));
  }, []);

  useEffect(() => { load(); }, [load]);

  const visible = items.filter(t => {
    if (filter === 'active') return t.active;
    if (filter === 'inactive') return !t.active;
    return true;
  });

  const toggleActive = (t: Trigger) => {
    brainApi.updateTrigger(t.name, { active: !t.active })
      .then(() => load())
      .catch(e => Alert.alert('Fehler', String(e?.message || e)));
  };

  const deleteTrigger = (t: Trigger) => {
    Alert.alert(
      'Trigger löschen?',
      `"${t.name}" — diese Aktion ist nicht rückgängig zu machen.`,
      [
        { text: 'Abbrechen', style: 'cancel' },
        {
          text: 'Löschen',
          style: 'destructive',
          onPress: () => {
            brainApi.deleteTrigger(t.name)
              .then(() => { setEditTrigger(null); load(); })
              .catch(e => Alert.alert('Fehler', String(e?.message || e)));
          },
        },
      ],
    );
  };

  const renderItem = ({ item }: { item: Trigger }) => {
    const typeColor = item.type === 'timer' ? COL_TIMER : COL_WATCHER;
    const typeLabel = item.type === 'timer' ? '⏰ Timer' : '👁 Watcher';
    return (
      <TouchableOpacity style={s.row} onPress={() => setEditTrigger(item)}>
        <View style={{flex: 1, marginRight: 8}}>
          <View style={{flexDirection: 'row', alignItems: 'center', gap: 6, marginBottom: 4}}>
            <Text style={{color: typeColor, fontSize: 11, fontWeight: '700'}}>{typeLabel}</Text>
            <Text style={{color: '#E0E0F0', fontWeight: '600', flex: 1}} numberOfLines={1}>{item.name}</Text>
          </View>
          <Text style={{color: '#8888AA', fontSize: 12}} numberOfLines={2}>{item.message}</Text>
          {item.type === 'watcher' && item.condition ? (
            <Text style={{color: '#555570', fontSize: 11, marginTop: 4, fontFamily: 'monospace'}} numberOfLines={1}>
              {item.condition}
            </Text>
          ) : null}
          {item.type === 'timer' && item.fires_at ? (
            <Text style={{color: '#555570', fontSize: 11, marginTop: 4}}>
              feuert: {new Date(item.fires_at).toLocaleString('de-DE')}
            </Text>
          ) : null}
          <Text style={{color: '#444460', fontSize: 10, marginTop: 4}}>
            {item.fire_count || 0}× gefeuert · zuletzt: {relTime(item.last_fired_at)}
          </Text>
        </View>
        <Switch
          value={item.active}
          onValueChange={() => toggleActive(item)}
          trackColor={{ false: '#1E1E2E', true: COL_ACTIVE }}
          thumbColor="#E0E0F0"
        />
      </TouchableOpacity>
    );
  };

  return (
    <View style={{flex: 1}}>
      {/* Filter bar + reload + new */}
      <View style={s.toolbar}>
        {(['all', 'active', 'inactive'] as const).map(f => (
          <TouchableOpacity
            key={f}
            style={[s.chip, filter === f && s.chipActive]}
            onPress={() => setFilter(f)}
          >
            <Text style={{color: filter === f ? '#0D0D1A' : '#8888AA', fontSize: 12, fontWeight: '600'}}>
              {f === 'all' ? 'Alle' : f === 'active' ? 'Aktive' : 'Inaktive'}
            </Text>
          </TouchableOpacity>
        ))}
        <View style={{flex: 1}} />
        <TouchableOpacity onPress={load} style={s.iconBtn}>
          <Text style={{fontSize: 16}}>{'↻'}</Text>
        </TouchableOpacity>
        <TouchableOpacity onPress={() => setShowNew(true)} style={[s.iconBtn, {backgroundColor: '#0096FF'}]}>
          <Text style={{fontSize: 14, color: '#fff', fontWeight: '700'}}>+ Neu</Text>
        </TouchableOpacity>
      </View>

      {err ? <Text style={s.err}>{err}</Text> : null}

      {loading && items.length === 0 ? (
        <ActivityIndicator color="#0096FF" style={{marginTop: 20}} />
      ) : (
        <FlatList
          data={visible}
          keyExtractor={t => t.name}
          renderItem={renderItem}
          nestedScrollEnabled={true}
          ListEmptyComponent={
            <Text style={{color: '#555570', textAlign: 'center', padding: 20, fontStyle: 'italic'}}>
              {items.length === 0 ? '(keine Trigger angelegt)' : '(keine Treffer für diesen Filter)'}
            </Text>
          }
          contentContainerStyle={{paddingBottom: 20}}
        />
      )}

      {editTrigger ? (
        <TriggerEditModal
          trigger={editTrigger}
          onClose={() => setEditTrigger(null)}
          onSaved={() => { setEditTrigger(null); load(); }}
          onDelete={() => deleteTrigger(editTrigger)}
        />
      ) : null}

      {showNew ? (
        <TriggerNewModal
          onClose={() => setShowNew(false)}
          onCreated={() => { setShowNew(false); load(); }}
        />
      ) : null}
    </View>
  );
};

// ── Edit modal ─────────────────────────────────────────────────────────

interface EditProps {
  trigger: Trigger;
  onClose: () => void;
  onSaved: () => void;
  onDelete: () => void;
}

const TriggerEditModal: React.FC<EditProps> = ({ trigger, onClose, onSaved, onDelete }) => {
  const [message, setMessage] = useState(trigger.message || '');
  const [condition, setCondition] = useState(trigger.condition || '');
  const [firesAt, setFiresAt] = useState(trigger.fires_at || '');
  const [checkInterval, setCheckInterval] = useState(String(trigger.check_interval_sec || 300));
  const [throttle, setThrottle] = useState(String(trigger.throttle_sec || 3600));
  const [saving, setSaving] = useState(false);

  const save = () => {
    setSaving(true);
    const patch: any = { message };
    if (trigger.type === 'watcher') {
      patch.condition = condition;
      patch.check_interval_sec = parseInt(checkInterval, 10) || 300;
      patch.throttle_sec = parseInt(throttle, 10) || 3600;
    } else if (trigger.type === 'timer') {
      patch.fires_at = firesAt;
    }
    brainApi.updateTrigger(trigger.name, patch)
      .then(onSaved)
      .catch(e => Alert.alert('Fehler beim Speichern', String(e?.message || e)))
      .finally(() => setSaving(false));
  };

  return (
    <Modal visible animationType="slide" onRequestClose={onClose} transparent>
      <View style={s.modalBg}>
        <View style={s.modal}>
          <View style={s.modalHeader}>
            <Text style={{color: trigger.type === 'timer' ? COL_TIMER : COL_WATCHER, fontWeight: '700', fontSize: 16, flex: 1}}>
              {trigger.type === 'timer' ? '⏰' : '👁'} {trigger.name}
            </Text>
            <TouchableOpacity onPress={onClose}>
              <Text style={{color: '#8888AA', fontSize: 24}}>×</Text>
            </TouchableOpacity>
          </View>
          <ScrollView style={{padding: 14}} nestedScrollEnabled>
            <Text style={s.label}>Nachricht</Text>
            <TextInput
              style={s.input}
              value={message}
              onChangeText={setMessage}
              multiline
              placeholder="Was soll ARIA sagen wenn der Trigger feuert?"
              placeholderTextColor="#555570"
            />

            {trigger.type === 'watcher' ? (
              <>
                <Text style={s.label}>Condition</Text>
                <TextInput
                  style={[s.input, {fontFamily: 'monospace', fontSize: 12}]}
                  value={condition}
                  onChangeText={setCondition}
                  placeholder="z.B. near(53.0, 8.5, 300)"
                  placeholderTextColor="#555570"
                  autoCapitalize="none"
                />
                <View style={{flexDirection: 'row', gap: 8}}>
                  <View style={{flex: 1}}>
                    <Text style={s.label}>Check-Intervall (s)</Text>
                    <TextInput
                      style={s.input}
                      value={checkInterval}
                      onChangeText={setCheckInterval}
                      keyboardType="number-pad"
                    />
                  </View>
                  <View style={{flex: 1}}>
                    <Text style={s.label}>Throttle (s)</Text>
                    <TextInput
                      style={s.input}
                      value={throttle}
                      onChangeText={setThrottle}
                      keyboardType="number-pad"
                    />
                  </View>
                </View>
              </>
            ) : (
              <>
                <Text style={s.label}>Feuert am (ISO, UTC)</Text>
                <TextInput
                  style={[s.input, {fontFamily: 'monospace', fontSize: 12}]}
                  value={firesAt}
                  onChangeText={setFiresAt}
                  placeholder="2026-05-15T20:00:00+00:00"
                  placeholderTextColor="#555570"
                  autoCapitalize="none"
                />
              </>
            )}

            <View style={s.metaBox}>
              <Text style={s.meta}>Status: {trigger.active ? '🟢 aktiv' : '⚪ inaktiv'}</Text>
              <Text style={s.meta}>Gefeuert: {trigger.fire_count || 0}×</Text>
              <Text style={s.meta}>Zuletzt gefeuert: {relTime(trigger.last_fired_at)}</Text>
              <Text style={s.meta}>Zuletzt geprüft: {relTime(trigger.last_checked_at)}</Text>
              {trigger.author ? <Text style={s.meta}>Angelegt von: {trigger.author}</Text> : null}
            </View>
          </ScrollView>
          <View style={s.modalFooter}>
            <TouchableOpacity onPress={onDelete} style={[s.btn, {backgroundColor: '#3A1F1F', borderColor: '#FF3B30'}]}>
              <Text style={{color: '#FF3B30', fontWeight: '700'}}>🗑 Löschen</Text>
            </TouchableOpacity>
            <View style={{flex: 1}} />
            <TouchableOpacity onPress={save} disabled={saving} style={[s.btn, {backgroundColor: '#0096FF', opacity: saving ? 0.5 : 1}]}>
              <Text style={{color: '#fff', fontWeight: '700'}}>{saving ? 'Speichert...' : 'Speichern'}</Text>
            </TouchableOpacity>
          </View>
        </View>
      </View>
    </Modal>
  );
};

// ── New modal ──────────────────────────────────────────────────────────

interface NewProps {
  onClose: () => void;
  onCreated: () => void;
}

const TriggerNewModal: React.FC<NewProps> = ({ onClose, onCreated }) => {
  const [ttype, setTtype] = useState<'timer' | 'watcher'>('watcher');
  const [name, setName] = useState('');
  const [message, setMessage] = useState('');
  const [condition, setCondition] = useState('');
  const [firesAt, setFiresAt] = useState('');
  const [checkInterval, setCheckInterval] = useState('300');
  const [throttle, setThrottle] = useState('3600');
  const [saving, setSaving] = useState(false);

  const create = () => {
    if (!name.trim() || !message.trim()) {
      Alert.alert('Name und Nachricht erforderlich');
      return;
    }
    setSaving(true);
    const promise = ttype === 'timer'
      ? brainApi.createTimer({
          name: name.trim(),
          fires_at: firesAt.trim(),
          message: message.trim(),
        })
      : brainApi.createWatcher({
          name: name.trim(),
          condition: condition.trim(),
          message: message.trim(),
          check_interval_sec: parseInt(checkInterval, 10) || 300,
          throttle_sec: parseInt(throttle, 10) || 3600,
        });
    promise
      .then(onCreated)
      .catch(e => Alert.alert('Fehler beim Anlegen', String(e?.message || e)))
      .finally(() => setSaving(false));
  };

  return (
    <Modal visible animationType="slide" onRequestClose={onClose} transparent>
      <View style={s.modalBg}>
        <View style={s.modal}>
          <View style={s.modalHeader}>
            <Text style={{color: '#FFD60A', fontWeight: '700', fontSize: 16, flex: 1}}>+ Neuer Trigger</Text>
            <TouchableOpacity onPress={onClose}>
              <Text style={{color: '#8888AA', fontSize: 24}}>×</Text>
            </TouchableOpacity>
          </View>
          <ScrollView style={{padding: 14}} nestedScrollEnabled>
            <Text style={s.label}>Typ</Text>
            <View style={{flexDirection: 'row', gap: 8, marginBottom: 12}}>
              {(['watcher', 'timer'] as const).map(t => (
                <TouchableOpacity
                  key={t}
                  onPress={() => setTtype(t)}
                  style={[s.chip, ttype === t && s.chipActive, {flex: 1, paddingVertical: 10}]}
                >
                  <Text style={{color: ttype === t ? '#0D0D1A' : '#8888AA', fontWeight: '700', textAlign: 'center'}}>
                    {t === 'watcher' ? '👁 Watcher' : '⏰ Timer'}
                  </Text>
                </TouchableOpacity>
              ))}
            </View>

            <Text style={s.label}>Name (kebab-case)</Text>
            <TextInput
              style={s.input}
              value={name}
              onChangeText={setName}
              placeholder="z.B. drk-kreyenbrueck-warnung"
              placeholderTextColor="#555570"
              autoCapitalize="none"
            />

            <Text style={s.label}>Nachricht</Text>
            <TextInput
              style={s.input}
              value={message}
              onChangeText={setMessage}
              multiline
              placeholder="Was soll ARIA sagen?"
              placeholderTextColor="#555570"
            />

            {ttype === 'watcher' ? (
              <>
                <Text style={s.label}>Condition</Text>
                <TextInput
                  style={[s.input, {fontFamily: 'monospace', fontSize: 12}]}
                  value={condition}
                  onChangeText={setCondition}
                  placeholder="z.B. entered_near(53.0, 8.5, 300)"
                  placeholderTextColor="#555570"
                  autoCapitalize="none"
                />
                <Text style={s.hint}>
                  Funktionen: near() / entered_near() / left_near() · Variablen: disk_free_gb, hour_of_day, current_lat, current_lon, last_user_message_ago_sec
                </Text>
                <View style={{flexDirection: 'row', gap: 8}}>
                  <View style={{flex: 1}}>
                    <Text style={s.label}>Check-Intervall (s)</Text>
                    <TextInput
                      style={s.input}
                      value={checkInterval}
                      onChangeText={setCheckInterval}
                      keyboardType="number-pad"
                    />
                  </View>
                  <View style={{flex: 1}}>
                    <Text style={s.label}>Throttle (s)</Text>
                    <TextInput
                      style={s.input}
                      value={throttle}
                      onChangeText={setThrottle}
                      keyboardType="number-pad"
                    />
                  </View>
                </View>
              </>
            ) : (
              <>
                <Text style={s.label}>Feuert am (ISO, UTC)</Text>
                <TextInput
                  style={[s.input, {fontFamily: 'monospace', fontSize: 12}]}
                  value={firesAt}
                  onChangeText={setFiresAt}
                  placeholder="2026-05-15T20:00:00+00:00"
                  placeholderTextColor="#555570"
                  autoCapitalize="none"
                />
                <Text style={s.hint}>Beispiel oben: heute 20:00 UTC = 22:00 CEST</Text>
              </>
            )}
          </ScrollView>
          <View style={s.modalFooter}>
            <View style={{flex: 1}} />
            <TouchableOpacity onPress={create} disabled={saving} style={[s.btn, {backgroundColor: '#0096FF', opacity: saving ? 0.5 : 1}]}>
              <Text style={{color: '#fff', fontWeight: '700'}}>{saving ? 'Legt an...' : 'Anlegen'}</Text>
            </TouchableOpacity>
          </View>
        </View>
      </View>
    </Modal>
  );
};

const s = StyleSheet.create({
  toolbar: {
    flexDirection: 'row',
    alignItems: 'center',
    gap: 6,
    marginBottom: 8,
  },
  chip: {
    paddingHorizontal: 10,
    paddingVertical: 6,
    borderRadius: 14,
    backgroundColor: '#1E1E2E',
  },
  chipActive: {
    backgroundColor: '#FFD60A',
  },
  iconBtn: {
    paddingHorizontal: 10,
    paddingVertical: 6,
    borderRadius: 14,
    backgroundColor: '#1E1E2E',
  },
  err: {
    color: '#FF3B30',
    padding: 12,
    fontSize: 12,
  },
  row: {
    flexDirection: 'row',
    alignItems: 'center',
    padding: 12,
    backgroundColor: '#1A1A2E',
    borderRadius: 8,
    marginBottom: 6,
  },
  modalBg: {
    flex: 1,
    backgroundColor: 'rgba(0,0,0,0.6)',
    justifyContent: 'center',
    alignItems: 'center',
```
padding: 16,
|
||||
},
|
||||
modal: {
|
||||
backgroundColor: '#0D0D1A',
|
||||
borderRadius: 12,
|
||||
width: '100%',
|
||||
maxWidth: 600,
|
||||
maxHeight: '90%',
|
||||
borderWidth: 1,
|
||||
borderColor: '#1E1E2E',
|
||||
},
|
||||
modalHeader: {
|
||||
flexDirection: 'row',
|
||||
alignItems: 'center',
|
||||
padding: 14,
|
||||
borderBottomWidth: 1,
|
||||
borderBottomColor: '#1E1E2E',
|
||||
},
|
||||
modalFooter: {
|
||||
flexDirection: 'row',
|
||||
alignItems: 'center',
|
||||
padding: 12,
|
||||
borderTopWidth: 1,
|
||||
borderTopColor: '#1E1E2E',
|
||||
gap: 8,
|
||||
},
|
||||
label: {
|
||||
color: '#8888AA',
|
||||
fontSize: 11,
|
||||
fontWeight: '700',
|
||||
textTransform: 'uppercase',
|
||||
letterSpacing: 0.5,
|
||||
marginTop: 8,
|
||||
marginBottom: 4,
|
||||
},
|
||||
input: {
|
||||
backgroundColor: '#1A1A2E',
|
||||
borderWidth: 1,
|
||||
borderColor: '#1E1E2E',
|
||||
borderRadius: 6,
|
||||
color: '#E0E0F0',
|
||||
padding: 10,
|
||||
fontSize: 14,
|
||||
marginBottom: 8,
|
||||
},
|
||||
hint: {
|
||||
color: '#555570',
|
||||
fontSize: 11,
|
||||
fontStyle: 'italic',
|
||||
marginTop: -4,
|
||||
marginBottom: 10,
|
||||
},
|
||||
metaBox: {
|
||||
backgroundColor: '#1A1A2E',
|
||||
borderRadius: 6,
|
||||
padding: 10,
|
||||
marginTop: 10,
|
||||
gap: 4,
|
||||
},
|
||||
meta: {
|
||||
color: '#8888AA',
|
||||
fontSize: 12,
|
||||
},
|
||||
btn: {
|
||||
paddingHorizontal: 14,
|
||||
paddingVertical: 10,
|
||||
borderRadius: 6,
|
||||
borderWidth: 1,
|
||||
borderColor: 'transparent',
|
||||
},
|
||||
});
|
||||
|
||||
export default TriggerBrowser;
|
||||
@@ -126,16 +126,45 @@ interface ChatMessage {
  sendAttempts?: number;
}

/** One entry in the thought stream — a chronological log of what ARIA is
 * doing internally (brain `agent_activity` events). Persists across thinking
 * phases and is stored in AsyncStorage. */
interface ThoughtEntry {
  ts: number;
  /** Raw activity from the brain: thinking, tool, assistant, idle (= ✓ done). */
  activity: string;
  /** For activity='tool' the tool name, otherwise empty. */
  tool?: string;
}

// --- Constants ---

const CHAT_STORAGE_KEY = 'aria_chat_messages';
const THOUGHT_STORAGE_KEY = 'aria_thought_stream';
const MAX_STORED_MESSAGES = 500;
const MAX_MEMORY_MESSAGES = 500;
const MAX_THOUGHTS = 500;

// Helper: cap the messages array at the maximum (oldest entries dropped) —
// prevents OOM in conversation mode with very long histories.
const capMessages = (msgs: ChatMessage[]): ChatMessage[] =>
  msgs.length > MAX_MEMORY_MESSAGES ? msgs.slice(-MAX_MEMORY_MESSAGES) : msgs;
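A minimal stand-alone sketch of the capping behaviour above, with the limit shrunk to 3 so the effect is visible and `ChatMessage` reduced to the one field needed here:

```typescript
// Capping sketch: arrays over the limit lose their oldest entries,
// arrays under the limit are returned unchanged.
type Msg = { id: string };
const MAX = 3;
const cap = (msgs: Msg[]): Msg[] =>
  msgs.length > MAX ? msgs.slice(-MAX) : msgs;

const msgs: Msg[] = [{id: 'a'}, {id: 'b'}, {id: 'c'}, {id: 'd'}];
console.log(cap(msgs).map(m => m.id).join(',')); // → b,c,d (oldest dropped)
console.log(cap(msgs.slice(0, 2)).map(m => m.id).join(',')); // → a,b (unchanged)
```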

// The bridge prepends square-bracket prefixes to user texts so the brain has
// context (GPS position, barge-in hint etc.). These must not show up in the
// bubble — only the brain sees them. Strips all consecutive [...] blocks at
// the start of the text, including the separating whitespace after them.
function stripSystemHints(text: string): string {
  if (!text) return text;
  let out = text;
  // Several hints can be chained — "[A] [B] Hallo" → "Hallo"
  while (true) {
    const m = out.match(/^\s*\[[^\]]*\]\s*/);
    if (!m) break;
    out = out.slice(m[0].length);
  }
  return out;
}
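A stand-alone copy of the hint-stripping loop, exercised against the chained-prefix case the comment mentions (the sample hint texts are illustrative, not taken from the bridge):

```typescript
// Strips consecutive leading [...] blocks; brackets mid-text are untouched.
function stripSystemHints(text: string): string {
  if (!text) return text;
  let out = text;
  while (true) {
    const m = out.match(/^\s*\[[^\]]*\]\s*/);
    if (!m) break;
    out = out.slice(m[0].length);
  }
  return out;
}

console.log(stripSystemHints('[GPS: 53.1,8.2] [Hinweis: unterbrochen] Hallo ARIA'));
// → "Hallo ARIA"
console.log(stripSystemHints('Kein Hint [aber hier] bleibt stehen'));
// → unchanged: only prefixes at the very start are stripped
```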

const DEFAULT_ATTACHMENT_DIR = `${RNFS.DocumentDirectoryPath}/chat_attachments`;
const STORAGE_PATH_KEY = 'aria_attachment_storage_path';

@@ -252,12 +281,26 @@ const ChatScreen: React.FC = () => {
  const [searchIndex, setSearchIndex] = useState(0); // which match is active
  const [pendingAttachments, setPendingAttachments] = useState<{file: any, isPhoto: boolean}[]>([]);
  const [agentActivity, setAgentActivity] = useState<{activity: string, tool: string}>({activity: 'idle', tool: ''});
  // Thought stream: chronological log of what ARIA is doing internally.
  // Fed from agent_activity events and persisted in AsyncStorage.
  const [thoughts, setThoughts] = useState<ThoughtEntry[]>([]);
  const [thoughtsVisible, setThoughtsVisible] = useState(false);
  // Mirror of the last activity in a ref — prevents consecutive identical
  // events (e.g. two 'thinking' in a row) from flooding the stream. A rare
  // case in practice, but cheap to check.
  const lastThoughtKeyRef = useRef<string>('');
  // Service status (Gamebox: F5-TTS / Whisper load state) + banner visibility
  const [serviceStatus, setServiceStatus] = useState<Record<string, {state: string, model?: string, loadSeconds?: number, error?: string, downloading?: boolean, freshlyDownloaded?: boolean}>>({});
  const [serviceBannerDismissed, setServiceBannerDismissed] = useState(false);
  // Device-local TTS config: global toggle (from Settings) + temporary mute (mouth button)
  const [ttsDeviceEnabled, setTtsDeviceEnabled] = useState(true);
  const [ttsMuted, setTtsMuted] = useState(false);
  // System hints in the bubble: the bridge prefixes user text with hints like
  // "[Stefans aktuelle GPS-Position: ...]" or "[Hinweis: Stefan hat dich
  // gerade unterbrochen...]" so the brain has context. By default the app
  // must NOT display them — otherwise Stefan sees every hint. Toggle in
  // Settings.
  const [showSystemHints, setShowSystemHints] = useState(false);
  // Device-local XTTS voice choice (preferred over the global default)
  const localXttsVoiceRef = useRef<string>('');
  // Device-local TTS playback speed (speed param passed to F5-TTS)
@@ -270,6 +313,9 @@ const ChatScreen: React.FC = () => {

  const flatListRef = useRef<FlatList>(null);
  const messageIdCounter = useRef(0);
  // Mirror of the messages list in a ref — closures (e.g. the dispatchWithAck
  // retry) need access to a bubble's current status.
  const messagesRef = useRef<ChatMessage[]>([]);
  // Watchdog against the "ARIA is thinking" hang: re-armed on every
  // agent_activity event with a non-idle status. If it fires, NO updates
  // have come from the brain for the whole window → we assume the connection
@@ -300,7 +346,12 @@ const ChatScreen: React.FC = () => {

  // 60 s — more generous than the previous 30 s, because slow brain calls
  // (multi-tool) would otherwise leave the user bubble on ⏳ for 90 s × 3
  // retries. The important path is agent_activity = thinking anyway → it
  // marks the bubble as 'sent' immediately (see handler). This here is the
  // fallback when neither an ACK nor agent_activity arrives.
  const ACK_TIMEOUT_MS = 60_000;
  // How often we retry before showing "failed".
  const MAX_SEND_ATTEMPTS = 3;
  // Pending ACK timer per clientMsgId — for cancelling on ACK.
@@ -334,8 +385,19 @@ const ChatScreen: React.FC = () => {
  // - If offline → status='queued', flushed on reconnect.
  // - If online → status='sending', timer armed for the expected ACK.
  // - On ACK timeout: retry (up to MAX_SEND_ATTEMPTS) or 'failed'.
  // - If the bubble is already 'delivered' (e.g. ARIA answered before the
  //   ACK came through) → abort entirely, no further retry.
  const dispatchWithAck = useCallback(
    (cmid: string, type: 'chat' | 'audio', payload: Record<string, unknown>, attempt = 1) => {
      // Guard: if the bubble has been delivered in the meantime, stop the
      // retry loop (can happen with late ACKs or a manual retry when ARIA
      // has long since answered).
      const current = messagesRef.current.find(m => m.clientMsgId === cmid);
      if (current?.deliveryStatus === 'delivered') {
        clearAckTimer(cmid);
        pendingPayloads.current.delete(cmid);
        return;
      }
      pendingPayloads.current.set(cmid, { type, payload });
      const online = connectionStateRef.current === 'connected';
      if (!online) {
@@ -350,6 +412,13 @@ const ChatScreen: React.FC = () => {
        cmid,
        setTimeout(() => {
          ackTimers.current.delete(cmid);
          // Before retrying, check again that the bubble has not been
          // delivered in the meantime — otherwise we spawn endless retries.
          const fresh = messagesRef.current.find(m => m.clientMsgId === cmid);
          if (fresh?.deliveryStatus === 'delivered') {
            pendingPayloads.current.delete(cmid);
            return;
          }
          if (attempt >= MAX_SEND_ATTEMPTS) {
            updateMessageStatus(cmid, { deliveryStatus: 'failed', sendAttempts: attempt });
            console.warn('[Chat] Send fehlgeschlagen nach %d Versuchen: %s', attempt, cmid);
@@ -399,6 +468,8 @@ const ChatScreen: React.FC = () => {
      ttsSpeedRef.current = await loadTtsSpeed();
      const gps = await AsyncStorage.getItem('aria_gps_enabled');
      setGpsEnabled(gps === 'true');
      const hints = await AsyncStorage.getItem('aria_show_hints');
      setShowSystemHints(hints === 'true'); // default false
    };
    loadSettings();
    const interval = setInterval(loadSettings, 2000);
@@ -433,14 +504,40 @@ const ChatScreen: React.FC = () => {
    return () => { phoneCallService.stop().catch(() => {}); };
  }, []);

  // App resume: three safeguards against stray wake-word triggers on the
  // background→foreground transition:
  //  (a) 3 s cooldown — audio level spikes (AudioFocus switch, AudioTrack
  //      re-route) must not falsely trigger openWakeWord
  //  (b) if the app was in the background for longer and comes back in
  //      'conversing': most likely a false positive from a background noise
  //      (TV, coughing etc.) while Stefan was not even there. We discard
  //      the trigger and go back to 'armed'.
  //  (c) cancel the current recording if it was just started by that
  //      false positive.
  useEffect(() => {
    let lastState: string = AppState.currentState;
    let lastBackgroundAt = 0;
    const sub = AppState.addEventListener('change', (next) => {
      if (next === 'background' || next === 'inactive') {
        lastBackgroundAt = Date.now();
      } else if (lastState !== 'active' && next === 'active') {
        wakeWordService.setResumeCooldown(3000);
        const bgDur = lastBackgroundAt > 0 ? Date.now() - lastBackgroundAt : 0;
        // After a longer background stint (>30 s): check whether a fresh
        // wake word triggered while the app was away — if so, discard it
        // and stop any running recording.
        if (bgDur > 30_000) {
          wakeWordService.discardIfFreshlyTriggered(15_000).then(discarded => {
            if (discarded) {
              try { audioService.cancelRecording(); } catch {}
            }
          }).catch(() => {});
        }
        // Check the PhoneCall listener: it can get lost after a longer
        // background stint (bridge context recreated). refresh() tries to
        // re-attach it if needed — otherwise the app misses call events
        // while the display is off / minimized.
        phoneCallService.refresh().catch(() => {});
      }
      lastState = next;
    });
@@ -644,15 +741,30 @@ const ChatScreen: React.FC = () => {
        //   set AND text empty/placeholder)
        // - user bubbles whose clientMsgId the server does not know yet:
        //   e.g. during a reconnect race or while flushQueuedMessages is
        //   still running. BUT: if the server has a user bubble with
        //   identical text (with whatever cmid, or none — e.g. when the
        //   bubble predates the bridge's clientMsgId patch, or the
        //   timestamps are broken), we count that as a hit and discard the
        //   local copy. Otherwise it double-posts: once as the server
        //   bubble (delivered) and once as a local failed/queued one with
        //   a retry button.
        const serverUserTexts = new Set(
          fromServer.filter(s => s.sender === 'user').map(s => s.text || '')
        );
        const localOnly = prev.filter(m => {
          if (m.skillCreated || m.triggerCreated || m.memorySaved) return true;
          if (m.audioRequestId && (!m.text || m.text === '🎙 Aufnahme...' || m.text === 'Aufnahme...')) return true;
          if (m.sender === 'user' && m.clientMsgId && !serverCmids.has(m.clientMsgId)) {
            // Text-match fallback: if the server has a user bubble with the
            // same text anywhere, it is the same message (pre-cmid era,
            // broken ts etc.) — discard the local copy. Empty text (e.g.
            // attachment only) is excluded from the comparison.
            const text = m.text || '';
            if (text && serverUserTexts.has(text)) return false;
            return true;
          }
          return false;
        });
        // Server state + local-only entries (chronologically sorted)
        const merged = [...fromServer, ...localOnly].sort((a, b) => a.timestamp - b.timestamp);
        return capMessages(merged);
@@ -776,6 +888,16 @@ const ChatScreen: React.FC = () => {
      const b64 = (message.payload.base64 as string) || '';
      const serverPath = (message.payload.serverPath as string) || '';
      const mimeType = (message.payload.mimeType as string) || '';
      // Error response (e.g. file too large, not found) → toast, no retry.
      // Prime suspect: 40+ MB videos that exceed the 70 MB bridge limit.
      const fileErr = (message.payload as any).error as string | undefined;
      if (fileErr) {
        const fname = (message.payload.name as string) || serverPath.split('/').pop() || 'Datei';
        console.warn('[Chat] file_response Fehler fuer %s: %s', fname, fileErr);
        ToastAndroid.show(`${fname}: ${fileErr}`, ToastAndroid.LONG);
        return;
      }
      if (b64 && reqId) {
        const fileName = (message.payload.name as string) || 'download';
        persistAttachment(b64, reqId, fileName).then(filePath => {
@@ -919,6 +1041,14 @@ const ChatScreen: React.FC = () => {
        });
        // ARIA answered → clear the watchdog if it is still armed
        clearStuckWatchdog();
        // Clear ALL still-running ACK timers — the bridge has evidently
        // processed our messages (no ARIA answer otherwise). If an ACK was
        // lost for network reasons, the retry must not fire belatedly and
        // flip the bubble to 'failed'.
        for (const cmid of Array.from(ackTimers.current.keys())) {
          clearAckTimer(cmid);
          pendingPayloads.current.delete(cmid);
        }
      }

      // Play TTS audio if present — respects the device-local mute/disable
@@ -961,10 +1091,51 @@ const ChatScreen: React.FC = () => {
      const activity = (message.payload.activity as string) || 'idle';
      const tool = (message.payload.tool as string) || '';
      setAgentActivity({ activity, tool });
      // Implicit ACK confirmation: the brain has started working → our
      // message evidently arrived, even if the chat_ack did not come
      // through for whatever reason. Cancel all running ACK timers + set
      // sending bubbles to 'sent'. Avoids the "hourglass stays + timeout"
      // symptom on slow brain answers (>90 s, i.e. 'failed' after 3 ACK
      // retries).
      if (activity !== 'idle' && ackTimers.current.size > 0) {
        for (const cmid of Array.from(ackTimers.current.keys())) {
          clearAckTimer(cmid);
        }
        // Reference-stable: if no bubble needs changing we return prev
        // unchanged. Otherwise .map() produces a new array + re-render,
        // which invalidates the FlatList layouts during an active
        // search-scroll sequence → permanent onScrollToIndexFailed loop.
        setMessages(prev => {
          const needs = prev.some(m => m.sender === 'user' && m.deliveryStatus === 'sending');
          if (!needs) return prev;
          return prev.map(m =>
            m.sender === 'user' && m.deliveryStatus === 'sending'
              ? { ...m, deliveryStatus: 'sent' }
              : m,
          );
        });
      }
      // Append to the thought stream. Dedup against identical consecutive
      // events (e.g. 'thinking' twice in a row). NEVER dedup tool events —
      // if ARIA calls Bash three times in a row, all three must be visible.
      const key = `${activity}|${tool}`;
      const isTool = activity === 'tool';
      if (isTool || key !== lastThoughtKeyRef.current) {
        lastThoughtKeyRef.current = key;
        setThoughts(prev => {
          const next = [...prev, { ts: Date.now(), activity, tool }];
          return next.length > MAX_THOUGHTS ? next.slice(-MAX_THOUGHTS) : next;
        });
      }
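The dedup rule above can be sketched in isolation: consecutive identical activities collapse into one entry, while tool events are always appended (`appendThought` is an illustrative name, not a function from the app):

```typescript
// Dedup sketch: identical consecutive non-tool events are dropped,
// tool events are kept unconditionally.
type Thought = { activity: string, tool: string };

function appendThought(stream: Thought[], lastKey: {v: string}, activity: string, tool: string): void {
  const key = `${activity}|${tool}`;
  if (activity === 'tool' || key !== lastKey.v) {
    lastKey.v = key;
    stream.push({ activity, tool });
  }
}

const stream: Thought[] = [];
const last = { v: '' };
appendThought(stream, last, 'thinking', '');
appendThought(stream, last, 'thinking', ''); // duplicate → dropped
appendThought(stream, last, 'tool', 'Bash');
appendThought(stream, last, 'tool', 'Bash'); // tool event → kept anyway
console.log(stream.length); // → 3
```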
      // Spotify may keep playing while "ARIA is thinking/typing" — it only
      // pauses when TTS starts (then _firePlaybackStarted acquires the focus).
      // Watchdog: as long as the brain still sends signs of life (every new
      // activity event), restart the timer. 21 min without an update → hang.
      // Just above the brain timeout (20 min) so it only fires on real
      // connection drops / brain crashes, not during legitimate long
      // multi-tool sessions that the brain itself caps.
      clearStuckWatchdog();
      if (activity !== 'idle') {
        stuckWatchdog.current = setTimeout(() => {
@@ -973,10 +1144,10 @@ const ChatScreen: React.FC = () => {
          setMessages(prev => capMessages([...prev, {
            id: nextId(),
            sender: 'aria',
            text: '⚠️ Habe gerade keine Verbindung zurueck bekommen (Timeout nach 21 Min). Deine letzte Nachricht ist evtl. nicht durchgekommen — schick sie nochmal.',
            timestamp: Date.now(),
          }]));
        }, 1_260_000);
      }
    }

@@ -1000,22 +1171,39 @@ const ChatScreen: React.FC = () => {
        }
      }

      // Gamebox bridges (f5tts/whisper/flux) report their load state —
      // banner at the top. Toast when a download finishes: the first HF
      // download is several GB → the user should know that images/voices
      // are usable now, without having to watch the banner.
      if (message.type === ('service_status' as any)) {
        const p = message.payload as any;
        const svc = (p?.service as string) || '';
        if (!svc) return;
        const newState = (p?.state as string) || 'unknown';
        const freshlyDownloaded = p?.freshlyDownloaded === true;
        setServiceStatus(prev => ({
          ...prev,
          [svc]: {
            state: newState,
            model: p?.model as string | undefined,
            loadSeconds: p?.loadSeconds as number | undefined,
            error: p?.error as string | undefined,
            downloading: p?.downloading === true,
            freshlyDownloaded,
          },
        }));
        // Re-enable the banner on a new loading phase
        if (newState === 'loading') setServiceBannerDismissed(false);
        // Download-finished toast: the bridge sets freshlyDownloaded=true
        // on the 'ready' broadcast right after a cache-miss load. One toast
        // per model download, no state tracking needed on the app side.
        if (newState === 'ready' && freshlyDownloaded) {
          const niceName = svc === 'flux' ? 'FLUX' : svc === 'f5tts' ? 'F5-TTS' : svc === 'whisper' ? 'Whisper' : svc;
          const model = p?.model ? ` (${p.model})` : '';
          try {
            ToastAndroid.show(`${niceName}-Modell heruntergeladen${model} — jetzt einsatzbereit`, ToastAndroid.LONG);
          } catch {}
        }
      }
    });

@@ -1225,6 +1413,40 @@ const ChatScreen: React.FC = () => {
    return () => { if (saveTimer.current) clearTimeout(saveTimer.current); };
  }, [messages]);

  // Load the thought stream from AsyncStorage on mount
  useEffect(() => {
    AsyncStorage.getItem(THOUGHT_STORAGE_KEY)
      .then(raw => {
        if (!raw) return;
        try {
          const parsed = JSON.parse(raw);
          if (Array.isArray(parsed)) setThoughts(parsed.slice(-MAX_THOUGHTS));
        } catch {}
      })
      .catch(() => {});
  }, []);

  // Persist the thought stream (debounced)
  const thoughtSaveTimer = useRef<ReturnType<typeof setTimeout> | null>(null);
  useEffect(() => {
    if (thoughts.length === 0) {
      AsyncStorage.removeItem(THOUGHT_STORAGE_KEY).catch(() => {});
      return;
    }
    if (thoughtSaveTimer.current) clearTimeout(thoughtSaveTimer.current);
    thoughtSaveTimer.current = setTimeout(() => {
      AsyncStorage.setItem(
        THOUGHT_STORAGE_KEY,
        JSON.stringify(thoughts.slice(-MAX_THOUGHTS)),
      ).catch(() => {});
    }, 500);
    return () => { if (thoughtSaveTimer.current) clearTimeout(thoughtSaveTimer.current); };
  }, [thoughts]);

  // Keep messagesRef up to date — read by dispatchWithAck/retry so retries
  // can react to a bubble's current deliveryStatus.
  useEffect(() => { messagesRef.current = messages; }, [messages]);

  // Inverted FlatList: newest messages at the bottom, no manual scrolling.
  // Special bubbles (memorySaved/triggerCreated/skillCreated) must NOT
  // appear in the chat anymore — they are shown in the notes inbox.
@@ -1236,15 +1458,22 @@ const ChatScreen: React.FC = () => {
  );
  const invertedMessages = useMemo(() => [...chatVisibleMessages].reverse(), [chatVisibleMessages]);

  // Search matches: all message IDs matching the query. NEWEST FIRST —
  // like WhatsApp/Telegram: the user is visually at the bottom of the chat,
  // so the first match is usually already in the viewport (no long
  // pre-scroll, no cold-start jump failure). "Next" walks into the past.
  // IMPORTANT: search only chatVisibleMessages — special bubbles (memory/
  // skill/trigger) are not visible in the chat stream, and matches on them
  // would end in "ID not in the FlatList → findIndex=-1 → no scroll".
  const searchMatchIds = useMemo(() => {
    const q = searchQuery.trim().toLowerCase();
    if (!q) return [] as string[];
    return chatVisibleMessages
      .filter(m => (m.text || '').toLowerCase().includes(q))
      .map(m => m.id)
      .reverse();
  }, [chatVisibleMessages, searchQuery]);

  useEffect(() => {
    setSearchIndex(0);
@@ -1258,11 +1487,21 @@ const ChatScreen: React.FC = () => {
  // a new search hit arrives, so stale retries cannot disturb the new
  // scroll attempt (the "keeps jumping forever" bug).
  const pendingScrollRetry = useRef<ReturnType<typeof setTimeout> | null>(null);
  // Counter for failed scroll retries. Hard limit against endless loops
  // when the item layout never becomes available for whatever reason
  // (e.g. because setMessages re-renders the FlatList mid-sequence).
  const scrollRetryCount = useRef<number>(0);
  // 6 retries: on long jumps (search hit on bubble #150 from position 0)
  // the FlatList can need several iterations until the items nearby are
  // measured. The previous 3 gave up too early.
  const MAX_SCROLL_RETRIES = 6;
  const clearPendingScrollRetry = () => {
    if (pendingScrollRetry.current) {
      clearTimeout(pendingScrollRetry.current);
      pendingScrollRetry.current = null;
    }
    scrollRetryCount.current = 0;
  };

  // On a search-index change, scroll to the corresponding bubble.
@@ -1273,6 +1512,11 @@ const ChatScreen: React.FC = () => {
  // We fetch the current snapshot of invertedMessages via a ref.
  const invertedMessagesRef = useRef(invertedMessages);
  invertedMessagesRef.current = invertedMessages;
  // Cache of real bubble heights, fed by onLayout in renderMessage. Used
  // by the pre-scroll so the rough jump lands precisely (instead of far
  // off with the flat 150 px estimate).
  const itemHeights = useRef<Map<string, number>>(new Map());
  const AVG_BUBBLE_HEIGHT = 150; // fallback for items not measured yet
  useEffect(() => {
    if (!searchMatchIds.length) {
      lastSearchScrollKey.current = '';
@@ -1290,12 +1534,42 @@ const ChatScreen: React.FC = () => {
    clearPendingScrollRetry();
    const idx = invertedMessagesRef.current.findIndex(m => m.id === id);
    if (idx < 0 || !flatListRef.current) return;
    // Pre-scroll: first jump roughly into the vicinity so the FlatList
    // renders the surrounding bubbles at all (otherwise averageItemLength
    // in the failed handler is based only on the first ~10 items and
    // yields a completely wrong jump).
    // Offset = sum of real heights (from the itemHeights cache, fed by
    // onLayout) + a dynamic fallback from the mean of the items measured
    // so far. On a cold start there are only ~10 measurements (the newest
    // ones at the bottom of the inverted list) — their mean is still
    // better than the flat 150.
    const measured = Array.from(itemHeights.current.values());
    const dynamicAvg = measured.length >= 5
      ? measured.reduce((a, b) => a + b, 0) / measured.length
      : AVG_BUBBLE_HEIGHT;
    let preOffset = 0;
    const inv = invertedMessagesRef.current;
    for (let i = 0; i < idx; i++) {
      preOffset += itemHeights.current.get(inv[i].id) || dynamicAvg;
    }
    try {
      flatListRef.current?.scrollToOffset({
        offset: preOffset,
        animated: false,
      });
    } catch {}
    // After a render pause, follow up precisely. 350 ms — on long jumps
    // (pre-scroll of 5000+ px) the FlatList needs time to mount the items
    // there and fire onLayout. Too short → averageItemLength in the failed
    // handler is still based on the wrong items.
    requestAnimationFrame(() => {
      setTimeout(() => {
        try {
          flatListRef.current?.scrollToIndex({ index: idx, animated: true, viewPosition: 0 });
        } catch {
          // the onScrollToIndexFailed handler takes over as fallback
        }
      }, 350);
    });
  }, [searchIndex, searchMatchIds]);
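The offset estimate above can be sketched as a pure function: sum cached real heights, falling back to the mean of all measurements for unmeasured items (`estimatePreOffset` is an illustrative name, not a helper from the app):

```typescript
// Offset estimate: real heights where known, dynamic average otherwise.
const AVG_BUBBLE_HEIGHT = 150;

function estimatePreOffset(ids: string[], idx: number, heights: Map<string, number>): number {
  const measured = Array.from(heights.values());
  // With fewer than 5 measurements the mean is too noisy → flat fallback.
  const dynamicAvg = measured.length >= 5
    ? measured.reduce((a, b) => a + b, 0) / measured.length
    : AVG_BUBBLE_HEIGHT;
  let offset = 0;
  for (let i = 0; i < idx; i++) {
    offset += heights.get(ids[i]) || dynamicAvg;
  }
  return offset;
}

// Two of five items measured → fallback stays at the flat 150:
const h = new Map([['a', 100], ['b', 200]]);
console.log(estimatePreOffset(['a', 'b', 'c', 'd', 'e'], 4, h)); // → 100+200+150+150 = 600
```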
@@ -1713,7 +1987,15 @@ const ChatScreen: React.FC = () => {
    }

    return (
      <View
        style={[styles.messageBubble, isUser ? styles.userBubble : styles.ariaBubble, searchHighlightStyle]}
        onLayout={e => {
          // Store the real height in the cache — the search pre-scroll
          // sums the cached values for a precise jump. Unknown items fall
          // back to AVG_BUBBLE_HEIGHT.
          itemHeights.current.set(item.id, e.nativeEvent.layout.height);
        }}
      >
        {/* Attachment preview */}
        {item.attachments?.map((att, idx) => (
          <View key={idx}>
@@ -1801,7 +2083,7 @@ const ChatScreen: React.FC = () => {
        {/* Text (hidden when it is just "Anhang empfangen" and an image is present) */}
        {!(item.text === 'Anhang empfangen' && item.attachments?.some(a => a.type === 'image' && a.uri)) && (
          <MessageText
            text={showSystemHints ? item.text : stripSystemHints(item.text)}
            style={[styles.messageText, isUser ? styles.userText : styles.ariaText]}
          />
        )}
@@ -1908,7 +2190,13 @@ const ChatScreen: React.FC = () => {
          {connectionState === 'connected' ? 'Verbunden' :
           connectionState === 'connecting' ? 'Verbinde...' : 'Getrennt'}
        </Text>
        <TouchableOpacity onPress={() => setThoughtsVisible(true)} style={{marginLeft: 'auto', paddingHorizontal: 6, flexDirection: 'row', alignItems: 'center'}} hitSlop={{top:8,bottom:8,left:6,right:6}}>
          <Text style={{fontSize: 16}}>{'\uD83D\uDCAD'}</Text>
          {thoughts.length > 0 ? (
            <Text style={{color: '#8888AA', fontSize: 11, marginLeft: 3}}>{thoughts.length}</Text>
          ) : null}
        </TouchableOpacity>
        <TouchableOpacity onPress={() => setInboxVisible(true)} style={{paddingHorizontal: 6}} hitSlop={{top:8,bottom:8,left:6,right:6}}>
          <Text style={{fontSize: 18}}>{'\uD83D\uDDC2\uFE0F'}</Text>
        </TouchableOpacity>
        <TouchableOpacity onPress={() => setSearchVisible(!searchVisible)} style={{paddingHorizontal: 6}} hitSlop={{top:8,bottom:8,left:6,right:6}}>
@@ -1925,7 +2213,7 @@ const ChatScreen: React.FC = () => {
    const allReady = !anyLoading && !anyError && entries.every(([, v]) => v.state === 'ready');
    const bg = anyError ? '#3A1F1F' : anyLoading ? '#3A331F' : '#1F3A2A';
    const border = anyError ? '#FF3B30' : anyLoading ? '#FFD60A' : '#34C759';
    const labels: Record<string, string> = { f5tts: 'F5-TTS', whisper: 'Whisper STT' };
    const labels: Record<string, string> = { f5tts: 'F5-TTS', whisper: 'Whisper STT', flux: 'FLUX Image-Gen' };
    return (
      <TouchableOpacity
        activeOpacity={allReady ? 0.6 : 1.0}
@@ -1935,11 +2223,16 @@ const ChatScreen: React.FC = () => {
        {entries.map(([svc, info]) => {
          let icon = '\u23F3', text = '';
          if (info.state === 'loading') {
            text = `${labels[svc] || svc}: laedt${info.model ? ' ' + info.model : ''}...`;
            icon = info.downloading ? '\u2B07' : '\u23F3'; // \u2B07 vs \u23F3
            const action = info.downloading
              ? 'laedt erstmalig runter (mehrere GB, kann dauern)'
              : 'laedt';
            text = `${labels[svc] || svc}: ${action}${info.model ? ' ' + info.model : ''}...`;
          } else if (info.state === 'ready') {
            icon = '\u2705';
            icon = info.freshlyDownloaded ? '\uD83C\uDF89' : '\u2705'; // \uD83C\uDF89 vs \u2705
            const sec = info.loadSeconds ? ` (${info.loadSeconds.toFixed(1)}s)` : '';
            text = `${labels[svc] || svc}: bereit${info.model ? ' ' + info.model : ''}${sec}`;
            const dl = info.freshlyDownloaded ? ' \u2014 Download fertig!' : '';
            text = `${labels[svc] || svc}: bereit${info.model ? ' ' + info.model : ''}${sec}${dl}`;
          } else if (info.state === 'error') {
            icon = '\u274C';
            text = `${labels[svc] || svc}: Fehler ${info.error || ''}`;
@@ -2001,6 +2294,13 @@ const ChatScreen: React.FC = () => {
          ref={flatListRef}
          inverted
          data={invertedMessages}
          // Mehr Items beim Mount messen → bessere averageItemLength fuer
          // Such-Sprung gleich nach App-Start. Default sind 10 Items, das
          // ist bei 300+ Bubbles im Backup viel zu wenig.
          initialNumToRender={30}
          // Mehr Items im Speicher halten (Default 21 = 10 oben + 10 unten).
          // Macht scroll-to-far-away weniger anfaellig fuer Layout-Holes.
          windowSize={41}
          onScroll={(e) => {
            // Bei inverted FlatList: contentOffset.y > 0 = weg von "unten"
            // (= aelter scrollen). Wir zeigen den Jump-Down-Button ab ~250px.
@@ -2010,13 +2310,24 @@ const ChatScreen: React.FC = () => {
          scrollEventThrottle={120}
          onScrollToIndexFailed={(info) => {
            // FlatList kennt das Item-Layout noch nicht. Wir scrollen grob in
            // die Naehe (Average-Item-Hoehe-Schaetzung) und versuchen EINMAL
            // nach 300ms praezise nachzusetzen. Mehr Retries → Endlos-Cascade
            // (jeder failed Retry triggert wieder den Handler → 3, 9, 27 ...
            // Scrolls in der Pipeline = der "permanent springen"-Bug).
            // die Naehe (Average-Item-Hoehe-Schaetzung) und versuchen bis zu
            // MAX_SCROLL_RETRIES mal praezise nachzusetzen. Danach geben wir
            // auf — User sieht die Bubble in der ungefaehren Naehe und kann
            // selber finetunen. Frueher: jeder failed Retry triggerte einen
            // neuen Retry ohne Limit → "permanent springen"-Bug, vor allem
            // wenn waehrenddessen setMessages die Layouts invalidierte.
            const offset = info.averageItemLength * info.index;
            try { flatListRef.current?.scrollToOffset({ offset, animated: false }); } catch {}
            clearPendingScrollRetry();
            if (pendingScrollRetry.current) {
              clearTimeout(pendingScrollRetry.current);
              pendingScrollRetry.current = null;
            }
            if (scrollRetryCount.current >= MAX_SCROLL_RETRIES) {
              // Aufgeben — Item ist offenbar nicht stabil renderbar
              scrollRetryCount.current = 0;
              return;
            }
            scrollRetryCount.current += 1;
            pendingScrollRetry.current = setTimeout(() => {
              pendingScrollRetry.current = null;
              try { flatListRef.current?.scrollToIndex({ index: info.index, animated: true, viewPosition: 0 }); } catch {}
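Das Retry-Prinzip aus dem Handler oben laesst sich isoliert skizzieren — gedeckelte Retries plus hoechstens EIN pendender Timeout. Namen wie `BoundedRetry` und die Signatur der Scroll-Funktion sind hypothetisch und nicht aus dem Repo, die Logik (Timer clearen, Zaehler pruefen, bei Limit resetten und aufgeben) entspricht dem Handler:

```typescript
// Skizze der gedeckelten Scroll-Retries. Annahmen: MAX_SCROLL_RETRIES = 3,
// delayMs = Nachsetz-Verzoegerung; die eigentliche Scroll-Funktion wird injiziert.
const MAX_SCROLL_RETRIES = 3;

class BoundedRetry {
  private retries = 0;
  private pending: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private scroll: (index: number) => void,
    private delayMs = 300,
  ) {}

  // Vom Failed-Handler gerufen: plant hoechstens EINEN Nachversuch
  // und gibt nach MAX_SCROLL_RETRIES endgueltig auf.
  onFailed(index: number): 'retry' | 'giveup' {
    if (this.pending) { clearTimeout(this.pending); this.pending = null; }
    if (this.retries >= MAX_SCROLL_RETRIES) {
      this.retries = 0;       // Reset fuer den naechsten Such-Sprung
      return 'giveup';
    }
    this.retries += 1;
    this.pending = setTimeout(() => {
      this.pending = null;
      this.scroll(index);     // praezise nachsetzen
    }, this.delayMs);
    return 'retry';
  }
}
```

Der entscheidende Unterschied zur alten Version: ohne den Zaehler triggert jeder fehlgeschlagene Nachversuch wieder den Handler, die Scroll-Aufrufe multiplizieren sich, und die Liste "springt permanent".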
@@ -2183,6 +2494,110 @@ const ChatScreen: React.FC = () => {
        </ErrorBoundary>
      ) : null}

      {/* Gedanken-Stream — chronologisches Log von ARIAs interner Aktivitaet.
          Bottom-Sheet (slide-up), 60% Bildschirmhoehe. Mülltonne zum Leeren. */}
      <Modal
        visible={thoughtsVisible}
        animationType="slide"
        transparent
        onRequestClose={() => setThoughtsVisible(false)}
      >
        <View style={{flex:1, backgroundColor:'rgba(0,0,0,0.5)', justifyContent:'flex-end'}}>
          {/* Tap-Outside-Bereich oberhalb des Sheets — separater Touchable
              damit das Sheet-View NICHT als Responder den FlatList-Scroll
              blockiert. Frueher hatten wir den ganzen Hintergrund als
              TouchableOpacity + inneren View mit onStartShouldSetResponder
              = das hat alle Touch-Events kassiert. */}
          <TouchableOpacity
            style={{flex:1}}
            activeOpacity={1}
            onPress={() => setThoughtsVisible(false)}
          />
          <View
            style={{height:'60%', backgroundColor:'#0D0D1A', borderTopLeftRadius:16, borderTopRightRadius:16}}
          >
            {/* Drag-Indicator */}
            <View style={{alignItems:'center', paddingTop:8, paddingBottom:4}}>
              <View style={{width:40, height:4, borderRadius:2, backgroundColor:'#2A2A3E'}} />
            </View>
            <View style={{flexDirection:'row', alignItems:'center', padding:14, borderBottomWidth:1, borderBottomColor:'#1E1E2E'}}>
              <Text style={{color:'#FFD60A', fontWeight:'bold', fontSize:16, flex:1}}>
                {'💭'} Gedanken-Stream {thoughts.length > 0 ? `(${thoughts.length})` : ''}
              </Text>
              {thoughts.length > 0 ? (
                <TouchableOpacity
                  onPress={() => {
                    Alert.alert('Gedanken-Stream leeren?', `Alle ${thoughts.length} Eintraege werden geloescht.`, [
                      { text: 'Abbrechen', style: 'cancel' },
                      { text: 'Leeren', style: 'destructive', onPress: () => {
                        setThoughts([]);
                        lastThoughtKeyRef.current = '';
                      } },
                    ]);
                  }}
                  hitSlop={{top:8,bottom:8,left:8,right:8}}
                  style={{paddingHorizontal:8}}
                >
                  <Text style={{fontSize:18}}>{'🗑'}</Text>
                </TouchableOpacity>
              ) : null}
              <TouchableOpacity onPress={() => setThoughtsVisible(false)} hitSlop={{top:8,bottom:8,left:8,right:8}}>
                <Text style={{color:'#8888AA', fontSize:24}}>×</Text>
              </TouchableOpacity>
            </View>
            {thoughts.length === 0 ? (
              <View style={{flex:1, alignItems:'center', justifyContent:'center', padding:24}}>
                <Text style={{color:'#555570', fontSize:13, fontStyle:'italic', textAlign:'center'}}>
                  Noch keine Gedanken aufgezeichnet.{'\n'}Sobald ARIA was tut, taucht's hier auf.
                </Text>
              </View>
            ) : (
              <FlatList
                data={thoughts}
                keyExtractor={(_, i) => `t_${i}`}
                contentContainerStyle={{paddingVertical:8}}
                renderItem={({ item, index }) => {
                  const prev = index > 0 ? thoughts[index - 1] : null;
                  // Lange Pause? → Trenn-Linie mit Minuten-Hint
                  const gapMin = prev ? Math.floor((item.ts - prev.ts) / 60000) : 0;
                  const showGap = gapMin >= 1;
                  const time = new Date(item.ts).toLocaleTimeString('de-DE', {hour:'2-digit', minute:'2-digit', second:'2-digit'});
                  const icon =
                    item.activity === 'idle' ? '✓' :
                    item.activity === 'tool' ? '🔧' :
                    item.activity === 'assistant' ? '✍️' :
                    item.activity === 'thinking' ? '💭' : '•';
                  const label =
                    item.activity === 'idle' ? 'fertig' :
                    item.activity === 'tool' ? (item.tool || 'tool') :
                    item.activity === 'assistant' ? 'schreibt' :
                    item.activity === 'thinking' ? 'denkt' : item.activity;
                  const isIdle = item.activity === 'idle';
                  return (
                    <View>
                      {showGap ? (
                        <View style={{flexDirection:'row', alignItems:'center', paddingHorizontal:16, paddingVertical:6}}>
                          <View style={{flex:1, height:1, backgroundColor:'#1E1E2E'}} />
                          <Text style={{color:'#555570', fontSize:10, paddingHorizontal:8}}>
                            {gapMin < 60 ? `${gapMin} Min` : `${Math.floor(gapMin/60)}h ${gapMin%60}m`}
                          </Text>
                          <View style={{flex:1, height:1, backgroundColor:'#1E1E2E'}} />
                        </View>
                      ) : null}
                      <View style={{flexDirection:'row', paddingHorizontal:16, paddingVertical:5}}>
                        <Text style={{color:'#555570', fontSize:11, width:78}}>{time}</Text>
                        <Text style={{fontSize:13, width:24}}>{icon}</Text>
                        <Text style={{color: isIdle ? '#34C759' : '#E0E0F0', fontSize:13, flex:1}}>{label}</Text>
                      </View>
                    </View>
                  );
                }}
              />
            )}
          </View>
        </View>
      </Modal>

      {/* Notizen-Inbox — Listet alle Memories aus dem aktuellen Chat (Special-Bubbles).
          Bestes-Aus-beiden-Welten: nur die Memory-IDs aus den memorySaved-Bubbles
          des aktuellen Chats, plus den vollen Browser darunter wenn der User mehr will. */}
@@ -2215,7 +2630,7 @@ const ChatScreen: React.FC = () => {
          <Text style={{color:'#8888AA', fontSize:11, paddingHorizontal:14, paddingTop:8, paddingBottom:4, textTransform:'uppercase', letterSpacing:0.5}}>
            Aus diesem Chat
          </Text>
          <ScrollView style={{paddingHorizontal:8}}>
          <ScrollView style={{paddingHorizontal:8}} nestedScrollEnabled={true}>
            {specials.map(m => {
              if (m.memorySaved) {
                const ms = m.memorySaved;
@@ -2271,7 +2686,12 @@ const ChatScreen: React.FC = () => {
          <Text style={{color:'#8888AA', fontSize:11, paddingHorizontal:14, paddingTop:10, paddingBottom:4, textTransform:'uppercase', letterSpacing:0.5}}>
            Alle Memories aus der DB
          </Text>
          <MemoryBrowser onOpenMemory={(id) => { setInboxVisible(false); setMemoryDetailId(id); }} />
          {/* flex:1 Wrapper damit MemoryBrowser den verbleibenden Platz
              bekommt (sonst rendert die FlatList intern mit 0 Hoehe und
              nimmt nur was der Inhalt sagt → Scroll-Gestures verschwinden). */}
          <View style={{flex:1}}>
            <MemoryBrowser onOpenMemory={(id) => { setInboxVisible(false); setMemoryDetailId(id); }} />
          </View>
        </View>
      </ErrorBoundary>
    </Modal>

@@ -19,6 +19,7 @@ import {
  ActivityIndicator,
  Modal,
  PermissionsAndroid,
  useWindowDimensions,
} from 'react-native';
import AsyncStorage from '@react-native-async-storage/async-storage';
import RNFS from 'react-native-fs';
@@ -52,7 +53,9 @@ import {
} from '../services/audio';
import audioService from '../services/audio';
import gpsTrackingService from '../services/gpsTracking';
import { acquireBackgroundAudio, releaseBackgroundAudio } from '../services/backgroundAudio';
import MemoryBrowser from '../components/MemoryBrowser';
import TriggerBrowser from '../components/TriggerBrowser';
import { isVerboseLogging, setVerboseLogging } from '../services/logger';
import {
  isWakeReadySoundEnabled,
@@ -102,6 +105,7 @@ const SETTINGS_SECTIONS = [
  { id: 'storage', icon: '📁', label: 'Speicher', desc: 'Anhang-Speicherort, Auto-Download' },
  { id: 'files', icon: '📂', label: 'Dateien', desc: 'ARIA- und User-Dateien — anzeigen, löschen' },
  { id: 'memory', icon: '🧠', label: 'Gedächtnis', desc: 'ARIA-Memories durchsuchen, anlegen, bearbeiten, löschen' },
  { id: 'triggers', icon: '⏰', label: 'Trigger', desc: 'Timer + Watcher anlegen, bearbeiten, löschen' },
  { id: 'protocol', icon: '📜', label: 'Protokoll', desc: 'Privatsphaere, Backup' },
  { id: 'about', icon: 'ℹ️', label: 'Ueber', desc: 'App-Version, Update' },
] as const;
@@ -118,6 +122,7 @@ const SOURCE_COLORS: Record<string, string> = {
// --- Komponente ---

const SettingsScreen: React.FC = () => {
  const winDims = useWindowDimensions();
  const [connectionState, setConnectionState] = useState<ConnectionState>('disconnected');
  const [manualToken, setManualToken] = useState('');
  const [manualHost, setManualHost] = useState('');
@@ -125,6 +130,8 @@ const SettingsScreen: React.FC = () => {
  const [currentMode, setCurrentMode] = useState('normal');
  const [gpsEnabled, setGpsEnabled] = useState(false);
  const [gpsTracking, setGpsTracking] = useState(gpsTrackingService.isActive());
  const [backgroundMode, setBackgroundMode] = useState(true); // Default an
  const [showSystemHints, setShowSystemHints] = useState(false); // Default aus
  const [scannerVisible, setScannerVisible] = useState(false);
  const [logTab, setLogTab] = useState<LogTab>('live');
  const [logs, setLogs] = useState<LogEntry[]>([]);
@@ -192,6 +199,14 @@ const SettingsScreen: React.FC = () => {
    AsyncStorage.getItem('aria_gps_enabled').then(saved => {
      if (saved !== null) setGpsEnabled(saved === 'true');
    });
    AsyncStorage.getItem('aria_background_mode').then(saved => {
      // Default ist an — nur explizit 'false' deaktiviert
      setBackgroundMode(saved !== 'false');
    });
    AsyncStorage.getItem('aria_show_hints').then(saved => {
      // Default ist aus — nur explizit 'true' aktiviert
      setShowSystemHints(saved === 'true');
    });
    // gpsTrackingService-Status syncen + auf Aenderungen lauschen
    setGpsTracking(gpsTrackingService.isActive());
    const offGps = gpsTrackingService.onChange(setGpsTracking);
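Die Default-Logik der beiden Toggles oben laesst sich als pure Funktionen fassen (Funktionsnamen hypothetisch, die Vergleiche sind exakt die aus dem Diff): ein nie gesetzter Key liefert `null` und faellt auf den jeweiligen Default zurueck.

```typescript
// Default an: nur der persistierte String 'false' deaktiviert.
function parseBackgroundMode(saved: string | null): boolean {
  return saved !== 'false';
}

// Default aus: nur der persistierte String 'true' aktiviert.
function parseShowHints(saved: string | null): boolean {
  return saved === 'true';
}
```

Der Unterschied der beiden Vergleiche ist genau das, was "Default an" vs. "Default aus" bei String-persistierten Booleans ausmacht.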
@@ -575,6 +590,44 @@ const SettingsScreen: React.FC = () => {
    AsyncStorage.setItem('aria_gps_enabled', String(value)).catch(() => {});
  }, []);

  // --- Hintergrund-Modus Toggle ---

  const handleBackgroundModeToggle = useCallback(async (value: boolean) => {
    setBackgroundMode(value);
    AsyncStorage.setItem('aria_background_mode', String(value)).catch(() => {});
    try {
      if (value) {
        // Permission fuer Notification (Android 13+) — sonst sieht der User
        // den Hintergrund-Modus nicht und wundert sich
        if (Platform.OS === 'android' && Platform.Version >= 33) {
          await PermissionsAndroid.request(
            'android.permission.POST_NOTIFICATIONS' as any,
            {
              title: 'Hintergrund-Modus',
              message: 'ARIA zeigt eine Notification damit die App im Hintergrund laufen darf.',
              buttonPositive: 'Erlauben',
              buttonNegative: 'Spaeter',
            },
          );
        }
        await acquireBackgroundAudio('background');
        ToastAndroid.show('Hintergrund-Modus aktiv', ToastAndroid.SHORT);
      } else {
        await releaseBackgroundAudio('background');
        ToastAndroid.show('Hintergrund-Modus aus', ToastAndroid.SHORT);
      }
    } catch (err: any) {
      console.warn('[Settings] Background-Toggle gescheitert:', err?.message || err);
    }
  }, []);

  // --- System-Hints Toggle ---

  const handleShowSystemHintsToggle = useCallback((value: boolean) => {
    setShowSystemHints(value);
    AsyncStorage.setItem('aria_show_hints', String(value)).catch(() => {});
  }, []);

  // --- XTTS Voice ---

  const selectVoice = useCallback((voiceName: string) => {
@@ -868,7 +921,15 @@ const SettingsScreen: React.FC = () => {
        })()}
      </View>
    </Modal>
    <ScrollView style={styles.container} contentContainerStyle={styles.content} nestedScrollEnabled={true}>
    <ScrollView
      style={styles.container}
      contentContainerStyle={styles.content}
      nestedScrollEnabled={true}
      // Wenn eine Section eine eigene voll-hoch-scrollende Sub-Liste hat
      // (Memory, Trigger), den outer Scroll deaktivieren — Android-nested-
      // scrolling laesst sonst nur in eine Richtung scrollen.
      scrollEnabled={currentSection !== 'memory' && currentSection !== 'triggers'}
    >

      {currentSection === null && (
        <>
@@ -1053,6 +1114,55 @@ const SettingsScreen: React.FC = () => {
            />
          </View>
        </View>

        {/* === Bubble-Anzeige === */}
        <Text style={styles.sectionTitle}>Chat-Bubbles</Text>
        <View style={styles.card}>
          <View style={styles.toggleRow}>
            <View style={styles.toggleInfo}>
              <Text style={styles.toggleLabel}>System-Hints in Bubbles anzeigen</Text>
              <Text style={styles.toggleHint}>
                Wenn aktiviert: GPS-Position, Barge-In-Hinweise und andere
                System-Praefixe in eckigen Klammern bleiben in der User-Bubble
                sichtbar (Debug). Standardmaessig versteckt — Brain bekommt sie
                trotzdem, sie sind nur fuer dich nicht relevant.
              </Text>
            </View>
            <Switch
              value={showSystemHints}
              onValueChange={handleShowSystemHintsToggle}
              trackColor={{ false: '#2A2A3E', true: '#0096FF' }}
              thumbColor={showSystemHints ? '#FFFFFF' : '#666680'}
            />
          </View>
        </View>

        {/* === Hintergrund-Modus === */}
        <Text style={styles.sectionTitle}>Hintergrund-Modus</Text>
        <View style={styles.card}>
          <View style={styles.toggleRow}>
            <View style={styles.toggleInfo}>
              <Text style={styles.toggleLabel}>App im Hintergrund weiterlaufen</Text>
              <Text style={styles.toggleHint}>
                Haelt die Verbindung zu ARIA auch dann offen wenn die App minimiert
                ist. Sonst pausiert Android nach ~30s die JS-Engine und Timer-/Watcher-
                Trigger kommen nicht durch. Notification "ARIA aktiv" bleibt sichtbar
                waehrend der Modus laeuft (das ist Android-Vorschrift fuer Foreground-
                Services). Akku-Mehrverbrauch minimal solange ARIA nichts tut.
                {'\n\n'}
                Wenn nach Akku-Optimierung Trigger trotzdem nicht durchkommen:
                Android-Einstellungen → Apps → ARIA Cockpit → Akku → "Uneingeschraenkt"
                setzen.
              </Text>
            </View>
            <Switch
              value={backgroundMode}
              onValueChange={handleBackgroundModeToggle}
              trackColor={{ false: '#2A2A3E', true: '#0096FF' }}
              thumbColor={backgroundMode ? '#FFFFFF' : '#666680'}
            />
          </View>
        </View>
      </>)}

      {/* === Spracheingabe (geraetelokal) === */}
@@ -1682,11 +1792,23 @@ const SettingsScreen: React.FC = () => {
        Alle Memory-Einträge aus ARIAs Vector-DB. Tippen zum Bearbeiten — mit Anhängen, pinned-Status,
        Tags. Neue Einträge anlegen via "+ Neu".
      </Text>
      <View style={{height: 600, marginBottom: 8}}>
      <View style={{height: winDims.height - 220, marginBottom: 8}}>
        <MemoryBrowser />
      </View>
      </>)}

      {/* === Trigger === */}
      {currentSection === 'triggers' && (<>
      <Text style={styles.sectionTitle}>Trigger</Text>
      <Text style={{color: '#8888AA', fontSize: 12, marginBottom: 8, paddingHorizontal: 4}}>
        Timer (einmalige Erinnerung) + Watcher (recurring mit Condition, z.B. GPS-near). Toggle aktiv/inaktiv,
        Tap zum Bearbeiten, "+ Neu" zum Anlegen.
      </Text>
      <View style={{height: winDims.height - 220, marginBottom: 8}}>
        <TriggerBrowser />
      </View>
      </>)}

      {/* === Logs === */}
      {currentSection === 'protocol' && (<>
      <Text style={styles.sectionTitle}>Protokoll</Text>
@@ -1798,7 +1920,7 @@ const SettingsScreen: React.FC = () => {
      <Text style={styles.aboutTitle}>ARIA Cockpit</Text>
      <Text style={styles.aboutVersion}>Version {require('../../package.json').version}</Text>
      <Text style={styles.aboutInfo}>
        ARIA \u2014 Autonomous Reasoning & Intelligence Assistant.{'\n'}
        ARIA {'\u2014'} Autonomous Reasoning & Intelligence Assistant.{'\n'}
        Stefans Kommandozentrale.{'\n'}
        Gebaut mit React Native + TypeScript.
      </Text>

@@ -727,6 +727,31 @@ class AudioService {
    }
  }

  /** Aufnahme abbrechen ohne RecordingResult zu emittieren — z.B. bei
   * Wake-Word-False-Positive beim App-Resume aus laengerem Hintergrund.
   * Aufgenommene Datei wird sofort verworfen. */
  async cancelRecording(): Promise<void> {
    if (this.recordingState !== 'recording') return;
    console.log('[Audio] Aufnahme abgebrochen (cancel)');
    this.vadEnabled = false;
    if (this.vadTimer) { clearInterval(this.vadTimer); this.vadTimer = null; }
    if (this.maxDurationTimer) { clearTimeout(this.maxDurationTimer); this.maxDurationTimer = null; }
    if (this.noSpeechTimer) { clearTimeout(this.noSpeechTimer); this.noSpeechTimer = null; }
    try {
      const path = await this.recorder.stopRecorder();
      this.recorder.removeRecordBackListener();
      // Datei loeschen wenn da
      if (path && path !== 'Already stopped') {
        const local = path.replace(/^file:\/\//, '');
        try { await RNFS.unlink(local); } catch {}
      }
    } catch (err) {
      console.warn('[Audio] cancelRecording stop fehlgeschlagen:', err);
    }
    this._releaseFocusDeferred();
    this.setState('idle');
  }

  /** Aufnahme stoppen und Ergebnis zurueckgeben */
  async stopRecording(): Promise<RecordingResult | null> {
    if (this.recordingState !== 'recording') {

@@ -1,17 +1,21 @@
/**
 * Background-Audio: ARIAs TTS, Mic-Aufnahme und Wake-Word-Lauschen sollen
 * auch bei minimierter App weiterlaufen. Wir starten dafuer einen Foreground-
 * Background-Audio + Hintergrund-Persistenz: ARIAs TTS, Mic-Aufnahme,
 * Wake-Word-Lauschen UND der allgemeine Hintergrund-Modus laufen
 * weiter wenn die App minimiert ist. Wir starten dafuer einen Foreground-
 * Service mit foregroundServiceType=mediaPlayback|microphone, der eine
 * persistente Notification zeigt waehrend irgendein Audio-Slot aktiv ist.
 * persistente Notification zeigt solange irgendein Slot aktiv ist.
 *
 * Mehrere Komponenten koennen den Service unabhaengig "halten":
 * - 'tts'  : ARIA spricht
 * - 'rec'  : Aufnahme laeuft
 * - 'wake' : Wake-Word lauscht passiv (Ohr aktiv)
 * - 'tts'        : ARIA spricht
 * - 'rec'        : Aufnahme laeuft
 * - 'wake'       : Wake-Word lauscht passiv (Ohr aktiv)
 * - 'background' : Persistenter Hintergrund-Modus (Settings-Toggle).
 *                  Haelt JS-Engine + WebSocket auch ohne Audio am Leben
 *                  → Trigger-Replies, Reconnects, Push-Reaktionen.
 *
 * Solange mindestens ein Slot aktiv ist, laeuft der Service. Wenn alle
 * Slots leer sind, wird er gestoppt. Der Notification-Text passt sich an
 * den hoechstprioren Slot an (tts > rec > wake).
 * den hoechstprioren Slot an (tts > rec > wake > background).
 */

import { NativeModules } from 'react-native';
@@ -23,12 +27,13 @@ interface BackgroundAudioNative {

const { BackgroundAudio } = NativeModules as { BackgroundAudio?: BackgroundAudioNative };

type Slot = 'tts' | 'rec' | 'wake';
type Slot = 'tts' | 'rec' | 'wake' | 'background';

const slots = new Set<Slot>();

// Prioritaet fuer den Notification-Text — hoechste zuerst.
const PRIORITY: Slot[] = ['tts', 'rec', 'wake'];
// Prioritaet fuer den Notification-Text — hoechste zuerst. 'background'
// ist die fallback-Anzeige wenn nichts anderes laeuft.
const PRIORITY: Slot[] = ['tts', 'rec', 'wake', 'background'];

function topReason(): string {
  for (const s of PRIORITY) {

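Das Slot-plus-Prioritaet-Prinzip aus dem Header laesst sich kompakt skizzieren. Die Notification-Texte in `REASONS` sind Annahmen zur Illustration (der echte `topReason()`-Body ist im Diff abgeschnitten), `Slot` und `PRIORITY` entsprechen dem Diff:

```typescript
type Slot = 'tts' | 'rec' | 'wake' | 'background';

// Hoechste Prioritaet zuerst; 'background' ist der Fallback.
const PRIORITY: Slot[] = ['tts', 'rec', 'wake', 'background'];

// Texte hypothetisch — nur zur Veranschaulichung des Prinzips.
const REASONS: Record<Slot, string> = {
  tts: 'ARIA spricht',
  rec: 'Aufnahme laeuft',
  wake: 'Wake-Word lauscht',
  background: 'ARIA aktiv',
};

const slots = new Set<Slot>();

// Liefert den Text des hoechstprioren aktiven Slots; leer wenn kein
// Slot gehalten wird (dann wird der Service gestoppt).
function topReason(): string {
  for (const s of PRIORITY) {
    if (slots.has(s)) return REASONS[s];
  }
  return '';
}
```

Weil `slots` ein Set ist, koennen mehrere Komponenten denselben Slot nicht doppelt halten, und der Service laeuft genau solange `slots.size > 0`.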
@@ -121,6 +121,24 @@ export interface Memory {
  attachments?: MemoryAttachment[];
}

/** Trigger-Manifest wie aus Brain `/triggers/list` zurueckkommt. */
export interface Trigger {
  name: string;
  type: 'timer' | 'watcher' | string;
  active: boolean;
  author?: string;
  message: string;
  fires_at?: string;            // ISO, nur timer
  condition?: string;           // nur watcher
  check_interval_sec?: number;  // nur watcher
  throttle_sec?: number;        // nur watcher
  fire_count?: number;
  last_fired_at?: string | null;
  last_checked_at?: string | null;
  created_at?: string;
  updated_at?: string;
}

// ── Memory CRUD ──────────────────────────────────────────────────────

export const brainApi = {
@@ -215,6 +233,74 @@ export const brainApi = {
      { expectBinary: true, timeoutMs: 60000 },
    );
  },

  // ── Triggers ────────────────────────────────────────────────────────

  /** Liste aller Trigger (aktive + inaktive). */
  listTriggers(): Promise<Trigger[]> {
    return _send('/triggers/list');
  },

  /** Einzelnen Trigger holen (inkl. fire_count, last_fired_at, ...). */
  getTrigger(name: string): Promise<Trigger> {
    return _send(`/triggers/${encodeURIComponent(name)}`);
  },

  /** Verfuegbare Condition-Variablen + Funktionen (fuer Watcher-Editor). */
  getTriggerConditions(): Promise<{ variables: any[]; functions: any[] }> {
    return _send('/triggers/conditions');
  },

  /** Trigger-Logs (letzte N Feuerungen). */
  getTriggerLogs(name: string, limit: number = 50): Promise<any[]> {
    return _send(`/triggers/${encodeURIComponent(name)}/logs?limit=${limit}`);
  },

  /** Timer anlegen. fires_at = ISO-Timestamp (UTC). */
  createTimer(body: { name: string; fires_at: string; message: string; author?: string }): Promise<Trigger> {
    return _send('/triggers/timer', {
      method: 'POST',
      body: { author: 'app', ...body },
    });
  },

  /** Watcher anlegen. */
  createWatcher(body: {
    name: string;
    condition: string;
    message: string;
    check_interval_sec?: number;
    throttle_sec?: number;
    author?: string;
  }): Promise<Trigger> {
    return _send('/triggers/watcher', {
      method: 'POST',
      body: { author: 'app', ...body },
    });
  },

  /** Trigger patchen (active/message/condition/throttle/interval/fires_at). */
  updateTrigger(name: string, body: Partial<{
    active: boolean;
    message: string;
    condition: string;
    throttle_sec: number;
    check_interval_sec: number;
    fires_at: string;
  }>): Promise<Trigger> {
    return _send(`/triggers/${encodeURIComponent(name)}`, {
      method: 'PATCH',
      body,
    });
  },

  /** Trigger loeschen. */
  deleteTrigger(name: string): Promise<{ deleted: string }> {
    return _send(`/triggers/${encodeURIComponent(name)}`, {
      method: 'DELETE',
      timeoutMs: 15000,
    });
  },
};

export default brainApi;

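Da `createTimer` laut Doku einen ISO-Timestamp (UTC) erwartet und `author: 'app'` als Default mergt, laesst sich der Request-Body clientseitig so bauen (Helper-Name `timerBody` ist hypothetisch, nicht Teil der API):

```typescript
// Skizze: baut den POST-Body fuer brainApi.createTimer — fires_at als
// ISO-UTC-String N Minuten in der Zukunft. Date.prototype.toISOString()
// liefert immer UTC ('...Z'), genau das erwartete Format.
function timerBody(name: string, minutesFromNow: number, message: string) {
  const fires_at = new Date(Date.now() + minutesFromNow * 60_000).toISOString();
  return { author: 'app', name, fires_at, message };
}
```

Aufruf waere dann etwa `brainApi.createTimer(timerBody('tee', 5, 'Tee ist fertig'))`; weil der API-Code `{ author: 'app', ...body }` spreadet, ueberschreibt ein explizit gesetzter `author` im Body den Default.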
@@ -26,6 +26,13 @@ class GpsTrackingService {
  private listeners: Set<Listener> = new Set();
  // Defensive: nicht zu schnell oeffentlich togglen
  private lastChangeAt = 0;
  // Letzte bekannte Position — wird vom Heartbeat-Timer alle 60s erneut
  // an die Bridge gesendet, sonst veraltet near() im Brain (NEAR_MAX_AGE_SEC
  // = 5 min) wenn der User stationaer ist und distanceFilter keine Updates
  // mehr triggert.
  private lastLat: number | null = null;
  private lastLon: number | null = null;
  private heartbeatTimer: ReturnType<typeof setInterval> | null = null;

  isActive(): boolean {
    return this.active;
@@ -84,6 +91,8 @@ class GpsTrackingService {
      (pos) => {
        const lat = pos.coords.latitude;
        const lon = pos.coords.longitude;
        this.lastLat = lat;
        this.lastLon = lon;
        rvs.send('location_update' as any, { lat, lon });
      },
      (err) => {
@@ -96,6 +105,17 @@ class GpsTrackingService {
        fastestInterval: 10000, // (Android) max Frequenz
      } as any,
    );
    // Heartbeat: alle 60s die letzte bekannte Position erneut senden.
    // Sonst bleibt der Brain-State stale wenn der User stationaer ist
    // (distanceFilter blockt watchPosition-Updates) → near()-Watcher
    // verwerfen die Position als veraltet (NEAR_MAX_AGE_SEC = 300s).
    // Kein neuer GPS-Wakeup, nur Re-Send der letzten Werte → akkufreundlich.
    if (this.heartbeatTimer) clearInterval(this.heartbeatTimer);
    this.heartbeatTimer = setInterval(() => {
      if (this.lastLat != null && this.lastLon != null) {
        rvs.send('location_update' as any, { lat: this.lastLat, lon: this.lastLon });
      }
    }, 60_000);
    this.active = true;
    this.lastChangeAt = Date.now();
    this.notify();
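Der Kern des Heartbeats — letzte Position cachen und ohne neuen GPS-Fix erneut senden — laesst sich timerfrei skizzieren und damit testbar machen (Namen `makeHeartbeat`/`tick` hypothetisch; im echten Service uebernimmt `setInterval` das Aufrufen von `tick`):

```typescript
// Skizze: update() kommt von watchPosition (echter Fix), tick() ist der
// 60s-Heartbeat — er sendet nur, wenn schon eine Position bekannt ist.
type Send = (lat: number, lon: number) => void;

function makeHeartbeat(send: Send) {
  let lastLat: number | null = null;
  let lastLon: number | null = null;
  return {
    update(lat: number, lon: number) {
      lastLat = lat;
      lastLon = lon;
      send(lat, lon);
    },
    tick() {
      // Kein GPS-Wakeup: nur Re-Send der gecachten Werte.
      if (lastLat != null && lastLon != null) send(lastLat, lastLon);
    },
  };
}
```

So haelt der Re-Send den `location_update`-Timestamp im Brain frisch, obwohl `distanceFilter` bei stationaerem User keine echten Updates mehr liefert.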
@@ -118,6 +138,10 @@ class GpsTrackingService {
      try { Geolocation.clearWatch(this.watchId); } catch {}
      this.watchId = null;
    }
    if (this.heartbeatTimer) {
      clearInterval(this.heartbeatTimer);
      this.heartbeatTimer = null;
    }
    this.active = false;
    this.lastChangeAt = Date.now();
    this.notify();

@@ -43,6 +43,42 @@ class PhoneCallService {
  /** Damit Resume nach VoIP-Loss nicht doppelt feuert wenn auch
   * TelephonyManager-IDLE-Event kommt. */
  private interruptedByFocus: boolean = false;
  /** True wenn der TelephonyManager-Listener (Pfad 1) wirklich registriert
   * ist. False wenn READ_PHONE_STATE abgelehnt wurde oder Native nicht ging. */
  private telephonyAttached: boolean = false;

  /** Status fuer Diagnose: laeuft die Anruf-Erkennung tatsaechlich? */
  status(): { focusAttached: boolean; telephonyAttached: boolean } {
    return {
      focusAttached: this.focusSubscription !== null,
      telephonyAttached: this.telephonyAttached,
    };
  }

  /** Nach App-Resume: pruefen ob die Listener noch leben. Wenn der
   * TelephonyManager-Listener verloren ging (kann passieren wenn der
   * React-Bridge-Context recreated wurde), neu attachen. */
  async refresh(): Promise<void> {
    if (!this.started) return;
    if (this.telephonyAttached) return; // alles ok
    if (!PhoneCall) return;
    try {
      const ok = await PhoneCall.start();
      if (ok) {
        if (!this.subscription) {
          const emitter = new NativeEventEmitter(NativeModules.PhoneCall as any);
          this.subscription = emitter.addListener(
            'PhoneCallStateChanged',
            (e: { state: PhoneState }) => this._onStateChanged(e.state),
          );
        }
        this.telephonyAttached = true;
        console.log('[PhoneCall] refresh: TelephonyManager-Listener re-attached');
      }
    } catch (err: any) {
      console.warn('[PhoneCall] refresh fehlgeschlagen:', err?.message || err);
    }
  }

async start(): Promise<boolean> {
|
||||
if (this.started || Platform.OS !== 'android') return false;
|
||||
@@ -82,7 +118,10 @@ class PhoneCallService {
|
||||
'PhoneCallStateChanged',
|
||||
(e: { state: PhoneState }) => this._onStateChanged(e.state),
|
||||
);
|
||||
this.telephonyAttached = true;
|
||||
console.log('[PhoneCall] TelephonyManager-Listener aktiv');
|
||||
} else {
|
||||
console.warn('[PhoneCall] PhoneCall.start() lieferte false — Native-Listener nicht aktiv');
|
||||
}
|
||||
} else {
|
||||
console.warn('[PhoneCall] READ_PHONE_STATE abgelehnt — VoIP-Calls werden trotzdem ueber AudioFocus erkannt');
|
||||
@@ -108,6 +147,7 @@ class PhoneCallService {
|
||||
this.started = false;
|
||||
this.lastState = 'idle';
|
||||
this.interruptedByFocus = false;
|
||||
this.telephonyAttached = false;
|
||||
}
|
||||
|
||||
private _onStateChanged(state: PhoneState): void {
|
||||
|
||||
@@ -86,6 +86,11 @@ class WakeWordService {
   * oft einen Audio-Pegel-Spike (AudioFocus-Switch, AudioTrack re-route),
   * der openWakeWord faelschlich triggern kann. */
  private cooldownUntilMs: number = 0;
  /** Zeitpunkt des letzten echten Wake-Word-Triggers — gebraucht damit
   * ChatScreen entscheiden kann ob ein 'conversing'-State bei App-Resume
   * ein false-positive war (Wake-Word im Hintergrund getriggert waehrend
   * Stefan gar nicht in der App war). */
  private lastTriggerAt: number = 0;

  private keyword: WakeKeyword = DEFAULT_KEYWORD;
  private nativeReady: boolean = false;
@@ -231,6 +236,7 @@ class WakeWordService {
    }
    console.log('[WakeWord] Wake-Word "%s" erkannt! (state=%s, barge=%s)',
      this.keyword, this.state, this.bargeListening);
    this.lastTriggerAt = now;
    if (this.nativeReady && OpenWakeWord) {
      try { await OpenWakeWord.stop(); } catch {}
    }
@@ -341,6 +347,33 @@ class WakeWordService {
    this.setState('off');
  }

  /** Wenn ein conversing-State auf einem Wake-Word-Trigger juenger als
   * maxAgeMs basiert: false-positive verwerfen, zurueck zu armed.
   * Wird vom ChatScreen aufgerufen wenn die App aus laengerem Hintergrund
   * zurueck kommt — dann ist ein „gerade getriggertes" Wake-Word sehr
   * wahrscheinlich ein TV-Spike, Husten, ARIAs eigene TTS-Aufnahme etc.
   * Returnt true wenn verworfen wurde. */
  async discardIfFreshlyTriggered(maxAgeMs: number = 10_000): Promise<boolean> {
    if (this.state !== 'conversing') return false;
    if (this.lastTriggerAt === 0) return false;
    const age = Date.now() - this.lastTriggerAt;
    if (age > maxAgeMs) return false;
    console.log('[WakeWord] Resume: verwerfe verdaechtiges conversing (age=%dms)', age);
    this.lastTriggerAt = 0;
    if (this.nativeReady && OpenWakeWord) {
      try {
        await OpenWakeWord.start();
        ToastAndroid.show('Hintergrund-Trigger verworfen — lausche wieder', ToastAndroid.SHORT);
        this.setState('armed');
        return true;
      } catch (err) {
        console.warn('[WakeWord] re-arm nach discard fehlgeschlagen:', err);
      }
    }
    this.setState('off');
    return true;
  }

  /** Nach ARIA-Antwort (TTS fertig): naechste Aufnahme im Conversation-Window starten */
  async resume(): Promise<void> {
    if (this.state !== 'conversing') return;

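The resume-time guard in `discardIfFreshlyTriggered` reduces to a pure age check on the last trigger timestamp. A minimal sketch of that decision logic (written in Python to match the rest of the stack; `should_discard` is a hypothetical helper, not part of the diff):

```python
def should_discard(state: str, last_trigger_at: int, now: int,
                   max_age_ms: int = 10_000) -> bool:
    # Mirrors the TS guard: only a 'conversing' state backed by a
    # *fresh* trigger (age <= max_age_ms) is treated as a likely
    # background false positive and discarded.
    if state != "conversing":
        return False
    if last_trigger_at == 0:
        return False
    return (now - last_trigger_at) <= max_age_ms
```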
+165
-1
@@ -18,6 +18,9 @@ from __future__ import annotations

import json
import logging
import os
import urllib.error
import urllib.request
from typing import Optional

from conversation import Conversation, Turn
@@ -28,6 +31,33 @@ import skills as skills_mod
import triggers as triggers_mod
import watcher as watcher_mod

BRIDGE_URL = os.environ.get("BRIDGE_URL", "http://aria-bridge:8090")
# FLUX-Render kann bis ~90s dauern, beim ersten Render nach Container-Start
# laedt die flux-bridge zudem ~24 GB Modell von HF (~5-10 min). Brain wartet
# synchron — Stefan kuendigt es vorher an wenn er weiss dass es feuert.
FLUX_HTTP_TIMEOUT_SEC = 1200
# Diagnostic-Settings fuer FLUX (Default-Modell + User-Keywords) liegen im
# selben File wie F5-TTS/Whisper Config — von der aria-bridge geschrieben.
VOICE_CONFIG_PATH = "/shared/config/voice_config.json"


def _load_flux_config() -> dict:
    """Liest fluxXxx-Felder aus der Voice-Config. Default-Werte wenn nichts
    persistiert ist — Stefan hat in Diagnostic vielleicht noch nichts gesetzt."""
    try:
        with open(VOICE_CONFIG_PATH, encoding="utf-8") as f:
            data = json.load(f) or {}
    except (FileNotFoundError, json.JSONDecodeError):
        data = {}
    except Exception as exc:
        logger.debug("Voice-Config lesen fehlgeschlagen: %s", exc)
        data = {}
    return {
        "fluxDefaultModel": data.get("fluxDefaultModel", "dev"),
        "fluxKeywordRaw": data.get("fluxKeywordRaw", "flux"),
        "fluxKeywordSwitch": data.get("fluxKeywordSwitch", "fix"),
    }

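The defaulting behaviour of `_load_flux_config` can be checked in isolation. A standalone sketch of the same tolerant-read pattern (the path and the `load_flux_config` name here are illustrative, not from the diff):

```python
import json

def load_flux_config(path: str) -> dict:
    # Same tolerant-read pattern as _load_flux_config: a missing or
    # corrupt file silently falls back to the documented defaults.
    try:
        with open(path, encoding="utf-8") as f:
            data = json.load(f) or {}
    except (FileNotFoundError, json.JSONDecodeError):
        data = {}
    return {
        "fluxDefaultModel": data.get("fluxDefaultModel", "dev"),
        "fluxKeywordRaw": data.get("fluxKeywordRaw", "flux"),
        "fluxKeywordSwitch": data.get("fluxKeywordSwitch", "fix"),
    }

cfg = load_flux_config("/nonexistent/voice_config.json")
# cfg now holds the three defaults: dev / flux / fix
```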
logger = logging.getLogger(__name__)


@@ -215,6 +245,78 @@ META_TOOLS = [
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "flux_generate",
            "description": (
                "Generiere ein Bild aus einem Text-Prompt via FLUX auf der Gamebox-GPU. "
                "Brauchbar fuer 'mal mir ein X', 'wie sieht ein Y aus?', Mockups, "
                "Konzept-Skizzen, Memes. Render dauert 20-90s — kuendige es Stefan "
                "kurz an, dann ist er nicht ueberrascht.\n\n"
                "**Schreibe deine Antwort wie immer auf Deutsch**, und referenziere das "
                "fertige Bild MIT dem `[FILE: ...]`-Marker, GENAU im Pfad-Format das das "
                "Tool zurueckgibt. Beispiel:\n"
                " 'Hier dein Aquarell:\\n[FILE: /shared/uploads/aria_generated_1234.png]'\n\n"
                "Der Marker wird beim App-Renderer ausgeblendet und das Bild stattdessen "
                "inline als Anhang gezeigt.\n\n"
                "**Prompt-Sprache: bevorzugt Englisch.** FLUX versteht zwar Deutsch, "
                "liefert aber mit englischen Prompts deutlich konsistentere Ergebnisse. "
                "Uebersetze Stefans deutsche Beschreibung selbststaendig — AUSSER `raw=true`.\n\n"
                "**Modus `raw=true` (Pipe-Modus):** Wenn Stefan das Raw-Keyword aus dem "
                "FLUX-Settings-Block im System-Prompt nutzt (typischerweise `flux`), "
                "leite seinen Text 1:1 als prompt durch — KEIN Uebersetzen, KEIN "
                "Beautify, KEINE Qualitaets-Keywords. Stefan formuliert dann selbst und "
                "der Prompt geht roh an FLUX. Brauchbar wenn er den vollen Output ohne "
                "ARIAs Filter haben will.\n\n"
                "**Modell-Wahl (`model`):** \n"
                "- `default` (oder weglassen): das in den Diagnostic-Settings eingestellte "
                "Default-Modell (steht im FLUX-Block im System-Prompt).\n"
                "- `dev`: hochqualitatives FLUX.1-dev, 20-90s, ~28 steps.\n"
                "- `schnell`: FLUX.1-schnell, 4-step distillation, ~5-15s.\n"
                "Wenn Stefan das Switch-Keyword (steht ebenfalls im FLUX-Block) im Prompt "
                "verwendet → setze `model` auf das ANDERE Modell als das Default. Bei "
                "'in hoher Qualitaet'/'detailliert' → `dev`. Bei 'schnell mal'/'fix' → `schnell`.\n\n"
                "Modell-Switch kostet einmalig 15-30s (Pipeline-Reload aus HF-Cache). "
                "Stefan sieht den Status im Diagnostic-Banner.\n\n"
                "Caps:\n"
                "- `width`/`height`: 256-1536, wird auf Vielfache von 64 gesnappt (Default 1024)\n"
                "- `steps`: 1-50 (Default 28 fuer dev, 4 fuer schnell)\n"
                "- `guidance_scale`: 0.0-20.0 (Default 3.5)\n"
                "- `seed`: optional, gleicher seed + gleicher prompt → gleiches Bild"
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "prompt": {
                        "type": "string",
                        "description": (
                            "Bei raw=false (Default): englischer Bild-Prompt, von dir aus Stefans Worten gebaut, "
                            "mit Stil/Licht/Kamera-Stichworten. Bei raw=true: Stefans Text 1:1 ohne Aenderung."
                        ),
                    },
                    "raw": {
                        "type": "boolean",
                        "description": (
                            "true = Pipe-Modus, kein Rewriting. Setzen wenn Stefan das Raw-Keyword "
                            "(siehe FLUX-Block im System-Prompt) am Anfang seiner Nachricht verwendet."
                        ),
                    },
                    "model": {
                        "type": "string",
                        "enum": ["default", "dev", "schnell"],
                        "description": "Default-Modell oder explizit dev/schnell. Default = Diagnostic-Setting.",
                    },
                    "width": {"type": "integer", "description": "Breite in px (Default 1024, max 1536)"},
                    "height": {"type": "integer", "description": "Hoehe in px (Default 1024, max 1536)"},
                    "steps": {"type": "integer", "description": "Inference-Steps (Default 28, max 50). Mehr = besser+langsamer."},
                    "guidance_scale": {"type": "number", "description": "Wie strikt am Prompt kleben (Default 3.5)"},
                    "seed": {"type": "integer", "description": "Reproduzierbarkeits-Seed (optional)"},
                },
                "required": ["prompt"],
            },
        },
    },
    {
        "type": "function",
        "function": {
@@ -437,10 +539,12 @@ class Agent:
        condition_funcs = watcher_mod.describe_functions()

        # 5. System-Prompt + Window-Messages
        flux_config = _load_flux_config()
        system_prompt = build_system_prompt(hot, cold, skills=all_skills,
                                            triggers=all_triggers,
                                            condition_vars=condition_vars,
                                            condition_funcs=condition_funcs)
                                            condition_funcs=condition_funcs,
                                            flux_config=flux_config)
        messages = [ProxyMessage(role="system", content=system_prompt)]
        for t in self.conversation.window():
            messages.append(ProxyMessage(role=t.role, content=t.content))
@@ -607,6 +711,66 @@ class Agent:
            else:
                lines.append(f"- {t['name']} ({t['type']}, {state})")
            return "\n".join(lines)
        if name == "flux_generate":
            prompt = (arguments.get("prompt") or "").strip()
            if not prompt:
                return "FEHLER: prompt ist Pflicht."
            req: dict = {"prompt": prompt}
            for key in ("width", "height", "steps", "seed"):
                if key in arguments and arguments[key] is not None:
                    try:
                        req[key] = int(arguments[key])
                    except (TypeError, ValueError):
                        pass
            if arguments.get("guidance_scale") is not None:
                try:
                    req["guidance_scale"] = float(arguments["guidance_scale"])
                except (TypeError, ValueError):
                    pass
            # Modell-Wahl: 'default' (oder weglassen) → flux-bridge nimmt Diagnostic-Default.
            # 'dev' / 'schnell' → expliziter Override.
            model_arg = (arguments.get("model") or "").strip().lower()
            if model_arg in ("dev", "schnell"):
                req["model"] = model_arg
            # `raw` ist Brain-Domain (kein Rewriting des prompt) und wird hier
            # nicht durchgereicht — der prompt enthaelt bei raw=true bereits
            # Stefans Originaltext.
            try:
                body = json.dumps(req).encode("utf-8")
                http_req = urllib.request.Request(
                    f"{BRIDGE_URL}/internal/flux-generate", data=body, method="POST",
                    headers={"Content-Type": "application/json"},
                )
                with urllib.request.urlopen(http_req, timeout=FLUX_HTTP_TIMEOUT_SEC) as resp:
                    raw = resp.read()
                result = json.loads(raw.decode("utf-8", "ignore"))
            except urllib.error.HTTPError as exc:
                try:
                    err_body = exc.read().decode("utf-8", "ignore")
                    err_data = json.loads(err_body)
                    err = err_data.get("error") or err_body
                except Exception:
                    err = str(exc)
                return f"FEHLER (flux-bridge): {err}"
            except Exception as exc:
                logger.exception("flux_generate HTTP-Call fehlgeschlagen")
                return f"FEHLER: flux-bridge nicht erreichbar ({exc})"

            if not result.get("ok"):
                return f"FEHLER (flux-bridge): {result.get('error', 'unbekannt')}"
            # Kompakte Rueckmeldung: Pfad + Render-Stats. Brain bettet den
            # Pfad in ihre Antwort als [FILE: ...]-Marker ein (siehe Tool-Beschreibung).
            return (
                f"OK — Bild generiert.\n"
                f"path: {result['path']}\n"
                f"size: {result.get('width','?')}x{result.get('height','?')} "
                f"({result.get('sizeBytes',0)//1024} KB)\n"
                f"steps={result.get('steps','?')} guidance={result.get('guidance','?')} "
                f"seed={result.get('seed','?')} model={result.get('model','?')}\n"
                f"renderSeconds={result.get('renderSeconds','?')}\n\n"
                f"WICHTIG: Schreibe in deiner Antwort an Stefan den Pfad EXAKT als "
                f"Marker: [FILE: {result['path']}] — dann zeigt die App das Bild inline."
            )
        if name == "memory_search":
            query = (arguments.get("query") or "").strip()
            if not query:

+47
-11
@@ -164,15 +164,17 @@ def build_skills_section(skills: List[dict]) -> str:
                 "static-ffmpeg, beautifulsoup4, …). Falls etwas WIRKLICH nur via apt geht: "
                 "Stefan fragen ob es ins Brain-Dockerfile soll.")
    lines.append("")
    lines.append("**Harte Regel — IMMER Skill anlegen wenn:** die Loesung erfordert eine "
                 "pip-Library. Begruendung: Brain-Container hat keinen persistenten State "
                 "ausser /data/skills/. Ohne Skill wuerde der Install bei jedem "
                 "Container-Restart wiederholt.")
    lines.append("**Goldene Regel: NIE ungefragt Skills anlegen.** Selbst wenn die Aufgabe "
                 "eine pip-Library braucht — erst die Aufgabe loesen (mit Bash, `pip install` "
                 "im Brain ist ok, oder Workaround), und nur wenn Stefan EXPLIZIT sagt "
                 "'mach daraus einen Skill' / 'leg den als Skill an' / 'dafuer einen Skill' "
                 "rufst du `skill_create` auf. Begruendung: Skill-Setup (venv + pip install) "
                 "blockt das Brain bis zu 12 Minuten. Ein unaufgefordert angelegter Skill "
                 "macht ARIA stumm und nervt Stefan jedes Mal.")
    lines.append("")
    lines.append("**Sonst — Skill nur wenn alle vier zutreffen:**")
    lines.append("**Wenn Stefan einen Skill explizit moechte, pruef:**")
    lines.append("")
    lines.append("1. **Wiederkehrend** — die Aufgabe wird realistisch nochmal gestellt. "
                 "Einmal-Faelle (\"wie spaet ist es jetzt\") kein Skill.")
    lines.append("1. **Wiederkehrend** — die Aufgabe wird realistisch nochmal gestellt.")
    lines.append("2. **Nicht-trivial** — mehrere Schritte. Ein einzelner Shell-Befehl "
                 "(`date`, `hostname`, `ls`) ist KEIN Skill — das macht Bash direkt.")
    lines.append("3. **Parametrisierbar** — der Skill nimmt Eingaben (URL, Datei, Suchbegriff) "
@@ -180,9 +182,8 @@ def build_skills_section(skills: List[dict]) -> str:
    lines.append("4. **Wiederverwendbar als ganzes** — Stefan wuerde es zukuenftig per Name "
                 "ansprechen (\"mach mir den YouTube zu MP3\") statt jedes Mal zu erklaeren.")
    lines.append("")
    lines.append("Wenn nichts installiert werden muss UND nicht alle vier zutreffen: einfach "
                 "die Aufgabe loesen ohne Skill anzulegen. Stefan kann jederzeit sagen "
                 "'bau daraus einen Skill'.")
    lines.append("Wenn auch nur EINE der vier nicht zutrifft: hoeflich nachfragen ob er "
                 "wirklich einen permanenten Skill will oder die Aufgabe einmalig reicht.")
    return "\n".join(lines)


@@ -239,6 +240,37 @@ def build_triggers_section(
    return "\n".join(lines)


def build_flux_section(flux_config: dict) -> str:
    """Block fuer den System-Prompt: aktuelle Diagnostic-Settings fuer
    Bildgenerierung (Default-Modell + User-konfigurierbare Keywords).

    flux_config kommt aus /shared/config/voice_config.json:
        fluxDefaultModel:  "dev" | "schnell" (Default "dev")
        fluxKeywordRaw:    z.B. "flux" (Pipe-Modus, kein Rewriting)
        fluxKeywordSwitch: z.B. "fix" (anderes Modell als Default)
    """
    default_model = (flux_config or {}).get("fluxDefaultModel", "dev")
    kw_raw = (flux_config or {}).get("fluxKeywordRaw", "flux")
    kw_switch = (flux_config or {}).get("fluxKeywordSwitch", "fix")
    other_model = "schnell" if default_model == "dev" else "dev"
    lines = [
        "## FLUX Bildgenerierung",
        f"- Default-Modell: `{default_model}` (alternativ: `{other_model}`).",
        f"- Raw-Keyword: `{kw_raw}` — wenn Stefans Nachricht damit beginnt "
        f"oder das Wort als ersten echten Wortteil enthaelt, ruf "
        f"`flux_generate(..., raw=true)` und leite seinen Text 1:1 als prompt "
        f"durch. KEIN Uebersetzen, KEIN Beautify, KEINE Stil-Adds.",
        f"- Switch-Keyword: `{kw_switch}` — taucht's in der Nachricht auf, "
        f"setze `model=\"{other_model}\"` (das ANDERE Modell als das Default).",
        "- Natuerliche Sprache funktioniert auch: 'mal eben fix' / 'schnell' → schnell, "
        "'in hoher Qualitaet' / 'detailliert' → dev.",
        "- Whisper-Erkennung des Raw-Keywords ist nicht perfekt — wenn Stefans "
        "Sprachnachricht z.B. mit 'fluks', 'flocks', 'fluxx' anfaengt, behandle "
        "das auch als Raw-Keyword.",
    ]
    return "\n".join(lines)

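The default/other-model flip that `build_flux_section` bakes into the prompt can be verified standalone. A simplified copy of just the selection logic (`pick_models` is an illustrative helper, not in the diff):

```python
def pick_models(flux_config: dict) -> tuple:
    # Mirrors build_flux_section: the switch keyword always selects
    # the model that is NOT the configured default.
    default_model = (flux_config or {}).get("fluxDefaultModel", "dev")
    other_model = "schnell" if default_model == "dev" else "dev"
    return default_model, other_model
```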
def build_system_prompt(
    pinned: List[MemoryPoint],
    cold: List[MemoryPoint] | None = None,
@@ -246,8 +278,9 @@ def build_system_prompt(
    triggers: List[dict] | None = None,
    condition_vars: List[dict] | None = None,
    condition_funcs: List[dict] | None = None,
    flux_config: dict | None = None,
) -> str:
    """Kompletter System-Prompt: Hot + Cold + Skills + Triggers."""
    """Kompletter System-Prompt: Hot + Cold + Skills + Triggers + FLUX."""
    parts = [build_hot_memory_section(pinned), "", build_time_section()]
    if skills:
        parts.append("")
@@ -255,6 +288,9 @@ def build_system_prompt(
    if condition_vars:
        parts.append("")
        parts.append(build_triggers_section(triggers or [], condition_vars, condition_funcs))
    if flux_config is not None:
        parts.append("")
        parts.append(build_flux_section(flux_config))
    if cold:
        parts.append("")
        parts.append(build_cold_memory_section(cold))

@@ -25,7 +25,7 @@ logger = logging.getLogger(__name__)
RUNTIME_CONFIG_FILE = Path("/shared/config/runtime.json")
ENV_MODEL = os.environ.get("BRAIN_MODEL", "claude-sonnet-4")
PROXY_URL = os.environ.get("PROXY_URL", "http://proxy:3456")
PROXY_TIMEOUT_SEC = float(os.environ.get("PROXY_TIMEOUT_SEC", "300"))
PROXY_TIMEOUT_SEC = float(os.environ.get("PROXY_TIMEOUT_SEC", "1200"))


def _read_model_from_runtime() -> str:

+323
-19
@@ -487,6 +487,7 @@ class ARIABridge:
        self.tts_enabled = True
        self.xtts_voice = ""
        self._f5tts_config: dict = {}
        self._flux_config: dict = {}
        vc: dict = {}
        # Gespeicherte Voice-Config laden
        try:
@@ -503,9 +504,14 @@ class ARIABridge:
                     "f5ttsCfgStrength", "f5ttsNfeStep"):
                if k in vc:
                    self._f5tts_config[k] = vc[k]
            logger.info("Voice-Config geladen: tts=%s voice=%s f5tts=%s",
            # FLUX-Felder (Default-Modell + Keywords) gleicher Mechanismus
            for k in ("fluxDefaultModel", "fluxKeywordRaw", "fluxKeywordSwitch", "huggingfaceToken"):
                if k in vc:
                    self._flux_config[k] = vc[k]
            logger.info("Voice-Config geladen: tts=%s voice=%s f5tts=%s flux=%s",
                        self.tts_enabled, self.xtts_voice or "default",
                        self._f5tts_config or "defaults")
                        self._f5tts_config or "defaults",
                        self._flux_config or "defaults")
        except Exception as e:
            logger.warning("Voice-Config laden fehlgeschlagen: %s", e)
        # Whisper-Modell: Config hat Vorrang, dann env/Default (medium)
@@ -541,6 +547,12 @@ class ARIABridge:
        # Beeinflusst das Timeout fuer stt_request — bei "loading" warten wir laenger,
        # weil das Modell beim ersten Request noch ~1-2 Min runtergeladen werden kann.
        self._remote_stt_ready: bool = False
        # FLUX-Render-Requests die aktuell auf Antwort der flux-bridge (Gamebox) warten.
        # requestId → Future mit dem flux_response-Payload (oder None bei Fehler).
        self._pending_flux: dict[str, asyncio.Future] = {}
        # flux-bridge service_status: True wenn ready. Render-Timeouts werden
        # bei 'loading' deutlich grosszuegiger gesetzt (Modell-Download ~24 GB).
        self._remote_flux_ready: bool = False
        # User-Message-Counter fuer Auto-Compact. Bei zu langer Konversation
        # sprengt die argv-Liste beim Claude-Subprocess-Spawn (E2BIG). Bei
        # COMPACT_AFTER erreicht → Sessions reset + Container restart.
@@ -997,8 +1009,13 @@ class ARIABridge:
        """Schreibt eine Zeile in /shared/config/chat_backup.jsonl.
        Wird von Diagnostic + App als History-Quelle gelesen.
        entry braucht mindestens {role, text}; ts wird ergaenzt.
        Returns den ts (auch fuer Bubble-Loeschen-Tracking)."""
        ts = int(asyncio.get_event_loop().time() * 1000)
        Returns den ts (auch fuer Bubble-Loeschen-Tracking).

        WICHTIG: ts ist UNIX-ms (time.time()*1000), NICHT loop-time.
        Loop-time ist Container-monotonic — bei jedem Restart wieder 0.
        Das brach die App-History-Sortierung weil App-side Date.now()
        (echtes UNIX-ms) mit Bridge-Container-Uptime gemischt wurde."""
        ts = int(time.time() * 1000)
        try:
            line = {"ts": ts}
            line.update(entry)
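The timestamp bug fixed in this hunk is easy to reproduce: an event loop's clock is monotonic (roughly time since boot or process start), not UNIX epoch time, so persisting it alongside app-side `Date.now()` values breaks sorting across container restarts. A minimal illustration:

```python
import asyncio
import time

async def sample_clocks() -> tuple:
    # Loop time: monotonic clock, resets with the process/container.
    loop_ms = asyncio.get_running_loop().time() * 1000
    # Wall clock: milliseconds since the UNIX epoch, restart-safe.
    unix_ms = time.time() * 1000
    return loop_ms, unix_ms

loop_ms, unix_ms = asyncio.run(sample_clocks())
# unix_ms is on the order of 1.7e12; loop_ms is vastly smaller and
# would sort *before* every epoch-based timestamp ever written.
```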
@@ -1227,6 +1244,7 @@ class ARIABridge:
            "whisperModel": self.stt_engine.model_size,
        }
        payload.update(getattr(self, "_f5tts_config", {}) or {})
        payload.update(getattr(self, "_flux_config", {}) or {})
        await self._send_to_rvs({
            "type": "config",
            "payload": payload,
@@ -1316,7 +1334,9 @@ class ARIABridge:
        self._pending_files_flush_task = None
        text = self._build_pending_files_message(user_text)
        self._pending_files = []
        await self.send_to_core(text, source="app-file+chat")
        # create_task statt await — sonst blockt der RVS-recv-Loop bis Brain
        # fertig ist (siehe chat-handler oben).
        asyncio.create_task(self.send_to_core(text, source="app-file+chat"))
        return True

    async def send_to_core(self, text: str, source: str = "bridge", client_msg_id: Optional[str] = None) -> None:
@@ -1351,8 +1371,10 @@ class ARIABridge:
                url, data=payload, method="POST",
                headers={"Content-Type": "application/json"},
            )
            # Cold-Start kann lange dauern, 5min Timeout
            with urllib.request.urlopen(req, timeout=300) as resp:
            # 20 Min Timeout — lange Multi-Tool-Workflows (Karten,
            # PDFs, viele curl-Calls) brauchen das. 5 Min waren chronisch
            # zu knapp und haben ARIA mitten in der Arbeit gekappt.
            with urllib.request.urlopen(req, timeout=1200) as resp:
                return resp.status, resp.read().decode("utf-8", errors="ignore")
        except Exception as exc:
            return None, str(exc)
@@ -1469,8 +1491,11 @@ class ARIABridge:
        try:
            url = f"{current_url}?token={self.rvs_token}"
            logger.info("[rvs] Verbinde: %s", current_url)
            # max_size=50MB (siehe core-Connect oben — gleicher Grund).
            async with websockets.connect(url, max_size=50 * 1024 * 1024) as ws:
            # max_size=100MB synchron zum RVS-Server (siehe rvs/server.js).
            # File-Re-Download fuer Anhaenge braucht Platz fuer base64-
            # inflate (~1.33×). Groessere Files lehnt der file_request-
            # Handler proaktiv ab bevor's zur 1009-Disconnection kommt.
            async with websockets.connect(url, max_size=100 * 1024 * 1024) as ws:
                self.ws_rvs = ws
                retry_delay = 2
                logger.info("[rvs] Verbunden — warte auf App-Nachrichten")
@@ -1639,14 +1664,27 @@ class ARIABridge:
                        " [BARGE-IN]" if interrupted else "",
                        " [GPS]" if location else "",
                        text[:80])
            await self.send_to_core(core_text,
                                    source="app" + (" [barge-in]" if interrupted else ""),
                                    client_msg_id=client_msg_id)
            # KEIN await: send_to_core kann 20 Min dauern. Wenn wir
            # hier awaiten, blockt der `async for raw_message in ws`-
            # Loop solange → RVS-Server droppt uns nach ~4 Min idle.
            # Als Task: Brain laeuft im Hintergrund, RVS-recv bleibt
            # bedienbar, Pings werden beantwortet, Verbindung lebt.
            asyncio.create_task(self.send_to_core(
                core_text,
                source="app" + (" [barge-in]" if interrupted else ""),
                client_msg_id=client_msg_id,
            ))
            return

        if msg_type == "cancel_request":
            logger.info("[rvs] Cancel-Request von App — rufe Diagnostic /api/cancel auf")
            await self._cancel_via_diagnostic()
            hard = bool(payload.get("hard"))
            if hard:
                logger.warning("[rvs] NOT-AUS — hard cancel: Diagnostic /api/cancel + Proxy /cancel-all")
                await self._cancel_via_diagnostic()
                await self._cancel_proxy_subprocesses()
            else:
                logger.info("[rvs] Cancel-Request von App — rufe Diagnostic /api/cancel auf")
                await self._cancel_via_diagnostic()
            await self._emit_activity("idle", "")
            return

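The await-to-`create_task` change in the hunk above can be demonstrated with a toy recv loop: awaiting a slow handler stalls message intake, while scheduling it as a task keeps the loop responsive. A self-contained sketch (the handler and timings are illustrative, not the bridge code):

```python
import asyncio

async def slow_handler(results: list) -> None:
    # Stand-in for a long-running send_to_core call.
    await asyncio.sleep(0.05)
    results.append("handler done")

async def recv_loop(results: list) -> None:
    for msg in ("ping1", "ping2", "ping3"):
        # Schedule instead of await: the loop keeps reading messages
        # while the handler runs in the background.
        asyncio.create_task(slow_handler(results))
        results.append(msg)
        await asyncio.sleep(0)  # yield control, as a real recv would

async def main() -> list:
    results: list = []
    await recv_loop(results)
    await asyncio.sleep(0.2)  # give the background tasks time to finish
    return results

results = asyncio.run(main())
# all three pings are handled before any slow handler completes
```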
@@ -1751,6 +1789,15 @@ class ARIABridge:
                    self._f5tts_config = {}
                self._f5tts_config[k] = payload[k]
                changed = True
        # FLUX-Felder: gleiche Logik wie F5-TTS. flux-bridge applied
        # fluxDefaultModel selbst (Pipeline-Swap). Keywords nutzt Brain
        # via /shared/config/voice_config.json.
        for k in ("fluxDefaultModel", "fluxKeywordRaw", "fluxKeywordSwitch", "huggingfaceToken"):
            if k in payload:
                if not hasattr(self, "_flux_config"):
                    self._flux_config = {}
                self._flux_config[k] = payload[k]
                changed = True
        # Persistent speichern in Shared Volume
        if changed:
            try:
@@ -1761,6 +1808,7 @@ class ARIABridge:
                    "whisperModel": self.stt_engine.model_size,
                }
                config_data.update(getattr(self, "_f5tts_config", {}))
                config_data.update(getattr(self, "_flux_config", {}))
                with open("/shared/config/voice_config.json", "w") as f:
                    json.dump(config_data, f, indent=2)
                logger.info("[rvs] Voice-Config gespeichert: %s", config_data)
@@ -1817,7 +1865,8 @@ class ARIABridge:

        if not file_b64:
            text = f"Stefan hat eine Datei gesendet ({file_name}, {file_type}) aber die Daten sind leer angekommen."
            await self.send_to_core(text, source="app-file")
            # create_task statt await — RVS-recv darf nicht blocken
            asyncio.create_task(self.send_to_core(text, source="app-file"))
            return

        if file_type.startswith("image/"):
@@ -2187,6 +2236,33 @@ class ARIABridge:
                "timestamp": int(asyncio.get_event_loop().time() * 1000),
            })
            return
        # Groessen-Check VOR base64-Encode + Send. Sonst zerreisst's bei
        # grossen Files (>~70 MB binaer) die WebSocket-Verbindung mit
        # Code 1009 (message too big) — RVS-Server droppt, Bridge crasht
        # im cleanup (websockets-Lib-Bug). Limit deckt typische Videos
        # und Bilder ab; alles drueber soll der User per SSH abholen.
        FILE_MAX_BYTES = 70 * 1024 * 1024
        try:
            file_size = os.path.getsize(server_path)
        except OSError as exc:
            logger.warning("[rvs] getsize fehlgeschlagen: %s", exc)
            file_size = 0
        if file_size > FILE_MAX_BYTES:
            logger.warning("[rvs] Re-Download abgelehnt: %s zu gross (%dMB > %dMB)",
                           server_path, file_size // (1024 * 1024),
                           FILE_MAX_BYTES // (1024 * 1024))
            await self._send_to_rvs({
                "type": "file_response",
                "payload": {
                    "requestId": req_id,
                    "serverPath": server_path,
                    "name": os.path.basename(server_path),
                    "error": f"Datei zu gross fuer Transfer ({file_size // (1024 * 1024)} MB, Limit {FILE_MAX_BYTES // (1024 * 1024)} MB)",
                    "sizeBytes": file_size,
                },
                "timestamp": int(asyncio.get_event_loop().time() * 1000),
            })
            return
        with open(server_path, "rb") as f:
            file_b64 = base64.b64encode(f.read()).decode("ascii")
        mime, _ = mimetypes.guess_type(server_path)
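The 70 MB cap in this hunk leaves headroom under the 100 MB WebSocket `max_size` because base64 inflates binary payloads by roughly 4/3 (plus the JSON envelope). A quick check of the ratio, scaled down so it runs instantly:

```python
import base64

# 70 KB stands in for the 70 MB cap; the inflation ratio is identical.
raw = b"\x00" * (70 * 1024)
encoded = base64.b64encode(raw)
ratio = len(encoded) / len(raw)
# ~1.333: a 70 MB binary becomes ~93 MB of base64, still safely
# below the 100 MB limit the RVS connection now allows.
```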
@@ -2262,8 +2338,36 @@ class ARIABridge:
|
||||
future.set_result(text)
|
||||
return
|
||||
|
||||
elif msg_type == "flux_response":
|
||||
# Antwort der flux-bridge auf unseren flux_request. Erste Nachricht
|
||||
# mit state='rendering' ist nur Progress-Ping — die echte Antwort
|
||||
# kommt mit state='done' (oder error).
|
||||
request_id = payload.get("requestId", "")
|
||||
future = self._pending_flux.get(request_id)
|
||||
if future is None or future.done():
|
||||
return
|
||||
error = payload.get("error", "")
|
||||
if error:
|
||||
logger.warning("[rvs] flux_response Fehler: %s", error)
|
||||
future.set_result({"error": error})
|
||||
return
|
||||
state = payload.get("state", "")
|
||||
if state == "rendering":
|
||||
# Nur Progress-Info, future bleibt offen
|
||||
logger.info("[rvs] flux: rendering %dx%d steps=%d ...",
|
||||
payload.get("width", 0), payload.get("height", 0),
|
||||
payload.get("steps", 0))
|
||||
return
|
||||
# state == "done" oder fehlt → final
|
||||
logger.info("[rvs] flux fertig: %dx%d, %.1fs, %d KB",
|
||||
payload.get("width", 0), payload.get("height", 0),
|
||||
payload.get("renderSeconds", 0),
|
||||
(payload.get("sizeBytes", 0)) // 1024)
|
||||
future.set_result(payload)
|
||||
return
|
||||
|
||||
elif msg_type == "service_status":
|
||||
# Gamebox-Bridges (whisper / f5tts) melden ihren Lade-Status.
|
||||
# Gamebox-Bridges (whisper / f5tts / flux) melden ihren Lade-Status.
|
||||
# Wir nutzen das fuer den dynamischen STT-Timeout: solange whisper
|
||||
# im 'loading' steckt, geben wir der Bridge mehr Zeit (Modell-Download
|
||||
# kann 1-2 Min dauern), statt nach 45s lokal zu fallbacken.
|
||||
@@ -2274,6 +2378,11 @@ class ARIABridge:
|
||||
self._remote_stt_ready = (state == "ready")
|
||||
if self._remote_stt_ready != was_ready:
|
||||
logger.info("[rvs] whisper-bridge -> %s", state)
|
||||
elif svc == "flux":
|
||||
was_ready = self._remote_flux_ready
|
||||
self._remote_flux_ready = (state == "ready")
|
||||
if self._remote_flux_ready != was_ready:
|
||||
logger.info("[rvs] flux-bridge -> %s", state)
|
||||
return
|
||||
|
||||
elif msg_type == "config_request":
|
||||
@@ -2458,6 +2567,105 @@ class ARIABridge:
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
# ── Flux-Roundtrip: Brain → Bridge → RVS → flux-bridge → zurueck ──
|
||||
# FLUX-Render auf der 3060 dauert je nach Aufloesung/Steps 20-90 s.
|
||||
# Beim 1. Render frisch nach Container-Start muss zudem das ~24 GB
|
||||
# Modell von HF geladen werden — daher der grosse Loading-Timeout.
|
||||
_FLUX_TIMEOUT_READY_S = 240.0 # 4 min nach erstem Render
|
||||
_FLUX_TIMEOUT_LOADING_S = 900.0 # 15 min beim allerersten Mal (Modell-Download)
|
||||
|
||||
async def _flux_generate(self, prompt: str, width: int, height: int,
steps: Optional[int], guidance: Optional[float],
seed: Optional[int], model: Optional[str] = None) -> dict:
"""Schickt einen flux_request an die flux-bridge, wartet auf das fertige
PNG, speichert es nach /shared/uploads/aria_generated_<ts>.png.

Rueckgabe:
{ok: True, path, sizeBytes, width, height, steps, guidance, seed, model, renderSeconds}
{ok: False, error}
"""
if self.ws_rvs is None:
return {"ok": False, "error": "RVS-Verbindung nicht aktiv"}

request_id = str(uuid.uuid4())
loop = asyncio.get_event_loop()
future: asyncio.Future = loop.create_future()
self._pending_flux[request_id] = future

try:
req_payload: dict = {"requestId": request_id, "prompt": prompt,
"width": width, "height": height}
if steps is not None:
req_payload["steps"] = steps
if guidance is not None:
req_payload["guidance_scale"] = guidance
if seed is not None:
req_payload["seed"] = seed
if model:
# 'dev' | 'schnell' — flux-bridge mappt das auf HF-IDs.
# Ohne Angabe nimmt die flux-bridge ihren konfigurierten Default.
req_payload["model"] = model

logger.info("[rvs] flux_request → flux-bridge (id=%s, %dx%d, steps=%s, model=%s, prompt=%r)",
request_id[:8], width, height, steps, model or "default", prompt[:60])
ok = await self._send_to_rvs({
"type": "flux_request",
"payload": req_payload,
"timestamp": int(time.time() * 1000),
})
if not ok:
return {"ok": False, "error": "flux_request konnte nicht gesendet werden"}

timeout_s = (self._FLUX_TIMEOUT_READY_S
if self._remote_flux_ready
else self._FLUX_TIMEOUT_LOADING_S)
result = await asyncio.wait_for(future, timeout=timeout_s)

if not isinstance(result, dict) or result.get("error"):
err = (result or {}).get("error") if isinstance(result, dict) else "leeres Resultat"
return {"ok": False, "error": err or "flux-bridge Fehler"}

b64 = result.get("base64") or ""
if not b64:
return {"ok": False, "error": "flux_response ohne Bilddaten"}

try:
png_bytes = base64.b64decode(b64)
except Exception as e:
return {"ok": False, "error": f"PNG-Decode fehlgeschlagen: {e}"}

SHARED_DIR = "/shared/uploads"
os.makedirs(SHARED_DIR, exist_ok=True)
ts_ms = int(time.time() * 1000)
file_name = f"aria_generated_{ts_ms}.png"
path = os.path.join(SHARED_DIR, file_name)
try:
with open(path, "wb") as f:
f.write(png_bytes)
except Exception as e:
return {"ok": False, "error": f"Speichern fehlgeschlagen: {e}"}

logger.info("[rvs] flux PNG gespeichert: %s (%d KB)", path, len(png_bytes) // 1024)
return {
"ok": True,
"path": path,
"sizeBytes": len(png_bytes),
"width": result.get("width", width),
"height": result.get("height", height),
"steps": result.get("steps"),
"guidance": result.get("guidance"),
"seed": result.get("seed"),
"model": result.get("model", ""),
"renderSeconds": result.get("renderSeconds", 0),
}
except asyncio.TimeoutError:
return {"ok": False, "error": f"Render-Timeout ({int(timeout_s)}s) — flux-bridge offline?"}
except Exception as e:
logger.exception("[rvs] _flux_generate Fehler")
return {"ok": False, "error": str(e)[:200]}
finally:
self._pending_flux.pop(request_id, None)

async def _send_to_rvs(self, message: dict) -> bool:
"""Sendet eine Nachricht an die App (via RVS) mit Verbindungs-Check.

@@ -2507,17 +2715,40 @@ class ARIABridge:
status = await asyncio.get_event_loop().run_in_executor(None, _do_request)
logger.info("[cancel] Diagnostic /api/cancel: %s", status)

async def _emit_activity(self, activity: str, tool: str = "") -> None:
async def _cancel_proxy_subprocesses(self) -> None:
"""Not-Aus: ruft den proxy-internen /cancel-all Side-Channel auf
(siehe proxy-patches/routes.js). Killt alle aktiven Claude-Code-
Subprocesses sofort. Bridge ist auf aria-net, Proxy auch — also
per Container-Name + Side-Channel-Port (Default 3457) erreichbar."""
url = os.environ.get("PROXY_INTERNAL_URL", "http://aria-proxy:3457") + "/cancel-all"

def _do_request():
try:
req = urllib.request.Request(url, method="POST", data=b"")
with urllib.request.urlopen(req, timeout=3) as resp:
return resp.status, resp.read().decode("utf-8", "ignore")[:200]
except Exception as e:
return f"error: {e}", ""

status, body = await asyncio.get_event_loop().run_in_executor(None, _do_request)
logger.warning("[NOT-AUS] proxy /cancel-all: %s %s", status, body)

async def _emit_activity(self, activity: str, tool: str = "", force: bool = False) -> None:
"""Sendet agent_activity an die App — nur wenn sich der State geaendert hat.

Trailing Agent-Events nach chat:final werden 3s lang unterdrueckt
(nur 'idle' kommt immer durch)."""
(nur 'idle' kommt immer durch).

force=True: kein State-Dedup — wird vom Proxy-Tool-Hook genutzt
damit auch wiederholte gleiche Tool-Aufrufe (z.B. 3x Bash
hintereinander) im Gedanken-Stream als eigene Eintraege sichtbar
bleiben."""
if activity != "idle" and self._last_chat_final_at > 0:
since_final = asyncio.get_event_loop().time() - self._last_chat_final_at
if since_final < 3.0:
return
state = (activity, tool)
if state == self._last_activity_state:
if not force and state == self._last_activity_state:
return
self._last_activity_state = state
await self._send_to_rvs({
@@ -2665,6 +2896,79 @@ class ARIABridge:
self._handle_trigger_fired(reply, trigger_name, ttype, events)
)
await _send_response(writer, 200, {"ok": True})
elif method == "POST" and path == "/internal/agent-activity":
# Vom Proxy gefeuert bei jedem Claude-Code-tool_use-Event
# (Bash, Read, Edit, Grep, ...). Wir spiegeln das als
# RVS agent_activity an App+Diagnostic damit der Gedanken-
# Stream live mitlaufen kann.
try:
data = json.loads(body.decode("utf-8", "ignore"))
except Exception as exc:
await _send_response(writer, 400, {"error": f"bad json: {exc}"})
return
tool = (data.get("tool") or "").strip()
if not tool:
await _send_response(writer, 400, {"error": "tool erforderlich"})
return
# Force-emit (kein Dedup): User soll JEDEN Tool-Call sehen
# selbst wenn derselbe Name zweimal in Folge kommt.
asyncio.create_task(self._emit_activity("tool", tool, force=True))
await _send_response(writer, 200, {"ok": True})
elif method == "POST" and path == "/internal/agent-stream":
# Vom Proxy gefeuert: voller Live-Stream der Claude-Code-
# Session (assistant_text, tool_use mit Input, tool_result
# mit truncated Output, start/end Markers). Wir leiten 1:1
# als RVS agent_stream an Diagnostic (ARIA-Live-View) und
# App weiter — read-only Mirror der gerade laufenden
# ARIA-Aktivitaet.
try:
data = json.loads(body.decode("utf-8", "ignore"))
except Exception as exc:
await _send_response(writer, 400, {"error": f"bad json: {exc}"})
return
asyncio.create_task(self._send_to_rvs({
"type": "agent_stream",
"payload": data,
"timestamp": int(time.time() * 1000),
}))
await _send_response(writer, 200, {"ok": True})
elif method == "POST" and path == "/internal/flux-generate":
# Vom Brain (flux_generate-Tool) gefeuert. Wir routen den
# Render-Request via RVS an die flux-bridge (Gamebox),
# warten synchron auf die PNG-Antwort, speichern sie nach
# /shared/uploads/ und melden Pfad + Render-Stats zurueck.
# Brain referenziert das Bild dann mit [FILE:]-Marker in
# seiner Antwort, die Bridge broadcastet daraufhin
# automatisch ein file_from_aria-Event an App+Diagnostic.
try:
data = json.loads(body.decode("utf-8", "ignore"))
except Exception as exc:
await _send_response(writer, 400, {"error": f"bad json: {exc}"})
return
prompt = (data.get("prompt") or "").strip()
if not prompt:
await _send_response(writer, 400, {"error": "prompt erforderlich"})
return
try:
width = int(data.get("width") or 1024)
height = int(data.get("height") or 1024)
except (TypeError, ValueError):
width, height = 1024, 1024
steps_raw = data.get("steps")
guidance_raw = data.get("guidance_scale")
seed_raw = data.get("seed")
steps = int(steps_raw) if isinstance(steps_raw, (int, float)) else None
guidance = float(guidance_raw) if isinstance(guidance_raw, (int, float)) else None
seed = int(seed_raw) if isinstance(seed_raw, (int, float)) else None
model_raw = data.get("model")
model = model_raw.strip() if isinstance(model_raw, str) and model_raw.strip() in ("dev", "schnell") else None

result = await self._flux_generate(
prompt=prompt, width=width, height=height,
steps=steps, guidance=guidance, seed=seed, model=model,
)
status = 200 if result.get("ok") else 502
await _send_response(writer, status, result)
elif method == "POST" and path == "/internal/delete-chat-message":
try:
data = json.loads(body.decode("utf-8", "ignore"))

+458
-132
@@ -301,6 +301,7 @@
<input type="checkbox" id="gps-debug-toggle" onchange="toggleGpsDebug()" style="margin-right:4px;vertical-align:middle;">
GPS-Position einblenden
</label>
<button class="btn secondary" onclick="openThoughtStream()" id="btn-thoughts" title="Gedanken-Stream — was ARIA intern tut" style="padding:4px 10px;font-size:11px;">💭 Gedanken <span id="thoughts-count" style="color:#8888AA;"></span></button>
<button class="btn secondary" onclick="toggleChatFullscreen()" id="btn-chat-fs" style="padding:4px 10px;font-size:11px;">Vollbild</button>
</div>
</div>
@@ -319,8 +320,7 @@
<input type="file" id="diag-file-input" multiple accept="image/*,application/pdf,.doc,.docx,.txt" style="display:none;" onchange="handleDiagFileSelect(this.files)">
</label>
<textarea id="chat-input" placeholder="Nachricht an ARIA... (Enter sendet, Shift+Enter neue Zeile)" rows="2" onpaste="handleDiagPaste(event)" oninput="autoResizeTextarea(this)"></textarea>
<button class="btn" id="btn-gw" onclick="testGateway()">Gateway senden</button>
<button class="btn" id="btn-rvs" onclick="testRVS()">Via RVS senden</button>
<button class="btn" id="btn-rvs" onclick="testRVS()">Senden</button>
</div>
</div>
</div>
@@ -337,8 +337,23 @@
</div>
<div class="input-row" style="margin-top:8px;">
<textarea id="chat-input-fs" placeholder="Nachricht an ARIA... (Enter sendet, Shift+Enter neue Zeile)" rows="2" oninput="autoResizeTextarea(this)"></textarea>
<button class="btn" onclick="testGatewayFS()">Gateway senden</button>
<button class="btn" onclick="testRVSFS()">Via RVS senden</button>
<button class="btn" onclick="testRVSFS()">Senden</button>
</div>
</div>

<!-- Gedanken-Stream Modal — chronologisches Log was ARIA intern tut.
Zentrales Modal (max 720px breit), Liste mit Auto-Scroll ans Ende
wenn neue Eintraege reinkommen. -->
<div id="thought-stream-modal" style="display:none;position:fixed;top:0;left:0;width:100vw;height:100vh;background:rgba(0,0,0,0.7);z-index:1100;align-items:center;justify-content:center;padding:24px;" onclick="if(event.target===this) closeThoughtStream();">
<div style="background:#0D0D1A;border:1px solid #1E1E2E;border-radius:12px;width:100%;max-width:720px;height:70vh;display:flex;flex-direction:column;">
<div style="display:flex;align-items:center;padding:14px;border-bottom:1px solid #1E1E2E;">
<h2 style="margin:0;color:#FFD60A;flex:1;font-size:16px;">💭 Gedanken-Stream <span id="thoughts-count-modal" style="color:#8888AA;font-weight:normal;"></span></h2>
<button class="btn secondary" onclick="clearThoughtStream()" id="btn-clear-thoughts" title="Stream leeren" style="padding:4px 10px;font-size:11px;color:#FF3B30;border-color:#FF3B30;margin-right:6px;">🗑 Leeren</button>
<button class="btn secondary" onclick="closeThoughtStream()" style="padding:4px 12px;">Schliessen</button>
</div>
<div id="thought-stream-list" style="flex:1;overflow-y:auto;padding:8px 0;font-size:13px;font-family:monospace;">
<!-- gefuellt durch renderThoughtStream() -->
</div>
</div>
</div>

@@ -350,7 +365,6 @@
<div style="padding: 0 12px;">
<div class="tab-bar">
<button class="tab-btn active" data-tab="all" onclick="switchTab('all')">Alle <span class="tab-count" id="count-all">0</span></button>
<button class="tab-btn" data-tab="gateway" onclick="switchTab('gateway')">Gateway <span class="tab-count" id="count-gateway">0</span></button>
<button class="tab-btn" data-tab="rvs" onclick="switchTab('rvs')">RVS <span class="tab-count" id="count-rvs">0</span></button>
<button class="tab-btn" data-tab="proxy" onclick="switchTab('proxy')">Proxy <span class="tab-count" id="count-proxy">0</span></button>
<button class="tab-btn" data-tab="bridge" onclick="switchTab('bridge')">Bridge <span class="tab-count" id="count-bridge">0</span></button>
@@ -369,7 +383,6 @@
</span>
</div>
<div class="log-box" id="log-all"></div>
<div class="log-box hidden" id="log-gateway"></div>
<div class="log-box hidden" id="log-rvs"></div>
<div class="log-box hidden" id="log-proxy"></div>
<div class="log-box hidden" id="log-bridge"></div>
@@ -382,18 +395,29 @@
<div class="card" style="margin-top:12px; padding: 8px 0 0 0;">
<div style="padding: 0 12px;">
<div class="tab-bar">
<button class="tab-btn active" id="live-tab-ssh" onclick="switchLiveTab('ssh')">SSH Terminal</button>
<button class="tab-btn active" id="live-tab-aria" onclick="switchLiveTab('aria')">ARIA Live</button>
<button class="tab-btn" id="live-tab-desktop" onclick="switchLiveTab('desktop')">Desktop</button>
</div>
</div>
<div style="background:#080810; border:1px solid #1E1E2E; border-radius:0 0 6px 6px; position:relative;">
<!-- SSH Terminal -->
<div id="live-ssh" style="height:350px; padding:4px;">
<div id="live-ssh-bar" style="display:flex;gap:6px;align-items:center;padding:4px 4px 6px;">
<button class="btn" onclick="startLiveSSH()" id="btn-live-ssh" style="padding:4px 12px;font-size:11px;">Verbinden</button>
<span id="live-ssh-status" style="font-size:11px;color:#8888AA;">Nicht verbunden</span>
<!-- ARIA Live (read-only Mirror der Claude-Code-Session) -->
<div id="live-aria" style="height:350px; padding:4px; display:flex; flex-direction:column;">
<div id="live-aria-bar" style="display:flex;gap:6px;align-items:center;padding:4px 4px 6px;flex-shrink:0;">
<span id="live-aria-status" style="font-size:11px;color:#8888AA;flex:1;">Idle — warte auf ARIA-Aktivitaet</span>
<button class="btn" onclick="clearAriaLive()" style="padding:4px 12px;font-size:11px;" title="Live-Mitschrift leeren">Leeren</button>
<label style="font-size:11px;color:#8888AA;display:flex;align-items:center;gap:4px;cursor:pointer;" title="Bei jeder neuen Zeile ans Ende scrollen">
<input type="checkbox" id="live-aria-autoscroll" checked style="margin:0;"> Auto-Scroll
</label>
<button class="btn" onclick="ariaPanicStop()"
style="padding:4px 14px;font-size:11px;background:#FF3B30;color:#fff;border-color:#FF3B30;font-weight:bold;"
title="NOT-AUS: killt alle aktiven Claude-Code-Subprocesses sofort">
⛔ Not-Aus
</button>
</div>
<div id="live-aria-stream"
style="flex:1;overflow-y:auto;background:#040408;font-family:'Courier New',monospace;font-size:11px;line-height:1.4;color:#C0C0D0;padding:6px 8px;border-top:1px solid #1E1E2E;">
<div style="color:#555570;font-style:italic;">Sobald ARIA denkt oder ein Tool nutzt, taucht es hier in Echtzeit auf.</div>
</div>
<div id="live-ssh-term" style="height:calc(100% - 32px);"></div>
</div>
<!-- Desktop Viewer -->
<div id="live-desktop" style="height:350px; display:none; position:relative;">
@@ -596,6 +620,66 @@
</div>
</div>

<!-- FLUX Bildgenerierung -->
<div class="settings-section">
<h2>FLUX Bildgenerierung</h2>
<div style="font-size:11px;color:#8888AA;margin-bottom:8px;">
Steuerung der Image-Generation (flux-bridge auf der Gamebox).
Default-Modell wird via RVS gepusht — Wechsel triggert Pipeline-Reload (15-30s
aus HF-Cache, mehrere Minuten beim Erst-Download). Keywords nutzt ARIAs Brain
im System-Prompt.
</div>
<div class="card" style="max-width:500px;">
<div style="display:flex;flex-direction:column;gap:8px;">

<label style="color:#8888AA;font-size:12px;">Default-Modell:</label>
<select id="diag-flux-default-model" onchange="sendVoiceConfig()"
style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">
<option value="dev">FLUX.1-dev (hoechste Qualitaet, 20-90s)</option>
<option value="schnell">FLUX.1-schnell (4-step, 5-15s)</option>
</select>

<label style="color:#8888AA;font-size:12px;">
Raw-Keyword — Pipe-Modus, ARIA leitet den Prompt 1:1 durch (kein Rewriting):
</label>
<input type="text" id="diag-flux-keyword-raw"
placeholder="flux"
style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">

<label style="color:#8888AA;font-size:12px;">
Switch-Keyword — zwingt das ANDERE Modell als das Default fuer diesen Request:
</label>
<input type="text" id="diag-flux-keyword-switch"
placeholder="fix"
style="background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;">

<label style="color:#8888AA;font-size:12px;margin-top:4px;">
HuggingFace-Token (nur fuer FLUX.1-dev — gated Modell, Lizenz-Bestaetigung).
Wird per RVS an die flux-bridge gepusht. Leer = kein Token (Schnell-Modell laeuft auch ohne).
</label>
<div style="display:flex;gap:4px;">
<input type="password" id="diag-flux-hf-token"
placeholder="hf_..."
style="flex:1;min-width:0;box-sizing:border-box;background:#1E1E2E;color:#fff;border:1px solid #2A2A3E;border-radius:6px;padding:6px 10px;font-size:13px;font-family:monospace;">
<button type="button" class="btn secondary" onclick="toggleSecret('diag-flux-hf-token', this)" style="padding:4px 10px;flex-shrink:0;" title="Anzeigen/Verbergen">👁</button>
</div>
<div style="color:#666680;font-size:10px;">
Erst auf <a href="https://huggingface.co/black-forest-labs/FLUX.1-dev" target="_blank" style="color:#0096FF;">huggingface.co/.../FLUX.1-dev</a> "Agree" klicken,
dann unter <a href="https://huggingface.co/settings/tokens" target="_blank" style="color:#0096FF;">Settings → Tokens</a> einen Read-Token erzeugen.
</div>

<div style="display:flex;gap:8px;align-items:center;margin-top:6px;">
<button class="btn primary" onclick="sendVoiceConfig()" style="padding:6px 14px;font-size:12px;">
Anwenden
</button>
<div style="color:#666680;font-size:10px;">
Beide Modelle = volle Qualitaet, schnell ist nur ein 4-Step-Distillat (Apache-2.0).
</div>
</div>
</div>
</div>
</div>

<!-- Whisper (STT) -->
<div class="settings-section">
<h2>Whisper (Spracherkennung)</h2>
@@ -951,11 +1035,11 @@
</div>
</div><!-- /tab-triggers -->

<!-- Trigger-Create Modal -->
<!-- Trigger-Create/Edit Modal -->
<div class="modal-overlay" id="trigger-modal">
<div class="modal-box" style="max-width:600px;">
<div class="modal-header">
<h3>Neuer Trigger</h3>
<h3 id="trigger-modal-title">Neuer Trigger</h3>
<button class="modal-close" onclick="closeTriggerModal()">×</button>
</div>
<div class="modal-body" style="padding:16px;">
@@ -969,8 +1053,16 @@

<!-- Timer-spezifisch -->
<div id="trigger-timer-fields">
<label style="display:block;font-size:11px;color:#8888AA;margin-bottom:4px;">In wievielen Minuten?</label>
<input type="number" id="trigger-timer-minutes" min="1" max="10080" value="10" style="width:100%;background:#0D0D1A;color:#E0E0F0;border:1px solid #1E1E2E;padding:6px;border-radius:4px;font-family:inherit;margin-bottom:10px;">
<!-- Create-mode: relativ („in X Minuten ab jetzt") -->
<div id="trigger-timer-create-fields">
<label style="display:block;font-size:11px;color:#8888AA;margin-bottom:4px;">In wievielen Minuten?</label>
<input type="number" id="trigger-timer-minutes" min="1" max="10080" value="10" style="width:100%;background:#0D0D1A;color:#E0E0F0;border:1px solid #1E1E2E;padding:6px;border-radius:4px;font-family:inherit;margin-bottom:10px;">
</div>
<!-- Edit-mode: absoluter ISO-Timestamp (UTC) -->
<div id="trigger-timer-edit-fields" style="display:none;">
<label style="display:block;font-size:11px;color:#8888AA;margin-bottom:4px;">Feuert am (ISO, UTC)</label>
<input type="text" id="trigger-timer-fires-at" placeholder="2026-05-15T20:00:00+00:00" style="width:100%;background:#0D0D1A;color:#E0E0F0;border:1px solid #1E1E2E;padding:6px;border-radius:4px;font-family:monospace;margin-bottom:10px;">
</div>
</div>

<!-- Watcher-spezifisch -->
@@ -991,7 +1083,7 @@
</div>
<div class="modal-footer" style="padding:10px 16px;border-top:1px solid #1E1E2E;display:flex;justify-content:flex-end;gap:8px;">
<button class="btn secondary" onclick="closeTriggerModal()">Abbrechen</button>
<button class="btn" onclick="saveTrigger()">Anlegen</button>
<button class="btn" id="trigger-modal-save-btn" onclick="saveTrigger()">Anlegen</button>
</div>
</div>
</div>
@@ -1093,13 +1185,12 @@
const btnScroll = document.getElementById('btn-scroll');
let ws;
let activeTab = 'all';
const DOCKER_TABS = ['gateway', 'proxy', 'bridge'];
const autoScroll = { all: true, gateway: true, rvs: true, proxy: true, bridge: true, server: true, trace: true };
const logCounts = { all: 0, gateway: 0, rvs: 0, proxy: 0, bridge: 0, server: 0, trace: 0 };
const DOCKER_TABS = ['proxy', 'bridge'];
const autoScroll = { all: true, rvs: true, proxy: true, bridge: true, server: true, trace: true };
const logCounts = { all: 0, rvs: 0, proxy: 0, bridge: 0, server: 0, trace: 0 };

const logBoxes = {
all: document.getElementById('log-all'),
gateway: document.getElementById('log-gateway'),
rvs: document.getElementById('log-rvs'),
proxy: document.getElementById('log-proxy'),
bridge: document.getElementById('log-bridge'),
@@ -1153,7 +1244,9 @@
}

function mapSourceToTab(source) {
if (source === 'gateway') return 'gateway';
// Gateway-Source: deprecated — falls noch was reinkommt zeigen wir's
// einfach unter 'server'. Spart einen toten Tab.
if (source === 'gateway') return 'server';
if (source === 'rvs') return 'rvs';
if (source === 'proxy') return 'proxy';
if (source === 'bridge') return 'bridge';
@@ -1317,6 +1410,11 @@
setIfPresent('diag-f5tts-vocab', msg.f5ttsVocabFile);
setIfPresent('diag-f5tts-cfg', msg.f5ttsCfgStrength);
setIfPresent('diag-f5tts-nfe', msg.f5ttsNfeStep);
// FLUX-Settings wiederherstellen
setIfPresent('diag-flux-default-model', msg.fluxDefaultModel);
setIfPresent('diag-flux-keyword-raw', msg.fluxKeywordRaw);
setIfPresent('diag-flux-keyword-switch', msg.fluxKeywordSwitch);
setIfPresent('diag-flux-hf-token', msg.huggingfaceToken);
return;
}

@@ -1325,6 +1423,11 @@
return;
}

if (msg.type === 'agent_stream') {
appendAriaStreamEvent(msg.payload || {});
return;
}

if (msg.type === 'voice_preview_audio') {
const statusEl = document.getElementById('voice-preview-status');
const audio = document.getElementById('voice-preview-audio');
@@ -1468,8 +1571,8 @@
return;
}
// core_auth WS-Event entfernt — aria-core ist raus.
// Live SSH + Desktop
if (msg.type?.startsWith('live_ssh_')) { handleLiveSSH(msg); return; }
// SSH-Terminal entfernt — durch ARIA-Live-Mirror ersetzt.
// Desktop bleibt.
if (msg.type === 'desktop_status') { handleDesktop(msg); return; }

if (msg.type === 'term_ready') {
@@ -1595,18 +1698,6 @@
renderDiagPending();
}

function testGateway() {
const input = document.getElementById('chat-input');
const text = input.value.trim();
if (!text && diagPendingFiles.length === 0) return;
if (diagPendingFiles.length > 0) sendDiagAttachments();
if (text) {
addChat('sent', text, 'Gateway direkt');
send({ action: 'test_gateway', text });
}
input.value = '';
}

function testRVS() {
const input = document.getElementById('chat-input');
const text = input.value.trim();
@@ -1746,7 +1837,6 @@
if (proxy.models && proxy.models.length) showProxyModels(proxy.models);

// Buttons
document.getElementById('btn-gw').disabled = gw.status !== 'connected';
document.getElementById('btn-rvs').disabled = rvs.status !== 'connected';
}

@@ -2069,14 +2159,6 @@
modal.style.display = 'none';
}
}
function testGatewayFS() {
const input = document.getElementById('chat-input-fs');
const text = input.value.trim();
if (!text) return;
addChat('sent', text, 'Gateway direkt');
send({ action: 'test_gateway', text });
input.value = '';
}
function testRVSFS() {
const input = document.getElementById('chat-input-fs');
const text = input.value.trim();
@@ -2122,18 +2204,23 @@
// Liste neu aufbauen
list.innerHTML = '';
let anyLoading = false, anyError = false;
const labels = { f5tts: 'F5-TTS', whisper: 'Whisper STT' };
const labels = { f5tts: 'F5-TTS', whisper: 'Whisper STT', flux: 'FLUX Image-Gen' };
for (const [s, info] of Object.entries(_serviceState)) {
const row = document.createElement('div');
row.style.cssText = 'display:flex;align-items:center;gap:6px;';
let dot = '⚫', color = '#666680', text = '';
if (info.state === 'loading') {
dot = '⏳'; color = '#FFD60A'; anyLoading = true;
text = `${labels[s] || s}: laedt${info.model ? ' ' + info.model : ''}...`;
dot = info.downloading ? '⬇' : '⏳';
color = '#FFD60A'; anyLoading = true;
const action = info.downloading
? 'laedt erstmalig runter (mehrere GB, kann dauern)'
: 'laedt';
text = `${labels[s] || s}: ${action}${info.model ? ' ' + info.model : ''}...`;
} else if (info.state === 'ready') {
dot = '✅'; color = '#34C759';
dot = info.freshlyDownloaded ? '🎉' : '✅'; color = '#34C759';
const sec = info.loadSeconds ? ` (${info.loadSeconds.toFixed(1)}s)` : '';
text = `${labels[s] || s}: bereit${info.model ? ' ' + info.model : ''}${sec}`;
const downloadedHint = info.freshlyDownloaded ? ' — Download fertig!' : '';
text = `${labels[s] || s}: bereit${info.model ? ' ' + info.model : ''}${sec}${downloadedHint}`;
} else if (info.state === 'error') {
dot = '❌'; color = '#FF3B30'; anyError = true;
text = `${labels[s] || s}: Fehler ${info.error || ''}`;
@@ -2166,6 +2253,9 @@
}

function updateThinkingIndicator(msg) {
// Gedanken-Stream fuettern — JEDES Event (auch idle als ✓ fertig)
pushThought(msg.activity || '', msg.tool || '');

const indicators = [
document.getElementById('thinking-indicator'),
document.getElementById('thinking-indicator-fs'),
@@ -2202,6 +2292,114 @@
}, 120000);
}

// ── Gedanken-Stream ─────────────────────────────
// Chronologisches Log von agent_activity-Events. Wird in localStorage
// persistiert (ueberlebt Page-Reload), capped auf MAX_THOUGHTS.
const THOUGHT_STORAGE_KEY = 'aria_thought_stream';
const MAX_THOUGHTS = 500;
let thoughtStream = [];
let lastThoughtKey = '';
let _thoughtSaveTimer = null;

function loadThoughtStream() {
try {
const raw = localStorage.getItem(THOUGHT_STORAGE_KEY);
if (!raw) return;
const parsed = JSON.parse(raw);
if (Array.isArray(parsed)) thoughtStream = parsed.slice(-MAX_THOUGHTS);
} catch {}
updateThoughtsBadge();
}

function persistThoughtStream() {
if (_thoughtSaveTimer) clearTimeout(_thoughtSaveTimer);
_thoughtSaveTimer = setTimeout(() => {
try {
if (thoughtStream.length === 0) localStorage.removeItem(THOUGHT_STORAGE_KEY);
else localStorage.setItem(THOUGHT_STORAGE_KEY, JSON.stringify(thoughtStream.slice(-MAX_THOUGHTS)));
} catch {}
}, 500);
}

function pushThought(activity, tool) {
// Dedup gegen direkt aufeinanderfolgende identische Events. Tool-
// Events NIE dedupen — drei Bash-Calls in Folge sollen drei Eintraege
// ergeben, nicht einen.
const key = `${activity}|${tool || ''}`;
if (activity !== 'tool' && key === lastThoughtKey) return;
lastThoughtKey = key;
thoughtStream.push({ ts: Date.now(), activity, tool: tool || '' });
if (thoughtStream.length > MAX_THOUGHTS) thoughtStream = thoughtStream.slice(-MAX_THOUGHTS);
updateThoughtsBadge();
// Wenn das Modal offen ist: live nachrendern + ans Ende scrollen
const modal = document.getElementById('thought-stream-modal');
if (modal && modal.style.display !== 'none') renderThoughtStream(true);
persistThoughtStream();
}

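Die Dedup-Regel aus `pushThought` (direkt aufeinanderfolgende identische Events verwerfen, `tool`-Events aber nie) laesst sich zum Nachvollziehen als kleine Python-Skizze nachbauen — hypothetischer Nachbau, nicht der Original-Code:

```python
# Nachbau der pushThought-Dedup-Logik (Skizze):
_last_key = ""

def should_record(activity: str, tool: str = "") -> bool:
    """True, wenn das Event in den Stream gehoert. 'tool'-Events
    werden nie dedupliziert, alle anderen nur bei State-Wechsel."""
    global _last_key
    key = f"{activity}|{tool}"
    if activity != "tool" and key == _last_key:
        return False
    _last_key = key
    return True
```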
function updateThoughtsBadge() {
const a = document.getElementById('thoughts-count');
if (a) a.textContent = thoughtStream.length ? `(${thoughtStream.length})` : '';
const b = document.getElementById('thoughts-count-modal');
if (b) b.textContent = thoughtStream.length ? `(${thoughtStream.length})` : '';
}

function openThoughtStream() {
const modal = document.getElementById('thought-stream-modal');
if (!modal) return;
modal.style.display = 'flex';
renderThoughtStream(true);
}

function closeThoughtStream() {
const modal = document.getElementById('thought-stream-modal');
if (modal) modal.style.display = 'none';
}

function clearThoughtStream() {
if (thoughtStream.length === 0) return;
if (!confirm(`Gedanken-Stream leeren? ${thoughtStream.length} Eintraege werden geloescht.`)) return;
thoughtStream = [];
lastThoughtKey = '';
updateThoughtsBadge();
renderThoughtStream(false);
persistThoughtStream();
}

function _escapeHtml(s) {
return String(s).replace(/[&<>"']/g, c => ({'&':'&amp;','<':'&lt;','>':'&gt;','"':'&quot;',"'":'&#39;'}[c]));
}

function renderThoughtStream(autoscroll) {
const list = document.getElementById('thought-stream-list');
if (!list) return;
if (thoughtStream.length === 0) {
list.innerHTML = '<div style="padding:24px;text-align:center;color:#555570;font-style:italic;">Noch keine Gedanken aufgezeichnet.<br>Sobald ARIA was tut, taucht\'s hier auf.</div>';
return;
}
const rows = [];
let prevTs = 0;
for (const t of thoughtStream) {
const gapMin = prevTs ? Math.floor((t.ts - prevTs) / 60000) : 0;
if (gapMin >= 1) {
const label = gapMin < 60 ? `${gapMin} Min` : `${Math.floor(gapMin/60)}h ${gapMin%60}m`;
rows.push(`<div style="display:flex;align-items:center;padding:6px 16px;gap:8px;"><div style="flex:1;height:1px;background:#1E1E2E;"></div><span style="color:#555570;font-size:10px;">${label}</span><div style="flex:1;height:1px;background:#1E1E2E;"></div></div>`);
}
prevTs = t.ts;
const d = new Date(t.ts);
const time = `${String(d.getHours()).padStart(2,'0')}:${String(d.getMinutes()).padStart(2,'0')}:${String(d.getSeconds()).padStart(2,'0')}`;
let icon, label, color;
if (t.activity === 'idle') { icon = '✓'; label = 'fertig'; color = '#34C759'; }
else if (t.activity === 'tool') { icon = '🔧'; label = t.tool || 'tool'; color = '#E0E0F0'; }
else if (t.activity === 'assistant'){ icon = '✍️'; label = 'schreibt'; color = '#E0E0F0'; }
else if (t.activity === 'thinking'){ icon = '💭'; label = 'denkt'; color = '#E0E0F0'; }
else { icon = '•'; label = t.activity; color = '#E0E0F0'; }
rows.push(`<div style="display:flex;padding:4px 16px;align-items:baseline;"><span style="color:#555570;width:78px;font-size:11px;">${time}</span><span style="width:24px;">${icon}</span><span style="color:${color};flex:1;">${_escapeHtml(label)}</span></div>`);
}
list.innerHTML = rows.join('');
if (autoscroll) list.scrollTop = list.scrollHeight;
}

// ── XTTS Panel ─────────────────────────────
function renderVoiceList(voices) {
  const box = document.getElementById('xtts-voice-list');

@@ -2537,11 +2735,16 @@
  const f5ttsNfeRaw = document.getElementById('diag-f5tts-nfe')?.value || '';
  const f5ttsCfgStrength = f5ttsCfgRaw ? parseFloat(f5ttsCfgRaw) : undefined;
  const f5ttsNfeStep = f5ttsNfeRaw ? parseInt(f5ttsNfeRaw, 10) : undefined;
  const fluxDefaultModel = document.getElementById('diag-flux-default-model')?.value || undefined;
  const fluxKeywordRaw = document.getElementById('diag-flux-keyword-raw')?.value;
  const fluxKeywordSwitch = document.getElementById('diag-flux-keyword-switch')?.value;
  const huggingfaceToken = document.getElementById('diag-flux-hf-token')?.value;
  send({
    action: 'send_voice_config',
    ttsEnabled, xttsVoice, whisperModel,
    f5ttsModel, f5ttsCkptFile, f5ttsVocabFile,
    f5ttsCfgStrength, f5ttsNfeStep,
    fluxDefaultModel, fluxKeywordRaw, fluxKeywordSwitch, huggingfaceToken,
  });
  const statusEl = document.getElementById('voice-status');
  if (statusEl && xttsVoice) {

@@ -2775,96 +2978,133 @@
// ── ARIA live view (SSH + desktop) ──────────────────

let liveSshTerm = null;
let liveSshFit = null;

function switchLiveTab(tab) {
  document.getElementById('live-ssh').style.display = tab === 'ssh' ? 'block' : 'none';
  document.getElementById('live-aria').style.display = tab === 'aria' ? 'flex' : 'none';
  document.getElementById('live-desktop').style.display = tab === 'desktop' ? 'block' : 'none';
  document.getElementById('live-tab-ssh').className = 'tab-btn' + (tab === 'ssh' ? ' active' : '');
  document.getElementById('live-tab-aria').className = 'tab-btn' + (tab === 'aria' ? ' active' : '');
  document.getElementById('live-tab-desktop').className = 'tab-btn' + (tab === 'desktop' ? ' active' : '');
  if (tab === 'ssh' && liveSshTerm && liveSshFit) {
    setTimeout(() => liveSshFit.fit(), 50);
  }
}

function startLiveSSH() {
  const statusEl = document.getElementById('live-ssh-status');
  const btn = document.getElementById('btn-live-ssh');

  // Already connected? Then disconnect instead
  if (liveSshTerm && liveSshTerm._sshConnected) {
    send({ action: 'live_ssh_close' });
    statusEl.textContent = 'Getrennt';
    statusEl.style.color = '#FF6B6B';
    btn.textContent = 'Verbinden';
    liveSshTerm._sshConnected = false;
    return;
  }

  statusEl.textContent = 'Verbinde...';
  statusEl.style.color = '#FFD60A';

  function initSSHTerm() {
    const container = document.getElementById('live-ssh-term');
    if (!liveSshTerm) {
      liveSshTerm = new Terminal({
        theme: { background: '#080810', foreground: '#E0E0F0', cursor: '#0096FF' },
        fontFamily: 'Courier New, monospace',
        fontSize: 12,
        cursorBlink: true,
      });
      liveSshFit = new FitAddon.FitAddon();
      liveSshTerm.loadAddon(liveSshFit);
      liveSshTerm.open(container);
      liveSshFit.fit();
      liveSshTerm.onData((data) => {
        send({ action: 'live_ssh_input', data });
      });
    }
    liveSshTerm.clear();
    send({ action: 'live_ssh_start' });
  }

  if (typeof Terminal === 'undefined') {
    const s = document.createElement('script');
    s.src = 'https://cdn.jsdelivr.net/npm/@xterm/xterm@5.5.0/lib/xterm.min.js';
    s.onload = () => {
      const s2 = document.createElement('script');
      s2.src = 'https://cdn.jsdelivr.net/npm/@xterm/addon-fit@0.10.0/lib/addon-fit.min.js';
      s2.onload = () => initSSHTerm();
      document.head.appendChild(s2);
    };
    document.head.appendChild(s);
  } else {
    initSSHTerm();
  }
}

// ── ARIA Live (read-only mirror of the Claude Code session) ──────
//
// Receives agent_stream events from the RVS (proxy → bridge → RVS → us).
// Renders them as a monospace list — tool calls in cyan, tool results
// in grey (truncated), ARIA text in white, thinking in italics. Auto-scroll
// sticks to the bottom edge as long as the user hasn't scrolled up.
// The emergency stop kills all subprocesses via bridge → proxy side channel.
function _ariaStreamEl() { return document.getElementById('live-aria-stream'); }
function _ariaStatusEl() { return document.getElementById('live-aria-status'); }
function _ariaIsAtBottom() {
  const el = _ariaStreamEl();
  if (!el) return true;
  return (el.scrollHeight - el.scrollTop - el.clientHeight) < 24;
}
function _ariaMaybeScroll() {
  if (!document.getElementById('live-aria-autoscroll')?.checked) return;
  const el = _ariaStreamEl();
  if (el) el.scrollTop = el.scrollHeight;
}
// Truncated UI: larger backlogs can grow to many MB. We keep at most
// 2000 lines — on overflow, drop the topmost block.
const ARIA_MAX_LINES = 2000;
function _ariaTrimBacklog() {
  const el = _ariaStreamEl();
  if (!el) return;
  while (el.childElementCount > ARIA_MAX_LINES) {
    el.removeChild(el.firstChild);
  }
}
function _ariaTimePrefix(ts) {
  try {
    const d = ts ? new Date(ts) : new Date();
    const h = String(d.getHours()).padStart(2, '0');
    const m = String(d.getMinutes()).padStart(2, '0');
    const s = String(d.getSeconds()).padStart(2, '0');
    return `${h}:${m}:${s}`;
  } catch (_) { return ''; }
}
function _ariaEsc(s) {
  return String(s ?? '').replace(/[&<>"']/g, c => ({'&':'&amp;','<':'&lt;','>':'&gt;','"':'&quot;',"'":'&#39;'}[c]));
}
function _ariaPushLine(html, color, opts = {}) {
  const el = _ariaStreamEl();
  if (!el) return;
  const wasAtBottom = _ariaIsAtBottom();
  const row = document.createElement('div');
  row.style.cssText = `color:${color};${opts.style||''}`;
  row.innerHTML = html;
  // Remove the static "Sobald ARIA..." placeholder line on the first event
  const placeholder = el.querySelector('div[style*="italic"]');
  if (placeholder && el.childElementCount === 1) el.removeChild(placeholder);
  el.appendChild(row);
  _ariaTrimBacklog();
  if (wasAtBottom) _ariaMaybeScroll();
}
function appendAriaStreamEvent(p) {
  const t = _ariaTimePrefix(p.ts);
  const kind = p.kind || '';
  if (kind === 'start') {
    _ariaPushLine(
      `<span style="color:#444460;">━━━ ${t} session start (${_ariaEsc(p.model || 'unknown')}) ━━━</span>`,
      '#444460',
    );
    const st = _ariaStatusEl(); if (st) { st.textContent = 'ARIA aktiv...'; st.style.color = '#34C759'; }
  } else if (kind === 'end') {
    const reason = p.reason || '?';
    const codePart = (p.code !== undefined && p.code !== null) ? ` code=${_ariaEsc(p.code)}` : '';
    const errPart = p.error ? ` err=${_ariaEsc(String(p.error).slice(0,120))}` : '';
    _ariaPushLine(
      `<span style="color:#444460;">━━━ ${t} session end (${_ariaEsc(reason)}${codePart}${errPart}) ━━━</span>`,
      '#444460',
    );
    const st = _ariaStatusEl(); if (st) { st.textContent = 'Idle'; st.style.color = '#8888AA'; }
  } else if (kind === 'text') {
    _ariaPushLine(
      `<span style="color:#777799;">[${t}]</span> ${_ariaEsc(p.text || '')}`,
      '#D0D0E0',
      { style: 'white-space:pre-wrap;word-break:break-word;' },
    );
  } else if (kind === 'thinking') {
    _ariaPushLine(
      `<span style="color:#777799;">[${t}]</span> <span style="font-style:italic;color:#888866;">💭 ${_ariaEsc(p.text || '')}</span>`,
      '#888866',
      { style: 'white-space:pre-wrap;word-break:break-word;' },
    );
  } else if (kind === 'tool_use') {
    const name = _ariaEsc(p.name || '?');
    const inp = _ariaEsc(p.input || '');
    const tail = p.inputTruncatedBytes ? `<span style="color:#777799;"> ...(+${p.inputTruncatedBytes} bytes)</span>` : '';
    _ariaPushLine(
      `<span style="color:#777799;">[${t}]</span> <span style="color:#0096FF;">▶ ${name}</span> <span style="color:#8888AA;">${inp}${tail}</span>`,
      '#C0C0D0',
      { style: 'white-space:pre-wrap;word-break:break-word;' },
    );
  } else if (kind === 'tool_result') {
    const isError = p.isError === true;
    const head = isError ? '<span style="color:#FF6B6B;">✗ result (ERROR)</span>' : '<span style="color:#34C759;">✓ result</span>';
    const tail = p.truncatedBytes ? `<span style="color:#777799;"> ...(+${p.truncatedBytes} bytes)</span>` : '';
    _ariaPushLine(
      `<span style="color:#777799;">[${t}]</span> ${head}<br><span style="color:#9090A0;white-space:pre-wrap;display:block;padding-left:14px;border-left:2px solid #2A2A3E;">${_ariaEsc(p.content || '')}${tail}</span>`,
      '#9090A0',
    );
  } else {
    _ariaPushLine(
      `<span style="color:#777799;">[${t}]</span> <span style="color:#AAAACC;">${_ariaEsc(kind)}: ${_ariaEsc(JSON.stringify(p))}</span>`,
      '#AAAACC',
    );
  }
}

function handleLiveSSH(msg) {
  const statusEl = document.getElementById('live-ssh-status');
  const btn = document.getElementById('btn-live-ssh');
  if (msg.type === 'live_ssh_data' && liveSshTerm) {
    const raw = atob(msg.data);
    const bytes = new Uint8Array(raw.length);
    for (let i = 0; i < raw.length; i++) bytes[i] = raw.charCodeAt(i);
    liveSshTerm.write(bytes);
  } else if (msg.type === 'live_ssh_connected') {
    statusEl.textContent = 'Verbunden mit aria-wohnung';
    statusEl.style.color = '#34C759';
    btn.textContent = 'Trennen';
    if (liveSshTerm) liveSshTerm._sshConnected = true;
  } else if (msg.type === 'live_ssh_error') {
    statusEl.textContent = msg.error || 'Fehler';
    statusEl.style.color = '#FF6B6B';
    btn.textContent = 'Verbinden';
    if (liveSshTerm) liveSshTerm._sshConnected = false;
  } else if (msg.type === 'live_ssh_closed') {
    statusEl.textContent = 'Getrennt';
    statusEl.style.color = '#8888AA';
    btn.textContent = 'Verbinden';
    if (liveSshTerm) liveSshTerm._sshConnected = false;
  }
}

function clearAriaLive() {
  const el = _ariaStreamEl();
  if (el) el.innerHTML = '<div style="color:#555570;font-style:italic;">Geleert.</div>';
}

function ariaPanicStop() {
  if (!confirm('Wirklich NOT-AUS? Alle aktiven Claude-Subprocesses werden sofort gekillt.')) return;
  send({ action: 'aria_panic_stop' });
  _ariaPushLine(
    `<span style="color:#FF3B30;font-weight:bold;">━━━ ${_ariaTimePrefix()} ⛔ NOT-AUS ausgeloest ━━━</span>`,
    '#FF3B30',
  );
}

function checkDesktop() {

@@ -2973,6 +3213,7 @@
        <div style="color:#8888AA;font-size:11px;margin-top:4px;">${detailLine}</div>
        <div style="color:#888;font-size:12px;margin-top:2px;">"${escapeHtml(t.message || '')}"</div>
        <div style="margin-top:6px;display:flex;gap:6px;">
          <button class="btn secondary" onclick="openTriggerEdit('${escapeHtml(t.name)}')" style="padding:2px 10px;font-size:10px;color:#0096FF;border-color:#0096FF;">✎ Bearbeiten</button>
          <button class="btn secondary" onclick="toggleTriggerActive('${escapeHtml(t.name)}', ${!active})" style="padding:2px 10px;font-size:10px;color:#FF9500;border-color:#FF9500;">${active ? '⏸ Deaktivieren' : '▶ Aktivieren'}</button>
          <button class="btn secondary" onclick="deleteTrigger('${escapeHtml(t.name)}')" style="padding:2px 10px;font-size:10px;color:#FF6B6B;border-color:#FF6B6B;">🗑 Löschen</button>
        </div>
@@ -3010,10 +3251,21 @@
  document.getElementById('trigger-watcher-fields').style.display = t === 'watcher' ? '' : 'none';
}

// null = create mode, string = edit mode (name of the bubble being edited)
let editingTriggerName = null;

async function openTriggerCreate() {
  editingTriggerName = null;
  document.getElementById('trigger-modal-title').textContent = 'Neuer Trigger';
  document.getElementById('trigger-modal-save-btn').textContent = 'Anlegen';
  document.getElementById('trigger-type').disabled = false;
  document.getElementById('trigger-name').disabled = false;
  document.getElementById('trigger-timer-create-fields').style.display = '';
  document.getElementById('trigger-timer-edit-fields').style.display = 'none';
  document.getElementById('trigger-type').value = 'timer';
  document.getElementById('trigger-name').value = '';
  document.getElementById('trigger-timer-minutes').value = '10';
  document.getElementById('trigger-timer-fires-at').value = '';
  document.getElementById('trigger-condition').value = '';
  document.getElementById('trigger-check-interval').value = '300';
  document.getElementById('trigger-throttle').value = '3600';

@@ -3042,6 +3294,52 @@

function closeTriggerModal() {
  document.getElementById('trigger-modal').classList.remove('open');
  editingTriggerName = null;
}

/** Edit mode: populate the modal with the existing trigger's values. */
async function openTriggerEdit(name) {
  const t = triggersCache.find(x => x.name === name);
  if (!t) { alert('Trigger nicht in cache, lade neu...'); loadTriggers(); return; }
  editingTriggerName = name;
  document.getElementById('trigger-modal-title').textContent = 'Trigger bearbeiten — ' + name;
  document.getElementById('trigger-modal-save-btn').textContent = 'Speichern';
  // Type + name cannot be changed in edit mode
  document.getElementById('trigger-type').value = t.type;
  document.getElementById('trigger-type').disabled = true;
  document.getElementById('trigger-name').value = t.name;
  document.getElementById('trigger-name').disabled = true;
  // Timer: relative-minutes field off, absolute ISO field on
  document.getElementById('trigger-timer-create-fields').style.display = 'none';
  document.getElementById('trigger-timer-edit-fields').style.display = '';
  document.getElementById('trigger-timer-fires-at').value = t.fires_at || '';
  // Pre-fill the watcher fields
  document.getElementById('trigger-condition').value = t.condition || '';
  document.getElementById('trigger-check-interval').value = String(t.check_interval_sec || 300);
  document.getElementById('trigger-throttle').value = String(t.throttle_sec || 3600);
  document.getElementById('trigger-message').value = t.message || '';
  document.getElementById('trigger-modal-error').style.display = 'none';
  onTriggerTypeChange();
  // Variables hint for watchers in edit mode too
  if (t.type === 'watcher') {
    try {
      const r = await fetch('/api/brain/triggers/conditions');
      const d = await r.json();
      const info = document.getElementById('trigger-vars-info');
      if (info) {
        const vars = (d.variables || []).map(v =>
          `<code>${escapeHtml(v.name)}</code>=${escapeHtml(String(d.current[v.name]))} <span style="color:#444;">(${escapeHtml(v.desc)})</span>`
        ).join(' · ');
        const fns = (d.functions || []).map(f =>
          `<code>${escapeHtml(f.signature)}</code> — ${escapeHtml(f.desc)}`
        ).join('<br>');
        info.innerHTML =
          '<strong>Variablen:</strong> ' + vars +
          (fns ? '<br><br><strong>Funktionen:</strong><br>' + fns : '');
      }
    } catch {}
  }
  document.getElementById('trigger-modal').classList.add('open');
}

async function saveTrigger() {

@@ -3053,6 +3351,33 @@
  if (!name) { errEl.textContent = 'Name fehlt.'; errEl.style.display = 'block'; return; }
  if (!message) { errEl.textContent = 'Nachricht fehlt.'; errEl.style.display = 'block'; return; }
  try {
    // ── EDIT MODE ───────────────────────────────────────────
    if (editingTriggerName) {
      const patch = { message };
      if (ttype === 'watcher') {
        const condition = document.getElementById('trigger-condition').value.trim();
        if (!condition) { errEl.textContent = 'Condition fehlt.'; errEl.style.display = 'block'; return; }
        patch.condition = condition;
        patch.check_interval_sec = parseInt(document.getElementById('trigger-check-interval').value, 10) || 300;
        patch.throttle_sec = parseInt(document.getElementById('trigger-throttle').value, 10) || 3600;
      } else if (ttype === 'timer') {
        const fa = document.getElementById('trigger-timer-fires-at').value.trim();
        if (fa) patch.fires_at = fa;
      }
      const r = await fetch('/api/brain/triggers/' + encodeURIComponent(editingTriggerName), {
        method: 'PATCH',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(patch),
      });
      if (!r.ok) {
        const txt = await r.text();
        throw new Error('HTTP ' + r.status + ': ' + txt.slice(0, 200));
      }
      closeTriggerModal();
      loadTriggers();
      return;
    }
    // ── CREATE MODE ─────────────────────────────────────────
    let url, body;
    if (ttype === 'timer') {
      const mins = parseInt(document.getElementById('trigger-timer-minutes').value, 10) || 10;
@@ -4696,6 +5021,7 @@
  });
}

loadThoughtStream();
connectWS();
</script>
</body>

@@ -492,9 +492,10 @@ function handleGatewayMessage(msg) {
}

function sendToGateway(text, isTrace) {
  // The OpenClaw gateway is gone — Brain via bridge via RVS is the only
  // route. We no longer log anything; old trace calls are closed cleanly.
  if (!gatewayWs || gatewayWs.readyState !== WebSocket.OPEN) {
    if (isTrace) traceEnd(false, "Gateway deprecated — nutze RVS");
    return false;
  }

@@ -632,6 +633,11 @@ function connectRVS(forcePlain) {
        tool: msg.payload?.tool || msg.tool || "",
      });
    }
  } else if (msg.type === "agent_stream") {
    // Full live stream of the Claude Code session (assistant_text +
    // tool_use with input + tool_result with truncated output). Passed
    // through 1:1 to the browser — the ARIA live view renders it.
    broadcast({ type: "agent_stream", payload: msg.payload });
  } else if (msg.type === "memory_saved") {
    // ARIA saved something to the Qdrant DB itself (via the memory_save tool).
    const m = msg.payload || {};
@@ -757,22 +763,20 @@ function sendToRVS_raw(msgObj) {
}

function sendToRVS(text, isTrace) {
  // Brain pipeline: Diagnostic → RVS → bridge → Brain (HTTP). The OpenClaw
  // gateway path is shut down. Sender 'diagnostic' so the bridge forwards
  // the text to the Brain as a user message and the app + Diagnostic can
  // mirror the bubble live.
  if (!rvsWs || rvsWs.readyState !== WebSocket.OPEN) {
    if (isTrace) traceEnd(false, "RVS nicht verbunden");
    return false;
  }
  sendToRVS_raw({
    type: "chat",
    payload: { text, sender: "diagnostic" },
    timestamp: Date.now(),
  });
  return true;
}

// ── Claude Proxy Test ────────────────────────────────────

@@ -1836,8 +1840,11 @@ wss.on("connection", (ws) => {
    const msg = JSON.parse(raw.toString());

    if (msg.action === "test_gateway") {
      // Deprecated — the gateway path is gone. We redirect to the RVS so
      // that old browser sessions which still show the button don't click
      // silently into the void. Newer versions no longer have the button.
      traceStart("RVS", msg.text || "aria lebst du noch?");
      sendToRVS(msg.text || "aria lebst du noch?", true);
    } else if (msg.action === "test_rvs") {
      traceStart("RVS", msg.text || "aria lebst du noch?");
      sendToRVS(msg.text || "aria lebst du noch?", true);
@@ -1885,6 +1892,18 @@ wss.on("connection", (ws) => {
      if (traceActive) traceEnd(false, "Vom Benutzer abgebrochen");
      broadcast({ type: "agent_activity", activity: "idle" });
      dockerExec("aria-core", "openclaw doctor --fix 2>/dev/null || true").catch(() => {});
    } else if (msg.action === "aria_panic_stop") {
      // Emergency stop from the ARIA live view: local /api/cancel AND a
      // hard kill via the bridge (which in turn calls the proxy side
      // channel /cancel-all).
      log("warn", "server", "⛔ NOT-AUS — hard cancel + proxy /cancel-all");
      pendingMessageTime = 0;
      watchdogWarned = false;
      watchdogFixAttempted = false;
      if (traceActive) traceEnd(false, "Vom Benutzer per NOT-AUS abgebrochen");
      broadcast({ type: "agent_activity", activity: "idle" });
      // RVS broadcast cancel_request with hard:true → aria-bridge calls
      // the proxy /cancel-all side channel and kills all subprocesses.
      sendToRVS_raw({ type: "cancel_request", payload: { hard: true, source: "diagnostic-panic" }, timestamp: Date.now() });
    } else if (msg.action === "voice_upload") {
      // Forward voice samples to the XTTS bridge via RVS, wait for confirmation
      log("info", "server", `Voice-Upload '${msg.name}' (${(msg.samples || []).length} Samples) sende an RVS...`);
@@ -1943,6 +1962,26 @@ wss.on("connection", (ws) => {
      if (msg.f5ttsNfeStep !== undefined && !isNaN(msg.f5ttsNfeStep)) {
        voiceConfig.f5ttsNfeStep = msg.f5ttsNfeStep;
      }
      // FLUX settings (default model + user keywords). flux-bridge uses
      // fluxDefaultModel for hot-swapping; the Brain reads the keywords
      // directly from /shared/config/voice_config.json for the system prompt.
      if (msg.fluxDefaultModel !== undefined) {
        voiceConfig.fluxDefaultModel = (msg.fluxDefaultModel === "schnell") ? "schnell" : "dev";
      }
      if (msg.fluxKeywordRaw !== undefined) {
        voiceConfig.fluxKeywordRaw = String(msg.fluxKeywordRaw || "").trim().toLowerCase() || "flux";
      }
      if (msg.fluxKeywordSwitch !== undefined) {
        voiceConfig.fluxKeywordSwitch = String(msg.fluxKeywordSwitch || "").trim().toLowerCase() || "fix";
      }
      // HuggingFace token for the gated FLUX.1-dev. Pushed to the
      // flux-bridge via RVS, where it is set as the HF_TOKEN env var before
      // the next from_pretrained. An empty string means "no token" (rather
      // than "keep what you had"), so Stefan can also delete it again.
      if (msg.huggingfaceToken !== undefined) {
        voiceConfig.huggingfaceToken = String(msg.huggingfaceToken || "").trim();
      }
      try {
        fs.mkdirSync("/shared/config", { recursive: true });
        fs.writeFileSync("/shared/config/voice_config.json", JSON.stringify(voiceConfig, null, 2));

@@ -12,8 +12,10 @@ services:
        DIST=$$(find /usr/local/lib -path '*/claude-max-api-proxy/dist' -type d | head -1) &&
        sed -i 's/startServer({ port })/startServer({ port, host: process.env.HOST || \"127.0.0.1\" })/' $$DIST/server/standalone.js &&
        sed -i 's/\"--no-session-persistence\",/\"--no-session-persistence\",\"--dangerously-skip-permissions\",/' $$DIST/subprocess/manager.js &&
        sed -i 's/const DEFAULT_TIMEOUT = 300000;/const DEFAULT_TIMEOUT = 1200000;/' $$DIST/subprocess/manager.js &&
        cp /proxy-patches/openai-to-cli.js $$DIST/adapter/openai-to-cli.js &&
        cp /proxy-patches/cli-to-openai.js $$DIST/adapter/cli-to-openai.js &&
        cp /proxy-patches/routes.js $$DIST/server/routes.js &&
        claude-max-api"
    volumes:
      - ~/.claude:/root/.claude  # Claude CLI auth (credentials in /root/.claude/.credentials.json)

@@ -0,0 +1,180 @@
# FLUX.1-dev Image Generation — Architecture & Status

Extends the ARIA agent stack with native text-to-image generation via
FLUX.1-dev on the Gamebox. Follows the **same pattern as f5tts / whisper**:
a dedicated container on the gaming PC that connects itself to the RVS via
WebSocket and listens for its request type.

## Pipeline

```
Stefan / App
  │ chat message ("mal mir einen Sonnenuntergang ueberm Hangar")
  ▼
aria-bridge ── send_to_core ──▶ aria-brain
  │ chooses tool: flux_generate(prompt=..., width=..., ...)
  │ POST /internal/flux-generate
  ▼
aria-bridge (VM)
  │ pushes {type: "flux_request",
  │         payload: {requestId, prompt, ...}}
  │ via RVS broadcast
  ▼
RVS
  │ fanout
  ▼
flux-bridge (Gamebox)
  │ FluxPipeline.from_pretrained(...)
  │ pipeline(prompt, width, height, steps, guidance).images[0]
  │ PIL → PNG → base64
  │ {type: "flux_response", payload: {state:"done",
  │                                   requestId, base64, mimeType, ...}}
  ▼
RVS
  │
  ▼
aria-bridge (VM)
  │ _pending_flux[requestId].set_result(payload)
  │ base64-decode → /shared/uploads/aria_generated_<ts>.png
  │ HTTP 200 back to the Brain with {path, sizeBytes, ...}
  ▼
aria-brain
  │ tool result + hint: "write [FILE: {path}] into your reply"
  │ final reply: "Hier dein Bild:\n[FILE: /shared/uploads/aria_generated_<ts>.png]"
  ▼
aria-bridge
  │ _FILE_MARKER_RE → file_from_aria event
  │ marker stays in the chat text for history; the app renders the image inline
  ▼
App + Diagnostic
```
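The two RVS messages in the pipeline can be sketched as plain JSON envelopes. Only `requestId`, `prompt`, `state`, `base64` and `mimeType` are named in the diagram; `width`/`height` follow the tool schema in section 5, and the helper names here are illustrative, not taken from the repo.

```python
import json

def make_flux_request(request_id: str, prompt: str,
                      width: int = 1024, height: int = 1024) -> str:
    """Envelope broadcast by aria-bridge over the RVS (sketch)."""
    return json.dumps({
        "type": "flux_request",
        "payload": {"requestId": request_id, "prompt": prompt,
                    "width": width, "height": height},
    })

def make_flux_response(request_id: str, png_b64: str) -> str:
    """Envelope sent back by flux-bridge once the render is done (sketch)."""
    return json.dumps({
        "type": "flux_response",
        "payload": {"state": "done", "requestId": request_id,
                    "base64": png_b64, "mimeType": "image/png"},
    })
```

A `"rendering"` progress ping would use the same envelope with `state: "rendering"` and no `base64` field.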
## Components

### 1. `flux/bridge.py` (new) — flux-bridge container

- `FluxPipeline` (diffusers) with `enable_model_cpu_offload()` as the default,
  so FLUX.1-dev (~24 GB on disk, ~12 B params) runs at all on an RTX 3060
  (12 GB VRAM).
- Lazy load: the model is loaded on the first `flux_request` (or during the
  initial load); `service_status: "flux", state: "loading" | "ready" | "error"`
  is broadcast via RVS → the Diagnostic badge shows it.
- Single-worker queue (`_flux_queue`) — the GPU must not render in parallel,
  otherwise OOM or crash.
- Progress ping: `flux_response {state: "rendering"}` right after queue
  pickup, so the aria-bridge knows the job arrived even when the actual
  render takes 60 s.
- Caps:
  - `width`/`height`: 256 .. `FLUX_MAX_DIM` (default 1536), snapped to
    multiples of 64.
  - `steps`: 1 .. `FLUX_MAX_STEPS` (default 50).
  - `guidance_scale`: 0.0 .. 20.0.
  - `prompt`: max 2000 chars.
- Env switches:
  - `FLUX_MODEL` — default `black-forest-labs/FLUX.1-dev` (non-commercial).
    Alternative: `FLUX.1-schnell` (Apache-2.0, 4 steps, much faster).
  - `FLUX_OFFLOAD` — `model` (default), `sequential` (leaner, slower)
    or `none` (everything on the GPU; only for cards with >=24 GB VRAM).
  - `FLUX_DTYPE` — `bfloat16` (default) or `float16`.
  - `HF_TOKEN` — FLUX.1-dev requires a HuggingFace login.
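The width/height cap can be sketched as a small clamp-and-snap helper. This is an illustration, not the actual `flux/bridge.py` code; whether the real bridge rounds to the nearest multiple of 64 or floors, as done here, is not specified in this document.

```python
FLUX_MAX_DIM = 1536  # overridable via env in the real bridge

def snap_dim(value: int, lo: int = 256, hi: int = FLUX_MAX_DIM,
             step: int = 64) -> int:
    """Clamp a requested dimension to [lo, hi], then snap down to a
    multiple of `step` (lo itself is already a multiple of 64)."""
    v = max(lo, min(hi, int(value)))
    return (v // step) * step
```

For example a requested 1000 px becomes 960 px, and anything below 256 or above `FLUX_MAX_DIM` is pulled back into range first.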
### 2. `flux/docker-compose.yml` — its own stack

Deliberately NOT bundled into `xtts/docker-compose.yml`: FLUX can also run
separately (e.g. later on a 4090 while the 3060 keeps serving TTS+STT).
Its own compose file, its own `.env.example`, its own `hf-cache/` volume.

- GPU reservation analogous to f5tts/whisper.
- Volume `./hf-cache:/root/.cache/huggingface` — if flux runs on the same
  machine as xtts, `../xtts/hf-cache` can be symlinked so the model cache
  is shared.
- Restart `unless-stopped`.

### 3. `rvs/server.js` — allowlist extended

New types: `flux_request`, `flux_response` (the initial-load broadcast
`service_status` was already allowed).
### 4. `bridge/aria_bridge.py`

- `self._pending_flux: dict[str, asyncio.Future]` — request_id → future.
- `self._remote_flux_ready: bool` — populated from `service_status` updates;
  controls the HTTP timeout (240 s when ready, 900 s during the very first
  model download).
- `flux_response` handler: a progress ping (`state == "rendering"`) stays a
  no-op on the future; `state == "done"` sets the future, an error sets
  `{"error": ...}`.
- `_flux_generate(prompt, width, height, steps, guidance, seed)` — helper:
  1. UUID + future
  2. broadcast `flux_request`
  3. `asyncio.wait_for(future, timeout=...)`
  4. base64 → `/shared/uploads/aria_generated_<ts>.png`
  5. dict with `{ok, path, sizeBytes, width, height, steps, guidance, seed, model, renderSeconds}`
- HTTP endpoint `POST /internal/flux-generate` on the internal listener
  (port 8090). Validates the prompt + clamps, calls `_flux_generate`, returns
  the result as JSON.
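The request_id → future pattern above can be sketched as follows. Class and method names are illustrative, not the real `aria_bridge.py` API, and the actual RVS broadcast is stubbed out with a comment:

```python
import asyncio
import uuid

class FluxClient:
    def __init__(self) -> None:
        self._pending_flux: dict[str, asyncio.Future] = {}

    def on_flux_response(self, payload: dict) -> None:
        """Called when a flux_response arrives over the RVS."""
        fut = self._pending_flux.get(payload.get("requestId", ""))
        if fut is None or fut.done():
            return
        state = payload.get("state")
        if state == "rendering":          # progress ping: leave the future alone
            return
        if state == "done":
            fut.set_result(payload)
        else:
            fut.set_result({"error": payload.get("error", "unknown")})

    async def generate(self, prompt: str, timeout: float = 240.0) -> dict:
        request_id = str(uuid.uuid4())
        fut = asyncio.get_running_loop().create_future()
        self._pending_flux[request_id] = fut
        # (real code: broadcast {"type": "flux_request", ...} over the RVS here)
        try:
            return await asyncio.wait_for(fut, timeout=timeout)
        finally:
            self._pending_flux.pop(request_id, None)
```

The `finally` pop keeps the pending map from leaking entries when the render times out, which matches the 240 s / 900 s timeout distinction described above.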
### 5. `aria-brain/agent.py` — META tool `flux_generate`

```jsonc
{
  "name": "flux_generate",
  "parameters": {
    "prompt": "string (English prompt — FLUX performs better in EN)",
    "width": "integer (256..1536, default 1024)",
    "height": "integer (256..1536, default 1024)",
    "steps": "integer (1..50, default 28)",
    "guidance_scale": "number (default 3.5)",
    "seed": "integer (optional)"
  }
}
```

Dispatcher:
- POSTs `{prompt, width, height, steps, guidance_scale, seed}` to
  `http://aria-bridge:8090/internal/flux-generate` (urllib, 1200 s
  timeout — the first render can trigger the 24 GB model download).
- On `ok=true` the tool returns the **path** plus render stats and
  explicitly instructs Claude: *"Write `[FILE: <path>]` into your
  reply to Stefan, then the app shows the image inline."*
- The brain composes the accompanying text itself and places the marker
  where it fits.

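A minimal sketch of the dispatcher's POST, assuming the payload shape from the tool schema above; the helper name `build_flux_request` is hypothetical, the endpoint URL and field names follow the description.

```python
import json
import urllib.request

def build_flux_request(prompt: str, **params) -> urllib.request.Request:
    """Build the POST request for the internal flux-generate endpoint."""
    body = {
        "prompt": prompt,
        "width": params.get("width", 1024),
        "height": params.get("height", 1024),
        "steps": params.get("steps", 28),
        "guidance_scale": params.get("guidance_scale", 3.5),
        "seed": params.get("seed"),
    }
    return urllib.request.Request(
        "http://aria-bridge:8090/internal/flux-generate",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_flux_request("a watercolor fox", steps=4)
# urllib.request.urlopen(req, timeout=1200) would block until the render
# (or the first model download) finishes — hence the generous timeout.
```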
### 6. `diagnostic/index.html` — status badge

Added the label `flux: 'FLUX Image-Gen'` to the existing
`updateServiceStatus()` switch — no new code, same banner mechanism as
F5-TTS / Whisper.

## File-Lifecycle

Generated images live under `/shared/uploads/aria_generated_<ts>.png`
(same folder as user uploads). As a result:
- The `[FILE: ...]` marker works (the bridge only allows paths under
  `/shared/uploads/`).
- The file-manager endpoints in Diagnostic (list/delete/zip) see them
  without special handling.
- Memory attachments: ARIA can attach a generated image to a memory
  entry in the same turn (`memory_save(attach_paths=[path])`).

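The path restriction mentioned above can be sketched as a small check; the helper name is hypothetical and the real bridge validation may differ, but resolving the path first is what keeps `..` traversal out of `/shared/uploads/`.

```python
import os.path

UPLOADS_ROOT = "/shared/uploads"

def is_allowed_file_path(path: str) -> bool:
    """Only files under /shared/uploads/ may be referenced via [FILE: ...]."""
    # realpath collapses ".." segments, so /shared/uploads/../etc/passwd fails
    resolved = os.path.realpath(path)
    return resolved.startswith(UPLOADS_ROOT + os.sep)
```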
## Known pitfalls

- **HF login**: FLUX.1-dev is gated. Set `HF_TOKEN` in `.env` before the
  first start, or run `huggingface-cli login` inside the container —
  otherwise the first download fails with a 403.
- **The first render takes long**: loading the 24 GB model + CUDA warmup
  → 5-10 min is realistic. The brain HTTP timeout is 1200 s, the RVS
  future timeout 900 s (loading mode). Stefan should be a bit patient
  with the first "draw me something" request — after that, renders take
  ~30-90 s.
- **License**: FLUX.1-dev is *non-commercial* (FLUX.1 Dev Non-Commercial
  License). For commercial use you have to switch to `FLUX.1-schnell`
  (Apache-2.0) or `FLUX.1-pro` (API only). Stefan can change this via
  `FLUX_MODEL` in the `.env`.
- **VRAM**: 12 GB (3060) are ONLY enough with `enable_model_cpu_offload`.
  On out-of-memory errors in the logs, switch to `FLUX_OFFLOAD=sequential`
  (much slower, but peak VRAM ~6 GB).
- **Parallel calls**: single-worker queue in the flux-bridge — a second
  `flux_generate` tool call from the brain waits until the first one is
  done. Not a problem in practice, because Stefan doesn't generate two
  images at once anyway.

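The VRAM rule of thumb from the pitfalls above can be written down as a tiny helper — a hedged sketch with a hypothetical function name, not part of the repo; the thresholds come from the offload notes (>= 24 GB fully on GPU, 12 GB with component-wise offload, below that sequential with ~6 GB peak).

```python
def suggest_offload(vram_gb: float) -> str:
    """Map available VRAM to a FLUX_OFFLOAD value, per the notes above."""
    if vram_gb >= 24:    # e.g. 4090: everything fits on the GPU
        return "none"
    if vram_gb >= 12:    # e.g. 3060: component-wise CPU offload
        return "model"
    return "sequential"  # peak VRAM ~6 GB, but much slower
```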
@@ -0,0 +1,36 @@
# ════════════════════════════════════════════════
# ARIA FLUX bridge — configuration
# Copy to .env and adjust
# ════════════════════════════════════════════════

# RVS connection (same credentials as on the ARIA VM / xtts/.env)
RVS_HOST=mobil.hacker-net.de
RVS_PORT=444
RVS_TLS=true
RVS_TLS_FALLBACK=true
RVS_TOKEN=your_token_here

# HuggingFace token + default model are managed in ARIA Diagnostic
# (section "FLUX Bildgenerierung") and pushed to the flux-bridge via RVS.
# Nothing needed here.
#
# A token is required ONLY for FLUX.1-dev (gated). Workflow if you want
# to use dev:
#   1) https://huggingface.co/black-forest-labs/FLUX.1-dev → "Agree"
#   2) https://huggingface.co/settings/tokens → create a "Read" token
#   3) Enter the token in Diagnostic > FLUX Bildgenerierung > HuggingFace-Token
# FLUX.1-schnell (Apache-2.0) runs without a token.

# Offloading strategy (VRAM control):
#   model      — default. Component-wise CPU offload, good for 12 GB cards.
#   sequential — more frugal (peak ~6 GB), but 2-3x slower.
#   none       — everything on the GPU. Only for cards with >= 24 GB VRAM.
FLUX_OFFLOAD=model

# Float type. bfloat16 is FLUX-native; switch to float16 on older cards
# without BF16 support.
FLUX_DTYPE=bfloat16

# Hard caps against accidentally expensive renders
FLUX_MAX_STEPS=50
FLUX_MAX_DIM=1536
@@ -0,0 +1,5 @@
# HuggingFace model cache (FLUX.1-dev ~24 GB on disk)
hf-cache/

# Docker .env
.env
@@ -0,0 +1,30 @@
FROM nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04

ENV DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1

RUN apt-get update && apt-get install -y --no-install-recommends \
    python3 python3-pip git \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# PyTorch CUDA wheels first, so diffusers doesn't pull in CPU torch.
# Torch 2.5+ is mandatory: current transformers (4.50+, pulled in
# transitively by diffusers) registers a custom_op in integrations/moe.py
# with string forward references (`input: 'torch.Tensor'`).
# Only torch 2.5's infer_schema can resolve those — 2.4.1 crashes with
# "Parameter input has unsupported type torch.Tensor" when importing
# diffusers.pipelines.flux.pipeline_flux.
# torchvision is required by the CLIP/Siglip image processors.
# cu121 stays — it matches the CUDA 12.2 base image.
RUN pip3 install --no-cache-dir \
    torch==2.5.1 torchvision==0.20.1 \
    --index-url https://download.pytorch.org/whl/cu121

COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

COPY bridge.py .

CMD ["python3", "bridge.py"]
@@ -0,0 +1,557 @@
#!/usr/bin/env python3
"""
ARIA FLUX bridge — runs on the Gamebox (RTX 3060).

Receives flux_request via RVS → FLUX.1-dev/-schnell on the GPU → sends
flux_response with a base64 PNG back to the aria-bridge. The aria-bridge
saves the file to /shared/uploads/ and ARIA references it with a
[FILE: ...] marker in its reply.

12 GB VRAM on the 3060 are only enough for FLUX.1-dev with
`enable_model_cpu_offload()` — otherwise OOM. Set FLUX_OFFLOAD=sequential
for maximum frugality (slower) or FLUX_OFFLOAD=none if the GPU has
enough VRAM (e.g. a 4090 later).

Env:
  RVS_HOST, RVS_PORT, RVS_TLS, RVS_TLS_FALLBACK, RVS_TOKEN
  FLUX_MODEL       Default: black-forest-labs/FLUX.1-dev
                   Alt: black-forest-labs/FLUX.1-schnell (4-step, Apache-2.0)
  FLUX_DEVICE      Default: cuda
  FLUX_DTYPE       Default: bfloat16 (alt: float16)
  FLUX_OFFLOAD     Default: model (alt: sequential | none)
  FLUX_MAX_STEPS   Default: 50
  FLUX_MAX_DIM     Default: 1536
"""
import asyncio
import base64
import io
import json
import logging
import os
import sys
import time
import uuid
from typing import Optional

import websockets

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    datefmt="%H:%M:%S",
)
logger = logging.getLogger("flux-bridge")
# Quiet down HuggingFace/Torch download logs
logging.getLogger("httpx").setLevel(logging.WARNING)
logging.getLogger("urllib3").setLevel(logging.WARNING)

RVS_HOST = os.getenv("RVS_HOST", "").strip()
RVS_PORT = int(os.getenv("RVS_PORT", "443"))
RVS_TLS = os.getenv("RVS_TLS", "true").lower() == "true"
RVS_TLS_FALLBACK = os.getenv("RVS_TLS_FALLBACK", "true").lower() == "true"
RVS_TOKEN = os.getenv("RVS_TOKEN", "").strip()

# Bootstrap fallback: only relevant if NO Diagnostic config broadcast
# arrives on the very first start AND the first render request doesn't
# contain a 'model' either. Default 'schnell', because Apache-2.0
# (no HF token needed) — Stefan sets his preferred default via
# Diagnostic. So the env var only exists for that extreme edge case and
# is deliberately no longer documented in .env.example.
FLUX_MODEL = os.getenv("FLUX_MODEL", "black-forest-labs/FLUX.1-schnell").strip()
FLUX_DEVICE = os.getenv("FLUX_DEVICE", "cuda").strip()
FLUX_DTYPE = os.getenv("FLUX_DTYPE", "bfloat16").strip().lower()
FLUX_OFFLOAD = os.getenv("FLUX_OFFLOAD", "model").strip().lower()
FLUX_MAX_STEPS = int(os.getenv("FLUX_MAX_STEPS", "50"))
FLUX_MAX_DIM = int(os.getenv("FLUX_MAX_DIM", "1536"))

# FLUX-dev native: guidance=3.5, steps=28. FLUX-schnell: guidance=0.0, steps=4.
DEFAULT_STEPS_DEV = 28
DEFAULT_STEPS_SCHNELL = 4
DEFAULT_GUIDANCE_DEV = 3.5
DEFAULT_GUIDANCE_SCHNELL = 0.0

# Mapping for the user-facing tag → HF model ID. Stefan only picks
# 'dev' / 'schnell' in Diagnostic; FLUX_MODEL from the env may be a
# custom ID (bootstrap), but is normally overridden by the Diagnostic
# choice on the first config broadcast.
MODEL_TAGS: dict[str, str] = {
    "dev": "black-forest-labs/FLUX.1-dev",
    "schnell": "black-forest-labs/FLUX.1-schnell",
}


def _tag_to_model_id(tag: str) -> str:
    """Maps 'dev'/'schnell' to an HF ID. Other strings pass through 1:1
    (custom IDs from the FLUX_MODEL env). Empty/invalid values → FLUX_MODEL default."""
    if not tag:
        return FLUX_MODEL
    t = tag.strip()
    return MODEL_TAGS.get(t, t)


def _is_schnell(model_id: str) -> bool:
    return "schnell" in model_id.lower()


def _is_model_cached(model_id: str) -> bool:
    """Checks whether an HF model snapshot exists locally in the hf-cache.

    HF stores under ~/.cache/huggingface/hub/models--{org}--{name}/snapshots/{rev}/.
    If the snapshots directory doesn't exist or is empty → a first download
    is pending (24+ GB for FLUX.1-dev, 24+ GB for FLUX.1-schnell — Stefan
    then gets a hint in the banner).
    """
    if not model_id:
        return False
    cache_root = os.environ.get("HF_HOME") or os.path.expanduser("~/.cache/huggingface")
    safe = "models--" + model_id.replace("/", "--")
    snapshots = os.path.join(cache_root, "hub", safe, "snapshots")
    if not os.path.isdir(snapshots):
        return False
    try:
        for rev in os.listdir(snapshots):
            rev_dir = os.path.join(snapshots, rev)
            if os.path.isdir(rev_dir) and any(os.scandir(rev_dir)):
                return True
    except OSError:
        return False
    return False


def _torch_dtype():
    """Lazy resolve so torch is only imported when the model is loaded."""
    import torch
    return {"bfloat16": torch.bfloat16, "float16": torch.float16, "float32": torch.float32}\
        .get(FLUX_DTYPE, torch.bfloat16)


def _snap_dim(v: int, default: int = 1024) -> int:
    """FLUX needs multiples of 16 (safe: 64). Clamp + snap."""
    try:
        n = int(v)
    except (TypeError, ValueError):
        n = default
    n = max(256, min(FLUX_MAX_DIM, n))
    # Round down to the nearest multiple of 64
    n = (n // 64) * 64
    return max(256, n)


class FluxRunner:
    """Holds ONE FLUX pipeline. On a model switch the old one is discarded
    and the new one loaded (~15-30 s from the HF cache, no re-downloads).

    Each request may carry a 'dev'/'schnell' tag; without one,
    `default_model_id` is used (bootstraps to FLUX_MODEL, updated to the
    Diagnostic choice by the aria-bridge on the first config broadcast).
    """

    def __init__(self) -> None:
        self.pipe = None
        self._lock = asyncio.Lock()
        # Currently loaded model — empty until something has been loaded.
        self.model_id: str = ""
        # What gets used for a request WITHOUT an explicit model.
        # Set via Diagnostic config; FLUX_MODEL remains only as an
        # edge-case fallback when neither config nor request names one.
        self.default_model_id: str = FLUX_MODEL
        self.last_load_seconds: float = 0.0
        # True if the last _load_blocking had to trigger a fresh download
        # (model wasn't in the HF cache). Checked by the caller and set
        # as freshlyDownloaded in the 'ready' service_status.
        self.last_load_was_download: bool = False

    def _load_blocking(self, model_id: str) -> None:
        import torch
        from diffusers import FluxPipeline

        # Release the old pipeline so the HF loader gets VRAM/RAM
        if self.pipe is not None:
            logger.info("Discarding old pipeline '%s'", self.model_id)
            try:
                del self.pipe
            except Exception:
                pass
            self.pipe = None
            try:
                torch.cuda.empty_cache()
            except Exception:
                pass
            import gc
            gc.collect()

        was_cached = _is_model_cached(model_id)
        self.last_load_was_download = not was_cached
        if not was_cached:
            logger.warning("FLUX '%s' not in the HF cache — first download ahead (can take 5-10 min).",
                           model_id)
        logger.info("Loading FLUX '%s' (dtype=%s, offload=%s, cached=%s)...",
                    model_id, FLUX_DTYPE, FLUX_OFFLOAD, was_cached)
        t0 = time.time()
        pipe = FluxPipeline.from_pretrained(model_id, torch_dtype=_torch_dtype())

        if FLUX_OFFLOAD == "sequential":
            pipe.enable_sequential_cpu_offload()
        elif FLUX_OFFLOAD == "none":
            pipe.to(FLUX_DEVICE)
        else:  # "model" — default, sweet spot for 12 GB cards
            pipe.enable_model_cpu_offload()

        # VAE tiling saves VRAM on large images (>1024)
        try:
            pipe.vae.enable_tiling()
        except Exception:
            pass

        self.pipe = pipe
        self.model_id = model_id
        self.last_load_seconds = time.time() - t0
        logger.info("FLUX '%s' loaded in %.1fs", model_id, self.last_load_seconds)
        try:
            torch.cuda.empty_cache()
        except Exception:
            pass

    async def ensure_loaded(self, model_id: Optional[str] = None) -> bool:
        """Ensures the right pipeline is loaded. If a different model is
        requested than the currently active one, it is swapped.
        Returns True if a swap/load happened."""
        target = model_id or self.default_model_id or FLUX_MODEL
        async with self._lock:
            if self.pipe is not None and self.model_id == target:
                return False
            loop = asyncio.get_event_loop()
            await loop.run_in_executor(None, self._load_blocking, target)
            return True

    def _generate_blocking(self, prompt: str, width: int, height: int,
                           steps: int, guidance: float, seed: Optional[int]) -> bytes:
        import torch
        gen = None
        if seed is not None and seed >= 0:
            gen = torch.Generator(device=FLUX_DEVICE).manual_seed(int(seed))

        logger.info("Render (%s): %dx%d, steps=%d, guidance=%.2f, seed=%s, prompt=%r",
                    self.model_id, width, height, steps, guidance, seed, prompt[:80])
        out = self.pipe(
            prompt=prompt,
            width=width,
            height=height,
            num_inference_steps=steps,
            guidance_scale=guidance,
            generator=gen,
        )
        image = out.images[0]
        buf = io.BytesIO()
        image.save(buf, format="PNG", optimize=True)
        png_bytes = buf.getvalue()
        # Return VRAM for the next render
        try:
            torch.cuda.empty_cache()
        except Exception:
            pass
        return png_bytes

    async def generate(self, prompt: str, width: int, height: int,
                       steps: int, guidance: float, seed: Optional[int],
                       model_id: Optional[str] = None) -> bytes:
        await self.ensure_loaded(model_id)
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            None, self._generate_blocking, prompt, width, height, steps, guidance, seed,
        )


# ── Helpers ─────────────────────────────────────────────────


async def _send(ws, mtype: str, payload: dict) -> None:
    try:
        await ws.send(json.dumps({
            "type": mtype,
            "payload": payload,
            "timestamp": int(time.time() * 1000),
        }))
    except Exception as e:
        logger.warning("Send failed (%s): %s", mtype, e)


async def _broadcast_status(ws, state: str, **extra) -> None:
    """Sends service_status for the flux module.
    state: 'loading' | 'ready' | 'error'."""
    payload = {"service": "flux", "state": state}
    payload.update(extra)
    await _send(ws, "service_status", payload)


# ── Flux request queue ──────────────────────────────────────

# One GPU, one render at a time. Parallel requests would OOM otherwise.
_flux_queue: "asyncio.Queue[tuple]" = asyncio.Queue()


def _resolve_request(payload: dict, runner: FluxRunner) -> tuple[str, int, int, int, float, Optional[int], str]:
    """Reads fields from the flux_request payload + clamps to the caps.
    Returns (prompt, width, height, steps, guidance, seed, resolved_model_id).
    """
    prompt = (payload.get("prompt") or "").strip()
    if not prompt:
        raise ValueError("prompt missing")
    if len(prompt) > 2000:
        prompt = prompt[:2000]

    width = _snap_dim(payload.get("width", 1024))
    height = _snap_dim(payload.get("height", 1024))

    # Model choice: explicit per request > runner.default_model_id > FLUX_MODEL.
    req_model = (payload.get("model") or "").strip()
    resolved_model_id = _tag_to_model_id(req_model) if req_model else (runner.default_model_id or FLUX_MODEL)

    schnell = _is_schnell(resolved_model_id)
    default_steps = DEFAULT_STEPS_SCHNELL if schnell else DEFAULT_STEPS_DEV
    default_guidance = DEFAULT_GUIDANCE_SCHNELL if schnell else DEFAULT_GUIDANCE_DEV

    try:
        steps = int(payload.get("steps", default_steps))
    except (TypeError, ValueError):
        steps = default_steps
    steps = max(1, min(FLUX_MAX_STEPS, steps))

    try:
        guidance = float(payload.get("guidance_scale", default_guidance))
    except (TypeError, ValueError):
        guidance = default_guidance
    if not (0.0 <= guidance <= 20.0):
        guidance = default_guidance

    seed = payload.get("seed")
    if seed is not None:
        try:
            seed = int(seed)
        except (TypeError, ValueError):
            seed = None

    return prompt, width, height, steps, guidance, seed, resolved_model_id


async def _flux_worker(ws, runner: FluxRunner) -> None:
    """Serializes renders — one GPU, one image at a time."""
    while True:
        payload = await _flux_queue.get()
        request_id = payload.get("requestId") or str(uuid.uuid4())
        try:
            await _do_render(ws, runner, payload, request_id)
        except Exception:
            logger.exception("Flux worker error")
            await _send(ws, "flux_response", {
                "requestId": request_id,
                "error": "internal error",
            })
        finally:
            _flux_queue.task_done()


async def _do_render(ws, runner: FluxRunner, payload: dict, request_id: str) -> None:
    t0 = time.time()
    try:
        prompt, width, height, steps, guidance, seed, target_model_id = _resolve_request(payload, runner)
    except ValueError as e:
        logger.warning("flux_request invalid: %s", e)
        await _send(ws, "flux_response", {"requestId": request_id, "error": str(e)})
        return

    # Model swap needed? Broadcast status so the Diagnostic banner shows it.
    swap_needed = (runner.pipe is None or runner.model_id != target_model_id)
    will_download = swap_needed and not _is_model_cached(target_model_id)
    if swap_needed:
        await _broadcast_status(ws, "loading", model=target_model_id,
                                downloading=will_download)
        await _send(ws, "flux_response", {
            "requestId": request_id,
            "state": "switching_model",
            "model": target_model_id,
            "downloading": will_download,
        })

    # Progress ping: the user should see something is happening (renders >30s are realistic)
    await _send(ws, "flux_response", {
        "requestId": request_id,
        "state": "rendering",
        "width": width, "height": height, "steps": steps,
        "model": target_model_id,
    })

    try:
        png = await runner.generate(prompt, width, height, steps, guidance, seed,
                                    model_id=target_model_id)
    except Exception as e:
        logger.exception("FLUX render error")
        await _send(ws, "flux_response", {"requestId": request_id, "error": str(e)[:200]})
        if swap_needed:
            await _broadcast_status(ws, "error", error=str(e)[:200])
        return

    if swap_needed:
        await _broadcast_status(ws, "ready",
                                model=runner.model_id,
                                loadSeconds=runner.last_load_seconds,
                                freshlyDownloaded=runner.last_load_was_download)

    dt = time.time() - t0
    b64 = base64.b64encode(png).decode("ascii")
    logger.info("Render done: %dx%d, %d KB PNG, %.1fs (%s)",
                width, height, len(png) // 1024, dt, runner.model_id)

    await _send(ws, "flux_response", {
        "requestId": request_id,
        "state": "done",
        "base64": b64,
        "mimeType": "image/png",
        "width": width,
        "height": height,
        "steps": steps,
        "guidance": guidance,
        "seed": seed,
        "model": runner.model_id,
        "renderSeconds": round(dt, 2),
        "sizeBytes": len(png),
    })


# ── Main loop ───────────────────────────────────────────────


async def run_loop(runner: FluxRunner) -> None:
    use_tls = RVS_TLS
    retry_s = 2
    tls_fallback_tried = False

    while True:
        scheme = "wss" if use_tls else "ws"
        url = f"{scheme}://{RVS_HOST}:{RVS_PORT}/ws?token={RVS_TOKEN}"
        masked = url.replace(RVS_TOKEN, "***") if RVS_TOKEN else url

        try:
            logger.info("Connecting to RVS: %s", masked)
            # max_size 100 MB so a 4 MP PNG (~5-10 MB → ~13 MB base64)
            # fits comfortably. Consistent with the RVS limit (100 MB).
            async with websockets.connect(url, ping_interval=20, ping_timeout=10,
                                          max_size=100 * 1024 * 1024) as ws:
                logger.info("RVS connected")
                retry_s = 2
                tls_fallback_tried = False

                async def _load_with_status():
                    """NO eager load on connect — we ask for the Diagnostic
                    config first. Which model actually gets loaded is decided
                    either by the config broadcast (arriving right after) or
                    by the first flux_request. Until then there is no
                    service_status; the banner only appears once we really
                    load something."""
                    try:
                        if runner.pipe is not None:
                            # The pipeline only survives the container lifetime;
                            # so this only fires if a model is already active (reconnect).
                            await _broadcast_status(ws, "ready",
                                                    model=runner.model_id,
                                                    loadSeconds=runner.last_load_seconds)
                        logger.info("Initial: sending config_request to aria-bridge "
                                    "(no eager load, waiting for Diagnostic choice)")
                        await _send(ws, "config_request", {"service": "flux"})
                    except Exception as e:
                        logger.exception("Initial setup crashed: %s", e)
                        try:
                            await _broadcast_status(ws, "error", error=str(e)[:200])
                        except Exception:
                            pass
                asyncio.create_task(_load_with_status())

                worker = asyncio.create_task(_flux_worker(ws, runner))

                async def _apply_default_change(new_tag: str):
                    """Switches the default. If a model other than the
                    currently active one is requested, it is loaded eagerly
                    — the next render then has no swap delay."""
                    new_model_id = _tag_to_model_id(new_tag)
                    runner.default_model_id = new_model_id
                    if runner.model_id == new_model_id:
                        logger.info("[config] Default model stays: %s", new_model_id)
                        return
                    will_download = not _is_model_cached(new_model_id)
                    logger.info("[config] Default model switches: %s → %s (download=%s)",
                                runner.model_id or "(none)", new_model_id, will_download)
                    try:
                        await _broadcast_status(ws, "loading", model=new_model_id,
                                                downloading=will_download)
                        await runner.ensure_loaded(new_model_id)
                        await _broadcast_status(ws, "ready",
                                                model=runner.model_id,
                                                loadSeconds=runner.last_load_seconds,
                                                freshlyDownloaded=runner.last_load_was_download)
                    except Exception as e:
                        logger.exception("Model swap failed")
                        try:
                            await _broadcast_status(ws, "error", error=str(e)[:200])
                        except Exception:
                            pass

                try:
                    async for raw in ws:
                        try:
                            msg = json.loads(raw)
                        except Exception:
                            continue
                        mtype = msg.get("type", "")
                        payload = msg.get("payload", {}) or {}

                        if mtype == "flux_request":
                            await _flux_queue.put(payload)
                        elif mtype == "config":
                            # Diagnostic broadcast (or aria-bridge after a reconnect).
                            # The HuggingFace token MUST be set before the model swap,
                            # because FluxPipeline.from_pretrained reads the token
                            # from the env. Ordering within the same tick guarantees that.
                            if "huggingfaceToken" in payload:
                                tok = (payload.get("huggingfaceToken") or "").strip()
                                if tok:
                                    os.environ["HF_TOKEN"] = tok
                                    os.environ["HUGGING_FACE_HUB_TOKEN"] = tok
                                    logger.info("[config] HF token set (len=%d)", len(tok))
                                else:
                                    os.environ.pop("HF_TOKEN", None)
                                    os.environ.pop("HUGGING_FACE_HUB_TOKEN", None)
                                    logger.info("[config] HF token removed (empty value)")
                            tag = (payload.get("fluxDefaultModel") or "").strip()
                            if tag:
                                asyncio.create_task(_apply_default_change(tag))
                finally:
                    worker.cancel()
                    try:
                        await worker
                    except asyncio.CancelledError:
                        pass
        except Exception as e:
            logger.warning("Connection lost: %s", e)
            if use_tls and RVS_TLS_FALLBACK and not tls_fallback_tried:
                logger.info("TLS failed — falling back to ws://")
                use_tls = False
                tls_fallback_tried = True
                continue
        await asyncio.sleep(min(retry_s, 30))
        retry_s = min(retry_s * 2, 30)


async def main() -> None:
    if not RVS_HOST:
        logger.error("RVS_HOST not set — aborting")
        sys.exit(1)
    runner = FluxRunner()
    await run_loop(runner)


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        sys.exit(0)
@@ -0,0 +1,57 @@
# ════════════════════════════════════════════════
# ARIA FLUX bridge — text-to-image (GPU)
# Separate stack, because FLUX can run on a
# different machine than f5tts/whisper (e.g. a 4090
# separate from the gaming PC). Connects itself to
# the RVS via WebSocket and listens for flux_request.
# ════════════════════════════════════════════════
#
# Prerequisites:
#   - NVIDIA GPU with >= 12 GB VRAM (a 3060 is enough
#     with enable_model_cpu_offload). Below 12 GB:
#     set FLUX_OFFLOAD=sequential, otherwise OOM.
#   - Docker with the NVIDIA Container Toolkit
#   - HuggingFace token in .env (FLUX.1-dev is gated)
#   - .env with RVS connection data (same as xtts!)
#
# Start: docker compose up -d
# ════════════════════════════════════════════════

services:

  # ─── FLUX image generation (GPU) ─────────
  # Receives flux_request via RVS, renders a PNG with FLUX (12B params)
  # and broadcasts flux_response with a base64 PNG back. aria-bridge saves
  # the file to /shared/uploads/ and ARIA references it via the [FILE:] marker.
  #
  # Model choice + HuggingFace token are configured in ARIA Diagnostic
  # ("FLUX Bildgenerierung") and pushed via RVS — nothing needed here.
  flux-bridge:
    build: .
    container_name: aria-flux-bridge
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    environment:
      - RVS_HOST=${RVS_HOST}
      - RVS_PORT=${RVS_PORT:-443}
      - RVS_TLS=${RVS_TLS:-true}
      - RVS_TLS_FALLBACK=${RVS_TLS_FALLBACK:-true}
      - RVS_TOKEN=${RVS_TOKEN}
      # Hardware bootstrap (Diagnostic settings override everything else
      # at runtime — these envs are only edge-case fallbacks).
      - FLUX_DEVICE=${FLUX_DEVICE:-cuda}
      - FLUX_DTYPE=${FLUX_DTYPE:-bfloat16}
      - FLUX_OFFLOAD=${FLUX_OFFLOAD:-model}
      - FLUX_MAX_STEPS=${FLUX_MAX_STEPS:-50}
      - FLUX_MAX_DIM=${FLUX_MAX_DIM:-1536}
    volumes:
      - ./hf-cache:/root/.cache/huggingface  # Bind mount. FLUX.1-dev ~24 GB on disk!
                                             # If flux runs on the same machine as
                                             # xtts: symlink ../xtts/hf-cache to
                                             # share the cache.
    restart: unless-stopped
@@ -0,0 +1,9 @@
diffusers>=0.30.0
transformers>=4.43.0
accelerate>=0.33.0
sentencepiece>=0.2.0
protobuf>=4.25.0
pillow>=10.0.0
huggingface_hub>=0.24.0
websockets>=12.0
numpy>=1.24
@@ -341,10 +341,45 @@ Skills mit Tool-Use.
|
||||
- [x] Info-Buttons mit Modal-Erklaerungen im Gehirn-Tab
|
||||
- [x] Token/Call-Metrics + Subscription-Quota-Tracking: pro Claude-Call ein Log-Eintrag mit Token-Schaetzung (chars/4). Gehirn-Tab zeigt 1h/5h/24h/30d-Aggregat + Progress-Bar gegen Plan-Limit (Pro=45/5h, Max 5x=225/5h, Max 20x=900/5h, Custom). Warn-Schwelle 80%, kritisch 90%.
|
||||
|
||||
### Chat-Stabilitaet: Such-Scroll, Stuck-Watchdog, Delivery-Handshake
|
||||
|
||||
- [x] **Such-Scroll springt nicht mehr permanent**: `onScrollToIndexFailed` hatte 3 cascading `setTimeout`s (120/320/600 ms) — jeder failed Retry triggerte den Handler wieder → 3, 9, 27 Scrolls in der Pipeline. Plus `invertedMessages` war in den useEffect-Deps: jede neue ARIA-Nachricht re-triggerte den Such-Scroll. Fix: nur EIN Retry nach 300 ms, in einer Ref-getrackten Timer-Variable; bei neuem Such-Hit wird der pending Retry gecancelt. `invertedMessages`-Snapshot via Ref statt Dep
|
||||
- [x] **Jump-to-Bottom-Button** rechts unten in der Chat-Liste — taucht ab ~250 px Scroll-Weg auf, scrollt zur neuesten Nachricht (bei inverted FlatList `scrollToOffset(0)`)
|
||||
- [x] **AsyncStorage-Init-Race**: zwischen Mount und „Verlauf aus AsyncStorage geladen" konnte eine User-Nachricht oder ein WS-Event ankommen — `setMessages(parsed)` ueberschrieb's mit dem alten Stand und die frische Nachricht war spurlos weg. Fix: Merge per `id` (frischere `prev`-Eintraege schlagen Gespeichertes), sortiert nach `timestamp`. `messageIdCounter` wird nur noch erhoeht, nie zurueckgesetzt
|
||||
- [x] **Stuck-thinking watchdog**: "ARIA is thinking..." occasionally got stuck (brain crash, WS disconnect without an idle event, cancel race). Fix: every `agent_activity != idle` arms a 180 s timer; without a new sign of life it auto-idles + shows a "⚠ did not get a connection back" bubble. The watchdog is cleared on an ARIA reply, on cancel/barge-in, and on screen unmount
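The watchdog pattern, sketched with an injectable scheduler so it can be exercised without real timers (assumed shape, not the app's implementation):

```javascript
// Arm a stuck-detector on every non-idle activity event; any new event
// re-arms it, idle or a reply clears it. The scheduler is injectable so
// tests can drive the timer by hand.
function createWatchdog(timeoutMs, onStuck, schedule = setTimeout, cancel = clearTimeout) {
  let handle = null;
  return {
    arm() {
      if (handle !== null) cancel(handle);
      handle = schedule(() => { handle = null; onStuck(); }, timeoutMs);
    },
    clear() {
      if (handle !== null) cancel(handle);
      handle = null;
    },
  };
}
```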
- [x] **Delivery handshake (WhatsApp-style)**: per user bubble a local `clientMsgId` + `deliveryStatus` (queued/sending/sent/delivered/failed). The bridge sends back `chat_ack` (✓ sent) and writes the id into `chat_backup.jsonl`. An ARIA reply marks all previous user bubbles as delivered (✓✓). LRU idempotency on the bridge (200 cmids) prevents duplicates on retry. Offline queue: messages sent in airplane mode stay local as ⏱ queued; on reconnect `flushQueuedMessages` fires. ACK timeout 30 s, up to 3 retries, then ⚠ + tap-to-retry
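The bridge keeps its LRU idempotency guard in Python; the idea, sketched here in JavaScript with a hypothetical helper:

```javascript
// Remembers the last `capacity` clientMsgIds; a retried message with a
// known id is treated as a duplicate. A Map preserves insertion order,
// which gives cheap oldest-first eviction.
function createIdempotencyGuard(capacity = 200) {
  const seen = new Map();
  return function isDuplicate(cmid) {
    if (seen.has(cmid)) return true;
    seen.set(cmid, true);
    if (seen.size > capacity) {
      seen.delete(seen.keys().next().value); // evict oldest
    }
    return false;
  };
}
```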
- [x] **Offline bubble vanished after reconnect (race)**: `chat_history_request` and `flushQueuedMessages` run in parallel on reconnect; the history response arrived before the bridge had persisted the bubble → the merge replaced the local state → bubble gone (though it did show in Diagnostic). Fix: the bridge mirrors `clientMsgId` into `chat_backup.jsonl`; the app merge dedupes by cmid and keeps local bubbles whose id the server does not know yet
- [x] **Duplicate bubble after retry**: backup entries from before the cmid patch had no `clientMsgId`; the server bubble (without cmid) and the local failed bubble (with cmid) both survived the merge. Also, the ACK timer occasionally kept running although the bubble was already `delivered` → the retry pushed the status back to `sending`. Fix: the merge additionally falls back to a `text+timestamp` heuristic within a 5-minute window; `dispatchWithAck` checks via ref whether the bubble has become `delivered` in the meantime and cancels; on an ARIA reply all running ACK timers are cleared
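A minimal sketch of the text+timestamp fallback heuristic (illustrative `isSameMessage` helper, not the app's actual code):

```javascript
// Fallback dedup for pre-cmid backup entries: treat two bubbles as the
// same message if both carry a cmid and they match, otherwise if the
// text matches and the timestamps are within a 5-minute window.
const DEDUP_WINDOW_MS = 5 * 60 * 1000;

function isSameMessage(a, b) {
  if (a.clientMsgId && b.clientMsgId) return a.clientMsgId === b.clientMsgId;
  return a.text === b.text && Math.abs(a.timestamp - b.timestamp) <= DEDUP_WINDOW_MS;
}
```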
- [x] **chat_backup ts was container uptime instead of UNIX ms**: `_append_chat_backup` used `asyncio.get_event_loop().time()` (monotonic, back to 0 on every restart) instead of `time.time()`. Result: server bubbles with ts around 394M (6 min of uptime) were sorted next to app-side bubbles carrying Date.now() (1.778e12) in the app history; yesterday's Hello-Kitty conversation landed chronologically after today's map routes, and new messages disappeared below the 500-entry cap. Also, the duplicate-post guard never kicked in because the 5-minute ts window never matched at a diff of 1.7 trillion ms. Fix: the bridge now writes UNIX ms; the migration script `tools/migrate_chat_backup_ts.py` repairs existing jsonl (284/299 ts rewritten on the VM, file order preserved). The app merge additionally dedupes on a plain text match (without the ts diff), which also protects against pre-existing local duplicates
- [x] **User bubble ⏳→failed on slow ARIA replies**: the ACK timer (30 s × 3 retries) ran out even though the brain had long been working; if `chat_ack` did not get through for whatever reason (lost RVS frame etc.), the bubble was set to failed after 90 s although the answer arrived right after. Fix: every `agent_activity != idle` event is an implicit ACK; the brain would not be working if it did not have the message. On the first non-idle event all running ACK timers are cancelled and sending bubbles are set to 'sent'. ACK_TIMEOUT_MS additionally raised from 30 s to 60 s as a backup
- [x] **Gedanken-Stream modal did not scroll**: the inner `TouchableOpacity` (really only there as a close-on-tap-outside guard) consumed all touch events. Fix: replaced with a `View` using `onStartShouldSetResponder={true}` + `onResponderTerminationRequest={false}`; blocks tap propagation without swallowing the children's scrolls

### Brain hang: multi-tool timeouts + RVS block + skill aggressiveness

- [x] **Skill creation more aggressive than intended**: the prompt said "hard rule: ALWAYS create a skill when a pip library is needed". ARIA took that literally and called `skill_create` right away on a simple pdf-extract question → brain blocked for 12 min (venv 2 min + pip install 10 min timeout in `skills.py`). The app showed "ARIA is thinking", the bridge emitted idle after the 5 min timeout, the user got no answer. Fix in `prompts.py`: "golden rule: NEVER create skills unprompted" + only on explicit request ("turn this into a skill"), and even then only when the 4 criteria (recurring / non-trivial / parameterizable / reusable) apply. Takes effect on the VM after `docker compose restart aria-brain`, no rebuild needed
- [x] **Brain timeouts 5 min → 20 min**: three chained 5-minute timeouts (bridge `urlopen`, brain `proxy_client`, proxy `DEFAULT_TIMEOUT` in the claude-max-api-proxy npm module) fired at exactly the same time. Traced live in the logs: one proxy call took 4m51s and was cut off by the bridge to the fraction of a second. Tasks like map reconstruction with 10+ curl calls or PDF processing easily need 8 to 15 min, though. Fix: all three timeouts to 1200 s, plus a third sed patch in the docker-compose proxy service (`DEFAULT_TIMEOUT = 300000 → 1200000`). App stuck-watchdog to 1260 s (21 min, just above)
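To illustrate why three equal timeouts fire simultaneously: each outer layer should give the layer below it headroom. A hypothetical chain checker (not project code; the repo's actual fix sets all three to 1200 s, so the staggered values below are only an example):

```javascript
// Each outer timeout should exceed the inner one by a margin, otherwise
// the layers race and whichever fires first cuts the others off mid-flight.
function validateTimeoutChain(chain, marginMs = 5000) {
  // chain: outermost first, e.g. [bridge, brain, proxy]
  const issues = [];
  for (let i = 0; i + 1 < chain.length; i++) {
    if (chain[i].ms < chain[i + 1].ms + marginMs) {
      issues.push(`${chain[i].name} (${chain[i].ms}ms) too tight over ${chain[i + 1].name}`);
    }
  }
  return issues;
}
```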
- [x] **RVS block during a brain call** (mobil.hacker-net.de:444 drops after 4 min idle): `async for raw_message in ws: await _handle_rvs_message(...)`; the await blocked the recv loop for as long as `send_to_core` ran. The websockets library answered pings in the background, but the RVS server only counts real app frames and otherwise drops the connection. Symptom: app + Diagnostic showed "aborted" although the brain was still working. Fix: `send_to_core` as `asyncio.create_task` instead of `await`; the RVS recv loop stays free, new messages keep being processed, the connection stays alive

### Gedanken-Stream + live tool events

- [x] **Gedanken-Stream in app + Diagnostic**: a chronological log of what ARIA does internally, fed from `agent_activity` events (thinking/tool/assistant/idle). It persists between thinking phases; long pauses show up as a divider with a minutes hint. App: the 💭 icon in the status bar opens a bottom sheet with the chronological list, 🗑 confirm to clear. Diagnostic: the 💭 Gedanken button in the chat-test header opens a centered modal, live-updating as new entries arrive (autoscroll to the end). Persisted in AsyncStorage / localStorage, capped at 500 entries
- [x] **Live tool events from the proxy**: a third proxy patch (`proxy-patches/routes.js`) hooks Claude CLI `assistant` events; on every `tool_use` block (Bash, Read, Edit, Grep, ...) an HTTP POST reports to the bridge. The bridge mirrors that to RVS clients as `agent_activity tool=<name>`. Previously each brain call produced only ONE "💭 thinking" at the start and ONE "✓ done" at the end; now both UIs show **live** how ARIA works its way through the tools. The hook is fire-and-forget (ARIA_TOOL_HOOK_URL env variable, default http://aria-bridge:8090/internal/agent-activity)

### Search jump precision + search order

- [x] **Search jump cold after app start**: scrollToIndex landed far off in long lists (a Cessna hit jumped to the Oberhausen bubble 15 positions away). `info.averageItemLength` from `onScrollToIndexFailed` was based on the first ~10 rendered items; with wildly different bubble heights (voice ~70 px, long ARIA answers 400+ px) that is a terrible estimate. Fix: an `itemHeights` ref map is fed via `onLayout` in `renderMessage`; the pre-scroll sums real measured heights (fallback `AVG_BUBBLE_HEIGHT=150` for items not yet rendered). Plus `initialNumToRender: 30` (default 10) and `windowSize: 41` (default 21) → more items measured at mount
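The measured-heights pre-scroll can be sketched like this (illustrative names; in the app this logic lives in the FlatList handlers):

```javascript
// Pre-scroll offset: sum real measured heights where known, fall back to
// an average for items that have not been laid out yet.
const AVG_BUBBLE_HEIGHT = 150;

function offsetForIndex(index, itemHeights) {
  let offset = 0;
  for (let i = 0; i < index; i++) {
    offset += itemHeights.get(i) ?? AVG_BUBBLE_HEIGHT;
  }
  return offset;
}
```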
- [x] **Search scroll endless loop (regression)**: `onScrollToIndexFailed` retried without bound; each failed retry called the handler again → new timer → fail → loop. Also, `setMessages` in the `agent_activity` handler called `prev.map()` even when there was nothing to change → a new array on every tool event → FlatList layouts invalidated mid scroll sequence. Fix: a hard `MAX_SCROLL_RETRIES = 3` plus a `prev.some()` check before `.map()` so the state stays reference-stable on a no-op
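The reference-stability trick from that fix, as a standalone sketch (hypothetical `markDelivered` updater):

```javascript
// Only allocate a new array when something actually changes; otherwise
// return the previous reference so FlatList layouts stay valid.
function markDelivered(prev, ids) {
  const idSet = new Set(ids);
  const needsChange = prev.some(
    (m) => idSet.has(m.id) && m.deliveryStatus !== "delivered"
  );
  if (!needsChange) return prev;
  return prev.map((m) =>
    idSet.has(m.id) ? { ...m, deliveryStatus: "delivered" } : m
  );
}
```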
- [x] **Search hits in special bubbles**: `searchMatchIds` searched `messages` (all bubbles incl. memory/skill/trigger), but scrolling happens in `invertedMessages`, which filters those out → `findIndex=-1` → no scroll, the stale pre-scroll position stayed visible. Fix: `searchMatchIds` built from `chatVisibleMessages`. Memory contents remain reachable via the 🗂️ inbox
- [x] **Search order: newest first** (WhatsApp/Telegram-style): the user sits visually at the bottom of the chat, so the first hit is usually already in the viewport without a long pre-scroll. "Next" goes back in time. Plus pre-scroll wait raised 80→200 ms so the FlatList gets render time on the first attempt

### Misc app polish

- [x] **About text rendered `\u2014` literally**: JSX text nodes do not interpret JS string escapes; the backslash-u sequence stayed visible. Fix: `{'\u2014'}` as a JS expression block
- [x] **GPS heartbeat for stationary users**: `watchPosition` with `distanceFilter: 30` sends no updates without 30 m of movement. Stefan stationary → no further updates after the initial position → the brain discards the position as stale after `NEAR_MAX_AGE_SEC=300` → `near()` watchers never fire. Fix: in addition to watchPosition, a `setInterval(60s)` heartbeat re-sends the last received position. No extra GPS wakeup, battery-friendly; and the brain state stays fresh even without movement

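The brain-side freshness rule (the brain itself is Python; this is a JavaScript sketch of the same check):

```javascript
// A position older than NEAR_MAX_AGE_SEC is discarded as stale; the 60 s
// heartbeat keeps lastFixMs recent even without any movement.
const NEAR_MAX_AGE_SEC = 300;

function positionIsFresh(lastFixMs, nowMs) {
  return (nowMs - lastFixMs) / 1000 <= NEAR_MAX_AGE_SEC;
}
```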
## Open

### App features
- [ ] Load the chat history more reliably (AsyncStorage race condition)
- [ ] Custom wake-word upload via Diagnostic (own .onnx files without an app rebuild)

### Architecture

@@ -0,0 +1,448 @@
/**
 * ARIA-patched API route handlers
 *
 * Extension of the npm version of claude-max-api-proxy:
 * - On every Claude CLI `assistant` event with a tool_use block (Bash, Read,
 *   Edit, Grep, …) an HTTP POST is fired at the bridge
 *   (ARIA_TOOL_HOOK_URL, default http://aria-bridge:8090/internal/agent-activity).
 *   The bridge mirrors it as RVS `agent_activity` to app + Diagnostic →
 *   the Gedanken-Stream shows live which tool ARIA is currently using.
 * - The full live stream (assistant_text, tool_use with input, tool_result)
 *   goes to ARIA_STREAM_HOOK_URL → bridge → RVS `agent_stream` → the
 *   Diagnostic "ARIA Live" view (a TeamViewer-style mirror of the Claude Code session).
 * - Subprocess tracking + POST /v1/cancel-all for the Not-Aus (hard kill).
 * - Fire-and-forget, fail-open. If the bridge does not answer, the brain
 *   call does NOT abort.
 *
 * Written over the npm version at container start time
 * (see docker-compose.yml, proxy block).
 */
import { v4 as uuidv4 } from "uuid";
import http from "http";
import { ClaudeSubprocess } from "../subprocess/manager.js";
import { openaiToCli } from "../adapter/openai-to-cli.js";
import { cliResultToOpenai, createDoneChunk } from "../adapter/cli-to-openai.js";

const TOOL_HOOK_URL = process.env.ARIA_TOOL_HOOK_URL
  || "http://aria-bridge:8090/internal/agent-activity";
const STREAM_HOOK_URL = process.env.ARIA_STREAM_HOOK_URL
  || "http://aria-bridge:8090/internal/agent-stream";

// Tool output can get very long (git log -p, find /). We truncate hard at
// 4 KB per event; the user still sees the beginning plus a
// "...(N bytes truncated)" hint. The full output stays in the brain and is
// processed normally; this limit is ONLY for the live mirror.
const TOOL_RESULT_MAX_CHARS = 4096;
const TOOL_INPUT_MAX_CHARS = 2048;

/**
 * Generic fire-and-forget POST to the bridge. No awaits, no errors
 * propagated. Used for the tool hook + the stream hook.
 */
function _postJson(url, body) {
  try {
    const u = new URL(url);
    const data = JSON.stringify(body);
    const req = http.request({
      method: "POST",
      hostname: u.hostname,
      port: u.port || 80,
      path: u.pathname,
      headers: { "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) },
      timeout: 2000,
    }, (res) => { res.resume(); });
    req.on("error", () => {});
    req.on("timeout", () => req.destroy());
    req.write(data);
    req.end();
  } catch (_) { /* never rethrow */ }
}

/**
 * Pushes a tool-use event to the bridge (old Gedanken-Stream path).
 */
function _emitToolEvent(toolName) {
  if (!toolName) return;
  _postJson(TOOL_HOOK_URL, { tool: String(toolName) });
}

/**
 * Pushes a stream event to the bridge (new "ARIA Live" path).
 * kind: "start" | "text" | "thinking" | "tool_use" | "tool_result" | "end"
 */
function _emitStreamEvent(requestId, kind, fields) {
  _postJson(STREAM_HOOK_URL, { requestId, kind, ts: Date.now(), ...fields });
}

function _truncate(str, max) {
  if (typeof str !== "string") str = String(str ?? "");
  if (str.length <= max) return { text: str, truncatedBytes: 0 };
  return { text: str.slice(0, max), truncatedBytes: str.length - max };
}

// ── Subprocess tracking for the Not-Aus ──────────────────────
// requestId → ClaudeSubprocess. Entries are removed again on the
// close/error events. /v1/cancel-all iterates and calls .kill() on each.
const _activeSubprocesses = new Map();
function _trackSubprocess(requestId, subprocess) {
  _activeSubprocesses.set(requestId, subprocess);
  const cleanup = () => _activeSubprocesses.delete(requestId);
  subprocess.on("close", cleanup);
  subprocess.on("error", cleanup);
}

/**
 * Hooks the assistant + user events and pushes both to the bridge:
 * - old API: tool names only, to /internal/agent-activity (Gedanken-Stream)
 * - new API: full stream (text/tool_use/tool_result) to /internal/agent-stream
 */
function _attachToolHook(subprocess, requestId) {
  subprocess.on("assistant", (message) => {
    try {
      const blocks = message?.message?.content || [];
      for (const b of blocks) {
        if (!b) continue;
        if (b.type === "tool_use") {
          if (b.name) _emitToolEvent(b.name);
          const inputStr = b.input ? JSON.stringify(b.input) : "";
          const inp = _truncate(inputStr, TOOL_INPUT_MAX_CHARS);
          _emitStreamEvent(requestId, "tool_use", {
            id: b.id || null,
            name: b.name || "",
            input: inp.text,
            inputTruncatedBytes: inp.truncatedBytes,
          });
        } else if (b.type === "text" && b.text) {
          _emitStreamEvent(requestId, "text", { text: b.text });
        } else if (b.type === "thinking" && b.thinking) {
          // If the model emits extended thinking; rare in the Claude Code
          // CLI, but possible. Flagged separately.
          _emitStreamEvent(requestId, "thinking", { text: b.thinking });
        }
      }
    } catch (_) { /* fail-open */ }
  });
  // user events carry the tool_result blocks
  subprocess.on("user", (message) => {
    try {
      const blocks = message?.message?.content || [];
      for (const b of blocks) {
        if (b && b.type === "tool_result") {
          let content = "";
          if (typeof b.content === "string") content = b.content;
          else if (Array.isArray(b.content)) {
            content = b.content.map(c => (c && c.type === "text" && c.text) ? c.text : "").join("");
          }
          const out = _truncate(content, TOOL_RESULT_MAX_CHARS);
          _emitStreamEvent(requestId, "tool_result", {
            id: b.tool_use_id || null,
            content: out.text,
            truncatedBytes: out.truncatedBytes,
            isError: b.is_error === true,
          });
        }
      }
    } catch (_) { /* fail-open */ }
  });
}
/**
 * Handle POST /v1/chat/completions
 *
 * Main endpoint for chat requests, supports both streaming and non-streaming.
 */
export async function handleChatCompletions(req, res) {
  const requestId = uuidv4().replace(/-/g, "").slice(0, 24);
  const body = req.body;
  const stream = body.stream === true;
  try {
    // Validate request
    if (!body.messages || !Array.isArray(body.messages) || body.messages.length === 0) {
      res.status(400).json({
        error: {
          message: "messages is required and must be a non-empty array",
          type: "invalid_request_error",
          code: "invalid_messages",
        },
      });
      return;
    }
    // Convert to CLI input format
    const cliInput = openaiToCli(body);
    const subprocess = new ClaudeSubprocess();
    // ARIA patch: tool-use events + full live stream to the bridge.
    // Plus: track the subprocess for the Not-Aus (hard kill via /v1/cancel-all).
    _attachToolHook(subprocess, requestId);
    _trackSubprocess(requestId, subprocess);
    _emitStreamEvent(requestId, "start", { model: body.model || null });
    subprocess.on("result", () => _emitStreamEvent(requestId, "end", { reason: "result" }));
    subprocess.on("close", (code) => _emitStreamEvent(requestId, "end", { reason: "close", code }));
    subprocess.on("error", (err) => _emitStreamEvent(requestId, "end", { reason: "error", error: String(err?.message || err) }));
    if (stream) {
      await handleStreamingResponse(req, res, subprocess, cliInput, requestId);
    } else {
      await handleNonStreamingResponse(res, subprocess, cliInput, requestId);
    }
  } catch (error) {
    const message = error instanceof Error ? error.message : "Unknown error";
    console.error("[handleChatCompletions] Error:", message);
    if (!res.headersSent) {
      res.status(500).json({
        error: {
          message,
          type: "server_error",
          code: null,
        },
      });
    }
  }
}
/**
 * Handle streaming response (SSE)
 *
 * IMPORTANT: The Express req.on("close") event fires when the request body
 * is fully received, NOT when the client disconnects. For SSE connections,
 * we use res.on("close") to detect actual client disconnection.
 */
async function handleStreamingResponse(req, res, subprocess, cliInput, requestId) {
  // Set SSE headers
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");
  res.setHeader("X-Request-Id", requestId);
  // CRITICAL: Flush headers immediately to establish the SSE connection.
  // Without this, headers are buffered and the client times out waiting.
  res.flushHeaders();
  // Send an initial comment to confirm the connection is alive
  res.write(":ok\n\n");
  return new Promise((resolve, reject) => {
    let isFirst = true;
    let lastModel = "claude-sonnet-4";
    let isComplete = false;
    // Handle actual client disconnect (response stream closed)
    res.on("close", () => {
      if (!isComplete) {
        // Client disconnected before the response completed - kill the subprocess
        subprocess.kill();
      }
      resolve();
    });
    // Handle streaming content deltas
    subprocess.on("content_delta", (event) => {
      const text = event.event.delta?.text || "";
      if (text && !res.writableEnded) {
        const chunk = {
          id: `chatcmpl-${requestId}`,
          object: "chat.completion.chunk",
          created: Math.floor(Date.now() / 1000),
          model: lastModel,
          choices: [{
            index: 0,
            delta: {
              role: isFirst ? "assistant" : undefined,
              content: text,
            },
            finish_reason: null,
          }],
        };
        res.write(`data: ${JSON.stringify(chunk)}\n\n`);
        isFirst = false;
      }
    });
    // Handle the final assistant message (for the model name; null-safe)
    subprocess.on("assistant", (message) => {
      lastModel = message?.message?.model || lastModel;
    });
    subprocess.on("result", (_result) => {
      isComplete = true;
      if (!res.writableEnded) {
        // Send the final done chunk with finish_reason
        const doneChunk = createDoneChunk(requestId, lastModel);
        res.write(`data: ${JSON.stringify(doneChunk)}\n\n`);
        res.write("data: [DONE]\n\n");
        res.end();
      }
      resolve();
    });
    subprocess.on("error", (error) => {
      console.error("[Streaming] Error:", error.message);
      if (!res.writableEnded) {
        res.write(`data: ${JSON.stringify({
          error: { message: error.message, type: "server_error", code: null },
        })}\n\n`);
        res.end();
      }
      resolve();
    });
    subprocess.on("close", (code) => {
      // Subprocess exited - ensure the response is closed
      if (!res.writableEnded) {
        if (code !== 0 && !isComplete) {
          // Abnormal exit without a result - send an error
          res.write(`data: ${JSON.stringify({
            error: { message: `Process exited with code ${code}`, type: "server_error", code: null },
          })}\n\n`);
        }
        res.write("data: [DONE]\n\n");
        res.end();
      }
      resolve();
    });
    // Start the subprocess
    subprocess.start(cliInput.prompt, {
      model: cliInput.model,
      sessionId: cliInput.sessionId,
    }).catch((err) => {
      console.error("[Streaming] Subprocess start error:", err);
      reject(err);
    });
  });
}
/**
 * Handle non-streaming response
 */
async function handleNonStreamingResponse(res, subprocess, cliInput, requestId) {
  return new Promise((resolve) => {
    let finalResult = null;
    subprocess.on("result", (result) => {
      finalResult = result;
    });
    subprocess.on("error", (error) => {
      console.error("[NonStreaming] Error:", error.message);
      res.status(500).json({
        error: {
          message: error.message,
          type: "server_error",
          code: null,
        },
      });
      resolve();
    });
    subprocess.on("close", (code) => {
      if (finalResult) {
        res.json(cliResultToOpenai(finalResult, requestId));
      } else if (!res.headersSent) {
        res.status(500).json({
          error: {
            message: `Claude CLI exited with code ${code} without response`,
            type: "server_error",
            code: null,
          },
        });
      }
      resolve();
    });
    // Start the subprocess
    subprocess
      .start(cliInput.prompt, {
        model: cliInput.model,
        sessionId: cliInput.sessionId,
      })
      .catch((error) => {
        res.status(500).json({
          error: {
            message: error.message,
            type: "server_error",
            code: null,
          },
        });
        resolve();
      });
  });
}
/**
 * Handle GET /v1/models
 *
 * Returns the available models
 */
export function handleModels(_req, res) {
  res.json({
    object: "list",
    data: [
      {
        id: "claude-opus-4",
        object: "model",
        owned_by: "anthropic",
        created: Math.floor(Date.now() / 1000),
      },
      {
        id: "claude-sonnet-4",
        object: "model",
        owned_by: "anthropic",
        created: Math.floor(Date.now() / 1000),
      },
      {
        id: "claude-haiku-4",
        object: "model",
        owned_by: "anthropic",
        created: Math.floor(Date.now() / 1000),
      },
    ],
  });
}
/**
 * Handle GET /health
 *
 * Health check endpoint
 */
export function handleHealth(_req, res) {
  res.json({
    status: "ok",
    provider: "claude-code-cli",
    timestamp: new Date().toISOString(),
  });
}

// ── Not-Aus side channel ───────────────────────────────────
//
// claude-max-api-proxy controls its own route registration; we cannot
// patch into it without sed operations on the npm package. Cleaner:
// a dedicated small HTTP listener just for the Not-Aus, on an internal
// port inside aria-net. The bridge calls it, and all active Claude
// subprocesses are killed. App + Diagnostic see the stream end immediately.
const INTERNAL_PORT = parseInt(process.env.ARIA_PROXY_INTERNAL_PORT || "3457", 10);
const INTERNAL_HOST = "0.0.0.0"; // reachable inside aria-net, not exposed externally

function _cancelAll() {
  const ids = Array.from(_activeSubprocesses.keys());
  let killed = 0;
  for (const [id, subp] of _activeSubprocesses) {
    try {
      subp.kill();
      killed++;
    } catch (e) {
      console.error("[aria-not-aus] kill failed for", id, e?.message);
    }
  }
  _activeSubprocesses.clear();
  return { killed, requestIds: ids };
}

try {
  const internalServer = http.createServer((req, res) => {
    if (req.method === "POST" && req.url === "/cancel-all") {
      const result = _cancelAll();
      console.warn("[aria-not-aus] /cancel-all — killed", result.killed, "subprocess(es)");
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ ok: true, ...result }));
      return;
    }
    if (req.method === "GET" && req.url === "/health") {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ ok: true, active: _activeSubprocesses.size }));
      return;
    }
    res.writeHead(404).end();
  });
  internalServer.on("error", (err) => {
    console.error("[aria-not-aus] internal listener error:", err.message);
  });
  internalServer.listen(INTERNAL_PORT, INTERNAL_HOST, () => {
    console.log("[aria-not-aus] internal listener on", INTERNAL_HOST + ":" + INTERNAL_PORT);
  });
} catch (e) {
  console.error("[aria-not-aus] startup failed:", e?.message);
}
//# sourceMappingURL=routes.js.map
@@ -39,6 +39,8 @@ const ALLOWED_TYPES = new Set([
"stt_request", "stt_response",
|
||||
"service_status",
|
||||
"config_request",
|
||||
"flux_request", "flux_response",
|
||||
"agent_stream",
|
||||
]);
|
||||
|
||||
// Token-Raum: token -> { clients: Set<ws> }
|
||||
@@ -71,10 +73,14 @@ function cleanupRooms() {
|
||||
|
||||
// ── WebSocket-Server starten ────────────────────────────────────────
|
||||
|
||||
// maxPayload 50MB: TTS-Streaming + Voice-Upload (WAV als base64) +
|
||||
// maxPayload 100MB: TTS-Streaming + Voice-Upload (WAV als base64) +
|
||||
// audio_pcm Chunks koennen die ws-Library Default 1MB ueberschreiten.
|
||||
// Default-Limit war der Killer fuer die voice_upload Pipeline.
|
||||
const wss = new WebSocketServer({ port: PORT, maxPayload: 50 * 1024 * 1024 });
|
||||
// Plus: file_request/file_response fuer Re-Download von Anhaengen.
|
||||
// 40 MB MP4 → ~53 MB base64 → vorher mit 50 MB Limit zerschossen
|
||||
// (Code 1009 message too big, Bridge crashed im cleanup). 100 MB
|
||||
// deckt bis ~70 MB binaer ab; groessere Files werden Bridge-seitig
|
||||
// abgewiesen (siehe file_request-Handler) bevor die WS abreisst.
|
||||
const wss = new WebSocketServer({ port: PORT, maxPayload: 100 * 1024 * 1024 });
|
||||
|
||||
wss.on("listening", () => {
|
||||
log(`RVS läuft auf Port ${PORT} | Max Sessions: ${MAX_SESSIONS}`);
|
||||
|
||||
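The 40 MB → ~53 MB figure in the comment follows from base64 emitting 4 ASCII characters per 3 input bytes; a quick check:

```javascript
// base64 encodes every 3 input bytes as 4 output characters (with
// padding), so payloads grow by a factor of ~4/3.
function base64Length(nBytes) {
  return 4 * Math.ceil(nBytes / 3);
}
```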
@@ -0,0 +1,93 @@
#!/usr/bin/env python3
"""
Migration: convert chat_backup.jsonl ts values from container-uptime ms to UNIX ms.

Background: before the fix, _append_chat_backup() used `asyncio.get_event_loop().time()`,
which is container-monotonic (back to 0 on every restart). That mixed with the
app-side `Date.now()` (real UNIX ms) → wrong sort order in the app history.

Strategy: ts < 1e12 (not UNIX ms) are rewritten. Anchor = file mtime, decaying
60 seconds per entry going backwards. File order is preserved (append-only was
chronologically correct; only the ts values were nonsense).

Existing UNIX-ms entries (file_deleted markers, new entries since the bridge fix)
are left untouched.

Idempotent: running it twice is safe; on the second run all ts are already
UNIX ms and are not touched.

Backup: writes chat_backup.jsonl.bak first, then replaces atomically.
"""

import json
import os
import shutil
import sys
import time
from pathlib import Path

UNIX_MS_THRESHOLD = 10 ** 12  # < 1e12 ms = before September 2001 = unrealistic as UNIX ms
GAP_SECONDS = 60  # one entry per minute going backwards from the file mtime


def migrate(path: Path) -> None:
    if not path.exists():
        print(f"Datei nicht da: {path}")
        sys.exit(1)

    raw = path.read_text(encoding="utf-8").splitlines()
    entries = []
    for raw_line in raw:
        s = raw_line.strip()
        if not s:
            continue
        try:
            entries.append(json.loads(s))
        except Exception as e:
            print(f"  ueberspringe kaputte Zeile: {e}")
            continue

    if not entries:
        print("Datei leer")
        return

    file_mtime_ms = int(os.path.getmtime(path) * 1000)
    n = len(entries)
    fixed = 0

    # Assign a replacement ts (file mtime minus gap * entries-back) only to
    # entries whose ts < UNIX_MS_THRESHOLD. file_deleted markers etc. with a
    # real UNIX ms stay untouched.
    for i, entry in enumerate(entries):
        ts = entry.get("ts", 0)
        if not isinstance(ts, (int, float)) or ts < UNIX_MS_THRESHOLD:
            # Synthetic ts: oldest = mtime - (n-1)*gap, newest = mtime
            new_ts = file_mtime_ms - (n - 1 - i) * GAP_SECONDS * 1000
            entry["ts"] = new_ts
            fixed += 1

    if fixed == 0:
        print(f"Nichts zu migrieren ({n} Eintraege, alle ts schon UNIX-ms)")
        return

    # Backup
    bak = path.with_suffix(path.suffix + ".bak")
    shutil.copy2(path, bak)
    print(f"Backup: {bak}")

    # Atomic rewrite
    tmp = path.with_suffix(path.suffix + ".tmp")
    with open(tmp, "w", encoding="utf-8") as f:
        for entry in entries:
            f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    tmp.replace(path)

    print(f"Migration fertig: {fixed}/{n} ts umgeschrieben")
    print(f"  aelteste neu : {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(entries[0]['ts'] / 1000))}")
    print(f"  neueste neu : {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(entries[-1]['ts'] / 1000))}")


if __name__ == "__main__":
    default = Path("/var/lib/docker/volumes/aria-agent_aria-shared/_data/config/chat_backup.jsonl")
    path = Path(sys.argv[1]) if len(sys.argv) > 1 else default
    migrate(path)
@@ -2,6 +2,9 @@
# ARIA Gamebox stack: GPU F5-TTS + Whisper STT
# Runs on the gaming PC (RTX 3060)
# Connects to the RVS for TTS/STT requests
#
# FLUX image generation lives in the /flux directory in the repo root:
# its own compose stack, which can also run on a different machine.
# ════════════════════════════════════════════════
#
# Prerequisites: