Compare commits

...

92 Commits

Author SHA1 Message Date
Stefan Hacker dd40c55f7d fix(cloud-files): Pin triggers hydration, icon refresh via SHChangeNotify
CfSetPinState only changes the pin flag - without an explicit call,
nothing happens to the on-disk content and the Explorer icon stays
unchanged. That is why "Immer offline verfuegbar" appeared to do
nothing.

- On pin: CfHydratePlaceholder triggers FETCH_DATA and downloads
  the file completely
- On unpin: CfDehydratePlaceholder (already in place)
- After every state change, SHChangeNotify(SHCNE_UPDATEITEM) so the
  overlay icon is redrawn immediately, without the user having to
  press F5
- The log additionally records hydrate_err for debugging

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 23:09:28 +02:00
Stefan Hacker 78615d8897 fix(cloud-files): Delete existing regular files before creating placeholders
If the client was previously active and then disabled (or killed
hard), CfUnregisterSyncRoot converts all placeholders into regular
files. On re-enabling, populate_placeholders tried to create a new
placeholder, which failed with ERROR_FILE_EXISTS - and the error
was only logged via eprintln and swallowed.

Result: the file stayed a perfectly ordinary file (no placeholder,
no cloud icon). Later, CfDehydratePlaceholder fails with HRESULT
0x80070178 "The file is not a cloud file", and "Speicher freigeben"
does not work.

populate_placeholders now checks before each create whether the
file already exists and is NOT a placeholder. If so: delete it,
then recreate it as a placeholder. Both successes and errors go
into .minicloud-cloudfiles.log so the outcome can be verified.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 22:56:58 +02:00
Stefan Hacker 3c340f9653 fix(cloud-files): Make pin/unpin actually take effect + CLI logging
set_pin_state had three problems:
- FILE_READ_ATTRIBUTES: CfSetPinState needs WRITE_ATTRIBUTES
- No OPEN_REPARSE_POINT: opening the file may itself have triggered
  hydration before we could unpin
- No CfDehydratePlaceholder: switching the pin state to UNPINNED
  only changes the flag; the disk space is not released

Now:
- WRITE_ATTRIBUTES + OPEN_REPARSE_POINT when opening the handle
- On unpin, additionally CfDehydratePlaceholder, so that "Speicher
  freigeben" really does free up space
- Results + errors are written to <parent>\.minicloud-cloudfiles.log
  so we can see what happens

handle_cli_shortcuts now logs to %LOCALAPPDATA%\MiniCloud Sync\
cli.log, because Explorer discards the stdout/stderr of a process
it launches. Without that log, the actions started from the context
menu cannot be debugged.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 17:29:25 +02:00
Stefan Hacker 85dae4377f fix(cloud-files): Repair the AppliesTo syntax for the context menu
The old AppliesTo value had:
- doubled backslashes (Windows AQS expects single ones)
- a stray trailing backslash inside the quotes, which broke the
  query

New:
- Clean AQS syntax: System.ItemPathDisplay:~< "C:\\..." with
  single backslashes (winreg writes the REG_SZ 1:1)
- Registered under AllFilesystemObjects instead of *, so that
  folders also get the menu entry
- Default value set (in addition to MUIVerb), because some Windows
  versions use the default value for the display name
- uninstall removes both registry locations (old and new)

Note for Windows 11: classic shell verbs only appear under "Show
more options" (Shift+F10) by default. Getting into the main menu
would require IExplorerCommand via a COM extension.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 16:54:46 +02:00
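The AppliesTo query described above boils down to a single REG_SZ string. A minimal Python sketch of how such a value could be assembled (the shell_integration code itself is Rust; the function name and path here are hypothetical, `~<` is the AQS "starts with" operator):

```python
def applies_to_query(mount_path: str) -> str:
    """Build a Windows AQS AppliesTo value that restricts a shell verb
    to items below the given mount path. Backslashes are written as-is
    (single), since the registry stores the REG_SZ value 1:1 - no extra
    escaping layer on top."""
    # Strip a trailing backslash so no stray '\' ends up inside the quotes.
    mount_path = mount_path.rstrip("\\")
    return f'System.ItemPathDisplay:~< "{mount_path}"'

q = applies_to_query("C:\\Users\\stefan\\MiniCloud\\")
# Single backslashes, no trailing one inside the quotes:
assert q == 'System.ItemPathDisplay:~< "C:\\Users\\stefan\\MiniCloud"'
```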
Stefan Hacker 88c9617ae7 feat(client): Hide sync paths and the local file browser while Cloud Files is active
When the Windows client runs with Cloud Files (OneDrive-style), the
classic sync-paths section including the local .cloud file browser
no longer makes sense - Cloud Files creates placeholders directly
in Explorer and offers the same on-demand behavior with native
shell integration.

Server files remain visible (useful as a remote browser independent
of the mount).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 11:47:43 +02:00
Stefan Hacker 78cfbf1ad3 feat(cloud-files): Shared folders + right-click menu
Backend:
- /api/sync/tree now returns {tree, shared} - shared contains all
  files shared WITH the user (FilePermission), top-level shares
  only, with the owner's name in the display name
- updated_at is additionally returned as modified_at in the
  response for client compatibility

Client:
- fetch_remote_entries merges the shared subtree into the mount
  point under the virtual folder "Geteilt mit mir" (synthetic
  ID -1)
- modified_at falls back to updated_at when missing

Context menu:
- New HKCU registry entries for "Immer offline verfuegbar" and
  "Speicher freigeben"; AppliesTo filters on the mount path, so the
  verbs only appear for files below the sync folder
- Invokes the app's own .exe with --pin / --unpin <file>
- handle_cli_shortcuts performs the action and exits immediately,
  without touching the UI/tray/single-instance logic

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 11:15:04 +02:00
Stefan Hacker 4026defe79 feat(cloud-files): Explorer sidebar integration for Windows
Registers the sync folder as a shell namespace extension under
HKEY_CURRENT_USER (no admin rights needed), so it appears with its
own icon in the left pane of File Explorer - just like OneDrive or
Dropbox.

- New module cloud_files::shell_integration with install/uninstall
- Registry entries under HKCU\Software\Classes\CLSID\{GUID} and
  HKCU\...\Explorer\Desktop\NameSpace\{GUID}
- Uses the running .exe as the icon source (fallback: imageres.dll)
- SHChangeNotify(SHCNE_ASSOCCHANGED) so Explorer updates right away
- install/uninstall are called from register_sync_root/unregister
- winreg crate for clean registry access

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-22 15:47:05 +02:00
Stefan Hacker 2937082ba2 fix(cloud-files): Clean re-register + FETCH_PLACEHOLDERS stub + more logging
- CfUnregisterSyncRoot BEFORE CfRegisterSyncRoot, so old policies
  (e.g. PARTIAL) cannot stick around via the UPDATE flag
- Registered a FETCH_PLACEHOLDERS stub that answers with an empty
  response and the DISABLE_ON_DEMAND_POPULATION flag. Safety net in
  case Windows asks anyway despite the FULL policy
- log_msg at critical points (register, connect, populate), so that
  on the next timeout we can see where it hangs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:29:11 +02:00
Stefan Hacker e55ce106d4 fix(cloud-files): Population policy FULL instead of PARTIAL
With PARTIAL, Windows expects a FETCH_PLACEHOLDERS callback for
directory enumeration. We had not registered one, so Explorer ran
into a timeout when opening the mount folder.

FULL means: we pre-create all placeholders ourselves (which
populate_placeholders already does) and Windows does not ask.
Hydration stays PARTIAL - file content is still loaded on access
via FETCH_DATA.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:42:44 +02:00
Stefan Hacker 601e0741b1 fix(cloud-files): Don't upload placeholders as local changes + logging
Cause of the "fully synced" problem: the notify watcher fired on
the cfapi placeholders we created ourselves when enabling the
feature. sync_loop then uploaded them as local changes, which
implicitly triggered hydration. Result: no on-demand placeholders,
but a full sync.

- is_cfapi_placeholder() checks FILE_ATTRIBUTE_OFFLINE /
  RECALL_ON_DATA_ACCESS / RECALL_ON_OPEN - such files are skipped
  on upload
- The log file now lives NEXT TO the mount (not inside it), so it
  is not itself treated as a cloud file
- FETCH_DATA now also logs success, so you can see that the
  callback fires at all

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:42:00 +02:00
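The attribute test behind is_cfapi_placeholder() can be modeled in a few lines. The actual implementation is Rust against the Win32 API; this Python sketch uses the documented winnt.h attribute values to show the same check:

```python
# Win32 file-attribute bits (documented values from winnt.h)
FILE_ATTRIBUTE_OFFLINE = 0x00001000
FILE_ATTRIBUTE_RECALL_ON_OPEN = 0x00040000
FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS = 0x00400000

PLACEHOLDER_MASK = (FILE_ATTRIBUTE_OFFLINE
                    | FILE_ATTRIBUTE_RECALL_ON_OPEN
                    | FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS)

def is_cfapi_placeholder(attributes: int) -> bool:
    """A file whose attributes carry any of the offline/recall bits is a
    cloud placeholder and must be skipped by the upload watcher."""
    return bool(attributes & PLACEHOLDER_MASK)

FILE_ATTRIBUTE_ARCHIVE = 0x20  # a plain local file
assert not is_cfapi_placeholder(FILE_ATTRIBUTE_ARCHIVE)
assert is_cfapi_placeholder(FILE_ATTRIBUTE_ARCHIVE
                            | FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS)
```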
Stefan Hacker be121190b3 feat(cloud-files): Persist the mount path + force cleanup for dead sync roots
- cloud_files_mount in AppConfig -> survives restarts
- Cloud Files is automatically re-enabled on auto-login
- New commands cloud_files_get_mount and cloud_files_force_cleanup
- The UI shows an "Aufraeumen" button when a mount is set but not
  active, so the user can release/delete a folder left hanging as a
  dead sync root after a hard client shutdown

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:32:02 +02:00
Stefan Hacker 6274567219 fix(cloud-files): Fix the timeout causes in the FETCH_DATA callback
- The HTTP client gets a 60s timeout (instead of none at all)
- On send/network errors, CfExecute is always completed with a
  failure status, so Explorer does not run into the OS timeout
- If the server does not support Range (200 instead of 206), the
  requested range is cut out of the full body and the actual length
  is passed to CfExecute
- Errors are written to <mount>\.minicloud-cloudfiles.log, so the
  problem is visible at all when a timeout occurs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:24:51 +02:00
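The 200-instead-of-206 fallback is plain slicing. A Python sketch of the idea (the real callback is Rust and reports the resulting length to CfExecute; the helper name is made up):

```python
def slice_range(full_body: bytes, offset: int, length: int) -> bytes:
    """Fallback when the server ignores the Range header and answers
    200 with the whole file: cut the requested window out of the full
    body. The result may be shorter than `length` near end-of-file;
    the actual length is what gets reported back."""
    return full_body[offset:offset + length]

body = bytes(range(10))          # pretend this is the full 10-byte file
assert slice_range(body, 4, 3) == bytes([4, 5, 6])
assert len(slice_range(body, 8, 5)) == 2  # clamped at end-of-file
```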
Stefan Hacker 204dbb6ab5 fix(client): Cloud Files section always visible, hint on unsupported platforms
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:06:54 +02:00
Stefan Hacker d9a4ee6a0b feat(client/windows): Bring the cfapi sync to life (loop + watcher + UI)
Now actually functional, no longer just a dummy:

- Register fallback: first CF_REGISTER_FLAG_NONE; on "already
  registered", automatically retry with UPDATE. This works both on
  first activation and on client restart.
- A background loop (cloud_files::sync_loop) polls
  /api/sync/changes every 30s, creates new placeholders and
  replaces changed ones.
- A dedicated callback watcher (cloud_files::watcher::CallbackWatcher)
  listens on the mount folder and sends local changes
  (create/modify) to the loop, which uploads them via
  POST /api/files/upload.
- Helper create_placeholder_at() exported from the Windows module,
  so the loop can create new server files as placeholders.
- AppState gains cloud_files_loop + cloud_files_watcher fields; on
  disable, the loop is stopped cleanly and the watcher is dropped.

Frontend (App.vue):
- New section "Cloud-Files (OneDrive-Style)", visible only when the
  platform supports it (cloud_files_supported).
- Folder picker + enable/disable button.
- Error messages + sync log entries.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 08:46:52 +02:00
Stefan Hacker 8f70b047d8 fix(client/windows): CfConnectSyncRoot returns the key as a return value
In windows-rs 0.58, CfConnectSyncRoot takes only 4 arguments and
returns the CF_CONNECTION_KEY directly; there is no out parameter
anymore.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 16:37:16 +02:00
Stefan Hacker f9bf53803f fix(client/windows): Port the cfapi code to windows-rs 0.58
- Enabled the Win32_System_CorrelationVector feature (it gates
  CF_CALLBACK_INFO / CfExecute / CfConnectSyncRoot /
  CfCreatePlaceholders / CfSetPinState / CF_OPERATION_INFO /
  CF_CALLBACK_REGISTRATION)
- Enabled reqwest "blocking" (used in the cfapi callback thread)
- Cf* functions now return Result<(), Error> instead of HRESULT;
  all call sites switched to ? / .map_err
- CF_SYNC_POLICIES.Hydration/Population are wrapper structs; set
  the .Primary field instead of assigning the enum directly
- Removed LARGE_INTEGER (the fields are plain i64 in 0.58)
- Write FILETIME ticks directly as i64 (BasicInfo.*Time)
- Use FetchData.RequiredFileOffset/Length directly as i64
- CfCreatePlaceholders takes a slice + Option<*mut u32>
- CfSetPinState takes Option<*mut OVERLAPPED>
- Tauri command: release the MutexGuard before .await (Send
  constraint)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 16:29:18 +02:00
Stefan Hacker de1039fc7d feat(client): Windows Cloud Files API as a file provider (OneDrive-style)
New mode alongside the existing full sync: files appear in Explorer
as placeholders with a cloud icon and are only streamed from the
Mini-Cloud server on access.

Windows (MVP):
- CfRegisterSyncRoot + CfConnectSyncRoot
- CfCreatePlaceholders for every file from /api/sync/tree
- FETCH_DATA callback with range-based HTTPS download + CfExecute
- CfSetPinState for a manual "always keep offline"

Linux (skeleton):
- FUSE provider behind the linux_fuse feature flag (libfuse3-dev)
- Stub functions - implementation to follow

macOS:
- Placeholder only; requires an Apple signature - later

Tauri commands: cloud_files_supported/enable/disable/pin/unpin.
Cargo.toml: target-specific windows-rs dependency.
Docs: clients/desktop/CLOUD_FILES.md

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 16:19:22 +02:00
Stefan Hacker 2610e3b183 ui(files): Upload arrow in front of the folder icon in the "Ordner" button
Makes it obvious at first glance that the folder button also
triggers an upload (and is not just a folder action).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 16:00:36 +02:00
Stefan Hacker 9f6132a400 feat: Selection dropdowns show "(geteilt von <Name>)" for shares
When your own calendar/address book/task list and a shared one have
the same name, they are now distinguishable in the create dialogs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:53:46 +02:00
Stefan Hacker ed944339c4 feat: List/calendar/address book names renamable via the three-dot menu
A pencil icon next to the name opens an inline editor (input field
+ check/X). Enter saves, Esc cancels. Visible to the owner only.
The backend PUT endpoints already exist.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:52:12 +02:00
Stefan Hacker 2ef186e262 feat: List/calendar/address book selectable on creation (write access only)
- ContactsView: address book selection in the contact dialog
  (hidden when there is only one writable book). New-contact button
  disabled when there is none.
- TasksView: the same for task lists.
- CalendarView: writableCalendars (own + write shares) replaces
  ownCalendars in the event dialog and import selection. The
  selection field only appears from 2 entries on.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:44:44 +02:00
Stefan Hacker 4d67819cac feat: First/last name; shared lists show the owner
Backend:
- User.first_name / User.last_name (nullable; auto-migrate adds
  them), full_name/display_name as properties + in to_dict
- Added the TaskList.owner relationship (it was missing, so shared
  lists were not resolved correctly for the recipient)
- /auth/me GET + PUT (edit profile: first name, last name, email)
- /users/search now also matches on first/last name and returns
  full_name/display_name
- list_tasklists/list_calendars/list_addressbooks return
  owner_full_name and owner_display_name

Frontend:
- Sidebars for contacts/calendar/tasks: "(geteilt von <Voller Name>)"
  with fallback to the username
- The user search popup shows the full name next to the username
- SettingsView: edit first name/last name/email

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:34:22 +02:00
Stefan Hacker e4dd555bd1 feat(tasks): Change the permission of existing shares afterwards
A pencil icon next to a share opens an inline editor with a select
for "Lesen" / "Lesen+Schreiben" (analogous to contacts/calendar).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:26:59 +02:00
Stefan Hacker a21bf6de1b fix(docker): Remove the tzdata install - already in python:3.11-slim
Avoids unnecessary bloat in the build (31 packages / 192 MB would
otherwise be pulled in).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:22:42 +02:00
Stefan Hacker 3eb038abd8 feat(tasks): User search when sharing (instead of free text)
Analogous to contacts/calendar: from 2 characters on, suggestions
are shown via /users/search.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:21:14 +02:00
Stefan Hacker 9bb22eb17b feat: Admin view of the system time + TZ list in README/.env.example
- /api/settings additionally returns timezone, timezone_abbr,
  server_time, ntp_server (all read-only, from config/ENV).
- AdminView shows a new "System-Zeit" section with the time zone,
  the current server time and the NTP server, plus a note that it
  is configured in the .env.
- .env.example: list of common TZ values with a link to the full
  IANA list.
- README.md: new "Zeitzone & NTP" section with a table of values.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:19:40 +02:00
Stefan Hacker dca064427e feat(config): TZ + NTP_SERVER in .env with sensible defaults
- .env / .env.example: TZ=Europe/Berlin and
  NTP_SERVER=ptbtime1.ptb.de (the official German time reference,
  highly available)
- app/__init__.py sets the process-wide time zone early via
  os.environ + tzset
- A lightweight SNTP client (pure sockets, no deps) checks the
  clock offset at startup in a background thread and warns when the
  deviation exceeds 5s
- The Dockerfile installs tzdata and sets ENV TZ=Europe/Berlin as a
  fallback

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:15:57 +02:00
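The offline part of such an SNTP check - decoding the server's transmit timestamp from the 48-byte response (field layout per RFC 4330: seconds + fraction at bytes 40-47, epoch 1900) - could look roughly like this; the function names are illustrative, not the actual ones in app/__init__.py:

```python
import struct

NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def parse_transmit_time(packet: bytes) -> float:
    """Extract the server's transmit timestamp (bytes 40-47 of a 48-byte
    SNTP response) and convert it to a Unix timestamp."""
    seconds, fraction = struct.unpack("!II", packet[40:48])
    return seconds - NTP_EPOCH_DELTA + fraction / 2**32

def clock_offset(packet: bytes, local_now: float) -> float:
    """Crude offset estimate (ignores round-trip delay):
    server time minus local time."""
    return parse_transmit_time(packet) - local_now

# Fake a response whose transmit time is exactly Unix epoch + 100s:
pkt = bytearray(48)
pkt[40:48] = struct.pack("!II", NTP_EPOCH_DELTA + 100, 0)
assert abs(clock_offset(bytes(pkt), 94.0) - 6.0) < 1e-6  # 6s off -> warn
```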
Stefan Hacker ba3e619963 feat: Tasks with CalDAV VTODO sync
New "Aufgaben" menu item below Contacts.

Backend:
- TaskList + Task + TaskListShare models
- REST API: CRUD, sharing, my-color, import/export (.ics with
  VTODO, CSV)
- CalDAV: task lists appear in autodiscovery as calendar
  collections with supported-calendar-component-set=VTODO
- PROPFIND/REPORT/GET/PUT/DELETE/PROPPATCH/MKCOL for
  /dav/<user>/tl-<id>/
- SSE notifications on changes

Frontend:
- TasksView with a list sidebar, search, "hide completed"
- Multi-select + bulk delete, status toggle via checkbox
- Editor with title/description/due date/priority/status/progress
- Sharing, personal color override, import/export

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:07:06 +02:00
Stefan Hacker 2ce088e96b feat: Import/export for contacts and calendar + bulk delete for contacts
Contacts:
- Multi-select in the list (checkbox column) with bulk delete
- Export as a combined vCard (.vcf), as a ZIP of individual vCards,
  or as CSV
- Import from vCard (multiple per file possible) or CSV; matched by
  UID, existing contacts are updated

Calendar:
- Export as iCalendar (.ics) or CSV
- Import from .ics or CSV; existing events are updated via UID

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:23:23 +02:00
Stefan Hacker c6241519a6 feat(calendar): Hint for password-protected iCal links
Browsers/calendar apps otherwise prompt for username + password -
the username has to be left empty.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:13:50 +02:00
Stefan Hacker f6626da114 feat(calendar): Multi-select + bulk delete in the list view
Checkbox column plus a "select all" header checkbox. The bulk
action deletes the selected events after confirmation; read-only
entries are skipped.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:11:12 +02:00
Stefan Hacker e96c84b5f7 feat(ui): Browser title "Mini-Cloud - <username>" + cloud favicon
The title reacts to login/logout reactively. The favicon is the
cloud from the sidebar (pi-cloud style).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:05:51 +02:00
Stefan Hacker 1eba5d0adc revert(contacts): Drop the title field again, salutation only (Herr/Frau/Divers)
Avoids sync problems caused by the composite PREFIX.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:52:23 +02:00
Stefan Hacker 655b789e06 feat(contacts): Salutation + title as separate dropdowns
Salutation: Herr/Frau/Divers (fixed); title: Dr./Prof./Dipl.-Ing./...
(editable). On save, both are combined into the vCard PREFIX; on
load they are split apart again.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:37:41 +02:00
Stefan Hacker 50df055794 feat(contacts): Salutation as a dropdown (Herr/Frau/Divers/Dr./Prof.)
editable stays enabled, so custom values remain possible.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:35:59 +02:00
Stefan Hacker 848e4b9b0f fix(contacts): Inputs in .field-row fill their container, no more overlap
Salutation/suffix/ZIP etc. had max-width containers, but the
InputText inside kept its default width and overflowed. A global
CSS rule now makes every input/select fill its field container.
field-row wraps on narrow dialogs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:32:17 +02:00
Stefan Hacker e02c4f97c1 feat(calendar): Live refresh via CalDAV, day-click navigation, list view
- caldav.py sends SSE notifications on event PUT/DELETE and on
  calendar deletion, so the web UI also reacts immediately to
  changes coming in from DAVx5.
- FullCalendar navLinks: clicking a day number in the month grid
  switches to the day view.
- New list view with full-text search, date range, calendar filter,
  sorting by date/title, and a delete button per row.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:28:44 +02:00
Stefan Hacker 10a1dec448 fix(calendar): Don't range-filter recurring events
The master event of a recurring series often lies before the
visible range - the FullCalendar RRULE plugin still needs it for
expansion.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:22:24 +02:00
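A sketch of the filter rule in Python (event shape and field names are hypothetical): recurring masters bypass the range check entirely, while plain events are range-filtered as before.

```python
from datetime import datetime

def visible_events(events, range_start, range_end):
    """Range-filter plain events, but always keep recurring masters:
    the RRULE plugin needs the master even if its dtstart lies before
    the visible window, because occurrences are expanded client-side."""
    return [
        e for e in events
        if e.get("rrule")
        or (e["dtstart"] < range_end and e["dtend"] > range_start)
    ]

weekly = {"dtstart": datetime(2020, 1, 6), "dtend": datetime(2020, 1, 6, 1),
          "rrule": "FREQ=WEEKLY"}
old = {"dtstart": datetime(2020, 1, 1), "dtend": datetime(2020, 1, 2),
       "rrule": None}
out = visible_events([weekly, old], datetime(2026, 4, 1), datetime(2026, 5, 1))
assert out == [weekly]  # the 2020 master survives; the plain old event is dropped
```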
Stefan Hacker b398d6d800 fix: CalDAV routes delegate ab-N URLs to CardDAV (delete/modify)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:16:39 +02:00
Stefan Hacker b2567d379c fix: CardDAV changes trigger an SSE refresh in the web UI
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 03:56:52 +02:00
Stefan Hacker 1762437528 fix(dav): Delegate REPORT on calendar URLs to the CalDAV handler
The CardDAV route /<username>/<ab_part>/ intercepted REPORT on
calendar URLs (e.g. /dav/Adam/cal-1/) with a 404, because 'cal-1'
does not start with 'ab-'. DAVx5 got a 404 on the calendar-query
and marked the EVENTS sync as a hard error. Fix analogous to
PROPFIND/OPTIONS: if ab_part is not ab-*, delegate to the CalDAV
REPORT handler.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 03:48:26 +02:00
Stefan Hacker 35535fb84b fix(dav): DAV header now also advertises 'addressbook'
DAVx5 registers services based on the DAV response header. Without
'addressbook' in the header, CardDAV was ignored during
auto-discovery even though addressbook-home-set was reported
correctly. That explains why only the caldav service was created
for Adam.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 03:38:04 +02:00
Stefan Hacker 8772e02410 fix(dav): Principal at Depth 1 no longer returns sub-containers
The recently introduced sub-containers (calendars/, addressbooks/)
for PROPFIND Depth 1 on /dav/<user>/ were counted by DAVx5 as empty
calendars (DEFAULT_TASK_CALENDAR_NAME phantom entries). Now that
the CardDAV route delegates correctly to the home-set handler, it
is enough for the principal to return only itself - clients follow
the home sets.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 03:32:22 +02:00
Stefan Hacker 0ef480858e fix(dav): CardDAV route intercepted PROPFIND on /dav/<user>/calendars/
The CardDAV route /<username>/<ab_part>/ is more specific in Flask
than the CalDAV handler's generic /<path:subpath>, so it also
intercepted /dav/<user>/calendars/ - with a 404, because
'calendars' does not start with 'ab-'. Result: DAVx5 got a 404 on
the home set and no longer showed any entries.

Fix: if ab_part does not start with 'ab-', delegate to the CalDAV
PROPFIND/OPTIONS instead of returning a 404.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 03:25:46 +02:00
Stefan Hacker 58ba130cd9 feat: Password manager multi-select + bulk delete
Checkbox per entry, "select all" at the top, and a red delete
button with a count. Confirmation prompt before deleting.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 16:08:18 +02:00
Stefan Hacker 230c83f124 fix(dav): Principal PROPFIND returns calendars/ + addressbooks/ containers at Depth 1
DAVx5 needed child containers under /dav/<user>/ - otherwise the
lists stayed empty after a refresh. The home sets remain separate
(calendar-home-set vs addressbook-home-set), but the principal now
lists both sub-containers explicitly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 15:33:03 +02:00
Stefan Hacker 24a6015841 fix: Separate CalDAV/CardDAV home sets + UI URLs without /dav/
Calendars and address books shared the same home set
(/dav/<user>/). On a Depth 1 PROPFIND, DAVx5 displayed both
collection types and, lacking a known resourcetype, listed them as
"DEFAULT_TASK_CALENDAR_NAME" tiles.

Solution:
* calendar-home-set points to /dav/<user>/calendars/
* addressbook-home-set points to /dav/<user>/addressbooks/
* Both paths are container collections of their own - PROPFIND
  Depth 1 returns only the matching type
* /dav/<user>/ itself no longer returns children at Depth 1;
  clients follow the home sets
* The concrete URLs cal-<id> / ab-<id> remain under /dav/<user>/
  (no breaking change for existing clients; only the discovery URL
  changes)

Frontend:
CalendarView + ContactsView now show only the hostname as the
auto-discovery URL - PROPFIND on / works now, after all. The
direct URL remains fully qualified as /dav/<user>/cal-<id> or
ab-<id> for clients that need it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 15:22:29 +02:00
Stefan Hacker 9c102823e4 feat: Contacts with Outlook fields + CardDAV server + sharing
Complete contacts overhaul, analogous to the calendar work.

Backend model:
* AddressBook: color (per book), plus a per-user color via
  AddressBookShare.color, as with CalendarShare.
* Contact: full Outlook-style structure - prefix/first/middle/
  last/suffix, display_name, nickname, organization, department,
  job_title, birthday, anniversary, notes, photo, plus JSON columns
  for multi-value fields (emails, phones, addresses with all
  address parts, websites, impp, categories).

Backend API:
* REST CRUD handles the new fields and generates vCard 3.0 as the
  source of truth for CardDAV. Full vCard parser + builder with
  escape/unescape, TYPE parameters, line folding.
* New endpoint PUT /addressbooks/<id>/my-color - a personal color
  per book without affecting the owner.
* SSE events of type 'addressbook' to the owner + all share
  recipients on every change.

CardDAV server (backend/app/dav/carddav.py):
* Full discovery via the principal - addressbook-home-set is
  announced alongside calendar-home-set.
* PROPFIND/REPORT/GET/PUT/DELETE/MKCOL for
  /dav/<user>/ab-<id>/ and /<...>/{uid}.vcf
* addressbook-query + addressbook-multiget REPORTs
* ETag-based conflict checking via If-Match/If-None-Match

Frontend (ContactsView.vue):
* Completely new editor with four tabs: General (name, org),
  Communication (emails/phones/websites/IMPP, dynamic), Addresses
  (multiple, with all parts), Details (birthday, anniversary,
  categories, notes).
* Avatar with photo selection or an initials color circle.
* Calendar sharing flow adopted 1:1: autocomplete for the user
  search, share list with a pencil to edit and a trash can to
  remove, per-user color, a CardDAV URL info block per address
  book, live refresh via SSE.
* Search covers the display name, email and company.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 15:16:01 +02:00
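Two of the vCard builder details mentioned above - value escaping and line folding - sketched in Python (simplified: this folding splits on octet boundaries and a real builder would need extra care not to split multi-byte UTF-8 sequences):

```python
def vcard_escape(value: str) -> str:
    """Escape a vCard 3.0 property value (RFC 2426): backslash first,
    then semicolon, comma, and newline."""
    return (value.replace("\\", "\\\\")
                 .replace(";", "\\;")
                 .replace(",", "\\,")
                 .replace("\n", "\\n"))

def vcard_fold(line: str, limit: int = 75) -> str:
    """Fold a content line at `limit` octets; continuation lines start
    with a single space (CRLF + SP)."""
    out, data = [], line.encode("utf-8")
    while len(data) > limit:
        out.append(data[:limit])
        data = data[limit:]
        limit = 74  # continuation lines lose one octet to the leading space
    out.append(data)
    return b"\r\n ".join(out).decode("utf-8")

assert vcard_escape("Acme, Inc.; R&D") == "Acme\\, Inc.\\; R&D"
parts = vcard_fold("NOTE:" + "x" * 100).split("\r\n ")
assert len(parts[0]) == 75 and "".join(parts) == "NOTE:" + "x" * 100
```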
Stefan Hacker fbf10197d7 fix: CalDAV calendar-query returns only the requested props
Previously the full calendar-data was always sent along, even when
the client only wanted getetag. DAVx5 does a two-stage sync: first
a calendar-query for ETags, then a multiget for the new/changed
events. Delivering too much server-side breaks this flow - the
client thinks it already has everything and skips the second stage,
but the events never land in the Android calendar DB.

Now: calendar-query checks whether <c:calendar-data/> is among the
requested props and responds accordingly. calendar-multiget still
always returns the full data.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 14:31:53 +02:00
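The prop check comes down to a namespace-aware lookup in the REPORT body. A Python sketch, assuming the request is parsed with ElementTree (the real handler's structure may differ):

```python
import xml.etree.ElementTree as ET

C = "urn:ietf:params:xml:ns:caldav"
D = "DAV:"

def wants_calendar_data(report_xml: str) -> bool:
    """Check whether the client's calendar-query asked for calendar-data.
    If it only asked for getetag, the response must omit the event body
    so the client proceeds to its multiget stage."""
    root = ET.fromstring(report_xml)
    prop = root.find(f"{{{D}}}prop")
    return prop is not None and prop.find(f"{{{C}}}calendar-data") is not None

etag_only = ('<c:calendar-query xmlns:d="DAV:" '
             'xmlns:c="urn:ietf:params:xml:ns:caldav">'
             '<d:prop><d:getetag/></d:prop></c:calendar-query>')
assert not wants_calendar_data(etag_only)
assert wants_calendar_data(
    etag_only.replace("<d:getetag/>", "<d:getetag/><c:calendar-data/>"))
```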
Stefan Hacker 0edd41e46a fix: CalDAV REPORT time-range - 500 when end is missing
DAVx5 often sends calendar-query with only <time-range start=.../>
and no end. My code then blindly filtered
CalendarEvent.dtstart < None, which made SQLAlchemy abort with a
TypeError - resulting in HTTP 500 and a completely failed sync.

Two fixes:
* the end filter is only applied when end is actually present
* the time-range parser strips tzinfo, so comparisons against the
  tz-naive DB columns cannot raise

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 14:21:56 +02:00
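Both fixes in a Python sketch (hypothetical helper; the real code builds SQLAlchemy column filters): end stays None when absent, and the parsed datetimes come out tz-naive.

```python
from datetime import datetime

def parse_time_range(attrs: dict):
    """Parse a CalDAV <time-range> element's attributes. `end` may be
    absent - then no upper-bound filter is applied at all. The result
    is tz-naive so comparisons against tz-naive DB columns cannot
    raise."""
    def parse(value):
        if value is None:
            return None
        # CalDAV time-range uses UTC timestamps like 20260401T000000Z;
        # 'Z' is matched as a literal, so the result carries no tzinfo.
        return datetime.strptime(value, "%Y%m%dT%H%M%SZ")
    return parse(attrs.get("start")), parse(attrs.get("end"))

start, end = parse_time_range({"start": "20260401T000000Z"})
assert start == datetime(2026, 4, 1) and start.tzinfo is None
assert end is None  # no blind `dtstart < None` comparison downstream
```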
Stefan Hacker e7f469f477 fix: CalDAV HEAD on events + PROPPATCH on calendars
* The GET route now also accepts HEAD - some clients check whether
  a resource exists via HEAD before sending GET.
* New PROPPATCH route on the calendar collection: it recognizes
  calendar-color + displayname and persists both. Other properties
  are acknowledged as "applied", so DAVx5 and Apple Calendar are
  not disappointed.

This should make the 500 errors during sync disappear. If not,
please post the server or DAVx5 log.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 14:18:53 +02:00
Stefan Hacker 189aa18be8 fix: PROPFIND response href matches the request URL
Previously the href in the response was always /dav/, even when
DAVx5 sent a PROPFIND to / or /.well-known/caldav. That can confuse
clients - they expect the response path to match the requested
path. current-user-principal still correctly points to /dav/Adam/.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 14:09:01 +02:00
Stefan Hacker 39e68eee6a fix: Accept PROPFIND/OPTIONS on / (root) - DAVx5 starts there
During account setup, DAVx5 first sends a PROPFIND to / before
trying /.well-known/caldav. The server answered with 405 Method Not
Allowed (because / was only registered for the SPA GET), whereupon
DAVx5 dismissed the whole server as "not DAV".

Now: PROPFIND and OPTIONS on / are delegated to the DAV handlers
(same behavior as on /dav/). GET/HEAD on / still goes to the SPA
unchanged.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 14:04:38 +02:00
Stefan Hacker 3c762e1476 fix: Well-known DAV - OPTIONS now correctly returns the DAV header
Despite an explicit OPTIONS route, Flask generated its automatic
OPTIONS response, so the DAV header was missing. DAVx5 then sees no
calendar-access and rejects the server.

Consolidated into one handler with method-based dispatch and
provide_automatic_options=False, so Flask does not interfere.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:53:24 +02:00
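A minimal sketch of the consolidated-handler idea (framework-agnostic core; the Flask wiring in the comment uses `provide_automatic_options=False` as the commit describes, but the view names, paths and header values here are illustrative assumptions):

```python
# Hypothetical method-based dispatch for /.well-known/caldav. The point is
# that OPTIONS is answered by our own code - with the DAV capability
# header - instead of by Flask's automatic OPTIONS response.

DAV_HEADER = "1, 2, 3, calendar-access"

def handle_well_known(method: str):
    """Return (status, headers) for one well-known request."""
    if method == "OPTIONS":
        return 200, {"DAV": DAV_HEADER, "Allow": "OPTIONS, PROPFIND, GET, HEAD"}
    if method == "PROPFIND":
        return 207, {"DAV": DAV_HEADER, "Content-Type": "application/xml"}
    # GET/HEAD keep redirecting to /dav/ as a visual fallback
    return 302, {"Location": "/dav/"}

# Flask wiring (assumed names); the key flag is provide_automatic_options:
#
# app.add_url_rule(
#     "/.well-known/caldav",
#     view_func=well_known_view,
#     methods=["OPTIONS", "PROPFIND", "GET", "HEAD"],
#     provide_automatic_options=False,
# )
```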
Stefan Hacker 3f0d823dbf fix: CalDAV for DAVx5 - dispatch well-known internally, more properties
Changes for better DAVx5 support:

* /.well-known/caldav now responds directly to PROPFIND/OPTIONS
  without redirect shenanigans. GET/HEAD still redirect to /dav/
  as a visual fallback.
* strict_slashes disabled app-wide: /dav and /dav/ are equivalent,
  as are the sub-paths. DAVx5 mixes both.
* Every DAV response now carries the DAV header (1, 2, 3,
  calendar-access), not just OPTIONS.
* The calendar response now contains supported-report-set with
  calendar-query + calendar-multiget (DAVx5 checks for it).
* current-user-privilege-set is filled with concrete privileges
  (read, write, write-properties, write-content, bind, unbind)
  instead of being empty.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:50:50 +02:00
Stefan Hacker c4b381c5e9 fix: CalDAV autodiscovery - XML was nested twice
Property elements were created under a container with the same tag,
e.g.:
  <current-user-principal>
    <current-user-principal>    <!-- wrong, duplicated -->
      <href>/dav/adam/</href>
    </current-user-principal>
  </current-user-principal>

Clients such as DAVx5 and Thunderbird therefore fail to recognize
the principal and report "No CalDAV service found". XML generation
reworked - the response helpers now take a populate_prop callback
that writes the actual property children directly into the
<prop> element.

Additionally:
* /.well-known/caldav and /carddav now also accept PROPFIND,
  OPTIONS, HEAD (some clients keep the method from their first
  request).
* The calendar response contains current-user-privilege-set (empty,
  as a signal that the client need not do ACL-dependent checks).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:44:44 +02:00
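The populate_prop idea can be sketched with `xml.etree` (function and callback names are assumptions; the point is that the callback appends the real property children into `<prop>` exactly once, so no duplicated wrapper tag can appear):

```python
# Hypothetical response helper: the caller fills <prop> via a callback
# instead of the helper creating a child that repeats the property tag.
import xml.etree.ElementTree as ET

DAV = "DAV:"

def propstat(multistatus: ET.Element, href: str, populate_prop) -> None:
    """Append one <response> with <propstat>/<prop>; callback fills <prop>."""
    resp = ET.SubElement(multistatus, f"{{{DAV}}}response")
    ET.SubElement(resp, f"{{{DAV}}}href").text = href
    ps = ET.SubElement(resp, f"{{{DAV}}}propstat")
    prop = ET.SubElement(ps, f"{{{DAV}}}prop")
    populate_prop(prop)  # children go straight into <prop>, no extra wrapper
    ET.SubElement(ps, f"{{{DAV}}}status").text = "HTTP/1.1 200 OK"

def add_principal(prop: ET.Element) -> None:
    cup = ET.SubElement(prop, f"{{{DAV}}}current-user-principal")
    ET.SubElement(cup, f"{{{DAV}}}href").text = "/dav/adam/"

ms = ET.Element(f"{{{DAV}}}multistatus")
propstat(ms, "/dav/", add_principal)
```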
Stefan Hacker e85338761d feat: personal color for shared calendars
CalendarShare gets a color column. In the calendar menu, every user
can set their own display color for a calendar shared with them,
without changing the color for the owner or for other share
recipients.

* Owner: the color changes the calendar directly (as before).
* Share recipient: the color is stored in CalendarShare.color and
  delivered only to them (list_calendars injects it into 'color';
  the owner's color stays in 'owner_color' as a reference).

New endpoint: PUT /calendars/<id>/my-color.
UI hint: "Only for your view - <owner> keeps their color".

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:14:45 +02:00
Stefan Hacker 2170f4a7b1 feat: calendar view updates live via SSE
Backend:
New event type 'calendar' in the broadcaster. Emitted on event CRUD,
series exceptions, adding/removing shares, and deleting whole
calendars. Recipients: owner + all users with a CalendarShare on
the calendar in question.

Frontend:
On mount, CalendarView opens an EventSource to /api/sync/events and
reloads the calendar list + events on every 'calendar' event
(debounced by 300ms). Everyone involved thus sees changes in
practically real time - no more F5 needed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:10:54 +02:00
Stefan Hacker ce4faedd88 feat: show CalDAV URLs in the calendar menu
The three-dot menu of each calendar now shows an info block with
the CalDAV URLs:

* auto-discovery URL for Thunderbird / DAVx5 / Apple Calendar
* direct URL for this specific calendar (e.g. Outlook
  CalDAV-Synchronizer)
* a short hint about which client takes which URL

Each URL has a copy icon. Complements the existing iCal link with
the option of bidirectional sync via CalDAV.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:08:11 +02:00
Stefan Hacker fda9e685a9 feat: edit calendar shares via the pencil button
Analogous to the file shares: the pencil next to the trash can in
the share list turns the row into an inline edit row with a
permission dropdown + check/X. Saving uses the same POST /share
endpoint that also handles the initial sharing - it detects the
existing user and only updates the permission.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:04:48 +02:00
Stefan Hacker c73be6fac5 fix: startup crash - removed doubly defined Calendar.owner relation
User.calendars already has backref='owner'; the Calendar.owner I
had added on top collided with it, and SQLAlchemy refused to
initialize the mappers ("Error creating backref 'owner'...").
That left all auth endpoints dead.

Now just a comment - the backref takes over the job.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:00:00 +02:00
Stefan Hacker a143325bbe feat: calendar - autocomplete + private flag + share list + bugfix
Sharing fix:
The Calendar model had no owner relation to User - list_calendars
crashed when listing shared calendars (c.owner.username ->
AttributeError). Now uses an explicit foreign_keys relationship.

User autocomplete:
"Share calendar" now uses /users/search, just like files. Typing
2+ characters shows a dropdown with matching usernames. Clicking
takes over the name.

Existing shares are shown in the menu with a trash can for
removal.

Private flag for events:
CalendarEvent gets an is_private column. Checkbox in the event
dialog: "🔒 Private (participants only see the time block)".

Redaction applies in three places:
* GET /events: non-owners see summary="Private", description and
  location = null. The time window stays fully visible.
* iCal export (/ical/<token>): private events are emitted with
  CLASS:PRIVATE, and SUMMARY/DESCRIPTION/LOCATION are stripped.
* CalDAV: currently only a user's own calendars are exported
  anyway, so no redaction is needed. Will come with share support.

The owner, of course, sees all details of their private event in
their own view.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:56:25 +02:00
Stefan Hacker 5797a7b738 feat: CalDAV server (RFC 4791 subset) for native client sync
Complete CalDAV implementation under /dav/ - Thunderbird, DAVx5,
Apple Calendar and Outlook (CalDAV-Synchronizer) can simply sign
in via HTTP Basic Auth with their Mini-Cloud account and sync
their calendars.

Supported methods:
* OPTIONS      - DAV capabilities
* PROPFIND     - discovery, principal, calendar-home, calendars,
                 event listings (Depth 0/1 honored)
* REPORT       - calendar-query + calendar-multiget with an
                 optional time-range filter (<time-range>)
* GET          - a single event as VCALENDAR
* PUT          - create/update an event (with ETag check via
                 If-Match + If-None-Match)
* DELETE       - an event or a whole calendar
* MKCALENDAR   - create a new calendar from the client

The iCal parser handles SUMMARY, DESCRIPTION, LOCATION, DTSTART,
DTEND, RRULE, EXDATE - including line folding (RFC 5545).
All-day events (VALUE=DATE) are detected correctly.

ETags are based on the updated_at timestamp and are returned with
each PUT response so clients can detect conflicts.

nginx.example.conf: /dav/ with proxy_request_buffering off for
larger PUTs, plus forwarding of the .well-known URLs.

README: its own "CalDAV access" block with a table per client.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:51:21 +02:00
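The RFC 5545 line folding mentioned above can be sketched in a few lines (not the project's actual parser - just the unfolding rule: a physical line starting with a space or tab continues the previous line, with that one whitespace character removed):

```python
# Hypothetical unfolding step that would run before property parsing.
def unfold(ical_text: str) -> list[str]:
    """Join folded RFC 5545 content lines back into logical lines."""
    lines: list[str] = []
    for raw in ical_text.replace("\r\n", "\n").split("\n"):
        if raw[:1] in (" ", "\t") and lines:
            lines[-1] += raw[1:]  # continuation: drop the fold marker, append
        elif raw:
            lines.append(raw)
    return lines

folded = "SUMMARY:A very long\r\n  event title\r\nDTSTART;VALUE=DATE:20260412\r\n"
print(unfold(folded))
```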
Stefan Hacker cbb2786130 fix: calendar - always render events as bars instead of dot+time
eventDisplay: 'block' forces FullCalendar to render timed events in
the month view as colored bars too, instead of a dot with a time
label. An event created via the "New event" button thus looks the
same as one created by clicking on the day.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:44:28 +02:00
Stefan Hacker c1b05e2525 feat: recurring events: edit just this occurrence or the whole series
Clicking a recurring event first opens a dialog:
"Only this occurrence" or "Whole series".

* Series: edits the master as before
* Only this one: adds an EXDATE for the clicked date to the master
  and creates a standalone replacement event with the edited data

Backend:
* CalendarEvent.exdates stores exception dates comma-separated
* POST /events/<id>/exception adds the EXDATE and optionally
  creates the replacement event with a fresh UID
* _build_vevent now writes EXDATE lines into the ical_data, so
  CalDAV clients will see the exceptions too

Frontend:
* The FullCalendar rrule plugin receives the exdate list and hides
  the skipped days
* Drag & drop still moves the whole series (shortcut - to move a
  single occurrence, click the event and edit it)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:41:35 +02:00
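Turning the comma-separated exdates column into EXDATE lines for a VEVENT might look like this (helper name and signature are assumptions; only the comma-separated storage and EXDATE output are taken from the commit):

```python
# Hypothetical helper in the spirit of _build_vevent: expand the
# comma-separated exdates column into one EXDATE line per exception.
def build_exdate_lines(exdates_col, all_day=False):
    if not exdates_col:
        return []
    dates = [d.strip() for d in exdates_col.split(",") if d.strip()]
    if all_day:
        return [f"EXDATE;VALUE=DATE:{d}" for d in dates]
    return [f"EXDATE:{d}" for d in dates]

print(build_exdate_lines("20260415T090000Z,20260422T090000Z"))
```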
Stefan Hacker ddd8f57e69 feat: calendar events show icons + start-end time
* 📅 icon for all-day events
* 🔁 icon for recurring events
* shows "09:00-10:30" instead of just "09:00" in the week/day views
* mouseover tooltip with all event info, including location and
  description

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:38:44 +02:00
Stefan Hacker c5284f57e0 feat: calendar with FullCalendar - week/month/day, drag & drop, recurrence
Calendar UI rebuilt from scratch with FullCalendar:
* three views: month, week, day - switchable via the toolbar
* drag & drop: move events between days
* resize: drag an event's edge to change its duration
* sidebar with active calendars (checkbox to show/hide)
* German localization, week starts on Monday, week numbers
* today marker + "now" line in the week/day views

Event editing:
* title, location, description, time range (or all-day)
* recurrence editor: daily, weekly (with weekdays), monthly
  (including "every 2nd Wednesday"), yearly - each with interval,
  end date or repeat count
* an RRULE field (RFC 5545) is generated and rendered in the
  calendar by the rrule plugin

Backend:
* CalendarEvent: description + location columns added
* Calendar: ical_password_hash for password-protected
  subscription links
* /calendars/<id>/ical-link supports password + clear_password
* DELETE /calendars/<id>/ical-link to revoke it
* ical_export enforces HTTP Basic Auth when a password is set -
  DAVx5, Apple Calendar and Thunderbird understand that out of
  the box

Frontend deps: @fullcalendar/{core,daygrid,timegrid,interaction,
rrule,vue3}, rrule - roughly 150KB of bundle overhead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:32:59 +02:00
Stefan Hacker 04bc3f80ec feat: edit existing user shares via the pencil button
Next to the trash can there is now a pencil icon in the share
dialog: clicking it turns the row into an inline edit row with a
permission dropdown + reshare checkbox + save/cancel buttons.
Saving calls POST /permissions with the user_id - the backend
detects the existing share and updates it, instead of having to
delete and recreate it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:14:03 +02:00
Stefan Hacker 9b135e42b7 feat: live permission changes + "folder no longer available" handling
Backend:
set_permission and remove_permission now fire an SSE event of type
'permission' to the target user + owner + other share recipients.
The file lists of everyone involved thus update in real time -
including for the person who is just losing access.

Frontend:
FilesView wraps loadFiles in safeLoadCurrentFolder(). On 403/404 a
toast appears ("This folder was deleted or the share was removed")
and after 600ms the view navigates back to root. Applies to direct
navigation, folder changes and SSE-triggered auto-reloads.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:00:15 +02:00
Stefan Hacker 9369c851a0 feat: user shares - reshare right + read access is enforced
New permission model for user shares:

* FilePermission gets two new columns:
  - can_reshare (bool): may this user pass the share on?
  - granted_by (user_id): who created this share?

* set_permission / create_share_link now also allow non-owners,
  provided they have can_reshare. The rules:
  - read + reshare -> may only reshare read access
  - write + reshare -> may reshare read OR write access
  - admin can only be granted by the owner
  - every re-sharer may in turn pass on can_reshare

* remove_permission: the owner can remove all shares; re-sharers
  only the ones they created themselves.

* get_permissions: the owner sees all; re-sharers only their own.

* list_files returns my_permission + my_can_reshare per entry -
  the frontend can show/hide rename/delete/share buttons precisely
  instead of blindly showing all of them.

Frontend:
* rename/delete buttons only with write access
* share button only for the owner or re-sharers
* "may reshare" checkbox next to the permission dropdown in the
  dialog
* dropdown options filtered by the user's own level (a re-sharer
  sees no levels higher than their own)
* hint text "You have X - you can reshare at most X"
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 11:54:36 +02:00
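The reshare ceiling described above can be sketched as a small pure function (names and level ranks are assumptions; the rules - never grant more than you hold, admin is owner-only - come from the commit):

```python
# Hypothetical helper: which permission levels may this user pass on?
LEVELS = {"read": 1, "write": 2, "admin": 3}

def grantable_levels(my_level, is_owner):
    """Owner may grant everything; a re-sharer is capped at their own level."""
    if is_owner:
        return ["read", "write", "admin"]
    ceiling = LEVELS[my_level]
    # admin is reserved for the owner, even if the re-sharer holds it
    return [lvl for lvl, rank in LEVELS.items()
            if rank <= ceiling and lvl != "admin"]

print(grantable_levels("write", is_owner=False))  # → ['read', 'write']
```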
Stefan Hacker 035923834b docs: README explains the reach of the file lock in plain language
New section "What the lock really can (and cannot) do" with a table
+ an Adam/Anna example scenario. Shows non-experts that the lock
protects the web GUI, the client and uploads, but not Windows
Explorer - and that the conflict copy is the safety net.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 11:36:07 +02:00
Stefan Hacker 23563622f8 feat: lock badge + smart context menu in the client's local file view
The local file list in the client now shows a 🔒 badge with the
username per file when checked out (like the server view + web
GUI). browse_sync_folder pulls the server tree on every call and
correlates the local file with the file-lock status via a journal
lookup (or .cloud metadata).

The right-click menu now reacts to the lock status:
- free              -> "Check out (lock)"
- own/foreign lock  -> "Unlock (check in)"
New Tauri command lock_file_cmd for locking only, without opening.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 11:32:01 +02:00
Stefan Hacker 5afb87c9cd fix: make the SSE reload in FilesView a bit more robust
When the connection is established (open event), an initial reload
is now triggered so that any changes between the last render and
the SSE connection are not lost. Applies equally to own and shared
folders (same FilesView component).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 11:21:10 +02:00
Stefan Hacker 8c7a14c38f fix: server view updates lock status immediately via SSE
Until now, the server file list in the client waited for a completed
sync run before lock changes by other users became visible. Events
without a file download (pure lock/unlock events) sometimes never
reached the UI at all.

The frontend now listens directly to the sse-event from the backend
and calls loadFileTree + loadLocalFiles - lock icons in the server
tree appear and disappear in real time.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 11:18:34 +02:00
Stefan Hacker 6c9daa5783 feat: offline files are checked out again when reopened
Previously the client only locked on the first open (.cloud
placeholder -> download). After checking in and double-clicking
again, the file stayed unlocked because the open path was missing.

The new Tauri command open_offline_file resolves the server file ID
via the sync journal, locks on the server, and opens the file
locally with the default app. In the local file browser:
- double-clicking an already-offline file now checks it out and
  opens it (previously: no reaction)
- the right-click menu gets "Open (check out)" for offline files

As before, the lock triggers notify_file_change -> SSE -> web UI,
which updates the lock status immediately.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 11:09:06 +02:00
Stefan Hacker 88ab3c9b8d fix: save endpoints fire an SSE event - web edits now sync
/files/<id>/save (text/HTML/spreadsheet) and the OnlyOffice
callback updated content + checksum but did not call
notify_file_change. The client therefore got no SSE trigger and
only noticed the new server version at the next 30s fallback
sync - if at all.

Now: both endpoints emit 'updated' to the owner + share
recipients; desktop and web clients react immediately.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:56:51 +02:00
Stefan Hacker e3cf7b1b64 fix: SSE broadcaster needs a single worker - otherwise events are lost between processes
With 2 Gunicorn workers, the in-memory broadcaster runs in two
separate processes. If a lock request lands on worker A and the
recipient's SSE connection on worker B, the event never reaches
the client - exactly why the live refresh of shared folders was
unreliable.

Now: 1 worker with 32 threads. Threads share memory, so the
broadcaster is the same for all connections. More throughput would
require Redis Pub/Sub - single-process mode is enough here.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:51:49 +02:00
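Why this only works in a single process becomes obvious from a minimal in-memory broadcaster sketch (not the project's actual class - names assumed): the subscriber queues live in process memory, so a publisher in another worker process could never reach them.

```python
# Minimal in-memory fan-out broadcaster: one queue per subscriber.
import queue
import threading

class Broadcaster:
    def __init__(self):
        self._subscribers = []
        self._lock = threading.Lock()

    def subscribe(self):
        """Register an SSE connection; the caller drains the returned queue."""
        q = queue.Queue()
        with self._lock:
            self._subscribers.append(q)
        return q

    def publish(self, event):
        """Fan the event out to every subscriber in this process."""
        with self._lock:
            for q in self._subscribers:
                q.put(event)

b = Broadcaster()
sub = b.subscribe()
b.publish({"type": "lock", "file_id": 42})
print(sub.get_nowait())  # → {'type': 'lock', 'file_id': 42}
```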
Stefan Hacker 3af2bc3312 fix: SSE blocks gunicorn workers - switch to gthread
With 4 synchronous workers, every SSE connection permanently
occupied a whole worker. 4 open browser tabs -> all other requests
blocked -> "loading files takes forever".

Solution: gthread worker class with 2 workers x 16 threads = 32
concurrent slots. Long-running SSE streams each occupy only one
thread; regular requests run unimpeded.

nginx.example.conf: separate location block for /api/sync/events
with proxy_buffering off and a 24h read timeout, so events get
through immediately and the connection does not drop.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:33:02 +02:00
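The nginx side described above might look roughly like this (a sketch, not a copy of nginx.example.conf - the upstream address and the exact directive set are assumptions; only proxy_buffering off and the 24h read timeout come from the commit):

```nginx
# SSE endpoint: no buffering, long read timeout so the stream stays open
location /api/sync/events {
    proxy_pass http://127.0.0.1:5000;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;          # deliver events immediately
    proxy_cache off;
    proxy_read_timeout 24h;       # keep the idle stream alive
}
```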
Stefan Hacker 5f905b4925 fix: sync error "error decoding response body" + server edits
Three problems in one:

1. create_folder/get_sync_tree parsed the response as JSON even on
   HTTP errors. On 401/409/etc. the result was "error decoding
   response body" instead of the actual error message. The status
   is now checked first; on errors the body text is returned.

2. With no journal entry and differing hashes, a conflict copy
   used to be created. For server edits made in the web UI (where
   the client had never recorded the file in its journal), that
   was wrong. Nextcloud approach: on first contact the server is
   authoritative - download instead of conflict copy.

3. run_sync_now picks up newly configured sync_paths from the
   state, so manual syncs also work after add_sync_path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:25:01 +02:00
Stefan Hacker 28fb1c47c2 feat: web GUI live refresh via SSE
On mount, FilesView subscribes to the backend's SSE events. A
lock/unlock, create, update or delete by another client triggers a
debounced reload of the current folder view. The EventSource
reconnects automatically and is closed cleanly on unmount.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:21:00 +02:00
Stefan Hacker b33e66cad9 fix: shared folders actually show their files
list_files filtered child files by owner_id=current_user, so a
shared folder (owned by another user) showed no files at all. Now
the access permission is checked when entering a folder; own
folders behave as before, and in a shared folder all child files
are listed.

_check_file_access now also walks up the folder tree, so a
permission on an ancestor folder automatically grants access to
all descendants.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:13:35 +02:00
Stefan Hacker c63a52629d fix: lock/unlock buttons in FilesView - duplicated /api prefix
apiClient has baseURL '/api' - the URL must not start with /api
again, otherwise it becomes /api/api/... and the request goes
nowhere.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:01:43 +02:00
Stefan Hacker 5ba007ef51 fix: borrow checker in the background sync thread
Temporary drop order: a MutexGuard held a reference to a state
binding that was already dropped at the end of the block. An
intermediate variable forces the MutexGuard to drop before the
binding.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 09:57:06 +02:00
Stefan Hacker 6aad986d78 fix: PDFs in the preview iframe instead of a new tab
The download endpoint now supports ?inline=1, which sets
Content-Disposition to inline instead of attachment. The PDF and
image previews use this parameter so the browser renders the PDF
inside the preview iframe instead of triggering a download.
Regular download buttons are unchanged.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 09:55:40 +02:00
Stefan Hacker 50385faa02 feat: real-time sync via SSE + journal-based 3-way comparison
Desktop client completely reworked along Nextcloud's lines:
- A persistent SQLite journal (journal.rs) stores the last known
  state per file - it survives client restarts (the main bug,
  now fixed).
- New engine.rs: 3-way comparison Local <-> Journal <-> Server with
  a clean conflict copy (including username + timestamp).
- Delete propagation: locally deleted files end up in the owner's
  server trash (also for shares). Files deleted on the server are
  removed locally.
- Lock flow repaired: fresh token on every call, error feedback.

Real-time sync:
- Backend: SSE endpoint /api/sync/events with an in-memory
  broadcaster. Events on create/update/delete/lock/unlock,
  delivered to the owner plus all users with a share permission.
- Client: persistent SSE connection with auto-reconnect. Events
  trigger an immediate sync (<100ms). The 30s polling remains as
  a fallback for network outages.

More fixes:
- /api/sync/tree filters is_trashed=False (the trash is no longer
  synced to clients).
- Web GUI: lock/unlock buttons per file; admins may force-release
  foreign locks. Rename/delete disabled while someone else holds
  the lock.
- Lock check in the backend on PUT/DELETE (423 Locked response).
- Background sync is started only once per process and re-reads
  sync_paths each iteration - add/remove takes effect immediately,
  no client restart needed.
- Watchers are managed individually per sync path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 09:50:44 +02:00
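The 3-way comparison can be illustrated with a small decision function (a simplification under stated assumptions: hashes as plain strings, deletions omitted, and function names invented - the real engine.rs works on journal rows and server tree entries):

```python
# Hypothetical 3-way decision: local vs. journal vs. server state per file.
# None means "no entry"; equal hashes mean "unchanged since the last sync".
def decide(local, journal, server):
    if journal is None:
        # first contact: treat the server as authoritative
        if server is not None and local != server:
            return "download"
        if local is not None and server is None:
            return "upload"
        return "noop"
    if local == journal and server != journal:
        return "download"        # only the server side changed
    if local != journal and server == journal:
        return "upload"          # only the local side changed
    if local != journal and server != journal and local != server:
        return "conflict_copy"   # both changed differently
    return "noop"
```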
Stefan Hacker e65d330d1d docs: README file locking table updated
- feature description adjusted (manual unlocking, auto-unlock)
- new file locking table covering all scenarios
  (open, unlock, forget, client quit, admin)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 03:06:40 +02:00
Stefan Hacker 2bd8a2e1b5 feat: heartbeat for locks - forgotten locks expire after 15 min
If someone forgets to unlock:
- client running -> heartbeat every 60s -> lock stays active
- client closed -> no heartbeat -> lock expires after 15 min
- laptop lid closed -> same effect -> 15 min -> free

Tracking: a locked_files Vec remembers which files we have locked.
The heartbeat runs in the token-refresh thread (heartbeat every
60s, token refresh every 10 min).

A lock is tracked when opening and removed on unlock/unmark-offline.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 03:04:28 +02:00
Stefan Hacker 597dafc461 feat: file lock on open + unlock via right-click
When opening a .cloud file:
- download + the file stays local (as before)
- a lock is set on the server (others see "locked by X")
- no auto-unlock - the file stays locked until manually unlocked

Right-click on offline files in the file browser:
- "Unlock (release for others)" - removes the lock
- "No longer offline" - restores the .cloud placeholder +
  unlocks automatically

This keeps files locked for as long as you work on them.
When done: right-click -> unlock. Simple and explicit.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 03:03:01 +02:00
Stefan Hacker 0845659c84 refactor: auto-close removed entirely - Nextcloud approach
Opening a .cloud file = download + the file stays a real file (like
Nextcloud). Changes are synced automatically by the watcher.
"No longer offline" via right-click in the file browser -> back
to .cloud.

Removed:
- auto-close detection (is_file_in_use, OpenedFile tracking,
  heartbeat, lock/unlock on open)
- lock commands (lock_file_cmd, unlock_file_cmd)
- opened_files HashMap, locked_files Vec
- the is_file_in_use function
- ~100 fewer lines of code

Kept:
- token-refresh thread (every 10 min)
- file-locking API in the backend (still used by the web UI)
- watcher + immediate sync
- mark_offline / unmark_offline commands
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 03:01:02 +02:00
Stefan Hacker 763fd4d563 fix: auto-close detects file activity instead of just a file lock
Problem: Notepad and most editors hold no file lock.
is_file_in_use() immediately reported "not in use" and cleaned up
the file before the user could edit it.

New approach - three conditions must all hold:
1. at least 30 seconds since opening (grace period)
2. no file lock AND file size unchanged
3. at least 2 minutes since the last change/lock

File activity is tracked:
- size changes -> reset the timer
- file lock active (Office) -> reset the timer
- only after 2 minutes of inactivity -> auto-close

This works for all programs:
- Office (holds a lock): lock disappears -> wait 2 min -> close
- Notepad (no lock): last size change -> 2 min -> close
- quick open+close: the 30s grace period prevents an immediate
  close

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 02:57:12 +02:00
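The three conditions above can be sketched as one predicate (times in seconds on a monotonic clock; names are assumptions - the actual client tracked this in Rust, and this logic was removed again in the refactor below):

```python
# Hypothetical auto-close check mirroring the three conditions.
GRACE_PERIOD = 30   # condition 1: minimum time since the file was opened
INACTIVITY = 120    # condition 3: minimum quiet time since last change/lock

def may_auto_close(now, opened_at, last_activity_at, has_file_lock, size_changed):
    if now - opened_at < GRACE_PERIOD:            # condition 1
        return False
    if has_file_lock or size_changed:             # condition 2 (resets the timer)
        return False
    return now - last_activity_at >= INACTIVITY   # condition 3
```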
Stefan Hacker 0714d96668 fix: .cloud placeholders are updated on server changes
Before: the placeholder was only created if it did not exist yet.
When the file changed on the server (new size, new checksum), the
placeholder kept the old metadata.

Now: on every sync the checksum in the placeholder is compared with
the server checksum. On a difference -> rewrite the placeholder
with the current size, checksum and date.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 02:43:09 +02:00
Stefan Hacker b6afc05148 fix: opening .cloud files - better error handling + fallback filename
- filename: first from the JSON "name" field; fallback: strip
  .cloud from the filename
- all errors are now reported instead of swallowed (download,
  lock, open)
- open::that errors are returned instead of ignored
- verbose logging: paths, size, lock status
- check that the downloaded file exists before opening it

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 02:41:58 +02:00
58 changed files with 9935 additions and 1073 deletions
+18
@@ -31,6 +31,24 @@ FRONTEND_URL=https://cloud.example.com
# Max upload size in MB
MAX_UPLOAD_SIZE_MB=500
# Time zone (process-wide) - IANA format "Region/City".
# Affects datetime.now(), strftime %Z and calendar/task timestamps.
# Common values:
# Europe/Berlin, Europe/Vienna, Europe/Zurich, Europe/Amsterdam,
# Europe/Paris, Europe/London, Europe/Madrid, Europe/Rome,
# Europe/Warsaw, Europe/Prague, Europe/Copenhagen, Europe/Stockholm,
# UTC, America/New_York, America/Los_Angeles, Asia/Tokyo, Australia/Sydney
# Full list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
TZ=Europe/Berlin
# NTP server for checking the clock at startup (non-invasive offset check
# - inside the container the system clock cannot be set; on a deviation >5s
# a warning appears in the log; please synchronize the host clock then).
# Leave empty to disable the check.
# Default: Physikalisch-Technische Bundesanstalt (official German time).
# Alternatives: ptbtime2.ptb.de, ptbtime3.ptb.de, de.pool.ntp.org, time.cloudflare.com
NTP_SERVER=ptbtime1.ptb.de
# OnlyOffice Document Server (optional)
# Own subdomain with HTTPS, e.g. https://office.example.com
# The JWT automatically uses the JWT_SECRET_KEY above
+10 -1
@@ -11,6 +11,7 @@ FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
# tzdata is already included in python:3.11-slim - only gcc needs installing.
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc \
&& rm -rf /var/lib/apt/lists/*
@@ -30,9 +31,17 @@ RUN mkdir -p /app/data/files
# Environment
ENV FLASK_ENV=production
ENV TZ=Europe/Berlin
ENV DATABASE_PATH=/app/data/minicloud.db
ENV UPLOAD_PATH=/app/data/files
EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "--timeout", "120", "wsgi:application"]
# Single worker with many threads. The SSE broadcaster lives in process
# memory - with multiple workers, events would never reach the recipient
# when sender and receiver sit on different workers.
# 32 threads = up to 32 concurrent requests/SSE streams.
CMD ["gunicorn", "--bind", "0.0.0.0:5000", \
"--worker-class", "gthread", "--workers", "1", "--threads", "32", \
"--timeout", "120", "--keep-alive", "65", \
"wsgi:application"]
+93 -8
@@ -191,6 +191,36 @@ docker-compose up --build -d
**Without OnlyOffice** (`ONLYOFFICE_URL` empty), Office files are shown in a simple preview. **With OnlyOffice** you get a full-featured editor (comparable to Google Docs).
### Time zone & NTP
The `.env` contains two variables that affect the system time:
```env
TZ=Europe/Berlin
NTP_SERVER=ptbtime1.ptb.de
```
**`TZ`** sets the process-wide time zone (affects log timestamps, calendar/task times, `datetime.now()`). IANA format `Region/City`.
Common values:
| Region      | Example values |
| ----------- | -------------- |
| Germany     | `Europe/Berlin` |
| DACH/EU     | `Europe/Vienna`, `Europe/Zurich`, `Europe/Amsterdam`, `Europe/Paris`, `Europe/London`, `Europe/Madrid`, `Europe/Rome`, `Europe/Warsaw` |
| Northern EU | `Europe/Copenhagen`, `Europe/Stockholm`, `Europe/Helsinki`, `Europe/Oslo` |
| Other       | `UTC`, `America/New_York`, `America/Los_Angeles`, `Asia/Tokyo`, `Australia/Sydney` |
Full list: <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>
**`NTP_SERVER`** is queried at startup to check how far the system clock has drifted. A drift > 5 s produces a warning in the log. **Note:** this does not set the clock inside the container (that would require `CAP_SYS_TIME`) - an NTP daemon should run on the host. The check exists purely for visibility.
Default: `ptbtime1.ptb.de` (the official German time reference of the Physikalisch-Technische Bundesanstalt, stratum 1, very high availability).
Alternatives: `ptbtime2.ptb.de`, `ptbtime3.ptb.de`, `de.pool.ntp.org`, `time.cloudflare.com`. Leave empty to disable the check.
Aktuelle Werte sind im Admin-Bereich unter **Einstellungen > System** einsehbar.
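The startup check amounts to a plain SNTP query. A rough sketch under assumed names (the actual service lives in `app/services/ntp_check.py`):

```python
import socket
import struct
import time

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
NTP_EPOCH_OFFSET = 2208988800

def ntp_to_unix(ntp_seconds: int) -> float:
    """Convert an NTP timestamp (seconds since 1900) to Unix time."""
    return ntp_seconds - NTP_EPOCH_OFFSET

def query_ntp_offset(server: str, timeout: float = 3.0) -> float:
    """Return server_time - local_time in seconds via a minimal SNTP request."""
    packet = b'\x1b' + 47 * b'\0'  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
    # Transmit Timestamp: seconds field sits at bytes 40..43 of the reply
    ntp_seconds = struct.unpack('!I', data[40:44])[0]
    return ntp_to_unix(ntp_seconds) - time.time()

# At startup: if abs(query_ntp_offset('ptbtime1.ptb.de')) > 5, log a warning.
```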
## Usage
### Files
@@ -210,14 +240,32 @@ docker-compose up --build -d
### Calendar
- Create calendars, add events (month/day view)
- Month/week/day view (FullCalendar)
- Drag & drop between days, resize event duration by dragging its edge
- Recurring events: daily/weekly/monthly/yearly, "every 2nd Wednesday", custom intervals, end date or count
- Recurring series: edit "only this occurrence" or "the whole series"
- Per-calendar visibility toggle via checkbox
- Share calendars with other users (read or read+write)
- Generate an iCal link for read-only import into Google Calendar, Apple Calendar, etc.
- CalDAV access for native sync:
  - **iOS**: Settings > Calendar > Accounts > Other > CalDAV
  - **Android (DAVx5)**: server URL: `https://<your-domain>/dav/`
  - **Thunderbird**: New Calendar > On the Network > CalDAV
  - **Outlook (CalDAV-Synchronizer)**: server URL: `https://<your-domain>/dav/`
- iCal subscription link with optional password (HTTP Basic Auth)
- Full CalDAV server (RFC 4791 subset) - see below
#### CalDAV Access
Native sync with phone/laptop calendars. The server URL is always
`https://<your-domain>/dav/` - username + password are the same as on the web.
| Client | Setup |
|-----------------|-------------|
| **iOS/macOS** | Settings > Calendar > Accounts > Other > CalDAV account, server `cloud.example.com/dav/` |
| **Android (DAVx5)** | Add account > Login with URL and username, URL `https://cloud.example.com/dav/` |
| **Thunderbird** | New Calendar > On the Network > CalDAV, URL `https://cloud.example.com/dav/` (Thunderbird discovers the calendars itself) |
| **Outlook** | CalDAV-Synchronizer plugin, server URL `https://cloud.example.com/dav/` |
Supported operations: PROPFIND (auto-discovery via `/.well-known/caldav`),
REPORT (calendar-query / calendar-multiget, including time-range filters),
GET/PUT/DELETE for individual events, MKCALENDAR, and EXDATE for series
exceptions. ETags are used so clients can detect what has changed.
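A calendar-query REPORT with a time-range filter, roughly as DAVx5 or Thunderbird would send it, can be sketched like this (the exact XML a given client emits varies):

```python
def calendar_query_body(start: str, end: str) -> str:
    """Build a CalDAV calendar-query REPORT body (RFC 4791) asking for all
    VEVENTs that overlap [start, end) in UTC basic format (YYYYMMDDTHHMMSSZ)."""
    return (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">\n'
        '  <D:prop><D:getetag/><C:calendar-data/></D:prop>\n'
        '  <C:filter>\n'
        '    <C:comp-filter name="VCALENDAR">\n'
        '      <C:comp-filter name="VEVENT">\n'
        f'        <C:time-range start="{start}" end="{end}"/>\n'
        '      </C:comp-filter>\n'
        '    </C:comp-filter>\n'
        '  </C:filter>\n'
        '</C:calendar-query>\n'
    )

body = calendar_query_body('20260401T000000Z', '20260501T000000Z')
# Sent as: REPORT /dav/<calendar-path>/ with header "Depth: 1"
```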
### Contacts
@@ -306,7 +354,7 @@ Der Desktop-Client (`clients/desktop/`) synchronisiert Dateien zwischen der Clou
- **Instant sync**: a filesystem watcher picks up local changes immediately (3 s debounce), no polling
- **Smart sync**: checksum tracking detects which side changed (server or local)
- **Conflict detection**: concurrent changes produce a conflict copy
- **File locking**: automatic check-out/check-in with heartbeat, auto-unlock when the file is closed
- **File locking**: lock on open, heartbeat every 60 s, manual unlock via right-click, auto-unlock after 15 min without a heartbeat
- **System tray**: minimizes to the tray instead of quitting, double-click opens the window
- **Start minimized**: optionally start directly in the tray (checkbox in settings)
- **Auto-login**: credentials and sync paths survive restarts/updates
@@ -346,6 +394,43 @@ Der Client merkt sich den Checksum jeder Datei beim letzten Sync. Beim naechsten
On the first sync (no stored checksum), the server always wins.
### File Locking
Files opened through the client are automatically locked on the server. Other users see "file locked by X" and cannot edit it.
| Scenario | What happens |
|----------|-------------|
| Open a .cloud file | Download + lock + heartbeat every 60 s |
| Done -> right-click "Unlock" | Lock released immediately |
| Right-click "No longer offline" | Lock released + back to .cloud |
| Quit the client without unlocking | No heartbeat -> lock expires after 15 min |
| Laptop lid closed / network gone | No heartbeat -> lock expires after 15 min |
| Admin in the web UI | Can manually release any lock at any time |
#### What the lock really does (and does not do)
Checking out is an **advisory lock**, not a physical file lock. In short: it blocks editing through every **Mini-Cloud path**, but not through Windows Explorer or other programs on the disk.
| Where does the lock apply? | Example |
|---------------------|----------|
| ✅ Web UI | Anna cannot open/edit the file in the browser - "being edited by Adam" |
| ✅ Desktop client | Double-click in the client view -> error message, the file does not open |
| ✅ Automatic upload | If Anna edited the file anyway, the client does not upload it while Adam holds the lock |
| ❌ Windows/Mac file manager | Anna can open the local file in the file manager (it is just an ordinary file on disk) |
| ❌ External programs | Word, Excel, Notepad etc. do not see the lock - any program can open the file |
**Everyday example:**
1. Adam checks out `Bericht.xlsx` (opens it in the client)
2. Anna has synced the folder too, so the file also exists locally on her machine
3. Anna tries to open it in the browser -> **blocked**
4. Anna tries to open it in the client -> **blocked**
5. Anna opens it directly in Explorer -> **it opens** (technically it is just an ordinary file)
6. Anna edits and saves locally -> the client notices the change, sees the foreign lock, and **holds back the upload**
7. Adam checks in: now the client compares - did Adam change the file too? If so, Anna's version becomes `Bericht (Konflikt Anna 2026-04-12 143022).xlsx` and Adam's version wins. Nobody loses data, but a human has to merge the two versions.
This is the same approach Nextcloud and Dropbox take: a **conflict copy as a safety net**, not a kernel-level file lock. The protection comes from the upload block - an accidental edit never reaches the actual owner.
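The conflict-copy name shown in step 7 can be derived like this (illustrative sketch; the client's exact format may differ):

```python
from datetime import datetime
from pathlib import PurePath

def conflict_name(filename: str, user: str, when: datetime) -> str:
    """Rename the losing version instead of overwriting it, e.g.
    'Bericht.xlsx' -> 'Bericht (Konflikt Anna 2026-04-12 143022).xlsx'."""
    p = PurePath(filename)
    stamp = when.strftime('%Y-%m-%d %H%M%S')
    return f'{p.stem} (Konflikt {user} {stamp}){p.suffix}'

print(conflict_name('Bericht.xlsx', 'Anna', datetime(2026, 4, 12, 14, 30, 22)))
# -> Bericht (Konflikt Anna 2026-04-12 143022).xlsx
```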
### Build
```bash
+76 -7
@@ -1,13 +1,28 @@
import os
import time
from pathlib import Path
from flask import Flask, redirect, send_from_directory
from flask import Flask, Response, redirect, send_from_directory
from flask_cors import CORS
from app.config import Config
from app.extensions import db, bcrypt, migrate
def _configure_timezone(tz_name: str) -> None:
"""Prozess-Zeitzone setzen, sodass datetime.now(), strftime %Z etc.
die konfigurierte TZ verwenden. Sichere no-op wenn tzdata fehlt."""
if not tz_name:
return
os.environ['TZ'] = tz_name
tzset = getattr(time, 'tzset', None)
if tzset:
try:
tzset()
except Exception:
pass
def _auto_migrate(db):
"""Add missing columns to existing tables by comparing model definitions
with actual database schema. This handles the case where new columns are
@@ -61,6 +76,9 @@ def _auto_migrate(db):
def create_app(config_class=Config):
# Set the time zone as early as possible - before any datetime.now() calls
_configure_timezone(getattr(config_class, 'TIMEZONE', None) or os.environ.get('TZ'))
# Check if static frontend build exists (Docker production mode)
static_dir = Path(__file__).resolve().parent.parent / 'static'
if static_dir.exists():
@@ -69,6 +87,9 @@ def create_app(config_class=Config):
app = Flask(__name__)
app.config.from_object(config_class)
# DAV clients are inconsistent about trailing slashes - so we disable
# strict checking app-wide. Affects all blueprints.
app.url_map.strict_slashes = False
# Ensure data directories exist
Path(app.config['UPLOAD_PATH']).mkdir(parents=True, exist_ok=True)
@@ -88,14 +109,51 @@ def create_app(config_class=Config):
from app.api import api_bp
app.register_blueprint(api_bp)
# Well-known URLs for CalDAV/CardDAV auto-discovery (iOS, DAVx5, etc.)
@app.route('/.well-known/caldav')
def wellknown_caldav():
from app.dav import dav_bp
app.register_blueprint(dav_bp)
# Well-known URLs for CalDAV/CardDAV auto-discovery (iOS, DAVx5, etc.).
# Some clients choke on a 301 redirect in response to PROPFIND, so we
# delegate directly to the DAV handlers internally instead of redirecting.
from flask import request
from app.dav.caldav import propfind as dav_propfind, options as dav_options
def _wellknown_dav():
if request.method == 'PROPFIND':
return dav_propfind(subpath='')
if request.method == 'OPTIONS':
return dav_options()
return redirect('/dav/', code=301)
@app.route('/.well-known/carddav')
def wellknown_carddav():
return redirect('/dav/', code=301)
app.add_url_rule(
'/.well-known/caldav', view_func=_wellknown_dav,
methods=['GET', 'HEAD', 'PROPFIND', 'OPTIONS'],
provide_automatic_options=False,
)
app.add_url_rule(
'/.well-known/carddav', view_func=_wellknown_dav,
endpoint='_wellknown_carddav',
methods=['GET', 'HEAD', 'PROPFIND', 'OPTIONS'],
provide_automatic_options=False,
)
# Root DAV discovery: DAVx5 and some other clients first try
# PROPFIND/OPTIONS on / (hostname only) before using /.well-known.
# We respond with DAV properties here as well.
def _root_dav():
if request.method == 'PROPFIND':
return dav_propfind(subpath='')
if request.method == 'OPTIONS':
return dav_options()
# GET/HEAD: the SPA index handles those elsewhere - this view only matches DAV methods
return Response('', 405)
app.add_url_rule(
'/', view_func=_root_dav,
endpoint='_root_dav',
methods=['PROPFIND', 'OPTIONS'],
provide_automatic_options=False,
)
# iCal export (public, no auth)
@app.route('/ical/<token>')
@@ -131,4 +189,15 @@ def create_app(config_class=Config):
from app.services.backup_scheduler import start_backup_scheduler
start_backup_scheduler(app)
# Check the NTP offset against the configured time server (not fatal).
ntp_server = app.config.get('NTP_SERVER') or ''
if ntp_server.strip():
import threading
from app.services.ntp_check import check_and_log
threading.Thread(
target=check_and_log,
args=(ntp_server.strip(), app.logger),
daemon=True,
).start()
return app
+1 -1
@@ -2,4 +2,4 @@ from flask import Blueprint
api_bp = Blueprint('api', __name__, url_prefix='/api')
from app.api import auth, users, files, calendar, contacts, email, office, passwords, backup, client_downloads # noqa: E402, F401
from app.api import auth, users, files, calendar, contacts, tasks, email, office, passwords, backup, client_downloads # noqa: E402, F401
+425 -20
@@ -1,14 +1,68 @@
import csv
import io
import re
import secrets
import uuid
from datetime import datetime, timezone
from flask import request, jsonify
from flask import request, jsonify, Response
from app.api import api_bp
from app.api.auth import token_required
from app.extensions import db
from app.extensions import db, bcrypt
from app.models.calendar import Calendar, CalendarEvent, CalendarShare
from app.models.user import User
from app.services.events import notify_calendar_change
def _calendar_recipients(cal: Calendar):
return [s.shared_with_id for s in CalendarShare.query.filter_by(calendar_id=cal.id).all()]
def _redact_if_private(event_dict: dict, is_owner: bool) -> dict:
"""For shared viewers, strip summary/description/location from private
events so only the time slot remains visible."""
if is_owner or not event_dict.get('is_private'):
return event_dict
d = dict(event_dict)
d['summary'] = 'Privat'
d['description'] = None
d['location'] = None
return d
def _redact_vevent(raw: str) -> str:
"""Strip SUMMARY/DESCRIPTION/LOCATION from a VEVENT block and set
CLASS:PRIVATE. Used for shared iCal exports and CalDAV responses."""
if not raw:
return raw
out_lines = []
has_class = False
for line in raw.split('\n'):
stripped = line.rstrip('\r')
upper = stripped.split(':', 1)[0].split(';', 1)[0].upper()
if upper == 'SUMMARY':
out_lines.append('SUMMARY:Privat')
elif upper in ('DESCRIPTION', 'LOCATION'):
continue
elif upper == 'CLASS':
has_class = True
out_lines.append('CLASS:PRIVATE')
else:
out_lines.append(stripped)
if not has_class:
# Inject CLASS right after UID if possible, else before END:VEVENT
for i, l in enumerate(out_lines):
if l.startswith('UID:'):
out_lines.insert(i + 1, 'CLASS:PRIVATE')
break
else:
for i, l in enumerate(out_lines):
if l.upper().startswith('END:VEVENT'):
out_lines.insert(i, 'CLASS:PRIVATE')
break
return '\r\n'.join(out_lines)
def _get_calendar_or_err(cal_id, user, need_write=False):
@@ -49,7 +103,14 @@ def list_calendars():
calendar_id=c.id, shared_with_id=user.id
).first()
d['permission'] = share.permission if share else 'read'
# Per-user color override: the owner's color is kept in 'owner_color'
# so the UI can show both, and 'color' reflects what this user picked.
d['owner_color'] = c.color
if share and share.color:
d['color'] = share.color
d['owner_name'] = c.owner.username
d['owner_full_name'] = c.owner.full_name
d['owner_display_name'] = c.owner.display_name
result.append(d)
return jsonify(result), 200
@@ -95,6 +156,33 @@ def update_calendar(cal_id):
return jsonify(cal.to_dict()), 200
@api_bp.route('/calendars/<int:cal_id>/my-color', methods=['PUT'])
@token_required
def set_my_calendar_color(cal_id):
"""Personal display color for a shared calendar. Doesn't affect the
owner's calendar color or any other user's view."""
user = request.current_user
cal = db.session.get(Calendar, cal_id)
if not cal:
return jsonify({'error': 'Nicht gefunden'}), 404
color = (request.get_json() or {}).get('color', '').strip()
if cal.owner_id == user.id:
# Owner -> update the calendar itself
if color:
cal.color = color
db.session.commit()
return jsonify({'color': cal.color}), 200
share = CalendarShare.query.filter_by(calendar_id=cal_id, shared_with_id=user.id).first()
if not share:
return jsonify({'error': 'Kein Zugriff'}), 403
share.color = color or None
db.session.commit()
return jsonify({'color': share.color or cal.color}), 200
@api_bp.route('/calendars/<int:cal_id>', methods=['DELETE'])
@token_required
def delete_calendar(cal_id):
@@ -103,8 +191,12 @@ def delete_calendar(cal_id):
if not cal or cal.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden oder keine Berechtigung'}), 404
recipients = _calendar_recipients(cal)
owner_id = cal.owner_id
cal_id = cal.id
db.session.delete(cal)
db.session.commit()
notify_calendar_change(owner_id, cal_id, 'deleted', shared_with=recipients)
return jsonify({'message': 'Kalender geloescht'}), 200
@@ -122,21 +214,183 @@ def list_events(cal_id):
end = request.args.get('end')
query = CalendarEvent.query.filter_by(calendar_id=cal_id)
# Recurring events must not be filtered by date range - the FullCalendar
# RRULE plugin expansion in the frontend needs the master event even when
# its dtstart lies before the visible range.
if start:
try:
start_dt = datetime.fromisoformat(start)
query = query.filter(CalendarEvent.dtend >= start_dt)
query = query.filter(db.or_(
CalendarEvent.recurrence_rule.isnot(None),
CalendarEvent.dtend >= start_dt,
))
except ValueError:
pass
if end:
try:
end_dt = datetime.fromisoformat(end)
query = query.filter(CalendarEvent.dtstart <= end_dt)
query = query.filter(db.or_(
CalendarEvent.recurrence_rule.isnot(None),
CalendarEvent.dtstart <= end_dt,
))
except ValueError:
pass
events = query.order_by(CalendarEvent.dtstart).all()
return jsonify([e.to_dict() for e in events]), 200
is_owner = (cal.owner_id == user.id)
return jsonify([_redact_if_private(e.to_dict(), is_owner) for e in events]), 200
@api_bp.route('/calendars/<int:cal_id>/export', methods=['GET'])
@token_required
def export_calendar(cal_id):
"""Export VEVENTs als .ics oder .csv."""
user = request.current_user
cal, err = _get_calendar_or_err(cal_id, user)
if err:
return err
fmt = (request.args.get('format') or 'ics').lower()
events = CalendarEvent.query.filter_by(calendar_id=cal_id).order_by(CalendarEvent.dtstart).all()
safe_name = re.sub(r'[^A-Za-z0-9._-]+', '_', cal.name or 'kalender') or 'kalender'
if fmt == 'ics':
lines = ['BEGIN:VCALENDAR', 'VERSION:2.0', 'PRODID:-//Mini-Cloud//DE', 'CALSCALE:GREGORIAN']
for e in events:
block = (e.ical_data or '').strip()
if not block:
block = _build_vevent(e.uid, e.summary or '', e.dtstart, e.dtend,
e.all_day, e.description or '', e.location or '',
e.recurrence_rule or '',
(e.exdates or '').split(',') if e.exdates else None)
# Make sure block contains BEGIN/END VEVENT
if 'BEGIN:VEVENT' not in block.upper():
continue
lines.append(block.strip())
lines.append('END:VCALENDAR')
body = '\r\n'.join(lines) + '\r\n'
return Response(
body, mimetype='text/calendar; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{safe_name}.ics"'},
)
if fmt == 'csv':
out = io.StringIO()
cols = ['summary', 'dtstart', 'dtend', 'all_day', 'location',
'description', 'recurrence_rule', 'uid']
w = csv.writer(out, delimiter=';', quoting=csv.QUOTE_ALL)
w.writerow(cols)
for e in events:
w.writerow([
e.summary or '',
e.dtstart.isoformat() if e.dtstart else '',
e.dtend.isoformat() if e.dtend else '',
'1' if e.all_day else '0',
e.location or '',
(e.description or '').replace('\r\n', ' ').replace('\n', ' '),
e.recurrence_rule or '',
e.uid or '',
])
return Response(
'\ufeff' + out.getvalue(), mimetype='text/csv; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{safe_name}.csv"'},
)
return jsonify({'error': 'Unbekanntes Format'}), 400
@api_bp.route('/calendars/<int:cal_id>/import', methods=['POST'])
@token_required
def import_calendar(cal_id):
"""Import .ics oder .csv -> Termine ins Kalender."""
from app.dav.caldav import _parse_vevent, _extract_vevent_block
user = request.current_user
cal, err = _get_calendar_or_err(cal_id, user, need_write=True)
if err:
return err
file = request.files.get('file')
if not file:
return jsonify({'error': 'Keine Datei'}), 400
raw = file.read()
name = (file.filename or '').lower()
try:
text = raw.decode('utf-8-sig')
except UnicodeDecodeError:
text = raw.decode('latin-1', errors='replace')
imported = 0
skipped = 0
def _save(parsed: dict, ical_block: str | None = None):
nonlocal imported, skipped
if not parsed.get('summary') or not parsed.get('dtstart'):
skipped += 1
return
uid = parsed.get('uid') or str(uuid.uuid4())
existing = CalendarEvent.query.filter_by(calendar_id=cal_id, uid=uid).first()
ev = existing or CalendarEvent(calendar_id=cal_id, uid=uid, ical_data='')
ev.summary = parsed.get('summary') or '(ohne Titel)'
ev.description = parsed.get('description')
ev.location = parsed.get('location')
ev.dtstart = parsed.get('dtstart')
ev.dtend = parsed.get('dtend')
ev.all_day = parsed.get('all_day', False)
ev.recurrence_rule = parsed.get('rrule')
ev.exdates = ','.join(parsed.get('exdates', [])) or None
ev.ical_data = (ical_block or '').strip() or _build_vevent(
uid, ev.summary, ev.dtstart, ev.dtend, ev.all_day,
ev.description or '', ev.location or '', ev.recurrence_rule or '',
(ev.exdates or '').split(',') if ev.exdates else None,
)
ev.updated_at = datetime.now(timezone.utc)
if not existing:
db.session.add(ev)
imported += 1
if name.endswith('.csv') or (b';' in raw[:200] and b'BEGIN:VCALENDAR' not in raw[:200]):
reader = csv.DictReader(io.StringIO(text), delimiter=';')
if not reader.fieldnames or len(reader.fieldnames) < 2:
reader = csv.DictReader(io.StringIO(text), delimiter=',')
for row in reader:
row = {k.strip().lower(): (v or '').strip() for k, v in row.items() if k}
try:
dtstart = datetime.fromisoformat(row.get('dtstart') or row.get('start') or '')
except (ValueError, TypeError):
skipped += 1
continue
try:
dtend = datetime.fromisoformat(row.get('dtend') or row.get('end') or '') if (row.get('dtend') or row.get('end')) else None
except ValueError:
dtend = None
parsed = {
'uid': row.get('uid'),
'summary': row.get('summary') or row.get('titel') or row.get('title'),
'description': row.get('description') or row.get('beschreibung'),
'location': row.get('location') or row.get('ort'),
'dtstart': dtstart,
'dtend': dtend,
'all_day': (row.get('all_day') or '').lower() in ('1', 'true', 'ja', 'yes'),
'rrule': row.get('recurrence_rule') or row.get('rrule'),
'exdates': [],
}
_save(parsed)
else:
# iCal: a calendar file with any number of VEVENTs
blocks = re.findall(r'BEGIN:VEVENT.*?END:VEVENT', text, flags=re.DOTALL | re.IGNORECASE)
if not blocks:
return jsonify({'error': 'Keine VEVENT-Daten gefunden'}), 400
for block in blocks:
try:
parsed = _parse_vevent(block)
except Exception:
parsed = None
if not parsed:
skipped += 1
continue
_save(parsed, ical_block=block)
db.session.commit()
if imported:
notify_calendar_change(cal.owner_id, cal.id, 'event',
shared_with=_calendar_recipients(cal))
return jsonify({'imported': imported, 'skipped': skipped}), 200
@api_bp.route('/calendars/<int:cal_id>/events', methods=['POST'])
@@ -166,24 +420,30 @@ def create_event(cal_id):
return jsonify({'error': 'Ungueltiges Datumsformat'}), 400
event_uid = str(uuid.uuid4())
description = (data.get('description') or '').strip()
location = (data.get('location') or '').strip()
rrule = (data.get('recurrence_rule') or '').strip()
# Build simple iCal data
ical_data = _build_ical(event_uid, summary, dtstart_dt, dtend_dt, all_day,
data.get('description', ''), data.get('location', ''),
data.get('recurrence_rule', ''))
description, location, rrule, None)
event = CalendarEvent(
calendar_id=cal_id,
uid=event_uid,
ical_data=ical_data,
summary=summary,
description=description or None,
location=location or None,
dtstart=dtstart_dt,
dtend=dtend_dt,
all_day=all_day,
recurrence_rule=data.get('recurrence_rule'),
recurrence_rule=rrule or None,
is_private=bool(data.get('is_private', False)),
)
db.session.add(event)
db.session.commit()
notify_calendar_change(cal.owner_id, cal.id, 'event',
shared_with=_calendar_recipients(cal))
return jsonify(event.to_dict()), 201
@@ -202,14 +462,20 @@ def update_event(event_id):
data = request.get_json()
if 'summary' in data:
event.summary = data['summary'].strip()
if 'description' in data:
event.description = (data['description'] or '').strip() or None
if 'location' in data:
event.location = (data['location'] or '').strip() or None
if 'dtstart' in data:
event.dtstart = datetime.fromisoformat(data['dtstart'])
if 'dtend' in data:
event.dtend = datetime.fromisoformat(data['dtend'])
event.dtend = datetime.fromisoformat(data['dtend']) if data['dtend'] else None
if 'all_day' in data:
event.all_day = data['all_day']
if 'recurrence_rule' in data:
event.recurrence_rule = data['recurrence_rule']
event.recurrence_rule = (data['recurrence_rule'] or '').strip() or None
if 'is_private' in data:
event.is_private = bool(data['is_private'])
if 'calendar_id' in data:
new_cal, cerr = _get_calendar_or_err(data['calendar_id'], user, need_write=True)
if cerr:
@@ -218,14 +484,90 @@ def update_event(event_id):
event.ical_data = _build_ical(
event.uid, event.summary, event.dtstart, event.dtend,
event.all_day, data.get('description', ''), data.get('location', ''),
event.recurrence_rule or ''
event.all_day, event.description or '', event.location or '',
event.recurrence_rule or '',
event.exdates.split(',') if event.exdates else None,
)
event.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_calendar_change(cal.owner_id, cal.id, 'event',
shared_with=_calendar_recipients(cal))
return jsonify(event.to_dict()), 200
@api_bp.route('/events/<int:event_id>/exception', methods=['POST'])
@token_required
def add_event_exception(event_id):
"""Exclude a single occurrence of a recurring event ("nur dieser Termin").
Optionally creates a standalone replacement event for that date."""
user = request.current_user
event = db.session.get(CalendarEvent, event_id)
if not event:
return jsonify({'error': 'Event nicht gefunden'}), 404
cal, err = _get_calendar_or_err(event.calendar_id, user, need_write=True)
if err:
return err
if not event.recurrence_rule:
return jsonify({'error': 'Kein Serientermin'}), 400
data = request.get_json()
occurrence_date = data.get('occurrence_date') # ISO date or datetime
if not occurrence_date:
return jsonify({'error': 'occurrence_date erforderlich'}), 400
# Normalize to YYYY-MM-DD for storage key
try:
parsed = datetime.fromisoformat(occurrence_date.replace('Z', '+00:00'))
key = parsed.strftime('%Y-%m-%d' if event.all_day else '%Y-%m-%dT%H:%M:%S')
except ValueError:
key = occurrence_date
existing = (event.exdates or '').split(',') if event.exdates else []
if key not in existing:
existing.append(key)
event.exdates = ','.join(filter(None, existing))
# Optional: create replacement single event
replacement = None
if data.get('replacement'):
r = data['replacement']
rep_uid = str(uuid.uuid4())
rep_start = datetime.fromisoformat(r['dtstart'])
rep_end = datetime.fromisoformat(r['dtend']) if r.get('dtend') else rep_start
replacement = CalendarEvent(
calendar_id=event.calendar_id,
uid=rep_uid,
summary=r.get('summary', event.summary),
description=r.get('description', event.description),
location=r.get('location', event.location),
dtstart=rep_start,
dtend=rep_end,
all_day=r.get('all_day', event.all_day),
recurrence_rule=None,
ical_data='',
)
replacement.ical_data = _build_ical(
rep_uid, replacement.summary, rep_start, rep_end,
replacement.all_day, replacement.description or '',
replacement.location or '', '',
)
db.session.add(replacement)
event.ical_data = _build_ical(
event.uid, event.summary, event.dtstart, event.dtend,
event.all_day, event.description or '', event.location or '',
event.recurrence_rule or '',
event.exdates.split(',') if event.exdates else None,
)
event.updated_at = datetime.now(timezone.utc)
db.session.commit()
return jsonify({
'event': event.to_dict(),
'replacement': replacement.to_dict() if replacement else None,
}), 200
@api_bp.route('/events/<int:event_id>', methods=['DELETE'])
@token_required
def delete_event(event_id):
@@ -238,8 +580,12 @@ def delete_event(event_id):
if err:
return err
cal = db.session.get(Calendar, event.calendar_id)
db.session.delete(event)
db.session.commit()
if cal:
notify_calendar_change(cal.owner_id, cal.id, 'event',
shared_with=_calendar_recipients(cal))
return jsonify({'message': 'Event geloescht'}), 200
@@ -287,6 +633,9 @@ def share_calendar(cal_id):
except Exception:
pass
notify_calendar_change(cal.owner_id, cal.id, 'share',
shared_with=[target.id, *_calendar_recipients(cal)])
return jsonify({'message': f'Kalender mit {username} geteilt'}), 200
@@ -319,8 +668,11 @@ def remove_calendar_share(cal_id, share_id):
if not share or share.calendar_id != cal_id:
return jsonify({'error': 'Freigabe nicht gefunden'}), 404
target_id = share.shared_with_id
db.session.delete(share)
db.session.commit()
notify_calendar_change(cal.owner_id, cal.id, 'share',
shared_with=[target_id, *_calendar_recipients(cal)])
return jsonify({'message': 'Freigabe entfernt'}), 200
@@ -334,19 +686,58 @@ def generate_ical_link(cal_id):
if not cal or cal.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
cal.ical_token = secrets.token_urlsafe(32)
data = request.get_json(silent=True) or {}
password = (data.get('password') or '').strip()
if not cal.ical_token:
cal.ical_token = secrets.token_urlsafe(32)
if password:
cal.ical_password_hash = bcrypt.generate_password_hash(password).decode('utf-8')
elif data.get('clear_password'):
cal.ical_password_hash = None
db.session.commit()
return jsonify({
'ical_url': f'/ical/{cal.ical_token}',
'token': cal.ical_token,
'has_password': bool(cal.ical_password_hash),
}), 200
@api_bp.route('/calendars/<int:cal_id>/ical-link', methods=['DELETE'])
@token_required
def revoke_ical_link(cal_id):
user = request.current_user
cal = db.session.get(Calendar, cal_id)
if not cal or cal.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
cal.ical_token = None
cal.ical_password_hash = None
db.session.commit()
return jsonify({'message': 'Link zurueckgezogen'}), 200
def _basic_auth_challenge():
return Response(
'Kalender erfordert Passwort', 401,
{'WWW-Authenticate': 'Basic realm="Mini-Cloud Kalender"'}
)
def ical_export(token):
cal = Calendar.query.filter_by(ical_token=token).first()
if not cal:
return jsonify({'error': 'Nicht gefunden'}), 404
# Password protection via HTTP Basic (compatible with DAVx5, Apple Cal,
# Thunderbird, curl, etc.). Username is ignored.
if cal.ical_password_hash:
auth = request.authorization
if not auth or not auth.password:
return _basic_auth_challenge()
if not bcrypt.check_password_hash(cal.ical_password_hash, auth.password):
return _basic_auth_challenge()
events = CalendarEvent.query.filter_by(calendar_id=cal.id).all()
lines = [
@@ -357,13 +748,14 @@ def ical_export(token):
]
for e in events:
if e.ical_data:
# Extract VEVENT from stored ical_data
lines.append(e.ical_data)
block = _redact_vevent(e.ical_data) if e.is_private else e.ical_data
lines.append(block)
elif e.is_private:
lines.append(_build_vevent(e.uid, 'Privat', e.dtstart, e.dtend, e.all_day))
else:
lines.append(_build_vevent(e.uid, e.summary, e.dtstart, e.dtend, e.all_day))
lines.append('END:VCALENDAR')
from flask import Response
return Response(
'\r\n'.join(lines),
mimetype='text/calendar',
@@ -379,7 +771,9 @@ def _format_dt(dt, all_day=False):
return dt.strftime('%Y%m%dT%H%M%SZ')
def _build_vevent(uid, summary, dtstart, dtend, all_day, description='', location='', rrule=''):
def _build_vevent(uid, summary, dtstart, dtend, all_day, description='', location='', rrule='', exdates=None):
if not dtend:
dtend = dtstart
lines = [
'BEGIN:VEVENT',
f'UID:{uid}',
@@ -397,10 +791,21 @@ def _build_vevent(uid, summary, dtstart, dtend, all_day, description='', locatio
lines.append(f'LOCATION:{location}')
if rrule:
lines.append(f'RRULE:{rrule}')
if exdates:
for ex in exdates:
if all_day:
lines.append(f'EXDATE;VALUE=DATE:{ex.replace("-", "")}')
else:
# Convert ISO datetime (with or without TZ) into YYYYMMDDTHHMMSSZ
try:
dt = datetime.fromisoformat(ex.replace('Z', '+00:00'))
lines.append(f'EXDATE:{dt.strftime("%Y%m%dT%H%M%SZ")}')
except ValueError:
pass
lines.append(f'DTSTAMP:{datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")}')
lines.append('END:VEVENT')
return '\r\n'.join(lines)
def _build_ical(uid, summary, dtstart, dtend, all_day, description='', location='', rrule=''):
return _build_vevent(uid, summary, dtstart, dtend, all_day, description, location, rrule)
def _build_ical(uid, summary, dtstart, dtend, all_day, description='', location='', rrule='', exdates=None):
return _build_vevent(uid, summary, dtstart, dtend, all_day, description, location, rrule, exdates)
+488 -106
@@ -1,13 +1,35 @@
import csv
import io
import json
import re
import uuid
import zipfile
from datetime import datetime, timezone
from flask import request, jsonify
from flask import request, jsonify, Response
from app.api import api_bp
from app.api.auth import token_required
from app.extensions import db
from app.models.contact import AddressBook, Contact, AddressBookShare
from app.models.user import User
from app.services.events import broadcaster
def _notify_addressbook(owner_id: int, book_id: int, change: str, shared_with=()):
"""SSE event for a vcard or share change. Re-uses the calendar event
infrastructure with a separate 'addressbook' type."""
recipients = [owner_id, *shared_with]
broadcaster.publish(recipients, {
'type': 'addressbook',
'change': change,
'address_book_id': book_id,
})
def _book_recipients(book: AddressBook):
return [s.shared_with_id for s in
AddressBookShare.query.filter_by(address_book_id=book.id).all()]
def _get_addressbook_or_err(book_id, user, need_write=False):
@@ -26,7 +48,224 @@ def _get_addressbook_or_err(book_id, user, need_write=False):
return book, None
# --- Address Books ---
# ---------------------------------------------------------------------------
# vCard helpers
# ---------------------------------------------------------------------------
def _escape(s):
if s is None:
return ''
return str(s).replace('\\', '\\\\').replace(',', '\\,').replace(';', '\\;').replace('\n', '\\n')
def _unescape(s):
if not s:
return ''
return s.replace('\\n', '\n').replace('\\;', ';').replace('\\,', ',').replace('\\\\', '\\')
def _apply_fields_to_contact(contact: Contact, data: dict):
"""Copy fields from a JSON request into a Contact model instance."""
for field in ('prefix', 'first_name', 'middle_name', 'last_name', 'suffix',
'nickname', 'organization', 'department', 'job_title',
'notes', 'photo', 'birthday', 'anniversary'):
if field in data:
value = data[field]
setattr(contact, field, (value.strip() if isinstance(value, str) else value) or None)
if 'display_name' in data:
contact.display_name = (data['display_name'] or '').strip() or None
for jsonfield in ('emails', 'phones', 'addresses', 'websites', 'impp', 'categories'):
if jsonfield in data:
value = data[jsonfield] or []
setattr(contact, jsonfield, json.dumps(value) if value else None)
# Denormalised primary fields for list display
emails = data.get('emails') if 'emails' in data else json.loads(contact.emails) if contact.emails else []
phones = data.get('phones') if 'phones' in data else json.loads(contact.phones) if contact.phones else []
contact.primary_email = (emails[0]['value'] if emails else None)
contact.primary_phone = (phones[0]['value'] if phones else None)
# Legacy columns
contact.email = contact.primary_email
contact.phone = contact.primary_phone
# Compose display name if not provided
if not contact.display_name:
parts = [contact.prefix, contact.first_name, contact.middle_name,
contact.last_name, contact.suffix]
contact.display_name = ' '.join(p for p in parts if p) or contact.organization or None
def _build_vcard(contact: Contact) -> str:
"""Render a Contact into vCard 3.0 text."""
lines = ['BEGIN:VCARD', 'VERSION:3.0', f'UID:{contact.uid}']
if contact.display_name:
lines.append(f'FN:{_escape(contact.display_name)}')
# N: lastname;firstname;middle;prefix;suffix
n_parts = [_escape(contact.last_name), _escape(contact.first_name),
_escape(contact.middle_name), _escape(contact.prefix),
_escape(contact.suffix)]
if any(n_parts):
lines.append('N:' + ';'.join(n_parts))
if contact.nickname:
lines.append(f'NICKNAME:{_escape(contact.nickname)}')
if contact.organization or contact.department:
lines.append(f'ORG:{_escape(contact.organization or "")};{_escape(contact.department or "")}')
if contact.job_title:
lines.append(f'TITLE:{_escape(contact.job_title)}')
for e in (json.loads(contact.emails) if contact.emails else []):
typ = (e.get('type') or 'home').upper()
lines.append(f'EMAIL;TYPE={typ}:{_escape(e.get("value", ""))}')
for p in (json.loads(contact.phones) if contact.phones else []):
typ = (p.get('type') or 'cell').upper()
lines.append(f'TEL;TYPE={typ}:{_escape(p.get("value", ""))}')
for a in (json.loads(contact.addresses) if contact.addresses else []):
typ = (a.get('type') or 'home').upper()
# ADR: po_box;extended;street;city;region;postal_code;country
parts = [_escape(a.get('po_box', '')), '', _escape(a.get('street', '')),
_escape(a.get('city', '')), _escape(a.get('region', '')),
_escape(a.get('postal_code', '')), _escape(a.get('country', ''))]
lines.append(f'ADR;TYPE={typ}:' + ';'.join(parts))
for w in (json.loads(contact.websites) if contact.websites else []):
typ = (w.get('type') or '').upper()
tag = f'URL;TYPE={typ}' if typ else 'URL'
lines.append(f'{tag}:{_escape(w.get("value", ""))}')
for i in (json.loads(contact.impp) if contact.impp else []):
proto = (i.get('protocol') or 'xmpp').lower()
lines.append(f'IMPP:{proto}:{_escape(i.get("value", ""))}')
if contact.birthday:
lines.append(f'BDAY:{contact.birthday}')
if contact.anniversary:
lines.append(f'ANNIVERSARY:{contact.anniversary}')
cats = json.loads(contact.categories) if contact.categories else []
if cats:
lines.append('CATEGORIES:' + ','.join(_escape(c) for c in cats))
if contact.notes:
lines.append(f'NOTE:{_escape(contact.notes)}')
if contact.photo:
# Photo can be a data: URL or http URL. In vCard 3.0 we use PHOTO;VALUE=uri.
lines.append(f'PHOTO;VALUE=uri:{contact.photo}')
lines.append(f'REV:{datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")}')
lines.append('END:VCARD')
return '\r\n'.join(lines)
def _unfold_vcard(raw: str):
"""Undo RFC 6350 line folding (continuation lines start with space/tab)."""
lines = []
for line in raw.replace('\r\n', '\n').split('\n'):
if line.startswith((' ', '\t')) and lines:
lines[-1] += line[1:]
else:
lines.append(line)
return lines
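Line folding splits long physical lines and marks each continuation with one leading space or tab; unfolding strips exactly that one character and concatenates. The same rule as `_unfold_vcard`, as a self-contained sketch:

```python
def unfold(raw: str) -> list[str]:
    lines: list[str] = []
    for line in raw.replace('\r\n', '\n').split('\n'):
        if line.startswith((' ', '\t')) and lines:
            # Continuation: drop the single fold marker, append to previous line.
            lines[-1] += line[1:]
        else:
            lines.append(line)
    return lines

folded = 'NOTE:This is a long\r\n  note value\r\nFN:Jane'
# Only the FIRST leading space is the fold marker; the second belongs to the value.
assert unfold(folded) == ['NOTE:This is a long note value', 'FN:Jane']
```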
def parse_vcard(raw: str) -> dict:
"""Parse a VCARD text into a dict of fields usable by _apply_fields_to_contact.
Returns dict with keys matching contact fields + 'uid'."""
result = {
'emails': [], 'phones': [], 'addresses': [],
'websites': [], 'impp': [], 'categories': [],
}
for line in _unfold_vcard(raw):
if ':' not in line:
continue
key, _, value = line.partition(':')
parts = key.split(';')
name = parts[0].upper()
params = {}
for p in parts[1:]:
if '=' in p:
k, v = p.split('=', 1)
params[k.upper()] = v.upper()
if name == 'UID':
result['uid'] = value.strip()
elif name == 'FN':
result['display_name'] = _unescape(value)
elif name == 'N':
fields = value.split(';')
if len(fields) >= 5:
result['last_name'] = _unescape(fields[0]) or None
result['first_name'] = _unescape(fields[1]) or None
result['middle_name'] = _unescape(fields[2]) or None
result['prefix'] = _unescape(fields[3]) or None
result['suffix'] = _unescape(fields[4]) or None
elif name == 'NICKNAME':
result['nickname'] = _unescape(value)
elif name == 'ORG':
fields = value.split(';')
result['organization'] = _unescape(fields[0]) if fields else None
if len(fields) > 1:
result['department'] = _unescape(fields[1]) or None
elif name == 'TITLE':
result['job_title'] = _unescape(value)
elif name == 'EMAIL':
result['emails'].append({
'type': (params.get('TYPE') or 'home').lower(),
'value': _unescape(value),
})
elif name == 'TEL':
result['phones'].append({
'type': (params.get('TYPE') or 'cell').lower(),
'value': _unescape(value),
})
elif name == 'ADR':
fields = value.split(';')
pad = fields + [''] * (7 - len(fields))  # pad to the 7 ADR components
result['addresses'].append({
'type': (params.get('TYPE') or 'home').lower(),
'po_box': _unescape(pad[0]),
'street': _unescape(pad[2]),
'city': _unescape(pad[3]),
'region': _unescape(pad[4]),
'postal_code': _unescape(pad[5]),
'country': _unescape(pad[6]),
})
elif name == 'URL':
result['websites'].append({
'type': (params.get('TYPE') or '').lower(),
'value': _unescape(value),
})
elif name == 'IMPP':
proto, _, addr = value.partition(':')
result['impp'].append({'protocol': proto.lower(), 'value': _unescape(addr or value)})
elif name == 'CATEGORIES':
result['categories'] = [_unescape(c).strip() for c in value.split(',') if c.strip()]
elif name == 'BDAY':
result['birthday'] = _normalise_date(value)
elif name == 'ANNIVERSARY':
result['anniversary'] = _normalise_date(value)
elif name == 'NOTE':
result['notes'] = _unescape(value)
elif name == 'PHOTO':
result['photo'] = value.strip() or None
return result
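The property split at the top of the parse loop can be exercised on its own: partition the line on the first colon, split the key on semicolons into name and parameters, split each parameter on the first equals sign. A standalone re-implementation of that step (the helper name `split_property` is illustrative, not from the app):

```python
def split_property(line: str):
    # 'EMAIL;TYPE=WORK:jane@example.org' -> ('EMAIL', {'TYPE': 'WORK'}, 'jane@example.org')
    key, _, value = line.partition(':')
    parts = key.split(';')
    params = {}
    for p in parts[1:]:
        if '=' in p:
            k, v = p.split('=', 1)
            params[k.upper()] = v.upper()
    return parts[0].upper(), params, value

name, params, value = split_property('email;type=work:jane@example.org')
assert name == 'EMAIL'
assert params == {'TYPE': 'WORK'}
assert value == 'jane@example.org'
```

Partitioning on the first colon keeps values that themselves contain colons (URLs, IMPP addresses) intact.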
def _normalise_date(s: str):
s = s.strip()
m = re.match(r'^(\d{4})-?(\d{2})-?(\d{2})$', s[:10])
if m:
return f'{m.group(1)}-{m.group(2)}-{m.group(3)}'
return None
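`_normalise_date` accepts both the basic (`YYYYMMDD`) and extended (`YYYY-MM-DD`) date forms and always emits the extended form, returning None for anything else. A quick standalone check of the same regex:

```python
import re

def normalise_date(s: str):
    s = s.strip()
    # Optional hyphens match both 19850412 and 1985-04-12. Only the first
    # 10 chars are considered, so an extended-form date with a trailing
    # time part still matches.
    m = re.match(r'^(\d{4})-?(\d{2})-?(\d{2})$', s[:10])
    if m:
        return f'{m.group(1)}-{m.group(2)}-{m.group(3)}'
    return None

assert normalise_date('19850412') == '1985-04-12'
assert normalise_date('1985-04-12') == '1985-04-12'
assert normalise_date('--0412') is None  # vCard "no year" form is not supported
```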
# ---------------------------------------------------------------------------
# Address books
# ---------------------------------------------------------------------------
@api_bp.route('/addressbooks', methods=['GET'])
@token_required
@@ -49,7 +288,12 @@ def list_addressbooks():
address_book_id=b.id, shared_with_id=user.id
).first()
d['permission'] = share.permission if share else 'read'
d['owner_color'] = d.get('color')
if share and share.color:
d['color'] = share.color
d['owner_name'] = b.owner.username
d['owner_full_name'] = b.owner.full_name
d['owner_display_name'] = b.owner.display_name
d['contact_count'] = b.contacts.count()
result.append(d)
@@ -61,13 +305,19 @@ def list_addressbooks():
def create_addressbook():
user = request.current_user
data = request.get_json()
name = data.get('name', '').strip()
name = (data.get('name') or '').strip()
if not name:
return jsonify({'error': 'Name erforderlich'}), 400
book = AddressBook(owner_id=user.id, name=name, description=data.get('description', ''))
book = AddressBook(
owner_id=user.id,
name=name,
color=data.get('color', '#3788d8'),
description=data.get('description') or None,
)
db.session.add(book)
db.session.commit()
_notify_addressbook(user.id, book.id, 'created')
return jsonify(book.to_dict()), 201
@@ -77,31 +327,66 @@ def update_addressbook(book_id):
user = request.current_user
book = db.session.get(AddressBook, book_id)
if not book or book.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
return jsonify({'error': 'Nicht gefunden oder keine Berechtigung'}), 404
data = request.get_json()
if 'name' in data:
book.name = data['name'].strip()
if 'description' in data:
book.description = data['description']
book.description = data['description'] or None
if 'color' in data:
book.color = data['color']
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'updated',
shared_with=_book_recipients(book))
return jsonify(book.to_dict()), 200
@api_bp.route('/addressbooks/<int:book_id>/my-color', methods=['PUT'])
@token_required
def set_my_addressbook_color(book_id):
user = request.current_user
book = db.session.get(AddressBook, book_id)
if not book:
return jsonify({'error': 'Nicht gefunden'}), 404
color = ((request.get_json() or {}).get('color') or '').strip()
if book.owner_id == user.id:
if color:
book.color = color
db.session.commit()
return jsonify({'color': book.color}), 200
share = AddressBookShare.query.filter_by(
address_book_id=book_id, shared_with_id=user.id
).first()
if not share:
return jsonify({'error': 'Kein Zugriff'}), 403
share.color = color or None
db.session.commit()
return jsonify({'color': share.color or book.color}), 200
@api_bp.route('/addressbooks/<int:book_id>', methods=['DELETE'])
@token_required
def delete_addressbook(book_id):
user = request.current_user
book = db.session.get(AddressBook, book_id)
if not book or book.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
return jsonify({'error': 'Nicht gefunden oder keine Berechtigung'}), 404
recipients = _book_recipients(book)
owner_id = book.owner_id
bid = book.id
db.session.delete(book)
db.session.commit()
_notify_addressbook(owner_id, bid, 'deleted', shared_with=recipients)
return jsonify({'message': 'Adressbuch geloescht'}), 200
# --- Contacts ---
# ---------------------------------------------------------------------------
# Contacts
# ---------------------------------------------------------------------------
@api_bp.route('/addressbooks/<int:book_id>/contacts', methods=['GET'])
@token_required
@@ -111,14 +396,174 @@ def list_contacts(book_id):
if err:
return err
search = request.args.get('search', '').strip()
query = Contact.query.filter_by(address_book_id=book_id)
search = (request.args.get('q') or '').strip()
q = Contact.query.filter_by(address_book_id=book_id)
if search:
query = query.filter(Contact.display_name.ilike(f'%{search}%'))
contacts = query.order_by(Contact.display_name).all()
like = f'%{search}%'
q = q.filter(
(Contact.display_name.ilike(like)) |
(Contact.primary_email.ilike(like)) |
(Contact.organization.ilike(like))
)
contacts = q.order_by(Contact.display_name).all()
return jsonify([c.to_dict() for c in contacts]), 200
@api_bp.route('/addressbooks/<int:book_id>/export', methods=['GET'])
@token_required
def export_addressbook(book_id):
"""Export contacts as a single .vcf, a .zip with one .vcf per contact, or .csv."""
user = request.current_user
book, err = _get_addressbook_or_err(book_id, user)
if err:
return err
fmt = (request.args.get('format') or 'vcf').lower()
contacts = Contact.query.filter_by(address_book_id=book_id).order_by(Contact.display_name).all()
safe_name = re.sub(r'[^A-Za-z0-9._-]+', '_', book.name or 'kontakte') or 'kontakte'
if fmt == 'vcf':
body = '\r\n'.join((c.vcard_data or _build_vcard(c)).strip() for c in contacts) + '\r\n'
return Response(
body, mimetype='text/vcard; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{safe_name}.vcf"'},
)
if fmt == 'vcf-zip':
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
seen = {}
for c in contacts:
base = re.sub(r'[^A-Za-z0-9._-]+', '_', c.display_name or c.uid) or c.uid
seen[base] = seen.get(base, 0) + 1
fname = f"{base}.vcf" if seen[base] == 1 else f"{base}_{seen[base]}.vcf"
zf.writestr(fname, (c.vcard_data or _build_vcard(c)).strip() + '\r\n')
buf.seek(0)
return Response(
buf.read(), mimetype='application/zip',
headers={'Content-Disposition': f'attachment; filename="{safe_name}.zip"'},
)
if fmt == 'csv':
out = io.StringIO()
cols = ['display_name', 'prefix', 'first_name', 'middle_name', 'last_name', 'suffix',
'nickname', 'organization', 'department', 'job_title',
'primary_email', 'primary_phone', 'birthday', 'anniversary',
'emails', 'phones', 'addresses', 'websites', 'categories', 'notes']
w = csv.writer(out, delimiter=';', quoting=csv.QUOTE_ALL)
w.writerow(cols)
for c in contacts:
d = c.to_dict()
row = []
for col in cols:
v = d.get(col, '')
if isinstance(v, list):
if v and isinstance(v[0], dict):
v = '; '.join(
(x.get('value') or x.get('street') or '') +
(f" ({x.get('type')})" if x.get('type') else '')
for x in v if isinstance(x, dict)
)
else:
v = ', '.join(str(x) for x in v)
row.append('' if v is None else str(v))
w.writerow(row)
return Response(
'\ufeff' + out.getvalue(), mimetype='text/csv; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{safe_name}.csv"'},
)
return jsonify({'error': 'Unbekanntes Format'}), 400
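The vcf-zip branch keeps archive entry names unique by counting occurrences of each sanitised base name: the first occurrence keeps the plain name, repeats get `_2`, `_3`, and so on. The naming scheme in isolation (the helper name `unique_names` is illustrative):

```python
import re

def unique_names(display_names):
    seen: dict[str, int] = {}
    out = []
    for name in display_names:
        # Collapse anything outside [A-Za-z0-9._-] into a single underscore.
        base = re.sub(r'[^A-Za-z0-9._-]+', '_', name) or 'contact'
        seen[base] = seen.get(base, 0) + 1
        out.append(f'{base}.vcf' if seen[base] == 1 else f'{base}_{seen[base]}.vcf')
    return out

assert unique_names(['Anna Meier', 'Anna Meier', 'Björn']) == \
    ['Anna_Meier.vcf', 'Anna_Meier_2.vcf', 'Bj_rn.vcf']
```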
@api_bp.route('/addressbooks/<int:book_id>/import', methods=['POST'])
@token_required
def import_addressbook(book_id):
"""Import vCard (.vcf, single oder mehrere im File) oder CSV."""
user = request.current_user
book, err = _get_addressbook_or_err(book_id, user, need_write=True)
if err:
return err
file = request.files.get('file')
if not file:
return jsonify({'error': 'Keine Datei'}), 400
raw = file.read()
name = (file.filename or '').lower()
try:
text = raw.decode('utf-8-sig')
except UnicodeDecodeError:
text = raw.decode('latin-1', errors='replace')
imported = 0
skipped = 0
def _add_from_parsed(parsed: dict, raw_text: str | None = None) -> bool:
nonlocal imported, skipped
if not parsed.get('display_name') and not parsed.get('first_name') \
and not parsed.get('last_name') and not parsed.get('organization'):
skipped += 1
return False
uid = parsed.get('uid') or str(uuid.uuid4())
existing = Contact.query.filter_by(address_book_id=book_id, uid=uid).first()
contact = existing or Contact(address_book_id=book_id, uid=uid, vcard_data='')
_apply_fields_to_contact(contact, parsed)
contact.vcard_data = (raw_text or '').strip() or _build_vcard(contact)
contact.updated_at = datetime.now(timezone.utc)
if not existing:
db.session.add(contact)
imported += 1
return True
if name.endswith('.csv') or ((b',' in raw[:200] or b';' in raw[:200]) and b'BEGIN:VCARD' not in raw[:200]):
# CSV import - sniff comma- as well as semicolon-delimited files
reader = csv.DictReader(io.StringIO(text), delimiter=';')
if not reader.fieldnames or len(reader.fieldnames) < 2:
# try comma
reader = csv.DictReader(io.StringIO(text), delimiter=',')
for row in reader:
row = {k.strip().lower(): (v or '').strip() for k, v in row.items() if k}
parsed = {
'display_name': row.get('display_name') or row.get('name')
or row.get('vollname') or row.get('full name'),
'first_name': row.get('first_name') or row.get('vorname'),
'last_name': row.get('last_name') or row.get('nachname'),
'middle_name': row.get('middle_name'),
'prefix': row.get('prefix') or row.get('anrede'),
'suffix': row.get('suffix'),
'nickname': row.get('nickname') or row.get('spitzname'),
'organization': row.get('organization') or row.get('firma') or row.get('company'),
'department': row.get('department') or row.get('abteilung'),
'job_title': row.get('job_title') or row.get('position') or row.get('title'),
'birthday': row.get('birthday') or row.get('geburtstag'),
'notes': row.get('notes') or row.get('notizen'),
'emails': [], 'phones': [], 'addresses': [], 'websites': [], 'categories': [],
}
email = row.get('primary_email') or row.get('email') or row.get('e-mail')
if email:
parsed['emails'].append({'type': 'home', 'value': email})
phone = row.get('primary_phone') or row.get('phone') or row.get('telefon') or row.get('mobil')
if phone:
parsed['phones'].append({'type': 'cell', 'value': phone})
cats = row.get('categories') or row.get('kategorien')
if cats:
parsed['categories'] = [c.strip() for c in cats.split(',') if c.strip()]
_add_from_parsed(parsed)
else:
# vCard - one or more cards in the file
parts = re.findall(r'BEGIN:VCARD.*?END:VCARD', text, flags=re.DOTALL | re.IGNORECASE)
if not parts:
return jsonify({'error': 'Keine VCARD-Daten gefunden'}), 400
for vcf in parts:
try:
parsed = parse_vcard(vcf)
except Exception:
skipped += 1
continue
_add_from_parsed(parsed, raw_text=vcf)
db.session.commit()
if imported:
_notify_addressbook(book.owner_id, book.id, 'contact',
shared_with=_book_recipients(book))
return jsonify({'imported': imported, 'skipped': skipped}), 200
@api_bp.route('/addressbooks/<int:book_id>/contacts', methods=['POST'])
@token_required
def create_contact(book_id):
@@ -127,29 +572,16 @@ def create_contact(book_id):
if err:
return err
data = request.get_json()
display_name = data.get('display_name', '').strip()
if not display_name:
return jsonify({'error': 'Name erforderlich'}), 400
contact_uid = str(uuid.uuid4())
email = data.get('email', '')
phone = data.get('phone', '')
org = data.get('organization', '')
notes = data.get('notes', '')
vcard = _build_vcard(contact_uid, display_name, email, phone, org, notes)
contact = Contact(
address_book_id=book_id,
uid=contact_uid,
vcard_data=vcard,
display_name=display_name,
email=email or None,
phone=phone or None,
)
data = request.get_json() or {}
contact = Contact(address_book_id=book_id, uid=str(uuid.uuid4()), vcard_data='')
_apply_fields_to_contact(contact, data)
if not contact.display_name:
return jsonify({'error': 'Name oder Firma erforderlich'}), 400
contact.vcard_data = _build_vcard(contact)
db.session.add(contact)
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'contact',
shared_with=_book_recipients(book))
return jsonify(contact.to_dict()), 201
@@ -160,11 +592,9 @@ def get_contact(contact_id):
contact = db.session.get(Contact, contact_id)
if not contact:
return jsonify({'error': 'Kontakt nicht gefunden'}), 404
book, err = _get_addressbook_or_err(contact.address_book_id, user)
if err:
return err
result = contact.to_dict()
result['vcard_data'] = contact.vcard_data
return jsonify(result), 200
@@ -177,29 +607,17 @@ def update_contact(contact_id):
contact = db.session.get(Contact, contact_id)
if not contact:
return jsonify({'error': 'Kontakt nicht gefunden'}), 404
book, err = _get_addressbook_or_err(contact.address_book_id, user, need_write=True)
if err:
return err
data = request.get_json()
if 'display_name' in data:
contact.display_name = data['display_name'].strip()
if 'email' in data:
contact.email = data['email'] or None
if 'phone' in data:
contact.phone = data['phone'] or None
contact.vcard_data = _build_vcard(
contact.uid,
contact.display_name,
data.get('email', contact.email or ''),
data.get('phone', contact.phone or ''),
data.get('organization', ''),
data.get('notes', ''),
)
data = request.get_json() or {}
_apply_fields_to_contact(contact, data)
contact.vcard_data = _build_vcard(contact)
contact.updated_at = datetime.now(timezone.utc)
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'contact',
shared_with=_book_recipients(book))
return jsonify(contact.to_dict()), 200
@@ -210,17 +628,19 @@ def delete_contact(contact_id):
contact = db.session.get(Contact, contact_id)
if not contact:
return jsonify({'error': 'Kontakt nicht gefunden'}), 404
book, err = _get_addressbook_or_err(contact.address_book_id, user, need_write=True)
if err:
return err
db.session.delete(contact)
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'contact',
shared_with=_book_recipients(book))
return jsonify({'message': 'Kontakt geloescht'}), 200
# --- Sharing ---
# ---------------------------------------------------------------------------
# Sharing
# ---------------------------------------------------------------------------
@api_bp.route('/addressbooks/<int:book_id>/share', methods=['POST'])
@token_required
@@ -230,10 +650,9 @@ def share_addressbook(book_id):
if not book or book.owner_id != user.id:
return jsonify({'error': 'Nur der Eigentuemer kann teilen'}), 403
data = request.get_json()
username = data.get('username', '').strip()
data = request.get_json() or {}
username = (data.get('username') or '').strip()
permission = data.get('permission', 'read')
if permission not in ('read', 'readwrite'):
return jsonify({'error': 'Ungueltige Berechtigung'}), 400
@@ -246,7 +665,6 @@ def share_addressbook(book_id):
existing = AddressBookShare.query.filter_by(
address_book_id=book_id, shared_with_id=target.id
).first()
is_new = not existing
if existing:
existing.permission = permission
else:
@@ -254,16 +672,9 @@ def share_addressbook(book_id):
address_book_id=book_id, shared_with_id=target.id, permission=permission
)
db.session.add(share)
db.session.commit()
if is_new:
try:
from app.services.system_mail import notify_contacts_shared
notify_contacts_shared(book.name, user.username, target, permission)
except Exception:
pass
_notify_addressbook(book.owner_id, book.id, 'share',
shared_with=[target.id, *_book_recipients(book)])
return jsonify({'message': f'Adressbuch mit {username} geteilt'}), 200
@@ -274,7 +685,6 @@ def list_addressbook_shares(book_id):
book = db.session.get(AddressBook, book_id)
if not book or book.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
shares = AddressBookShare.query.filter_by(address_book_id=book_id).all()
return jsonify([{
'id': s.id,
@@ -291,17 +701,20 @@ def remove_addressbook_share(book_id, share_id):
book = db.session.get(AddressBook, book_id)
if not book or book.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
share = db.session.get(AddressBookShare, share_id)
if not share or share.address_book_id != book_id:
return jsonify({'error': 'Freigabe nicht gefunden'}), 404
target_id = share.shared_with_id
db.session.delete(share)
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'share',
shared_with=[target_id, *_book_recipients(book)])
return jsonify({'message': 'Freigabe entfernt'}), 200
# --- Import/Export ---
# ---------------------------------------------------------------------------
# vCard export (all contacts of a book)
# ---------------------------------------------------------------------------
@api_bp.route('/addressbooks/<int:book_id>/export', methods=['GET'])
@token_required
@@ -310,40 +723,9 @@ def export_contacts(book_id):
book, err = _get_addressbook_or_err(book_id, user)
if err:
return err
contacts = Contact.query.filter_by(address_book_id=book_id).all()
vcards = '\r\n'.join(c.vcard_data for c in contacts)
from flask import Response
parts = [c.vcard_data for c in book.contacts]
return Response(
vcards,
mimetype='text/vcard',
'\r\n'.join(parts),
mimetype='text/vcard; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{book.name}.vcf"'},
)
# --- Helpers ---
def _build_vcard(uid, display_name, email='', phone='', org='', notes=''):
parts = display_name.split(' ', 1)
first = parts[0]
last = parts[1] if len(parts) > 1 else ''
lines = [
'BEGIN:VCARD',
'VERSION:3.0',
f'UID:{uid}',
f'FN:{display_name}',
f'N:{last};{first};;;',
]
if email:
lines.append(f'EMAIL:{email}')
if phone:
lines.append(f'TEL:{phone}')
if org:
lines.append(f'ORG:{org}')
if notes:
lines.append(f'NOTE:{notes}')
lines.append(f'REV:{datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")}')
lines.append('END:VCARD')
return '\r\n'.join(lines)
+277 -43
@@ -16,6 +16,44 @@ from app.api.auth import token_required
from app.extensions import db, bcrypt
from app.models.file import File, FilePermission, ShareLink
from app.models.file_lock import FileLock
from app.services.events import broadcaster, notify_file_change
def _share_recipients(file_obj):
"""Return a list of user ids (besides the owner) that should see changes
to this file because they have a direct share permission on it or on
any of its ancestor folders."""
ids = set()
cur = file_obj
while cur is not None:
for p in FilePermission.query.filter_by(file_id=cur.id).all():
ids.add(p.user_id)
cur = cur.parent
ids.discard(file_obj.owner_id)
return list(ids)
def _effective_permission(file_obj, user):
"""Returns (permission_level, can_reshare) for the given user on this file,
walking up the folder tree. Owner gets ('admin', True). Returns
(None, False) if no access."""
if file_obj.owner_id == user.id:
return ('admin', True)
levels = {'read': 0, 'write': 1, 'admin': 2}
best_level = -1
best_perm = None
best_reshare = False
cur = file_obj
while cur is not None:
perm = FilePermission.query.filter_by(file_id=cur.id, user_id=user.id).first()
if perm:
lvl = levels.get(perm.permission, -1)
if lvl > best_level:
best_level = lvl
best_perm = perm.permission
best_reshare = perm.can_reshare
cur = cur.parent
return (best_perm, best_reshare)
def _user_upload_dir(user_id):
@@ -26,16 +64,22 @@ def _user_upload_dir(user_id):
def _check_file_access(file_obj, user, permission='read'):
"""Check if user has access to file. Owner always has full access."""
"""Check if user has access to file. Owner always has full access.
A permission on an ancestor folder also grants access to all descendants."""
if file_obj.owner_id == user.id:
return True
perm = FilePermission.query.filter_by(
file_id=file_obj.id, user_id=user.id
).first()
if not perm:
return False
perm_levels = {'read': 0, 'write': 1, 'admin': 2}
return perm_levels.get(perm.permission, -1) >= perm_levels.get(permission, 0)
needed = perm_levels.get(permission, 0)
# Walk up the tree looking for a permission on this file or any ancestor
cur = file_obj
while cur is not None:
perm = FilePermission.query.filter_by(
file_id=cur.id, user_id=user.id
).first()
if perm and perm_levels.get(perm.permission, -1) >= needed:
return True
cur = cur.parent
return False
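The walk-up check above means a grant on a folder implicitly covers everything below it. A minimal sketch of the same idea with plain objects instead of the SQLAlchemy models, assuming the same read < write < admin ordering:

```python
class Node:
    def __init__(self, parent=None, perms=None):
        self.parent = parent
        self.perms = perms or {}  # user_id -> 'read' | 'write' | 'admin'

LEVELS = {'read': 0, 'write': 1, 'admin': 2}

def check_access(node, user_id, needed='read'):
    # Walk up: a grant on the node or any ancestor that meets the level wins.
    cur = node
    while cur is not None:
        perm = cur.perms.get(user_id)
        if perm is not None and LEVELS.get(perm, -1) >= LEVELS[needed]:
            return True
        cur = cur.parent
    return False

root = Node(perms={7: 'write'})
child = Node(parent=root)
assert check_access(child, 7, 'read')       # inherited from root
assert check_access(child, 7, 'write')
assert not check_access(child, 7, 'admin')  # write does not grant admin
assert not check_access(child, 8, 'read')   # no grant anywhere
```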
def _get_file_or_403(file_id, user, permission='read'):
@@ -63,9 +107,25 @@ def list_files():
user = request.current_user
parent_id = request.args.get('parent_id', None, type=int)
# Own files in this folder (exclude trashed)
query = File.query.filter_by(owner_id=user.id, parent_id=parent_id, is_trashed=False)
files = query.order_by(File.is_folder.desc(), File.name).all()
# When browsing into a folder, verify access first. If the folder is
# shared with us (directly or via an ancestor), list ALL its children
# - not just ones owned by us.
if parent_id is not None:
parent_folder, perr = _get_file_or_403(parent_id, user, 'read')
if perr:
return perr
if parent_folder.owner_id == user.id:
files = File.query.filter_by(
owner_id=user.id, parent_id=parent_id, is_trashed=False
).order_by(File.is_folder.desc(), File.name).all()
else:
files = File.query.filter_by(
parent_id=parent_id, is_trashed=False
).order_by(File.is_folder.desc(), File.name).all()
else:
files = File.query.filter_by(
owner_id=user.id, parent_id=None, is_trashed=False
).order_by(File.is_folder.desc(), File.name).all()
# Shared files at root level
shared = []
@@ -75,7 +135,7 @@ def list_files():
if shared_file_ids:
shared = File.query.filter(
File.id.in_(shared_file_ids),
File.parent_id.is_(None)
File.is_trashed == False # noqa: E712
).order_by(File.is_folder.desc(), File.name).all()
result = []
@@ -83,6 +143,9 @@ def list_files():
d = f.to_dict()
d['has_shares'] = ShareLink.query.filter_by(file_id=f.id).count() > 0
d['has_permissions'] = FilePermission.query.filter_by(file_id=f.id).count() > 0
my_perm, my_reshare = _effective_permission(f, user)
d['my_permission'] = my_perm
d['my_can_reshare'] = bool(my_reshare)
lock = FileLock.get_lock(f.id)
if lock:
d['locked'] = True
@@ -92,6 +155,9 @@ def list_files():
for f in shared:
d = f.to_dict()
d['shared'] = True
my_perm, my_reshare = _effective_permission(f, user)
d['my_permission'] = my_perm
d['my_can_reshare'] = bool(my_reshare)
result.append(d)
# Build breadcrumb
@@ -137,6 +203,8 @@ def create_folder():
)
db.session.add(folder)
db.session.commit()
notify_file_change(folder.owner_id, folder.id, 'created',
shared_with=_share_recipients(folder))
return jsonify(folder.to_dict()), 201
@@ -228,6 +296,8 @@ def upload_file():
existing.checksum = checksum
existing.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_file_change(existing.owner_id, existing.id, 'updated',
shared_with=_share_recipients(existing))
return jsonify(existing.to_dict()), 200
file_obj = File(
@@ -242,6 +312,8 @@ def upload_file():
)
db.session.add(file_obj)
db.session.commit()
notify_file_change(file_obj.owner_id, file_obj.id, 'created',
shared_with=_share_recipients(file_obj))
return jsonify(file_obj.to_dict()), 201
@@ -262,8 +334,11 @@ def download_file(file_id):
if not filepath.exists():
return jsonify({'error': 'Datei auf Datentraeger nicht gefunden'}), 404
return send_file(str(filepath), mimetype=f.mime_type, as_attachment=True,
download_name=f.name)
# inline=1 renders the file in-browser (used by PDF/image previews).
# Default is attachment so normal download buttons still save to disk.
inline = request.args.get('inline', '0') == '1'
return send_file(str(filepath), mimetype=f.mime_type,
as_attachment=not inline, download_name=f.name)
def _download_folder_as_zip(folder):
@@ -306,6 +381,11 @@ def update_file(file_id):
if err:
return err
# Lock check: a lock held by someone else blocks changes (admins may bypass)
lock = FileLock.get_lock(file_id)
if lock and lock.locked_by != user.id and user.role != 'admin':
return jsonify({'error': f'Datei ist von {lock.user.username} ausgecheckt'}), 423
data = request.get_json()
if 'name' in data:
name = data['name'].strip()
@@ -331,6 +411,8 @@ def update_file(file_id):
f.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_file_change(f.owner_id, f.id, 'updated',
shared_with=_share_recipients(f))
return jsonify(f.to_dict()), 200
@@ -346,9 +428,18 @@ def delete_file(file_id):
if not f or f.owner_id != user.id:
return jsonify({'error': 'Zugriff verweigert'}), 403
# Lock check
lock = FileLock.get_lock(file_id)
if lock and lock.locked_by != user.id and user.role != 'admin':
return jsonify({'error': f'Datei ist von {lock.user.username} ausgecheckt'}), 423
# Capture recipients BEFORE we detach the file from its parent tree
recipients = _share_recipients(f)
owner_id = f.owner_id
# Soft-delete: move to trash
_trash_recursive(f)
db.session.commit()
notify_file_change(owner_id, f.id, 'deleted', shared_with=recipients)
return jsonify({'message': 'In Papierkorb verschoben'}), 200
@@ -481,12 +572,21 @@ def empty_trash():
@token_required
def get_permissions(file_id):
user = request.current_user
f, err = _get_file_or_403(file_id, user, 'admin')
if err:
if not (f := db.session.get(File, file_id)) or f.owner_id != user.id:
return jsonify({'error': 'Zugriff verweigert'}), 403
f = db.session.get(File, file_id)
if not f:
return jsonify({'error': 'Datei nicht gefunden'}), 404
is_owner = (f.owner_id == user.id)
my_perm, my_reshare = _effective_permission(f, user)
if not is_owner and not my_reshare:
return jsonify({'error': 'Zugriff verweigert'}), 403
# Owners see everyone; re-sharers only see perms they granted themselves.
if is_owner:
perms = FilePermission.query.filter_by(file_id=file_id).all()
else:
perms = FilePermission.query.filter_by(file_id=file_id, granted_by=user.id).all()
perms = FilePermission.query.filter_by(file_id=file_id).all()
from app.models.user import User
result = []
for p in perms:
@@ -496,6 +596,8 @@ def get_permissions(file_id):
'user_id': p.user_id,
'username': u.username if u else None,
'permission': p.permission,
'can_reshare': bool(p.can_reshare),
'granted_by': p.granted_by,
})
return jsonify(result), 200
@@ -505,33 +607,69 @@ def get_permissions(file_id):
def set_permission(file_id):
user = request.current_user
f = db.session.get(File, file_id)
if not f or f.owner_id != user.id:
return jsonify({'error': 'Nur der Eigentuemer kann Berechtigungen setzen'}), 403
if not f:
return jsonify({'error': 'Datei nicht gefunden'}), 404
is_owner = (f.owner_id == user.id)
my_perm, my_reshare = _effective_permission(f, user)
if not is_owner and not my_reshare:
return jsonify({'error': 'Keine Berechtigung zum Weiterteilen'}), 403
data = request.get_json()
target_user_id = data.get('user_id')
permission = data.get('permission', 'read')
can_reshare_req = bool(data.get('can_reshare', False))
if permission not in ('read', 'write', 'admin'):
return jsonify({'error': 'Ungueltige Berechtigung'}), 400
# Re-sharers can't hand out more than they have themselves.
levels = {'read': 0, 'write': 1, 'admin': 2}
if not is_owner:
max_allowed = levels.get(my_perm, -1)
if levels.get(permission, -1) > max_allowed:
return jsonify({
'error': f'Du kannst hoechstens "{my_perm}" weiterverteilen'
}), 403
if permission == 'admin':
return jsonify({'error': 'Admin-Recht kann nur der Eigentuemer vergeben'}), 403
from app.models.user import User
target = db.session.get(User, target_user_id)
if not target:
return jsonify({'error': 'Benutzer nicht gefunden'}), 404
if target.id == f.owner_id:
return jsonify({'error': 'Eigentuemer hat bereits Vollzugriff'}), 400
existing = FilePermission.query.filter_by(
file_id=file_id, user_id=target_user_id
).first()
is_new = not existing
if existing:
# Re-sharers may only modify perms they themselves granted
if not is_owner and existing.granted_by != user.id:
return jsonify({'error': 'Diese Freigabe wurde von jemand anderem erstellt'}), 403
existing.permission = permission
existing.can_reshare = can_reshare_req
if is_new or existing.granted_by is None:
existing.granted_by = user.id
else:
perm = FilePermission(file_id=file_id, user_id=target_user_id, permission=permission)
perm = FilePermission(
file_id=file_id,
user_id=target_user_id,
permission=permission,
can_reshare=can_reshare_req,
granted_by=user.id,
)
db.session.add(perm)
db.session.commit()
# SSE: notify target user (they just got/updated access) + owner + other
# share recipients so everyone's file list refreshes.
notify_file_change(f.owner_id, f.id, 'permission',
shared_with=[target.id, *_share_recipients(f)])
# Notify user via email
if is_new:
try:
@@ -548,15 +686,24 @@ def set_permission(file_id):
def remove_permission(file_id, perm_id):
user = request.current_user
f = db.session.get(File, file_id)
if not f or f.owner_id != user.id:
return jsonify({'error': 'Nur der Eigentuemer kann Berechtigungen entfernen'}), 403
if not f:
return jsonify({'error': 'Datei nicht gefunden'}), 404
perm = db.session.get(FilePermission, perm_id)
if not perm or perm.file_id != file_id:
return jsonify({'error': 'Berechtigung nicht gefunden'}), 404
is_owner = (f.owner_id == user.id)
if not is_owner and perm.granted_by != user.id:
return jsonify({'error': 'Du kannst nur selbst erstellte Freigaben entfernen'}), 403
target_user_id = perm.user_id
db.session.delete(perm)
db.session.commit()
notify_file_change(f.owner_id, f.id, 'permission',
shared_with=[target_user_id, *_share_recipients(f)])
return jsonify({'message': 'Berechtigung entfernt'}), 200
@@ -566,9 +713,14 @@ def remove_permission(file_id, perm_id):
@token_required
def create_share_link(file_id):
user = request.current_user
f, err = _get_file_or_403(file_id, user, 'read')
if err:
return err
f = db.session.get(File, file_id)
if not f:
return jsonify({'error': 'Datei nicht gefunden'}), 404
is_owner = (f.owner_id == user.id)
my_perm, my_reshare = _effective_permission(f, user)
if not is_owner and not my_reshare:
return jsonify({'error': 'Keine Berechtigung zum Weiterteilen'}), 403
data = request.get_json() or {}
password = data.get('password')
@@ -579,6 +731,18 @@ def create_share_link(file_id):
if permission not in ('read', 'write', 'upload_only'):
return jsonify({'error': 'Berechtigung muss "read", "write" oder "upload_only" sein'}), 400
# Re-sharers can only hand out what they have themselves.
if not is_owner:
levels = {'read': 0, 'write': 1}
max_allowed = levels.get(my_perm, -1)
requested = levels.get(permission, 99)
if requested > max_allowed:
return jsonify({
'error': f'Du hast selbst nur "{my_perm}" - kannst nicht schreibend weiterteilen'
}), 403
if permission == 'upload_only' and my_perm not in ('write', 'admin'):
return jsonify({'error': 'Upload-Links nur mit Schreibrecht moeglich'}), 403
token = secrets.token_urlsafe(32)
password_hash = None
if password:
@@ -1014,6 +1178,8 @@ def lock_file(file_id):
)
db.session.add(lock)
db.session.commit()
notify_file_change(f.owner_id, f.id, 'locked',
shared_with=_share_recipients(f))
return jsonify(lock.to_dict()), 200
@@ -1031,6 +1197,10 @@ def unlock_file(file_id):
db.session.delete(lock)
db.session.commit()
f = db.session.get(File, file_id)
if f:
notify_file_change(f.owner_id, f.id, 'unlocked',
shared_with=_share_recipients(f))
return jsonify({'message': 'Datei entsperrt'}), 200
@@ -1084,32 +1254,96 @@ def list_locks():
@api_bp.route('/sync/tree', methods=['GET'])
@token_required
def sync_tree():
"""Returns complete file tree with checksums for sync clients."""
"""Returns complete file tree with checksums for sync clients.
Includes both files owned by the user (under 'tree') and files
shared WITH the user (as a virtual 'Geteilt mit mir' folder under
'shared'). The client merges both.
"""
user = request.current_user
def _entry(f):
entry = {
'id': f.id,
'name': f.name,
'is_folder': f.is_folder,
'size': f.size,
'checksum': f.checksum,
'updated_at': f.updated_at.isoformat() if f.updated_at else None,
'modified_at': f.updated_at.isoformat() if f.updated_at else None,
}
lock = FileLock.get_lock(f.id)
if lock:
entry['locked'] = True
entry['locked_by'] = lock.user.username
return entry
def _build_tree(parent_id):
files = File.query.filter_by(owner_id=user.id, parent_id=parent_id)\
files = File.query.filter_by(owner_id=user.id, parent_id=parent_id, is_trashed=False)\
.order_by(File.is_folder.desc(), File.name).all()
result = []
for f in files:
entry = {
'id': f.id,
'name': f.name,
'is_folder': f.is_folder,
'size': f.size,
'checksum': f.checksum,
'updated_at': f.updated_at.isoformat() if f.updated_at else None,
}
lock = FileLock.get_lock(f.id)
if lock:
entry['locked'] = True
entry['locked_by'] = lock.user.username
e = _entry(f)
if f.is_folder:
entry['children'] = _build_tree(f.id)
result.append(entry)
e['children'] = _build_tree(f.id)
result.append(e)
return result
return jsonify({'tree': _build_tree(None)}), 200
def _build_shared_children(parent_id):
files = File.query.filter_by(parent_id=parent_id, is_trashed=False)\
.order_by(File.is_folder.desc(), File.name).all()
out = []
for f in files:
e = _entry(f)
if f.is_folder:
e['children'] = _build_shared_children(f.id)
out.append(e)
return out
shared_perms = FilePermission.query.filter_by(user_id=user.id).all()
shared_roots = []
seen = set()
for perm in shared_perms:
f = perm.file
if not f or f.is_trashed or f.id in seen:
continue
seen.add(f.id)
# Only "top-level" shares: if the parent folder is NOT also shared,
# this item is the root of the share on the recipient's side.
parent_shared = any(
p.file_id == f.parent_id for p in shared_perms
) if f.parent_id else False
if parent_shared:
continue
e = _entry(f)
owner = f.owner.display_name if hasattr(f, 'owner') and f.owner else None
if owner:
e['name'] = f'{f.name} (von {owner})'
if f.is_folder:
e['children'] = _build_shared_children(f.id)
shared_roots.append(e)
return jsonify({
'tree': _build_tree(None),
'shared': shared_roots,
}), 200
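The merged payload above can be sketched as follows. The concrete values are illustrative and the client-side merge is an assumed consumer, not part of this diff; only the keys mirror `_entry()`.

```python
# Illustrative /sync/tree payload: 'tree' holds owned files, 'shared'
# holds the roots of files shared WITH the user.
payload = {
    'tree': [
        {'id': 1, 'name': 'Docs', 'is_folder': True, 'size': None,
         'checksum': None, 'updated_at': '2026-04-23T20:00:00+00:00',
         'children': []},
    ],
    'shared': [
        {'id': 7, 'name': 'Report.docx (von Alice)', 'is_folder': False,
         'size': 1234, 'checksum': 'abc123',
         'updated_at': '2026-04-23T19:00:00+00:00'},
    ],
}

def merge_for_client(payload: dict) -> list:
    """Fold the shared roots into a virtual folder, the way a sync
    client might before materialising placeholders on disk."""
    shared_root = {'id': None, 'name': 'Geteilt mit mir',
                   'is_folder': True, 'children': payload.get('shared', [])}
    return [*payload['tree'], shared_root]

merged = merge_for_client(payload)
print([e['name'] for e in merged])  # ['Docs', 'Geteilt mit mir']
```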
@api_bp.route('/sync/events', methods=['GET'])
@token_required
def sync_events():
"""Server-Sent Events stream: real-time file change notifications."""
user = request.current_user
user_id = user.id
def event_stream():
yield from broadcaster.stream(user_id)
resp = Response(event_stream(), mimetype='text/event-stream')
resp.headers['Cache-Control'] = 'no-cache'
resp.headers['X-Accel-Buffering'] = 'no' # disable nginx buffering
resp.headers['Connection'] = 'keep-alive'
return resp
@api_bp.route('/sync/changes', methods=['GET'])
+6 -1
@@ -8,9 +8,10 @@ from flask import request, jsonify, current_app, send_file
from app.api import api_bp
from app.api.auth import token_required
from app.api.files import _get_file_or_403
from app.api.files import _get_file_or_403, _share_recipients
from app.extensions import db
from app.models.settings import AppSettings
from app.services.events import notify_file_change
@api_bp.route('/files/<int:file_id>/preview', methods=['GET'])
@@ -219,6 +220,8 @@ def save_file(file_id):
f.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_file_change(f.owner_id, f.id, 'updated',
shared_with=_share_recipients(f))
return jsonify({'message': 'Gespeichert', 'size': f.size}), 200
except Exception as e:
return jsonify({'error': f'Speichern fehlgeschlagen: {str(e)}'}), 500
@@ -482,6 +485,8 @@ def onlyoffice_callback():
f.checksum = h.hexdigest()
f.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_file_change(f.owner_id, f.id, 'updated',
shared_with=_share_recipients(f))
print(f'[OnlyOffice Callback] File saved: {f.name} ({f.size} bytes)')
# Status 2, 4, 6: cleanup
+590
@@ -0,0 +1,590 @@
"""REST API for task lists / tasks (VTODO).
Mirrors the calendar.py architecture: TaskList = calendar-like collection,
Task = VTODO. CalDAV wiring lives in app/dav/caldav.py: TaskLists show up
as calendar collections with supported-calendar-component-set restricted
to VTODO, under the URL /dav/<user>/tl-<id>/.
"""
from __future__ import annotations
import re
import uuid
from datetime import datetime, timezone
from flask import request, jsonify, Response
from app.api import api_bp
from app.api.auth import token_required
from app.extensions import db
from app.models.task import TaskList, Task, TaskListShare
from app.models.user import User
from app.services.events import notify_tasklist_change
# ---------------------------------------------------------------------------
# Access helpers
# ---------------------------------------------------------------------------
def _list_recipients(tl: TaskList):
return [s.shared_with_id for s in
TaskListShare.query.filter_by(task_list_id=tl.id).all()]
def _get_list_or_err(list_id, user, need_write=False):
tl = db.session.get(TaskList, list_id)
if not tl:
return None, (jsonify({'error': 'Aufgabenliste nicht gefunden'}), 404)
if tl.owner_id == user.id:
return tl, None
share = TaskListShare.query.filter_by(
task_list_id=list_id, shared_with_id=user.id
).first()
if not share:
return None, (jsonify({'error': 'Zugriff verweigert'}), 403)
if need_write and share.permission != 'readwrite':
return None, (jsonify({'error': 'Schreibzugriff verweigert'}), 403)
return tl, None
# ---------------------------------------------------------------------------
# VTODO build / parse
# ---------------------------------------------------------------------------
def _fmt_dt(dt: datetime | None) -> str | None:
if not dt:
return None
if dt.tzinfo is None:
dt = dt.replace(tzinfo=timezone.utc)
return dt.astimezone(timezone.utc).strftime('%Y%m%dT%H%M%SZ')
def build_vtodo(task: Task) -> str:
lines = ['BEGIN:VTODO', f'UID:{task.uid}',
f'DTSTAMP:{_fmt_dt(datetime.now(timezone.utc))}',
f'SUMMARY:{(task.summary or "").replace(chr(10), " ")}']
if task.description:
lines.append(f'DESCRIPTION:{task.description.replace(chr(10), chr(92) + "n")}')
if task.status:
lines.append(f'STATUS:{task.status}')
if task.priority is not None:
lines.append(f'PRIORITY:{task.priority}')
if task.percent_complete is not None:
lines.append(f'PERCENT-COMPLETE:{task.percent_complete}')
if task.due:
lines.append(f'DUE:{_fmt_dt(task.due)}')
if task.dtstart:
lines.append(f'DTSTART:{_fmt_dt(task.dtstart)}')
if task.completed_at:
lines.append(f'COMPLETED:{_fmt_dt(task.completed_at)}')
if task.categories:
lines.append(f'CATEGORIES:{task.categories}')
lines.append('END:VTODO')
return '\r\n'.join(lines)
def _unfold(text: str):
out, current = [], ''
for line in text.replace('\r\n', '\n').split('\n'):
if line.startswith((' ', '\t')) and current:
current += line[1:]
else:
if current:
out.append(current)
current = line
if current:
out.append(current)
return out
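`_unfold` implements RFC 5545 line unfolding: a line break followed by a single space or tab continues the previous content line. The same logic, restated as a standalone sketch with an example:

```python
def unfold(text: str) -> list[str]:
    # RFC 5545 section 3.1: a line starting with one space or tab
    # is a continuation of the previous line.
    out, current = [], ''
    for line in text.replace('\r\n', '\n').split('\n'):
        if line.startswith((' ', '\t')) and current:
            current += line[1:]          # drop the fold marker, keep the rest
        else:
            if current:
                out.append(current)
            current = line
    if current:
        out.append(current)
    return out

folded = 'DESCRIPTION:This is a lo\r\n ng description\r\nSTATUS:NEEDS-ACTION'
print(unfold(folded))
# ['DESCRIPTION:This is a long description', 'STATUS:NEEDS-ACTION']
```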
def _parse_dt(value: str) -> datetime | None:
value = value.strip()
try:
if value.endswith('Z'):
return datetime.strptime(value, '%Y%m%dT%H%M%SZ').replace(tzinfo=timezone.utc)
if 'T' in value:
return datetime.strptime(value, '%Y%m%dT%H%M%S')
return datetime.strptime(value, '%Y%m%d')
except ValueError:
try:
return datetime.fromisoformat(value)
except ValueError:
return None
def parse_vtodo(raw: str) -> dict | None:
if 'BEGIN:VTODO' not in raw.upper():
return None
result: dict = {}
in_block = False
for line in _unfold(raw):
upper = line.upper()
if upper.startswith('BEGIN:VTODO'):
in_block = True
continue
if upper.startswith('END:VTODO'):
break
if not in_block or ':' not in line:
continue
key, _, value = line.partition(':')
name = key.split(';')[0].upper()
if name == 'UID':
result['uid'] = value.strip()
elif name == 'SUMMARY':
result['summary'] = value.strip()
elif name == 'DESCRIPTION':
result['description'] = value.replace('\\n', '\n').replace('\\,', ',').strip()
elif name == 'STATUS':
result['status'] = value.strip().upper()
elif name == 'PRIORITY':
try:
result['priority'] = int(value.strip())
except ValueError:
pass
elif name == 'PERCENT-COMPLETE':
try:
result['percent_complete'] = int(value.strip())
except ValueError:
pass
elif name == 'DUE':
result['due'] = _parse_dt(value)
elif name == 'DTSTART':
result['dtstart'] = _parse_dt(value)
elif name == 'COMPLETED':
result['completed_at'] = _parse_dt(value)
elif name == 'CATEGORIES':
result['categories'] = value.strip()
return result if result.get('summary') or result.get('uid') else None
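`parse_vtodo` splits each content line on the first `:` and discards property parameters via `key.split(';')[0]`, so a parameterised line is still recognised by its property name. Note the parameters themselves (e.g. a TZID) are ignored, so zoned values fall through to the floating-time branch of `_parse_dt`:

```python
line = 'DUE;TZID=Europe/Berlin:20260501T120000'
key, _, value = line.partition(':')
name = key.split(';')[0].upper()   # property name, parameters dropped
params = key.split(';')[1:]        # ['TZID=Europe/Berlin'] (unused above)
print(name, value)  # DUE 20260501T120000
```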
def _apply(task: Task, data: dict):
if 'summary' in data:
task.summary = (data.get('summary') or '').strip() or None
if 'description' in data:
task.description = (data.get('description') or '').strip() or None
if 'status' in data:
s = (data.get('status') or '').upper().strip() or None
task.status = s
if s == 'COMPLETED' and not task.completed_at:
task.completed_at = datetime.now(timezone.utc)
task.percent_complete = 100
elif s != 'COMPLETED':
task.completed_at = None
if 'priority' in data:
task.priority = data['priority']
if 'percent_complete' in data:
task.percent_complete = data['percent_complete']
if 'due' in data:
v = data['due']
task.due = datetime.fromisoformat(v) if v else None
if 'dtstart' in data:
v = data['dtstart']
task.dtstart = datetime.fromisoformat(v) if v else None
if 'completed_at' in data:
v = data['completed_at']
task.completed_at = datetime.fromisoformat(v) if v else None
if 'categories' in data:
cats = data['categories']
if isinstance(cats, list):
task.categories = ','.join(c.strip() for c in cats if c and c.strip()) or None
else:
task.categories = (cats or '').strip() or None
# ---------------------------------------------------------------------------
# REST endpoints - lists
# ---------------------------------------------------------------------------
@api_bp.route('/tasklists', methods=['GET'])
@token_required
def list_tasklists():
user = request.current_user
own = TaskList.query.filter_by(owner_id=user.id).all()
shared = TaskListShare.query.filter_by(shared_with_id=user.id).all()
out = []
for tl in own:
d = tl.to_dict()
d['permission'] = 'owner'
d['task_count'] = tl.tasks.count()
out.append(d)
for s in shared:
tl = s.task_list
if not tl:
continue
d = tl.to_dict()
d['permission'] = s.permission
owner = tl.owner
d['owner_name'] = owner.username if owner else ''
d['owner_full_name'] = owner.full_name if owner else ''
d['owner_display_name'] = owner.display_name if owner else ''
d['task_count'] = tl.tasks.count()
if s.color:
d['color'] = s.color
out.append(d)
return jsonify(out), 200
@api_bp.route('/tasklists', methods=['POST'])
@token_required
def create_tasklist():
user = request.current_user
data = request.get_json() or {}
name = (data.get('name') or '').strip()
if not name:
return jsonify({'error': 'Name erforderlich'}), 400
tl = TaskList(owner_id=user.id, name=name,
color=data.get('color') or '#10b981',
description=(data.get('description') or '').strip() or None)
db.session.add(tl)
db.session.commit()
notify_tasklist_change(user.id, tl.id, 'created')
return jsonify(tl.to_dict()), 201
@api_bp.route('/tasklists/<int:list_id>', methods=['PUT'])
@token_required
def update_tasklist(list_id):
user = request.current_user
tl, err = _get_list_or_err(list_id, user, need_write=True)
if err:
return err
if tl.owner_id != user.id:
return jsonify({'error': 'Nur Eigentuemer kann die Liste umbenennen'}), 403
data = request.get_json() or {}
if 'name' in data:
tl.name = data['name'].strip()
if 'color' in data:
tl.color = data['color']
if 'description' in data:
tl.description = (data['description'] or '').strip() or None
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'updated', shared_with=_list_recipients(tl))
return jsonify(tl.to_dict()), 200
@api_bp.route('/tasklists/<int:list_id>/my-color', methods=['PUT'])
@token_required
def set_my_tasklist_color(list_id):
user = request.current_user
tl = db.session.get(TaskList, list_id)
if not tl:
return jsonify({'error': 'Nicht gefunden'}), 404
color = (request.get_json() or {}).get('color')
if not color:
return jsonify({'error': 'color erforderlich'}), 400
if tl.owner_id == user.id:
tl.color = color
db.session.commit()
return jsonify({'color': tl.color}), 200
share = TaskListShare.query.filter_by(task_list_id=list_id, shared_with_id=user.id).first()
if not share:
return jsonify({'error': 'Zugriff verweigert'}), 403
share.color = color
db.session.commit()
return jsonify({'color': share.color}), 200
@api_bp.route('/tasklists/<int:list_id>', methods=['DELETE'])
@token_required
def delete_tasklist(list_id):
user = request.current_user
tl = db.session.get(TaskList, list_id)
if not tl or tl.owner_id != user.id:
return jsonify({'error': 'Nur Eigentuemer kann loeschen'}), 403
recipients = _list_recipients(tl)
db.session.delete(tl)
db.session.commit()
notify_tasklist_change(user.id, list_id, 'deleted', shared_with=recipients)
return jsonify({'message': 'Aufgabenliste geloescht'}), 200
# ---------------------------------------------------------------------------
# REST endpoints - tasks
# ---------------------------------------------------------------------------
@api_bp.route('/tasklists/<int:list_id>/tasks', methods=['GET'])
@token_required
def list_tasks(list_id):
user = request.current_user
tl, err = _get_list_or_err(list_id, user)
if err:
return err
show_done = (request.args.get('include_done') or 'true').lower() != 'false'
q = Task.query.filter_by(task_list_id=list_id)
if not show_done:
q = q.filter((Task.status.is_(None)) | (Task.status != 'COMPLETED'))
tasks = q.order_by(Task.due.asc().nullslast(), Task.priority.desc().nullslast(), Task.id).all()
return jsonify([t.to_dict() for t in tasks]), 200
@api_bp.route('/tasklists/<int:list_id>/tasks', methods=['POST'])
@token_required
def create_task(list_id):
user = request.current_user
tl, err = _get_list_or_err(list_id, user, need_write=True)
if err:
return err
data = request.get_json() or {}
if not (data.get('summary') or '').strip():
return jsonify({'error': 'Titel erforderlich'}), 400
task = Task(task_list_id=list_id, uid=str(uuid.uuid4()), ical_data='')
_apply(task, data)
if not task.status:
task.status = 'NEEDS-ACTION'
task.ical_data = build_vtodo(task)
db.session.add(task)
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'task', shared_with=_list_recipients(tl))
return jsonify(task.to_dict()), 201
@api_bp.route('/tasks/<int:task_id>', methods=['GET'])
@token_required
def get_task(task_id):
user = request.current_user
task = db.session.get(Task, task_id)
if not task:
return jsonify({'error': 'Aufgabe nicht gefunden'}), 404
tl, err = _get_list_or_err(task.task_list_id, user)
if err:
return err
return jsonify(task.to_dict()), 200
@api_bp.route('/tasks/<int:task_id>', methods=['PUT'])
@token_required
def update_task(task_id):
user = request.current_user
task = db.session.get(Task, task_id)
if not task:
return jsonify({'error': 'Aufgabe nicht gefunden'}), 404
tl, err = _get_list_or_err(task.task_list_id, user, need_write=True)
if err:
return err
data = request.get_json() or {}
if 'task_list_id' in data and data['task_list_id'] != task.task_list_id:
new_tl, e2 = _get_list_or_err(data['task_list_id'], user, need_write=True)
if e2:
return e2
task.task_list_id = data['task_list_id']
_apply(task, data)
task.ical_data = build_vtodo(task)
task.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'task', shared_with=_list_recipients(tl))
return jsonify(task.to_dict()), 200
@api_bp.route('/tasks/<int:task_id>', methods=['DELETE'])
@token_required
def delete_task(task_id):
user = request.current_user
task = db.session.get(Task, task_id)
if not task:
return jsonify({'error': 'Aufgabe nicht gefunden'}), 404
tl, err = _get_list_or_err(task.task_list_id, user, need_write=True)
if err:
return err
db.session.delete(task)
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'task', shared_with=_list_recipients(tl))
return jsonify({'message': 'Aufgabe geloescht'}), 200
# ---------------------------------------------------------------------------
# Sharing
# ---------------------------------------------------------------------------
@api_bp.route('/tasklists/<int:list_id>/share', methods=['POST'])
@token_required
def share_tasklist(list_id):
user = request.current_user
tl = db.session.get(TaskList, list_id)
if not tl or tl.owner_id != user.id:
return jsonify({'error': 'Nur Eigentuemer kann teilen'}), 403
data = request.get_json() or {}
username = (data.get('username') or '').strip()
permission = data.get('permission', 'read')
if permission not in ('read', 'readwrite'):
return jsonify({'error': 'Ungueltige Berechtigung'}), 400
target = User.query.filter_by(username=username).first()
if not target:
return jsonify({'error': 'Benutzer nicht gefunden'}), 404
if target.id == user.id:
return jsonify({'error': 'Kann nicht mit sich selbst teilen'}), 400
existing = TaskListShare.query.filter_by(task_list_id=list_id, shared_with_id=target.id).first()
if existing:
existing.permission = permission
else:
db.session.add(TaskListShare(task_list_id=list_id, shared_with_id=target.id,
permission=permission))
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'share',
shared_with=[target.id, *_list_recipients(tl)])
return jsonify({'message': f'Geteilt mit {username}'}), 200
@api_bp.route('/tasklists/<int:list_id>/shares', methods=['GET'])
@token_required
def list_tasklist_shares(list_id):
user = request.current_user
tl = db.session.get(TaskList, list_id)
if not tl or tl.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
shares = TaskListShare.query.filter_by(task_list_id=list_id).all()
return jsonify([{
'id': s.id, 'user_id': s.shared_with_id,
'username': s.shared_with.username, 'permission': s.permission,
} for s in shares]), 200
@api_bp.route('/tasklists/<int:list_id>/shares/<int:share_id>', methods=['DELETE'])
@token_required
def remove_tasklist_share(list_id, share_id):
user = request.current_user
tl = db.session.get(TaskList, list_id)
if not tl or tl.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
share = db.session.get(TaskListShare, share_id)
if not share or share.task_list_id != list_id:
return jsonify({'error': 'Freigabe nicht gefunden'}), 404
target_id = share.shared_with_id
db.session.delete(share)
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'share',
shared_with=[target_id, *_list_recipients(tl)])
return jsonify({'message': 'Freigabe entfernt'}), 200
# ---------------------------------------------------------------------------
# Import / Export (.ics with VTODO; CSV)
# ---------------------------------------------------------------------------
@api_bp.route('/tasklists/<int:list_id>/export', methods=['GET'])
@token_required
def export_tasklist(list_id):
import csv
import io
user = request.current_user
tl, err = _get_list_or_err(list_id, user)
if err:
return err
fmt = (request.args.get('format') or 'ics').lower()
tasks = Task.query.filter_by(task_list_id=list_id).all()
safe = re.sub(r'[^A-Za-z0-9._-]+', '_', tl.name or 'aufgaben') or 'aufgaben'
if fmt == 'ics':
lines = ['BEGIN:VCALENDAR', 'VERSION:2.0', 'PRODID:-//Mini-Cloud//DE', 'CALSCALE:GREGORIAN']
for t in tasks:
block = (t.ical_data or '').strip() or build_vtodo(t)
lines.append(block)
lines.append('END:VCALENDAR')
return Response(
'\r\n'.join(lines) + '\r\n',
mimetype='text/calendar; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{safe}.ics"'},
)
if fmt == 'csv':
out = io.StringIO()
w = csv.writer(out, delimiter=';', quoting=csv.QUOTE_ALL)
w.writerow(['summary', 'status', 'priority', 'percent_complete',
'due', 'dtstart', 'completed_at', 'categories', 'description', 'uid'])
for t in tasks:
w.writerow([
t.summary or '', t.status or '',
t.priority if t.priority is not None else '',
t.percent_complete if t.percent_complete is not None else '',
t.due.isoformat() if t.due else '',
t.dtstart.isoformat() if t.dtstart else '',
t.completed_at.isoformat() if t.completed_at else '',
t.categories or '',
(t.description or '').replace('\r\n', ' ').replace('\n', ' '),
t.uid or '',
])
return Response(
'\ufeff' + out.getvalue(), mimetype='text/csv; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{safe}.csv"'},
)
return jsonify({'error': 'Unbekanntes Format'}), 400
@api_bp.route('/tasklists/<int:list_id>/import', methods=['POST'])
@token_required
def import_tasklist(list_id):
import csv
import io
user = request.current_user
tl, err = _get_list_or_err(list_id, user, need_write=True)
if err:
return err
file = request.files.get('file')
if not file:
return jsonify({'error': 'Keine Datei'}), 400
raw = file.read()
try:
text = raw.decode('utf-8-sig')
except UnicodeDecodeError:
text = raw.decode('latin-1', errors='replace')
name = (file.filename or '').lower()
imported, skipped = 0, 0
def _save(parsed: dict, ical_block: str | None = None):
nonlocal imported, skipped
if not parsed.get('summary'):
skipped += 1
return
uid = parsed.get('uid') or str(uuid.uuid4())
existing = Task.query.filter_by(task_list_id=list_id, uid=uid).first()
t = existing or Task(task_list_id=list_id, uid=uid, ical_data='')
t.summary = parsed.get('summary')
t.description = parsed.get('description')
t.status = parsed.get('status') or 'NEEDS-ACTION'
t.priority = parsed.get('priority')
t.percent_complete = parsed.get('percent_complete')
t.due = parsed.get('due')
t.dtstart = parsed.get('dtstart')
t.completed_at = parsed.get('completed_at')
cats = parsed.get('categories')
if isinstance(cats, list):
t.categories = ','.join(cats)
elif isinstance(cats, str):
t.categories = cats or None
t.ical_data = (ical_block or '').strip() or build_vtodo(t)
if not existing:
db.session.add(t)
imported += 1
if name.endswith('.csv') or (b';' in raw[:200] and b'BEGIN:VCALENDAR' not in raw[:200]):
reader = csv.DictReader(io.StringIO(text), delimiter=';')
if not reader.fieldnames or len(reader.fieldnames) < 2:
reader = csv.DictReader(io.StringIO(text), delimiter=',')
for row in reader:
row = {k.strip().lower(): (v or '').strip() for k, v in row.items() if k}
try:
due = datetime.fromisoformat(row['due']) if row.get('due') else None
except ValueError:
due = None
_save({
'uid': row.get('uid'),
'summary': row.get('summary') or row.get('titel'),
'description': row.get('description') or row.get('beschreibung'),
'status': (row.get('status') or '').upper() or None,
'priority': int(row['priority']) if row.get('priority', '').isdigit() else None,
'percent_complete': int(row['percent_complete']) if row.get('percent_complete', '').isdigit() else None,
'due': due,
'categories': row.get('categories') or row.get('kategorien'),
})
else:
blocks = re.findall(r'BEGIN:VTODO.*?END:VTODO', text, flags=re.DOTALL | re.IGNORECASE)
if not blocks:
return jsonify({'error': 'Keine VTODO-Daten gefunden'}), 400
for block in blocks:
parsed = parse_vtodo(block)
if not parsed:
skipped += 1
continue
_save(parsed, ical_block=block)
db.session.commit()
if imported:
notify_tasklist_change(tl.owner_id, tl.id, 'task', shared_with=_list_recipients(tl))
return jsonify({'imported': imported, 'skipped': skipped}), 200
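A minimal CSV file the importer above would accept: semicolon-separated, with the header names the exporter writes (comma and the German aliases `titel`, `beschreibung`, `kategorien` are also tried). The sample rows here are made up:

```python
import csv
import io

csv_text = (
    'summary;status;priority;due\n'
    'Steuer abgeben;NEEDS-ACTION;1;2026-05-31T23:59:00\n'
    'Backup pruefen;COMPLETED;;\n'
)
reader = csv.DictReader(io.StringIO(csv_text), delimiter=';')
# Same per-row normalisation as the import route: lowercase keys, strip values.
rows = [{k.strip().lower(): (v or '').strip() for k, v in row.items() if k}
        for row in reader]
print(rows[0]['summary'], rows[0]['priority'])  # Steuer abgeben 1
print(rows[1]['status'])                        # COMPLETED
```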
+44 -2
@@ -145,6 +145,12 @@ def delete_user(user_id):
@api_bp.route('/settings', methods=['GET'])
@admin_required
def get_settings():
import time as _time
from datetime import datetime as _dt
try:
tzname = _time.strftime('%Z')
except Exception:
tzname = ''
return jsonify({
'public_registration': AppSettings.get_bool('public_registration', default=True),
'system_smtp_host': AppSettings.get('system_smtp_host', ''),
@@ -155,6 +161,11 @@ def get_settings():
'system_email_from': AppSettings.get('system_email_from', ''),
'onlyoffice_url': os.environ.get('ONLYOFFICE_URL', ''),
'onlyoffice_configured': bool(os.environ.get('ONLYOFFICE_URL', '')),
# Read-only system info from the .env
'timezone': os.environ.get('TZ', 'Europe/Berlin'),
'timezone_abbr': tzname,
'server_time': _dt.now().isoformat(timespec='seconds'),
'ntp_server': os.environ.get('NTP_SERVER', ''),
}), 200
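The `timezone_abbr` reported here comes from the process-wide `TZ` setting. On POSIX systems (an assumption; `time.tzset()` does not exist on Windows) the flow is:

```python
import os
import time

# TZ env var -> tzset() -> strftime('%Z') reports the abbreviation.
os.environ['TZ'] = 'UTC'
time.tzset()
print(time.strftime('%Z'))  # UTC

os.environ['TZ'] = 'Europe/Berlin'
time.tzset()
print(time.strftime('%Z'))  # CET or CEST, depending on the date
```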
@@ -270,6 +281,31 @@ def create_invite_link():
# --- User search (for sharing dialogs) ---
@api_bp.route('/auth/me', methods=['GET'])
@token_required
def get_me():
return jsonify(request.current_user.to_dict(include_email=True)), 200
@api_bp.route('/auth/me', methods=['PUT'])
@token_required
def update_me():
user = request.current_user
data = request.get_json() or {}
if 'first_name' in data:
user.first_name = (data.get('first_name') or '').strip() or None
if 'last_name' in data:
user.last_name = (data.get('last_name') or '').strip() or None
if 'email' in data:
email = (data.get('email') or '').strip() or None
if email and email != user.email:
if User.query.filter(User.email == email, User.id != user.id).first():
return jsonify({'error': 'E-Mail ist bereits vergeben'}), 409
user.email = email
db.session.commit()
return jsonify(user.to_dict(include_email=True)), 200
@api_bp.route('/users/search', methods=['GET'])
@token_required
def search_users():
@@ -278,13 +314,19 @@ def search_users():
if len(query) < 2:
return jsonify([]), 200
like = f'%{query}%'
users = User.query.filter(
User.username.ilike(f'%{query}%'),
(User.username.ilike(like)) | (User.first_name.ilike(like)) | (User.last_name.ilike(like)),
User.id != request.current_user.id,
User.is_active == True,
).limit(10).all()
return jsonify([{'id': u.id, 'username': u.username} for u in users]), 200
return jsonify([{
'id': u.id,
'username': u.username,
'full_name': u.full_name,
'display_name': u.display_name,
} for u in users]), 200
# --- Change password (non-admin, own account) ---
+5
@@ -40,3 +40,8 @@ class Config:
# CORS
FRONTEND_URL = os.environ.get('FRONTEND_URL', 'http://localhost:3000')
# Time zone (process-wide; takes effect after time.tzset())
TIMEZONE = os.environ.get('TZ', 'Europe/Berlin')
# NTP server for the clock-offset check at startup. An empty string disables the check.
NTP_SERVER = os.environ.get('NTP_SERVER', 'ptbtime1.ptb.de')
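The offset check itself is not part of this excerpt; a minimal SNTP query along these lines is one assumed way to implement it (client-mode packet, transmit timestamp at byte offset 40, NTP epoch starts in 1900). The network call is left commented out:

```python
import socket
import struct
import time

NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def ntp_offset(server: str, timeout: float = 2.0) -> float:
    """Rough local-clock offset in seconds against an SNTP server."""
    packet = b'\x1b' + 47 * b'\x00'   # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        t_send = time.time()
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
        t_recv = time.time()
    # Transmit timestamp: 32-bit seconds + 32-bit fraction at offset 40.
    secs, frac = struct.unpack('!II', data[40:48])
    server_time = secs - NTP_EPOCH_DELTA + frac / 2**32
    return server_time - (t_send + t_recv) / 2

# ntp_offset('ptbtime1.ptb.de')  # requires network access
```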
+6
@@ -0,0 +1,6 @@
from flask import Blueprint
dav_bp = Blueprint('dav', __name__, url_prefix='/dav')
from . import caldav # noqa: F401,E402
from . import carddav # noqa: F401,E402
+780
@@ -0,0 +1,780 @@
"""Minimal CalDAV server (RFC 4791 subset).
Implements the endpoints that Thunderbird, DAVx5 and Apple Calendar
actually use in practice:
OPTIONS - capability advertisement (DAV: 1, 2, calendar-access)
PROPFIND Depth 0/1 - discovery chain + listings
REPORT calendar-query + calendar-multiget
GET single VCALENDAR resource
PUT create/update VCALENDAR resource
DELETE remove a resource or calendar collection
Non-goals for this revision: ACL reports, free-busy, sync-token based
incremental sync, scheduling (iTIP/iMIP). Clients fall back to full
PROPFIND refresh when sync-token isn't advertised, which is fine for
small personal calendars.
"""
from __future__ import annotations
import re
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from functools import wraps
from flask import Response, request
from app.extensions import db
from app.models.calendar import Calendar, CalendarEvent, CalendarShare
from app.models.user import User
from app.services.events import notify_calendar_change
def _cal_recipients(cal: 'Calendar'):
return [s.shared_with_id for s in
CalendarShare.query.filter_by(calendar_id=cal.id).all()]
from . import dav_bp
# ---------------------------------------------------------------------------
# XML namespace plumbing
# ---------------------------------------------------------------------------
NS = {
'd': 'DAV:',
'c': 'urn:ietf:params:xml:ns:caldav',
'cs': 'http://calendarserver.org/ns/',
'ic': 'http://apple.com/ns/ical/',
}
for prefix, uri in NS.items():
ET.register_namespace('' if prefix == 'd' else prefix, uri)
def _qn(prefix: str, local: str) -> str:
return f'{{{NS[prefix]}}}{local}'
def _xml_response(root: ET.Element, status: int = 207) -> Response:
# xml_declaration=True lets ET emit the declaration itself; prepending one
# manually would duplicate it, because ET.tostring(encoding='utf-8') already
# includes a declaration by default.
body = ET.tostring(root, encoding='utf-8', xml_declaration=True)
headers = {
'DAV': '1, 2, 3, calendar-access, addressbook',
'Content-Type': 'application/xml; charset=utf-8',
}
return Response(body, status=status, headers=headers)
# ---------------------------------------------------------------------------
# Authentication (HTTP Basic over the existing user table)
# ---------------------------------------------------------------------------
def _challenge() -> Response:
return Response(
'Authentication required', 401,
{'WWW-Authenticate': 'Basic realm="Mini-Cloud DAV"'}
)
def basic_auth(f):
@wraps(f)
def wrapper(*args, **kwargs):
auth = request.authorization
if not auth or not auth.username or not auth.password:
return _challenge()
user = User.query.filter_by(username=auth.username).first()
if not user or not user.is_active or not user.check_password(auth.password):
return _challenge()
request.dav_user = user
return f(*args, **kwargs)
return wrapper
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
DAV_HEADERS = {
'DAV': '1, 2, 3, calendar-access, addressbook',
}
ALLOW_COLLECTION = 'OPTIONS, PROPFIND, REPORT, DELETE, MKCALENDAR'
ALLOW_RESOURCE = 'OPTIONS, PROPFIND, GET, PUT, DELETE'
def _etag_for_event(event: CalendarEvent) -> str:
ts = int((event.updated_at or event.created_at or datetime.now(timezone.utc)).timestamp() * 1000)
return f'"{event.id}-{ts}"'
def _href_calendar(username: str, cal_id: int) -> str:
return f'/dav/{username}/cal-{cal_id}/'
def _href_event(username: str, cal_id: int, uid: str) -> str:
return f'/dav/{username}/cal-{cal_id}/{uid}.ics'
def _user_calendars(user: User):
return Calendar.query.filter_by(owner_id=user.id).all()
def _parse_calendar_path(path_part: str):
"""Input: "cal-42" -> 42, otherwise None."""
m = re.match(r'cal-(\d+)$', path_part)
return int(m.group(1)) if m else None
def _calendar_for(user: User, cal_id: int):
cal = db.session.get(Calendar, cal_id)
if not cal or cal.owner_id != user.id:
return None
return cal
# ---------------------------------------------------------------------------
# OPTIONS (advertise DAV capabilities on any path)
# ---------------------------------------------------------------------------
@dav_bp.route('/', methods=['OPTIONS'])
@dav_bp.route('/<path:subpath>', methods=['OPTIONS'])
def options(subpath=''):
headers = {
**DAV_HEADERS,
'Allow': 'OPTIONS, PROPFIND, REPORT, GET, PUT, DELETE, MKCALENDAR',
}
return Response('', status=200, headers=headers)
# ---------------------------------------------------------------------------
# PROPFIND
# ---------------------------------------------------------------------------
def _make_response(href: str, populate_prop) -> ET.Element:
"""Build a <response><href/><propstat><prop>...</prop><status>200</status>
</propstat></response> element. `populate_prop` is a callable that gets
the <prop> element and appends the actual property sub-elements to it."""
resp = ET.Element(_qn('d', 'response'))
ET.SubElement(resp, _qn('d', 'href')).text = href
propstat = ET.SubElement(resp, _qn('d', 'propstat'))
prop = ET.SubElement(propstat, _qn('d', 'prop'))
populate_prop(prop)
ET.SubElement(propstat, _qn('d', 'status')).text = 'HTTP/1.1 200 OK'
return resp
def _root_response(href: str, user: User) -> ET.Element:
def populate(prop):
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
ET.SubElement(prop, _qn('d', 'displayname')).text = 'Mini-Cloud DAV'
cup = ET.SubElement(prop, _qn('d', 'current-user-principal'))
ET.SubElement(cup, _qn('d', 'href')).text = f'/dav/{user.username}/'
return _make_response(href, populate)
def _principal_response(user: User) -> ET.Element:
href = f'/dav/{user.username}/'
def populate(prop):
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
ET.SubElement(rt, _qn('d', 'principal'))
ET.SubElement(prop, _qn('d', 'displayname')).text = user.username
cup = ET.SubElement(prop, _qn('d', 'current-user-principal'))
ET.SubElement(cup, _qn('d', 'href')).text = href
pu = ET.SubElement(prop, _qn('d', 'principal-URL'))
ET.SubElement(pu, _qn('d', 'href')).text = href
# Separate home-sets so clients (DAVx5!) don't mix calendars and
# addressbooks in the same listing.
cal_home = ET.SubElement(prop, _qn('c', 'calendar-home-set'))
ET.SubElement(cal_home, _qn('d', 'href')).text = f'/dav/{user.username}/calendars/'
ab_home = ET.SubElement(prop, '{urn:ietf:params:xml:ns:carddav}addressbook-home-set')
ET.SubElement(ab_home, _qn('d', 'href')).text = f'/dav/{user.username}/addressbooks/'
return _make_response(href, populate)
def _calendar_response(user: User, cal: Calendar) -> ET.Element:
href = _href_calendar(user.username, cal.id)
def populate(prop):
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
ET.SubElement(rt, _qn('c', 'calendar'))
ET.SubElement(prop, _qn('d', 'displayname')).text = cal.name
ET.SubElement(prop, _qn('c', 'calendar-description')).text = cal.description or ''
supported = ET.SubElement(prop, _qn('c', 'supported-calendar-component-set'))
comp = ET.SubElement(supported, _qn('c', 'comp'))
comp.set('name', 'VEVENT')
# supported-report-set: advertise which REPORTs this collection handles
srs = ET.SubElement(prop, _qn('d', 'supported-report-set'))
for report_name in ('calendar-query', 'calendar-multiget'):
sup = ET.SubElement(srs, _qn('d', 'supported-report'))
rep = ET.SubElement(sup, _qn('d', 'report'))
ET.SubElement(rep, _qn('c', report_name))
ET.SubElement(prop, _qn('ic', 'calendar-color')).text = cal.color or '#3788d8'
ET.SubElement(prop, _qn('cs', 'getctag')).text = _calendar_ctag(cal)
# current-user-privilege-set: advertise what the authenticated user is
# allowed to do. DAVx5 checks this to decide read-only vs read-write.
cups = ET.SubElement(prop, _qn('d', 'current-user-privilege-set'))
for priv_name in ('read', 'write', 'write-properties', 'write-content', 'bind', 'unbind'):
p = ET.SubElement(cups, _qn('d', 'privilege'))
ET.SubElement(p, _qn('d', priv_name))
return _make_response(href, populate)
def _calendar_ctag(cal: Calendar) -> str:
"""Collection tag: changes when any event in the calendar changes."""
last = db.session.query(db.func.max(CalendarEvent.updated_at)).filter_by(calendar_id=cal.id).scalar()
ts = int((last or cal.updated_at or datetime.now(timezone.utc)).timestamp())
return f'"{cal.id}-{ts}"'
def _event_response(user: User, cal: Calendar, event: CalendarEvent, include_data: bool = False) -> ET.Element:
href = _href_event(user.username, cal.id, event.uid)
def populate(prop):
ET.SubElement(prop, _qn('d', 'getetag')).text = _etag_for_event(event)
ET.SubElement(prop, _qn('d', 'getcontenttype')).text = \
'text/calendar; charset=utf-8; component=VEVENT'
ET.SubElement(prop, _qn('d', 'resourcetype')) # empty -> regular resource
if include_data:
ET.SubElement(prop, _qn('c', 'calendar-data')).text = _wrap_vcalendar(cal, event)
return _make_response(href, populate)
def _wrap_vcalendar(cal: Calendar, event: CalendarEvent) -> str:
"""Return a full VCALENDAR envelope around the event's ical_data."""
lines = [
'BEGIN:VCALENDAR',
'VERSION:2.0',
'PRODID:-//Mini-Cloud//DE',
'CALSCALE:GREGORIAN',
event.ical_data.strip() if event.ical_data else '',
'END:VCALENDAR',
]
return '\r\n'.join(lines)
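The envelope logic in isolation: a sketch (hypothetical `wrap_vcalendar`; the PRODID is a placeholder) showing that a stored VEVENT fragment gets framed into a complete, CRLF-joined VCALENDAR as RFC 5545 requires:

```python
def wrap_vcalendar(vevent_block: str) -> str:
    # CalDAV resources must be complete VCALENDAR objects; lines are
    # joined with CRLF per RFC 5545's content-line syntax.
    lines = [
        'BEGIN:VCALENDAR',
        'VERSION:2.0',
        'PRODID:-//Example//EN',
        'CALSCALE:GREGORIAN',
        vevent_block.strip(),
        'END:VCALENDAR',
    ]
    return '\r\n'.join(lines)

ics = wrap_vcalendar('BEGIN:VEVENT\r\nUID:abc\r\nSUMMARY:Demo\r\nEND:VEVENT')
print(ics.splitlines()[0])
```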
@dav_bp.route('/', methods=['PROPFIND'])
@dav_bp.route('/<path:subpath>', methods=['PROPFIND'])
@basic_auth
def propfind(subpath=''):
user: User = request.dav_user
depth = request.headers.get('Depth', '0')
multistatus = ET.Element(_qn('d', 'multistatus'))
parts = [p for p in subpath.split('/') if p]
# /dav/ (root) or / (when called via the app-level shortcut for DAVx5)
if not parts:
# Use the actual request path so clients like DAVx5 see an href
# that matches their request.
request_href = request.path if request.path.endswith('/') else request.path + '/'
multistatus.append(_root_response(request_href, user))
if depth != '0':
multistatus.append(_principal_response(user))
return _xml_response(multistatus)
# /dav/<username>/ : only the principal. Clients MUST follow the
# home sets (calendar-home-set / addressbook-home-set) - otherwise the
# containers would wrongly appear here as empty calendars (DAVx5).
if len(parts) == 1:
if parts[0] != user.username:
return Response('', 403)
multistatus.append(_principal_response(user))
return _xml_response(multistatus)
# /dav/<username>/calendars/ : calendars + task lists (DAVx5 detects
# VTODO lists automatically via supported-calendar-component-set).
if len(parts) == 2 and parts[1] == 'calendars':
if parts[0] != user.username:
return Response('', 403)
container = ET.Element(_qn('d', 'response'))
ET.SubElement(container, _qn('d', 'href')).text = f'/dav/{user.username}/calendars/'
propstat = ET.SubElement(container, _qn('d', 'propstat'))
prop = ET.SubElement(propstat, _qn('d', 'prop'))
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
ET.SubElement(prop, _qn('d', 'displayname')).text = 'Kalender'
ET.SubElement(propstat, _qn('d', 'status')).text = 'HTTP/1.1 200 OK'
multistatus.append(container)
if depth != '0':
for cal in _user_calendars(user):
multistatus.append(_calendar_response(user, cal))
from .taskdav import user_lists, list_response
for tl in user_lists(user):
multistatus.append(list_response(user, tl))
return _xml_response(multistatus)
# /dav/<username>/addressbooks/ : only addressbook collections
if len(parts) == 2 and parts[1] == 'addressbooks':
if parts[0] != user.username:
return Response('', 403)
from .carddav import _addressbook_response, _user_addressbooks
container = ET.Element(_qn('d', 'response'))
ET.SubElement(container, _qn('d', 'href')).text = f'/dav/{user.username}/addressbooks/'
propstat = ET.SubElement(container, _qn('d', 'propstat'))
prop = ET.SubElement(propstat, _qn('d', 'prop'))
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
ET.SubElement(prop, _qn('d', 'displayname')).text = 'Adressbücher'
ET.SubElement(propstat, _qn('d', 'status')).text = 'HTTP/1.1 200 OK'
multistatus.append(container)
if depth != '0':
for ab in _user_addressbooks(user):
multistatus.append(_addressbook_response(user, ab))
return _xml_response(multistatus)
# /dav/<username>/cal-<id>/ : calendar + events (tl-N is delegated)
if len(parts) == 2:
if parts[0] != user.username:
return Response('', 403)
if parts[1].startswith('tl-'):
from .taskdav import tl_propfind
return tl_propfind(username=parts[0], tl_part=parts[1])
cal_id = _parse_calendar_path(parts[1])
if cal_id is None:
return Response('Not found', 404)
cal = _calendar_for(user, cal_id)
if not cal:
return Response('Not found', 404)
multistatus.append(_calendar_response(user, cal))
if depth != '0':
for ev in CalendarEvent.query.filter_by(calendar_id=cal.id).all():
multistatus.append(_event_response(user, cal, ev))
return _xml_response(multistatus)
# /dav/<username>/cal-<id>/<uid>.ics : single event (tl-N is delegated)
if len(parts) == 3:
if parts[0] != user.username:
return Response('', 403)
if parts[1].startswith('tl-'):
from .taskdav import tl_task_propfind
return tl_task_propfind(username=parts[0], tl_part=parts[1], filename=parts[2])
cal_id = _parse_calendar_path(parts[1])
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('Not found', 404)
uid = parts[2].removesuffix('.ics')
ev = CalendarEvent.query.filter_by(calendar_id=cal.id, uid=uid).first()
if not ev:
return Response('Not found', 404)
multistatus.append(_event_response(user, cal, ev, include_data=True))
return _xml_response(multistatus)
return Response('Not found', 404)
# ---------------------------------------------------------------------------
# REPORT (calendar-query, calendar-multiget)
# ---------------------------------------------------------------------------
@dav_bp.route('/<path:subpath>', methods=['REPORT'])
@basic_auth
def report(subpath):
user: User = request.dav_user
parts = [p for p in subpath.split('/') if p]
if len(parts) < 2 or parts[0] != user.username:
return Response('', 403)
if parts[1].startswith('tl-'):
from .taskdav import tl_report
return tl_report(username=parts[0], tl_part=parts[1])
cal_id = _parse_calendar_path(parts[1])
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('Not found', 404)
try:
root = ET.fromstring(request.data or b'<x/>')
except ET.ParseError:
return Response('Malformed XML', 400)
multistatus = ET.Element(_qn('d', 'multistatus'))
tag = root.tag
# Check whether the client requested calendar-data. If not, we don't
# include it either - stricter per the RFC, and DAVx5 then cleanly
# decides "I need phase 2: multiget".
wants_data = root.find(f".//{_qn('c', 'calendar-data')}") is not None
if tag == _qn('c', 'calendar-multiget'):
hrefs = [h.text for h in root.findall(_qn('d', 'href')) if h.text]
for href in hrefs:
uid = href.rsplit('/', 1)[-1].removesuffix('.ics')
ev = CalendarEvent.query.filter_by(calendar_id=cal.id, uid=uid).first()
if ev:
multistatus.append(_event_response(user, cal, ev, include_data=True))
return _xml_response(multistatus)
if tag == _qn('c', 'calendar-query'):
start, end = _extract_time_range(root)
q = CalendarEvent.query.filter_by(calendar_id=cal.id)
if end is not None:
q = q.filter(CalendarEvent.dtstart < end)
if start is not None:
q = q.filter(
(CalendarEvent.dtend >= start) | (CalendarEvent.dtstart >= start)
| (CalendarEvent.recurrence_rule.isnot(None))
)
for ev in q.all():
multistatus.append(_event_response(user, cal, ev, include_data=wants_data))
return _xml_response(multistatus)
# Unknown report - return empty multistatus so clients don't break
return _xml_response(multistatus)
def _extract_time_range(root: ET.Element):
tr = root.find(f".//{_qn('c', 'time-range')}")
if tr is None:
return None, None
def parse(s):
if not s:
return None
s = s.replace('Z', '+00:00')
dt = None
try:
dt = datetime.fromisoformat(s)
except ValueError:
try:
dt = datetime.strptime(s, '%Y%m%dT%H%M%S%z')
except ValueError:
try:
dt = datetime.strptime(s[:15], '%Y%m%dT%H%M%S').replace(tzinfo=timezone.utc)
except ValueError:
return None
# Our DB columns are tz-naive (effectively UTC) - comparing against
# aware datetimes would raise TypeError. So strip the tz info.
if dt.tzinfo is not None:
dt = dt.astimezone(timezone.utc).replace(tzinfo=None)
return dt
return parse(tr.get('start')), parse(tr.get('end'))
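The nested-try fallback chain above can be written as a flat loop for illustration (hypothetical `parse_caldav_dt`, same three formats and the same tz-stripping normalization):

```python
from datetime import datetime, timezone

def parse_caldav_dt(s: str):
    # time-range attributes arrive as iCalendar UTC stamps like
    # 20250101T120000Z; accept ISO 8601 variants too, then normalize
    # to a tz-naive UTC datetime for comparison with naive DB columns.
    s = s.replace('Z', '+00:00')
    for attempt in (
        lambda: datetime.fromisoformat(s),
        lambda: datetime.strptime(s, '%Y%m%dT%H%M%S%z'),
        lambda: datetime.strptime(s[:15], '%Y%m%dT%H%M%S').replace(tzinfo=timezone.utc),
    ):
        try:
            dt = attempt()
            break
        except ValueError:
            continue
    else:
        return None
    if dt.tzinfo is not None:
        dt = dt.astimezone(timezone.utc)
    return dt.replace(tzinfo=None)

print(parse_caldav_dt('20250101T120000Z'))
```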
# ---------------------------------------------------------------------------
# GET single event
# ---------------------------------------------------------------------------
@dav_bp.route('/<username>/<cal_part>/<filename>', methods=['GET', 'HEAD'])
@basic_auth
def get_event(username, cal_part, filename):
if cal_part.startswith('ab-'):
from .carddav import ab_get
return ab_get(username=username, ab_part=cal_part, filename=filename)
if cal_part.startswith('tl-'):
from .taskdav import tl_get
return tl_get(username=username, tl_part=cal_part, filename=filename)
user: User = request.dav_user
if username != user.username:
return Response('', 403)
cal_id = _parse_calendar_path(cal_part)
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
ev = CalendarEvent.query.filter_by(calendar_id=cal.id, uid=uid).first()
if not ev:
return Response('Not found', 404)
return Response(
_wrap_vcalendar(cal, ev),
mimetype='text/calendar; charset=utf-8',
headers={'ETag': _etag_for_event(ev)},
)
# ---------------------------------------------------------------------------
# PUT event (create or update)
# ---------------------------------------------------------------------------
@dav_bp.route('/<username>/<cal_part>/<filename>', methods=['PUT'])
@basic_auth
def put_event(username, cal_part, filename):
if cal_part.startswith('ab-'):
from .carddav import ab_put
return ab_put(username=username, ab_part=cal_part, filename=filename)
if cal_part.startswith('tl-'):
from .taskdav import tl_put
return tl_put(username=username, tl_part=cal_part, filename=filename)
user: User = request.dav_user
if username != user.username:
return Response('', 403)
cal_id = _parse_calendar_path(cal_part)
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
raw = request.get_data(as_text=True) or ''
parsed = _parse_vevent(raw)
if not parsed:
return Response('Cannot parse VEVENT', 400)
# UID inside the body wins over the filename if present
body_uid = parsed.get('uid') or uid
existing = CalendarEvent.query.filter_by(calendar_id=cal.id, uid=body_uid).first()
if_match = request.headers.get('If-Match')
if_none_match = request.headers.get('If-None-Match')
if existing and if_none_match == '*':
return Response('', 412)
if if_match and existing and if_match.strip() != _etag_for_event(existing):
return Response('', 412)
created = existing is None
if created:
existing = CalendarEvent(calendar_id=cal.id, uid=body_uid, ical_data=raw)
db.session.add(existing)
existing.summary = parsed.get('summary') or '(ohne Titel)'
existing.description = parsed.get('description')
existing.location = parsed.get('location')
existing.dtstart = parsed.get('dtstart')
existing.dtend = parsed.get('dtend')
existing.all_day = parsed.get('all_day', False)
existing.recurrence_rule = parsed.get('rrule')
existing.exdates = ','.join(parsed.get('exdates', [])) or None
# Keep the raw VEVENT as-is so CalDAV clients round-trip faithfully.
existing.ical_data = _extract_vevent_block(raw)
existing.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_calendar_change(cal.owner_id, cal.id, 'event',
shared_with=_cal_recipients(cal))
# 201 for a newly created resource, 204 for a successful update
status = 201 if created else 204
return Response('', status, {'ETag': _etag_for_event(existing)})
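The If-Match / If-None-Match handling can be factored into a pure function for illustration (hypothetical `put_precondition_status`; slightly stricter than the handler above in that If-Match against a missing resource also fails, per RFC 7232):

```python
def put_precondition_status(existing_etag, if_match, if_none_match):
    """Return an HTTP status to short-circuit a CalDAV PUT, or None to proceed.

    If-None-Match: * means "create only" (fail if the resource exists);
    If-Match pins the update to one exact revision (lost-update guard).
    """
    if existing_etag is not None and if_none_match == '*':
        return 412  # resource exists but the client demanded creation
    if if_match is not None:
        # Stricter than a bare inequality: If-Match against a resource
        # that does not exist also fails the precondition.
        if existing_etag is None or if_match.strip() != existing_etag:
            return 412  # revision changed underneath the client
    return None

print(put_precondition_status('"1-5"', None, '*'))
```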
# ---------------------------------------------------------------------------
# DELETE
# ---------------------------------------------------------------------------
@dav_bp.route('/<username>/<cal_part>/<filename>', methods=['DELETE'])
@basic_auth
def delete_event(username, cal_part, filename):
if cal_part.startswith('ab-'):
from .carddav import ab_delete
return ab_delete(username=username, ab_part=cal_part, filename=filename)
if cal_part.startswith('tl-'):
from .taskdav import tl_delete
return tl_delete(username=username, tl_part=cal_part, filename=filename)
user: User = request.dav_user
if username != user.username:
return Response('', 403)
cal_id = _parse_calendar_path(cal_part)
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
ev = CalendarEvent.query.filter_by(calendar_id=cal.id, uid=uid).first()
if not ev:
return Response('', 404)
db.session.delete(ev)
db.session.commit()
notify_calendar_change(cal.owner_id, cal.id, 'event',
shared_with=_cal_recipients(cal))
return Response('', 204)
@dav_bp.route('/<username>/<cal_part>/', methods=['DELETE'])
@dav_bp.route('/<username>/<cal_part>', methods=['DELETE'])
@basic_auth
def delete_calendar(username, cal_part):
if cal_part.startswith('ab-'):
from .carddav import ab_delete_collection
return ab_delete_collection(username=username, ab_part=cal_part)
if cal_part.startswith('tl-'):
from .taskdav import tl_delete_collection
return tl_delete_collection(username=username, tl_part=cal_part)
user: User = request.dav_user
if username != user.username:
return Response('', 403)
cal_id = _parse_calendar_path(cal_part)
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('', 404)
recipients = _cal_recipients(cal)
owner_id = cal.owner_id
cid = cal.id
db.session.delete(cal)
db.session.commit()
notify_calendar_change(owner_id, cid, 'deleted', shared_with=recipients)
return Response('', 204)
# ---------------------------------------------------------------------------
# PROPPATCH (clients like to set the display color/name). We persist
# the calendar color (calendar-color) + display name; all other
# properties are acknowledged as "applied" to keep DAVx5/Apple happy.
# ---------------------------------------------------------------------------
@dav_bp.route('/<username>/<cal_part>/', methods=['PROPPATCH'])
@dav_bp.route('/<username>/<cal_part>', methods=['PROPPATCH'])
@basic_auth
def proppatch_calendar(username, cal_part):
if cal_part.startswith('tl-'):
from .taskdav import tl_proppatch
return tl_proppatch(username=username, tl_part=cal_part)
user: User = request.dav_user
if username != user.username:
return Response('', 403)
cal_id = _parse_calendar_path(cal_part)
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('Not found', 404)
try:
root = ET.fromstring(request.data or b'<x/>')
except ET.ParseError:
return Response('Malformed XML', 400)
for el in root.iter():
tag = el.tag
if tag == _qn('ic', 'calendar-color') and el.text:
cal.color = el.text.strip()[:7]
elif tag == _qn('d', 'displayname') and el.text:
cal.name = el.text.strip()[:255]
db.session.commit()
# Respond with 207 marking everything as applied so the client is happy.
multistatus = ET.Element(_qn('d', 'multistatus'))
href = _href_calendar(user.username, cal.id)
resp = ET.SubElement(multistatus, _qn('d', 'response'))
ET.SubElement(resp, _qn('d', 'href')).text = href
propstat = ET.SubElement(resp, _qn('d', 'propstat'))
prop = ET.SubElement(propstat, _qn('d', 'prop'))
# Echo back everything the client asked to set
for set_block in root.findall(_qn('d', 'set')):
inner_prop = set_block.find(_qn('d', 'prop'))
if inner_prop is not None:
for child in inner_prop:
ET.SubElement(prop, child.tag)
ET.SubElement(propstat, _qn('d', 'status')).text = 'HTTP/1.1 200 OK'
return _xml_response(multistatus)
# ---------------------------------------------------------------------------
# MKCALENDAR (create a new calendar collection via the DAV URL)
# ---------------------------------------------------------------------------
@dav_bp.route('/<username>/<cal_part>/', methods=['MKCALENDAR'])
@dav_bp.route('/<username>/<cal_part>', methods=['MKCALENDAR'])
@basic_auth
def mkcalendar(username, cal_part):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
# Extract display name from body if present
name = 'Neuer Kalender'
color = '#3788d8'
try:
body = request.get_data()
if body:
root = ET.fromstring(body)
dn = root.find(f".//{_qn('d', 'displayname')}")
if dn is not None and dn.text:
name = dn.text
col = root.find(f".//{_qn('ic', 'calendar-color')}")
if col is not None and col.text:
color = col.text[:7]
except ET.ParseError:
pass
cal = Calendar(owner_id=user.id, name=name, color=color)
db.session.add(cal)
db.session.commit()
return Response('', 201, {'Location': _href_calendar(user.username, cal.id)})
# ---------------------------------------------------------------------------
# VEVENT parser (quick & pragmatic - covers what the major CalDAV clients send)
# ---------------------------------------------------------------------------
def _extract_vevent_block(raw: str) -> str:
"""Return only the VEVENT block from a full VCALENDAR body. If none
is found the input is returned as-is."""
m = re.search(r'BEGIN:VEVENT[\s\S]*?END:VEVENT', raw, flags=re.IGNORECASE)
return m.group(0) if m else raw
def _unfold(raw: str) -> list[str]:
"""Undo RFC 5545 line folding (continuation lines start with space/tab)."""
lines = []
for line in raw.replace('\r\n', '\n').split('\n'):
if line.startswith((' ', '\t')) and lines:
lines[-1] += line[1:]
else:
lines.append(line)
return lines
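Unfolding in action: the same algorithm as `_unfold` reproduced standalone (RFC 5545 folds lines over 75 octets by inserting CRLF plus one space or tab; undoing it just glues continuation lines back on):

```python
def unfold(raw: str) -> list:
    # Continuation lines start with a single space or tab; strip that
    # marker and append the remainder to the previous logical line.
    lines = []
    for line in raw.replace('\r\n', '\n').split('\n'):
        if line.startswith((' ', '\t')) and lines:
            lines[-1] += line[1:]
        else:
            lines.append(line)
    return lines

folded = 'SUMMARY:This is a very lo\r\n ng summary'
print(unfold(folded))
```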
def _parse_dt(value: str, params: dict) -> tuple[datetime | None, bool]:
"""Parse an iCalendar DATE or DATE-TIME. Returns (datetime, all_day)."""
if not value:
return None, False
is_date = params.get('VALUE', '').upper() == 'DATE' or len(value) == 8
if is_date:
try:
return datetime.strptime(value, '%Y%m%d'), True
except ValueError:
return None, True
# Try Z (UTC), TZID-tagged, or naive floating time
val = value.replace('Z', '')
for fmt in ('%Y%m%dT%H%M%S', '%Y-%m-%dT%H:%M:%S', '%Y-%m-%d %H:%M:%S'):
try:
dt = datetime.strptime(val, fmt)
if value.endswith('Z'):
dt = dt.replace(tzinfo=timezone.utc)
return dt, False
except ValueError:
continue
return None, False
def _parse_vevent(raw: str) -> dict | None:
block = _extract_vevent_block(raw)
if 'BEGIN:VEVENT' not in block.upper():
return None
result: dict = {'exdates': []}
for line in _unfold(block):
if ':' not in line:
continue
key, _, value = line.partition(':')
# Separate parameters: "DTSTART;TZID=Europe/Berlin"
parts = key.split(';')
name = parts[0].upper()
params = {}
for p in parts[1:]:
if '=' in p:
k, v = p.split('=', 1)
params[k.upper()] = v
if name == 'UID':
result['uid'] = value.strip()
elif name == 'SUMMARY':
result['summary'] = _unescape(value)
elif name == 'DESCRIPTION':
result['description'] = _unescape(value)
elif name == 'LOCATION':
result['location'] = _unescape(value)
elif name == 'DTSTART':
dt, all_day = _parse_dt(value, params)
result['dtstart'] = dt
result['all_day'] = all_day
elif name == 'DTEND':
dt, _ = _parse_dt(value, params)
result['dtend'] = dt
elif name == 'RRULE':
result['rrule'] = value.strip()
elif name == 'EXDATE':
dt, all_day = _parse_dt(value, params)
if dt:
result['exdates'].append(
dt.strftime('%Y-%m-%d' if all_day else '%Y-%m-%dT%H:%M:%S')
)
if 'uid' not in result:
result['uid'] = str(uuid.uuid4())
return result
def _unescape(s: str) -> str:
return s.replace('\\n', '\n').replace('\\,', ',').replace('\\;', ';').replace('\\\\', '\\')
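The content-line splitting used by `_parse_vevent` as a standalone sketch (hypothetical `split_content_line`; note it splits at the first colon, so quoted parameter values containing ':' are not handled):

```python
def split_content_line(line: str):
    # "DTSTART;TZID=Europe/Berlin:20250101T100000" ->
    # ('DTSTART', {'TZID': 'Europe/Berlin'}, '20250101T100000')
    key, _, value = line.partition(':')
    parts = key.split(';')
    params = {}
    for p in parts[1:]:
        if '=' in p:
            k, v = p.split('=', 1)
            params[k.upper()] = v
    return parts[0].upper(), params, value

print(split_content_line('DTSTART;TZID=Europe/Berlin:20250101T100000'))
```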
@@ -0,0 +1,367 @@
"""Minimal CardDAV server (RFC 6352 subset).
Mirrors the structure of caldav.py - adds addressbook collections under
/dav/<username>/ab-<id>/
and serves vCard 3.0 resources via GET/PUT/DELETE plus addressbook-query
and addressbook-multiget REPORTs.
Reuses the auth + XML helpers from caldav.py to stay consistent.
"""
from __future__ import annotations
import re
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from flask import Response, request
from app.extensions import db
from app.models.contact import AddressBook, Contact, AddressBookShare
from app.models.user import User
from app.api.contacts import (
_apply_fields_to_contact, _build_vcard, parse_vcard,
_notify_addressbook, _book_recipients,
)
from . import dav_bp
from .caldav import (
NS, _qn, _xml_response, basic_auth, _make_response,
_principal_response, # reused as-is - already emits addressbook-home-set
)
# ---------------------------------------------------------------------------
# URL helpers
# ---------------------------------------------------------------------------
def _href_addressbook(username: str, book_id: int) -> str:
return f'/dav/{username}/ab-{book_id}/'
def _href_vcard(username: str, book_id: int, uid: str) -> str:
return f'/dav/{username}/ab-{book_id}/{uid}.vcf'
def _parse_addressbook_path(part: str):
m = re.match(r'ab-(\d+)$', part)
return int(m.group(1)) if m else None
def _user_addressbooks(user: User):
return AddressBook.query.filter_by(owner_id=user.id).all()
def _addressbook_for(user: User, book_id: int):
book = db.session.get(AddressBook, book_id)
if not book or book.owner_id != user.id:
return None
return book
# ---------------------------------------------------------------------------
# Property responses
# ---------------------------------------------------------------------------
def _addressbook_ctag(book: AddressBook) -> str:
last = db.session.query(db.func.max(Contact.updated_at)).filter_by(address_book_id=book.id).scalar()
ts = int((last or book.updated_at or datetime.now(timezone.utc)).timestamp())
return f'"ab{book.id}-{ts}"'
def _addressbook_response(user: User, book: AddressBook) -> ET.Element:
href = _href_addressbook(user.username, book.id)
def populate(prop):
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
# urn:ietf:params:xml:ns:carddav addressbook element
ab = ET.SubElement(rt, '{urn:ietf:params:xml:ns:carddav}addressbook') # noqa: F841
ET.SubElement(prop, _qn('d', 'displayname')).text = book.name
ET.SubElement(prop, '{urn:ietf:params:xml:ns:carddav}addressbook-description').text = book.description or ''
srs = ET.SubElement(prop, _qn('d', 'supported-report-set'))
for r in ('addressbook-query', 'addressbook-multiget'):
sup = ET.SubElement(srs, _qn('d', 'supported-report'))
rep = ET.SubElement(sup, _qn('d', 'report'))
ET.SubElement(rep, '{urn:ietf:params:xml:ns:carddav}' + r)
ET.SubElement(prop, _qn('ic', 'calendar-color')).text = book.color or '#3788d8'
ET.SubElement(prop, _qn('cs', 'getctag')).text = _addressbook_ctag(book)
cups = ET.SubElement(prop, _qn('d', 'current-user-privilege-set'))
for priv in ('read', 'write', 'write-properties', 'write-content', 'bind', 'unbind'):
p = ET.SubElement(cups, _qn('d', 'privilege'))
ET.SubElement(p, _qn('d', priv))
return _make_response(href, populate)
def _vcard_response(user: User, book: AddressBook, contact: Contact, include_data: bool = False) -> ET.Element:
href = _href_vcard(user.username, book.id, contact.uid)
def populate(prop):
ts = int((contact.updated_at or datetime.now(timezone.utc)).timestamp() * 1000)
ET.SubElement(prop, _qn('d', 'getetag')).text = f'"{contact.id}-{ts}"'
ET.SubElement(prop, _qn('d', 'getcontenttype')).text = 'text/vcard; charset=utf-8'
ET.SubElement(prop, _qn('d', 'resourcetype'))
if include_data:
ET.SubElement(prop, '{urn:ietf:params:xml:ns:carddav}address-data').text = \
contact.vcard_data or _build_vcard(contact)
return _make_response(href, populate)
def _etag_for_contact(contact: Contact) -> str:
ts = int((contact.updated_at or contact.created_at or datetime.now(timezone.utc)).timestamp() * 1000)
return f'"{contact.id}-{ts}"'
# ---------------------------------------------------------------------------
# Principal handling: the URL /dav/<username>/ is served by caldav's
# propfind dispatcher, and caldav._principal_response already emits both
# calendar-home-set and addressbook-home-set. CardDAV clients therefore
# discover their addressbook-home-set via the shared principal response;
# this module only has to handle the /dav/<user>/ab-<id>/ URL space below.
# ---------------------------------------------------------------------------
# OPTIONS / PROPFIND / REPORT / GET / PUT / DELETE for /dav/<user>/ab-<id>/...
# ---------------------------------------------------------------------------
_DAV_HEADERS = {'DAV': '1, 2, 3, addressbook'}
@dav_bp.route('/<username>/<ab_part>/', methods=['OPTIONS'])
@dav_bp.route('/<username>/<ab_part>', methods=['OPTIONS'])
def ab_options(username, ab_part):
if not ab_part.startswith('ab-'):
from .caldav import options as _cal_options
return _cal_options(subpath=f'{username}/{ab_part}')
return Response('', 200, {
'DAV': '1, 2, 3, addressbook',
'Allow': 'OPTIONS, PROPFIND, REPORT, GET, PUT, DELETE, PROPPATCH, MKCOL',
})
@dav_bp.route('/<username>/<ab_part>/', methods=['PROPFIND'])
@dav_bp.route('/<username>/<ab_part>', methods=['PROPFIND'])
@basic_auth
def ab_propfind(username, ab_part):
if not ab_part.startswith('ab-'):
from .caldav import propfind as _cal_propfind
return _cal_propfind(subpath=f'{username}/{ab_part}')
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('Not found', 404)
depth = request.headers.get('Depth', '0')
multistatus = ET.Element(_qn('d', 'multistatus'))
multistatus.append(_addressbook_response(user, book))
if depth != '0':
for c in book.contacts.all():
multistatus.append(_vcard_response(user, book, c))
return _xml_response(multistatus)
@dav_bp.route('/<username>/<ab_part>/<filename>', methods=['PROPFIND'])
@basic_auth
def ab_contact_propfind(username, ab_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('Not found', 404)
uid = filename.removesuffix('.vcf')
contact = Contact.query.filter_by(address_book_id=book.id, uid=uid).first()
if not contact:
return Response('Not found', 404)
multistatus = ET.Element(_qn('d', 'multistatus'))
multistatus.append(_vcard_response(user, book, contact, include_data=True))
return _xml_response(multistatus)
@dav_bp.route('/<username>/<ab_part>/', methods=['REPORT'])
@dav_bp.route('/<username>/<ab_part>', methods=['REPORT'])
@basic_auth
def ab_report(username, ab_part):
if not ab_part.startswith('ab-'):
from .caldav import report as _cal_report
return _cal_report(subpath=f'{username}/{ab_part}')
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('Not found', 404)
try:
root = ET.fromstring(request.data or b'<x/>')
except ET.ParseError:
return Response('Malformed XML', 400)
wants_data = root.find(f".//{{urn:ietf:params:xml:ns:carddav}}address-data") is not None
multistatus = ET.Element(_qn('d', 'multistatus'))
if root.tag == '{urn:ietf:params:xml:ns:carddav}addressbook-multiget':
hrefs = [h.text for h in root.findall(_qn('d', 'href')) if h.text]
for href in hrefs:
uid = href.rsplit('/', 1)[-1].removesuffix('.vcf')
contact = Contact.query.filter_by(address_book_id=book.id, uid=uid).first()
if contact:
multistatus.append(_vcard_response(user, book, contact, include_data=True))
return _xml_response(multistatus)
if root.tag == '{urn:ietf:params:xml:ns:carddav}addressbook-query':
# No filter implementation yet - return all
for contact in book.contacts.all():
multistatus.append(_vcard_response(user, book, contact, include_data=wants_data))
return _xml_response(multistatus)
return _xml_response(multistatus)
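The `addressbook-multiget` branch above can be exercised with a request body like the following (the hrefs are hypothetical); the same root-tag check and href extraction apply:

```python
import xml.etree.ElementTree as ET

CARDDAV = 'urn:ietf:params:xml:ns:carddav'
body = b'''<?xml version="1.0" encoding="utf-8"?>
<C:addressbook-multiget xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:carddav">
  <D:prop><D:getetag/><C:address-data/></D:prop>
  <D:href>/dav/alice/ab-1/abc-123.vcf</D:href>
</C:addressbook-multiget>'''

root = ET.fromstring(body)
is_multiget = root.tag == f'{{{CARDDAV}}}addressbook-multiget'
# findall with a bare tag searches direct children only - hrefs sit at top level
hrefs = [h.text for h in root.findall('{DAV:}href') if h.text]
uids = [h.rsplit('/', 1)[-1].removesuffix('.vcf') for h in hrefs]
```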
@dav_bp.route('/<username>/<ab_part>/<filename>', methods=['GET', 'HEAD'])
@basic_auth
def ab_get(username, ab_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('Not found', 404)
uid = filename.removesuffix('.vcf')
contact = Contact.query.filter_by(address_book_id=book.id, uid=uid).first()
if not contact:
return Response('Not found', 404)
return Response(
contact.vcard_data or _build_vcard(contact),
mimetype='text/vcard; charset=utf-8',
headers={'ETag': _etag_for_contact(contact)},
)
@dav_bp.route('/<username>/<ab_part>/<filename>', methods=['PUT'])
@basic_auth
def ab_put(username, ab_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('Not found', 404)
uid = filename.removesuffix('.vcf')
raw = request.get_data(as_text=True) or ''
parsed = parse_vcard(raw)
body_uid = parsed.get('uid') or uid
existing = Contact.query.filter_by(address_book_id=book.id, uid=body_uid).first()
if_match = request.headers.get('If-Match')
if_none_match = request.headers.get('If-None-Match')
if existing and if_none_match == '*':
return Response('', 412)
if if_match and existing and if_match.strip() != _etag_for_contact(existing):
return Response('', 412)
is_new = existing is None
if is_new:
existing = Contact(address_book_id=book.id, uid=body_uid, vcard_data=raw)
db.session.add(existing)
_apply_fields_to_contact(existing, parsed)
# Keep the raw vCard verbatim so client round-tripping is faithful;
# fall back to a server-rebuilt vCard only when the body was empty.
existing.vcard_data = raw.strip() or _build_vcard(existing)
existing.updated_at = datetime.now(timezone.utc)
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'contact',
shared_with=_book_recipients(book))
status = 201 if is_new else 204
return Response('', status, {'ETag': _etag_for_contact(existing)})
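The If-Match / If-None-Match handling in `ab_put` follows the RFC 7232 precondition rules; the decision table can be isolated as a pure function (a sketch, not the handler itself):

```python
def put_precondition_ok(exists, if_match, if_none_match, current_etag):
    """Return True when a conditional PUT may proceed (RFC 7232 subset).

    If-None-Match: *    -> create only, never overwrite an existing resource.
    If-Match: "<etag>"  -> overwrite only while the stored ETag still matches.
    """
    if exists and if_none_match == '*':
        return False  # resource exists but the client asked for create-only
    if if_match and exists and if_match.strip() != current_etag:
        return False  # lost-update protection: resource changed in between
    return True
```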
@dav_bp.route('/<username>/<ab_part>/<filename>', methods=['DELETE'])
@basic_auth
def ab_delete(username, ab_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('Not found', 404)
uid = filename.removesuffix('.vcf')
contact = Contact.query.filter_by(address_book_id=book.id, uid=uid).first()
if not contact:
return Response('', 404)
db.session.delete(contact)
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'contact',
shared_with=_book_recipients(book))
return Response('', 204)
@dav_bp.route('/<username>/<ab_part>/', methods=['DELETE'])
@dav_bp.route('/<username>/<ab_part>', methods=['DELETE'])
@basic_auth
def ab_delete_collection(username, ab_part):
if not ab_part.startswith('ab-'):
return Response('', 404)
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('', 404)
recipients = _book_recipients(book)
owner_id = book.owner_id
book_id = book.id
db.session.delete(book)
db.session.commit()
_notify_addressbook(owner_id, book_id, 'deleted', shared_with=recipients)
return Response('', 204)
@dav_bp.route('/<username>/<ab_part>/', methods=['MKCOL'])
@dav_bp.route('/<username>/<ab_part>', methods=['MKCOL'])
@basic_auth
def ab_mkcol(username, ab_part):
"""Create a new addressbook collection via MKCOL (RFC 5689 extended).
Some CardDAV clients (Apple) use this instead of MKCALENDAR."""
user: User = request.dav_user
if username != user.username:
return Response('', 403)
name = 'Neues Adressbuch'
try:
body = request.get_data()
if body:
root = ET.fromstring(body)
dn = root.find(f".//{_qn('d', 'displayname')}")
if dn is not None and dn.text:
name = dn.text
except ET.ParseError:
pass
book = AddressBook(owner_id=user.id, name=name)
db.session.add(book)
db.session.commit()
_notify_addressbook(user.id, book.id, 'created')
return Response('', 201, {'Location': _href_addressbook(user.username, book.id)})
@@ -0,0 +1,368 @@
"""CalDAV Task-List Handler (VTODO).
TaskLists werden parallel zu Calendars als Calendar-Collection
ausgeliefert, jedoch mit `<supported-calendar-component-set>` = VTODO
(statt VEVENT). DAVx5/OpenTasks erkennen sie dadurch automatisch als
Aufgabenliste.
URL-Schema:
/dav/<user>/tl-<id>/ Collection
/dav/<user>/tl-<id>/<uid>.ics VTODO-Resource
Diese Funktionen werden aus caldav.py heraus aufgerufen, sobald der
URL-Bestandteil mit `tl-` beginnt - parallel zur ab-/CardDAV-Delegation.
"""
from __future__ import annotations
import re
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from flask import Response, request
from app.extensions import db
from app.models.task import TaskList, Task
from app.models.user import User
from app.api.tasks import build_vtodo, parse_vtodo, _list_recipients
from app.services.events import notify_tasklist_change
# Re-use XML helpers from caldav.py
def _import_caldav_helpers():
from . import caldav
return caldav
def _qn(prefix, name):
return _import_caldav_helpers()._qn(prefix, name)
def _xml_response(elem):
return _import_caldav_helpers()._xml_response(elem)
def _make_response(href, populate):
return _import_caldav_helpers()._make_response(href, populate)
# ---------------------------------------------------------------------------
# Path / URL helpers
# ---------------------------------------------------------------------------
def parse_tl_path(part: str):
m = re.match(r'tl-(\d+)$', part)
return int(m.group(1)) if m else None
def href_list(username, lid):
return f'/dav/{username}/tl-{lid}/'
def href_task(username, lid, uid):
return f'/dav/{username}/tl-{lid}/{uid}.ics'
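The path helpers above accept only strictly numeric `tl-<id>` components; restated standalone as a sanity check:

```python
import re

def parse_tl_path(part):
    """Extract the numeric list id from a `tl-<id>` path component."""
    m = re.match(r'tl-(\d+)$', part)  # anchored: trailing junk must not match
    return int(m.group(1)) if m else None

def href_list(username, lid):
    return f'/dav/{username}/tl-{lid}/'
```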
def user_lists(user: User):
return TaskList.query.filter_by(owner_id=user.id).all()
def list_for(user: User, lid: int):
tl = db.session.get(TaskList, lid)
if not tl or tl.owner_id != user.id:
return None
return tl
def _ctag(tl: TaskList) -> str:
last = db.session.query(db.func.max(Task.updated_at)).filter_by(task_list_id=tl.id).scalar()
ts = int((last or tl.updated_at or datetime.now(timezone.utc)).timestamp())
return f'"tl{tl.id}-{ts}"'
def _etag(t: Task) -> str:
ts = int((t.updated_at or t.created_at or datetime.now(timezone.utc)).timestamp() * 1000)
return f'"{t.id}-{ts}"'
def _wrap_vcalendar(t: Task) -> str:
block = (t.ical_data or '').strip() or build_vtodo(t)
return '\r\n'.join([
'BEGIN:VCALENDAR', 'VERSION:2.0', 'PRODID:-//Mini-Cloud//DE',
'CALSCALE:GREGORIAN', block, 'END:VCALENDAR',
])
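Clients expect a complete iCalendar document rather than a bare component, so `_wrap_vcalendar` adds the envelope. A standalone sketch of the scheme (RFC 5545 mandates CRLF line endings and the VERSION/PRODID headers):

```python
def wrap_vcalendar(vtodo_block):
    """Wrap a bare VTODO component in a minimal VCALENDAR envelope."""
    return '\r\n'.join([
        'BEGIN:VCALENDAR', 'VERSION:2.0', 'PRODID:-//Mini-Cloud//DE',
        'CALSCALE:GREGORIAN', vtodo_block.strip(), 'END:VCALENDAR',
    ])

doc = wrap_vcalendar('BEGIN:VTODO\r\nUID:t-1\r\nSUMMARY:Demo\r\nEND:VTODO')
```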
# ---------------------------------------------------------------------------
# PROPFIND building blocks
# ---------------------------------------------------------------------------
def list_response(user: User, tl: TaskList) -> ET.Element:
href = href_list(user.username, tl.id)
def populate(prop):
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
ET.SubElement(rt, _qn('c', 'calendar'))
ET.SubElement(prop, _qn('d', 'displayname')).text = tl.name
ET.SubElement(prop, _qn('c', 'calendar-description')).text = tl.description or ''
supported = ET.SubElement(prop, _qn('c', 'supported-calendar-component-set'))
comp = ET.SubElement(supported, _qn('c', 'comp'))
comp.set('name', 'VTODO')
srs = ET.SubElement(prop, _qn('d', 'supported-report-set'))
for r in ('calendar-query', 'calendar-multiget'):
sup = ET.SubElement(srs, _qn('d', 'supported-report'))
rep = ET.SubElement(sup, _qn('d', 'report'))
ET.SubElement(rep, _qn('c', r))
ET.SubElement(prop, _qn('ic', 'calendar-color')).text = tl.color or '#10b981'
ET.SubElement(prop, _qn('cs', 'getctag')).text = _ctag(tl)
cups = ET.SubElement(prop, _qn('d', 'current-user-privilege-set'))
for priv in ('read', 'write', 'write-properties', 'write-content', 'bind', 'unbind'):
p = ET.SubElement(cups, _qn('d', 'privilege'))
ET.SubElement(p, _qn('d', priv))
return _make_response(href, populate)
def task_response(user: User, tl: TaskList, t: Task, include_data=False) -> ET.Element:
href = href_task(user.username, tl.id, t.uid)
def populate(prop):
ET.SubElement(prop, _qn('d', 'getetag')).text = _etag(t)
ET.SubElement(prop, _qn('d', 'getcontenttype')).text = \
'text/calendar; charset=utf-8; component=VTODO'
ET.SubElement(prop, _qn('d', 'resourcetype'))
if include_data:
ET.SubElement(prop, _qn('c', 'calendar-data')).text = _wrap_vcalendar(t)
return _make_response(href, populate)
# ---------------------------------------------------------------------------
# Handlers (entered from caldav.py when path starts with tl-)
# ---------------------------------------------------------------------------
def tl_propfind(username, tl_part):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
depth = request.headers.get('Depth', '0')
multi = ET.Element(_qn('d', 'multistatus'))
multi.append(list_response(user, tl))
if depth != '0':
for t in tl.tasks.all():
multi.append(task_response(user, tl, t))
return _xml_response(multi)
def tl_task_propfind(username, tl_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
t = Task.query.filter_by(task_list_id=tl.id, uid=uid).first()
if not t:
return Response('Not found', 404)
multi = ET.Element(_qn('d', 'multistatus'))
multi.append(task_response(user, tl, t, include_data=True))
return _xml_response(multi)
def tl_report(username, tl_part):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
try:
root = ET.fromstring(request.data or b'<x/>')
except ET.ParseError:
return Response('Malformed XML', 400)
wants_data = root.find(f".//{_qn('c', 'calendar-data')}") is not None
multi = ET.Element(_qn('d', 'multistatus'))
if root.tag == _qn('c', 'calendar-multiget'):
hrefs = [h.text for h in root.findall(_qn('d', 'href')) if h.text]
for href in hrefs:
uid = href.rsplit('/', 1)[-1].removesuffix('.ics')
t = Task.query.filter_by(task_list_id=tl.id, uid=uid).first()
if t:
multi.append(task_response(user, tl, t, include_data=True))
return _xml_response(multi)
if root.tag == _qn('c', 'calendar-query'):
for t in tl.tasks.all():
multi.append(task_response(user, tl, t, include_data=wants_data))
return _xml_response(multi)
return _xml_response(multi)
def tl_get(username, tl_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
t = Task.query.filter_by(task_list_id=tl.id, uid=uid).first()
if not t:
return Response('Not found', 404)
return Response(_wrap_vcalendar(t),
mimetype='text/calendar; charset=utf-8',
headers={'ETag': _etag(t)})
def tl_put(username, tl_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
raw = request.get_data(as_text=True) or ''
parsed = parse_vtodo(raw)
if not parsed:
return Response('Cannot parse VTODO', 400)
body_uid = parsed.get('uid') or uid
existing = Task.query.filter_by(task_list_id=tl.id, uid=body_uid).first()
if_match = request.headers.get('If-Match')
if_none_match = request.headers.get('If-None-Match')
if existing and if_none_match == '*':
return Response('', 412)
if if_match and existing and if_match.strip() != _etag(existing):
return Response('', 412)
is_new = existing is None
if is_new:
existing = Task(task_list_id=tl.id, uid=body_uid, ical_data=raw)
db.session.add(existing)
existing.summary = parsed.get('summary') or '(ohne Titel)'
existing.description = parsed.get('description')
existing.status = parsed.get('status') or 'NEEDS-ACTION'
existing.priority = parsed.get('priority')
existing.percent_complete = parsed.get('percent_complete')
existing.due = parsed.get('due')
existing.dtstart = parsed.get('dtstart')
existing.completed_at = parsed.get('completed_at')
cats = parsed.get('categories')
if isinstance(cats, str):
existing.categories = cats or None
elif isinstance(cats, list):
existing.categories = ','.join(cats) or None
# Preserve the raw VTODO block for faithful round-tripping
block = re.search(r'BEGIN:VTODO.*?END:VTODO', raw, flags=re.DOTALL | re.IGNORECASE)
existing.ical_data = (block.group(0).strip() if block else raw.strip()) or build_vtodo(existing)
existing.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'task', shared_with=_list_recipients(tl))
return Response('', 201 if is_new else 204, {'ETag': _etag(existing)})
def tl_delete(username, tl_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
t = Task.query.filter_by(task_list_id=tl.id, uid=uid).first()
if not t:
return Response('', 404)
db.session.delete(t)
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'task', shared_with=_list_recipients(tl))
return Response('', 204)
def tl_delete_collection(username, tl_part):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('', 404)
recipients = _list_recipients(tl)
owner_id = tl.owner_id
list_id = tl.id
db.session.delete(tl)
db.session.commit()
notify_tasklist_change(owner_id, list_id, 'deleted', shared_with=recipients)
return Response('', 204)
def tl_options(username, tl_part):
return Response('', 200, {
'DAV': '1, 2, 3, calendar-access, addressbook',
'Allow': 'OPTIONS, PROPFIND, REPORT, GET, PUT, DELETE, MKCALENDAR, PROPPATCH',
})
def tl_proppatch(username, tl_part):
"""Bestaetige Property-Updates damit Clients zufrieden sind. Wir
persistieren Displayname + Color, alles andere wird stillschweigend
akzeptiert."""
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
try:
root = ET.fromstring(request.data or b'<x/>')
except ET.ParseError:
return Response('Malformed XML', 400)
changed = False
for el in root.iter():
tag = (el.tag.split('}', 1)[1] if '}' in el.tag else el.tag).lower()
if tag == 'displayname' and el.text:
tl.name = el.text
changed = True
elif tag == 'calendar-color' and el.text:
tl.color = el.text[:7]
changed = True
if changed:
db.session.commit()
multi = ET.Element(_qn('d', 'multistatus'))
resp = ET.SubElement(multi, _qn('d', 'response'))
ET.SubElement(resp, _qn('d', 'href')).text = href_list(user.username, tl.id)
ps = ET.SubElement(resp, _qn('d', 'propstat'))
ET.SubElement(ps, _qn('d', 'status')).text = 'HTTP/1.1 200 OK'
return _xml_response(multi)
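The tag normalisation used in the loop above strips the Clark-notation `{namespace}` prefix that ElementTree attaches to every element name; in isolation:

```python
def local_name(tag):
    """Strip the `{namespace}` prefix ElementTree uses (Clark notation)."""
    return (tag.split('}', 1)[1] if '}' in tag else tag).lower()
```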
def tl_mkcol(username, tl_part):
"""Erstelle eine neue TaskList per MKCOL/MKCALENDAR. Der Pfadteil
`tl-N` ist bei MKCOL aber unbekannt - DAVx5 schickt einen frei
gewaehlten Namen wie `mein-task-uuid`. Daher: wir akzeptieren jeden
Pfadteil und legen eine TaskList an."""
user: User = request.dav_user
if username != user.username:
return Response('', 403)
name = 'Neue Aufgabenliste'
try:
body = request.get_data()
if body:
root = ET.fromstring(body)
for el in root.iter():
tag = (el.tag.split('}', 1)[1] if '}' in el.tag else el.tag).lower()
if tag == 'displayname' and el.text:
name = el.text
except ET.ParseError:
pass
tl = TaskList(owner_id=user.id, name=name)
db.session.add(tl)
db.session.commit()
notify_tasklist_change(user.id, tl.id, 'created')
return Response('', 201, {'Location': href_list(user.username, tl.id)})
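An extended-MKCOL body in the style DAVx5 sends, run through the same displayname extraction as above (the XML is illustrative, not captured traffic):

```python
import xml.etree.ElementTree as ET

body = b'''<?xml version="1.0" encoding="utf-8"?>
<D:mkcol xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
  <D:set><D:prop>
    <D:resourcetype><D:collection/><C:calendar/></D:resourcetype>
    <D:displayname>Einkaufsliste</D:displayname>
  </D:prop></D:set>
</D:mkcol>'''

name = 'Neue Aufgabenliste'  # fallback, as in tl_mkcol
root = ET.fromstring(body)
for el in root.iter():
    # namespace-agnostic match, exactly like the handler
    tag = (el.tag.split('}', 1)[1] if '}' in el.tag else el.tag).lower()
    if tag == 'displayname' and el.text:
        name = el.text
```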
@@ -2,6 +2,7 @@ from app.models.user import User
from app.models.file import File, FilePermission, ShareLink
from app.models.calendar import Calendar, CalendarEvent, CalendarShare
from app.models.contact import AddressBook, Contact, AddressBookShare
from app.models.task import TaskList, Task, TaskListShare
from app.models.email_account import EmailAccount
from app.models.password_vault import PasswordFolder, PasswordEntry, PasswordShare
from app.models.settings import AppSettings
@@ -13,6 +14,7 @@ __all__ = [
'File', 'FilePermission', 'ShareLink',
'Calendar', 'CalendarEvent', 'CalendarShare',
'AddressBook', 'Contact', 'AddressBookShare',
'TaskList', 'Task', 'TaskListShare',
'EmailAccount',
'PasswordFolder', 'PasswordEntry', 'PasswordShare',
'AppSettings',
@@ -12,6 +12,7 @@ class Calendar(db.Model):
color = db.Column(db.String(7), default='#3788d8')
description = db.Column(db.Text, nullable=True)
ical_token = db.Column(db.String(64), unique=True, nullable=True, index=True)
ical_password_hash = db.Column(db.String(255), nullable=True)
created_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc))
@@ -20,6 +21,7 @@ class Calendar(db.Model):
cascade='all, delete-orphan')
shares = db.relationship('CalendarShare', backref='calendar', lazy='dynamic',
cascade='all, delete-orphan')
# Note: `owner` is auto-created as a backref by User.calendars relationship
def to_dict(self):
return {
@@ -29,6 +31,7 @@ class Calendar(db.Model):
'color': self.color,
'description': self.description,
'ical_token': self.ical_token,
'ical_has_password': bool(self.ical_password_hash),
'created_at': self.created_at.isoformat() if self.created_at else None,
}
@@ -41,10 +44,14 @@ class CalendarEvent(db.Model):
uid = db.Column(db.String(255), unique=True, nullable=False)
ical_data = db.Column(db.Text, nullable=False) # Full VCALENDAR component
summary = db.Column(db.String(500), nullable=True)
description = db.Column(db.Text, nullable=True)
location = db.Column(db.String(500), nullable=True)
dtstart = db.Column(db.DateTime, nullable=True, index=True)
dtend = db.Column(db.DateTime, nullable=True)
all_day = db.Column(db.Boolean, default=False)
recurrence_rule = db.Column(db.Text, nullable=True)
exdates = db.Column(db.Text, nullable=True) # Comma-separated ISO dates (YYYY-MM-DD)
is_private = db.Column(db.Boolean, default=False, nullable=False)
created_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc))
@@ -55,10 +62,14 @@ class CalendarEvent(db.Model):
'calendar_id': self.calendar_id,
'uid': self.uid,
'summary': self.summary,
'description': self.description,
'location': self.location,
'dtstart': self.dtstart.isoformat() if self.dtstart else None,
'dtend': self.dtend.isoformat() if self.dtend else None,
'all_day': self.all_day,
'recurrence_rule': self.recurrence_rule,
'exdates': self.exdates.split(',') if self.exdates else [],
'is_private': bool(self.is_private),
'created_at': self.created_at.isoformat() if self.created_at else None,
'updated_at': self.updated_at.isoformat() if self.updated_at else None,
}
@@ -71,6 +82,7 @@ class CalendarShare(db.Model):
calendar_id = db.Column(db.Integer, db.ForeignKey('calendars.id'), nullable=False, index=True)
shared_with_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False, index=True)
permission = db.Column(db.String(20), nullable=False, default='read') # 'read' or 'readwrite'
color = db.Column(db.String(7), nullable=True) # Personal display color
shared_with = db.relationship('User', backref='shared_calendars')
@@ -10,6 +10,7 @@ class AddressBook(db.Model):
owner_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False, index=True)
name = db.Column(db.String(255), nullable=False)
description = db.Column(db.Text, nullable=True)
color = db.Column(db.String(7), default='#3788d8')
created_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc))
@@ -18,6 +19,7 @@ class AddressBook(db.Model):
cascade='all, delete-orphan')
shares = db.relationship('AddressBookShare', backref='address_book', lazy='dynamic',
cascade='all, delete-orphan')
# `owner` is auto-created as a backref by the User.address_books relationship
def to_dict(self):
return {
@@ -25,6 +27,7 @@ class AddressBook(db.Model):
'owner_id': self.owner_id,
'name': self.name,
'description': self.description,
'color': self.color,
'created_at': self.created_at.isoformat() if self.created_at else None,
}
@@ -36,22 +39,92 @@ class Contact(db.Model):
address_book_id = db.Column(db.Integer, db.ForeignKey('address_books.id'),
nullable=False, index=True)
uid = db.Column(db.String(255), unique=True, nullable=False)
vcard_data = db.Column(db.Text, nullable=False)
# Structured name fields
prefix = db.Column(db.String(64), nullable=True)
first_name = db.Column(db.String(128), nullable=True)
middle_name = db.Column(db.String(128), nullable=True)
last_name = db.Column(db.String(128), nullable=True, index=True)
suffix = db.Column(db.String(64), nullable=True)
display_name = db.Column(db.String(255), nullable=True, index=True)
nickname = db.Column(db.String(128), nullable=True)
# Organisation
organization = db.Column(db.String(255), nullable=True)
department = db.Column(db.String(255), nullable=True)
job_title = db.Column(db.String(255), nullable=True)
# Primary fields for quick listing (denormalised)
primary_email = db.Column(db.String(255), nullable=True, index=True)
primary_phone = db.Column(db.String(50), nullable=True)
# JSON-encoded multi-valued fields
# Each list entry: {"type": "home|work|other|mobile|fax|pager|...", "value": "..."}
emails = db.Column(db.Text, nullable=True)
phones = db.Column(db.Text, nullable=True)
# address: {"type": ..., "street": ..., "po_box": ..., "city": ...,
# "region": ..., "postal_code": ..., "country": ...}
addresses = db.Column(db.Text, nullable=True)
websites = db.Column(db.Text, nullable=True)
impp = db.Column(db.Text, nullable=True) # {"protocol": "skype", "value": "..."}
categories = db.Column(db.Text, nullable=True) # ["family", "work", ...]
# Dates
birthday = db.Column(db.String(10), nullable=True) # YYYY-MM-DD
anniversary = db.Column(db.String(10), nullable=True)
# Free text
notes = db.Column(db.Text, nullable=True)
# Photo: data URL (data:image/jpeg;base64,...) or http(s)://
photo = db.Column(db.Text, nullable=True)
# Legacy column kept for old clients / migrations
email = db.Column(db.String(255), nullable=True)
phone = db.Column(db.String(50), nullable=True)
created_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc))
def to_dict(self):
import json
def _loads(s, default):
if not s:
return default
try:
return json.loads(s)
except (ValueError, TypeError):
return default
return {
'id': self.id,
'address_book_id': self.address_book_id,
'uid': self.uid,
'prefix': self.prefix,
'first_name': self.first_name,
'middle_name': self.middle_name,
'last_name': self.last_name,
'suffix': self.suffix,
'display_name': self.display_name,
'email': self.email,
'phone': self.phone,
'nickname': self.nickname,
'organization': self.organization,
'department': self.department,
'job_title': self.job_title,
'emails': _loads(self.emails, []),
'phones': _loads(self.phones, []),
'addresses': _loads(self.addresses, []),
'websites': _loads(self.websites, []),
'impp': _loads(self.impp, []),
'categories': _loads(self.categories, []),
'birthday': self.birthday,
'anniversary': self.anniversary,
'notes': self.notes,
'photo': self.photo,
'primary_email': self.primary_email or self.email,
'primary_phone': self.primary_phone or self.phone,
'created_at': self.created_at.isoformat() if self.created_at else None,
'updated_at': self.updated_at.isoformat() if self.updated_at else None,
}
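The multi-valued fields above hold JSON arrays of `{"type": ..., "value": ...}` objects stored as text. A sketch of that shape plus the tolerant decoding `to_dict` performs (the helper name here is illustrative):

```python
import json

# Shape stored in the `emails` / `phones` text columns
emails_json = json.dumps([
    {'type': 'home', 'value': 'alice@example.org'},
    {'type': 'work', 'value': 'a.smith@example.com'},
])

def loads_or(s, default):
    """Decode a JSON column, falling back on NULL or malformed data."""
    if not s:
        return default
    try:
        return json.loads(s)
    except (ValueError, TypeError):
        return default

emails = loads_or(emails_json, [])
```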
@@ -65,6 +138,7 @@ class AddressBookShare(db.Model):
nullable=False, index=True)
shared_with_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False, index=True)
permission = db.Column(db.String(20), nullable=False, default='read')
color = db.Column(db.String(7), nullable=True) # personal display color
shared_with = db.relationship('User', backref='shared_address_books')
@@ -55,8 +55,11 @@ class FilePermission(db.Model):
file_id = db.Column(db.Integer, db.ForeignKey('files.id'), nullable=False, index=True)
user_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False, index=True)
permission = db.Column(db.String(20), nullable=False) # 'read', 'write', 'admin'
can_reshare = db.Column(db.Boolean, default=False, nullable=False)
granted_by = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=True)
user = db.relationship('User', foreign_keys=[user_id], backref='file_permissions')
grantor = db.relationship('User', foreign_keys=[granted_by])
__table_args__ = (
db.UniqueConstraint('file_id', 'user_id', name='uq_file_user_permission'),
@@ -0,0 +1,86 @@
from datetime import datetime, timezone
from app.extensions import db
class TaskList(db.Model):
__tablename__ = 'task_lists'
id = db.Column(db.Integer, primary_key=True)
owner_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False, index=True)
name = db.Column(db.String(255), nullable=False)
color = db.Column(db.String(7), default='#10b981')
description = db.Column(db.Text, nullable=True)
created_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc))
tasks = db.relationship('Task', backref='task_list', lazy='dynamic',
cascade='all, delete-orphan')
shares = db.relationship('TaskListShare', backref='task_list', lazy='dynamic',
cascade='all, delete-orphan')
def to_dict(self):
return {
'id': self.id,
'owner_id': self.owner_id,
'name': self.name,
'color': self.color,
'description': self.description,
'created_at': self.created_at.isoformat() if self.created_at else None,
}
class Task(db.Model):
__tablename__ = 'tasks'
id = db.Column(db.Integer, primary_key=True)
task_list_id = db.Column(db.Integer, db.ForeignKey('task_lists.id'), nullable=False, index=True)
uid = db.Column(db.String(255), unique=True, nullable=False)
ical_data = db.Column(db.Text, nullable=False, default='') # Full VTODO block
summary = db.Column(db.String(500), nullable=True)
description = db.Column(db.Text, nullable=True)
status = db.Column(db.String(32), nullable=True) # NEEDS-ACTION | IN-PROCESS | COMPLETED | CANCELLED
priority = db.Column(db.Integer, nullable=True) # 0 (none) to 9
percent_complete = db.Column(db.Integer, nullable=True) # 0..100
due = db.Column(db.DateTime, nullable=True, index=True)
dtstart = db.Column(db.DateTime, nullable=True)
completed_at = db.Column(db.DateTime, nullable=True)
categories = db.Column(db.Text, nullable=True) # comma-separated
created_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc))
def to_dict(self):
return {
'id': self.id,
'task_list_id': self.task_list_id,
'uid': self.uid,
'summary': self.summary,
'description': self.description,
'status': self.status or 'NEEDS-ACTION',
'priority': self.priority,
'percent_complete': self.percent_complete,
'due': self.due.isoformat() if self.due else None,
'dtstart': self.dtstart.isoformat() if self.dtstart else None,
'completed_at': self.completed_at.isoformat() if self.completed_at else None,
'categories': self.categories.split(',') if self.categories else [],
'created_at': self.created_at.isoformat() if self.created_at else None,
'updated_at': self.updated_at.isoformat() if self.updated_at else None,
}
class TaskListShare(db.Model):
__tablename__ = 'task_list_shares'
id = db.Column(db.Integer, primary_key=True)
task_list_id = db.Column(db.Integer, db.ForeignKey('task_lists.id'), nullable=False, index=True)
shared_with_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False, index=True)
permission = db.Column(db.String(20), nullable=False, default='read')
color = db.Column(db.String(7), nullable=True)
shared_with = db.relationship('User', backref='shared_task_lists')
__table_args__ = (
db.UniqueConstraint('task_list_id', 'shared_with_id', name='uq_task_list_share'),
)
@@ -9,6 +9,8 @@ class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(80), unique=True, nullable=False, index=True)
email = db.Column(db.String(255), unique=True, nullable=True)
first_name = db.Column(db.String(100), nullable=True)
last_name = db.Column(db.String(100), nullable=True)
password_hash = db.Column(db.String(255), nullable=False)
role = db.Column(db.String(20), default='user', nullable=False) # 'admin' or 'user'
master_key_salt = db.Column(db.LargeBinary, nullable=True) # For password manager
@@ -23,6 +25,7 @@ class User(db.Model):
foreign_keys='File.owner_id')
calendars = db.relationship('Calendar', backref='owner', lazy='dynamic')
address_books = db.relationship('AddressBook', backref='owner', lazy='dynamic')
task_lists = db.relationship('TaskList', backref='owner', lazy='dynamic')
email_accounts = db.relationship('EmailAccount', backref='user', lazy='dynamic',
order_by='EmailAccount.sort_order')
password_folders = db.relationship('PasswordFolder', backref='owner', lazy='dynamic')
@@ -33,10 +36,25 @@ class User(db.Model):
def check_password(self, password):
return bcrypt.check_password_hash(self.password_hash, password)
@property
def full_name(self) -> str:
"""Vor- + Nachname zusammengesetzt, sonst Leerstring."""
parts = [self.first_name or '', self.last_name or '']
return ' '.join(p.strip() for p in parts if p and p.strip())
@property
def display_name(self) -> str:
"""Voller Name falls vorhanden, sonst Username."""
return self.full_name or self.username
def to_dict(self, include_email=False):
data = {
'id': self.id,
'username': self.username,
'first_name': self.first_name or '',
'last_name': self.last_name or '',
'full_name': self.full_name,
'display_name': self.display_name,
'role': self.role,
'is_active': self.is_active,
'storage_quota_mb': self.storage_quota_mb,
@@ -0,0 +1,104 @@
"""In-memory event broadcaster for SSE clients.
Each logged-in user can have multiple connected clients (desktop, web,
mobile). Every client gets its own queue. Mutating file operations push
an event into the queues of every affected user.
"""
from __future__ import annotations
import json
import queue
import threading
import time
from typing import Iterable
class EventBroadcaster:
def __init__(self) -> None:
self._lock = threading.Lock()
# user_id -> list[queue.Queue]
self._subs: dict[int, list[queue.Queue]] = {}
def subscribe(self, user_id: int) -> queue.Queue:
q: queue.Queue = queue.Queue(maxsize=256)
with self._lock:
self._subs.setdefault(user_id, []).append(q)
return q
def unsubscribe(self, user_id: int, q: queue.Queue) -> None:
with self._lock:
lst = self._subs.get(user_id)
if not lst:
return
try:
lst.remove(q)
except ValueError:
pass
if not lst:
self._subs.pop(user_id, None)
def publish(self, user_ids: Iterable[int], event: dict) -> None:
payload = dict(event)
payload.setdefault('ts', time.time())
with self._lock:
for uid in set(user_ids):
for q in self._subs.get(uid, []):
try:
q.put_nowait(payload)
except queue.Full:
pass # slow client - drop event
def stream(self, user_id: int):
"""Generator yielding SSE-formatted strings for one client."""
q = self.subscribe(user_id)
try:
# Initial hello so the client knows it's connected
yield f"event: hello\ndata: {json.dumps({'user_id': user_id})}\n\n"
while True:
try:
event = q.get(timeout=20.0)
except queue.Empty:
# Heartbeat / keepalive comment - also keeps proxies happy
yield ": keepalive\n\n"
continue
kind = event.get('type', 'change')
yield f"event: {kind}\ndata: {json.dumps(event)}\n\n"
finally:
self.unsubscribe(user_id, q)
broadcaster = EventBroadcaster()
def notify_file_change(owner_id: int, file_id: int | None, change: str,
shared_with: Iterable[int] = ()) -> None:
"""Emit a file change event to the owner plus any users with share access."""
recipients = [owner_id, *shared_with]
broadcaster.publish(recipients, {
'type': 'file',
'change': change, # 'created' | 'updated' | 'deleted' | 'locked' | 'unlocked'
'file_id': file_id,
})
def notify_calendar_change(owner_id: int, calendar_id: int, change: str,
shared_with: Iterable[int] = ()) -> None:
"""Emit a calendar-level change event (event added/changed/deleted or
share membership changed). Sent to owner + all users the calendar is
shared with."""
recipients = [owner_id, *shared_with]
broadcaster.publish(recipients, {
'type': 'calendar',
'change': change, # 'event'|'share'|'deleted'
'calendar_id': calendar_id,
})
def notify_tasklist_change(owner_id: int, list_id: int, change: str,
shared_with: Iterable[int] = ()) -> None:
recipients = [owner_id, *shared_with]
broadcaster.publish(recipients, {
'type': 'tasklist',
'change': change, # 'task'|'share'|'deleted'|'created'
'task_list_id': list_id,
})
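The fan-out above is easy to exercise without a web server. A minimal standalone sketch (re-declaring a stripped-down copy of the broadcaster so it runs on its own) shows that two connected clients of the same user each receive their own copy of a published event:

```python
import queue
import threading
import time

class EventBroadcaster:
    """Stripped-down copy of the broadcaster above, for illustration only."""

    def __init__(self):
        self._lock = threading.Lock()
        self._subs = {}  # user_id -> list[queue.Queue]

    def subscribe(self, user_id):
        q = queue.Queue(maxsize=256)
        with self._lock:
            self._subs.setdefault(user_id, []).append(q)
        return q

    def publish(self, user_ids, event):
        payload = dict(event)
        payload.setdefault('ts', time.time())
        with self._lock:
            for uid in set(user_ids):
                for q in self._subs.get(uid, []):
                    try:
                        q.put_nowait(payload)
                    except queue.Full:
                        pass  # slow client - drop the event

b = EventBroadcaster()
desktop = b.subscribe(7)   # same user, two connected clients
browser = b.subscribe(7)
# Publishing to a user without subscribers (99) is a silent no-op.
b.publish([7, 99], {'type': 'file', 'change': 'created', 'file_id': 123})

e1 = desktop.get_nowait()  # each queue got its own copy
e2 = browser.get_nowait()
print(e1['change'], e2['file_id'])  # created 123
```

Note that slow consumers never block the publisher: a full queue simply drops the event, which matches the SSE use case where a reconnecting client resyncs anyway.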
@@ -0,0 +1,56 @@
"""Leichtgewichtiger SNTP-Client zum Pruefen des Zeit-Offsets.
Im Container koennen wir die Systemzeit nicht wirklich setzen (braucht
CAP_SYS_TIME). Aber wir koennen den Offset ermitteln und loggen, damit
der Admin weiss, ob der Host driftet. Fuer einen harten Sync muss auf
dem Host selbst ein NTP-Daemon laufen.
"""
from __future__ import annotations
import socket
import struct
import time
_NTP_EPOCH_OFFSET = 2208988800 # seconds between 1900 and 1970
def query_ntp(server: str, timeout: float = 3.0, port: int = 123) -> float | None:
"""Fragt einen NTP-Server und gibt das Offset (Server - Local) in
Sekunden zurueck, oder None bei Fehler."""
packet = b'\x1b' + 47 * b'\0' # LI=0, VN=3, Mode=3 (client)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(timeout)
try:
t0 = time.time()
sock.sendto(packet, (server, port))
data, _ = sock.recvfrom(1024)
t3 = time.time()
except (socket.gaierror, socket.timeout, OSError):
return None
finally:
sock.close()
if len(data) < 48:
return None
# Transmit timestamp: Offset 40, 8 bytes, fixed point 32.32
secs, frac = struct.unpack('!II', data[40:48])
if secs == 0:
return None
t2 = secs - _NTP_EPOCH_OFFSET + frac / 2**32
# Simplified offset (ignoring round-trip delay): t2 - (t0 + t3) / 2
return t2 - (t0 + t3) / 2
def check_and_log(server: str, logger=None) -> float | None:
import logging
log = logger or logging.getLogger('ntp')
offset = query_ntp(server)
if offset is None:
log.warning('NTP check: server %s unreachable', server)
return None
if abs(offset) > 5.0:
log.warning('NTP check: system time is off by %.2fs from %s -> synchronize the host clock!',
offset, server)
else:
log.info('NTP check: offset %.3fs against %s (ok)', offset, server)
return offset
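The 32.32 fixed-point decoding above is easy to check by hand. A small standalone sketch (with made-up timestamps, not real network data) packs a fake server reply and reproduces the same arithmetic as `query_ntp`:

```python
import struct

_NTP_EPOCH_OFFSET = 2208988800  # seconds between the 1900 and 1970 epochs

# Fake transmit timestamp: Unix time 1_700_000_000 plus half a second,
# encoded as 32.32 fixed point the way an NTP server would send it.
secs = 1_700_000_000 + _NTP_EPOCH_OFFSET
frac = 2**31  # 0.5 in 32.32 fixed point
reply = b'\x00' * 40 + struct.pack('!II', secs, frac)

tx_secs, tx_frac = struct.unpack('!II', reply[40:48])
t2 = tx_secs - _NTP_EPOCH_OFFSET + tx_frac / 2**32  # -> 1700000000.5

# Pretend the request left at t0 and the reply arrived at t3:
t0, t3 = 1_700_000_000.4, 1_700_000_000.8
offset = t2 - (t0 + t3) / 2  # server is ~0.1 s behind the midpoint
print(round(offset, 3))
```

The simplification relative to full NTP is that only the transmit timestamp is used; a symmetric network path is assumed, so half the round trip cancels out in the midpoint.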
@@ -0,0 +1,70 @@
# Native file-provider integration (placeholder mode)
In addition to the classic "copy everything" sync, the desktop client
offers a **OneDrive-style placeholder mode**: files show up in the file
manager as small metadata files (placeholders) and are only downloaded
from the server on double-click.
## Status
| Platform | Status | Technology |
| --------- | --------- | ------------------------------------ |
| Windows | **MVP** | Cloud Files API (`cfapi.dll`) |
| Linux | Skeleton | FUSE (libfuse3) - feature `linux_fuse` |
| macOS | Planned | `NSFileProviderExtension` + signing |
## Windows
### Requirements
- Windows 10 1709 (build 16299) or newer
- The client runs as a regular user process (no admin rights required)
### What works
- `CfRegisterSyncRoot` registers a folder as a sync root; Explorer shows
cloud overlay icons.
- `CfCreatePlaceholders` creates a placeholder with the correct size and
modification time for every Mini-Cloud file.
- The `FETCH_DATA` callback downloads via range request from the server as
soon as Explorer asks for file data (e.g. on open).
- `CfSetPinState` allows manual "always keep offline" / "cloud only".
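The `FETCH_DATA` path boils down to serving byte ranges. A small standalone sketch (illustrative only, not the client's actual code) of the slicing a range-request handler has to get right, including clamping at end of file:

```python
def slice_range(data: bytes, start: int, length: int) -> bytes:
    """Return the chunk a FETCH_DATA handler would deliver for a
    `Range: bytes=start-(start+length-1)` request, clamped to EOF."""
    end = min(start + length, len(data))
    return data[start:end]

blob = bytes(range(200))           # pretend this is the remote file
chunk = slice_range(blob, 100, 50)
tail = slice_range(blob, 180, 50)  # request runs past EOF -> clamped
print(len(chunk), chunk[0], len(tail))  # 50 100 20
```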
### What is still missing
- Upload callback (`NOTIFY_FILE_CLOSE_COMPLETION`) for locally changed files
- "Check in/out" context menu via shell extension
- Delta updates (new/deleted files on the server -> local placeholders)
- Conflict resolution
### Enabling
Turn on the **"Cloud Files mode"** switch in the client UI (internally this
invokes the `cloud_files_enable` command). Alternatively, on the command
line at build time:
```powershell
# From clients/desktop/src-tauri:
cargo build --release
```
Windows targets need the Windows SDK (but they also cross-compile cleanly
from Linux via `cargo xwin` when `build.sh windows` runs).
## Linux
The FUSE provider is optional and behind a feature flag so that normal
Linux builds do not require `libfuse3-dev`:
```bash
cargo build --features linux_fuse
```
Overlay icons in the file manager (Nautilus / Dolphin / Caja) additionally
need a native extension per desktop environment - that will follow in a
later commit.
## macOS
Needs an Apple Developer ID + notarization, because Finder otherwise
refuses to load an `NSFileProviderExtension`. Will be tackled as soon as
an Apple dev account is available.
@@ -2009,16 +2009,6 @@ dependencies = [
"unicode-segmentation",
]
[[package]]
name = "keyring"
version = "3.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eebcc3aff044e5944a8fbaf69eb277d11986064cba30c468730e8b9909fb551c"
dependencies = [
"log",
"zeroize",
]
[[package]]
name = "kqueue"
version = "1.1.1"
@@ -2248,10 +2238,11 @@ dependencies = [
name = "minicloud-sync"
version = "0.1.0"
dependencies = [
"base64 0.22.1",
"chrono",
"dirs",
"keyring",
"notify",
"open",
"reqwest 0.12.28",
"rusqlite",
"serde",
@@ -19,7 +19,7 @@ tauri-plugin-dialog = "2"
tauri-plugin-notification = "2"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
reqwest = { version = "0.12", features = ["json", "multipart", "rustls-tls"], default-features = false }
reqwest = { version = "0.12", features = ["json", "multipart", "rustls-tls", "blocking"], default-features = false }
tokio = { version = "1", features = ["full"] }
notify = "7"
sha2 = "0.10"
@@ -28,3 +28,30 @@ rusqlite = { version = "0.34", features = ["bundled"] }
chrono = { version = "0.4", features = ["serde"] }
base64 = "0.22"
open = "5"
once_cell = "1"
# Platform-specific file-provider integration (OneDrive-like).
# Only link against the Cloud Files API (cfapi.dll) on Windows.
[target.'cfg(windows)'.dependencies]
windows = { version = "0.58", features = [
"Win32_Foundation",
"Win32_Storage_FileSystem",
"Win32_Storage_CloudFilters",
"Win32_System_IO",
"Win32_System_Com",
"Win32_System_CorrelationVector", # gate fuer CF_CALLBACK_INFO / CfExecute / CfConnectSyncRoot
"Win32_UI_Shell",
"Win32_Security",
"Win32_System_Registry",
] }
widestring = "1"
winreg = "0.52"
# Linux: FUSE-based virtual filesystem (optional, cargo build --features linux_fuse)
[target.'cfg(target_os = "linux")'.dependencies]
fuser = { version = "0.15", optional = true }
libc = "0.2"
[features]
default = []
linux_fuse = ["fuser"]
@@ -0,0 +1,25 @@
//! Linux FUSE-based file-provider integration (placeholder mode).
//!
//! Status: skeleton. Only works when built with `--features linux_fuse`
//! and `libfuse3-dev` is installed. Overlay icons in the file manager
//! (Nautilus/Dolphin) will follow later as a separate extension - the
//! FUSE filesystem itself cannot set them.
#![cfg(all(target_os = "linux", feature = "linux_fuse"))]
use super::RemoteEntry;
use std::path::PathBuf;
pub fn mount(mount_point: &PathBuf) -> Result<(), String> {
std::fs::create_dir_all(mount_point).map_err(|e| e.to_string())?;
// TODO: fuser::Filesystem impl with on-demand download
Err("Linux FUSE provider: not implemented yet (MVP to follow)".into())
}
pub fn unmount(_mount_point: &PathBuf) -> Result<(), String> {
Err("Linux FUSE-Provider: noch nicht implementiert".into())
}
pub fn populate(_mount_point: &PathBuf, _entries: &[RemoteEntry]) -> Result<(), String> {
Err("Linux FUSE-Provider: noch nicht implementiert".into())
}
@@ -0,0 +1,121 @@
//! Native file-provider integration (placeholder files as in OneDrive).
//!
//! On Windows this is implemented via the Cloud Files API (cfapi.dll), on
//! Linux via FUSE (optional, behind the `linux_fuse` feature). macOS will
//! follow later via NSFileProviderExtension (needs an Apple signature).
//!
//! The existing `sync::engine` stays untouched and still provides the
//! classic "copy everything locally" mode. Cloud Files mode is essentially
//! "files on demand": a file is only downloaded when it is accessed.
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
/// An entry from the Mini-Cloud sync tree as delivered by the server.
/// Used by both platforms to create placeholders / FUSE inodes.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RemoteEntry {
pub id: i64,
pub name: String,
pub parent_id: Option<i64>,
pub is_folder: bool,
pub size: i64,
/// UTC-ISO8601
pub modified_at: String,
/// SHA-256 if delivered by the server, otherwise None.
pub checksum: Option<String>,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SyncState {
/// File exists only as a placeholder (online-only).
Cloud,
/// File is fully present locally and up to date.
InSync,
/// Changed locally, upload pending.
PendingUpload,
/// Locked on the server (by another user).
LockedByOther,
/// Locked by this client.
LockedLocal,
}
#[cfg(windows)]
pub mod windows;
#[cfg(windows)]
pub mod shell_integration;
#[cfg(all(target_os = "linux", feature = "linux_fuse"))]
pub mod linux;
pub mod sync_loop;
pub mod watcher;
/// Register the sync root with the operating system. Depending on the
/// platform, this calls cfapi/CfRegisterSyncRoot or mounts a FUSE filesystem.
#[allow(unused_variables)]
pub fn register_sync_root(
mount_point: &PathBuf,
provider_name: &str,
account_id: &str,
) -> Result<(), String> {
#[cfg(windows)]
return windows::register_sync_root(mount_point, provider_name, account_id);
#[cfg(all(target_os = "linux", feature = "linux_fuse"))]
return linux::mount(mount_point);
#[cfg(not(any(windows, all(target_os = "linux", feature = "linux_fuse"))))]
Err("File-Provider-Integration fuer diese Plattform noch nicht verfuegbar".into())
}
#[allow(unused_variables)]
pub fn unregister_sync_root(mount_point: &PathBuf) -> Result<(), String> {
#[cfg(windows)]
return windows::unregister_sync_root(mount_point);
#[cfg(all(target_os = "linux", feature = "linux_fuse"))]
return linux::unmount(mount_point);
#[cfg(not(any(windows, all(target_os = "linux", feature = "linux_fuse"))))]
Err("File-Provider-Integration fuer diese Plattform noch nicht verfuegbar".into())
}
/// Create placeholders (cloud-only files) for all remote entries.
/// Folders are created as real directories, files as placeholders
/// carrying the stored metadata (size, mtime, ID).
#[allow(unused_variables)]
pub fn populate_placeholders(
mount_point: &PathBuf,
entries: &[RemoteEntry],
) -> Result<(), String> {
#[cfg(windows)]
return windows::populate_placeholders(mount_point, entries);
#[cfg(all(target_os = "linux", feature = "linux_fuse"))]
return linux::populate(mount_point, entries);
#[cfg(not(any(windows, all(target_os = "linux", feature = "linux_fuse"))))]
Err("File-Provider-Integration fuer diese Plattform noch nicht verfuegbar".into())
}
/// Is file-provider integration available on this platform at all?
pub fn is_supported() -> bool {
cfg!(windows) || cfg!(all(target_os = "linux", feature = "linux_fuse"))
}
/// Mark an already locally present file as "always keep offline".
#[allow(unused_variables)]
pub fn pin_file(path: &PathBuf) -> Result<(), String> {
#[cfg(windows)]
return windows::set_pin_state(path, true);
#[cfg(not(windows))]
Err("Nur auf Windows unterstuetzt".into())
}
#[allow(unused_variables)]
pub fn unpin_file(path: &PathBuf) -> Result<(), String> {
#[cfg(windows)]
return windows::set_pin_state(path, false);
#[cfg(not(windows))]
Err("Nur auf Windows unterstuetzt".into())
}
@@ -0,0 +1,206 @@
//! Explorer sidebar integration for Windows (no admin rights required).
//!
//! Registers the sync folder as a shell namespace extension under
//! HKEY_CURRENT_USER so that it appears with its own icon in the
//! navigation pane of File Explorer (like OneDrive/Dropbox).
//!
//! Unlike the actual Cloud Files API, this is pure registry cosmetics -
//! the folder also works without the sidebar entry, you just will not
//! see it in the left pane.
#![cfg(windows)]
use std::path::Path;
use winreg::enums::*;
use winreg::RegKey;
// Stable GUID for Mini-Cloud - same as the ProviderId in windows.rs.
const CLSID_GUID: &str = "{4D696E69-436C-6F75-6444-7566667944AB}";
// Standard CLSID for the generic shell folder implementation.
const SHELL_FOLDER_CLSID: &str = "{0E5AAE11-A475-4c5b-AB00-C66DE400274E}";
/// Register the mount folder in the Explorer navigation pane.
/// `icon_source`: path to an ICO, or an EXE with icon index (e.g. "C:\\app.exe,0")
pub fn install(
display_name: &str,
mount_point: &Path,
icon_source: &str,
) -> Result<(), String> {
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
// 1) CLSID entry under Software\Classes\CLSID\{GUID}
let clsid_path = format!("Software\\Classes\\CLSID\\{}", CLSID_GUID);
let (clsid, _) = hkcu
.create_subkey(&clsid_path)
.map_err(|e| format!("create CLSID: {e}"))?;
clsid
.set_value("", &display_name.to_string())
.map_err(|e| format!("set displayname: {e}"))?;
clsid
.set_value("System.IsPinnedToNameSpaceTree", &1u32)
.map_err(|e| format!("set pinned: {e}"))?;
clsid
.set_value("SortOrderIndex", &0x42u32)
.map_err(|e| format!("set sortorder: {e}"))?;
// 2) DefaultIcon
let (icon_key, _) = clsid
.create_subkey("DefaultIcon")
.map_err(|e| format!("create DefaultIcon: {e}"))?;
icon_key
.set_value("", &icon_source.to_string())
.map_err(|e| format!("set icon: {e}"))?;
// 3) InProcServer32 -> shell32.dll (standard shell-folder host)
let (inproc, _) = clsid
.create_subkey("InProcServer32")
.map_err(|e| format!("create InProcServer32: {e}"))?;
inproc
.set_value("", &"%SystemRoot%\\system32\\shell32.dll".to_string())
.map_err(|e| format!("set shell32: {e}"))?;
inproc
.set_value("ThreadingModel", &"Both".to_string())
.map_err(|e| format!("set threading: {e}"))?;
// 4) Instance -> points at the generic shell folder
let (instance, _) = clsid
.create_subkey("Instance")
.map_err(|e| format!("create Instance: {e}"))?;
instance
.set_value("CLSID", &SHELL_FOLDER_CLSID.to_string())
.map_err(|e| format!("set inst clsid: {e}"))?;
let (pb, _) = instance
.create_subkey("InitPropertyBag")
.map_err(|e| format!("create InitPropertyBag: {e}"))?;
pb.set_value("Attributes", &0x11u32)
.map_err(|e| format!("set attrs pb: {e}"))?;
pb.set_value(
"TargetFolderPath",
&mount_point.to_string_lossy().into_owned(),
)
.map_err(|e| format!("set target: {e}"))?;
// 5) ShellFolder flags
let (sf, _) = clsid
.create_subkey("ShellFolder")
.map_err(|e| format!("create ShellFolder: {e}"))?;
sf.set_value("FolderValueFlags", &0x28u32)
.map_err(|e| format!("set folderflags: {e}"))?;
sf.set_value("Attributes", &0xF080004Du32)
.map_err(|e| format!("set attrs sf: {e}"))?;
// 6) Hook into the navigation pane
let ns_path = format!(
"Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Desktop\\NameSpace\\{}",
CLSID_GUID
);
let (ns, _) = hkcu
.create_subkey(&ns_path)
.map_err(|e| format!("create NameSpace: {e}"))?;
ns.set_value("", &display_name.to_string())
.map_err(|e| format!("set ns name: {e}"))?;
// 7) Context-menu verbs (right-click) for files under the mount
install_context_menu(mount_point)?;
// 8) Notify Explorer (SHChangeNotify)
notify_shell();
Ok(())
}
/// Registers "always available offline" / "free up space" as
/// right-click menu entries that are only shown for files below
/// the mount point (AppliesTo filter).
fn install_context_menu(mount_point: &Path) -> Result<(), String> {
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
let exe = std::env::current_exe()
.map_err(|e| format!("current_exe: {e}"))?
.to_string_lossy()
.into_owned();
// Strip the trailing backslash, then build a clean AQS query.
// Registry values are plain strings; backslashes simply stay.
let mount_clean = mount_point
.to_string_lossy()
.trim_end_matches('\\')
.to_string();
// AppliesTo: only items whose path starts with the mount folder.
let applies_to = format!("System.ItemPathDisplay:~< \"{}\"", mount_clean);
for (verb, label, flag) in [
("MiniCloudPin", "Immer offline verfuegbar", "--pin"),
("MiniCloudUnpin", "Speicher freigeben", "--unpin"),
] {
// Under AllFilesystemObjects instead of * - this also applies to
// folders and avoids conflicts with file-type-specific verbs.
let key_path = format!("Software\\Classes\\AllFilesystemObjects\\shell\\{}", verb);
let (k, _) = hkcu
.create_subkey(&key_path)
.map_err(|e| format!("verb {verb}: {e}"))?;
k.set_value("", &label.to_string())
.map_err(|e| format!("default: {e}"))?;
k.set_value("MUIVerb", &label.to_string())
.map_err(|e| format!("MUIVerb: {e}"))?;
k.set_value("AppliesTo", &applies_to)
.map_err(|e| format!("AppliesTo: {e}"))?;
k.set_value("Icon", &exe)
.map_err(|e| format!("Icon: {e}"))?;
let (cmd, _) = k
.create_subkey("command")
.map_err(|e| format!("cmd: {e}"))?;
cmd.set_value("", &format!("\"{}\" {} \"%1\"", exe, flag))
.map_err(|e| format!("cmdline: {e}"))?;
}
Ok(())
}
fn uninstall_context_menu() {
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
for verb in ["MiniCloudPin", "MiniCloudUnpin"] {
// also clean up the old (wrong) location
let _ = hkcu.delete_subkey_all(format!("Software\\Classes\\*\\shell\\{}", verb));
let _ = hkcu.delete_subkey_all(format!(
"Software\\Classes\\AllFilesystemObjects\\shell\\{}",
verb
));
}
}
/// Remove the shell integration again.
pub fn uninstall() -> Result<(), String> {
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
let ns_path = format!(
"Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Desktop\\NameSpace\\{}",
CLSID_GUID
);
let _ = hkcu.delete_subkey_all(&ns_path);
let clsid_path = format!("Software\\Classes\\CLSID\\{}", CLSID_GUID);
let _ = hkcu.delete_subkey_all(&clsid_path);
uninstall_context_menu();
notify_shell();
Ok(())
}
/// Tells Explorer that the shell namespace list has changed. Without
/// this the new entry only shows up after an Explorer restart.
fn notify_shell() {
use windows::Win32::UI::Shell::{SHChangeNotify, SHCNE_ASSOCCHANGED, SHCNF_IDLIST};
unsafe {
SHChangeNotify(SHCNE_ASSOCCHANGED, SHCNF_IDLIST, None, None);
}
}
/// Default icon source: the running .exe with index 0.
pub fn default_icon_source() -> String {
std::env::current_exe()
.ok()
.and_then(|p| p.to_str().map(|s| format!("{},0", s)))
.unwrap_or_else(|| "%SystemRoot%\\system32\\imageres.dll,2".to_string())
}
@@ -0,0 +1,221 @@
//! Background synchronization for Cloud Files mode.
//!
//! Two jobs:
//! 1. Watch the mount point for local changes (notify watcher) and upload
//! changed files. Newly created files are registered with the server
//! as new files and marked as placeholders.
//! 2. Poll for server-side changes (/api/sync/changes?since=...) and
//! create missing placeholders or delete removed ones.
//!
//! The loop runs in a dedicated Tokio task; a stored `Stop` channel
//! shuts it down cleanly when the mode is disabled.
use super::RemoteEntry;
use serde::Deserialize;
use std::path::PathBuf;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::mpsc;
#[derive(Clone)]
pub struct SyncLoopConfig {
pub server_url: String,
pub access_token: String,
pub mount_point: PathBuf,
pub poll_interval_secs: u64,
}
pub struct SyncLoopHandle {
pub stop_flag: Arc<AtomicBool>,
pub tx: mpsc::UnboundedSender<LoopMessage>,
}
pub enum LoopMessage {
LocalChange(PathBuf),
Shutdown,
}
/// Start the sync loop. Returns a handle that can stop it or feed in
/// external events (e.g. from the watcher).
pub fn start(cfg: SyncLoopConfig) -> SyncLoopHandle {
let stop_flag = Arc::new(AtomicBool::new(false));
let (tx, mut rx) = mpsc::unbounded_channel::<LoopMessage>();
let stop = stop_flag.clone();
let cfg_task = cfg.clone();
tokio::spawn(async move {
let client = reqwest::Client::new();
let mut since: Option<String> = None;
let mut interval = tokio::time::interval(Duration::from_secs(cfg_task.poll_interval_secs));
loop {
if stop.load(Ordering::Relaxed) {
break;
}
tokio::select! {
_ = interval.tick() => {
if let Err(e) = poll_server_changes(&client, &cfg_task, &mut since).await {
eprintln!("[cloud_files] poll error: {e}");
}
}
Some(msg) = rx.recv() => {
match msg {
LoopMessage::Shutdown => break,
LoopMessage::LocalChange(path) => {
if let Err(e) = upload_local_change(&client, &cfg_task, &path).await {
eprintln!("[cloud_files] upload error: {e}");
}
}
}
}
}
}
});
SyncLoopHandle { stop_flag, tx }
}
#[derive(Debug, Deserialize)]
struct ChangesResponse {
#[serde(default)]
created: Vec<RemoteEntry>,
#[serde(default)]
updated: Vec<RemoteEntry>,
#[serde(default)]
deleted: Vec<i64>,
timestamp: Option<String>,
}
async fn poll_server_changes(
client: &reqwest::Client,
cfg: &SyncLoopConfig,
since: &mut Option<String>,
) -> Result<(), String> {
let base = cfg.server_url.trim_end_matches('/');
let mut url = format!("{}/api/sync/changes", base);
if let Some(s) = since.as_deref() {
url.push_str(&format!("?since={}", urlencode(s)));
}
let resp = client
.get(&url)
.bearer_auth(&cfg.access_token)
.send()
.await
.map_err(|e| e.to_string())?;
if !resp.status().is_success() {
return Err(format!("HTTP {}", resp.status()));
}
let body: ChangesResponse = resp.json().await.map_err(|e| e.to_string())?;
// Created + updated: ensure the matching directory exists, then create
// the placeholder (fresh). For updates the old placeholder has to be
// deleted first - Windows does not allow a "replace in place".
for e in body.created.iter().chain(body.updated.iter()) {
let rel = build_relative_path(e);
let full = cfg.mount_point.join(&rel);
if e.is_folder {
let _ = std::fs::create_dir_all(&full);
continue;
}
let parent = full.parent().map(|p| p.to_path_buf()).unwrap_or_else(|| cfg.mount_point.clone());
let _ = std::fs::create_dir_all(&parent);
let _ = std::fs::remove_file(&full); // ignored if not present
#[cfg(windows)]
{
let identity = e.id.to_string();
if let Err(err) = super::windows::create_placeholder_at(
&parent,
&e.name,
e.size,
&e.modified_at,
identity.as_bytes(),
) {
eprintln!("[cloud_files] placeholder {}: {}", e.name, err);
}
}
}
// Deleted: the server only sends IDs - we no longer know the path.
// MVP: ignore. Version 2 will keep a local mapping.
let _ = body.deleted;
if let Some(ts) = body.timestamp {
*since = Some(ts);
}
Ok(())
}
async fn upload_local_change(
client: &reqwest::Client,
cfg: &SyncLoopConfig,
path: &PathBuf,
) -> Result<(), String> {
if !path.is_file() {
return Ok(());
}
// Do NOT upload cfapi placeholders or files that are currently
// hydrating - otherwise every cloud file gets fully synced right away
// and we lose the on-demand mode.
#[cfg(windows)]
{
if super::windows::is_cfapi_placeholder(path) {
super::windows::log_msg(
&cfg.mount_point,
&format!("skip upload (placeholder): {}", path.display()),
);
return Ok(());
}
}
// Do not upload our own log file.
if path
.file_name()
.and_then(|n| n.to_str())
.map(|n| n.starts_with(".minicloud-"))
.unwrap_or(false)
{
return Ok(());
}
// Relative path inside the mount = target path on the server
let rel = path
.strip_prefix(&cfg.mount_point)
.map_err(|_| "path outside mount".to_string())?
.to_string_lossy()
.replace('\\', "/");
let bytes = std::fs::read(path).map_err(|e| e.to_string())?;
let base = cfg.server_url.trim_end_matches('/');
let url = format!("{}/api/files/upload", base);
let file_name = path
.file_name()
.and_then(|s| s.to_str())
.unwrap_or("unnamed")
.to_string();
let form = reqwest::multipart::Form::new()
.text("path", rel.clone())
.part(
"file",
reqwest::multipart::Part::bytes(bytes).file_name(file_name),
);
let resp = client
.post(&url)
.bearer_auth(&cfg.access_token)
.multipart(form)
.send()
.await
.map_err(|e| e.to_string())?;
if !resp.status().is_success() {
return Err(format!("HTTP {}", resp.status()));
}
Ok(())
}
fn build_relative_path(e: &RemoteEntry) -> PathBuf {
// Caution: RemoteEntry only carries parent_id, not the full path. For
// this simple case we use just the name. For nested folders the
// hierarchy would have to be preloaded via /api/sync/tree - that
// happens once on activation; delta updates usually arrive flat (or
// under a common root).
PathBuf::from(&e.name)
}
fn urlencode(s: &str) -> String {
// Very minimal: we only replace the problematic characters.
s.replace(' ', "%20").replace(':', "%3A").replace('+', "%2B")
}
@@ -0,0 +1,43 @@
//! Lightweight callback-based FS watcher for Cloud Files mode.
//!
//! Unlike `sync::watcher::FileWatcher`, this one hands a closure directly
//! to notify, so no channel pumping is needed.
use notify::{Event, EventKind, RecommendedWatcher, RecursiveMode, Watcher, Config};
use std::path::{Path, PathBuf};
pub struct CallbackWatcher {
_watcher: RecommendedWatcher,
}
impl CallbackWatcher {
pub fn new<F>(watch_dir: &Path, mut on_change: F) -> Result<Self, String>
where
F: FnMut(PathBuf, EventKind) + Send + 'static,
{
let mut watcher = RecommendedWatcher::new(
move |res: Result<Event, notify::Error>| {
if let Ok(ev) = res {
for path in ev.paths {
let name = path.file_name().and_then(|n| n.to_str()).unwrap_or("");
if name.starts_with('.')
|| name.starts_with('~')
|| name.ends_with(".tmp")
{
continue;
}
on_change(path, ev.kind.clone());
}
}
},
Config::default(),
)
.map_err(|e| format!("watcher error: {e}"))?;
watcher
.watch(watch_dir, RecursiveMode::Recursive)
.map_err(|e| format!("watch error: {e}"))?;
Ok(Self { _watcher: watcher })
}
}
@@ -0,0 +1,639 @@
//! Windows Cloud Files API integration.
//!
//! Registers the sync folder as a sync root, creates placeholder files
//! and serves file-data requests via HTTPS download. Explorer shows the
//! cloud/checkmark overlays automatically, as long as the pin states are
//! set correctly.
//!
//! Requirement: Windows 10 1709+ (cfapi.dll). The account identifier
//! should be stable (e.g. hash(server URL + username)).
#![cfg(windows)]
use super::RemoteEntry;
use once_cell::sync::Lazy;
use std::path::{Path, PathBuf};
use std::ptr;
use std::sync::{Arc, Mutex};
use widestring::U16CString;
use windows::core::PCWSTR;
use windows::Win32::Storage::CloudFilters as CF;
use windows::Win32::Storage::FileSystem::FILE_ATTRIBUTE_NORMAL;
use windows::Win32::System::Com::{CoInitializeEx, COINIT_MULTITHREADED};
#[derive(Default, Clone)]
pub struct CloudContext {
pub server_url: String,
pub access_token: String,
pub mount_point: PathBuf,
}
static CONTEXT: Lazy<Arc<Mutex<CloudContext>>> =
Lazy::new(|| Arc::new(Mutex::new(CloudContext::default())));
static CONNECTION_KEY: Lazy<Mutex<Option<CF::CF_CONNECTION_KEY>>> =
Lazy::new(|| Mutex::new(None));
pub fn set_context(server_url: String, access_token: String, mount_point: PathBuf) {
let mut ctx = CONTEXT.lock().unwrap();
ctx.server_url = server_url;
ctx.access_token = access_token;
ctx.mount_point = mount_point;
}
fn ctx_snapshot() -> CloudContext {
CONTEXT.lock().unwrap().clone()
}
const PROVIDER_VERSION: &str = "1.0";
// Windows FILETIME: 100 ns ticks since 1601-01-01. The Unix epoch is
// 11_644_473_600 seconds later.
fn unix_to_ft_ticks(unix_secs: i64) -> i64 {
(unix_secs + 11_644_473_600) * 10_000_000
}
// ---------------------------------------------------------------------------
// Sync-root registration
// ---------------------------------------------------------------------------
pub fn register_sync_root(
mount_point: &PathBuf,
provider_name: &str,
account_id: &str,
) -> Result<(), String> {
// Initialize COM (cfapi requires an MTA apartment)
unsafe {
let _ = CoInitializeEx(Some(ptr::null()), COINIT_MULTITHREADED);
}
std::fs::create_dir_all(mount_point).map_err(|e| format!("mkdir: {e}"))?;
let display = format!("Mini-Cloud - {}", account_id);
let path_wide = U16CString::from_str(mount_point.to_string_lossy().as_ref())
.map_err(|e| format!("path encode: {e}"))?;
let display_wide = U16CString::from_str(&display).map_err(|e| e.to_string())?;
let provider_wide = U16CString::from_str(provider_name).map_err(|e| e.to_string())?;
let version_wide = U16CString::from_str(PROVIDER_VERSION).map_err(|e| e.to_string())?;
let mut info = CF::CF_SYNC_REGISTRATION::default();
info.StructSize = std::mem::size_of::<CF::CF_SYNC_REGISTRATION>() as u32;
info.ProviderName = PCWSTR(provider_wide.as_ptr());
info.ProviderVersion = PCWSTR(version_wide.as_ptr());
// Stable GUID for "Mini-Cloud" (randomly generated once).
info.ProviderId = windows::core::GUID::from_u128(0x4D696E69_436C_6F75_6444_7566667944ab);
let mut policies = CF::CF_SYNC_POLICIES::default();
policies.StructSize = std::mem::size_of::<CF::CF_SYNC_POLICIES>() as u32;
policies.HardLink = CF::CF_HARDLINK_POLICY::default();
policies.Hydration = CF::CF_HYDRATION_POLICY::default();
policies.Population = CF::CF_POPULATION_POLICY::default();
policies.InSync = CF::CF_INSYNC_POLICY::default();
// Hydration PARTIAL = file content arrives on access via FETCH_DATA.
// Population FULL = folder contents are fully pre-populated by us
// (populate_placeholders). That way Windows does NOT have to call
// FETCH_PLACEHOLDERS, which we don't implement - otherwise opening
// the folder would time out.
policies.Hydration.Primary = CF::CF_HYDRATION_POLICY_PARTIAL;
policies.Population.Primary = CF::CF_POPULATION_POLICY_FULL;
// Keep the display name around in case we later fold it into a
// struct of our own. windows-rs requires nothing further here.
let _ = display_wide;
// First clean up any existing registration. Otherwise UPDATE only
// applies part of the policies and stale PARTIAL population
// settings stay active -> Explorer timeout.
unsafe {
let _ = CF::CfUnregisterSyncRoot(PCWSTR(path_wide.as_ptr()));
}
log_msg(mount_point, &format!(
"register_sync_root path={} provider={} account={}",
mount_point.display(), provider_name, account_id
));
unsafe {
if let Err(e) = CF::CfRegisterSyncRoot(
PCWSTR(path_wide.as_ptr()),
&info,
&policies,
CF::CF_REGISTER_FLAG_NONE,
) {
log_err(mount_point, &format!("CfRegisterSyncRoot FAILED: {e:?}"));
// Fallback: retry with the UPDATE flag
CF::CfRegisterSyncRoot(
PCWSTR(path_wide.as_ptr()),
&info,
&policies,
CF::CF_REGISTER_FLAG_UPDATE,
)
.map_err(|e| format!("CfRegisterSyncRoot(UPDATE): {e}"))?;
}
}
log_msg(mount_point, "CfRegisterSyncRoot OK");
connect_callbacks(mount_point)?;
log_msg(mount_point, "callbacks connected");
// Explorer sidebar entry with a cloud icon
let icon = super::shell_integration::default_icon_source();
match super::shell_integration::install(provider_name, mount_point, &icon) {
Ok(()) => log_msg(mount_point, "shell integration installed"),
Err(e) => log_err(mount_point, &format!("shell integration FAILED: {e}")),
}
Ok(())
}
pub fn unregister_sync_root(mount_point: &PathBuf) -> Result<(), String> {
// Remove the shell entry first (never fails).
let _ = super::shell_integration::uninstall();
let _ = disconnect_callbacks();
let path_wide = U16CString::from_str(mount_point.to_string_lossy().as_ref())
.map_err(|e| e.to_string())?;
unsafe {
CF::CfUnregisterSyncRoot(PCWSTR(path_wide.as_ptr()))
.map_err(|e| format!("CfUnregisterSyncRoot: {e}"))?;
}
Ok(())
}
// ---------------------------------------------------------------------------
// Callback table
// ---------------------------------------------------------------------------
unsafe extern "system" fn on_fetch_data(
info: *const CF::CF_CALLBACK_INFO,
params: *const CF::CF_CALLBACK_PARAMETERS,
) {
let info = &*info;
let params = &*params;
let fetch = &params.Anonymous.FetchData;
// FileIdentity contains our Mini-Cloud file ID as UTF-8 bytes.
let identity = std::slice::from_raw_parts(
info.FileIdentity as *const u8,
info.FileIdentityLength as usize,
);
let file_id: i64 = std::str::from_utf8(identity)
.ok()
.and_then(|s| s.parse().ok())
.unwrap_or(0);
let offset: i64 = fetch.RequiredFileOffset;
let length: u64 = fetch.RequiredLength as u64;
let connection_key = info.ConnectionKey;
let transfer_key = info.TransferKey;
// HTTPS download on a separate thread (the callback must not block).
let ctx = ctx_snapshot();
std::thread::spawn(move || {
log_msg(&ctx.mount_point, &format!(
"FETCH_DATA file_id={file_id} offset={offset} len={length}"
));
match transfer_range(connection_key, transfer_key, file_id, offset, length, &ctx) {
Ok(()) => log_msg(&ctx.mount_point, &format!(
"fetch file_id={file_id} OK"
)),
Err(e) => {
log_err(&ctx.mount_point, &format!(
"fetch file_id={file_id} offset={offset} len={length} FAILED: {e}"
));
// Guaranteed error completion so Windows does not run into a timeout.
let _ = complete_transfer(connection_key, transfer_key, None, offset, length);
}
}
});
}
pub fn log_msg(mount: &Path, msg: &str) {
use std::io::Write;
// Keep the log file NEXT TO the mount so it is not itself treated
// as a placeholder.
let log = mount
.parent()
.map(|p| p.join(".minicloud-cloudfiles.log"))
.unwrap_or_else(|| PathBuf::from(".minicloud-cloudfiles.log"));
if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open(&log) {
let _ = writeln!(f, "[{}] {}", chrono::Utc::now().to_rfc3339(), msg);
}
}
fn log_err(mount: &Path, msg: &str) {
log_msg(mount, msg);
}
/// True if the file is a cfapi placeholder (not yet hydrated) or is
/// currently managed by the cloud filter. For such files we must NOT
/// trigger an upload, otherwise the sync loop would immediately turn
/// every placeholder into a fully local file.
pub fn is_cfapi_placeholder(path: &Path) -> bool {
use windows::Win32::Storage::FileSystem::GetFileAttributesW;
let Ok(w) = U16CString::from_str(path.to_string_lossy().as_ref()) else {
return false;
};
let attrs = unsafe { GetFileAttributesW(PCWSTR(w.as_ptr())) };
if attrs == u32::MAX {
return false;
}
// FILE_ATTRIBUTE_OFFLINE (0x1000) oder
// FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS (0x400000) oder
// FILE_ATTRIBUTE_RECALL_ON_OPEN (0x40000)
(attrs & 0x0040_1000) != 0 || (attrs & 0x0004_0000) != 0
}
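The bit test above can be exercised without the windows crate. A standalone sketch using the documented Win32 attribute values; `looks_like_placeholder` is a hypothetical stand-in for the check inside `is_cfapi_placeholder`:

```rust
// Plain-u32 mirror of the attribute check in is_cfapi_placeholder.
const FILE_ATTRIBUTE_OFFLINE: u32 = 0x1000;
const FILE_ATTRIBUTE_RECALL_ON_OPEN: u32 = 0x4_0000;
const FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS: u32 = 0x40_0000;

fn looks_like_placeholder(attrs: u32) -> bool {
    // 0x0040_1000 = OFFLINE | RECALL_ON_DATA_ACCESS, plus RECALL_ON_OPEN.
    (attrs & (FILE_ATTRIBUTE_OFFLINE | FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS)) != 0
        || (attrs & FILE_ATTRIBUTE_RECALL_ON_OPEN) != 0
}

fn main() {
    // A dehydrated cloud file typically carries RECALL_ON_DATA_ACCESS.
    assert!(looks_like_placeholder(FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS));
    assert!(looks_like_placeholder(FILE_ATTRIBUTE_OFFLINE));
    // A plain local file (FILE_ATTRIBUTE_NORMAL = 0x80) does not match.
    assert!(!looks_like_placeholder(0x80));
}
```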
fn transfer_range(
connection_key: CF::CF_CONNECTION_KEY,
transfer_key: i64,
file_id: i64,
offset: i64,
length: u64,
ctx: &CloudContext,
) -> Result<(), String> {
if ctx.server_url.is_empty() || ctx.access_token.is_empty() {
return Err("CloudContext nicht gesetzt (Server/Token leer)".into());
}
let client = reqwest::blocking::Client::builder()
.timeout(std::time::Duration::from_secs(60))
.build()
.map_err(|e| format!("client: {e}"))?;
let url = format!(
"{}/api/files/{}/download",
ctx.server_url.trim_end_matches('/'),
file_id
);
let range = format!("bytes={}-{}", offset, offset as u64 + length - 1);
let resp = client
.get(&url)
.bearer_auth(&ctx.access_token)
.header("Range", &range)
.send()
.map_err(|e| format!("send: {e}"))?;
let status = resp.status();
if !status.is_success() && status.as_u16() != 206 {
return Err(format!("HTTP {}", status));
}
let bytes = resp.bytes().map_err(|e: reqwest::Error| e.to_string())?;
// If the server does not support Range and delivers the full file,
// cut the requested range out of the body.
let slice: &[u8] = if status.as_u16() == 206 {
&bytes[..]
} else {
let start = offset as usize;
let end = (start + length as usize).min(bytes.len());
if start >= bytes.len() {
&[]
} else {
&bytes[start..end]
}
};
complete_transfer(connection_key, transfer_key, Some(slice), offset, slice.len() as u64)
}
fn complete_transfer(
connection_key: CF::CF_CONNECTION_KEY,
transfer_key: i64,
data: Option<&[u8]>,
offset: i64,
length: u64,
) -> Result<(), String> {
let mut op_info = CF::CF_OPERATION_INFO::default();
op_info.StructSize = std::mem::size_of::<CF::CF_OPERATION_INFO>() as u32;
op_info.Type = CF::CF_OPERATION_TYPE_TRANSFER_DATA;
op_info.ConnectionKey = connection_key;
op_info.TransferKey = transfer_key;
let mut params = CF::CF_OPERATION_PARAMETERS::default();
params.ParamSize = std::mem::size_of::<CF::CF_OPERATION_PARAMETERS>() as u32;
unsafe {
let transfer = &mut params.Anonymous.TransferData;
if let Some(data) = data {
transfer.CompletionStatus = windows::Win32::Foundation::NTSTATUS(0); // STATUS_SUCCESS
transfer.Buffer = data.as_ptr() as _;
transfer.Offset = offset;
transfer.Length = length as i64;
} else {
transfer.CompletionStatus =
windows::Win32::Foundation::NTSTATUS(0xC0000001u32 as i32); // STATUS_UNSUCCESSFUL
}
CF::CfExecute(&op_info, &mut params).map_err(|e| format!("CfExecute: {e}"))?;
}
Ok(())
}
unsafe extern "system" fn on_fetch_placeholders(
info: *const CF::CF_CALLBACK_INFO,
_params: *const CF::CF_CALLBACK_PARAMETERS,
) {
// Safety net: we already populate via populate_placeholders, but if
// Windows calls anyway, return an empty answer.
let info = &*info;
let mut op_info = CF::CF_OPERATION_INFO::default();
op_info.StructSize = std::mem::size_of::<CF::CF_OPERATION_INFO>() as u32;
op_info.Type = CF::CF_OPERATION_TYPE_TRANSFER_PLACEHOLDERS;
op_info.ConnectionKey = info.ConnectionKey;
op_info.TransferKey = info.TransferKey;
let mut params = CF::CF_OPERATION_PARAMETERS::default();
params.ParamSize = std::mem::size_of::<CF::CF_OPERATION_PARAMETERS>() as u32;
let transfer = &mut params.Anonymous.TransferPlaceholders;
transfer.CompletionStatus = windows::Win32::Foundation::NTSTATUS(0);
transfer.PlaceholderTotalCount = 0;
transfer.PlaceholderArray = std::ptr::null_mut();
transfer.PlaceholderCount = 0;
transfer.EntriesProcessed = 0;
transfer.Flags = CF::CF_OPERATION_TRANSFER_PLACEHOLDERS_FLAG_DISABLE_ON_DEMAND_POPULATION;
let _ = CF::CfExecute(&op_info, &mut params);
}
fn connect_callbacks(mount_point: &Path) -> Result<(), String> {
let callbacks = [
CF::CF_CALLBACK_REGISTRATION {
Type: CF::CF_CALLBACK_TYPE_FETCH_DATA,
Callback: Some(on_fetch_data),
},
CF::CF_CALLBACK_REGISTRATION {
Type: CF::CF_CALLBACK_TYPE_FETCH_PLACEHOLDERS,
Callback: Some(on_fetch_placeholders),
},
// Sentinel: Type = NONE terminates the table.
CF::CF_CALLBACK_REGISTRATION {
Type: CF::CF_CALLBACK_TYPE_NONE,
Callback: None,
},
];
let path_wide = U16CString::from_str(mount_point.to_string_lossy().as_ref())
.map_err(|e| e.to_string())?;
let key = unsafe {
CF::CfConnectSyncRoot(
PCWSTR(path_wide.as_ptr()),
callbacks.as_ptr(),
None,
CF::CF_CONNECT_FLAG_REQUIRE_PROCESS_INFO
| CF::CF_CONNECT_FLAG_REQUIRE_FULL_FILE_PATH,
)
.map_err(|e| format!("CfConnectSyncRoot: {e}"))?
};
*CONNECTION_KEY.lock().unwrap() = Some(key);
Ok(())
}
fn disconnect_callbacks() -> Result<(), String> {
if let Some(key) = CONNECTION_KEY.lock().unwrap().take() {
unsafe {
CF::CfDisconnectSyncRoot(key)
.map_err(|e| format!("CfDisconnectSyncRoot: {e}"))?;
}
}
Ok(())
}
// ---------------------------------------------------------------------------
// Placeholder creation
// ---------------------------------------------------------------------------
pub fn populate_placeholders(
mount_point: &PathBuf,
entries: &[RemoteEntry],
) -> Result<(), String> {
use std::collections::HashMap;
log_msg(mount_point, &format!(
"populate_placeholders: {} entries", entries.len()
));
let by_id: HashMap<i64, &RemoteEntry> = entries.iter().map(|e| (e.id, e)).collect();
fn rel_path<'a>(
entry: &'a RemoteEntry,
by_id: &HashMap<i64, &'a RemoteEntry>,
) -> PathBuf {
let mut parts = vec![entry.name.as_str()];
let mut cur = entry.parent_id;
while let Some(id) = cur {
if let Some(p) = by_id.get(&id) {
parts.push(p.name.as_str());
cur = p.parent_id;
} else {
break;
}
}
parts.reverse();
parts.iter().collect()
}
// Create folders first
for e in entries.iter().filter(|e| e.is_folder) {
let p = mount_point.join(rel_path(e, &by_id));
std::fs::create_dir_all(&p).ok();
}
// Then files as placeholders. Delete any pre-existing "normal" files
// first (e.g. left over after a previous CfUnregisterSyncRoot),
// because CfCreatePlaceholders would otherwise fail with
// ERROR_FILE_EXISTS and the file would never become a placeholder ->
// later it could not be dehydrated (0x80070178 "not a cloud file").
for e in entries.iter().filter(|e| !e.is_folder) {
let rel = rel_path(e, &by_id);
let full = mount_point.join(&rel);
let parent = rel
.parent()
.map(|p| mount_point.join(p))
.unwrap_or_else(|| mount_point.clone());
let identity = e.id.to_string();
if full.exists() && !is_cfapi_placeholder(&full) {
log_msg(mount_point, &format!(
"deleting non-placeholder {} to recreate",
full.display()
));
if let Err(err) = std::fs::remove_file(&full) {
log_err(mount_point, &format!(
"remove {} failed: {err}", full.display()
));
}
}
match create_placeholder(&parent, &e.name, e.size, &e.modified_at, identity.as_bytes()) {
Ok(()) => log_msg(mount_point, &format!("placeholder created: {}", full.display())),
Err(err) => log_err(mount_point, &format!(
"placeholder {} FAILED: {err}", full.display()
)),
}
}
Ok(())
}
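The nested `rel_path` helper above walks the parent chain from entry to root and reverses it. A minimal standalone sketch of the same idea; the simplified `Entry` type is hypothetical and stands in for `RemoteEntry`:

```rust
use std::collections::HashMap;

// Simplified stand-in for RemoteEntry: id, optional parent, name.
struct Entry {
    id: i64,
    parent_id: Option<i64>,
    name: &'static str,
}

// Walk child -> parent -> ... -> root, then reverse into "a/b/c".
fn rel_path(entry: &Entry, by_id: &HashMap<i64, &Entry>) -> String {
    let mut parts = vec![entry.name];
    let mut cur = entry.parent_id;
    while let Some(id) = cur {
        match by_id.get(&id) {
            Some(p) => {
                parts.push(p.name);
                cur = p.parent_id;
            }
            None => break, // unknown parent id: keep what we have
        }
    }
    parts.reverse();
    parts.join("/")
}

fn main() {
    let docs = Entry { id: 1, parent_id: None, name: "Docs" };
    let sub = Entry { id: 2, parent_id: Some(1), name: "2026" };
    let file = Entry { id: 3, parent_id: Some(2), name: "report.pdf" };
    let by_id: HashMap<i64, &Entry> =
        [(1, &docs), (2, &sub), (3, &file)].into_iter().collect();
    assert_eq!(rel_path(&file, &by_id), "Docs/2026/report.pdf");
}
```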
pub fn create_placeholder_at(
parent_dir: &Path,
name: &str,
size: i64,
modified_iso: &str,
file_identity: &[u8],
) -> Result<(), String> {
create_placeholder(parent_dir, name, size, modified_iso, file_identity)
}
fn create_placeholder(
parent_dir: &Path,
name: &str,
size: i64,
modified_iso: &str,
file_identity: &[u8],
) -> Result<(), String> {
let parent_wide = U16CString::from_str(parent_dir.to_string_lossy().as_ref())
.map_err(|e| e.to_string())?;
let name_wide = U16CString::from_str(name).map_err(|e| e.to_string())?;
let mtime_unix = chrono::DateTime::parse_from_rfc3339(modified_iso)
.map(|dt| dt.timestamp())
.unwrap_or(0);
let ft_ticks = unix_to_ft_ticks(mtime_unix);
let mut ph = CF::CF_PLACEHOLDER_CREATE_INFO::default();
ph.RelativeFileName = PCWSTR(name_wide.as_ptr());
ph.FsMetadata.FileSize = size;
ph.FsMetadata.BasicInfo.FileAttributes = FILE_ATTRIBUTE_NORMAL.0;
ph.FsMetadata.BasicInfo.LastWriteTime = ft_ticks;
ph.FsMetadata.BasicInfo.CreationTime = ft_ticks;
ph.FsMetadata.BasicInfo.ChangeTime = ft_ticks;
ph.FsMetadata.BasicInfo.LastAccessTime = ft_ticks;
ph.Flags = CF::CF_PLACEHOLDER_CREATE_FLAG_MARK_IN_SYNC;
ph.FileIdentity = file_identity.as_ptr() as _;
ph.FileIdentityLength = file_identity.len() as u32;
// In windows-rs 0.58, CfCreatePlaceholders takes a slice plus an
// Option<*mut u32> for "how many were created".
let mut phs = [ph];
let mut count: u32 = 0;
unsafe {
CF::CfCreatePlaceholders(
PCWSTR(parent_wide.as_ptr()),
&mut phs,
CF::CF_CREATE_FLAG_NONE,
Some(&mut count as *mut u32),
)
.map_err(|e| format!("CfCreatePlaceholders: {e}"))?;
}
Ok(())
}
// ---------------------------------------------------------------------------
// Pin / unpin (keep offline)
// ---------------------------------------------------------------------------
pub fn set_pin_state(file: &Path, pinned: bool) -> Result<(), String> {
use windows::Win32::Storage::FileSystem::{
CreateFileW, FILE_FLAG_BACKUP_SEMANTICS, FILE_FLAG_OPEN_REPARSE_POINT,
FILE_WRITE_ATTRIBUTES, FILE_READ_ATTRIBUTES,
FILE_SHARE_READ, FILE_SHARE_WRITE, FILE_SHARE_DELETE, OPEN_EXISTING,
};
let path_wide = U16CString::from_str(file.to_string_lossy().as_ref())
.map_err(|e| e.to_string())?;
// CfSetPinState / CfDehydratePlaceholder need WRITE_ATTRIBUTES.
// OPEN_REPARSE_POINT keeps the open itself from triggering a
// hydration (otherwise unpinning would be pointless).
let handle = unsafe {
CreateFileW(
PCWSTR(path_wide.as_ptr()),
(FILE_READ_ATTRIBUTES | FILE_WRITE_ATTRIBUTES).0,
FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
None,
OPEN_EXISTING,
FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OPEN_REPARSE_POINT,
None,
)
}
.map_err(|e| format!("open: {e}"))?;
let state = if pinned {
CF::CF_PIN_STATE_PINNED
} else {
CF::CF_PIN_STATE_UNPINNED
};
let set_res = unsafe {
CF::CfSetPinState(handle, state, CF::CF_SET_PIN_FLAG_NONE, None)
};
// Hydrate on pin / dehydrate on unpin. CfSetPinState only changes
// the flag - without explicit hydrate/dehydrate calls nothing
// visible happens to the disk content or the icon.
let (hydrate_err, dehydrate_err) = if set_res.is_ok() {
if pinned {
let r = unsafe {
CF::CfHydratePlaceholder(
handle,
0,
-1,
CF::CF_HYDRATE_FLAG_NONE,
None,
)
};
(r.err().map(|e| format!("{:?}", e)), None)
} else {
let r = unsafe {
CF::CfDehydratePlaceholder(
handle,
0,
-1,
CF::CF_DEHYDRATE_FLAG_NONE,
None,
)
};
(None, r.err().map(|e| format!("{:?}", e)))
}
} else {
(None, None)
};
unsafe {
let _ = windows::Win32::Foundation::CloseHandle(handle);
}
// Refresh the Explorer icon overlay
notify_file_update(file);
// The log directory is the mount folder or its parent
let log_dir = file
.ancestors()
.find(|p| p.parent().is_some())
.map(|p| p.to_path_buf())
.unwrap_or_else(|| file.to_path_buf());
log_msg(
&log_dir,
&format!(
"set_pin_state file={} pinned={} result={:?} hydrate_err={:?} dehydrate_err={:?}",
file.display(),
pinned,
set_res,
hydrate_err,
dehydrate_err
),
);
set_res.map_err(|e| format!("CfSetPinState: {e}"))?;
Ok(())
}
/// Tells the shell "this file changed" so the overlay icon
/// (cloud/check mark) refreshes without the user having to press F5.
fn notify_file_update(file: &Path) {
use windows::Win32::UI::Shell::{SHChangeNotify, SHCNE_UPDATEITEM, SHCNF_PATHW};
let Ok(w) = U16CString::from_str(file.to_string_lossy().as_ref()) else {
return;
};
unsafe {
SHChangeNotify(
SHCNE_UPDATEITEM,
SHCNF_PATHW,
Some(w.as_ptr() as _),
None,
);
}
}
@@ -1,6 +1,8 @@
mod sync;
mod cloud_files;
use std::path::{Path, PathBuf};
use std::path::PathBuf;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
use std::time::Duration;
use tauri::{
@@ -12,27 +14,21 @@ use tauri::{
use sync::api::MiniCloudApi;
use sync::config::AppConfig;
use sync::engine::{SyncEngine, SyncMode, SyncPath};
use sync::journal::Journal;
use sync::watcher::{FileWatcher, ChangeKind};
use std::collections::HashMap;
/// Tracks a file opened from a .cloud placeholder
#[derive(Clone, Debug)]
struct OpenedFile {
_file_id: i64,
real_path: PathBuf,
cloud_name: String, // original .cloud filename
}
struct AppState {
api: Mutex<Option<MiniCloudApi>>,
sync_engine: Mutex<Option<SyncEngine>>,
username: Mutex<Option<String>>,
watchers: Mutex<Vec<FileWatcher>>,
locked_files: Mutex<Vec<i64>>, // file IDs we have locked on server
sync_running: Arc<Mutex<bool>>,
locked_files: Mutex<Vec<i64>>,
opened_files: Mutex<HashMap<i64, OpenedFile>>, // file_id -> opened file info
sync_paths: Mutex<Vec<SyncPath>>,
journal: Arc<Journal>,
background_started: AtomicBool,
cloud_files_loop: Mutex<Option<cloud_files::sync_loop::SyncLoopHandle>>,
cloud_files_watcher: Mutex<Option<cloud_files::watcher::CallbackWatcher>>,
}
// --- Auth ---
@@ -144,6 +140,11 @@ fn add_sync_path(
state.sync_paths.lock().unwrap().push(sp.clone());
// Also attach a filesystem watcher for this path so background sync picks it up
if let Ok(w) = FileWatcher::new(&local) {
state.watchers.lock().unwrap().push(w);
}
// Save to config
let mut config = AppConfig::load();
config.sync_paths = state.sync_paths.lock().unwrap().clone();
@@ -154,8 +155,18 @@ fn add_sync_path(
#[tauri::command]
fn remove_sync_path(state: State<'_, AppState>, id: String) -> Result<String, String> {
// Capture the local_dir of the removed path so we can drop its watcher too
let removed_dir = {
let paths = state.sync_paths.lock().unwrap();
paths.iter().find(|p| p.id == id).map(|p| p.local_dir.clone())
};
state.sync_paths.lock().unwrap().retain(|p| p.id != id);
if let Some(dir) = removed_dir {
let target = PathBuf::from(&dir);
state.watchers.lock().unwrap().retain(|w| w.path != target);
}
let mut config = AppConfig::load();
config.sync_paths = state.sync_paths.lock().unwrap().clone();
let _ = config.save();
@@ -194,23 +205,34 @@ async fn start_sync(app: AppHandle, state: State<'_, AppState>) -> Result<Vec<St
return Err("Keine Sync-Pfade konfiguriert".to_string());
}
let mut engine = SyncEngine::new(api.clone());
let username = state.username.lock().unwrap().clone().unwrap_or_default();
let journal = state.journal.clone();
let mut engine = SyncEngine::new(api.clone(), journal, username);
engine.sync_paths = paths.clone();
let log = engine.sync_all().await?;
*state.sync_engine.lock().unwrap() = Some(engine);
// Start watchers for each sync path
let mut watchers = Vec::new();
for sp in &paths {
if let Ok(w) = FileWatcher::new(&PathBuf::from(&sp.local_dir)) {
watchers.push(w);
// Ensure a watcher exists for every sync path (skip paths already watched)
{
let mut guard = state.watchers.lock().unwrap();
for sp in &paths {
let target = PathBuf::from(&sp.local_dir);
if guard.iter().any(|w| w.path == target) { continue; }
if let Ok(w) = FileWatcher::new(&target) {
guard.push(w);
}
}
}
*state.watchers.lock().unwrap() = watchers;
// Start background threads
start_background_sync(app, state.sync_running.clone(), api, paths);
// Start background threads only once per process lifetime.
// They re-read sync_paths from state each iteration, so adding/removing
// paths later takes effect without respawning threads.
if !state.background_started.swap(true, Ordering::SeqCst) {
let username = state.username.lock().unwrap().clone().unwrap_or_default();
let journal = state.journal.clone();
start_background_sync(app, state.sync_running.clone(), api, journal, username);
}
Ok(log)
}
@@ -221,6 +243,12 @@ async fn run_sync_now(state: State<'_, AppState>) -> Result<Vec<String>, String>
let mut guard = state.sync_engine.lock().unwrap();
guard.take().ok_or("Sync nicht gestartet")?
};
// Sync the engine's API token with current state (refresh_token may have updated it)
if let Some(ref api) = *state.api.lock().unwrap() {
engine.api.access_token = api.access_token.clone();
}
// Refresh sync_paths from state: user may have added/removed paths
engine.sync_paths = state.sync_paths.lock().unwrap().clone();
let result = engine.sync_all().await;
*state.sync_engine.lock().unwrap() = Some(engine);
result
@@ -230,40 +258,133 @@ async fn run_sync_now(state: State<'_, AppState>) -> Result<Vec<String>, String>
#[tauri::command]
async fn open_cloud_file(state: State<'_, AppState>, cloud_path: String) -> Result<String, String> {
let engine = state.api.lock().unwrap().clone()
.ok_or("Nicht eingeloggt")?;
let api = state.api.lock().unwrap().clone()
.ok_or("Nicht eingeloggt - bitte zuerst anmelden")?;
let path = PathBuf::from(&cloud_path);
let content = std::fs::read_to_string(&path).map_err(|e| e.to_string())?;
let placeholder: serde_json::Value = serde_json::from_str(&content).map_err(|e| e.to_string())?;
let file_id = placeholder.get("id").and_then(|v| v.as_i64()).ok_or("Keine ID")?;
let file_name = placeholder.get("name").and_then(|v| v.as_str()).unwrap_or("file");
if !path.exists() {
return Err(format!("Datei nicht gefunden: {}", cloud_path));
}
let real_path = path.parent().unwrap().join(file_name);
// Read placeholder JSON
let content = std::fs::read_to_string(&path)
.map_err(|e| format!("Platzhalter lesen: {}", e))?;
let placeholder: serde_json::Value = serde_json::from_str(&content)
.map_err(|e| format!("Platzhalter ungueltig: {}", e))?;
let file_id = placeholder.get("id").and_then(|v| v.as_i64())
.ok_or("Keine Datei-ID im Platzhalter")?;
// Download
engine.download_file(file_id, &real_path).await?;
// Get real filename: from JSON "name" field, or strip .cloud from filename
let file_name = placeholder.get("name")
.and_then(|v| v.as_str())
.map(|s| s.to_string())
.unwrap_or_else(|| {
let name = path.file_name().unwrap().to_string_lossy().to_string();
name.strip_suffix(".cloud").unwrap_or(&name).to_string()
});
// Remove placeholder
let real_path = path.parent().unwrap().join(&file_name);
eprintln!("[OpenCloud] {} -> {} (ID: {})", cloud_path, real_path.display(), file_id);
// Download the actual file
api.download_file(file_id, &real_path).await
.map_err(|e| format!("Download fehlgeschlagen: {}", e))?;
// Verify file was downloaded
if !real_path.exists() {
return Err(format!("Download fehlgeschlagen - Datei nicht vorhanden: {}", real_path.display()));
}
eprintln!("[OpenCloud] Downloaded {} bytes", std::fs::metadata(&real_path).map(|m| m.len()).unwrap_or(0));
// Remove .cloud placeholder - file stays as real file
// Changes will be synced automatically by the file watcher
// User can "unmark offline" or "unlock" via right-click
std::fs::remove_file(&path).ok();
// Lock on server
let _ = engine.lock_file(file_id, "Desktop Sync Client").await;
state.locked_files.lock().unwrap().push(file_id);
// Lock on server (fresh token) - prevents others from editing
let fresh_api = state.api.lock().unwrap().clone().ok_or("Nicht eingeloggt")?;
match fresh_api.lock_file(file_id, "Desktop Sync Client").await {
Ok(_) => {
eprintln!("[OpenCloud] Locked on server");
state.locked_files.lock().unwrap().push(file_id);
}
Err(e) => {
eprintln!("[OpenCloud] Lock failed: {}", e);
return Err(format!("Datei heruntergeladen, aber Sperre fehlgeschlagen: {}", e));
}
}
// Track opened file for auto-close detection
state.opened_files.lock().unwrap().insert(file_id, OpenedFile {
_file_id: file_id,
real_path: real_path.clone(),
cloud_name: path.file_name().unwrap().to_string_lossy().to_string(),
});
// Open with default application
let _ = open::that(&real_path);
// Open with default application for this file type
eprintln!("[OpenCloud] Opening with default app: {}", real_path.display());
open::that(&real_path)
.map_err(|e| format!("Oeffnen fehlgeschlagen: {} - {}", real_path.display(), e))?;
Ok(real_path.to_string_lossy().to_string())
}
/// Open a real (already-downloaded) file: lock it on the server, then open
/// it with the default application. Used for files that are already offline-
/// available so they still get checked out.
#[tauri::command]
async fn open_offline_file(state: State<'_, AppState>, real_path: String) -> Result<String, String> {
let path = PathBuf::from(&real_path);
if !path.exists() {
return Err(format!("Datei nicht gefunden: {}", real_path));
}
// Resolve file_id by matching this path against the configured sync paths
// and looking the relative path up in the journal.
let (sync_path_id, rel_path) = {
let paths = state.sync_paths.lock().unwrap().clone();
let mut best: Option<(String, String)> = None;
for sp in &paths {
let base = PathBuf::from(&sp.local_dir);
if let Ok(rel) = path.strip_prefix(&base) {
let rel_str = rel.to_string_lossy().replace('\\', "/");
best = Some((sp.id.clone(), rel_str));
break;
}
}
best.ok_or("Datei gehoert zu keinem konfigurierten Sync-Pfad")?
};
let journal = state.journal.clone();
let entry = journal.get(&sync_path_id, &rel_path)
.ok_or("Datei nicht im Sync-Journal - erst einmal synchronisieren")?;
let file_id = entry.file_id.ok_or("Keine Server-ID im Journal")?;
let api = state.api.lock().unwrap().clone().ok_or("Nicht eingeloggt")?;
match api.lock_file(file_id, "Desktop Sync Client").await {
Ok(_) => {
eprintln!("[OpenOffline] Locked {} on server", rel_path);
let mut locked = state.locked_files.lock().unwrap();
if !locked.contains(&file_id) { locked.push(file_id); }
}
Err(e) => return Err(format!("Sperre fehlgeschlagen: {}", e)),
}
open::that(&path)
.map_err(|e| format!("Oeffnen fehlgeschlagen: {}", e))?;
Ok(real_path)
}
#[tauri::command]
async fn unlock_file_cmd(state: State<'_, AppState>, file_id: i64) -> Result<String, String> {
let api = state.api.lock().unwrap().clone().ok_or("Nicht eingeloggt")?;
api.unlock_file(file_id).await?;
state.locked_files.lock().unwrap().retain(|&id| id != file_id);
Ok("Datei entsperrt".to_string())
}
#[tauri::command]
async fn lock_file_cmd(state: State<'_, AppState>, file_id: i64) -> Result<String, String> {
let api = state.api.lock().unwrap().clone().ok_or("Nicht eingeloggt")?;
api.lock_file(file_id, "Desktop Sync Client").await?;
let mut locked = state.locked_files.lock().unwrap();
if !locked.contains(&file_id) { locked.push(file_id); }
Ok("Datei ausgecheckt".to_string())
}
#[tauri::command]
async fn get_file_tree(state: State<'_, AppState>) -> Result<serde_json::Value, String> {
let api = state.api.lock().unwrap().clone().ok_or("Nicht eingeloggt")?;
@@ -288,21 +409,6 @@ async fn get_status(state: State<'_, AppState>) -> Result<serde_json::Value, Str
}))
}
#[tauri::command]
async fn lock_file_cmd(state: State<'_, AppState>, file_id: i64) -> Result<String, String> {
let api = state.api.lock().unwrap().clone().ok_or("Nicht eingeloggt")?;
api.lock_file(file_id, "Desktop Sync Client").await?;
state.locked_files.lock().unwrap().push(file_id);
Ok("Datei gesperrt".to_string())
}
#[tauri::command]
async fn unlock_file_cmd(state: State<'_, AppState>, file_id: i64) -> Result<String, String> {
let api = state.api.lock().unwrap().clone().ok_or("Nicht eingeloggt")?;
api.unlock_file(file_id).await?;
state.locked_files.lock().unwrap().retain(|&id| id != file_id);
Ok("Datei entsperrt".to_string())
}
// --- Local File Browser ---
@@ -315,11 +421,33 @@ struct LocalFileEntry {
is_offline: bool, // real file (offline available)
size: i64,
cloud_size: Option<i64>, // original size from .cloud metadata
file_id: Option<i64>,
locked: bool,
locked_by: Option<String>,
}
fn collect_locks(
entries: &[sync::api::FileEntry],
out: &mut std::collections::HashMap<i64, (bool, Option<String>)>,
) {
for e in entries {
if e.locked.unwrap_or(false) {
out.insert(e.id, (true, e.locked_by.clone()));
}
if let Some(children) = &e.children {
collect_locks(children, out);
}
}
}
#[tauri::command]
fn browse_sync_folder(state: State<'_, AppState>, sub_path: Option<String>) -> Result<Vec<LocalFileEntry>, String> {
let paths = state.sync_paths.lock().unwrap();
async fn browse_sync_folder(state: State<'_, AppState>, sub_path: Option<String>) -> Result<Vec<LocalFileEntry>, String> {
let (paths, api_opt, journal) = {
let p = state.sync_paths.lock().unwrap().clone();
let a = state.api.lock().unwrap().clone();
(p, a, state.journal.clone())
};
if paths.is_empty() {
return Err("Keine Sync-Pfade konfiguriert".to_string());
}
@@ -335,6 +463,27 @@ fn browse_sync_folder(state: State<'_, AppState>, sub_path: Option<String>) -> R
return Ok(Vec::new());
}
// Figure out which sync path this base_dir belongs to so we can compute
// relative paths for the journal lookup.
let sync_path = paths.iter().find(|sp| {
base_dir.starts_with(&sp.local_dir) || PathBuf::from(&sp.local_dir) == base_dir
}).cloned();
// Fetch server tree once so we know which files are locked. If the
// server is unreachable we simply show no lock badges.
let locks: std::collections::HashMap<i64, (bool, Option<String>)> = if let Some(api) = api_opt {
match api.get_sync_tree().await {
Ok(tree) => {
let mut map = std::collections::HashMap::new();
collect_locks(&tree, &mut map);
map
}
Err(_) => std::collections::HashMap::new(),
}
} else {
std::collections::HashMap::new()
};
let mut entries = Vec::new();
let dir = std::fs::read_dir(&base_dir).map_err(|e| e.to_string())?;
@@ -342,26 +491,44 @@ fn browse_sync_folder(state: State<'_, AppState>, sub_path: Option<String>) -> R
let name = entry.file_name().to_string_lossy().to_string();
let path = entry.path();
// Skip hidden files
if name.starts_with('.') || name.starts_with('~') { continue; }
let is_folder = path.is_dir();
let is_cloud = name.ends_with(".cloud");
let size = std::fs::metadata(&path).map(|m| m.len() as i64).unwrap_or(0);
// For .cloud files, read the original size from JSON
let mut cloud_size = None;
let mut display_name = name.clone();
let mut file_id: Option<i64> = None;
if is_cloud {
display_name = name.trim_end_matches(".cloud").to_string();
if let Ok(content) = std::fs::read_to_string(&path) {
if let Ok(json) = serde_json::from_str::<serde_json::Value>(&content) {
cloud_size = json.get("size").and_then(|v| v.as_i64());
file_id = json.get("id").and_then(|v| v.as_i64());
}
}
}
// A real (non-.cloud) file = offline available
// For offline files / folders: look up file_id via journal
if file_id.is_none() && !is_folder {
if let Some(sp) = &sync_path {
if let Ok(rel) = path.strip_prefix(&sp.local_dir) {
let rel_str = rel.to_string_lossy().replace('\\', "/");
if let Some(je) = journal.get(&sp.id, &rel_str) {
file_id = je.file_id;
}
}
}
}
let (locked, locked_by) = file_id
.and_then(|id| locks.get(&id))
.cloned()
.unwrap_or((false, None));
let is_offline = !is_cloud && !is_folder;
entries.push(LocalFileEntry {
@@ -372,10 +539,12 @@ fn browse_sync_folder(state: State<'_, AppState>, sub_path: Option<String>) -> R
is_offline,
size,
cloud_size,
file_id,
locked,
locked_by,
});
}
// Sort: folders first, then by name
entries.sort_by(|a, b| {
b.is_folder.cmp(&a.is_folder).then(a.name.to_lowercase().cmp(&b.name.to_lowercase()))
});
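The comparator above (folders first, then case-insensitive by name) can be isolated and unit-tested; a minimal sketch where the function name and the `(is_folder, name)` tuple are illustrative stand-ins for the real `LocalFileEntry`:

```rust
/// Order a listing like the view above: folders first, then
/// case-insensitively by name.
fn sort_listing(entries: &mut [(bool, String)]) {
    entries.sort_by(|a, b| {
        // Comparing b's flag against a's puts `true` (folders) first.
        b.0.cmp(&a.0)
            .then(a.1.to_lowercase().cmp(&b.1.to_lowercase()))
    });
}
```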
@@ -438,7 +607,8 @@ fn start_background_sync(
app: AppHandle,
sync_running: Arc<Mutex<bool>>,
api: MiniCloudApi,
paths: Vec<SyncPath>,
journal: Arc<Journal>,
username: String,
) {
// Shared flag: watcher sets true when changes detected, sync thread checks it
let watcher_triggered = Arc::new(Mutex::new(false));
@@ -446,13 +616,13 @@ fn start_background_sync(
// Main sync thread: syncs on watcher trigger OR every 60s as fallback
let app_sync = app.clone();
let api_sync = api.clone();
let paths_sync = paths.clone();
let trigger_sync = watcher_triggered.clone();
let journal_sync = journal.clone();
let username_sync = username.clone();
std::thread::spawn(move || {
let rt = tokio::runtime::Runtime::new().unwrap();
let mut engine = SyncEngine::new(api_sync);
engine.sync_paths = paths_sync;
let mut engine = SyncEngine::new(api_sync, journal_sync, username_sync);
let mut idle_counter = 0u32;
loop {
@@ -466,18 +636,41 @@ fn start_background_sync(
*triggered = false;
true
} else {
// Fallback: sync every 60 seconds even without changes
idle_counter >= 60
// Fallback: sync every 30 seconds even without changes
idle_counter >= 30
}
};
if !should_sync { continue; }
idle_counter = 0;
// Re-read sync_paths from state every iteration so add/remove
// takes effect without restarting the thread.
let paths_now = {
let state = app_sync.state::<AppState>();
let p = state.sync_paths.lock().unwrap().clone();
p
};
if paths_now.is_empty() {
// Nothing to sync - idle quietly.
continue;
}
engine.sync_paths = paths_now;
// Run sync
*sync_running.lock().unwrap() = true;
let _ = app_sync.emit("sync-status", "syncing");
// Refresh engine's API token from state (token may have been refreshed)
let fresh_token: Option<String> = {
let state = app_sync.state::<AppState>();
let t = state.api.lock().unwrap().as_ref().map(|a| a.access_token.clone());
t
};
if let Some(t) = fresh_token {
engine.api.access_token = t;
}
match rt.block_on(engine.sync_all()) {
Ok(log) => {
if !log.is_empty() {
@@ -492,88 +685,104 @@ fn start_background_sync(
}
});
// Heartbeat + token refresh + check if opened files still in use
// Token refresh (every 10 min) + Heartbeat for locks (every 60s)
let app_hb = app.clone();
let api_hb = api.clone();
std::thread::spawn(move || {
let rt = tokio::runtime::Runtime::new().unwrap();
let mut api_mut = api_hb.clone();
let mut token_refresh_counter = 0u32;
let mut tick = 0u32;
loop {
std::thread::sleep(Duration::from_secs(10));
tick += 10;
let state = app_hb.state::<AppState>();
// Refresh JWT token every 10 minutes (before 15 min expiry)
token_refresh_counter += 10;
if token_refresh_counter >= 600 {
token_refresh_counter = 0;
// Heartbeat every 60 seconds for locked files
if tick % 60 == 0 {
let locked = state.locked_files.lock().unwrap().clone();
for file_id in &locked {
let _ = rt.block_on(api_mut.heartbeat(*file_id));
}
}
// Token refresh every 10 minutes
if tick >= 600 {
tick = 0;
if let Ok(new_token) = rt.block_on(api_mut.refresh_token()) {
// Update the shared API instance with new token
if let Some(ref mut api) = *state.api.lock().unwrap() {
api.access_token = new_token;
}
eprintln!("[Auth] Token refreshed");
}
}
}
});
// Heartbeat for locked files
let locked = state.locked_files.lock().unwrap().clone();
for file_id in &locked {
let _ = rt.block_on(api_mut.heartbeat(*file_id));
// Server-Sent Events: real-time change notifications from server
let app_sse = app.clone();
let trigger_sse = watcher_triggered.clone();
std::thread::spawn(move || {
let rt = tokio::runtime::Runtime::new().unwrap();
loop {
let (server_url, token) = {
let state = app_sse.state::<AppState>();
let guard = state.api.lock().unwrap();
match guard.as_ref() {
Some(a) => (a.server_url.clone(), a.access_token.clone()),
None => { drop(guard); std::thread::sleep(Duration::from_secs(3)); continue; }
}
};
if token.is_empty() {
std::thread::sleep(Duration::from_secs(3));
continue;
}
// Check if opened files are still in use by another process
let opened = state.opened_files.lock().unwrap().clone();
for (file_id, info) in &opened {
if !info.real_path.exists() {
// File was deleted/moved - clean up
state.opened_files.lock().unwrap().remove(file_id);
state.locked_files.lock().unwrap().retain(|&id| id != *file_id);
let _ = rt.block_on(api_hb.unlock_file(*file_id));
let _ = app_hb.emit("file-change",
format!("Geschlossen + entsperrt: {}", info.cloud_name));
continue;
let url = format!("{}/api/sync/events?token={}", server_url, token);
let trigger = trigger_sse.clone();
let app_cb = app_sse.clone();
let result: Result<(), String> = rt.block_on(async move {
let client = reqwest::Client::builder()
.connect_timeout(Duration::from_secs(10))
.build()
.map_err(|e| e.to_string())?;
let mut resp = client.get(&url).send().await.map_err(|e| e.to_string())?;
if !resp.status().is_success() {
return Err(format!("SSE status {}", resp.status()));
}
eprintln!("[SSE] Connected");
let _ = app_cb.emit("sse-status", "connected");
// Check if file is still locked by another process
let still_open = is_file_in_use(&info.real_path);
if !still_open {
// File closed! Sync back, recreate .cloud, unlock
let _ = app_hb.emit("file-change",
format!("Datei geschlossen, synchronisiere: {}", info.real_path.file_name().unwrap().to_string_lossy()));
// Upload changes
let _ = rt.block_on(api_hb.upload_file(&info.real_path, None));
// Unlock on server
let _ = rt.block_on(api_hb.unlock_file(*file_id));
// Recreate .cloud placeholder
let cloud_path = info.real_path.parent().unwrap().join(&info.cloud_name);
let size = std::fs::metadata(&info.real_path).map(|m| m.len() as i64).unwrap_or(0);
let checksum = sync::engine::compute_file_hash(&info.real_path);
let placeholder = serde_json::json!({
"id": file_id,
"name": info.real_path.file_name().unwrap().to_string_lossy(),
"size": size,
"checksum": checksum,
"updated_at": chrono::Utc::now().to_rfc3339(),
"server_path": "",
});
std::fs::write(&cloud_path, serde_json::to_string_pretty(&placeholder).unwrap()).ok();
// Remove local copy
std::fs::remove_file(&info.real_path).ok();
// Clean up tracking
state.opened_files.lock().unwrap().remove(file_id);
state.locked_files.lock().unwrap().retain(|&id| id != *file_id);
let _ = app_hb.emit("file-change",
format!("Entsperrt + .cloud: {}", info.cloud_name));
let mut buffer = String::new();
while let Some(chunk) = resp.chunk().await.map_err(|e| e.to_string())? {
buffer.push_str(&String::from_utf8_lossy(&chunk));
while let Some(pos) = buffer.find("\n\n") {
let raw = buffer[..pos].to_string();
buffer.drain(..pos + 2);
let lines: Vec<&str> = raw.lines().collect();
// Skip keepalive/comment lines (start with ':')
if lines.iter().all(|l| l.starts_with(':') || l.is_empty()) {
continue;
}
let mut event_name = String::from("message");
for l in &lines {
if let Some(v) = l.strip_prefix("event: ") { event_name = v.to_string(); }
}
if event_name == "hello" { continue; }
// Any real event -> trigger sync
*trigger.lock().unwrap() = true;
let _ = app_cb.emit("sse-event", event_name);
}
}
Ok(())
});
if let Err(e) = result {
eprintln!("[SSE] Disconnected: {}", e);
let _ = app_sse.emit("sse-status", format!("reconnecting: {}", e));
}
std::thread::sleep(Duration::from_secs(3));
}
});
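The stream handling above accumulates chunks and splits frames on the blank line that terminates each SSE event; the event-name extraction can be sketched as a standalone function (the name `sse_event_name` is illustrative, not from the codebase):

```rust
/// Extract the event name from one raw SSE frame (the text between two
/// blank-line separators). Returns None for keepalive/comment-only frames.
fn sse_event_name(raw: &str) -> Option<String> {
    let lines: Vec<&str> = raw.lines().collect();
    // Frames consisting only of ':' comment lines are keepalives.
    if lines.iter().all(|l| l.starts_with(':') || l.is_empty()) {
        return None;
    }
    // Per the SSE format, a missing "event:" field means "message".
    let mut event_name = String::from("message");
    for l in &lines {
        if let Some(v) = l.strip_prefix("event: ") {
            event_name = v.to_string();
        }
    }
    Some(event_name)
}
```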
@@ -625,32 +834,6 @@ fn start_background_sync(
// --- App Setup ---
/// Check if another instance is running. If yes, pass the .cloud file to it and exit.
/// Check if a file is still being used by another process
fn is_file_in_use(path: &Path) -> bool {
// Try to open the file with exclusive access
// If it fails, another process has it open
#[cfg(target_os = "windows")]
{
use std::fs::OpenOptions;
// On Windows, try to open with write access - fails if file is locked by Office etc.
match OpenOptions::new().write(true).open(path) {
Ok(_) => false, // We could open it -> not in use
Err(_) => true, // Can't open -> still in use
}
}
#[cfg(not(target_os = "windows"))]
{
// On Linux/Mac, check /proc or lsof
let output = std::process::Command::new("lsof")
.arg(path.to_string_lossy().as_ref())
.output();
match output {
Ok(o) => !o.stdout.is_empty(), // lsof found processes -> in use
Err(_) => false, // lsof not available -> assume not in use
}
}
}
/// Single instance per user. On terminal servers each user gets their own
/// instance because the lock file is in %APPDATA% (user-specific).
fn handle_single_instance() {
@@ -704,8 +887,272 @@ fn handle_single_instance() {
// ---------------------------------------------------------------------------
// Native File-Provider-Integration (OneDrive-artige Platzhalter)
// ---------------------------------------------------------------------------
#[tauri::command]
fn cloud_files_supported() -> bool {
cloud_files::is_supported()
}
#[tauri::command]
async fn cloud_files_enable(
state: State<'_, AppState>,
mount_point: String,
) -> Result<(), String> {
let mp = PathBuf::from(&mount_point);
// Hold the MutexGuards only briefly so the future stays Send.
let (server, token, username) = {
let api_guard = state.api.lock().unwrap();
let api = api_guard.as_ref().ok_or("Nicht eingeloggt")?;
let username = state
.username
.lock()
.unwrap()
.clone()
.unwrap_or_else(|| "user".into());
(api.server_url.clone(), api.access_token.clone(), username)
};
#[cfg(windows)]
{
cloud_files::windows::set_context(server.clone(), token.clone(), mp.clone());
}
cloud_files::register_sync_root(&mp, "Mini-Cloud", &username)?;
// Fetch the tree from the server and create placeholders
let entries = fetch_remote_entries(&server, &token).await?;
cloud_files::populate_placeholders(&mp, &entries)?;
// Start the background loop: poll for changes + upload local modifications
let cfg = cloud_files::sync_loop::SyncLoopConfig {
server_url: server.clone(),
access_token: token.clone(),
mount_point: mp.clone(),
poll_interval_secs: 30,
};
let handle = cloud_files::sync_loop::start(cfg);
// Filesystem watcher with callback; forwards changed files
// directly to the sync loop.
let tx = handle.tx.clone();
let watcher = cloud_files::watcher::CallbackWatcher::new(&mp, move |path, kind| {
use notify::EventKind;
let relevant = matches!(kind, EventKind::Create(_) | EventKind::Modify(_));
if relevant {
let _ = tx.send(cloud_files::sync_loop::LoopMessage::LocalChange(path));
}
})
.map_err(|e| format!("watcher: {e}"))?;
*state.cloud_files_loop.lock().unwrap() = Some(handle);
*state.cloud_files_watcher.lock().unwrap() = Some(watcher);
// Persist the mount path so it is restored on restart.
let mut cfg = AppConfig::load();
cfg.cloud_files_mount = mount_point.clone();
let _ = cfg.save();
Ok(())
}
#[tauri::command]
async fn cloud_files_disable(
state: State<'_, AppState>,
mount_point: String,
) -> Result<(), String> {
// Stop the loop and the watcher
if let Some(handle) = state.cloud_files_loop.lock().unwrap().take() {
handle.stop_flag.store(true, std::sync::atomic::Ordering::Relaxed);
let _ = handle.tx.send(cloud_files::sync_loop::LoopMessage::Shutdown);
}
state.cloud_files_watcher.lock().unwrap().take();
let result = cloud_files::unregister_sync_root(&PathBuf::from(&mount_point));
// Clear the mount from the config even on error, so the client does not
// endlessly try to restore a dead path.
let mut cfg = AppConfig::load();
cfg.cloud_files_mount.clear();
let _ = cfg.save();
result
}
#[tauri::command]
fn cloud_files_get_mount() -> String {
AppConfig::load().cloud_files_mount
}
/// Emergency cleanup: deregister the folder as a sync root even when no
/// callback handle exists. Useful when the client was hard-killed and a
/// "dead" folder is left dangling in Windows.
#[tauri::command]
async fn cloud_files_force_cleanup(mount_point: String) -> Result<(), String> {
let mp = PathBuf::from(&mount_point);
let _ = cloud_files::unregister_sync_root(&mp);
let mut cfg = AppConfig::load();
cfg.cloud_files_mount.clear();
let _ = cfg.save();
Ok(())
}
#[tauri::command]
async fn cloud_files_pin(path: String) -> Result<(), String> {
cloud_files::pin_file(&PathBuf::from(path))
}
#[tauri::command]
async fn cloud_files_unpin(path: String) -> Result<(), String> {
cloud_files::unpin_file(&PathBuf::from(path))
}
async fn fetch_remote_entries(
server: &str,
token: &str,
) -> Result<Vec<cloud_files::RemoteEntry>, String> {
let client = reqwest::Client::new();
let url = format!("{}/api/sync/tree", server.trim_end_matches('/'));
let resp = client
.get(&url)
.bearer_auth(token)
.send()
.await
.map_err(|e| format!("tree: {e}"))?;
if !resp.status().is_success() {
return Err(format!("HTTP {}", resp.status()));
}
let json: serde_json::Value = resp.json().await.map_err(|e| e.to_string())?;
let tree = json
.get("tree")
.ok_or("Antwort ohne 'tree'")?
.as_array()
.cloned()
.unwrap_or_default();
let shared = json
.get("shared")
.and_then(|v| v.as_array())
.cloned()
.unwrap_or_default();
// Flatten recursively (keeping the parent_id structure).
// modified_at accepts both: the new "modified_at" or the
// old "updated_at" as a fallback.
fn walk(
nodes: &[serde_json::Value],
parent: Option<i64>,
out: &mut Vec<cloud_files::RemoteEntry>,
) {
for n in nodes {
let id = n.get("id").and_then(|x| x.as_i64()).unwrap_or(0);
let name = n
.get("name")
.and_then(|x| x.as_str())
.unwrap_or("")
.to_string();
let is_folder = n.get("is_folder").and_then(|x| x.as_bool()).unwrap_or(false);
let size = n.get("size").and_then(|x| x.as_i64()).unwrap_or(0);
let modified_at = n
.get("modified_at")
.and_then(|x| x.as_str())
.or_else(|| n.get("updated_at").and_then(|x| x.as_str()))
.unwrap_or("")
.to_string();
let checksum = n
.get("checksum")
.and_then(|x| x.as_str())
.map(|s| s.to_string());
out.push(cloud_files::RemoteEntry {
id,
name,
parent_id: parent,
is_folder,
size,
modified_at,
checksum,
});
if let Some(children) = n.get("children").and_then(|x| x.as_array()) {
walk(children, Some(id), out);
}
}
}
let mut flat = Vec::new();
walk(&tree, None, &mut flat);
// Only add the virtual "Geteilt mit mir" (shared with me) folder when
// shared files exist. ID -1 is reserved for it (no collision
// with real DB IDs).
if !shared.is_empty() {
flat.push(cloud_files::RemoteEntry {
id: -1,
name: "Geteilt mit mir".to_string(),
parent_id: None,
is_folder: true,
size: 0,
modified_at: String::new(),
checksum: None,
});
walk(&shared, Some(-1), &mut flat);
}
Ok(flat)
}
/// Short-circuit for shell context-menu invocations:
/// `minicloud-sync --pin <file>` or `--unpin <file>` performs the
/// action directly and exits. No UI, no tray.
/// Logs go to %LOCALAPPDATA%\MiniCloud Sync\cli.log - otherwise we
/// could never debug processes launched from Explorer.
#[cfg(windows)]
fn handle_cli_shortcuts() {
use std::io::Write;
let args: Vec<String> = std::env::args().collect();
if args.len() < 3 {
return;
}
let cmd = args[1].as_str();
if cmd != "--pin" && cmd != "--unpin" {
return;
}
let path = std::path::PathBuf::from(&args[2]);
let log_path = dirs::data_local_dir()
.unwrap_or_else(|| std::path::PathBuf::from("."))
.join("MiniCloud Sync")
.join("cli.log");
if let Some(p) = log_path.parent() {
let _ = std::fs::create_dir_all(p);
}
let log = |msg: &str| {
if let Ok(mut f) = std::fs::OpenOptions::new()
.create(true)
.append(true)
.open(&log_path)
{
let _ = writeln!(f, "[{}] {}", chrono::Utc::now().to_rfc3339(), msg);
}
};
log(&format!("CLI invoked: {} {}", cmd, path.display()));
let result = match cmd {
"--pin" => cloud_files::pin_file(&path),
"--unpin" => cloud_files::unpin_file(&path),
_ => unreachable!(),
};
match &result {
Ok(()) => log(&format!("{cmd} OK: {}", path.display())),
Err(e) => log(&format!("{cmd} FAILED: {e}")),
}
std::process::exit(if result.is_ok() { 0 } else { 1 });
}
#[cfg(not(windows))]
fn handle_cli_shortcuts() {}
#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
handle_cli_shortcuts();
handle_single_instance();
tauri::Builder::default()
@@ -719,8 +1166,11 @@ pub fn run() {
watchers: Mutex::new(Vec::new()),
sync_running: Arc::new(Mutex::new(false)),
locked_files: Mutex::new(Vec::new()),
opened_files: Mutex::new(HashMap::new()),
sync_paths: Mutex::new(Vec::new()),
journal: Arc::new(Journal::open().expect("Journal konnte nicht geoeffnet werden")),
background_started: AtomicBool::new(false),
cloud_files_loop: Mutex::new(None),
cloud_files_watcher: Mutex::new(None),
})
.on_window_event(|window, event| {
// Close button = minimize to tray instead of quit
@@ -827,13 +1277,21 @@ pub fn run() {
start_sync,
run_sync_now,
open_cloud_file,
open_offline_file,
get_file_tree,
get_status,
lock_file_cmd,
unlock_file_cmd,
browse_sync_folder,
mark_offline,
unmark_offline,
cloud_files_supported,
cloud_files_enable,
cloud_files_disable,
cloud_files_get_mount,
cloud_files_force_cleanup,
cloud_files_pin,
cloud_files_unpin,
])
.run(tauri::generate_context!())
.expect("error while running tauri application");
@@ -124,8 +124,13 @@ impl MiniCloudApi {
.await
.map_err(|e| format!("Sync-Tree Fehler: {}", e))?;
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
return Err(format!("Sync-Tree HTTP {}: {}", status, text));
}
let data: SyncTreeResponse = resp.json().await
.map_err(|e| format!("Parse-Fehler: {}", e))?;
.map_err(|e| format!("Sync-Tree Parse-Fehler: {}", e))?;
Ok(data.tree)
}
@@ -204,9 +209,14 @@ impl MiniCloudApi {
.json(&body)
.send()
.await
.map_err(|e| e.to_string())?;
.map_err(|e| format!("Create-Folder Verbindungsfehler: {}", e))?;
resp.json().await.map_err(|e| e.to_string())
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
return Err(format!("Create-Folder fehlgeschlagen ({}): {}", status, text));
}
resp.json().await.map_err(|e| format!("Create-Folder Parse-Fehler: {}", e))
}
pub async fn lock_file(&self, file_id: i64, client_info: &str) -> Result<(), String> {
@@ -241,6 +251,20 @@ impl MiniCloudApi {
Ok(())
}
pub async fn delete_file(&self, file_id: i64) -> Result<(), String> {
let url = format!("{}/api/files/{}", self.server_url, file_id);
let resp = self.client.delete(&url)
.header("Authorization", self.auth_header())
.send()
.await
.map_err(|e| format!("Delete Fehler: {}", e))?;
if !resp.status().is_success() {
let text = resp.text().await.unwrap_or_default();
return Err(format!("Delete fehlgeschlagen: {}", text));
}
Ok(())
}
pub async fn heartbeat(&self, file_id: i64) -> Result<(), String> {
let url = format!("{}/api/files/{}/heartbeat", self.server_url, file_id);
self.client.post(&url)
@@ -13,6 +13,10 @@ pub struct AppConfig {
pub auto_start: bool,
#[serde(default)]
pub start_minimized: bool,
/// Persisted mount point of the cloud-files integration.
/// Empty = not active. Re-activated on app start.
#[serde(default)]
pub cloud_files_mount: String,
}
impl AppConfig {
@@ -1,27 +1,28 @@
use crate::sync::api::{FileEntry, MiniCloudApi};
use crate::sync::journal::{Journal, JournalEntry};
use sha2::{Digest, Sha256};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::Arc;
/// A configured sync path: maps a server folder to a local folder
/// A configured sync path: maps a server folder to a local folder.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SyncPath {
pub id: String, // unique ID
pub server_path: String, // e.g. "/" (root) or "/Projekte/2026"
pub server_folder_id: Option<i64>, // server folder ID (None = root)
pub local_dir: String, // local directory path
pub mode: SyncMode, // virtual or full
pub id: String,
pub server_path: String,
pub server_folder_id: Option<i64>,
pub local_dir: String,
pub mode: SyncMode,
pub enabled: bool,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum SyncMode {
Virtual, // .cloud placeholder files, download on demand
Full, // full sync, all files downloaded
Virtual,
Full,
}
/// Cloud placeholder file content (small JSON inside .cloud files)
/// `.cloud` placeholder content (JSON payload of the 0-byte-ish placeholder).
#[derive(Debug, Serialize, Deserialize)]
struct CloudPlaceholder {
id: i64,
@@ -35,473 +36,446 @@ struct CloudPlaceholder {
pub struct SyncEngine {
pub api: MiniCloudApi,
pub sync_paths: Vec<SyncPath>,
last_sync: Option<String>,
/// Checksums from last sync - used to detect who changed a file
/// Key: file path (relative), Value: server checksum at last sync
known_checksums: HashMap<String, String>,
pub journal: Arc<Journal>,
pub username: String,
}
impl SyncEngine {
pub fn new(api: MiniCloudApi) -> Self {
Self { api, sync_paths: Vec::new(), last_sync: None, known_checksums: HashMap::new() }
pub fn new(api: MiniCloudApi, journal: Arc<Journal>, username: String) -> Self {
Self { api, sync_paths: Vec::new(), journal, username }
}
/// Sync all configured paths
/// Sync every configured path.
pub async fn sync_all(&mut self) -> Result<Vec<String>, String> {
let mut all_logs = Vec::new();
let mut log = Vec::new();
let tree = self.api.get_sync_tree().await?;
let sync_paths = self.sync_paths.clone();
for sp in &sync_paths {
if !sp.enabled { continue; }
let local_dir = PathBuf::from(&sp.local_dir);
std::fs::create_dir_all(&local_dir).ok();
// Find the server subtree for this sync path
let subtree = if sp.server_folder_id.is_some() {
find_subtree(&tree, sp.server_folder_id.unwrap())
} else {
Some(tree.clone())
let subtree = match sp.server_folder_id {
Some(id) => find_subtree(&tree, id).unwrap_or_default(),
None => tree.clone(),
};
if let Some(entries) = subtree {
let mut log = Vec::new();
match sp.mode {
SyncMode::Virtual => {
self.sync_virtual(&entries, &local_dir, &sp.server_path, &mut log).await;
// Also upload new local files (not on server yet)
self.sync_upload_new(&entries, &local_dir, sp.server_folder_id, &mut log).await;
}
SyncMode::Full => {
self.sync_full_download(&entries, &local_dir, &mut log).await;
self.sync_full_upload(&entries, &local_dir, sp.server_folder_id, &mut log).await;
}
}
all_logs.extend(log);
}
}
// Phase 1: propagate deletions based on journal history.
self.detect_deletions(sp, &subtree, &local_dir, &mut log).await;
self.last_sync = Some(chrono::Utc::now().to_rfc3339());
Ok(all_logs)
// Phase 2: normal sync (downloads, uploads, conflicts).
self.sync_dir(&subtree, &local_dir, "", sp.server_folder_id, sp, &mut log).await;
}
Ok(log)
}
/// Virtual sync: create .cloud placeholder files
async fn sync_virtual(&mut self, entries: &[FileEntry], local_dir: &Path,
server_path: &str, log: &mut Vec<String>) {
for entry in entries {
let local_path = local_dir.join(&entry.name);
/// Walks the journal for this sync path and reconciles existence:
/// - file was in journal and is gone locally but still on server -> delete on server
/// - file was in journal and is gone on server but still local -> delete locally
/// - file is gone on both sides -> clean journal entry
async fn detect_deletions(
&self,
sp: &SyncPath,
subtree: &[FileEntry],
local_root: &Path,
log: &mut Vec<String>,
) {
use std::collections::HashMap;
let mut server_files: HashMap<String, i64> = HashMap::new();
collect_server_files(subtree, "", &mut server_files);
for je in self.journal.list_for_sync(&sp.id) {
let local_real = local_root.join(&je.relative_path);
let local_cloud = {
let parent = local_real.parent().map(|p| p.to_path_buf());
let fname = local_real.file_name().map(|n| n.to_string_lossy().to_string());
match (parent, fname) {
(Some(p), Some(n)) => p.join(format!("{}.cloud", n)),
_ => PathBuf::new(),
}
};
let local_exists = local_real.exists() || local_cloud.exists();
let server_id = server_files.get(&je.relative_path).copied();
match (local_exists, server_id) {
(true, Some(_)) => { /* present on both sides - normal sync handles it */ }
(false, None) => {
let _ = self.journal.delete(&sp.id, &je.relative_path);
}
(false, Some(id)) => {
match self.api.delete_file(id).await {
Ok(_) => {
log.push(format!("Server-Papierkorb: {}", je.relative_path));
let _ = self.journal.delete(&sp.id, &je.relative_path);
}
Err(e) => log.push(format!("Server-Delete-Fehler {}: {}", je.relative_path, e)),
}
}
(true, None) => {
std::fs::remove_file(&local_real).ok();
std::fs::remove_file(&local_cloud).ok();
let _ = self.journal.delete(&sp.id, &je.relative_path);
log.push(format!("Lokal geloescht: {}", je.relative_path));
}
}
}
}
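The `match` in detect_deletions boils down to a truth table over local and server existence; a pure sketch of the same decision (the enum and names are illustrative, not from the codebase):

```rust
/// What detect_deletions does for one journal entry, as a pure function.
#[derive(Debug, PartialEq)]
enum DeletionAction {
    Keep,           // present on both sides: normal sync handles it
    CleanJournal,   // gone on both sides: just drop the journal row
    DeleteOnServer, // gone locally, still on server: propagate deletion
    DeleteLocally,  // gone on server, still local: remove the local copy
}

fn reconcile_existence(local_exists: bool, on_server: bool) -> DeletionAction {
    match (local_exists, on_server) {
        (true, true) => DeletionAction::Keep,
        (false, false) => DeletionAction::CleanJournal,
        (false, true) => DeletionAction::DeleteOnServer,
        (true, false) => DeletionAction::DeleteLocally,
    }
}
```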
/// Recursively sync a single directory level.
/// `rel_prefix` is the journal-relative path prefix (e.g. "", or "sub/dir/").
async fn sync_dir(
&mut self,
server_entries: &[FileEntry],
local_dir: &Path,
rel_prefix: &str,
parent_id: Option<i64>,
sp: &SyncPath,
log: &mut Vec<String>,
) {
use std::collections::HashMap;
let server_by_name: HashMap<String, &FileEntry> = server_entries
.iter().map(|e| (e.name.clone(), e)).collect();
// --- Pass 1: iterate server entries, reconcile each against local/journal ---
for entry in server_entries {
let rel = if rel_prefix.is_empty() {
entry.name.clone()
} else {
format!("{}/{}", rel_prefix, entry.name)
};
if entry.is_folder {
std::fs::create_dir_all(&local_path).ok();
let sub_local = local_dir.join(&entry.name);
std::fs::create_dir_all(&sub_local).ok();
if let Some(children) = &entry.children {
let sub_path = format!("{}/{}", server_path.trim_end_matches('/'), entry.name);
Box::pin(self.sync_virtual(children, &local_path, &sub_path, log)).await;
Box::pin(self.sync_dir(children, &sub_local, &rel, Some(entry.id), sp, log)).await;
}
} else {
// Check if real file exists (manually downloaded or offline-marked)
if local_path.exists() {
let local_hash = compute_file_hash(&local_path);
let server_hash = entry.checksum.as_deref().unwrap_or("");
let file_key = format!("{}/{}", server_path, entry.name);
if local_hash != server_hash {
if entry.locked.unwrap_or(false) {
log.push(format!("Zurueckgehalten (gesperrt): {}", entry.name));
continue;
}
// Check if WE changed the file locally
let last_known = self.known_checksums.get(&file_key);
let local_changed = match last_known {
Some(known) => local_hash != *known, // local differs from last sync
None => false, // first sync, don't assume local changed
};
let server_changed = match last_known {
Some(known) => server_hash != known, // server differs from last sync
None => true, // first sync, trust server
};
if server_changed && !local_changed {
// Only server changed -> download
match self.api.download_file(entry.id, &local_path).await {
Ok(_) => log.push(format!("Server->Lokal: {}", entry.name)),
Err(e) => log.push(format!("Download-Fehler {}: {}", entry.name, e)),
}
} else if local_changed && !server_changed {
// Only local changed -> upload
match self.api.upload_file(&local_path, None).await {
Ok(_) => log.push(format!("Lokal->Server: {}", entry.name)),
Err(e) => log.push(format!("Upload-Fehler {}: {}", entry.name, e)),
}
} else {
// Both changed -> conflict! Download server, keep local as conflict copy
let conflict_name = format!("{} (Konflikt).{}",
local_path.file_stem().unwrap().to_string_lossy(),
local_path.extension().map(|e| e.to_string_lossy().to_string()).unwrap_or_default());
let conflict_path = local_path.parent().unwrap().join(&conflict_name);
std::fs::rename(&local_path, &conflict_path).ok();
match self.api.download_file(entry.id, &local_path).await {
Ok(_) => log.push(format!("KONFLIKT: {} (lokale Kopie: {})", entry.name, conflict_name)),
Err(e) => log.push(format!("Download-Fehler {}: {}", entry.name, e)),
}
}
}
// Track current server checksum
self.known_checksums.insert(file_key, server_hash.to_string());
continue;
}
// Create .cloud placeholder
let cloud_path = local_dir.join(format!("{}.cloud", entry.name));
if !cloud_path.exists() {
let placeholder = CloudPlaceholder {
id: entry.id,
name: entry.name.clone(),
size: entry.size.unwrap_or(0),
checksum: entry.checksum.clone().unwrap_or_default(),
updated_at: entry.updated_at.clone().unwrap_or_default(),
server_path: format!("{}/{}", server_path.trim_end_matches('/'), entry.name),
};
if let Ok(json) = serde_json::to_string_pretty(&placeholder) {
std::fs::write(&cloud_path, json).ok();
log.push(format!("Platzhalter: {}.cloud", entry.name));
}
}
}
}
// Remove .cloud files for deleted server files
if let Ok(dir_entries) = std::fs::read_dir(local_dir) {
for entry in dir_entries.flatten() {
let name = entry.file_name().to_string_lossy().to_string();
if name.ends_with(".cloud") {
let real_name = name.trim_end_matches(".cloud");
let exists_on_server = entries.iter().any(|e| e.name == real_name);
if !exists_on_server {
std::fs::remove_file(entry.path()).ok();
log.push(format!("Entfernt: {}", name));
}
}
}
}
}
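The checksum logic above is classic three-way change detection: compare the local and server hashes against the checksum recorded at the last sync. A condensed sketch of just the decision (names illustrative):

```rust
#[derive(Debug, PartialEq)]
enum SyncAction { UpToDate, Download, Upload, Conflict }

/// Three-way decision: `last_known` is the server checksum recorded at
/// the previous sync (None on first sync, where the server copy wins).
fn decide(local: &str, server: &str, last_known: Option<&str>) -> SyncAction {
    if local == server {
        return SyncAction::UpToDate;
    }
    let local_changed = last_known.map_or(false, |k| local != k);
    let server_changed = last_known.map_or(true, |k| server != k);
    match (local_changed, server_changed) {
        (false, true) => SyncAction::Download, // only the server moved
        (true, false) => SyncAction::Upload,   // only we moved
        _ => SyncAction::Conflict,             // both moved
    }
}
```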
/// Upload new local files that don't exist on server yet (for both Virtual + Full mode)
async fn sync_upload_new(&mut self, server_entries: &[FileEntry], local_dir: &Path,
parent_id: Option<i64>, log: &mut Vec<String>) {
let server_names: std::collections::HashSet<String> = server_entries.iter()
.map(|e| e.name.clone()).collect();
let entries = match std::fs::read_dir(local_dir) {
Ok(e) => e,
Err(_) => return,
};
for entry in entries.flatten() {
let name = entry.file_name().to_string_lossy().to_string();
let path = entry.path();
// Skip hidden, temp, .cloud files
if name.starts_with('.') || name.starts_with('~')
|| name.ends_with(".tmp") || name.ends_with(".cloud") {
continue;
}
if path.is_dir() {
// New folder: create on server + recurse
if !server_names.contains(&name) {
match self.api.create_folder(&name, parent_id).await {
Ok(folder) => {
log.push(format!("Ordner erstellt: {}", name));
Box::pin(self.sync_upload_new(&[], &path, Some(folder.id), log)).await;
}
Err(e) => log.push(format!("Ordner-Fehler {}: {}", name, e)),
self.reconcile_file(entry, local_dir, &rel, parent_id, sp, log).await;
}
// --- Pass 2: iterate local entries, upload new local files/folders ---
let dir_iter = match std::fs::read_dir(local_dir) {
Ok(d) => d,
Err(_) => return,
};
for e in dir_iter.flatten() {
let name = e.file_name().to_string_lossy().to_string();
if should_skip_name(&name) { continue; }
let path = e.path();
let is_dir = path.is_dir();
// `.cloud` placeholders are stored locally under "foo.txt.cloud"
// but represent the server-side "foo.txt".
let real_name = name.trim_end_matches(".cloud").to_string();
let is_placeholder = name.ends_with(".cloud") && !is_dir;
// Already covered by server pass?
if server_by_name.contains_key(&real_name) { continue; }
if is_placeholder { continue; } // orphan placeholder - handled below
let rel = if rel_prefix.is_empty() {
real_name.clone()
} else {
format!("{}/{}", rel_prefix, real_name)
};
if is_dir {
    match self.api.create_folder(&real_name, parent_id).await {
        Ok(folder) => {
            log.push(format!("Ordner erstellt: {}", rel));
            self.upload_local_tree(&path, Some(folder.id), &rel, sp, log).await;
        }
        Err(e) => log.push(format!("Ordner-Fehler {}: {}", rel, e)),
    }
} else {
    // New file: upload and record the synced state in the journal
    match self.api.upload_file(&path, parent_id).await {
        Ok(fe) => {
            log.push(format!("Hochgeladen: {}", rel));
            let checksum = fe.checksum.unwrap_or_default();
            let size = fe.size.unwrap_or(0);
            let _ = self.journal.upsert(&JournalEntry {
                sync_path_id: sp.id.clone(),
                relative_path: rel.clone(),
                file_id: Some(fe.id),
                synced_checksum: checksum,
                synced_size: size,
                synced_mtime: fe.updated_at.unwrap_or_default(),
                local_state: "offline".to_string(),
            });
        }
        Err(e) => log.push(format!("Upload-Fehler {}: {}", rel, e)),
    }
}
}
// --- Pass 3: clean up orphan .cloud placeholders for files gone from server ---
if let Ok(dir_iter) = std::fs::read_dir(local_dir) {
    for e in dir_iter.flatten() {
        let name = e.file_name().to_string_lossy().to_string();
        if !name.ends_with(".cloud") || e.path().is_dir() { continue; }
        let real_name = name.trim_end_matches(".cloud");
        if server_by_name.contains_key(real_name) { continue; }
        std::fs::remove_file(e.path()).ok();
        let rel = if rel_prefix.is_empty() {
            real_name.to_string()
        } else {
            format!("{}/{}", rel_prefix, real_name)
        };
        let _ = self.journal.delete(&sp.id, &rel);
        log.push(format!("Entfernt (Server): {}", name));
    }
}
}
/// Full sync: download all files from server
async fn sync_full_download(&self, entries: &[FileEntry], local_dir: &Path,
    log: &mut Vec<String>) {
    for entry in entries {
        let local_path = local_dir.join(&entry.name);
        if entry.is_folder {
            std::fs::create_dir_all(&local_path).ok();
            if let Some(children) = &entry.children {
                Box::pin(self.sync_full_download(children, &local_path, log)).await;
            }
        } else {
            if entry.locked.unwrap_or(false) { continue; }
            let needs_download = if local_path.exists() {
                let local_hash = compute_file_hash(&local_path);
                local_hash != entry.checksum.as_deref().unwrap_or("")
            } else {
                true
            };
            // Remove stale .cloud placeholder
            let cloud_path = local_dir.join(format!("{}.cloud", entry.name));
            if cloud_path.exists() {
                std::fs::remove_file(&cloud_path).ok();
            }
            if needs_download {
                match self.api.download_file(entry.id, &local_path).await {
                    Ok(_) => log.push(format!("Heruntergeladen: {}", entry.name)),
                    Err(e) => log.push(format!("Fehler {}: {}", entry.name, e)),
                }
            }
        }
    }
}
/// Core 3-way reconciliation for a single server file.
async fn reconcile_file(
    &self,
    entry: &FileEntry,
    local_dir: &Path,
    rel: &str,
    parent_id: Option<i64>,
    sp: &SyncPath,
    log: &mut Vec<String>,
) {
    let real_path = local_dir.join(&entry.name);
    let cloud_path = local_dir.join(format!("{}.cloud", entry.name));
    let journal_entry = self.journal.get(&sp.id, rel);
    let server_hash = entry.checksum.clone().unwrap_or_default();
    let server_size = entry.size.unwrap_or(0);
    let server_mtime = entry.updated_at.clone().unwrap_or_default();
    // Case A: real file exists locally = offline state
    if real_path.exists() && !real_path.is_dir() {
        // Avoid race: if placeholder still around, remove it
        if cloud_path.exists() { std::fs::remove_file(&cloud_path).ok(); }
        let local_hash = compute_file_hash(&real_path);
        if local_hash == server_hash {
            // In sync - just (re)record journal
            self.journal_offline(sp, rel, entry, &server_hash, server_size, &server_mtime);
            return;
        }
        // Hashes differ. Locked by someone else? Hold back.
        if entry.locked.unwrap_or(false) {
            let by = entry.locked_by.clone().unwrap_or_default();
            if by != self.username {
                log.push(format!("Zurueckgehalten (gesperrt von {}): {}", by, rel));
                return;
            }
        }
        let (local_changed, server_changed) = match &journal_entry {
            Some(j) => (local_hash != j.synced_checksum, server_hash != j.synced_checksum),
            None => {
                // No journal history: this is the first time we're tracking
                // this file. Treat the server as authoritative (Nextcloud
                // does the same on first sync) so edits made on the web
                // GUI or other clients propagate down cleanly.
                (false, true)
            }
        };
        if local_changed && !server_changed {
            // Upload
            match self.api.upload_file(&real_path, parent_id).await {
                Ok(fe) => {
                    log.push(format!("Lokal->Server: {}", rel));
                    let new_hash = fe.checksum.unwrap_or(local_hash.clone());
                    self.journal_offline(sp, rel, entry, &new_hash,
                        fe.size.unwrap_or(server_size),
                        &fe.updated_at.unwrap_or(server_mtime.clone()));
                }
                Err(e) => log.push(format!("Upload-Fehler {}: {}", rel, e)),
            }
        } else if server_changed && !local_changed {
            // Download
            match self.api.download_file(entry.id, &real_path).await {
                Ok(_) => {
                    log.push(format!("Server->Lokal: {}", rel));
                    self.journal_offline(sp, rel, entry, &server_hash, server_size, &server_mtime);
                }
                Err(e) => log.push(format!("Download-Fehler {}: {}", rel, e)),
            }
        } else {
            // Both changed OR no journal -> conflict copy
            let conflict_path = make_conflict_path(&real_path, &self.username);
            std::fs::rename(&real_path, &conflict_path).ok();
            match self.api.download_file(entry.id, &real_path).await {
                Ok(_) => {
                    log.push(format!("KONFLIKT: {} (lokal: {})", rel,
                        conflict_path.file_name().unwrap().to_string_lossy()));
                    self.journal_offline(sp, rel, entry, &server_hash, server_size, &server_mtime);
                }
                Err(e) => {
                    // Restore original
                    std::fs::rename(&conflict_path, &real_path).ok();
                    log.push(format!("Download-Fehler {}: {}", rel, e));
                }
            }
        }
        return;
    }
    // Case B: local has a .cloud placeholder (or neither) = virtual state
    // Virtual placeholders never have local edits, just keep them fresh.
    let needs_write = match std::fs::read_to_string(&cloud_path) {
        Ok(content) => match serde_json::from_str::<CloudPlaceholder>(&content) {
            Ok(old) => old.checksum != server_hash || old.id != entry.id,
            Err(_) => true,
        },
        Err(_) => true,
    };
    if needs_write {
        let placeholder = CloudPlaceholder {
            id: entry.id,
            name: entry.name.clone(),
            size: server_size,
            checksum: server_hash.clone(),
            updated_at: server_mtime.clone(),
            server_path: rel.to_string(),
        };
        if let Ok(json) = serde_json::to_string_pretty(&placeholder) {
            if !cloud_path.exists() {
                log.push(format!("Platzhalter: {}.cloud", entry.name));
            } else {
                log.push(format!("Platzhalter aktualisiert: {}.cloud", entry.name));
            }
            std::fs::write(&cloud_path, json).ok();
        }
    }
    self.journal.upsert(&JournalEntry {
        sync_path_id: sp.id.clone(),
        relative_path: rel.to_string(),
        file_id: Some(entry.id),
        synced_checksum: server_hash,
        synced_size: server_size,
        synced_mtime: server_mtime,
        local_state: "virtual".to_string(),
    }).ok();
    // If Full mode and no real file yet, download now
    if sp.mode == SyncMode::Full && !real_path.exists() {
        if let Err(e) = self.api.download_file(entry.id, &real_path).await {
            log.push(format!("Full-Download-Fehler {}: {}", rel, e));
        } else {
            std::fs::remove_file(&cloud_path).ok();
            log.push(format!("Heruntergeladen: {}", rel));
            // Update journal to offline
            if let Some(mut j) = self.journal.get(&sp.id, rel) {
                j.local_state = "offline".to_string();
                let _ = self.journal.upsert(&j);
            }
        }
    }
}
/// Open a .cloud placeholder file: download the real file, rename, return path
#[allow(dead_code)]
pub async fn open_cloud_file(&self, cloud_path: &Path) -> Result<PathBuf, String> {
let content = std::fs::read_to_string(cloud_path)
.map_err(|e| format!("Platzhalter lesen: {}", e))?;
let placeholder: CloudPlaceholder = serde_json::from_str(&content)
.map_err(|e| format!("Platzhalter ungueltig: {}", e))?;
// Strip the .cloud extension to get the real filename
let real_path = cloud_path.parent().unwrap().join(&placeholder.name);
// Download
self.api.download_file(placeholder.id, &real_path).await?;
// Remove placeholder
std::fs::remove_file(cloud_path).ok();
// Lock on server
let _ = self.api.lock_file(placeholder.id, "Desktop Sync Client").await;
Ok(real_path)
}
fn journal_offline(
&self, sp: &SyncPath, rel: &str, entry: &FileEntry,
hash: &str, size: i64, mtime: &str,
) {
let _ = self.journal.upsert(&JournalEntry {
sync_path_id: sp.id.clone(),
relative_path: rel.to_string(),
file_id: Some(entry.id),
synced_checksum: hash.to_string(),
synced_size: size,
synced_mtime: mtime.to_string(),
local_state: "offline".to_string(),
});
}
/// Close a previously opened file: sync back, recreate .cloud, unlock
#[allow(dead_code)]
pub async fn close_cloud_file(&self, real_path: &Path, file_id: i64) -> Result<(), String> {
// Upload changes
// We need the parent_id - for now upload to the same location
// The server handles overwrite by filename
let _ = self.api.upload_file(real_path, None).await;
// Unlock
let _ = self.api.unlock_file(file_id).await;
// Delete local copy and recreate placeholder
let cloud_path = real_path.parent().unwrap()
.join(format!("{}.cloud", real_path.file_name().unwrap().to_string_lossy()));
let size = std::fs::metadata(real_path).map(|m| m.len() as i64).unwrap_or(0);
let checksum = compute_file_hash(real_path);
let placeholder = CloudPlaceholder {
id: file_id,
name: real_path.file_name().unwrap().to_string_lossy().to_string(),
size,
checksum,
updated_at: chrono::Utc::now().to_rfc3339(),
server_path: String::new(),
};
    if let Ok(json) = serde_json::to_string_pretty(&placeholder) {
        std::fs::write(&cloud_path, json).ok();
    }
    std::fs::remove_file(real_path).ok();
    Ok(())
}
/// Walk a freshly-created local tree and upload every file (used after
/// creating a new folder on the server).
async fn upload_local_tree(
    &self, dir: &Path, parent_id: Option<i64>, rel_prefix: &str,
    sp: &SyncPath, log: &mut Vec<String>,
) {
    let iter = match std::fs::read_dir(dir) { Ok(d) => d, Err(_) => return };
    for e in iter.flatten() {
        let name = e.file_name().to_string_lossy().to_string();
        if should_skip_name(&name) { continue; }
        let path = e.path();
        let rel = format!("{}/{}", rel_prefix, name);
        if path.is_dir() {
            match self.api.create_folder(&name, parent_id).await {
                Ok(folder) => {
                    log.push(format!("Ordner erstellt: {}", rel));
                    Box::pin(self.upload_local_tree(&path, Some(folder.id), &rel, sp, log)).await;
                }
                Err(e) => log.push(format!("Ordner-Fehler {}: {}", rel, e)),
            }
        } else {
            match self.api.upload_file(&path, parent_id).await {
                Ok(fe) => {
                    log.push(format!("Hochgeladen: {}", rel));
                    self.journal_offline(sp, &rel, &fe,
                        &fe.checksum.clone().unwrap_or_default(),
                        fe.size.unwrap_or(0),
                        &fe.updated_at.clone().unwrap_or_default());
                }
                Err(e) => log.push(format!("Upload-Fehler {}: {}", rel, e)),
            }
        }
    }
}
}
fn should_skip_name(name: &str) -> bool {
    name.starts_with('.') || name.starts_with('~') || name.ends_with(".tmp")
}
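As a quick illustration, here is the filter above copied verbatim into a standalone sketch: it keeps dotfiles, Office/editor lock files, and temp artifacts out of every sync pass, while `.cloud` placeholders deliberately pass through (they are handled by separate `ends_with(".cloud")` checks in the engine).

```rust
// Verbatim copy of should_skip_name, runnable on its own.
fn should_skip_name(name: &str) -> bool {
    name.starts_with('.') || name.starts_with('~') || name.ends_with(".tmp")
}

fn main() {
    // Hidden files, lock files, and temp files are skipped.
    assert!(should_skip_name(".DS_Store"));
    assert!(should_skip_name("~$Bericht.docx"));
    assert!(should_skip_name("upload.partial.tmp"));
    // Regular names (including .cloud placeholders) pass this filter.
    assert!(!should_skip_name("foto.jpg"));
    assert!(!should_skip_name("foto.jpg.cloud"));
}
```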
fn make_conflict_path(original: &Path, username: &str) -> PathBuf {
let stem = original.file_stem().map(|s| s.to_string_lossy().to_string()).unwrap_or_default();
let ext = original.extension().map(|e| e.to_string_lossy().to_string());
let ts = chrono::Local::now().format("%Y-%m-%d %H%M%S").to_string();
let name = match ext {
Some(e) if !e.is_empty() => format!("{} (Konflikt {} {}).{}", stem, username, ts, e),
_ => format!("{} (Konflikt {} {})", stem, username, ts),
};
original.parent().map(|p| p.join(&name)).unwrap_or_else(|| PathBuf::from(&name))
}
fn collect_server_files(
entries: &[FileEntry],
prefix: &str,
out: &mut std::collections::HashMap<String, i64>,
) {
for e in entries {
let rel = if prefix.is_empty() {
e.name.clone()
} else {
format!("{}/{}", prefix, e.name)
};
if e.is_folder {
if let Some(children) = &e.children {
collect_server_files(children, &rel, out);
}
} else {
out.insert(rel, e.id);
}
}
}
fn find_subtree(tree: &[FileEntry], folder_id: i64) -> Option<Vec<FileEntry>> {
for entry in tree {
if entry.id == folder_id {
return entry.children.clone();
}
if let Some(children) = &entry.children {
if let Some(result) = find_subtree(children, folder_id) {
return Some(result);
}
}
}
None
}
/// Parse a server timestamp (may or may not have timezone)
fn parse_server_time(s: &str) -> Option<std::time::SystemTime> {
// Try with timezone first (RFC3339)
if let Ok(dt) = chrono::DateTime::parse_from_rfc3339(s) {
return Some(std::time::SystemTime::from(dt));
}
// Try without timezone (naive, assume UTC)
if let Ok(dt) = chrono::NaiveDateTime::parse_from_str(s, "%Y-%m-%dT%H:%M:%S%.f") {
let utc = dt.and_utc();
return Some(std::time::SystemTime::from(utc));
}
if let Ok(dt) = chrono::NaiveDateTime::parse_from_str(s, "%Y-%m-%dT%H:%M:%S") {
let utc = dt.and_utc();
return Some(std::time::SystemTime::from(utc));
}
None
}
pub fn compute_file_hash(path: &Path) -> String {
let data = match std::fs::read(path) {
Ok(d) => d,
@@ -0,0 +1,120 @@
use rusqlite::{params, Connection};
use std::path::PathBuf;
use std::sync::Mutex;
/// One row of the sync journal. Represents the "last known synced state"
/// for a single file within a sync path. The server and local checksum
/// matched this value at the last successful sync.
#[derive(Debug, Clone)]
pub struct JournalEntry {
pub sync_path_id: String,
pub relative_path: String,
pub file_id: Option<i64>,
pub synced_checksum: String,
pub synced_size: i64,
pub synced_mtime: String,
pub local_state: String, // "virtual" or "offline"
}
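The journal row above is what turns the sync into a true 3-way merge: comparing the current local and server checksums against the last-synced checksum decides between upload, download, and conflict. A minimal standalone sketch of that decision (a hypothetical helper mirroring the logic in `reconcile_file`, not the engine's actual API):

```rust
#[derive(Debug, PartialEq)]
enum Action { InSync, Upload, Download, Conflict }

/// Decide a sync action from current hashes plus the journal's
/// last-synced checksum (None = file never synced before).
fn decide(local: &str, server: &str, last_synced: Option<&str>) -> Action {
    if local == server {
        return Action::InSync;
    }
    let (local_changed, server_changed) = match last_synced {
        Some(known) => (local != known, server != known),
        // No history: treat the server as authoritative on first contact.
        None => (false, true),
    };
    match (local_changed, server_changed) {
        (true, false) => Action::Upload,
        (false, true) => Action::Download,
        // Both sides changed: server wins, local becomes a conflict copy.
        _ => Action::Conflict,
    }
}

fn main() {
    assert_eq!(decide("a", "a", None), Action::InSync);
    assert_eq!(decide("b", "a", Some("a")), Action::Upload);
    assert_eq!(decide("a", "b", Some("a")), Action::Download);
    assert_eq!(decide("b", "c", Some("a")), Action::Conflict);
}
```

Without the persisted `synced_checksum`, only "local differs from server" is observable and every divergence would look like a conflict.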
pub struct Journal {
conn: Mutex<Connection>,
}
impl Journal {
pub fn open() -> Result<Self, String> {
let dir = dirs::config_dir()
.or_else(|| dirs::home_dir().map(|h| h.join(".config")))
.unwrap_or_else(|| PathBuf::from("."))
.join("MiniCloud Sync");
std::fs::create_dir_all(&dir).ok();
let path = dir.join("journal.db");
let conn = Connection::open(&path).map_err(|e| format!("Journal open: {}", e))?;
conn.execute_batch(
r#"
CREATE TABLE IF NOT EXISTS sync_journal (
sync_path_id TEXT NOT NULL,
relative_path TEXT NOT NULL,
file_id INTEGER,
synced_checksum TEXT NOT NULL DEFAULT '',
synced_size INTEGER NOT NULL DEFAULT 0,
synced_mtime TEXT NOT NULL DEFAULT '',
local_state TEXT NOT NULL DEFAULT 'virtual',
PRIMARY KEY (sync_path_id, relative_path)
);
"#,
).map_err(|e| format!("Journal schema: {}", e))?;
Ok(Self { conn: Mutex::new(conn) })
}
pub fn get(&self, sync_path_id: &str, rel: &str) -> Option<JournalEntry> {
let conn = self.conn.lock().unwrap();
conn.query_row(
"SELECT file_id, synced_checksum, synced_size, synced_mtime, local_state
FROM sync_journal WHERE sync_path_id = ?1 AND relative_path = ?2",
params![sync_path_id, rel],
|row| Ok(JournalEntry {
sync_path_id: sync_path_id.to_string(),
relative_path: rel.to_string(),
file_id: row.get(0)?,
synced_checksum: row.get(1)?,
synced_size: row.get(2)?,
synced_mtime: row.get(3)?,
local_state: row.get(4)?,
}),
).ok()
}
pub fn upsert(&self, e: &JournalEntry) -> Result<(), String> {
let conn = self.conn.lock().unwrap();
conn.execute(
"INSERT INTO sync_journal
(sync_path_id, relative_path, file_id, synced_checksum, synced_size, synced_mtime, local_state)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)
ON CONFLICT(sync_path_id, relative_path) DO UPDATE SET
file_id = excluded.file_id,
synced_checksum = excluded.synced_checksum,
synced_size = excluded.synced_size,
synced_mtime = excluded.synced_mtime,
local_state = excluded.local_state",
params![e.sync_path_id, e.relative_path, e.file_id, e.synced_checksum,
e.synced_size, e.synced_mtime, e.local_state],
).map_err(|e| format!("Journal upsert: {}", e))?;
Ok(())
}
pub fn delete(&self, sync_path_id: &str, rel: &str) -> Result<(), String> {
let conn = self.conn.lock().unwrap();
conn.execute(
"DELETE FROM sync_journal WHERE sync_path_id = ?1 AND relative_path = ?2",
params![sync_path_id, rel],
).map_err(|e| format!("Journal delete: {}", e))?;
Ok(())
}
pub fn list_for_sync(&self, sync_path_id: &str) -> Vec<JournalEntry> {
let conn = self.conn.lock().unwrap();
let mut stmt = match conn.prepare(
"SELECT relative_path, file_id, synced_checksum, synced_size, synced_mtime, local_state
FROM sync_journal WHERE sync_path_id = ?1") {
Ok(s) => s,
Err(_) => return Vec::new(),
};
let rows = stmt.query_map(params![sync_path_id], |row| {
Ok(JournalEntry {
sync_path_id: sync_path_id.to_string(),
relative_path: row.get(0)?,
file_id: row.get(1)?,
synced_checksum: row.get(2)?,
synced_size: row.get(3)?,
synced_mtime: row.get(4)?,
local_state: row.get(5)?,
})
});
match rows {
Ok(it) => it.filter_map(|r| r.ok()).collect(),
Err(_) => Vec::new(),
}
}
}
@@ -1,4 +1,5 @@
pub mod api;
pub mod config;
pub mod engine;
pub mod journal;
pub mod watcher;
@@ -5,6 +5,7 @@ use std::sync::mpsc;
pub struct FileWatcher {
_watcher: RecommendedWatcher,
pub receiver: mpsc::Receiver<FileChange>,
pub path: PathBuf,
}
#[derive(Debug, Clone)]
@@ -53,6 +54,6 @@ impl FileWatcher {
watcher.watch(watch_dir.as_ref(), RecursiveMode::Recursive)
.map_err(|e| format!("Watch-Fehler: {}", e))?;
Ok(Self { _watcher: watcher, receiver: rx })
Ok(Self { _watcher: watcher, receiver: rx, path: watch_dir.clone() })
}
}
@@ -31,6 +31,75 @@ const newPathLocal = ref("");
const newPathServerFolder = ref("");
const newPathServerId = ref(null);
const newPathMode = ref("virtual");
// Cloud-Files (Windows cfapi / Linux FUSE)
const cloudFilesSupported = ref(false);
const cloudFilesActive = ref(false);
const cloudFilesBusy = ref(false);
const cloudFilesMountPoint = ref("");
const cloudFilesError = ref("");
async function checkCloudFilesSupport() {
try { cloudFilesSupported.value = await invoke("cloud_files_supported"); }
catch { cloudFilesSupported.value = false; }
try {
const saved = await invoke("cloud_files_get_mount");
if (saved) cloudFilesMountPoint.value = saved;
} catch { /* no saved mount */ }
}
async function forceCleanupCloudFiles() {
if (!cloudFilesMountPoint.value) return;
if (!confirm(`Sync-Root unter ${cloudFilesMountPoint.value} zwangsweise aufraeumen?\n\nDanach kann der Ordner ggf. geloescht werden.`)) return;
cloudFilesError.value = "";
cloudFilesBusy.value = true;
try {
await invoke("cloud_files_force_cleanup", { mountPoint: cloudFilesMountPoint.value });
cloudFilesActive.value = false;
cloudFilesMountPoint.value = "";
syncLog.value = [`[${ts()}] Cloud-Files Zwangsbereinigung durchgefuehrt`, ...syncLog.value].slice(0, 200);
} catch (err) {
cloudFilesError.value = String(err);
} finally {
cloudFilesBusy.value = false;
}
}
async function browseCfMount() {
try {
const selected = await dialogOpen({ directory: true, multiple: false,
title: "Cloud-Files-Ordner waehlen" });
if (selected) cloudFilesMountPoint.value = selected;
} catch { /* cancelled */ }
}
async function enableCloudFiles() {
cloudFilesError.value = "";
cloudFilesBusy.value = true;
try {
await invoke("cloud_files_enable", { mountPoint: cloudFilesMountPoint.value });
cloudFilesActive.value = true;
syncLog.value = [`[${ts()}] Cloud-Files aktiviert: ${cloudFilesMountPoint.value}`, ...syncLog.value].slice(0, 200);
} catch (err) {
cloudFilesError.value = String(err);
} finally {
cloudFilesBusy.value = false;
}
}
async function disableCloudFiles() {
cloudFilesError.value = "";
cloudFilesBusy.value = true;
try {
await invoke("cloud_files_disable", { mountPoint: cloudFilesMountPoint.value });
cloudFilesActive.value = false;
syncLog.value = [`[${ts()}] Cloud-Files deaktiviert`, ...syncLog.value].slice(0, 200);
} catch (err) {
cloudFilesError.value = String(err);
} finally {
cloudFilesBusy.value = false;
}
}
const serverFolders = ref([]);
// Local file browser
@@ -84,6 +153,47 @@ async function doMarkOffline(file) {
}
}
async function doUnlockFile(file) {
hideContextMenu();
const fileId = file.file_id ?? findFileInTree(fileTree.value, file.name)?.id;
if (!fileId) {
syncLog.value = [`[${ts()}] Fehler: Datei nicht auf Server gefunden`, ...syncLog.value];
return;
}
try {
await invoke("unlock_file_cmd", { fileId });
syncLog.value = [`[${ts()}] Entsperrt: ${file.name}`, ...syncLog.value].slice(0, 200);
} catch (err) {
syncLog.value = [`[${ts()}] Fehler: ${err}`, ...syncLog.value].slice(0, 200);
}
}
async function doLockOnly(file) {
hideContextMenu();
const fileId = file.file_id ?? findFileInTree(fileTree.value, file.name)?.id;
if (!fileId) {
syncLog.value = [`[${ts()}] Fehler: Datei nicht auf Server gefunden`, ...syncLog.value];
return;
}
try {
await invoke("lock_file_cmd", { fileId });
syncLog.value = [`[${ts()}] Ausgecheckt: ${file.name}`, ...syncLog.value].slice(0, 200);
} catch (err) {
syncLog.value = [`[${ts()}] Fehler: ${err}`, ...syncLog.value].slice(0, 200);
}
}
function findFileInTree(entries, name) {
for (const e of entries) {
if (e.name === name) return e;
if (e.children) {
const found = findFileInTree(e.children, name);
if (found) return found;
}
}
return null;
}
async function doUnmarkOffline(file) {
hideContextMenu();
try {
@@ -105,6 +215,16 @@ async function doOpenCloudFile(file) {
}
}
async function doOpenOfflineFile(file) {
hideContextMenu();
try {
await invoke("open_offline_file", { realPath: file.path });
syncLog.value = [`[${ts()}] Ausgecheckt + geoeffnet: ${file.name}`, ...syncLog.value].slice(0, 200);
} catch (err) {
syncLog.value = [`[${ts()}] Fehler: ${err}`, ...syncLog.value].slice(0, 200);
}
}
let unlistenStatus, unlistenLog, unlistenError, unlistenFileChange, unlistenTrigger, unlistenCloudOpen;
async function handleLogin() {
@@ -175,12 +295,21 @@ async function addSyncPath() {
newPathServerId.value = null;
newPathMode.value = "virtual";
await loadSyncPaths();
// Auto-start sync now that we have a path (if not already running)
if (!autoSyncActive.value && syncPaths.value.length > 0) {
await startSync();
}
} catch (err) { alert(err); }
}
async function removeSyncPath(id) {
await invoke("remove_sync_path", { id });
await loadSyncPaths();
// If no paths remain, stop auto-sync
if (syncPaths.value.length === 0) {
autoSyncActive.value = false;
syncStatus.value = "Keine Sync-Pfade konfiguriert";
}
}
async function toggleMode(id) {
@@ -229,6 +358,7 @@ function formatSize(b) {
}
onMounted(async () => {
await checkCloudFilesSupport();
// Try auto-login with saved credentials
try {
const saved = await invoke("load_saved_config");
@@ -248,6 +378,15 @@ onMounted(async () => {
if (syncPaths.value.length > 0) {
await startSync();
}
// Automatically re-enable Cloud-Files if a mount point was saved.
if (cloudFilesSupported.value && cloudFilesMountPoint.value) {
try {
await invoke("cloud_files_enable", { mountPoint: cloudFilesMountPoint.value });
cloudFilesActive.value = true;
} catch (e) {
cloudFilesError.value = `Auto-Reaktivierung fehlgeschlagen: ${e}`;
}
}
} catch (err) {
syncStatus.value = "Auto-Login fehlgeschlagen";
// Show login screen with pre-filled fields
@@ -275,6 +414,12 @@ onMounted(async () => {
fileChanges.value = [`[${ts()}] ${e.payload}`, ...fileChanges.value].slice(0, 50);
});
unlistenTrigger = await listen("trigger-sync", () => syncNow());
// Server push: on every file event, reload the server tree and the local
// list so lock status and new/deleted files show up immediately.
await listen("sse-event", () => {
loadFileTree();
loadLocalFiles(null);
});
unlistenCloudOpen = await listen("open-cloud-file", async (e) => {
const cloudPath = e.payload;
syncLog.value = [`[${ts()}] Oeffne: ${cloudPath}`, ...syncLog.value].slice(0, 200);
@@ -321,8 +466,47 @@ onUnmounted(() => { unlistenStatus?.(); unlistenLog?.(); unlistenError?.(); unli
</div>
<div class="content">
<!-- Sync Paths -->
<!-- Cloud-Files (Windows Cloud Files API, OneDrive-style) -->
<div class="section">
<div class="section-header">
<h3>Cloud-Files (OneDrive-Style)</h3>
<span v-if="cloudFilesActive" class="status-badge syncing">☁ aktiv</span>
<span v-else-if="!cloudFilesSupported" class="status-badge error">nicht verfuegbar</span>
</div>
<p class="hint">
Dateien erscheinen als Platzhalter im Explorer mit Wolken-Icon und
werden erst bei Zugriff geladen. Rechtsklick im Explorer &rarr;
"Immer offline halten" oder "Speicher freigeben".
</p>
<p v-if="!cloudFilesSupported" class="hint" style="color:#c62828">
Auf dieser Plattform noch nicht verfuegbar. Aktuell: Windows 10/11.
Linux-FUSE ist in Vorbereitung, macOS folgt mit Apple-Signatur.
</p>
<template v-else>
<div class="cf-row">
<input v-model="cloudFilesMountPoint" placeholder="Ordner waehlen..." />
<button class="btn-secondary" @click="browseCfMount">Durchsuchen</button>
<button v-if="!cloudFilesActive" class="btn-primary"
:disabled="!cloudFilesMountPoint || cloudFilesBusy"
@click="enableCloudFiles">
{{ cloudFilesBusy ? "Aktiviere..." : "Aktivieren" }}
</button>
<button v-else class="btn-secondary" :disabled="cloudFilesBusy"
@click="disableCloudFiles">Deaktivieren</button>
<button v-if="cloudFilesMountPoint && !cloudFilesActive"
class="btn-secondary" :disabled="cloudFilesBusy"
@click="forceCleanupCloudFiles"
title="Toten Sync-Root nach hartem Beenden des Clients aufraeumen">
Aufraeumen
</button>
</div>
<div v-if="cloudFilesError" class="error" style="margin-top:0.5rem">{{ cloudFilesError }}</div>
</template>
</div>
<!-- Sync Paths (legacy) - hidden on Windows once Cloud-Files is
     active; Cloud-Files fully replaces this view. -->
<div v-if="!cloudFilesActive" class="section">
<div class="section-header">
<h3>Sync-Pfade</h3>
<div class="header-btns">
@@ -388,8 +572,8 @@ onUnmounted(() => { unlistenStatus?.(); unlistenLog?.(); unlistenError?.(); unli
</div>
</div>
<!-- Local File Browser -->
<div v-if="autoSyncActive" class="section" @click="hideContextMenu">
<!-- Local File Browser (Legacy, nur fuer Full-Sync-Modus) -->
<div v-if="autoSyncActive && !cloudFilesActive" class="section" @click="hideContextMenu">
<div class="section-header">
<h3>Lokale Dateien</h3>
<button @click="loadLocalFiles(null)" class="btn-small">↻</button>
@@ -405,12 +589,13 @@ onUnmounted(() => { unlistenStatus?.(); unlistenLog?.(); unlistenError?.(); unli
<div class="local-file-list">
<div v-for="f in localFiles" :key="f.path"
class="local-file-item"
@dblclick="f.is_folder ? openLocalFolder(f) : (f.is_cloud ? doOpenCloudFile(f) : null)"
@dblclick="f.is_folder ? openLocalFolder(f) : (f.is_cloud ? doOpenCloudFile(f) : doOpenOfflineFile(f))"
@contextmenu="showContextMenu($event, f)">
<span class="lf-icon">{{ f.is_folder ? '📁' : (f.is_cloud ? '☁' : '📄') }}</span>
<span class="lf-name">{{ f.name }}</span>
<span v-if="f.is_cloud" class="lf-badge cloud">Cloud</span>
<span v-else-if="f.is_offline" class="lf-badge offline">Offline</span>
<span v-if="f.locked" class="lf-badge locked" :title="'Ausgecheckt von ' + f.locked_by">🔒 {{ f.locked_by }}</span>
<span class="lf-size">{{ formatSize(f.cloud_size || f.size) }}</span>
</div>
<div v-if="!localFiles.length" class="empty">Ordner ist leer</div>
@@ -426,6 +611,15 @@ onUnmounted(() => { unlistenStatus?.(); unlistenLog?.(); unlistenError?.(); unli
<div v-if="contextMenu.file?.is_cloud" class="cm-item" @click="doMarkOffline(contextMenu.file)">
💾 Offline verfuegbar machen
</div>
<div v-if="contextMenu.file?.is_offline" class="cm-item" @click="doOpenOfflineFile(contextMenu.file)">
📂 Oeffnen (auschecken)
</div>
<div v-if="contextMenu.file?.is_offline && !contextMenu.file?.locked" class="cm-item" @click="doLockOnly(contextMenu.file)">
🔒 Auschecken (sperren)
</div>
<div v-if="contextMenu.file?.is_offline && contextMenu.file?.locked" class="cm-item" @click="doUnlockFile(contextMenu.file)">
🔓 Entsperren (einchecken)
</div>
<div v-if="contextMenu.file?.is_offline" class="cm-item" @click="doUnmarkOffline(contextMenu.file)">
☁ Nicht mehr offline (Platzhalter)
</div>
@@ -528,6 +722,8 @@ body{font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,sans-serif;f
.sp-actions{display:flex;align-items:center;gap:.375rem;flex-shrink:0}
.sp-mode{font-size:.75rem;padding:.2rem .4rem;border-radius:4px;cursor:pointer;background:#f0f0f0}
.sp-mode.Full{background:#e3f2fd;color:#1565c0}.sp-mode.Virtual{background:#f3e5f5;color:#7b1fa2}
.cf-row{display:flex;gap:.5rem;align-items:center;flex-wrap:wrap}
.cf-row input{flex:1;min-width:300px}
.file-tree{max-height:250px;overflow-y:auto}
.tree-item{display:flex;align-items:center;gap:.5rem;padding:.3rem 0;border-bottom:1px solid #f5f5f5;font-size:.85rem}
.tree-item.indent{padding-left:1.5rem}.tree-icon{flex-shrink:0}.tree-name{flex:1;overflow:hidden;text-overflow:ellipsis;white-space:nowrap}
@@ -546,6 +742,7 @@ body{font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,sans-serif;f
.lf-badge{font-size:.65rem;padding:.1rem .3rem;border-radius:3px;flex-shrink:0}
.lf-badge.cloud{background:#e3f2fd;color:#1565c0}
.lf-badge.offline{background:#e8f5e9;color:#2e7d32}
.lf-badge.locked{background:#fff3e0;color:#e65100}
.lf-size{font-size:.75rem;color:#999;flex-shrink:0}
.checkbox-row{display:flex;align-items:center;gap:.5rem;font-size:.85rem;cursor:pointer}
.context-menu{position:fixed;background:#fff;border:1px solid #ddd;border-radius:6px;box-shadow:0 4px 12px rgba(0,0,0,.15);z-index:9999;min-width:200px;padding:.25rem 0}
+1 -1
@@ -4,7 +4,7 @@
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/favicon.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>frontend</title>
<title>Mini-Cloud</title>
</head>
<body>
<div id="app"></div>
+86 -3
@@ -8,11 +8,18 @@
"name": "frontend",
"version": "0.0.0",
"dependencies": {
"@fullcalendar/core": "^6.1.15",
"@fullcalendar/daygrid": "^6.1.15",
"@fullcalendar/interaction": "^6.1.15",
"@fullcalendar/rrule": "^6.1.15",
"@fullcalendar/timegrid": "^6.1.15",
"@fullcalendar/vue3": "^6.1.15",
"@primevue/themes": "^4.5.4",
"axios": "^1.15.0",
"pinia": "^3.0.4",
"primeicons": "^7.0.0",
"primevue": "^4.5.5",
"rrule": "^2.8.1",
"vue": "^3.5.32",
"vue-router": "^4.6.4"
},
@@ -101,6 +108,65 @@
"tslib": "^2.4.0"
}
},
"node_modules/@fullcalendar/core": {
"version": "6.1.20",
"resolved": "https://registry.npmjs.org/@fullcalendar/core/-/core-6.1.20.tgz",
"integrity": "sha512-1cukXLlePFiJ8YKXn/4tMKsy0etxYLCkXk8nUCFi11nRONF2Ba2CD5b21/ovtOO2tL6afTJfwmc1ed3HG7eB1g==",
"license": "MIT",
"dependencies": {
"preact": "~10.12.1"
}
},
"node_modules/@fullcalendar/daygrid": {
"version": "6.1.20",
"resolved": "https://registry.npmjs.org/@fullcalendar/daygrid/-/daygrid-6.1.20.tgz",
"integrity": "sha512-AO9vqhkLP77EesmJzuU+IGXgxNulsA8mgQHynclJ8U70vSwAVnbcLG9qftiTAFSlZjiY/NvhE7sflve6cJelyQ==",
"license": "MIT",
"peerDependencies": {
"@fullcalendar/core": "~6.1.20"
}
},
"node_modules/@fullcalendar/interaction": {
"version": "6.1.20",
"resolved": "https://registry.npmjs.org/@fullcalendar/interaction/-/interaction-6.1.20.tgz",
"integrity": "sha512-p6txmc5txL0bMiPaJxe2ip6o0T384TyoD2KGdsU6UjZ5yoBlaY+dg7kxfnYKpYMzEJLG58n+URrHr2PgNL2fyA==",
"license": "MIT",
"peerDependencies": {
"@fullcalendar/core": "~6.1.20"
}
},
"node_modules/@fullcalendar/rrule": {
"version": "6.1.20",
"resolved": "https://registry.npmjs.org/@fullcalendar/rrule/-/rrule-6.1.20.tgz",
"integrity": "sha512-5Awk7bmaA97hSZRpIBehenXkYreVIvx8nnaMFZ/LDGRuK1mgbR4vSUrDTvVU+oEqqKnj/rqMBByWqN5NeehQxw==",
"license": "MIT",
"peerDependencies": {
"@fullcalendar/core": "~6.1.20",
"rrule": "^2.6.0"
}
},
"node_modules/@fullcalendar/timegrid": {
"version": "6.1.20",
"resolved": "https://registry.npmjs.org/@fullcalendar/timegrid/-/timegrid-6.1.20.tgz",
"integrity": "sha512-4H+/MWbz3ntA50lrPif+7TsvMeX3R1GSYjiLULz0+zEJ7/Yfd9pupZmAwUs/PBpA6aAcFmeRr0laWfcz1a9V1A==",
"license": "MIT",
"dependencies": {
"@fullcalendar/daygrid": "~6.1.20"
},
"peerDependencies": {
"@fullcalendar/core": "~6.1.20"
}
},
"node_modules/@fullcalendar/vue3": {
"version": "6.1.20",
"resolved": "https://registry.npmjs.org/@fullcalendar/vue3/-/vue3-6.1.20.tgz",
"integrity": "sha512-8qg6pS27II9QBwFkkJC+7SfflMpWqOe7i3ii5ODq9KpLAjwQAd/zjfq8RvKR1Yryoh5UmMCmvRbMB7i4RGtqog==",
"license": "MIT",
"peerDependencies": {
"@fullcalendar/core": "~6.1.20",
"vue": "^3.0.11"
}
},
"node_modules/@jridgewell/sourcemap-codec": {
"version": "1.5.5",
"resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz",
@@ -1393,6 +1459,16 @@
"node": "^10 || ^12 || >=14"
}
},
"node_modules/preact": {
"version": "10.12.1",
"resolved": "https://registry.npmjs.org/preact/-/preact-10.12.1.tgz",
"integrity": "sha512-l8386ixSsBdbreOAkqtrwqHwdvR35ID8c3rKPa8lCWuO86dBi32QWHV4vfsZK1utLLFMvw+Z5Ad4XLkZzchscg==",
"license": "MIT",
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/preact"
}
},
"node_modules/primeicons": {
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/primeicons/-/primeicons-7.0.0.tgz",
@@ -1471,6 +1547,15 @@
"dev": true,
"license": "MIT"
},
"node_modules/rrule": {
"version": "2.8.1",
"resolved": "https://registry.npmjs.org/rrule/-/rrule-2.8.1.tgz",
"integrity": "sha512-hM3dHSBMeaJ0Ktp7W38BJZ7O1zOgaFEsn41PDk+yHoEtfLV+PoJt9E9xAlZiWgf/iqEqionN0ebHFZIDAp+iGw==",
"license": "BSD-3-Clause",
"dependencies": {
"tslib": "^2.4.0"
}
},
"node_modules/source-map-js": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.1.tgz",
@@ -1522,9 +1607,7 @@
"version": "2.8.1",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
"dev": true,
"license": "0BSD",
"optional": true
"license": "0BSD"
},
"node_modules/vite": {
"version": "8.0.8",
+7
@@ -9,11 +9,18 @@
"preview": "vite preview"
},
"dependencies": {
"@fullcalendar/core": "^6.1.15",
"@fullcalendar/daygrid": "^6.1.15",
"@fullcalendar/interaction": "^6.1.15",
"@fullcalendar/rrule": "^6.1.15",
"@fullcalendar/timegrid": "^6.1.15",
"@fullcalendar/vue3": "^6.1.15",
"@primevue/themes": "^4.5.4",
"axios": "^1.15.0",
"pinia": "^3.0.4",
"primeicons": "^7.0.0",
"primevue": "^4.5.5",
"rrule": "^2.8.1",
"vue": "^3.5.32",
"vue-router": "^4.6.4"
},
File diff suppressed because one or more lines are too long

Binary image changed (favicon): 9.3 KiB before, 337 B after.
+13
@@ -1,3 +1,16 @@
<template>
<router-view />
</template>
<script setup>
import { watchEffect } from 'vue'
import { useAuthStore } from './stores/auth'
const auth = useAuthStore()
watchEffect(() => {
document.title = auth.user?.username
? `Mini-Cloud - ${auth.user.username}`
: 'Mini-Cloud'
})
</script>
+5
@@ -48,6 +48,11 @@ const routes = [
name: 'Contacts',
component: () => import('../views/ContactsView.vue'),
},
{
path: 'tasks',
name: 'Tasks',
component: () => import('../views/TasksView.vue'),
},
{
path: 'email',
name: 'Email',
+7
@@ -17,6 +17,13 @@ export const useFilesStore = defineStore('files', () => {
const response = await apiClient.get('/files', { params })
files.value = response.data.files
breadcrumb.value = response.data.breadcrumb
} catch (err) {
// Let the caller handle access/deletion errors - just clear the list
if (err.response && (err.response.status === 403 || err.response.status === 404)) {
files.value = []
breadcrumb.value = []
}
throw err
} finally {
loading.value = false
}
+45
@@ -37,6 +37,28 @@
</div>
</div>
<!-- System-Info: Zeitzone & NTP (read-only) -->
<div class="admin-section">
<h3>System-Zeit</h3>
<p class="hint">Wird in der <code>.env</code> festgelegt (Keys <code>TZ</code> und <code>NTP_SERVER</code>).
Aenderungen erfordern einen Neustart des Backends.</p>
<div class="sysinfo">
<div class="sysinfo-row">
<span class="sysinfo-label">Zeitzone:</span>
<code>{{ settings.timezone || '—' }}</code>
<span v-if="settings.timezone_abbr" class="sysinfo-extra">({{ settings.timezone_abbr }})</span>
</div>
<div class="sysinfo-row">
<span class="sysinfo-label">Aktuelle Server-Zeit:</span>
<code>{{ formatServerTime(settings.server_time) }}</code>
</div>
<div class="sysinfo-row">
<span class="sysinfo-label">NTP-Server:</span>
<code>{{ settings.ntp_server || '(deaktiviert)' }}</code>
</div>
</div>
</div>
<!-- System Email -->
<div class="admin-section">
<h3>System-E-Mail (SMTP)</h3>
@@ -551,6 +573,17 @@ const smtpForm = ref({
const smtpPasswordSet = ref(false)
const onlyofficeConfigured = ref(false)
const onlyofficeUrl = ref('')
const settings = ref({ timezone: '', timezone_abbr: '', server_time: '', ntp_server: '' })
function formatServerTime(iso) {
if (!iso) return ''
try {
return new Date(iso).toLocaleString('de-DE', {
day: '2-digit', month: '2-digit', year: 'numeric',
hour: '2-digit', minute: '2-digit', second: '2-digit',
})
} catch { return iso }
}
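The helper above formats the server timestamp with the browser's locale data and local timezone, so its output varies by host. A standalone sketch of the same formatting, with an explicit `timeZone` pinned for reproducibility (the `'UTC'` option is an addition for this sketch; the component uses the browser's local zone):

```javascript
// Same shape as formatServerTime above, but with timeZone pinned so the
// output does not depend on the machine running it.
function formatDE(iso) {
  if (!iso) return ''
  try {
    return new Date(iso).toLocaleString('de-DE', {
      day: '2-digit', month: '2-digit', year: 'numeric',
      hour: '2-digit', minute: '2-digit', second: '2-digit',
      timeZone: 'UTC', // sketch-only addition for reproducibility
    })
  } catch { return iso }
}

console.log(formatDE('2026-04-23T21:09:28Z')) // e.g. "23.04.2026, 21:09:28"
console.log(formatDE(''))                     // ""
```

Note that an unparseable string does not throw here: `new Date('garbage').toLocaleString(...)` returns `"Invalid Date"` rather than raising, so the `catch` mainly guards against unexpected runtime errors.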
const smtpTesting = ref(false)
// Backup & Restore
@@ -660,6 +693,12 @@ async function loadSettings() {
smtpPasswordSet.value = res.data.system_smtp_password_set
onlyofficeConfigured.value = res.data.onlyoffice_configured
onlyofficeUrl.value = res.data.onlyoffice_url || ''
settings.value = {
timezone: res.data.timezone || '',
timezone_abbr: res.data.timezone_abbr || '',
server_time: res.data.server_time || '',
ntp_server: res.data.ntp_server || '',
}
} catch { /* first load, defaults */ }
}
@@ -1216,6 +1255,12 @@ onMounted(() => {
.field-row { display: flex; gap: 0.75rem; align-items: flex-end; }
.flex-grow { flex: 1; }
.hint { font-size: 0.85rem; color: var(--p-text-muted-color); margin: 0 0 0.75rem; }
.hint code { background: var(--p-surface-100); padding: 0.05rem 0.35rem; border-radius: 3px; font-size: 0.8rem; }
.sysinfo { display: flex; flex-direction: column; gap: 0.4rem; font-size: 0.875rem; }
.sysinfo-row { display: flex; gap: 0.5rem; align-items: center; flex-wrap: wrap; }
.sysinfo-label { min-width: 180px; color: var(--p-text-muted-color); }
.sysinfo code { background: var(--p-surface-100); padding: 0.15rem 0.5rem; border-radius: 4px; }
.sysinfo-extra { color: var(--p-text-muted-color); font-size: 0.8rem; }
.invite-section { margin-top: 1.5rem; padding-top: 1rem; border-top: 1px solid var(--p-surface-200); }
.invite-section h4 { margin: 0 0 0.25rem; font-size: 0.95rem; }
.invite-row { display: flex; gap: 0.5rem; align-items: flex-start; }
+5
@@ -22,6 +22,11 @@
<span>Kontakte</span>
</router-link>
<router-link to="/tasks" class="nav-item" active-class="active">
<i class="pi pi-check-square"></i>
<span>Aufgaben</span>
</router-link>
<router-link
v-if="auth.hasEmailAccounts"
to="/email"
File diff suppressed because it is too large
File diff suppressed because it is too large
+206 -14
@@ -22,7 +22,11 @@
<div class="header-actions">
<Button icon="pi pi-folder-plus" label="Neuer Ordner" size="small" outlined @click="showNewFolder = true" />
<Button icon="pi pi-upload" label="Dateien" size="small" @click="triggerUpload" />
<Button icon="pi pi-folder" label="Ordner" size="small" outlined @click="triggerFolderUpload" />
<Button size="small" outlined @click="triggerFolderUpload">
<i class="pi pi-upload" style="margin-right:0.35rem"></i>
<i class="pi pi-folder" style="margin-right:0.5rem"></i>
Ordner
</Button>
<input ref="fileInput" type="file" multiple hidden @change="handleUpload" />
<input ref="folderInput" type="file" hidden webkitdirectory @change="handleFolderUpload" />
</div>
@@ -94,6 +98,7 @@
@click.stop="downloadFile(data)"
/>
<Button
v-if="canShare(data)"
:icon="(data.has_shares || data.has_permissions) ? 'pi pi-users' : 'pi pi-share-alt'"
text rounded size="small"
:severity="(data.has_shares || data.has_permissions) ? 'success' : undefined"
@@ -101,14 +106,33 @@
@click.stop="openShare(data)"
/>
<Button
v-if="!data.is_folder && !data.locked"
icon="pi pi-lock-open"
text rounded size="small"
title="Auschecken (sperren)"
@click.stop="lockFile(data)"
/>
<Button
v-if="!data.is_folder && data.locked && (data.locked_by === auth.user?.username || auth.user?.role === 'admin')"
icon="pi pi-lock"
text rounded size="small"
severity="warn"
:title="data.locked_by === auth.user?.username ? 'Einchecken (entsperren)' : 'Lock zwangsweise entfernen (Admin)'"
@click.stop="unlockFile(data)"
/>
<Button
v-if="canWrite(data)"
icon="pi pi-pencil"
text rounded size="small"
:disabled="data.locked && data.locked_by !== auth.user?.username"
@click.stop="openRename(data)"
/>
<Button
v-if="canWrite(data)"
icon="pi pi-trash"
text rounded size="small"
severity="danger"
:disabled="data.locked && data.locked_by !== auth.user?.username"
@click.stop="confirmDelete(data)"
/>
</div>
@@ -150,9 +174,15 @@
<h5>Mit Benutzer teilen</h5>
<div class="user-share-row">
<InputText v-model="shareUserQuery" placeholder="Benutzername suchen..." fluid @input="searchUsers" />
<Select v-model="shareUserPermission" :options="userPermOptions" optionLabel="label" optionValue="value" />
<Select v-model="shareUserPermission" :options="availableUserPermOptions" optionLabel="label" optionValue="value" />
<label class="reshare-check">
<input type="checkbox" v-model="shareUserReshare" /> darf weiterteilen
</label>
<Button label="Teilen" size="small" @click="shareWithUser" :disabled="!selectedShareUser" />
</div>
<div v-if="!isOwner(shareFile) && shareFile" class="share-hint">
Du hast {{ myPermLabel(shareFile) }} - du kannst maximal {{ myPermLabel(shareFile) }} weiterteilen.
</div>
<div v-if="userSearchResults.length" class="user-search-results">
<div v-for="u in userSearchResults" :key="u.id"
class="user-result" :class="{ selected: selectedShareUser?.id === u.id }"
@@ -161,12 +191,26 @@
</div>
</div>
<div v-if="filePermissions.length" class="existing-shares">
<div v-for="perm in filePermissions" :key="perm.id" class="share-perm-item">
<i class="pi pi-user"></i>
<span>{{ perm.username }}</span>
<Tag :value="permLabel(perm.permission)" size="small" />
<Button icon="pi pi-trash" text size="small" severity="danger" @click="removeUserShare(perm.id)" />
</div>
<template v-for="perm in filePermissions" :key="perm.id">
<div v-if="editingPermId !== perm.id" class="share-perm-item">
<i class="pi pi-user"></i>
<span>{{ perm.username }}</span>
<Tag :value="permLabel(perm.permission)" size="small" />
<Tag v-if="perm.can_reshare" value="darf weiterteilen" severity="info" size="small" />
<Button icon="pi pi-pencil" text size="small" @click="startEditPerm(perm)" title="Bearbeiten" />
<Button icon="pi pi-trash" text size="small" severity="danger" @click="removeUserShare(perm.id)" title="Entfernen" />
</div>
<div v-else class="share-perm-item editing">
<i class="pi pi-user"></i>
<span>{{ perm.username }}</span>
<Select v-model="editPermValue" :options="availableUserPermOptions" optionLabel="label" optionValue="value" />
<label class="reshare-check">
<input type="checkbox" v-model="editPermReshare" /> darf weiterteilen
</label>
<Button icon="pi pi-check" text size="small" severity="success" @click="saveEditPerm(perm)" title="Speichern" />
<Button icon="pi pi-times" text size="small" @click="cancelEditPerm" title="Abbrechen" />
</div>
</template>
</div>
</div>
@@ -176,7 +220,7 @@
<div class="share-form">
<div class="field">
<label>Berechtigung</label>
<Select v-model="shareLinkPermission" :options="linkPermOptions" optionLabel="label" optionValue="value" fluid />
<Select v-model="shareLinkPermission" :options="availableLinkPermOptions" optionLabel="label" optionValue="value" fluid />
</div>
<div class="field">
<label>Passwort (optional)</label>
@@ -224,7 +268,7 @@
</template>
<script setup>
import { ref, watch, onMounted } from 'vue'
import { ref, computed, watch, onMounted, onUnmounted } from 'vue'
import { useRoute, useRouter } from 'vue-router'
import { useAuthStore } from '../stores/auth'
import { useFilesStore } from '../stores/files'
@@ -267,6 +311,10 @@ const filePermissions = ref([])
const shareUserQuery = ref('')
const selectedShareUser = ref(null)
const shareUserPermission = ref('read')
const shareUserReshare = ref(false)
const editingPermId = ref(null)
const editPermValue = ref('read')
const editPermReshare = ref(false)
const userSearchResults = ref([])
const userPermOptions = [{ label: 'Lesen', value: 'read' }, { label: 'Schreiben', value: 'write' }, { label: 'Admin', value: 'admin' }]
const linkPermOptions = [
@@ -274,6 +322,12 @@ const linkPermOptions = [
{ label: 'Lesen + Hochladen (nur Ordner)', value: 'write' },
{ label: 'Nur Upload (Ordner, kein Einblick)', value: 'upload_only' },
]
const availableLinkPermOptions = computed(() => {
const f = shareFile.value
if (!f || isOwner(f)) return linkPermOptions
if (f.my_permission === 'read') return linkPermOptions.filter(o => o.value === 'read')
return linkPermOptions
})
const shareLinkPermission = ref('read')
const currentOrigin = window.location.origin
const shareLoading = ref(false)
@@ -553,6 +607,37 @@ function permLabel(perm) {
return { read: 'Lesen', write: 'Schreiben', admin: 'Admin' }[perm] || perm
}
function isOwner(data) {
return data && data.owner_id === auth.user?.id
}
function canWrite(data) {
if (!data) return false
if (isOwner(data)) return true
return data.my_permission === 'write' || data.my_permission === 'admin'
}
function canShare(data) {
if (!data) return false
if (isOwner(data)) return true
return !!data.my_can_reshare
}
function myPermLabel(data) {
if (!data || !data.my_permission) return ''
return permLabel(data.my_permission)
}
// Option list for the "Mit Benutzer teilen" dropdown - re-sharers can only
// hand out permissions up to their own level. Admin is owner-only.
const availableUserPermOptions = computed(() => {
const f = shareFile.value
const levels = { read: 0, write: 1, admin: 2 }
if (!f || isOwner(f)) return userPermOptions
const myLevel = levels[f.my_permission] ?? -1
return userPermOptions.filter(o => levels[o.value] <= myLevel && o.value !== 'admin')
})
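The capping rule in the comment above (re-sharers may only hand out permissions up to their own level, and `admin` stays owner-only) can be exercised in isolation. The option list and level map mirror the component's; `availableOptions` is a pure, hypothetical stand-in for the `availableUserPermOptions` computed:

```javascript
// Mirrors the component's option list and level ranking.
const userPermOptions = [
  { label: 'Lesen', value: 'read' },
  { label: 'Schreiben', value: 'write' },
  { label: 'Admin', value: 'admin' },
]
const levels = { read: 0, write: 1, admin: 2 }

// Owners see every option; re-sharers only options at or below their
// own level, with 'admin' always excluded.
function availableOptions(file, isOwner) {
  if (!file || isOwner) return userPermOptions
  const myLevel = levels[file.my_permission] ?? -1
  return userPermOptions.filter(o => levels[o.value] <= myLevel && o.value !== 'admin')
}

const asValues = opts => opts.map(o => o.value)
console.log(asValues(availableOptions({ my_permission: 'write' }, false))) // ['read', 'write']
console.log(asValues(availableOptions({ my_permission: 'read' }, false)))  // ['read']
console.log(asValues(availableOptions(null, true)))                        // all three
```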
async function openShare(data) {
shareFile.value = data
sharePassword.value = ''
@@ -594,10 +679,12 @@ async function shareWithUser() {
await apiClient.post(`/files/${shareFile.value.id}/permissions`, {
user_id: selectedShareUser.value.id,
permission: shareUserPermission.value,
can_reshare: shareUserReshare.value,
})
toast.add({ severity: 'success', summary: `Mit ${selectedShareUser.value.username} geteilt`, life: 3000 })
shareUserQuery.value = ''
selectedShareUser.value = null
shareUserReshare.value = false
const res = await apiClient.get(`/files/${shareFile.value.id}/permissions`)
filePermissions.value = res.data
await filesStore.loadFiles(currentParentId())
@@ -617,6 +704,34 @@ async function removeUserShare(permId) {
}
}
function startEditPerm(perm) {
editingPermId.value = perm.id
editPermValue.value = perm.permission
editPermReshare.value = !!perm.can_reshare
}
function cancelEditPerm() {
editingPermId.value = null
}
async function saveEditPerm(perm) {
if (!shareFile.value) return
try {
await apiClient.post(`/files/${shareFile.value.id}/permissions`, {
user_id: perm.user_id,
permission: editPermValue.value,
can_reshare: editPermReshare.value,
})
const res = await apiClient.get(`/files/${shareFile.value.id}/permissions`)
filePermissions.value = res.data
editingPermId.value = null
toast.add({ severity: 'success', summary: 'Berechtigung aktualisiert', life: 2500 })
await filesStore.loadFiles(currentParentId())
} catch (err) {
toast.add({ severity: 'error', summary: 'Fehler', detail: err.response?.data?.error || err.message, life: 5000 })
}
}
async function createShare() {
console.log('createShare called, shareFile:', shareFile.value?.id, 'permission:', shareLinkPermission.value)
if (!shareFile.value) {
@@ -659,6 +774,28 @@ async function removeShare(token) {
}
}
async function lockFile(data) {
try {
await apiClient.post(`/files/${data.id}/lock`, { client_info: 'Web-GUI' })
toast.add({ severity: 'success', summary: 'Ausgecheckt', detail: `${data.name} ist jetzt fuer dich gesperrt.`, life: 3000 })
await filesStore.loadFiles(currentParentId())
} catch (err) {
toast.add({ severity: 'error', summary: 'Sperren fehlgeschlagen', detail: err.response?.data?.error || err.message, life: 5000 })
}
}
async function unlockFile(data) {
const isAdminOverride = data.locked_by !== auth.user?.username
if (isAdminOverride && !confirm(`Den Lock von ${data.locked_by} zwangsweise entfernen?`)) return
try {
await apiClient.post(`/files/${data.id}/unlock`)
toast.add({ severity: 'success', summary: 'Eingecheckt', detail: `${data.name} ist wieder frei.`, life: 3000 })
await filesStore.loadFiles(currentParentId())
} catch (err) {
toast.add({ severity: 'error', summary: 'Entsperren fehlgeschlagen', detail: err.response?.data?.error || err.message, life: 5000 })
}
}
function confirmDelete(data) {
deleteTarget.value = data
showDeleteConfirm.value = true
@@ -675,12 +812,64 @@ async function doDelete() {
}
}
async function safeLoadCurrentFolder() {
try {
await filesStore.loadFiles(currentParentId())
} catch (err) {
const status = err.response?.status
if (status === 403 || status === 404) {
toast.add({
severity: 'warn',
summary: 'Kein Zugriff',
detail: 'Dieser Ordner wurde geloescht oder die Freigabe wurde entfernt.',
life: 5000,
})
// Redirect to root after short delay so user sees the toast
setTimeout(() => router.push('/files'), 600)
}
}
}
watch(() => route.params.folderId, () => {
filesStore.loadFiles(currentParentId())
safeLoadCurrentFolder()
})
// Live updates: subscribe to server-sent events so that lock changes /
// uploads / deletions by other users or clients refresh the current
// folder automatically.
let eventSource = null
let reloadDebounce = null
function scheduleReload() {
if (reloadDebounce) return
reloadDebounce = setTimeout(() => {
reloadDebounce = null
safeLoadCurrentFolder()
}, 300)
}
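The scheduler above coalesces a burst of SSE events into a single folder reload: the first call arms the timer, every further call inside the window is a no-op, and the timer fires once. A factory-wrapped sketch of the same pattern (the factory is an addition for testability; the component uses module-level state):

```javascript
// Coalescing reload scheduler, same shape as scheduleReload above.
function makeScheduler(reload, delayMs = 300) {
  let pending = null
  return function schedule() {
    if (pending) return            // a reload is already queued
    pending = setTimeout(() => {
      pending = null               // window closed, next call re-arms
      reload()
    }, delayMs)
  }
}

let reloads = 0
const schedule = makeScheduler(() => { reloads++ }, 50)
schedule(); schedule(); schedule()          // burst of 3 events
setTimeout(() => console.log(reloads), 120) // prints 1: burst coalesced
```

This is a leading-window coalescer rather than a classic trailing debounce: events arriving inside the window do not push the reload further out, so a steady event stream still reloads at most once per window.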
onMounted(() => {
filesStore.loadFiles(currentParentId())
safeLoadCurrentFolder()
if (auth.accessToken) {
const url = `/api/sync/events?token=${encodeURIComponent(auth.accessToken)}`
try {
eventSource = new EventSource(url)
const handler = () => scheduleReload()
// The server sends named events (event: file) as well as plain
// messages. onmessage alone would miss the named variants, so
// listeners are registered for both event types explicitly.
eventSource.addEventListener('file', handler)
eventSource.addEventListener('message', handler)
eventSource.addEventListener('open', () => scheduleReload())
eventSource.onerror = () => { /* browser auto-reconnects */ }
} catch { /* SSE not available - fall back to manual refresh */ }
}
})
onUnmounted(() => {
if (reloadDebounce) { clearTimeout(reloadDebounce); reloadDebounce = null }
if (eventSource) { eventSource.close(); eventSource = null }
})
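The dual registration above matters because of how `EventSource` dispatches: a frame carrying `event: file` only reaches listeners added via `addEventListener('file', ...)`, never the generic `message` handler. A minimal stand-in class (hypothetical, for illustration only) makes the routing visible:

```javascript
// Minimal stand-in for EventSource dispatch: each event type only
// reaches listeners registered for exactly that type.
class FakeEventSource {
  constructor() { this.listeners = {} }
  addEventListener(type, fn) { (this.listeners[type] ??= []).push(fn) }
  emit(type, data) { for (const fn of this.listeners[type] ?? []) fn({ type, data }) }
}

const seen = []
const es = new FakeEventSource()
const handler = ev => seen.push(ev.type)
es.addEventListener('file', handler)     // typed events from the server
es.addEventListener('message', handler)  // unnamed events

es.emit('file', '{"id":1}')   // named event, caught by the 'file' listener
es.emit('message', 'ping')    // unnamed event, caught by 'message'
console.log(seen)             // ['file', 'message']
```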
</script>
@@ -738,12 +927,15 @@ onMounted(() => {
.share-section:last-child { border-bottom: none; }
.share-section h5 { margin: 0 0 0.75rem; font-size: 0.9rem; }
.share-form { }
.user-share-row { display: flex; gap: 0.5rem; align-items: flex-start; }
.user-share-row { display: flex; gap: 0.5rem; align-items: center; flex-wrap: wrap; }
.reshare-check { display: flex; align-items: center; gap: 0.25rem; font-size: 0.8rem; white-space: nowrap; }
.share-hint { font-size: 0.75rem; color: var(--p-surface-500); margin-top: 0.35rem; font-style: italic; }
.user-search-results { border: 1px solid var(--p-surface-200); border-radius: 6px; margin-top: 0.25rem; max-height: 150px; overflow-y: auto; }
.user-result { padding: 0.5rem 0.75rem; cursor: pointer; display: flex; align-items: center; gap: 0.5rem; font-size: 0.875rem; }
.user-result:hover, .user-result.selected { background: var(--p-primary-50); }
.existing-shares { margin-top: 0.5rem; }
.share-perm-item { display: flex; align-items: center; gap: 0.5rem; padding: 0.375rem 0; font-size: 0.875rem; }
.share-perm-item { display: flex; align-items: center; gap: 0.5rem; padding: 0.375rem 0; font-size: 0.875rem; flex-wrap: wrap; }
.share-perm-item.editing { background: var(--p-surface-50); padding: 0.5rem; border-radius: 4px; }
.share-link-item {
display: flex; justify-content: space-between; align-items: center;
padding: 0.5rem 0; border-bottom: 1px solid var(--p-surface-100);
+58 -1
@@ -29,9 +29,22 @@
<InputText v-model="searchQuery" placeholder="Passwoerter suchen..." fluid />
</div>
<div v-if="filteredEntries.length" class="selection-bar">
<Checkbox v-model="allSelected" :binary="true" @change="toggleSelectAll" inputId="select-all" />
<label for="select-all" class="select-all-label">
Alle auswaehlen
<span v-if="selectedIds.length" class="selected-count">({{ selectedIds.length }} ausgewaehlt)</span>
</label>
<Button v-if="selectedIds.length" icon="pi pi-trash" :label="`${selectedIds.length} loeschen`"
severity="danger" size="small" @click="deleteSelected" />
</div>
<div class="entries-list">
<div v-for="entry in filteredEntries" :key="entry.id"
class="entry-item" @click="openEntry(entry)">
class="entry-item" :class="{ selected: selectedIds.includes(entry.id) }"
@click="openEntry(entry)">
<Checkbox :modelValue="selectedIds.includes(entry.id)" :binary="true"
@click.stop @update:modelValue="toggleSelect(entry.id)" />
<div class="entry-icon">
<i class="pi pi-key"></i>
</div>
@@ -166,6 +179,7 @@ import InputText from 'primevue/inputtext'
import Password from 'primevue/password'
import Textarea from 'primevue/textarea'
import Select from 'primevue/select'
import Checkbox from 'primevue/checkbox'
const toast = useToast()
const auth = useAuthStore()
@@ -200,6 +214,45 @@ const importAccept = computed(() => {
const showTotpDialog = ref(false)
const totpCode = ref('')
const selectedIds = ref([])
const allSelected = computed({
get: () => filteredEntries.value.length > 0 && filteredEntries.value.every(e => selectedIds.value.includes(e.id)),
set: () => {},
})
function toggleSelectAll() {
const visibleIds = filteredEntries.value.map(e => e.id)
const allSel = visibleIds.every(id => selectedIds.value.includes(id))
if (allSel) {
selectedIds.value = selectedIds.value.filter(id => !visibleIds.includes(id))
} else {
const set = new Set([...selectedIds.value, ...visibleIds])
selectedIds.value = [...set]
}
}
function toggleSelect(id) {
const i = selectedIds.value.indexOf(id)
if (i >= 0) selectedIds.value.splice(i, 1)
else selectedIds.value.push(id)
}
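The select-all toggle above is an "invert the visible set" operation: if every visible entry is already selected it deselects only those, otherwise it unions the visible ids into the selection, leaving entries filtered out of view untouched. A pure sketch of the same logic (`toggleAll` is a hypothetical name for this illustration):

```javascript
// Pure version of toggleSelectAll: operates on plain arrays instead of refs.
function toggleAll(selected, visibleIds) {
  const allSel = visibleIds.every(id => selected.includes(id))
  if (allSel) {
    // All visible entries selected: deselect the visible ones only.
    return selected.filter(id => !visibleIds.includes(id))
  }
  // Otherwise union the visible ids in, deduplicating via Set.
  return [...new Set([...selected, ...visibleIds])]
}

console.log(toggleAll([1], [1, 2, 3]))    // [1, 2, 3] (select the rest)
console.log(toggleAll([1, 2, 3], [1, 2])) // [3] (hidden selection survives)
```

Keeping hidden (search-filtered) selections intact is the reason for the filter/union split; a naive "replace selection with visible ids" would silently drop entries the user selected before narrowing the search.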
async function deleteSelected() {
const n = selectedIds.value.length
if (!n) return
if (!window.confirm(`${n} Eintrag/Eintraege wirklich loeschen?`)) return
let ok = 0
for (const id of [...selectedIds.value]) {
try {
await apiClient.delete(`/passwords/entries/${id}`)
ok++
} catch { /* skip */ }
}
selectedIds.value = []
toast.add({ severity: 'success', summary: `${ok} Eintrag/Eintraege geloescht`, life: 3000 })
await loadEntries()
}
const folderOptions = computed(() => [{ id: null, name: '(Kein Ordner)' }, ...folders.value])
const filteredEntries = computed(() => {
if (!searchQuery.value) return entries.value
@@ -491,6 +544,10 @@ onMounted(async () => {
.shared-label { color: var(--p-text-muted-color); font-size: 0.75rem; }
.entries-main { flex: 1; }
.search-bar { margin-bottom: 1rem; }
.selection-bar { display: flex; align-items: center; gap: 0.75rem; padding: 0.5rem 0.75rem; margin-bottom: 0.5rem; background: var(--p-surface-50); border-radius: 6px; }
.select-all-label { font-size: 0.875rem; cursor: pointer; flex: 1; }
.selected-count { color: var(--p-text-muted-color); margin-left: 0.5rem; }
.entry-item.selected { background: var(--p-primary-50); }
.entries-list { display: flex; flex-direction: column; gap: 2px; }
.entry-item { display: flex; align-items: center; gap: 0.75rem; padding: 0.75rem; background: var(--p-surface-0); border-radius: 6px; cursor: pointer; }
.entry-item:hover { background: var(--p-surface-100); }
+1 -1
@@ -157,7 +157,7 @@ async function loadPreview() {
previewType.value = data.type
if (data.type === 'pdf' || data.type === 'image') {
previewUrl.value = getTokenUrl(`/api/files/${fileId}/download`)
previewUrl.value = getTokenUrl(`/api/files/${fileId}/download?inline=1`)
canEdit.value = false
} else if (data.type === 'html') {
htmlContent.value = data.content
+55 -4
@@ -12,15 +12,31 @@
<span class="label">Benutzername:</span>
<span>{{ auth.user?.username }}</span>
</div>
<div class="info-row">
<span class="label">E-Mail:</span>
<span>{{ auth.user?.email || 'Nicht angegeben' }}</span>
</div>
<div class="info-row">
<span class="label">Rolle:</span>
<Tag :value="auth.user?.role" :severity="auth.user?.role === 'admin' ? 'danger' : 'info'" />
</div>
</div>
<p class="hint" style="margin:0.75rem 0 0.5rem;font-size:0.8rem;color:var(--p-text-muted-color)">
Vor- und Nachname werden anderen Benutzern angezeigt, wenn du etwas mit ihnen teilst.
</p>
<form @submit.prevent="saveProfile" class="profile-form">
<div class="field-row">
<div class="field">
<label>Vorname</label>
<InputText v-model="profile.first_name" fluid />
</div>
<div class="field">
<label>Nachname</label>
<InputText v-model="profile.last_name" fluid />
</div>
</div>
<div class="field">
<label>E-Mail</label>
<InputText v-model="profile.email" type="email" fluid />
</div>
<Button type="submit" label="Profil speichern" :loading="profileLoading" size="small" />
</form>
</div>
<!-- Change Password -->
@@ -192,6 +208,36 @@ function downloadClient(client) {
window.location.href = `/api/clients/${client.platform}/download`
}
// --- Profile (Vorname/Nachname/E-Mail) ---
const profile = ref({ first_name: '', last_name: '', email: '' })
const profileLoading = ref(false)
async function loadProfile() {
try {
const res = await apiClient.get('/auth/me')
profile.value = {
first_name: res.data.first_name || '',
last_name: res.data.last_name || '',
email: res.data.email || '',
}
auth.user = { ...auth.user, ...res.data }
} catch { /* ignore */ }
}
async function saveProfile() {
profileLoading.value = true
try {
const res = await apiClient.put('/auth/me', profile.value)
auth.user = { ...auth.user, ...res.data }
toast.add({ severity: 'success', summary: 'Profil gespeichert', life: 2500 })
} catch (err) {
toast.add({ severity: 'error', summary: 'Fehler',
detail: err.response?.data?.error || err.message, life: 4000 })
} finally {
profileLoading.value = false
}
}
// --- Password change ---
const currentPassword = ref('')
const newPassword = ref('')
@@ -334,6 +380,7 @@ async function doDeleteAccount() {
onMounted(async () => {
loadAccounts()
loadProfile()
try {
const res = await apiClient.get('/clients')
availableClients.value = res.data.clients
@@ -352,6 +399,10 @@ onMounted(async () => {
.section-header h3 { margin: 0; }
.settings-info { display: flex; flex-direction: column; gap: 0.5rem; }
.info-row { display: flex; align-items: center; gap: 0.5rem; }
.profile-form { display: flex; flex-direction: column; gap: 0.5rem; max-width: 540px; }
.profile-form .field-row { display: flex; gap: 0.75rem; }
.profile-form .field-row .field { flex: 1; }
.profile-form .field label { display: block; font-size: 0.8rem; margin-bottom: 0.25rem; }
.info-row .label { font-weight: 500; min-width: 120px; }
.password-form { max-width: 400px; }
.password-form .field { margin-bottom: 1rem; }
+773
@@ -0,0 +1,773 @@
<template>
<div class="view-container">
<div class="view-header">
<h2>Aufgaben</h2>
<div class="header-actions">
<Button icon="pi pi-list" label="Neue Liste" size="small" outlined @click="showNewList = true" />
<Button icon="pi pi-upload" label="Import" size="small" outlined @click="triggerImport" />
<input ref="importInput" type="file" accept=".ics,.ical,.csv" hidden @change="onImportFile" />
<Button icon="pi pi-download" label="Export" size="small" outlined
:disabled="!selectedListId" @click="showExportDialog = true" />
<Button icon="pi pi-plus" label="Neue Aufgabe" size="small"
:disabled="!writableLists.length" @click="openNewTask" />
</div>
</div>
<div class="tasks-layout">
<aside class="lists-sidebar">
<h4>Listen</h4>
<div v-for="tl in lists" :key="tl.id"
class="list-item" :class="{ active: selectedListId === tl.id }"
@click="selectedListId = tl.id">
<span class="list-color" :style="{ background: tl.color }"></span>
<span class="list-name">{{ tl.name }}</span>
<span v-if="tl.permission !== 'owner'" class="shared-label"
:title="`Geteilt von ${tl.owner_display_name || tl.owner_name}`">
(geteilt von {{ tl.owner_display_name || tl.owner_name }})
</span>
<span class="count">{{ tl.task_count }}</span>
<Button icon="pi pi-ellipsis-v" text size="small" class="list-menu"
@click.stop="openListMenu(tl)" />
</div>
</aside>
<div class="tasks-main">
<div class="toolbar">
<InputText v-model="search" placeholder="Aufgaben suchen..." fluid />
<label class="toggle"><Checkbox v-model="hideDone" :binary="true" /> Erledigte ausblenden</label>
</div>
<div v-if="selectedTaskIds.length" class="bulk-bar">
<span>{{ selectedTaskIds.length }} ausgewaehlt</span>
<Button icon="pi pi-trash" :label="`${selectedTaskIds.length} loeschen`"
severity="danger" size="small" @click="bulkDelete" />
<Button label="Auswahl aufheben" size="small" text @click="selectedTaskIds = []" />
</div>
<table class="task-table">
<thead>
<tr>
<th class="col-check">
<Checkbox v-model="allSelected" :binary="true" @change="toggleAll" />
</th>
<th class="col-done"></th>
<th>Titel</th>
<th>Faellig</th>
<th>Prio</th>
<th>Status</th>
<th></th>
</tr>
</thead>
<tbody>
<tr v-for="t in filteredTasks" :key="t.id" class="task-row"
:class="{ done: t.status === 'COMPLETED', selected: selectedTaskIds.includes(t.id) }"
@click="openEditTask(t)">
<td class="col-check" @click.stop>
<Checkbox :modelValue="selectedTaskIds.includes(t.id)" :binary="true"
@update:modelValue="toggleSelect(t.id, $event)" />
</td>
<td class="col-done" @click.stop>
<Checkbox :modelValue="t.status === 'COMPLETED'" :binary="true"
@update:modelValue="toggleDone(t, $event)" title="Erledigt" />
</td>
<td class="col-title">
<span>{{ t.summary || '(ohne Titel)' }}</span>
<small v-if="t.description" class="meta">{{ shortDesc(t.description) }}</small>
</td>
<td class="col-date">{{ formatDue(t.due) }}</td>
<td>{{ formatPrio(t.priority) }}</td>
<td><span class="status-badge" :class="statusClass(t.status)">{{ statusLabel(t.status) }}</span></td>
<td class="col-actions" @click.stop>
<Button icon="pi pi-trash" text size="small" severity="danger" @click="confirmDelete(t)" />
</td>
</tr>
<tr v-if="!filteredTasks.length">
<td colspan="7" class="empty-row">Keine Aufgaben.</td>
</tr>
</tbody>
</table>
</div>
</div>
<!-- New List Dialog -->
<Dialog v-model:visible="showNewList" header="Neue Aufgabenliste" modal :style="{ width: '400px' }">
<div class="field">
<label>Name</label>
<InputText v-model="newListName" fluid autofocus @keyup.enter="createList" />
</div>
<div class="field">
<label>Farbe</label>
<InputText v-model="newListColor" type="color" style="width: 60px; height: 36px" />
</div>
<template #footer>
<Button label="Abbrechen" text @click="showNewList = false" />
<Button label="Erstellen" @click="createList" />
</template>
</Dialog>
<!-- List Menu -->
<Dialog v-model:visible="showListMenu" header="Listen-Optionen" modal :style="{ width: '480px' }">
<div v-if="menuList">
<div class="rename-row">
<template v-if="!isRenaming">
<strong>{{ menuList.name }}</strong>
<Button v-if="menuList.permission === 'owner'"
icon="pi pi-pencil" text size="small" title="Umbenennen"
@click="startRename" />
</template>
<template v-else>
<InputText v-model="renameValue" fluid autofocus
@keyup.enter="saveRename" @keyup.escape="isRenaming = false" />
<Button icon="pi pi-check" text size="small" severity="success"
title="Speichern" @click="saveRename" />
<Button icon="pi pi-times" text size="small"
title="Abbrechen" @click="isRenaming = false" />
</template>
</div>
<div class="field">
<label>Farbe</label>
<InputText :modelValue="menuList.color" @change="onListColor($event)" type="color" style="width:60px; height:36px" />
</div>
<div v-if="menuList.permission === 'owner'" class="field">
<label>Mit Benutzer teilen</label>
<div class="share-row">
<div style="position: relative; flex: 1;">
<InputText v-model="shareUsername" placeholder="Benutzername suchen..."
fluid @input="onShareSearch" />
<div v-if="shareSearchResults.length" class="user-search-popup">
<div v-for="u in shareSearchResults" :key="u.id" class="user-result"
@click="shareUsername = u.username; shareSearchResults = []">
<i class="pi pi-user"></i>
<span>{{ u.username }}</span>
<small v-if="u.full_name" class="user-fullname">{{ u.full_name }}</small>
</div>
</div>
</div>
<Select v-model="sharePermission" :options="permOptions" optionLabel="label" optionValue="value" />
<Button label="Teilen" size="small" @click="doShare" />
</div>
<div v-if="listShares.length" class="existing-shares">
<template v-for="s in listShares" :key="s.id">
<div v-if="editingShareId !== s.id" class="share-perm-item">
<i class="pi pi-user"></i> <span>{{ s.username }}</span>
<span class="perm-label">{{ s.permission === 'readwrite' ? 'Lesen+Schreiben' : 'Lesen' }}</span>
<Button icon="pi pi-pencil" text size="small" title="Bearbeiten" @click="startEditShare(s)" />
<Button icon="pi pi-trash" text size="small" severity="danger" title="Entfernen" @click="removeShare(s.id)" />
</div>
<div v-else class="share-perm-item editing">
<i class="pi pi-user"></i> <span>{{ s.username }}</span>
<Select v-model="editSharePermission" :options="permOptions" optionLabel="label" optionValue="value" />
<Button icon="pi pi-check" text size="small" severity="success" title="Speichern" @click="saveEditShare(s)" />
<Button icon="pi pi-times" text size="small" title="Abbrechen" @click="editingShareId = null" />
</div>
</template>
</div>
</div>
<div v-if="menuList.permission === 'owner'" class="field" style="border-top:1px solid var(--p-surface-200); padding-top:1rem">
<Button label="Liste loeschen" severity="danger" outlined size="small" @click="confirmDeleteList = true" />
</div>
<div class="field" style="border-top:1px solid var(--p-surface-200); padding-top:1rem">
<label><i class="pi pi-info-circle"></i> CalDAV-Zugang (Handy / DAVx5)</label>
<div class="caldav-hint">In DAVx5 unter demselben Konto sichtbar wie Kalender. Aufgabenlisten sind mit "OpenTasks" synchronisierbar.</div>
<div class="url-row">
<strong>Listen-URL:</strong>
<code>{{ origin }}/dav/{{ username }}/tl-{{ menuList.id }}/</code>
<Button icon="pi pi-copy" text size="small" @click="copy(`${origin}/dav/${username}/tl-${menuList.id}/`)" />
</div>
</div>
</div>
</Dialog>
<!-- Task Dialog -->
<Dialog v-model:visible="showTaskDialog" :header="editingTaskId ? 'Aufgabe bearbeiten' : 'Neue Aufgabe'"
modal :style="{ width: '560px' }">
<div v-if="writableLists.length > 1" class="field">
<label>Liste</label>
<Select v-model="taskTargetListId" :options="writableListOptions"
optionLabel="label" optionValue="id" fluid />
</div>
<div class="field">
<label>Titel</label>
<InputText v-model="taskForm.summary" fluid autofocus />
</div>
<div class="field">
<label>Beschreibung</label>
<Textarea v-model="taskForm.description" rows="3" fluid />
</div>
<div class="field-row">
<div class="field">
<label>Faellig</label>
<InputText v-model="taskForm.due" type="datetime-local" fluid />
</div>
<div class="field">
<label>Status</label>
<Select v-model="taskForm.status" :options="statusOptions" optionLabel="label" optionValue="value" fluid />
</div>
</div>
<div class="field-row">
<div class="field">
<label>Prioritaet</label>
<Select v-model="taskForm.priority" :options="prioOptions" optionLabel="label" optionValue="value" fluid />
</div>
<div class="field">
<label>Fortschritt %</label>
<InputText v-model.number="taskForm.percent_complete" type="number" min="0" max="100" fluid />
</div>
</div>
<div class="field">
<label>Kategorien (kommagetrennt)</label>
<InputText v-model="taskForm.categories" fluid />
</div>
<template #footer>
<Button v-if="editingTaskId" label="Loeschen" text severity="danger" @click="deleteCurrent" />
<Button label="Abbrechen" text @click="showTaskDialog = false" />
<Button :label="editingTaskId ? 'Speichern' : 'Erstellen'" @click="saveTask" />
</template>
</Dialog>
<Dialog v-model:visible="confirmDeleteList" header="Liste loeschen" modal :style="{ width: '400px' }">
<p>Liste <strong>{{ menuList?.name }}</strong> mit allen Aufgaben loeschen?</p>
<template #footer>
<Button label="Abbrechen" text @click="confirmDeleteList = false" />
<Button label="Loeschen" severity="danger" @click="deleteList" />
</template>
</Dialog>
<!-- Export Dialog -->
<Dialog v-model:visible="showExportDialog" header="Aufgaben exportieren" modal :style="{ width: '420px' }">
<p>Aus Liste <strong>{{ currentList?.name }}</strong></p>
<div class="field">
<label>Format</label>
<Select v-model="exportFormat" :options="exportFormats" optionLabel="label" optionValue="value" fluid />
</div>
<template #footer>
<Button label="Abbrechen" text @click="showExportDialog = false" />
<Button label="Herunterladen" icon="pi pi-download" @click="doExport" />
</template>
</Dialog>
</div>
</template>
<script setup>
import { ref, reactive, computed, onMounted, onUnmounted, watch } from 'vue'
import { useToast } from 'primevue/usetoast'
import { useAuthStore } from '../stores/auth'
import apiClient from '../api/client'
import Button from 'primevue/button'
import Dialog from 'primevue/dialog'
import InputText from 'primevue/inputtext'
import Textarea from 'primevue/textarea'
import Select from 'primevue/select'
import Checkbox from 'primevue/checkbox'
const toast = useToast()
const auth = useAuthStore()
const origin = computed(() => window.location.origin)
const username = computed(() => auth.user?.username || '')
const lists = ref([])
const selectedListId = ref(null)
const taskTargetListId = ref(null)
const writableLists = computed(() =>
lists.value.filter(l => l.permission === 'owner' || l.permission === 'readwrite')
)
const writableListOptions = computed(() => writableLists.value.map(l => ({
...l,
label: l.permission === 'owner'
? l.name
: `${l.name} (geteilt von ${l.owner_display_name || l.owner_name})`,
})))
const tasks = ref([])
const search = ref('')
const hideDone = ref(false)
const selectedTaskIds = ref([])
const showNewList = ref(false)
const newListName = ref('')
const newListColor = ref('#10b981')
const showListMenu = ref(false)
const menuList = ref(null)
const shareUsername = ref('')
const sharePermission = ref('read')
const listShares = ref([])
const shareSearchResults = ref([])
const editingShareId = ref(null)
const editSharePermission = ref('read')
const isRenaming = ref(false)
const renameValue = ref('')
function startRename() {
renameValue.value = menuList.value?.name || ''
isRenaming.value = true
}
async function saveRename() {
const newName = renameValue.value.trim()
if (!newName || !menuList.value || newName === menuList.value.name) {
isRenaming.value = false
return
}
try {
await apiClient.put(`/tasklists/${menuList.value.id}`, { name: newName })
menuList.value.name = newName
isRenaming.value = false
await loadLists()
toast.add({ severity: 'success', summary: 'Umbenannt', life: 2000 })
} catch (err) {
toast.add({ severity: 'error', summary: 'Fehler',
detail: err.response?.data?.error || err.message, life: 4000 })
}
}
let shareSearchTimer = null
function startEditShare(s) {
editingShareId.value = s.id
editSharePermission.value = s.permission
}
async function saveEditShare(s) {
if (!menuList.value) return
try {
await apiClient.post(`/tasklists/${menuList.value.id}/share`, {
username: s.username,
permission: editSharePermission.value,
})
editingShareId.value = null
await loadShares()
toast.add({ severity: 'success', summary: 'Berechtigung aktualisiert', life: 2500 })
} catch (err) {
toast.add({ severity: 'error', summary: 'Fehler',
detail: err.response?.data?.error || err.message, life: 4000 })
}
}
function onShareSearch() {
clearTimeout(shareSearchTimer)
const q = shareUsername.value.trim()
if (q.length < 2) { shareSearchResults.value = []; return }
shareSearchTimer = setTimeout(async () => {
try {
const res = await apiClient.get('/users/search', { params: { q } })
shareSearchResults.value = res.data
} catch { shareSearchResults.value = [] }
}, 250)
}
const permOptions = [
{ label: 'Lesen', value: 'read' },
{ label: 'Lesen+Schreiben', value: 'readwrite' },
]
const confirmDeleteList = ref(false)
const showTaskDialog = ref(false)
const editingTaskId = ref(null)
const taskForm = reactive({
summary: '', description: '',
due: '', status: 'NEEDS-ACTION', priority: null, percent_complete: null,
categories: '',
})
const statusOptions = [
{ label: 'Offen', value: 'NEEDS-ACTION' },
{ label: 'In Arbeit', value: 'IN-PROCESS' },
{ label: 'Erledigt', value: 'COMPLETED' },
{ label: 'Abgebrochen', value: 'CANCELLED' },
]
const prioOptions = [
{ label: '—', value: null },
{ label: 'Hoch (1)', value: 1 },
{ label: 'Mittel (5)', value: 5 },
{ label: 'Niedrig (9)', value: 9 },
]
const showExportDialog = ref(false)
const exportFormat = ref('ics')
const exportFormats = [
{ label: 'iCalendar (.ics)', value: 'ics' },
{ label: 'CSV (.csv)', value: 'csv' },
]
const importInput = ref(null)
const currentList = computed(() => lists.value.find(l => l.id === selectedListId.value))
const filteredTasks = computed(() => {
const q = search.value.trim().toLowerCase()
return tasks.value.filter(t => {
if (hideDone.value && t.status === 'COMPLETED') return false
if (q && !(t.summary || '').toLowerCase().includes(q)
&& !(t.description || '').toLowerCase().includes(q)) return false
return true
})
})
const allSelected = computed({
  // The header checkbox is bound via v-model, but the actual (de)selection
  // happens in toggleAll() on @change, so the setter is intentionally a no-op.
  get: () => filteredTasks.value.length > 0 && filteredTasks.value.every(t => selectedTaskIds.value.includes(t.id)),
  set: () => {},
})
function toggleAll() {
const ids = filteredTasks.value.map(t => t.id)
const allSel = ids.every(id => selectedTaskIds.value.includes(id))
if (allSel) selectedTaskIds.value = selectedTaskIds.value.filter(id => !ids.includes(id))
else {
const set = new Set(selectedTaskIds.value); ids.forEach(id => set.add(id))
selectedTaskIds.value = [...set]
}
}
function toggleSelect(id, checked) {
if (checked && !selectedTaskIds.value.includes(id)) selectedTaskIds.value = [...selectedTaskIds.value, id]
else if (!checked) selectedTaskIds.value = selectedTaskIds.value.filter(x => x !== id)
}
function shortDesc(s) { return s.length > 80 ? s.slice(0, 80) + '…' : s }
function formatDue(d) {
if (!d) return ''
return new Date(d).toLocaleString('de-DE', { day: '2-digit', month: '2-digit', year: 'numeric', hour: '2-digit', minute: '2-digit' })
}
function formatPrio(p) {
  // iCalendar PRIORITY: 0 means "undefined", 1-4 high, 5 medium, 6-9 low
  if (p === null || p === undefined || p === 0) return ''
  if (p <= 3) return 'Hoch'
  if (p >= 7) return 'Niedrig'
  return 'Mittel'
}
function statusLabel(s) {
return ({ 'NEEDS-ACTION': 'Offen', 'IN-PROCESS': 'In Arbeit', 'COMPLETED': 'Erledigt', 'CANCELLED': 'Abgebrochen' })[s] || 'Offen'
}
function statusClass(s) {
return { 'NEEDS-ACTION': 'todo', 'IN-PROCESS': 'progress', 'COMPLETED': 'done', 'CANCELLED': 'cancelled' }[s] || 'todo'
}
async function loadLists() {
const res = await apiClient.get('/tasklists')
lists.value = res.data
if (!selectedListId.value && lists.value.length) selectedListId.value = lists.value[0].id
if (!lists.value.length) {
await apiClient.post('/tasklists', { name: 'Meine Aufgaben', color: '#10b981' })
await loadLists()
}
}
async function loadTasks() {
if (!selectedListId.value) { tasks.value = []; return }
try {
const res = await apiClient.get(`/tasklists/${selectedListId.value}/tasks`)
tasks.value = res.data
} catch { tasks.value = [] }
}
async function createList() {
if (!newListName.value.trim()) return
await apiClient.post('/tasklists', { name: newListName.value.trim(), color: newListColor.value })
showNewList.value = false
newListName.value = ''
await loadLists()
}
function openListMenu(tl) {
menuList.value = tl
shareUsername.value = ''
shareSearchResults.value = []
isRenaming.value = false
showListMenu.value = true
loadShares()
}
async function loadShares() {
if (!menuList.value || menuList.value.permission !== 'owner') { listShares.value = []; return }
try {
const res = await apiClient.get(`/tasklists/${menuList.value.id}/shares`)
listShares.value = res.data
} catch { listShares.value = [] }
}
async function doShare() {
if (!menuList.value || !shareUsername.value.trim()) return
try {
await apiClient.post(`/tasklists/${menuList.value.id}/share`, {
username: shareUsername.value.trim(), permission: sharePermission.value,
})
toast.add({ severity: 'success', summary: 'Geteilt', life: 2500 })
shareUsername.value = ''
shareSearchResults.value = []
await loadShares()
} catch (err) {
toast.add({ severity: 'error', summary: err.response?.data?.error || 'Fehler', life: 4000 })
}
}
async function removeShare(id) {
await apiClient.delete(`/tasklists/${menuList.value.id}/shares/${id}`)
await loadShares()
}
async function onListColor(ev) {
const color = ev.target.value
await apiClient.put(`/tasklists/${menuList.value.id}/my-color`, { color })
menuList.value.color = color
await loadLists()
}
async function deleteList() {
if (!menuList.value) return
await apiClient.delete(`/tasklists/${menuList.value.id}`)
confirmDeleteList.value = false
showListMenu.value = false
if (selectedListId.value === menuList.value.id) selectedListId.value = null
await loadLists()
await loadTasks()
}
function openNewTask() {
if (!writableLists.value.length) {
toast.add({ severity: 'warn', summary: 'Keine beschreibbare Liste', life: 3000 })
return
}
editingTaskId.value = null
Object.assign(taskForm, {
summary: '', description: '', due: '',
status: 'NEEDS-ACTION', priority: null, percent_complete: null,
categories: '',
})
// Default list: the currently selected one if writable, otherwise the first writable one
const sel = writableLists.value.find(l => l.id === selectedListId.value)
taskTargetListId.value = sel ? sel.id : writableLists.value[0].id
showTaskDialog.value = true
}
function openEditTask(t) {
editingTaskId.value = t.id
Object.assign(taskForm, {
summary: t.summary || '',
description: t.description || '',
due: t.due ? t.due.slice(0, 16) : '',
status: t.status || 'NEEDS-ACTION',
priority: t.priority,
percent_complete: t.percent_complete,
categories: (t.categories || []).join(', '),
})
showTaskDialog.value = true
}
async function saveTask() {
if (!taskForm.summary.trim()) return
const payload = {
summary: taskForm.summary.trim(),
description: taskForm.description,
due: taskForm.due ? new Date(taskForm.due).toISOString() : null,
status: taskForm.status,
priority: taskForm.priority,
percent_complete: taskForm.percent_complete,
categories: taskForm.categories.split(',').map(s => s.trim()).filter(Boolean),
}
try {
if (editingTaskId.value) {
await apiClient.put(`/tasks/${editingTaskId.value}`, payload)
} else {
const target = taskTargetListId.value || selectedListId.value
if (!target) {
toast.add({ severity: 'error', summary: 'Bitte Liste waehlen', life: 3000 })
return
}
await apiClient.post(`/tasklists/${target}/tasks`, payload)
}
showTaskDialog.value = false
await loadLists()
await loadTasks()
} catch (err) {
toast.add({ severity: 'error', summary: 'Fehler', detail: err.response?.data?.error, life: 4000 })
}
}
async function toggleDone(t, checked) {
try {
await apiClient.put(`/tasks/${t.id}`, { status: checked ? 'COMPLETED' : 'NEEDS-ACTION' })
await loadTasks()
} catch (err) {
toast.add({ severity: 'error', summary: 'Fehler', life: 3000 })
}
}
async function deleteCurrent() {
if (!editingTaskId.value) return
if (!confirm('Aufgabe wirklich loeschen?')) return
await apiClient.delete(`/tasks/${editingTaskId.value}`)
showTaskDialog.value = false
await loadLists()
await loadTasks()
}
async function confirmDelete(t) {
if (!confirm(`"${t.summary || '(ohne Titel)'}" loeschen?`)) return
await apiClient.delete(`/tasks/${t.id}`)
await loadLists()
await loadTasks()
}
async function bulkDelete() {
const ids = [...selectedTaskIds.value]
if (!ids.length || !confirm(`${ids.length} Aufgabe(n) loeschen?`)) return
let ok = 0, fail = 0
for (const id of ids) {
try { await apiClient.delete(`/tasks/${id}`); ok++ } catch { fail++ }
}
selectedTaskIds.value = []
toast.add({
severity: fail ? 'warn' : 'success',
summary: `${ok} geloescht${fail ? `, ${fail} fehlgeschlagen` : ''}`, life: 3000,
})
await loadLists()
await loadTasks()
}
function triggerImport() {
if (!selectedListId.value) {
toast.add({ severity: 'warn', summary: 'Keine Liste ausgewaehlt', life: 3000 })
return
}
importInput.value?.click()
}
async function onImportFile(ev) {
const file = ev.target.files?.[0]
ev.target.value = ''
if (!file) return
const fd = new FormData()
fd.append('file', file)
try {
const res = await apiClient.post(`/tasklists/${selectedListId.value}/import`, fd,
{ headers: { 'Content-Type': 'multipart/form-data' } })
toast.add({
severity: 'success',
summary: `${res.data.imported} importiert`,
detail: res.data.skipped ? `${res.data.skipped} uebersprungen` : undefined,
life: 4000,
})
await loadLists()
await loadTasks()
} catch (err) {
toast.add({ severity: 'error', summary: 'Import fehlgeschlagen', detail: err.response?.data?.error, life: 5000 })
}
}
async function doExport() {
if (!selectedListId.value) return
try {
const res = await apiClient.get(`/tasklists/${selectedListId.value}/export`,
{ params: { format: exportFormat.value }, responseType: 'blob' })
const ext = exportFormat.value === 'csv' ? 'csv' : 'ics'
const url = URL.createObjectURL(new Blob([res.data]))
const a = document.createElement('a')
a.href = url
a.download = `${currentList.value?.name || 'aufgaben'}.${ext}`
a.click()
URL.revokeObjectURL(url)
showExportDialog.value = false
} catch (err) {
toast.add({ severity: 'error', summary: 'Export fehlgeschlagen', life: 4000 })
}
}
async function copy(text) {
  // clipboard.writeText returns a promise and fails on insecure contexts,
  // so only confirm after the write actually succeeded
  try {
    await navigator.clipboard.writeText(text)
    toast.add({ severity: 'info', summary: 'Kopiert', life: 1500 })
  } catch {
    toast.add({ severity: 'error', summary: 'Kopieren fehlgeschlagen', life: 2000 })
  }
}
// --- Live refresh via SSE ---
let eventSource = null
let reloadTimer = null
function scheduleReload() {
if (reloadTimer) return
reloadTimer = setTimeout(async () => {
reloadTimer = null
await loadLists()
await loadTasks()
}, 300)
}
onMounted(async () => {
await loadLists()
await loadTasks()
if (auth.accessToken) {
try {
eventSource = new EventSource(`/api/sync/events?token=${encodeURIComponent(auth.accessToken)}`)
eventSource.addEventListener('tasklist', scheduleReload)
eventSource.addEventListener('message', scheduleReload)
eventSource.onerror = () => {}
} catch {}
}
})
onUnmounted(() => {
if (reloadTimer) clearTimeout(reloadTimer)
if (eventSource) eventSource.close()
})
watch(selectedListId, loadTasks)
</script>
<style scoped>
.view-container { padding: 1.5rem; }
.view-header { display: flex; justify-content: space-between; align-items: center; margin-bottom: 1rem; }
.view-header h2 { margin: 0; }
.header-actions { display: flex; gap: 0.5rem; }
.tasks-layout { display: flex; gap: 1rem; align-items: flex-start; }
.lists-sidebar { width: 260px; flex-shrink: 0; }
.lists-sidebar h4 { margin: 0 0 0.5rem; font-size: 0.85rem; text-transform: uppercase; color: var(--p-text-muted-color); }
.list-item { display: flex; align-items: center; gap: 0.5rem; padding: 0.5rem; border-radius: 4px;
cursor: pointer; font-size: 0.875rem; }
.list-item:hover { background: var(--p-surface-50); }
.list-item.active { background: var(--p-primary-50); }
.list-color { width: 12px; height: 12px; border-radius: 3px; flex-shrink: 0; }
.list-name { flex: 1; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.shared-label { color: var(--p-text-muted-color); font-size: 0.7rem; }
.count { color: var(--p-text-muted-color); font-size: 0.8rem; }
.list-menu { opacity: 0; transition: opacity .15s; }
.list-item:hover .list-menu { opacity: 1; }
.tasks-main { flex: 1; min-width: 0; }
.toolbar { display: flex; gap: 0.75rem; align-items: center; margin-bottom: 0.75rem; }
.toggle { display: flex; align-items: center; gap: 0.35rem; font-size: 0.875rem; white-space: nowrap; }
.bulk-bar { display: flex; gap: 0.5rem; align-items: center; padding: 0.5rem 0.75rem;
background: var(--p-primary-50); border-radius: 6px; margin-bottom: 0.5rem; font-size: 0.875rem; }
.task-table { width: 100%; border-collapse: collapse; font-size: 0.875rem; }
.task-table th { text-align: left; padding: 0.5rem; border-bottom: 2px solid var(--p-surface-200); font-weight: 600; }
.task-table td { padding: 0.5rem; border-bottom: 1px solid var(--p-surface-100); vertical-align: top; }
.task-row { cursor: pointer; }
.task-row:hover { background: var(--p-surface-50); }
.task-row.done .col-title span { text-decoration: line-through; color: var(--p-text-muted-color); }
.task-row.selected { background: var(--p-primary-50); }
.col-check, .col-done { width: 36px; }
.col-actions { width: 60px; text-align: right; }
.col-date { white-space: nowrap; }
.col-title { }
.meta { display: block; color: var(--p-text-muted-color); font-size: 0.75rem; margin-top: 0.1rem; }
.empty-row { text-align: center; color: var(--p-text-muted-color); padding: 2rem !important; }
.status-badge { display: inline-block; padding: 0.15rem 0.5rem; border-radius: 10px; font-size: 0.72rem; }
.status-badge.todo { background: var(--p-surface-100); }
.status-badge.progress { background: var(--p-blue-100); color: var(--p-blue-700); }
.status-badge.done { background: var(--p-green-100); color: var(--p-green-700); }
.status-badge.cancelled { background: var(--p-red-100); color: var(--p-red-700); }
.field { margin-bottom: 0.75rem; }
.field label { display: block; margin-bottom: 0.25rem; font-weight: 500; font-size: 0.875rem; }
.field-row { display: flex; gap: 0.75rem; }
.field-row .field { flex: 1; }
.share-row { display: flex; gap: 0.5rem; align-items: center; flex-wrap: wrap; }
.rename-row { display: flex; align-items: center; gap: 0.5rem; margin-bottom: 0.75rem; }
.rename-row strong { font-size: 1rem; }
.user-search-popup { position: absolute; top: 100%; left: 0; right: 0; z-index: 10;
background: white; border: 1px solid var(--p-surface-200);
border-radius: 4px; max-height: 160px; overflow-y: auto;
box-shadow: 0 4px 12px rgba(0,0,0,0.1); }
.user-result { padding: 0.5rem 0.75rem; cursor: pointer; font-size: 0.875rem;
display: flex; gap: 0.5rem; align-items: center; }
.user-result:hover { background: var(--p-primary-50); }
.user-fullname { color: var(--p-text-muted-color); font-size: 0.75rem; margin-left: auto; }
.existing-shares { margin-top: 0.5rem; }
.share-perm-item { display: flex; align-items: center; gap: 0.5rem; padding: 0.375rem 0; font-size: 0.875rem; flex-wrap: wrap; }
.share-perm-item.editing { background: var(--p-surface-50); padding: 0.5rem; border-radius: 4px; }
.perm-label { color: var(--p-text-muted-color); font-size: 0.75rem; }
.url-row { display: flex; gap: 0.5rem; align-items: center; flex-wrap: wrap; }
.url-row strong { min-width: 110px; font-size: 0.8rem; }
.url-row code { background: var(--p-surface-100); padding: 0.25rem 0.5rem; border-radius: 4px; font-size: 0.8rem; flex: 1; word-break: break-all; }
.caldav-hint { font-size: 0.8rem; color: var(--p-text-muted-color); margin: 0 0 0.5rem; }
</style>
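The `filteredTasks` computed in the component above is a pure filter over the search text and the done-status toggle. Extracted as a standalone function (names here are illustrative, not part of the component), the same logic is straightforward to unit-test:

```javascript
// Standalone sketch of the component's filteredTasks logic:
// hide completed tasks when hideDone is set, and match the search
// string against summary and description, case-insensitively.
function filterTasks(tasks, search, hideDone) {
  const q = search.trim().toLowerCase()
  return tasks.filter(t => {
    if (hideDone && t.status === 'COMPLETED') return false
    if (q && !(t.summary || '').toLowerCase().includes(q)
        && !(t.description || '').toLowerCase().includes(q)) return false
    return true
  })
}
```

Keeping the predicate free of component state like this is what lets the computed stay a one-liner wrapper around reactive refs.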
@@ -24,15 +24,40 @@ server {
proxy_set_header Connection "upgrade";
}
# CalDAV/CardDAV needs special methods
# Server-Sent Events: buffering off, long read timeouts, otherwise the
# live-refresh connection drops after a few seconds.
location /api/sync/events {
proxy_pass http://127.0.0.1:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_buffering off;
proxy_cache off;
proxy_read_timeout 24h;
proxy_send_timeout 24h;
chunked_transfer_encoding on;
}
# CalDAV/CardDAV needs special methods (PROPFIND, REPORT, MKCALENDAR)
location /dav/ {
# Since 2017 nginx passes most WebDAV methods through out of the box.
# Important: no buffering of the request body (PUT of larger ICS files)
# and correct forwarding headers for HTTP Basic auth.
proxy_pass http://127.0.0.1:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass_request_headers on;
proxy_request_buffering off;
client_max_body_size 50M;
}
location = /.well-known/caldav { return 301 https://$host/dav/; }
location = /.well-known/carddav { return 301 https://$host/dav/; }
}
# OnlyOffice Document Server (optional)
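The `/api/sync/events` location above keeps the SSE stream unbuffered because each event must reach the browser as soon as the backend emits it. On the wire, an event is a run of `field: value` lines terminated by a blank line; a minimal parser sketch of that framing (plain JS, hypothetical function name — the component itself relies on the browser's built-in `EventSource`):

```javascript
// Parse a chunk of SSE wire format into { event, data } objects.
// Per the event-stream format: "event:" names the event type
// (default "message"), "data:" lines accumulate and are joined
// with newlines, and a blank line dispatches the event.
function parseSseChunk(chunk) {
  const events = []
  for (const block of chunk.split('\n\n')) {
    if (!block.trim()) continue
    let type = 'message'
    const data = []
    for (const line of block.split('\n')) {
      if (line.startsWith('event:')) type = line.slice(6).trim()
      else if (line.startsWith('data:')) data.push(line.slice(5).trim())
    }
    events.push({ event: type, data: data.join('\n') })
  }
  return events
}
```

If `proxy_buffering` were left on, nginx would hold these blocks until its buffer fills, which is exactly the multi-second stall the config comment warns about.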