Compare commits


143 Commits

Author SHA1 Message Date
Stefan Hacker dd40c55f7d fix(cloud-files): pin triggers hydration, icon refresh via SHChangeNotify
CfSetPinState only changes the pin flag - without an explicit
call, nothing happens to the on-disk content and the Explorer
icon stays unchanged. That is why "Always available offline"
seemed to do nothing.

- On pin: CfHydratePlaceholder triggers FETCH_DATA and downloads
  the file in full
- On unpin: CfDehydratePlaceholder (already in place)
- After every state change, SHChangeNotify(SHCNE_UPDATEITEM) so
  the overlay icon is redrawn immediately, without the user
  having to press F5
- The log additionally records hydrate_err for debugging
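The pin/unpin flow described above can be sketched as follows. This is an illustrative Python sketch, not the actual Rust client code; the cfapi calls are injected as plain callables so only the decision logic is shown:

```python
def apply_pin_state(pin, hydrate, dehydrate, notify_shell):
    """Apply a pin-state change and force the matching side effect.

    Flipping the pin flag alone moves no bytes: the hydrate/dehydrate
    call does the actual work, and the shell notification redraws the
    overlay icon without the user pressing F5.
    """
    errors = []
    try:
        if pin:
            hydrate()      # CfHydratePlaceholder -> triggers FETCH_DATA
        else:
            dehydrate()    # CfDehydratePlaceholder -> frees disk space
    except OSError as exc:
        errors.append(f"hydrate_err={exc}")   # logged, not fatal
    notify_shell()         # SHChangeNotify(SHCNE_UPDATEITEM, ...)
    return errors
```

The real implementation makes these calls through windows-rs and writes the error list to the log file.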

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 23:09:28 +02:00
Stefan Hacker 78615d8897 fix(cloud-files): delete existing regular files before creating placeholders
If the client was previously active and then deactivated (or
killed hard), CfUnregisterSyncRoot converts all placeholders into
regular files. On the next activation, populate_placeholders tried
to create a new placeholder, which failed with ERROR_FILE_EXISTS -
and the error was merely logged via eprintln and swallowed.

Result: the file remained a plain regular file (no placeholder,
no cloud icon). Later, CfDehydratePlaceholder then fails with
HRESULT 0x80070178 "The file is not a cloud file", and "Free up
space" does not work.

populate_placeholders now checks before each create whether the
file already exists and is NOT a placeholder. If so: delete it,
then recreate it as a placeholder. Both successes and errors go
to .minicloud-cloudfiles.log so the outcome can be verified.
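A minimal sketch of the pre-create decision described above (the real check is Rust code inspecting file attributes; the boolean inputs are assumed to come from such an attribute check):

```python
def placeholder_action(exists: bool, is_placeholder: bool) -> str:
    """Decide what populate_placeholders should do for one path."""
    if not exists:
        return "create"             # fresh placeholder
    if is_placeholder:
        return "skip"               # already converted, nothing to do
    # A regular file left over from CfUnregisterSyncRoot: creating a
    # placeholder over it fails with ERROR_FILE_EXISTS, so delete first.
    return "delete_then_create"
```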

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 22:56:58 +02:00
Stefan Hacker 3c340f9653 fix(cloud-files): make pin/unpin actually take effect + CLI logging
set_pin_state had three problems:
- FILE_READ_ATTRIBUTES: CfSetPinState needs WRITE_ATTRIBUTES
- No OPEN_REPARSE_POINT: opening the handle itself may have
  triggered hydration before we could unpin
- No CfDehydratePlaceholder: switching the pin state to UNPINNED
  only changes the flag; the disk space is not released

Now:
- WRITE_ATTRIBUTES + OPEN_REPARSE_POINT when opening the handle
- On unpin, additionally CfDehydratePlaceholder, so that "Free up
  space" actually frees space
- Results and errors are written to <parent>\.minicloud-cloudfiles.log
  so we can see what happens

handle_cli_shortcuts now logs to %LOCALAPPDATA%\MiniCloud Sync\
cli.log, because Explorer discards the stdout/stderr of a process
it launches. Without that log, the actions started from the
context menu cannot be debugged.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 17:29:25 +02:00
Stefan Hacker 85dae4377f fix(cloud-files): repair AppliesTo syntax for the context menu
The old AppliesTo value had:
- doubled backslashes (Windows AQS wants single ones)
- a stray trailing backslash inside the quote, which broke the
  query

New:
- Clean AQS syntax: System.ItemPathDisplay:~< "C:\\..." with
  single backslashes (winreg writes REG_SZ verbatim)
- Registered under AllFilesystemObjects instead of *, so folders
  get the menu entry as well
- Default value set (in addition to MUIVerb), because some Windows
  versions use the default value as the display name
- uninstall removes both registry locations (old and new)

Note for Windows 11: classic shell verbs only appear under "Show
more options" (Shift+F10) by default. Getting into the main menu
would require IExplorerCommand via a COM extension.
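The corrected AppliesTo construction can be illustrated as follows; build_applies_to is a hypothetical helper, and the quoted-path form follows the commit message:

```python
def build_applies_to(mount: str) -> str:
    """Build an AQS AppliesTo filter for paths under the mount folder.

    Windows AQS wants single backslashes and no trailing backslash
    inside the quoted value - exactly the two bugs fixed above.
    """
    path = mount.rstrip("\\")
    return f'System.ItemPathDisplay:~< "{path}"'
```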

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 16:54:46 +02:00
Stefan Hacker 88c9617ae7 feat(client): hide sync paths and the local file browser while Cloud Files is active
When the Windows client runs with Cloud Files (OneDrive-style),
the classic sync-paths section and the local .cloud file browser
no longer make sense - Cloud Files creates placeholders directly
in Explorer and offers the same on-demand behavior with native
shell integration.

Server files remain visible (useful as a remote browser
independent of the mount).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 11:47:43 +02:00
Stefan Hacker 78cfbf1ad3 feat(cloud-files): shared folders + right-click menu
Backend:
- /api/sync/tree now returns {tree, shared} - shared contains all
  files that were shared WITH the user (FilePermission), top-level
  shares only, with the owner's name in the display name
- updated_at is additionally returned as modified_at in the
  response for client compatibility

Client:
- fetch_remote_entries merges the shared subtree into the mount
  point under the virtual folder "Geteilt mit mir" ("Shared with
  me", synthetic ID -1)
- modified_at falls back to updated_at if missing

Context menu:
- New HKCU registry entries for the verbs "Immer offline
  verfuegbar" / "Speicher freigeben" ("Always available offline" /
  "Free up space"); AppliesTo filters on the mount path so the
  verbs only appear for files below the sync folder
- Invokes the client's own .exe with --pin / --unpin <file>
- handle_cli_shortcuts performs the action and exits immediately,
  without starting the UI/tray/single-instance logic

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-23 11:15:04 +02:00
Stefan Hacker 4026defe79 feat(cloud-files): Explorer sidebar integration for Windows
Registers the sync folder as a shell namespace extension under
HKEY_CURRENT_USER (no admin rights needed), so it appears with its
own icon in the left pane of File Explorer - like OneDrive or
Dropbox.

- New module cloud_files::shell_integration with install/uninstall
- Registry entries under HKCU\Software\Classes\CLSID\{GUID} and
  HKCU\...\Explorer\Desktop\NameSpace\{GUID}
- Uses the running .exe as the icon source (fallback: imageres.dll)
- SHChangeNotify(SHCNE_ASSOCCHANGED) so Explorer refreshes immediately
- install/uninstall are called from register_sync_root/unregister
- winreg crate for clean registry access

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-22 15:47:05 +02:00
Stefan Hacker 2937082ba2 fix(cloud-files): clean re-register + FETCH_PLACEHOLDERS stub + more logging
- CfUnregisterSyncRoot BEFORE CfRegisterSyncRoot, so old policies
  (e.g. PARTIAL) don't stick around via the UPDATE flag
- Registered a FETCH_PLACEHOLDERS stub that replies with an empty
  result and the DISABLE_ON_DEMAND_POPULATION flag. Safety net in
  case Windows asks anyway despite the FULL policy
- log_msg at critical points (register, connect, populate), so the
  next timeout shows where things hang

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:29:11 +02:00
Stefan Hacker e55ce106d4 fix(cloud-files): population policy FULL instead of PARTIAL
With PARTIAL, Windows expects a FETCH_PLACEHOLDERS callback for
directory enumeration. We had not registered one, so Explorer ran
into a timeout when opening the mount folder.

FULL means: we pre-create all placeholders ourselves (which
populate_placeholders already does) and Windows doesn't ask.
Hydration stays PARTIAL - file content is still loaded on access
via FETCH_DATA.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:42:44 +02:00
Stefan Hacker 601e0741b1 fix(cloud-files): don't upload placeholders as local changes + logging
Cause of the "fully synced" problem: the notify watcher fired on
the cfapi placeholders we ourselves created during activation. The
sync_loop then uploaded them as local changes, which implicitly
triggered hydration. Result: no on-demand placeholders, but a full
sync.

- is_cfapi_placeholder() checks FILE_ATTRIBUTE_OFFLINE /
  RECALL_ON_DATA_ACCESS / RECALL_ON_OPEN - such files are skipped
  on upload
- The log file now lives NEXT TO the mount (not inside it), so it
  isn't itself treated as a cloud file
- FETCH_DATA now also logs success, so you can see that the
  callback fires at all
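The attribute test behind is_cfapi_placeholder() can be sketched in Python; the constants are the documented Win32 file-attribute values:

```python
# Documented Win32 file-attribute flags that mark cloud placeholders.
FILE_ATTRIBUTE_OFFLINE               = 0x00001000
FILE_ATTRIBUTE_RECALL_ON_OPEN        = 0x00040000
FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS = 0x00400000

PLACEHOLDER_MASK = (FILE_ATTRIBUTE_OFFLINE
                    | FILE_ATTRIBUTE_RECALL_ON_OPEN
                    | FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS)

def is_cfapi_placeholder(attrs: int) -> bool:
    """True if any cloud-placeholder attribute bit is set; such files
    must be skipped by the upload path, or touching them would
    implicitly hydrate the content."""
    return bool(attrs & PLACEHOLDER_MASK)
```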

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:42:00 +02:00
Stefan Hacker be121190b3 feat(cloud-files): persist mount path + force cleanup for dead sync roots
- cloud_files_mount in AppConfig -> survives restarts
- Cloud Files is automatically re-enabled on auto-login
- New commands cloud_files_get_mount and cloud_files_force_cleanup
- The UI shows a "Clean up" button when a mount is configured but
  not active, so the user can release/delete a folder that is
  stuck as a dead sync root after the client was killed hard

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:32:02 +02:00
Stefan Hacker 6274567219 fix(cloud-files): fix timeout causes in the FETCH_DATA callback
- The HTTP client gets a 60s timeout (instead of none)
- On send/network errors, CfExecute is always completed with a
  failure status, so Explorer doesn't run into the OS timeout
- If the server doesn't support Range (200 instead of 206), the
  requested range is cut out of the full body and the actual
  length is passed to CfExecute
- Errors are written to <mount>\.minicloud-cloudfiles.log, so the
  problem is visible at all on a timeout
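The 200-vs-206 handling can be sketched as a pure function (slice_range is a hypothetical name; the real Rust code feeds the result to CfExecute):

```python
def slice_range(status: int, body: bytes, offset: int, length: int):
    """Return (chunk, actual_length) for a ranged download.

    A 206 means the server honored the Range header; a 200 means it
    sent the full file, so we cut out the requested window ourselves
    and report how many bytes we actually deliver.
    """
    if status == 206:
        return body, len(body)
    if status == 200:
        chunk = body[offset:offset + length]
        return chunk, len(chunk)
    raise IOError(f"unexpected status {status}")
```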

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:24:51 +02:00
Stefan Hacker 204dbb6ab5 fix(client): Cloud Files section always visible, hint on unsupported platforms
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 09:06:54 +02:00
Stefan Hacker d9a4ee6a0b feat(client/windows): bring the cfapi sync to life (loop + watcher + UI)
Now actually functional, no longer just a dummy:

- Register fallback: first CF_REGISTER_FLAG_NONE; on "already
  registered", automatically retry with UPDATE. Works both on first
  activation and on client restart.
- Background loop (cloud_files::sync_loop) polls /api/sync/changes
  every 30s, creates placeholders for new files and replaces changed ones.
- A dedicated callback watcher (cloud_files::watcher::CallbackWatcher)
  listens on the mount folder and sends local changes (create/modify)
  to the loop, which uploads them via POST /api/files/upload.
- Helper create_placeholder_at() exported from the Windows module, so
  the loop can create placeholders for new server files.
- AppState gains cloud_files_loop + cloud_files_watcher fields; on
  disable, the loop is stopped cleanly and the watcher is dropped.

Frontend (App.vue):
- New section "Cloud Files (OneDrive-style)", only visible when the
  platform supports it (cloud_files_supported).
- Folder picker + enable/disable button.
- Error messages + sync log entries.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 08:46:52 +02:00
Stefan Hacker 8f70b047d8 fix(client/windows): CfConnectSyncRoot returns the key as a return value
In windows-rs 0.58, CfConnectSyncRoot takes only 4 arguments and
returns the CF_CONNECTION_KEY directly; there is no out parameter
anymore.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 16:37:16 +02:00
Stefan Hacker f9bf53803f fix(client/windows): port cfapi code to windows-rs 0.58
- Enabled the Win32_System_CorrelationVector feature (feature-gates
  CF_CALLBACK_INFO / CfExecute / CfConnectSyncRoot / CfCreatePlaceholders
  / CfSetPinState / CF_OPERATION_INFO / CF_CALLBACK_REGISTRATION)
- Enabled reqwest "blocking" (used in the cfapi callback thread)
- Cf* functions now return Result<(), Error> instead of HRESULT; all
  call sites switched to ? / .map_err
- CF_SYNC_POLICIES.Hydration/Population are wrapper structs;
  set the .Primary field instead of assigning the enum directly
- Removed LARGE_INTEGER (the fields are plain i64 in 0.58)
- Write FILETIME ticks directly as i64 (BasicInfo.*Time)
- Use FetchData.RequiredFileOffset/Length directly as i64
- CfCreatePlaceholders takes a slice + Option<*mut u32>
- CfSetPinState takes Option<*mut OVERLAPPED>
- Tauri command: release the MutexGuard before .await (Send constraint)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 16:29:18 +02:00
Stefan Hacker de1039fc7d feat(client): Windows Cloud Files API as a file provider (OneDrive-style)
New mode alongside the existing full sync: files appear in
Explorer as placeholders with a cloud icon and are streamed from
the Mini-Cloud server only on access.

Windows (MVP):
- CfRegisterSyncRoot + CfConnectSyncRoot
- CfCreatePlaceholders for every file from /api/sync/tree
- FETCH_DATA callback with range-based HTTPS download + CfExecute
- CfSetPinState for a manual "always keep offline"

Linux (skeleton):
- FUSE provider behind the feature flag linux_fuse (libfuse3-dev)
- Stub functions - implementation to follow

macOS:
- Stub only; requires an Apple signature - later

Tauri commands: cloud_files_supported/enable/disable/pin/unpin.
Cargo.toml: target-specific windows-rs dependency.
Docs: clients/desktop/CLOUD_FILES.md

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 16:19:22 +02:00
Stefan Hacker 2610e3b183 ui(files): upload arrow before the folder icon in the "Folder" button
Makes it obvious at first glance that the folder button also
triggers an upload (and not just a folder action).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 16:00:36 +02:00
Stefan Hacker 9f6132a400 feat: selection dropdowns show "(shared by <name>)" for shares
When the user's own calendar/address book/task list and a shared
one carry the same name, they are now distinguishable in the
create dialogs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:53:46 +02:00
Stefan Hacker ed944339c4 feat: list/calendar/address book names renamable via the three-dot menu
A pencil icon next to the name opens an inline editor (input field + check/X).
Enter saves, ESC cancels. Visible to owners only.
The backend PUT endpoints already exist.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:52:12 +02:00
Stefan Hacker 2ef186e262 feat: choose list/calendar/address book when creating (write permission only)
- ContactsView: address book selection in the contact dialog (hidden
  when only one writable book exists). New-contact button disabled
  when there is none.
- TasksView: same for task lists.
- CalendarView: writableCalendars (own + write shares) replaces
  ownCalendars in the event dialog and the import selection. The
  selector only appears from 2 entries on.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:44:44 +02:00
Stefan Hacker 4d67819cac feat: first/last name, shared lists show the owner
Backend:
- User.first_name / User.last_name (nullable, auto-migrate adds them);
  full_name/display_name as properties + in to_dict
- Added the TaskList.owner relationship (it was missing, so shared
  lists were not resolved correctly for the recipient)
- /auth/me GET + PUT (edit profile: first name, last name, e-mail)
- /users/search now also matches on first/last name and returns
  full_name/display_name
- list_tasklists/list_calendars/list_addressbooks return
  owner_full_name and owner_display_name

Frontend:
- Sidebars for contacts/calendar/tasks: "(shared by <full name>)"
  with fallback to the username
- The user search popup shows the full name next to the username
- SettingsView: edit first name/last name/e-mail

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:34:22 +02:00
Stefan Hacker e4dd555bd1 feat(tasks): change the permission of existing shares afterwards
A pencil icon next to a share opens an inline editor with a "Read" /
"Read+Write" select (analogous to contacts/calendar).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:26:59 +02:00
Stefan Hacker a21bf6de1b fix(docker): removed the tzdata install - already included in python:3.11-slim
Avoids unnecessary build size (31 packages / 192 MB would otherwise
be pulled in).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:22:42 +02:00
Stefan Hacker 3eb038abd8 feat(tasks): user search when sharing (instead of free text)
Analogous to contacts/calendar: from 2 characters on, suggestions
are shown via /users/search.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:21:14 +02:00
Stefan Hacker 9bb22eb17b feat: admin view of system time + TZ list in README/.env.example
- /api/settings additionally returns timezone, timezone_abbr,
  server_time, ntp_server (all read-only, from config/ENV).
- AdminView shows a new "System time" section with the time zone,
  current server time and NTP server, plus the hint "configured in
  the .env".
- .env.example: list of common TZ values with a link to the full
  IANA list.
- README.md: new "Time zone & NTP" section with a value table.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:19:40 +02:00
Stefan Hacker dca064427e feat(config): TZ + NTP_SERVER in .env with sensible defaults
- .env / .env.example: TZ=Europe/Berlin and NTP_SERVER=ptbtime1.ptb.de
  (the official German time reference, highly available)
- app/__init__.py sets the process-wide time zone early via
  os.environ + tzset
- A lightweight SNTP client (pure sockets, no deps) checks the clock
  offset at startup in a background thread and warns on a deviation >5s
- The Dockerfile installs tzdata and sets ENV TZ=Europe/Berlin as a
  fallback
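The SNTP offset check can be sketched roughly like this, assuming the standard 48-byte SNTP packet layout; names are illustrative and the socket I/O is left out:

```python
import struct

NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and Unix epoch (1970)

def sntp_request_packet() -> bytes:
    # LI=0, VN=3, Mode=3 (client) -> first byte 0x1B, remaining 47 bytes zero
    return b"\x1b" + 47 * b"\0"

def clock_offset(reply: bytes, t_sent: float, t_recv: float) -> float:
    """Local clock offset vs. the server from a 48-byte SNTP reply,
    via the standard ((receive - originate) + (transmit - destination)) / 2."""
    words = struct.unpack("!12I", reply)
    def ts(i):  # NTP timestamp starting at word index i -> Unix seconds
        return words[i] - NTP_DELTA + words[i + 1] / 2**32
    # receive timestamp is words 8-9, transmit timestamp is words 10-11
    return ((ts(8) - t_sent) + (ts(10) - t_recv)) / 2
```

The real client would send the packet over a UDP socket to NTP_SERVER:123 and warn when abs(offset) exceeds 5 seconds.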

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:15:57 +02:00
Stefan Hacker ba3e619963 feat: tasks with CalDAV VTODO sync
New menu item "Tasks" below Contacts.

Backend:
- TaskList + Task + TaskListShare models
- REST API: CRUD, sharing, my-color, import/export (.ics with VTODO, CSV)
- CalDAV: task lists appear in autodiscovery as calendar collections
  with supported-calendar-component-set=VTODO
- PROPFIND/REPORT/GET/PUT/DELETE/PROPPATCH/MKCOL for /dav/<user>/tl-<id>/
- SSE notifications on changes

Frontend:
- TasksView with list sidebar, search, "hide completed"
- Multi-select + bulk delete, status toggle via checkbox
- Editor with title/description/due date/priority/status/progress
- Sharing, personal color, import/export

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:07:06 +02:00
Stefan Hacker 2ce088e96b feat: import/export for contacts and calendars + bulk delete for contacts
Contacts:
- Multi-select in the list (checkbox column) with bulk delete
- Export as a combined vCard (.vcf), as a ZIP of individual vCards,
  or as CSV
- Import from vCard (multiple per file possible) or CSV; matched by
  UID, existing contacts are updated

Calendar:
- Export as iCalendar (.ics) or CSV
- Import from .ics or CSV; existing events are updated via UID
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:23:23 +02:00
Stefan Hacker c6241519a6 feat(calendar): hint for password-protected iCal links
Browsers/calendar apps otherwise prompt for username+password - the
username must be left empty.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:13:50 +02:00
Stefan Hacker f6626da114 feat(calendar): multi-select + bulk delete in the list view
Checkbox column plus an "All" header checkbox. The bulk action
deletes the selected events after confirmation; read-only entries
are skipped.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:11:12 +02:00
Stefan Hacker e96c84b5f7 feat(ui): browser title "Mini-Cloud - <username>" + cloud favicon
The title reacts to login/logout. The favicon is the cloud from
the sidebar (pi-cloud style).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 11:05:51 +02:00
Stefan Hacker 1eba5d0adc revert(contacts): remove the title field again, salutation only (Herr/Frau/Divers)
Avoids sync problems caused by the composite PREFIX.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:52:23 +02:00
Stefan Hacker 655b789e06 feat(contacts): salutation + title as separate dropdowns
Salutation: Herr/Frau/Divers (fixed); title: Dr./Prof./Dipl.-Ing./... (editable).
On save, both are combined into the vCard PREFIX; on load they are
split again.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:37:41 +02:00
Stefan Hacker 50df055794 feat(contacts): salutation as a dropdown (Herr/Frau/Divers/Dr./Prof.)
editable stays enabled, so custom values remain possible.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:35:59 +02:00
Stefan Hacker 848e4b9b0f fix(contacts): inputs in .field-row fill their container, no more overlap
Salutation/suffix/ZIP etc. had max-width containers, but the
InputText inside kept its default width and overflowed. A global
CSS rule now makes every input/select fill its field container.
field-row wraps on narrow dialogs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:32:17 +02:00
Stefan Hacker e02c4f97c1 feat(calendar): live refresh via CalDAV, day-click navigation, list view
- caldav.py sends SSE notifications on event PUT/DELETE and calendar
  deletion, so the web UI also reacts immediately to changes coming
  from DAVx5.
- FullCalendar navLinks: clicking a day number in the month grid
  switches to the day view.
- New list view with full-text search, date range, calendar filter,
  sorting by date/title and a delete button per row.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:28:44 +02:00
Stefan Hacker 10a1dec448 fix(calendar): don't range-filter recurring events
The master event of a series often lies before the visible range -
the FullCalendar RRULE plugin still needs it for expansion.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:22:24 +02:00
Stefan Hacker b398d6d800 fix: CalDAV routes delegate ab-N URLs to CardDAV (delete/modify)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 09:16:39 +02:00
Stefan Hacker b2567d379c fix: CardDAV changes trigger an SSE refresh in the web UI
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 03:56:52 +02:00
Stefan Hacker 1762437528 fix(dav): delegate REPORT on calendar URLs to the CalDAV handler
The CardDAV route /<username>/<ab_part>/ intercepted REPORT on
calendar URLs (e.g. /dav/Adam/cal-1/) with a 404, because 'cal-1'
doesn't start with 'ab-'. DAVx5 got a 404 for its calendar-query
and flagged the EVENTS sync as a hard error. Fix analogous to
PROPFIND/OPTIONS: if ab_part is not ab-*, delegate to the CalDAV
REPORT handler.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 03:48:26 +02:00
Stefan Hacker 35535fb84b fix(dav): the DAV header now also advertises 'addressbook'
DAVx5 registers services based on the DAV response header. Without
'addressbook' in the header, CardDAV was ignored during
auto-discovery even though addressbook-home-set was reported
correctly. This explains why only the caldav service was created
for Adam.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 03:38:04 +02:00
Stefan Hacker 8772e02410 fix(dav): principal Depth 1 no longer returns sub-containers
The recently introduced sub-containers (calendars/, addressbooks/)
in PROPFIND Depth 1 on /dav/<user>/ were counted by DAVx5 as empty
calendars (DEFAULT_TASK_CALENDAR_NAME phantom entries). Since the
CardDAV route now delegates correctly to the home-set handler, it
is enough for the principal to return only itself - clients follow
the home sets.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 03:32:22 +02:00
Stefan Hacker 0ef480858e fix(dav): CardDAV route intercepted PROPFIND on /dav/<user>/calendars/
In Flask, the CardDAV route /<username>/<ab_part>/ is more specific
than the CalDAV handler's generic /<path:subpath>, so it also
intercepted /dav/<user>/calendars/ - with a 404, because
'calendars' doesn't start with 'ab-'. Result: DAVx5 got a 404 on
the home set and showed no entries anymore.

Fix: if ab_part does not start with 'ab-', delegate to the CalDAV
PROPFIND/OPTIONS instead of returning 404.
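The delegation rule can be sketched as a tiny dispatcher (the handlers are stand-ins; the real code forwards to the Flask view functions):

```python
def carddav_dispatch(ab_part: str, carddav_handler, caldav_handler):
    """Route a /<username>/<ab_part>/ request to the right DAV handler.

    Flask matches this route before the CalDAV catch-all, so the
    CardDAV view itself must forward anything that is not an ab-*
    collection instead of returning 404.
    """
    if ab_part.startswith("ab-"):
        return carddav_handler(ab_part)
    # 'calendars', 'cal-1', 'addressbooks', ... all belong to CalDAV
    return caldav_handler(ab_part)
```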

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 03:25:46 +02:00
Stefan Hacker 58ba130cd9 feat: password manager multi-select + bulk delete
Checkbox per entry, "Select all" at the top, and a red delete
button with a count. Confirmation prompt before deleting.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 16:08:18 +02:00
Stefan Hacker 230c83f124 fix(dav): principal PROPFIND returns calendars/ + addressbooks/ containers at Depth 1
DAVx5 needed child containers under /dav/<user>/ - otherwise the
lists stayed empty after a refresh. The home sets remain separate
(calendar-home-set vs addressbook-home-set), but the principal now
lists both sub-containers explicitly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 15:33:03 +02:00
Stefan Hacker 24a6015841 fix: separate CalDAV/CardDAV home sets + UI URLs without /dav/
Calendars and address books shared the same home set
(/dav/<user>/). On a Depth-1 PROPFIND, DAVx5 listed both
collection types and, lacking a known resourcetype, showed them
as "DEFAULT_TASK_CALENDAR_NAME" tiles.

Solution:
* calendar-home-set points to /dav/<user>/calendars/
* addressbook-home-set points to /dav/<user>/addressbooks/
* Both paths are dedicated container collections - PROPFIND Depth 1
  returns only the matching type
* /dav/<user>/ itself no longer returns children at Depth 1;
  clients follow the home sets
* The concrete URLs cal-<id> / ab-<id> still live under
  /dav/<user>/ (no breaking change for existing clients;
  only the discovery URL changes)

Frontend:
CalendarView + ContactsView now show only the hostname as the
auto-discovery URL - PROPFIND on / works now. The direct URL
remains the full /dav/<user>/cal-<id> or ab-<id> for clients
that need it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 15:22:29 +02:00
Stefan Hacker 9c102823e4 feat: contacts with Outlook fields + CardDAV server + sharing
Complete contacts overhaul, analogous to the calendar expansion.

Backend model:
* AddressBook: color (per book), plus a per-user color via
  AddressBookShare.color, as with CalendarShare.
* Contact: full Outlook-like structure - prefix/first/middle/
  last/suffix, display_name, nickname, organization, department,
  job_title, birthday, anniversary, notes, photo, plus JSON
  columns for multi-valued fields (emails, phones, addresses
  with all address parts, websites, impp, categories).

Backend API:
* REST CRUD accepts the new fields and generates vCard 3.0 as
  the source of truth for CardDAV. Full vCard parser + builder
  with escape/unescape, TYPE parameters, line folding.
* New endpoint PUT /addressbooks/<id>/my-color - a personal
  color per book without affecting the owner.
* SSE events of type 'addressbook' to the owner + all share
  recipients on every change.
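The vCard line folding mentioned above can be sketched as follows, assuming RFC-style folding at 75 characters (the spec counts octets; ASCII is assumed here, and the names are illustrative):

```python
def fold_line(line: str, limit: int = 75) -> str:
    """Fold one vCard content line: wrap at `limit` characters, each
    continuation line starting with a single space."""
    out, rest = [], line
    while len(rest) > limit:
        out.append(rest[:limit])
        rest = " " + rest[limit:]   # leading space marks the continuation
    out.append(rest)
    return "\r\n".join(out)

def unfold(text: str) -> str:
    """Reverse folding: drop CRLF followed by space or tab."""
    return text.replace("\r\n ", "").replace("\r\n\t", "")
```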

CardDAV server (backend/app/dav/carddav.py):
* Full discovery via the principal - addressbook-home-set is
  announced alongside calendar-home-set.
* PROPFIND/REPORT/GET/PUT/DELETE/MKCOL for
  /dav/<user>/ab-<id>/ and /<...>/{uid}.vcf
* addressbook-query + addressbook-multiget REPORTs
* ETag-based conflict checking via If-Match/If-None-Match

Frontend (ContactsView.vue):
* Completely new editor with four tabs: General (name, org),
  Communication (emails/phones/websites/IMPP, dynamic),
  Addresses (multiple, with all parts), Details (birthday,
  anniversary, categories, notes).
* Avatar with photo picker or an initials color circle.
* Calendar sharing flow adopted 1:1: autocomplete user search,
  share list with a pencil to edit and a trash can to remove,
  per-user color, a CardDAV URL info block per address book,
  live refresh via SSE.
* Search covers display name, e-mail and company.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 15:16:01 +02:00
Stefan Hacker fbf10197d7 fix: CalDAV calendar-query returns only the requested props
Previously, the full calendar-data was always included, even when
the client only asked for getetag. DAVx5 does a two-stage sync:
first a calendar-query for ETags, then a multiget for the
new/changed events. Delivering too much server-side breaks that
flow - the client thinks it has everything and skips the second
stage, but the events never land in the Android calendar DB.

Now: calendar-query checks whether <c:calendar-data/> is among the
requested props and responds accordingly.
calendar-multiget still always returns the full data.
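The requested-props check can be sketched with ElementTree (wants_calendar_data is a hypothetical name for the helper):

```python
import xml.etree.ElementTree as ET

CAL_NS = "urn:ietf:params:xml:ns:caldav"

def wants_calendar_data(report_xml: str) -> bool:
    """True if the calendar-query REPORT body actually asked for
    <c:calendar-data/>; otherwise only the requested props (e.g.
    getetag) should be returned."""
    root = ET.fromstring(report_xml)
    return root.find(f".//{{{CAL_NS}}}calendar-data") is not None
```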

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 14:31:53 +02:00
Stefan Hacker 0edd41e46a fix: CalDAV REPORT time-range - 500 when end is missing
With calendar-query, DAVx5 often sends only <time-range start=.../>
without an end. The code then blindly filtered
CalendarEvent.dtstart < None, which SQLAlchemy aborted with a
TypeError - result: HTTP 500, the sync fails completely.

Two corrections:
* the end filter is only applied when end is actually present
* the time-range parser strips tzinfo, so comparisons against the
  tz-naive DB columns don't raise an exception
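Both corrections can be sketched in plain Python; the SQLAlchemy query is replaced by a list here, and the names are illustrative:

```python
from datetime import datetime, timezone

def parse_bound(value):
    """Parse an iCalendar UTC stamp like 20260412T120000Z, or None."""
    if not value:
        return None   # <time-range start=.../> without end is legal
    dt = datetime.strptime(value, "%Y%m%dT%H%M%SZ").replace(tzinfo=timezone.utc)
    return dt.replace(tzinfo=None)   # DB columns are tz-naive

def time_range_filter(starts, start=None, end=None):
    """Filter naive dtstart values, applying each bound only if present."""
    if start is not None:
        starts = [s for s in starts if s >= start]
    if end is not None:   # the old code compared against None here -> TypeError
        starts = [s for s in starts if s < end]
    return starts
```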

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 14:21:56 +02:00
Stefan Hacker e7f469f477 fix: CalDAV HEAD on events + PROPPATCH on calendars
* The GET route now also accepts HEAD - some clients check a
  resource's existence via HEAD before sending GET.
* New PROPPATCH route on the calendar collection: recognizes
  calendar-color + displayname and persists both. Other
  properties are acknowledged as "applied" so that DAVx5 and
  Apple Calendar aren't disappointed.

This should make the 500 errors during sync disappear. If not,
please post the server or DAVx5 log.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 14:18:53 +02:00
Stefan Hacker 189aa18be8 fix: PROPFIND response href matches the request URL
Previously, the href in the response was always /dav/, even when
DAVx5 did a PROPFIND on / or /.well-known/caldav. That can confuse
clients - they expect the response path to match the requested
path. current-user-principal still correctly points to /dav/Adam/.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 14:09:01 +02:00
Stefan Hacker 39e68eee6a fix: accept PROPFIND/OPTIONS on / (root) - DAVx5 starts there
During account setup, DAVx5 first does a PROPFIND on / before
trying /.well-known/caldav. The server answered with 405
Method Not Allowed (because / was only registered for the SPA's
GET), whereupon DAVx5 dismissed the whole server as "not DAV".

Now: PROPFIND and OPTIONS on / are delegated to the DAV handlers
(same behavior as on /dav/). GET/HEAD on / still goes to the SPA
unchanged.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 14:04:38 +02:00
Stefan Hacker 3c762e1476 fix: well-known DAV - OPTIONS now correctly returns the DAV header
Despite an explicit OPTIONS route, Flask generated its automatic
OPTIONS response, so the DAV header was missing. DAVx5 then sees no
calendar access and rejects the server.

Consolidated into a single handler with method-based dispatch and
provide_automatic_options=False so Flask no longer interferes.
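The pattern can be sketched as follows (route body and header values are assumptions, not the project's actual handler; `provide_automatic_options` is a real Flask parameter):

```python
# Sketch: one handler owns OPTIONS itself, so the DAV capability header is
# always present. provide_automatic_options=False stops Flask from
# synthesizing its own OPTIONS response for the route.
from flask import Flask, Response, request

app = Flask(__name__)

def well_known_caldav():
    if request.method == "OPTIONS":
        resp = Response(status=204)
        resp.headers["DAV"] = "1, 2, 3, calendar-access"
        resp.headers["Allow"] = "OPTIONS, PROPFIND, GET, HEAD"
        return resp
    # PROPFIND/GET/HEAD would be dispatched to the DAV handlers here
    return Response(status=301, headers={"Location": "/dav/"})

app.add_url_rule(
    "/.well-known/caldav",
    view_func=well_known_caldav,
    methods=["OPTIONS", "PROPFIND", "GET", "HEAD"],
    provide_automatic_options=False,
)
```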

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:53:24 +02:00
Stefan Hacker 3f0d823dbf fix: CalDAV for DAVx5 - dispatch well-known internally, more properties
Changes for better DAVx5 support:

* /.well-known/caldav now responds directly to PROPFIND/OPTIONS
  without redirect detours. GET/HEAD still redirect to /dav/
  as a visual fallback.
* strict_slashes disabled app-wide: /dav and /dav/ are equivalent,
  as are the sub-paths. DAVx5 uses both interchangeably.
* Every DAV response now carries the DAV header (1, 2, 3,
  calendar-access), not just OPTIONS.
* The calendar response now includes supported-report-set with
  calendar-query + calendar-multiget (DAVx5 checks for it).
* current-user-privilege-set is filled with concrete privileges
  (read, write, write-properties, write-content, bind, unbind)
  instead of being empty.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:50:50 +02:00
Stefan Hacker c4b381c5e9 fix: CalDAV autodiscovery - XML was doubly nested
Property elements were created under a container with the same tag,
e.g.:
  <current-user-principal>
    <current-user-principal>    <!-- wrong, duplicated -->
      <href>/dav/adam/</href>
    </current-user-principal>
  </current-user-principal>

Clients such as DAVx5 and Thunderbird then fail to detect the
principal and report "No CalDAV service found". The XML generation
was restructured - the response helpers now take a populate_prop
callback that writes the actual property children directly into the
<prop> element.

Additionally:
* /.well-known/caldav and /carddav now also accept PROPFIND,
  OPTIONS, HEAD (some clients keep the original method on the
  first request).
* The calendar response includes current-user-privilege-set (empty,
  as a signal that the client does not need to do ACL-dependent
  checks).
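The populate_prop pattern can be sketched with ElementTree (helper and callback names are assumptions, not the project's code):

```python
# Sketch: the helper builds the DAV:propstat skeleton once and hands the
# <prop> element to a callback, which appends the real property children -
# so the wrapper tag is never duplicated.
import xml.etree.ElementTree as ET

DAV = "DAV:"

def propfind_response(href, populate_prop):
    resp = ET.Element(f"{{{DAV}}}response")
    ET.SubElement(resp, f"{{{DAV}}}href").text = href
    propstat = ET.SubElement(resp, f"{{{DAV}}}propstat")
    prop = ET.SubElement(propstat, f"{{{DAV}}}prop")
    populate_prop(prop)  # callback fills in the actual properties
    ET.SubElement(propstat, f"{{{DAV}}}status").text = "HTTP/1.1 200 OK"
    return resp

def principal_props(prop):
    cup = ET.SubElement(prop, f"{{{DAV}}}current-user-principal")
    ET.SubElement(cup, f"{{{DAV}}}href").text = "/dav/adam/"

xml = ET.tostring(propfind_response("/dav/", principal_props),
                  encoding="unicode")
```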

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:44:44 +02:00
Stefan Hacker e85338761d feat: personal color for shared calendars
CalendarShare gets a color column. In the calendar menu, every user
can set their own display color for a calendar shared with them,
without changing the color for the owner or other share recipients.

* Owner: the color changes the calendar directly (as before).
* Share recipient: the color is stored in CalendarShare.color and
  served only to them (list_calendars injects it into 'color'; the
  owner's color stays in 'owner_color' as a reference).

New endpoint: PUT /calendars/<id>/my-color.
UI hint: "Only for your view - <owner> keeps their color".

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:14:45 +02:00
Stefan Hacker 2170f4a7b1 feat: calendar view updates live via SSE
Backend:
New event type 'calendar' in the broadcaster. Emitted on event CRUD,
recurrence exceptions, adding/removing shares and deleting entire
calendars. Recipients: the owner plus all users with a CalendarShare
on the affected calendar.

Frontend:
On mount, CalendarView opens an EventSource to /api/sync/events and
reloads the calendar list + events on every 'calendar' event
(debounced by 300ms). Participants see changes in near real time -
no more F5 needed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:10:54 +02:00
Stefan Hacker ce4faedd88 feat: show CalDAV URLs in the calendar menu
The three-dot menu of each calendar now shows an info block with its
CalDAV URLs:

* auto-discovery URL for Thunderbird / DAVx5 / Apple Calendar
* direct URL for this specific calendar (e.g. Outlook
  CalDAV-Synchronizer)
* a short note on which client takes which URL

Each URL has a copy icon. This complements the existing iCal link
with bidirectional sync via CalDAV.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:08:11 +02:00
Stefan Hacker fda9e685a9 feat: edit calendar shares via pencil button
Analogous to file shares: the pencil next to the trash can in the
share list turns the row into an inline edit row with a permission
dropdown + check/X. Saving uses the same POST /share endpoint that
handles the initial share - it detects the existing user and only
updates the permission.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:04:48 +02:00
Stefan Hacker c73be6fac5 fix: startup crash - removed doubly defined Calendar.owner relation
User.calendars already has backref='owner'; the additionally added
Calendar.owner collided with it and SQLAlchemy refused to initialize
the mappers ("Error creating backref 'owner'..."). That took down
all auth endpoints.

Now just a comment; the backref does the job.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 13:00:00 +02:00
Stefan Hacker a143325bbe feat: calendar - autocomplete + private flag + share list + bugfix
Sharing fix:
The Calendar model had no owner relation to User - list_calendars
crashed when listing shared calendars (c.owner.username ->
AttributeError). Now uses an explicit foreign_keys relationship.

User autocomplete:
"Share calendar" now uses /users/search as with files. Typing 2+
characters shows a dropdown of matching usernames. Clicking one
fills in the name.

Existing shares are shown in the menu with a trash can for removal.

Private flag for events:
CalendarEvent gets an is_private column. Checkbox in the event
dialog: "🔒 Private (participants only see the time block)".

Redaction applies in three places:
* GET /events: non-owners see summary="Private"; description and
  location = null. The time window stays fully visible.
* iCal export (/ical/<token>): private events are emitted with
  CLASS:PRIVATE and SUMMARY/DESCRIPTION/LOCATION are stripped.
* CalDAV: currently only a user's own calendars are exported, so no
  redaction is needed there yet. It will come with share support.

The owner, of course, sees all details of their private event in
their own view.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:56:25 +02:00
Stefan Hacker 5797a7b738 feat: CalDAV server (RFC 4791 subset) for native client sync
Full CalDAV implementation under /dav/ - Thunderbird, DAVx5, Apple
Calendar and Outlook (CalDAV-Synchronizer) can simply sign in via
HTTP Basic Auth with their Mini-Cloud account and synchronize their
calendars.

Supported methods:
* OPTIONS      - DAV capabilities
* PROPFIND     - discovery, principal, calendar home, calendars,
                 event listings (Depth 0/1 honored)
* REPORT       - calendar-query + calendar-multiget with an
                 optional time-range filter (<time-range>)
* GET          - a single event as VCALENDAR
* PUT          - create/update an event (with ETag check via
                 If-Match + If-None-Match)
* DELETE       - an event or an entire calendar
* MKCALENDAR   - create a new calendar from the client

The iCal parser handles SUMMARY, DESCRIPTION, LOCATION, DTSTART,
DTEND, RRULE, EXDATE - including line folding (RFC 5545). All-day
events (VALUE=DATE) are detected correctly.

ETags are based on the updated_at timestamp and are returned with
each PUT response so clients can detect conflicts.

nginx.example.conf: /dav/ with proxy_request_buffering off for
larger PUTs, plus forwarding of the .well-known URLs.

README: a dedicated "CalDAV access" section with a per-client table.
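RFC 5545 line unfolding - a physical line starting with a space or tab continues the previous line - can be sketched like this (a minimal illustration, not the project's actual parser):

```python
# Sketch of RFC 5545 line unfolding: folded lines are rejoined by stripping
# the CRLF plus the single leading whitespace character of the continuation.
def unfold_ical(text: str) -> list[str]:
    lines: list[str] = []
    for raw in text.replace("\r\n", "\n").split("\n"):
        if raw[:1] in (" ", "\t") and lines:
            lines[-1] += raw[1:]   # continuation: drop the fold marker only
        elif raw:
            lines.append(raw)
    return lines
```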

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:51:21 +02:00
Stefan Hacker cbb2786130 fix: calendar - always render events as bars instead of dot + time
eventDisplay: 'block' forces FullCalendar to render timed events in
the month view as colored bars too, instead of a dot with a time
label. An event created via the "New event" button now looks the
same as one created by clicking on a day.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:44:28 +02:00
Stefan Hacker c1b05e2525 feat: recurring-event editing: just this occurrence or the whole series
Clicking a recurring event first opens a dialog: "Only this
occurrence" or "Whole series".

* Series: edits the master as before
* Only this one: adds an EXDATE for the clicked date to the master
  and creates a standalone replacement event with the edited data

Backend:
* CalendarEvent.exdates stores exception dates comma-separated
* POST /events/<id>/exception adds the EXDATE and optionally creates
  the replacement event with a fresh UID
* _build_vevent now writes EXDATE lines into the ical_data, so
  CalDAV clients will see the exceptions too

Frontend:
* the FullCalendar rrule plugin receives the exdate list and hides
  the skipped days
* drag & drop still moves the whole series (shortcut - to move a
  single occurrence, click the event and edit it)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:41:35 +02:00
Stefan Hacker ddd8f57e69 feat: calendar events show icons + start-end time
* 📅 icon on all-day events
* 🔁 icon on recurring events
* shows "09:00-10:30" instead of just "09:00" in week/day view
* mouseover tooltip with all event info, including location and
  description

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:38:44 +02:00
Stefan Hacker c5284f57e0 feat: calendar with FullCalendar - week/month/day, drag & drop, recurrence
Calendar UI rebuilt from scratch with FullCalendar:
* three views: month, week, day - switchable via the toolbar
* drag & drop: move events between days
* resize: drag the edge to change an event's duration
* sidebar with active calendars (checkbox to show/hide)
* German localization, week starts on Monday, week numbers
* today marker + now line in week/day views

Event editing:
* title, location, description, time range (or all-day)
* recurrence editor: daily, weekly (with weekdays), monthly
  (including "every 2nd Wednesday"), yearly - each with interval,
  end date or occurrence count
* the RRULE field (RFC 5545) is generated and rendered in the
  calendar by the rrule plugin

Backend:
* CalendarEvent: added description + location columns
* Calendar: ical_password_hash for password-protected subscription
  links
* /calendars/<id>/ical-link supports password + clear_password
* DELETE /calendars/<id>/ical-link to revoke it
* ical_export enforces HTTP Basic Auth when a password is set -
  DAVx5, Apple Calendar, Thunderbird understand that out of the box

Frontend deps: @fullcalendar/{core,daygrid,timegrid,interaction,
rrule,vue3}, rrule - roughly 150KB of bundle overhead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:32:59 +02:00
Stefan Hacker 04bc3f80ec feat: edit existing user shares via pencil button
Next to the trash can there is now a pencil icon in the share
dialog: clicking it turns the row into an inline edit row with a
permission dropdown + reshare checkbox + save/cancel buttons.
Saving calls POST /permissions with the user_id - the backend
detects the existing share and updates it, instead of having to
delete and recreate it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:14:03 +02:00
Stefan Hacker 9b135e42b7 feat: live share changes + "folder no longer available" handling
Backend:
set_permission and remove_permission now fire an SSE event of type
'permission' to the target user + owner + other share recipients.
This refreshes the file lists of everyone involved in real time -
including the affected user who is just losing access.

Frontend:
FilesView wraps loadFiles in safeLoadCurrentFolder(). On 403/404 a
toast appears ("This folder was deleted or the share was removed")
and after 600ms the view navigates back to the root. Applies to
direct navigation, folder changes and SSE-triggered auto-reloads.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 12:00:15 +02:00
Stefan Hacker 9369c851a0 feat: user shares - reshare right + read access is enforced
New permission model for user shares:

* FilePermission gets two new columns:
  - can_reshare (bool): may this user pass the share on?
  - granted_by (user_id): who created this share?

* set_permission / create_share_link now also allow non-owners,
  provided they have can_reshare. The rules:
  - read + reshare -> may only reshare read access
  - write + reshare -> may reshare read OR write access
  - admin can only be granted by the owner
  - every re-sharer may in turn pass on can_reshare

* remove_permission: the owner can remove all shares; re-sharers
  only the ones they created themselves.

* get_permissions: the owner sees all; re-sharers only self-created
  ones.

* list_files returns my_permission + my_can_reshare per entry - the
  frontend can show/hide rename/delete/share buttons precisely
  instead of blindly showing them all.

Frontend:
* rename/delete buttons only with write access
* share button only for the owner or re-sharers
* "may reshare" checkbox next to the permission dropdown in the dialog
* dropdown options filtered by the user's own level (a re-sharer
  sees no levels above their own)
* hint text "You have X - you can reshare at most X"
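The reshare rules above can be sketched as a small function (names and the level set are assumptions, not the project's code):

```python
# Sketch of the reshare cap: nobody reshares above their own level, and
# 'admin' can only ever be granted by the owner.
LEVELS = {"read": 1, "write": 2, "admin": 3}

def grantable_levels(own_level: str, is_owner: bool) -> list[str]:
    if is_owner:
        return ["read", "write", "admin"]
    cap = min(LEVELS[own_level], LEVELS["write"])  # admin is owner-only
    return [name for name, rank in LEVELS.items() if rank <= cap]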

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 11:54:36 +02:00
Stefan Hacker 035923834b docs: README explains the reach of the file lock in plain language
New section "What the lock really can (and can't) do" with a table
and an Adam/Anna example scenario. Shows non-experts that the lock
protects the web GUI, client and uploads, but not Windows
Explorer - and that the conflict copy is the safety net.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 11:36:07 +02:00
Stefan Hacker 23563622f8 feat: lock badge + smart context menu in the client's local file view
The client's local file list now shows a 🔒 badge with the username
per file when it is checked out (matching the server view + web
GUI). browse_sync_folder pulls the server tree on every call and
correlates the local file with its file-lock status via a journal
lookup (or .cloud metadata).

The right-click menu now reacts to the lock status:
- free              -> "Check out (lock)"
- own/foreign lock  -> "Unlock (check in)"
New Tauri command lock_file_cmd for locking without opening.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 11:32:01 +02:00
Stefan Hacker 5afb87c9cd fix: make the SSE reload in FilesView a bit more robust
On connection establishment (open event) an initial reload is now
triggered, so changes between the last render and the SSE
connection are not lost. Applies equally to own and shared folders
(same FilesView component).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 11:21:10 +02:00
Stefan Hacker 8c7a14c38f fix: server view updates lock status immediately via SSE
The client's server file list used to wait for a completed sync run
before lock changes by other users became visible. Events without a
file download (pure lock/unlock events) sometimes never reached the
UI at all.

The frontend now listens directly to the sse-event from the backend
and calls loadFileTree + loadLocalFiles - lock icons in the server
tree appear and disappear in real time.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 11:18:34 +02:00
Stefan Hacker 6c9daa5783 feat: offline files are checked out again when reopened
Previously the client only locked on first open (.cloud placeholder
-> download). After checking in and double-clicking again, the file
stayed unlocked because the open path was missing.

The new Tauri command open_offline_file resolves the server file ID
via the sync journal, locks on the server and opens locally with
the default app. In the local file browser:
- double-clicking an already-offline file now checks it out and
  opens it (previously: no reaction)
- the right-click menu gains "Open (check out)" for offline files

As before, the lock triggers notify_file_change -> SSE -> the web
UI updates the lock status immediately.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 11:09:06 +02:00
Stefan Hacker 88ab3c9b8d fix: save endpoints fire SSE event - web edits now sync
/files/<id>/save (text/HTML/spreadsheet) and the OnlyOffice
callback updated content + checksum but never called
notify_file_change. The client therefore got no SSE trigger and
only noticed the new server version on the next 30s fallback
sync - if at all.

Now: both endpoints emit 'updated' to the owner + share recipients;
desktop and web clients react immediately.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:56:51 +02:00
Stefan Hacker e3cf7b1b64 fix: SSE broadcaster needs a single worker - otherwise events are lost between processes
With 2 Gunicorn workers, the in-memory broadcaster runs in two
separate processes. If a lock request lands on worker A and the
recipient's SSE connection on worker B, the event never reaches the
client - which is exactly why the live refresh on shared folders
was unreliable.

Now: 1 worker with 32 threads. Threads share memory, so the
broadcaster is the same for all connections. More throughput would
require Redis Pub/Sub - single-process mode is enough here.
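A minimal sketch of such a thread-shared in-memory broadcaster (assumed structure, not the project's code) - it only works because all listeners live in one process:

```python
# With multiple worker processes, each would hold its own `_listeners`
# list, and events published in one process never reach the others.
import queue
import threading

class Broadcaster:
    def __init__(self):
        self._lock = threading.Lock()
        self._listeners: list[queue.Queue] = []

    def subscribe(self) -> queue.Queue:
        q = queue.Queue()
        with self._lock:
            self._listeners.append(q)
        return q

    def publish(self, event: dict) -> None:
        # fan out to every SSE connection in this process
        with self._lock:
            for q in self._listeners:
                q.put(event)
```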

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:51:49 +02:00
Stefan Hacker 3af2bc3312 fix: SSE blocks gunicorn workers - switch to gthread
With 4 synchronous workers, every SSE connection permanently
occupied a whole worker. 4 open browser tabs -> all other requests
blocked -> "loading files takes forever".

Solution: the gthread worker class with 2 workers x 16 threads = 32
concurrent slots. Long-running SSE streams each occupy only one
thread; regular requests run unaffected.

nginx.example.conf: separate location block for /api/sync/events
with proxy_buffering off and a 24h read timeout, so events get
through immediately and the connection doesn't drop.
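The nginx side can be sketched like this (the directives are standard nginx; the upstream address and exact values are assumptions):

```nginx
# Sketch of the SSE location block described above (values assumed).
location /api/sync/events {
    proxy_pass http://127.0.0.1:8000;
    proxy_buffering off;          # flush each event immediately
    proxy_read_timeout 24h;       # keep the long-lived stream open
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
```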

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:33:02 +02:00
Stefan Hacker 5f905b4925 fix: sync error "error decoding response body" + server edits
Three problems in one:

1. create_folder/get_sync_tree parsed the response as JSON even on
   HTTP errors. On 401/409/etc. the user saw "error decoding
   response body" instead of the actual error message. The status
   is now checked first; on errors the body text is returned.

2. Without a journal entry and with differing hashes, a conflict
   copy used to be created. For server edits from the web UI (where
   the client had never recorded the file in its journal) that was
   wrong. Nextcloud approach: on first contact the server is
   authoritative - download instead of conflict copy.

3. run_sync_now picks up newly configured sync_paths from the
   state, so manual syncs also work right after add_sync_path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:25:01 +02:00
Stefan Hacker 28fb1c47c2 feat: web GUI live refresh via SSE
On mount, FilesView subscribes to the backend's SSE events. Lock/
unlock, create, update or delete by other clients triggers a
debounced reload of the current folder view. EventSource reconnects
automatically and is cleanly closed on unmount.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:21:00 +02:00
Stefan Hacker b33e66cad9 fix: shared folders now actually show their files
list_files filtered child files by owner_id=current_user, so a
shared folder (owned by another user) showed no files. Now the
access permission is checked when entering a folder; own folders
behave as before, and in a shared folder all child files are
listed.

_check_file_access now also walks up the folder tree, so a
permission on an ancestor folder automatically grants access to all
descendants.
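The ancestor walk can be sketched like this (the data model is an assumption: each file knows its parent_id, and permissions map (user_id, file_id) to a level):

```python
# Sketch: walk from the file up to the root; the first matching
# ownership or permission wins, so a share on an ancestor folder
# covers every descendant.
def check_file_access(files, permissions, user_id, file_id):
    current = files[file_id]
    while current is not None:
        if current["owner_id"] == user_id:
            return "owner"
        level = permissions.get((user_id, current["id"]))
        if level:
            return level
        parent_id = current["parent_id"]
        current = files[parent_id] if parent_id is not None else None
    return None  # no ancestor grants access
```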

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:13:35 +02:00
Stefan Hacker c63a52629d fix: lock/unlock buttons in FilesView - double /api prefix
apiClient has baseURL '/api' - the URL must not start with /api
again, otherwise it becomes /api/api/... and the request goes
nowhere.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 10:01:43 +02:00
Stefan Hacker 5ba007ef51 fix: borrow checker in the background sync thread
Temporary drop order: the MutexGuard held a reference to a state
binding that was already dropped at the end of the block. An
intermediate variable forces the MutexGuard to drop before the
binding.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 09:57:06 +02:00
Stefan Hacker 6aad986d78 fix: PDFs in the preview iframe instead of a new tab
The download endpoint now supports ?inline=1, which sets
Content-Disposition to inline instead of attachment. The PDF and
image previews use this parameter, so the browser renders the PDF
in the preview iframe instead of triggering a download. Regular
download buttons are unchanged.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 09:55:40 +02:00
Stefan Hacker 50385faa02 feat: real-time sync via SSE + journal-based 3-way comparison
Desktop client completely reworked along the lines of Nextcloud:
- A persistent SQLite journal (journal.rs) stores the last known
  state per file - it survives client restarts (the main bug, now
  fixed).
- engine.rs rewritten: 3-way comparison local <-> journal <-> server
  with proper conflict copies (including username + timestamp).
- Deletion propagation: locally deleted files land in the owner's
  server trash (also for shares). Files deleted on the server are
  removed locally.
- Lock flow repaired: fresh token on every call, error feedback.

Real-time sync:
- Backend: SSE endpoint /api/sync/events with an in-memory
  broadcaster. Events on create/update/delete/lock/unlock,
  delivered to the owner plus all users with a share permission.
- Client: persistent SSE connection with auto-reconnect. Events
  trigger an immediate sync (<100ms). 30s polling remains as a
  fallback for network outages.

Other fixes:
- /api/sync/tree filters is_trashed=False (the trash is no longer
  synced to clients).
- Web GUI: lock/unlock buttons per file; admins may force-release
  foreign locks. Rename/delete disabled under a foreign lock.
- Lock check in the backend on PUT/DELETE (423 Locked response).
- Background sync is started only once per process and re-reads
  sync_paths every iteration - add/remove takes effect immediately,
  no client restart needed.
- Watchers are managed individually per sync path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 09:50:44 +02:00
Stefan Hacker e65d330d1d docs: README file locking table updated
- Feature description adjusted (manual unlocking, auto-unlock)
- New file locking table with all scenarios
  (open, unlock, forget, quit client, admin)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 03:06:40 +02:00
Stefan Hacker 2bd8a2e1b5 feat: heartbeat for locks - forgotten locks expire after 15 minutes
If someone forgets to unlock:
- client running -> heartbeat every 60s -> lock stays active
- client closed -> no heartbeat -> lock expires after 15 min
- laptop lid closed -> same effect -> 15 min -> free

Tracking: a locked_files Vec remembers which files we locked.
The heartbeat runs inside the token-refresh thread (heartbeat every
60s, token refresh every 10 min).

Locks are tracked on open and removed on unlock/unmark-offline.
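The server-side expiry rule implied here can be sketched as (field names assumed):

```python
# Sketch: a lock is only honored while its last heartbeat is younger
# than 15 minutes; a silent client therefore frees the file on its own.
from datetime import datetime, timedelta

LOCK_TIMEOUT = timedelta(minutes=15)

def lock_is_active(last_heartbeat: datetime, now: datetime) -> bool:
    return now - last_heartbeat < LOCK_TIMEOUT
```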

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 03:04:28 +02:00
Stefan Hacker 597dafc461 feat: file lock on open + unlock via right-click
When opening a .cloud file:
- download + the file stays local (as before)
- a lock is set on the server (others see "locked by X")
- no auto-unlock - the file stays locked until manually unlocked

Right-clicking offline files in the file browser:
- "Unlock (release for others)" - lifts the lock
- "No longer offline" - restores .cloud + unlocks automatically

Files thus stay locked while you work on them.
When done: right-click -> unlock. Simple and explicit.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 03:03:01 +02:00
Stefan Hacker 0845659c84 refactor: auto-close removed entirely - Nextcloud approach
Opening a .cloud file = download + the file stays a real file (like
Nextcloud). Changes are synced automatically by the watcher.
"No longer offline" via right-click in the file browser -> back to .cloud.

Removed:
- auto-close detection (is_file_in_use, OpenedFile tracking,
  heartbeat, lock/unlock on open)
- lock commands (lock_file_cmd, unlock_file_cmd)
- opened_files HashMap, locked_files Vec
- the is_file_in_use function
- ~100 fewer lines of code

Kept:
- token-refresh thread (every 10 min)
- file-locking API in the backend (still used by the web UI)
- watcher + immediate sync
- mark_offline / unmark_offline commands
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 03:01:02 +02:00
Stefan Hacker 763fd4d563 fix: auto-close detects file activity instead of only file locks
Problem: Notepad and most editors don't hold a file lock.
is_file_in_use() immediately reported "not in use" and cleaned the
file up before the user could edit it.

New approach - three conditions must all hold:
1. at least 30 seconds since opening (grace period)
2. no file lock AND unchanged file size
3. at least 2 minutes since the last change/lock

File activity is tracked:
- size changes -> reset the timer
- file lock active (Office) -> reset the timer
- only after 2 minutes of inactivity -> auto-close

This works for all programs:
- Office (holds a lock): lock disappears -> wait 2 min -> close
- Notepad (no lock): last size change -> 2 min -> close
- quick open+close: the 30s grace period prevents an immediate close

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 02:57:12 +02:00
Stefan Hacker 0714d96668 fix: .cloud placeholders are updated on server changes
Before: the placeholder was only created if it didn't exist. If the
file changed on the server (new size, new checksum), the
placeholder kept the old metadata.

Now: on every sync, the checksum in the placeholder is compared
with the server checksum. On a difference -> rewrite the
placeholder with the current size, checksum and date.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 02:43:09 +02:00
Stefan Hacker b6afc05148 fix: opening .cloud - better error handling + fallback filename
- Filename: first from the JSON "name" field; fallback: strip .cloud from the filename
- All errors are now reported instead of swallowed (download, lock, open)
- open::that errors are returned instead of ignored
- Verbose logging: paths, size, lock status
- Check that the downloaded file exists before opening it

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 02:41:58 +02:00
Stefan Hacker f71103185c fix: borrow checker - clone sync_paths before iterating
Can't iterate &self.sync_paths and call &mut self methods at the
same time. Cloning the list resolves the conflict.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 02:21:51 +02:00
Stefan Hacker eb49a034ed fix: &self -> &mut self for methods that modify known_checksums
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 02:19:09 +02:00
Stefan Hacker 2428dabed7 docs: README sync-logic table + updated features
- Sync-logic table: explains checksum tracking (who changed)
- Features updated: smart sync, conflict detection, auto-unlock,
  start minimized, .cloud handler

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 02:17:59 +02:00
Stefan Hacker 9ede2d6bdb fix: correct sync direction - checksum tracking instead of timestamps
Problem: timestamps were unreliable for determining the sync
direction (a download sets the local mtime to 'now'; timezone
differences). Files marked offline were never updated from the
server.

Solution: a known_checksums HashMap tracks the server checksum from
the last sync. With differing checksums:

| Local changed | Server changed | Action |
|---------------|----------------|--------|
| No | Yes | server -> local (download) |
| Yes | No | local -> server (upload) |
| Yes | Yes | CONFLICT (rename local copy, download server version) |

First sync (no known_checksum): the server always wins (download).
After that, every server checksum is stored.

Affects: sync_virtual, sync_upload_new, sync_full_upload

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 02:15:53 +02:00
Stefan Hacker b3da50e6ce fix: server->client sync + file locking repaired
Server->client sync:
- The server sends timestamps without a timezone (2026-04-11T12:49:24.735436)
- parse_from_rfc3339 needs a timezone -> failed silently
- The client ALWAYS thought it was newer -> upload instead of download
- Fix: parse_server_time() accepts both (with/without timezone)
- Tries RFC3339, then NaiveDateTime with microseconds, then without

File locking:
- open_cloud_file used the API clone from the SyncEngine (possibly a stale token)
- Now uses state.api directly (always the current token after a refresh)
- The lock is set reliably when opening .cloud files
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 02:05:10 +02:00
Stefan Hacker a445256d86 fix: cleaned up all Rust warnings
- unused variables: underscore prefix (_real_path, _had_changes, _file_id)
- dead_code: #[allow(dead_code)] for future methods
  (open_cloud_file, close_cloud_file, get_changes, LockResponse, SyncChangesResponse)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 02:02:53 +02:00
Stefan Hacker 0d1fc67287 fix: add std::path::Path import for is_file_in_use
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:55:56 +02:00
Stefan Hacker b606ec9a4a docs: CHANGELOG.md - complete project history
From the first line of code to the desktop sync client.
9 versions, 70+ commits, all in one day.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:52:43 +02:00
Stefan Hacker 86545ca405 feat: start minimized + no window popup when opening .cloud files
- Double-clicking a .cloud file opens it in the background without
  popping up the client window (that was annoying)
- New setting "Start minimized (directly in the system tray)" as a
  checkbox in the settings area
- Stored in config.json, survives updates
- When enabled: the client starts invisible in the tray, sync runs
  in the background, the window only appears via tray double-click

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:50:09 +02:00
Stefan Hacker e9638cc6ed fix: Lock no longer disappears - token refresh + longer timeout
Problem: the lock vanished after 5 minutes because:
1. The JWT expired after 15 min -> the heartbeat failed silently
2. The server released the lock after 5 min without a heartbeat

Client fix:
- Token refresh every 10 minutes (before the 15-minute expiry)
- Updates the token in the shared API instance
- The heartbeat always uses the current token

Backend fix:
- Lock timeout raised from 5 to 15 minutes
- Enough headroom for network problems or short interruptions

Timeline:
  0s    -> lock + heartbeat every 10s
  600s  -> token refresh
  900s  -> the lock would only expire now (15 min without a heartbeat)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:45:52 +02:00
Stefan Hacker b937351556 feat: Auto-unlock when a file is closed
Problem: after opening a .cloud file, the lock on the server
remained in place even after Word/Excel was closed.

Solution: a background thread checks every 10 seconds whether opened
files are still in use by a process:

Windows: tries to get exclusive write access - if that succeeds, the
  file is no longer in use (Office has released its lock)
Linux/Mac: lsof checks whether a process has the file open

When a file is closed:
1. Changes are uploaded to the server
2. The server-side lock is released
3. The .cloud placeholder is recreated (with the current checksum)
4. The local copy is deleted
5. The UI shows "Closed + unlocked: file.cloud"

Tracking: the opened_files HashMap stores file_id -> path + cloud name
for all files opened via .cloud.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:44:40 +02:00
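The two detection strategies above (exclusive-open probe on Windows, lsof elsewhere) can be sketched in Python. The real implementation is in the Rust client; the function name `is_file_in_use` and the fallback behavior when lsof is missing are assumptions of this sketch:

```python
import os
import subprocess

def is_file_in_use(path: str) -> bool:
    """Best-effort check whether another process still has `path` open.

    Windows: opening for write fails with a sharing violation while
    Office still holds its write lock.
    POSIX: lsof exits 0 when at least one process has the file open.
    """
    if os.name == "nt":
        try:
            fd = os.open(path, os.O_RDWR)
            os.close(fd)
            return False  # open succeeded -> nobody denies write access
        except OSError:
            return True   # sharing violation -> still in use
    try:
        res = subprocess.run(["lsof", "--", path],
                             stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL)
        return res.returncode == 0
    except FileNotFoundError:
        # Assumption: if lsof is not installed, report "not in use"
        # rather than blocking the auto-unlock forever
        return False
```

Note that the Windows probe is heuristic: it only detects programs that hold a deny-write sharing lock (as Office does), not every open handle.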
Stefan Hacker 4673423e2f fix: Sync compares timestamps - don't overwrite server-side changes
Problem: when a file was changed on the server, the client still
overwrote it with the local (older) version. The sync only compared
checksums and never checked which side was newer.

Fix: when the checksums differ, the timestamps are now compared:
- Server newer (updated_at > local modified) -> download from the server
- Local newer (modified > server updated_at) -> upload to the server
- The log shows "Server->Local" or "Local->Server" instead of just "Updated"

Affects all three sync methods:
- sync_virtual (files marked offline)
- sync_upload_new (virtual-mode upload)
- sync_full_upload (full-sync upload)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:42:46 +02:00
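The decision rule from this commit is small enough to state as code. A minimal sketch (the enum and function names are illustrative, not the client's identifiers):

```python
from enum import Enum

class SyncAction(Enum):
    NONE = "in sync"
    DOWNLOAD = "Server->Local"
    UPLOAD = "Local->Server"

def decide_sync_direction(server_checksum: str, local_checksum: str,
                          server_updated_at: float, local_modified: float) -> SyncAction:
    """Equal checksums: nothing to do. Different checksums: the side
    with the newer timestamp wins, as described in the commit."""
    if server_checksum == local_checksum:
        return SyncAction.NONE
    if server_updated_at > local_modified:
        return SyncAction.DOWNLOAD
    return SyncAction.UPLOAD
```

This is exactly why the silent timestamp-parse failure in the later commit was so damaging: with an unparseable server time, the comparison degenerates and one side always "wins".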
Stefan Hacker 11fd11aa45 docs: README - desktop sync client fully documented
- Feature list (virtual files, multi-sync, offline, locking, tray, etc.)
- Terminal-server table (one instance per user)
- Virtual files vs. full sync comparison table
- Settings paths per OS
- Config survives updates

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:30:08 +02:00
Stefan Hacker b653f9657a fix: Single instance per user (terminal-server compatible)
The lock file lives in %APPDATA% (Windows) or ~/.config (Linux) -
which is per user. On terminal servers every user can run their
own instance.

Improvements:
- Checks whether the process from the lock file is still alive (PID check)
  instead of only whether the file exists
- Windows: tasklist /FI "PID eq X"
- Linux: does /proc/PID exist?
- Stale lock files (crashed process) are overwritten
- No .cloud argument + another instance running -> exit immediately
- With a .cloud argument + another instance -> delegate and exit

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:28:39 +02:00
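The PID liveness check can be sketched like this. The commit uses tasklist/​/proc exactly as shown; the `os.kill(pid, 0)` branch is an addition of this sketch for platforms without /proc (e.g. macOS):

```python
import os
import subprocess

def pid_alive(pid: int) -> bool:
    """Does the PID from a stale-looking lock file still refer to a
    live process? Windows: tasklist; Linux: /proc; otherwise kill(0)."""
    if os.name == "nt":
        out = subprocess.run(["tasklist", "/FI", f"PID eq {pid}"],
                             capture_output=True, text=True).stdout
        return str(pid) in out
    if os.path.isdir("/proc"):
        return os.path.exists(f"/proc/{pid}")
    try:
        os.kill(pid, 0)   # signal 0: existence check, sends nothing
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        return True        # exists, but owned by someone else
```

If `pid_alive` returns False for the PID in the lock file, the file is stale and can safely be overwritten, which is the behavior the commit describes.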
Stefan Hacker 4cc8de8a1a fix: Settings persisted (no keyring) + single instance
Config persistence:
- The password is stored base64-encoded in config.json
  (instead of the OS keyring, which doesn't work when cross-compiling)
- The config path is logged on load/save for debugging
- Keyring dependency removed, base64 added

Single instance:
- A lock file in the config dir prevents a second instance
- When a .cloud file is double-clicked while the client is running:
  the path is written to open_request.txt and the 2nd instance exits
- The running instance polls open_request.txt and opens the file
- The window is brought to the foreground automatically

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:27:03 +02:00
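Worth stating plainly: base64 is an encoding, not encryption, so this only avoids a literal plain-text password in config.json. A minimal sketch of the round trip (field names are hypothetical, not the client's actual schema):

```python
import base64
import json

def save_config(path: str, server_url: str, username: str, password: str) -> None:
    cfg = {
        "server_url": server_url,
        "username": username,
        # base64 obfuscates only - anyone with file access can decode it
        "password_b64": base64.b64encode(password.encode()).decode(),
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(cfg, f)

def load_password(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        cfg = json.load(f)
    return base64.b64decode(cfg["password_b64"]).decode()
```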
Stefan Hacker c354682905 fix: Tray icon API compatible (no Image::from_bytes)
Uses default_window_icon() instead of Image::from_bytes, which
doesn't exist in this Tauri version.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:15:02 +02:00
Stefan Hacker ac5a0a3367 feat: Persistent settings + auto-login + installer update mode
Settings persistence:
- Config is stored in the OS AppData directory
  (Windows: %APPDATA%/MiniCloud Sync/config.json,
   Linux: ~/.config/MiniCloud Sync/config.json,
   Mac: ~/Library/Application Support/MiniCloud Sync/config.json)
- Stored: server URL, username, sync paths
- The password goes into the OS keychain (Windows Credential Manager,
  macOS Keychain, Linux Secret Service) - not into the config file

Auto-login:
- The saved config is loaded on startup
- If credentials exist in the keychain: automatic login
- If sync paths are configured: sync starts immediately
- On error: login screen with prefilled fields

Config survives updates:
- The config lives outside the installation directory
- The NSIS installer only overwrites app files, not AppData
- installMode: "both" allows per-user and per-machine installation

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:11:42 +02:00
Stefan Hacker 81574c8991 fix: Virtual mode now uploads new local files
Problem: in virtual mode, only .cloud placeholders for server files
were created, but new local files were never uploaded. The watcher
detected the change, but the sync ignored it.

Fix: sync_upload_new() is now also called in virtual mode. It scans
the local folder for files that don't exist on the server and uploads
them. Changed local files (checksum comparison) are updated as well.
Locked files are held back.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:04:03 +02:00
Stefan Hacker 607d18a7e2 feat: Local file browser with offline marking + context menu
File browser in the client:
- Shows the local sync folder with all its files
- Folders navigable with breadcrumbs
- Status per file: ☁ cloud (placeholder) / 📄 offline (real file)
- Badges: blue "Cloud" or green "Offline"
- Cloud files show their original size from the .cloud metadata
- Refreshes automatically after every sync

Right-click context menu:
- .cloud file: "Open (download)" + "Make available offline"
- Real file: "No longer offline (placeholder)"
- Double-click on a folder = navigate
- Double-click on a .cloud file = download + open

Rust backend:
- browse_sync_folder: lists local files with their status
  (is_cloud, is_offline, cloud_size from the JSON metadata)
- Sorting: folders first, then alphabetically

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 01:02:21 +02:00
Stefan Hacker adaa19a1ef fix: Browse button, tray icon, minimize instead of close, .cloud handler
1. Browse button: dialog:allow-open permission in capabilities
2. Tray icon: uses the app icon (32x32.png) instead of being empty
3. Close = minimize: the window is hidden instead of quitting the app,
   double-clicking the tray icon reopens it
4. .cloud file handler:
   - fileAssociations in tauri.conf.json registers the .cloud extension
   - The NSIS installer registers the handler automatically
   - Double-clicking a .cloud file -> the app starts, downloads the file
     and opens it with the default app (Word/Excel/etc.)
   - If the app is already running: an event is emitted and the frontend handles it

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 00:57:18 +02:00
Stefan Hacker 505545f26c feat: Watcher triggers sync immediately + per-file offline marking
Immediate sync instead of 30s polling:
- The filesystem watcher detects local changes immediately
- 3-second debounce (waits in case more changes follow)
- Then an immediate sync trigger instead of waiting for the next 30s cycle
- .cloud files are ignored by the watcher (no feedback loop)
- Fallback: a sync every 60s even without changes (to fetch server-side changes)
- The UI shows "→ Sync triggered" on a watcher trigger

Offline marking:
- mark_offline: .cloud -> download the real file, it stays permanently local
- unmark_offline: real file -> back to a .cloud placeholder
- Offline files are updated automatically on every sync
  (checksum comparison in sync_virtual)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 00:44:57 +02:00
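The 3-second debounce described above (wait until a burst of watcher events goes quiet, then trigger one sync) is a standard pattern. A minimal Python sketch, assuming the watcher calls `trigger()` once per event:

```python
import threading

class Debouncer:
    """Coalesce bursts of events: every trigger restarts the timer, and
    the action fires only once the burst has been quiet for `delay`
    seconds (3s in the client)."""

    def __init__(self, delay: float, action):
        self.delay = delay
        self.action = action
        self._timer = None
        self._lock = threading.Lock()

    def trigger(self) -> None:
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()       # restart the quiet-period timer
            self._timer = threading.Timer(self.delay, self.action)
            self._timer.daemon = True
            self._timer.start()
```

Saving a document typically produces several events in quick succession (truncate, write, rename); the debounce collapses them into a single sync trigger.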
Stefan Hacker e32a64ba83 fix: missing npm dependencies for Tauri plugins (dialog, notification)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 00:40:25 +02:00
Stefan Hacker 16d514f7f1 feat: Virtual files, multi-sync paths, full sync, folder dialog
Virtual files system:
- .cloud placeholder files (JSON with ID, name, size, checksum)
- 0 bytes of storage used per file
- Double-click on .cloud -> download + open with the default app + lock
- After closing: sync back, remove the local copy, recreate the .cloud file
- Offline marking: real files stay local (no .cloud)
- Deleting server files -> the .cloud file is removed automatically

Multi-sync paths (like Nextcloud):
- Map any number of server folders to local folders
- e.g. /Projekte/2026 -> ~/Projekte or /Shared/Team -> ~/Team
- Folders shared by other users can be synced
- Each path has its own mode (virtual or full)
- Add/remove/switch mode in the UI

Full sync:
- Selectable per sync path: virtual or full
- Full = mirror all files locally (bidirectional)
- Virtual = .cloud placeholders (the default)
- Click the mode badge to switch

Folder dialog:
- "Browse..." button opens the native folder picker
- Pick the server folder from a file-tree dropdown
- Folders are created automatically when needed

UI:
- Sync paths as cards: ☁ /Server/Path → 📁 /Local/Path
- Mode badge (Virtual/Full), click to switch
- Tray menu: "Sync now" entry

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 00:34:03 +02:00
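A `.cloud` placeholder is just a tiny JSON file with enough metadata to download the real file again later. A sketch of the round trip; the field names are illustrative, not the client's exact schema:

```python
import json
import os

def write_placeholder(real_path: str, file_id: int, size: int, checksum: str) -> str:
    """Write `<real_path>.cloud` carrying the metadata needed to
    re-download the file: server ID, name, original size, checksum."""
    meta = {
        "id": file_id,
        "name": os.path.basename(real_path),
        "size": size,
        "checksum": checksum,
    }
    placeholder = real_path + ".cloud"
    with open(placeholder, "w", encoding="utf-8") as f:
        json.dump(meta, f)
    return placeholder

def read_placeholder(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

The "0 bytes per file" in the commit refers to the payload: the placeholder itself is a few dozen bytes of JSON, while the file content stays on the server. This is also why the file browser can show the original size ("cloud_size") without downloading anything.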
Stefan Hacker 4662286959 fix: Windows build uploads the setup installer instead of the bare .exe
The bare .exe needs WebView2 installed separately.
The NSIS setup installer (nsis/*setup*.exe) installs WebView2
automatically. The setup is now preferred for upload.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 00:09:32 +02:00
Stefan Hacker ba7e541260 docs: Docker cleanup - guide to freeing disk space
docker system prune, image prune, builder prune, system df

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 00:02:54 +02:00
Stefan Hacker 60e9f2699e docs: Guide to clearing the Docker cache in the README
build --no-cache, deleting the image, note about the browser cache

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 00:01:02 +02:00
Stefan Hacker 96d82967fc fix: build.sh shows a clear error message on a wrong upload token
403 -> explains that BUILD_UPLOAD_TOKEN must be the server's SECRET_KEY
000 -> server unreachable
Anything else -> HTTP code + response body

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 00:00:33 +02:00
Stefan Hacker 29cc00e284 fix: Client upload accepts SECRET_KEY or JWT_SECRET_KEY + downloads in settings
Upload auth:
- Now accepts both SECRET_KEY and JWT_SECRET_KEY
  (BUILD_UPLOAD_TOKEN in the development .env may be either)

Settings view:
- Shows the available desktop/mobile clients for download
  (only when at least one client exists)
- Per client: name, filename, download button

.env.example:
- Clearer comments: "SECRET_KEY or JWT_SECRET_KEY of the target server"

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 23:58:54 +02:00
Stefan Hacker 9391a58683 fix: build-output/ added to .gitignore + removed from Git
Binaries don't belong in the repository.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 23:55:09 +02:00
Stefan Hacker 714ce1ae53 feat: Desktop client complete - auto-sync, watcher, locking, tray
Wired up everything that previously only existed as unused code:

Auto-sync:
- After the first sync, a delta sync runs in the background every 30s
- Status badge shows live: Synced / Syncing... / Error
- Sync log with timestamps

File watcher:
- Watches the sync folder for local changes (created/changed/deleted)
- Changes appear in the UI under "Local changes"
- Filters temp/hidden files automatically

File locking:
- lock_file_cmd / unlock_file_cmd Tauri commands
- A heartbeat thread sends a heartbeat every 60s for locked files
- locked_files list in state

System tray:
- Tray icon with a "Mini-Cloud Sync" tooltip
- Right-click menu: Open / Quit
- "Open" shows the main window

UI:
- Status badge with colors (green=synced, orange=syncing, red=error)
- Spinning icon while syncing
- "Auto-sync active" hint after the first sync
- The sync folder is locked after startup (no longer changeable)
- Local changes and sync log with timestamps

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 23:54:54 +02:00
Stefan Hacker 8342cbfa17 fix: main.rs lib name corrected to minicloud_sync_lib
It was still tauri_app_lib from the template; it must be
minicloud_sync_lib (as defined in Cargo.toml).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 23:47:18 +02:00
Stefan Hacker 10e6211820 fix: Rust MutexGuard across await - delta_sync Send error
The MutexGuard is now dropped before the .await (take + put back)
so the future is Send-compatible as Tauri requires.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 23:42:28 +02:00
Stefan Hacker ec3d4866e0 refactor: Build upload uses SECRET_KEY + docs clarified
- Backend: upload auth checks SECRET_KEY instead of a separate token
  (one less token to manage)
- BUILD_UPLOAD_TOKEN in the development .env = the server's SECRET_KEY
- .env.example: clearer comment that CLOUD_URL + BUILD_UPLOAD_TOKEN
  are set ONLY on the development machine, not on the server
- README: desktop sync client section with build instructions and
  an explanation of the auto-upload

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 23:41:22 +02:00
Stefan Hacker 9a6aa7aadc feat: Client download system + auto-upload after build
Backend:
- GET /api/clients - list available clients (public)
- GET /api/clients/<platform>/download - download a client (public)
- POST /api/clients/<platform>/upload - upload a build (BUILD_UPLOAD_TOKEN)
- The old version is replaced automatically on a new upload
- Platforms: linux, windows, mac, android, ios

Frontend:
- /clients - download page with a grid of all available clients
- The login page shows a "Download desktop & mobile clients" link
  when at least one client is available

build.sh:
- After every build the client is automatically uploaded to CLOUD_URL
  (when CLOUD_URL + BUILD_UPLOAD_TOKEN are set in .env)
- Best format per platform: AppImage > .deb > binary (Linux),
  .msi > .exe (Windows), .dmg (Mac), .apk (Android), .ipa (iOS)

.env.example:
- CLOUD_URL: public URL of the cloud instance
- BUILD_UPLOAD_TOKEN: auth token for the build upload

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 23:39:51 +02:00
Stefan Hacker 3ed5adc1e8 fix: sudo before all docker commands in build.sh
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 23:31:30 +02:00
Stefan Hacker 48a46cbc79 feat: Build script + Docker build for all platforms
build.sh - builds clients via Docker (no local setup needed):
  ./build.sh linux        # Linux desktop (.deb + .AppImage)
  ./build.sh windows      # Windows desktop (.msi + .exe), cross-compiled
  ./build.sh mac          # macOS desktop (.dmg) - macOS only
  ./build.sh android      # Android app (.apk) via Docker
  ./build.sh ios          # iOS app (.ipa) - macOS only
  ./build.sh all-desktop  # Linux + Windows together
  ./build.sh clean        # delete the build cache

Dockerfile.build: multi-stage container with Rust, Node.js, Tauri deps,
  Windows cross-compile tools (mingw-w64)

Output lands in build-output/ (gitignored)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 23:29:58 +02:00
Stefan Hacker 06ad65dbb3 feat: Desktop sync client (Tauri) - skeleton
Tauri 2 desktop client with:

Rust backend:
- MiniCloudApi: login, token refresh, upload, download, sync tree,
  sync changes, file locking (lock/unlock/heartbeat)
- SyncEngine: full sync (server tree vs. local filesystem),
  delta sync (only changes since the last sync), bidirectional
  reconciliation with SHA-256 checksums, folder creation,
  lock-status check before upload, conflict detection
- FileWatcher: filesystem watcher (notify crate) for real-time
  detection of local changes, filters temp/hidden files

Vue frontend:
- Login screen: server URL, username, password
- Main screen: set the sync folder, start sync, file list with
  lock status, sync log
- Dark-mode support

Tauri commands: login, set_sync_dir, start_sync, delta_sync,
  get_status, get_file_tree

To build (Linux):
  sudo apt install libwebkit2gtk-4.1-dev libgtk-3-dev
  cd clients/desktop && npm install && npm run tauri build

Windows/Mac: install the Tauri prerequisites, then run the same command

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 23:26:57 +02:00
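The SHA-256 checksums used for the bidirectional reconciliation are computed over file contents; for large files this is done in chunks so nothing has to fit in memory. A Python sketch of the idea (the client computes these in Rust):

```python
import hashlib

def file_checksum(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 hex digest of a file, streamed in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

Two files are "in sync" when their digests match; only on a mismatch does the timestamp comparison from the later sync fix come into play.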
Stefan Hacker 748537b9f5 feat: File locking system (check-out/check-in) + conflict email
Backend - FileLock model + API:
- POST /files/<id>/lock - check out (lock) a file
- POST /files/<id>/unlock - check in (unlock) a file
- POST /files/<id>/heartbeat - "file still open" (every 60s)
- GET /files/<id>/lock-status - query the lock status
- GET /files/locks - list all active locks
- Auto-unlock: no heartbeat for 5 min -> the lock is released
- 423 Locked when already locked by another user
- Admins can remove other users' locks

File list + sync API:
- Lock info (locked, locked_by, locked_at) included per file
- The sync tree contains the lock status for desktop/mobile clients

Web UI:
- Padlock icon with the username on locked files
- Tooltip: "Checked out by Adam since 14:30"
- Locked files: "Cannot open" toast message
  (own locks are allowed)

Conflict email to the admin:
- Who created the conflict copy (name + email)
- Which file (name + folder path)
- Name of the conflict copy
- Who holds the lock (name + email + since when)
- An explanation of what happened

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 23:20:55 +02:00
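The lock semantics above (lock/unlock/heartbeat, expiry after a quiet period, 423 for other users, admin override) fit in a small in-memory model. This is a sketch of the behavior, not the Flask/SQLAlchemy implementation the commit adds; the injectable clock exists only to make expiry testable:

```python
import time

class LockManager:
    """In-memory sketch of the FileLock semantics: a lock expires when
    no heartbeat arrives within `timeout` seconds (300s here, raised to
    900s in a later commit)."""

    def __init__(self, timeout: float = 300.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self._locks = {}  # file_id -> (user, last_heartbeat)

    def _expire(self, file_id) -> None:
        entry = self._locks.get(file_id)
        if entry and self.clock() - entry[1] > self.timeout:
            del self._locks[file_id]

    def lock(self, file_id, user) -> bool:
        self._expire(file_id)
        holder = self._locks.get(file_id)
        if holder and holder[0] != user:
            return False  # the real API answers HTTP 423 Locked here
        self._locks[file_id] = (user, self.clock())
        return True

    def heartbeat(self, file_id, user) -> bool:
        holder = self._locks.get(file_id)
        if holder and holder[0] == user:
            self._locks[file_id] = (user, self.clock())
            return True
        return False

    def unlock(self, file_id, user, is_admin: bool = False) -> bool:
        holder = self._locks.get(file_id)
        if holder and (holder[0] == user or is_admin):
            del self._locks[file_id]
            return True
        return False
```

This model also makes the later "lock disappears" bug legible: a failed heartbeat (expired JWT) is indistinguishable from a closed file, so the lock silently expires.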
Stefan Hacker 33156f9431 feat: OnlyOffice force-save on Ctrl+S + allow private IPs
- forcesavetype in the editor config: Ctrl+S saves back to the server
  immediately (instead of only when the document is closed)
- ALLOW_PRIVATE_IP_ADDRESS + ALLOW_META_IP_ADDRESS for OnlyOffice
  so callbacks to internal Docker IPs work

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 22:54:31 +02:00
Stefan Hacker 5f79ebe9b0 fix: OnlyOffice/preview always shows the current version (no caching)
Three cache layers fixed:
- Vue Router: :key=fullPath forces a component rebuild on every
  navigation (no reuse of old instances)
- Frontend: cache-busting parameter on preview + OnlyOffice API calls
- Backend: no-cache headers (Cache-Control, Pragma) on the preview endpoint

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 22:50:33 +02:00
Stefan Hacker 916971fc1b fix: OnlyOffice cache - every open loads a fresh version
The document key now uses a timestamp instead of a checksum, so
OnlyOffice loads the current version from the server on every open
instead of showing a cached old version.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 22:47:17 +02:00
Stefan Hacker bb73a8130a fix: Removed duplicate route decorator on oo_download
The @api_bp.route('/files/onlyoffice-callback') decorator had
accidentally ended up on oo_download instead of onlyoffice_callback.
Flask therefore routed all callback POSTs to oo_download, which then
crashed with 'missing access_key argument' (500 error).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 22:41:35 +02:00
Stefan Hacker 9d138ecf1d fix: OnlyOffice callback rewritten - robust against 500 errors
- The entire callback is wrapped in try/except (always returns error:0
  so OnlyOffice doesn't retry endlessly)
- JWT body decoding with a graceful fallback to the raw data
- JWT header validation removed (it caused the 500 crash)
- Download without an extra JWT header (OnlyOffice-internal URLs
  don't need it)
- Verbose logging: status, key, filename, size
- Clean imports at the top of the function

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 22:34:24 +02:00
Stefan Hacker 9d1f4e117c fix: OnlyOffice callback JWT validation + saving
Problem: OnlyOffice sent a JWT in the callback request and in the
body; our endpoint ignored it -> saving failed.

Fix:
- The callback validates the OnlyOffice JWT from the Authorization header
- The callback unwraps the JWT-wrapped body (OnlyOffice wraps the body
  in a JWT when JWT_ENABLED=true)
- The download of the saved file sends the JWT header
- Better error logging with a traceback

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 22:28:34 +02:00
Stefan Hacker 1f9b87900c fix: Dedicated OnlyOffice download endpoint without JWT auth
Problem: OnlyOffice couldn't download files because our
token_required decorator rejected the request - OnlyOffice sends
its own headers that collide with our JWT system.

Solution: a dedicated endpoint GET /files/oo-download/<access_key>
- No JWT needed; a one-time key instead
- The key is generated when the editor is opened and stored in the DB
- The key contains file_id + user_id and is validated on download
- OnlyOffice calls this endpoint internally (http://minicloud:5000)
- No token in the URL, no JWT conflicts

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 22:24:09 +02:00
Stefan Hacker c3c0610750 fix: OnlyOffice uses the internal Docker network + a long-lived token
Problem: OnlyOffice tried to download files via the public URL
(http://selftestcloud...) and got a 401 because the access token
expired after 15 minutes.

Fix:
- The download URL and callback URL now use the internal Docker URL
  (http://minicloud:5000) instead of the public URL
- A dedicated 24h token for OnlyOffice file access (instead of the
  short-lived user access token)
- ONLYOFFICE_INTERNAL_URL is configurable (default: http://minicloud:5000)

This keeps all file traffic between OnlyOffice and Mini-Cloud inside
the Docker network - faster, and no external round trip.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 22:19:30 +02:00
Stefan Hacker 35fddbfcbc docs: README OnlyOffice section updated
- No more ONLYOFFICE_JWT_SECRET; JWT_SECRET_KEY is used automatically
- Only ONLYOFFICE_URL needs to be set in .env
- A dedicated subdomain with HTTPS described as mandatory
- Steps simplified (4 instead of 5)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 22:10:54 +02:00
Stefan Hacker 15211509a6 simplify: OnlyOffice uses JWT_SECRET_KEY, no extra secret
- OnlyOffice and Mini-Cloud share the same JWT_SECRET_KEY
- ONLYOFFICE_JWT_SECRET removed entirely (from .env, docker-compose, backend, frontend)
- docker-compose: OnlyOffice reads JWT_SECRET=${JWT_SECRET_KEY}
- In .env only ONLYOFFICE_URL needs to be set, done
- The admin GUI shows: URL + "JWT uses JWT_SECRET_KEY from .env"

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 22:10:06 +02:00
Stefan Hacker 0dbeef7cd9 refactor: OnlyOffice configured only via .env, not the admin GUI
The OnlyOffice URL and JWT secret now come exclusively from the
.env file (environment variables), no longer from the admin GUI:
- Set ONLYOFFICE_URL and ONLYOFFICE_JWT_SECRET in .env
- docker-compose reads the same secret for the OnlyOffice container
- One source of truth, no sync between .env and the DB needed

The admin GUI now only shows the status:
- Configured / not configured (tag)
- Current URL
- JWT secret set / missing (tag)
- Setup instructions with a .env example

Fixes: "Security token not correct" when OnlyOffice is running
but the JWT secret doesn't match

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 22:08:07 +02:00
Stefan Hacker 5bf98302e3 fix: DATABASE_PATH and UPLOAD_PATH removed from .env
The paths are now set automatically and no longer overridden in
.env, which had caused the bug:
- Docker: the Dockerfile sets /app/data/ as the ENV default
- Development: the config uses CWD/data/ as the default

.env.example explains this with a comment.
Optional manual paths remain as commented-out lines.

On the server: delete (or comment out) DATABASE_PATH and UPLOAD_PATH
in .env, then docker-compose up --build -d

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 21:58:58 +02:00
Stefan Hacker 97cb4c7748 fix: Database path in Docker - relative paths from .env resolved incorrectly
Problem: after switching to env_file in docker-compose, the relative
paths (./data/minicloud.db) from .env were resolved incorrectly.
basedir pointed to / instead of /app, so a new empty DB was created
under /data/ instead of using the existing one under /app/data/.
Result: all users gone, login impossible.

Fix:
- config.py: _resolve_path uses Path.cwd() for relative paths
  (in Docker CWD=/app, in dev CWD=backend/)
- .env.example: absolute Docker paths as the default
  (/app/data/minicloud.db instead of ./data/minicloud.db)
  with a comment for the development environment

The .env on the server must be adjusted:
  DATABASE_PATH=/app/data/minicloud.db
  UPLOAD_PATH=/app/data/files

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-11 21:53:43 +02:00
99 changed files with 20926 additions and 611 deletions
@@ -7,11 +7,13 @@ SECRET_KEY=change-me-to-a-random-secret-key
FLASK_ENV=production
FLASK_DEBUG=0
# Database
DATABASE_PATH=./data/minicloud.db
# File storage
UPLOAD_PATH=./data/files
# Database + file storage
# Do not change! The paths are set automatically:
# Docker: /app/data/ (via the Dockerfile)
# Development: ./data/ (via the config default)
# Only set these if a custom path is desired:
# DATABASE_PATH=/path/to/minicloud.db
# UPLOAD_PATH=/path/to/files
# JWT
# Generate a token: python3 -c "import secrets; print(secrets.token_urlsafe(64))"
@@ -29,8 +31,36 @@ FRONTEND_URL=https://cloud.example.com
# Max upload size in MB
MAX_UPLOAD_SIZE_MB=500
# Timezone (process-wide) - IANA format "Region/City".
# Affects datetime.now(), strftime %Z and calendar/task timestamps.
# Common values:
# Europe/Berlin, Europe/Vienna, Europe/Zurich, Europe/Amsterdam,
# Europe/Paris, Europe/London, Europe/Madrid, Europe/Rome,
# Europe/Warsaw, Europe/Prague, Europe/Copenhagen, Europe/Stockholm,
# UTC, America/New_York, America/Los_Angeles, Asia/Tokyo, Australia/Sydney
# Full list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
TZ=Europe/Berlin
# NTP server for checking the clock at startup (non-invasive offset check
# - the system clock cannot be set inside the container; on a deviation >5s
# a warning appears in the log, then please synchronize the host clock).
# Leave empty to disable the check.
# Default: Physikalisch-Technische Bundesanstalt (official German time).
# Alternatives: ptbtime2.ptb.de, ptbtime3.ptb.de, de.pool.ntp.org, time.cloudflare.com
NTP_SERVER=ptbtime1.ptb.de
# OnlyOffice Document Server (optional)
# Public HTTPS URL under which OnlyOffice is reachable in the browser
# A dedicated subdomain with HTTPS, e.g. https://office.example.com
# The JWT automatically uses JWT_SECRET_KEY from above
ONLYOFFICE_URL=
# Must match JWT_SECRET in the OnlyOffice container
ONLYOFFICE_JWT_SECRET=
# =============================================
# Client build upload (ONLY on the DEVELOPMENT machine!)
# Do NOT set this on the production server!
# Only the machine running ./build.sh needs these values.
# =============================================
# URL of the cloud instance the builds are uploaded to
CLOUD_URL=https://cloud.example.com
# SECRET_KEY or JWT_SECRET_KEY of the target server
# (copy in the same value that is set on the server)
BUILD_UPLOAD_TOKEN=
@@ -39,5 +39,8 @@ backend/static/
.DS_Store
Thumbs.db
# Build output
build-output/
# Logs
*.log
@@ -0,0 +1,227 @@
# Changelog
All notable changes to the Mini-Cloud platform.
---
## [0.9.0] - 2026-04-12 - Desktop Sync Client
### Desktop client (Tauri 2 / Rust + Vue)
- **First functional desktop sync client** for Windows, Linux and macOS
- Multi-sync paths: map any number of server folders to local folders
- Virtual files: `.cloud` placeholders (0 bytes), download only on double-click
- Full sync: alternatively mirror all files locally (selectable per path)
- Offline marking: mark individual files as available offline (right-click)
- Immediate sync: the filesystem watcher detects changes instantly (3s debounce instead of polling)
- Bidirectional sync with timestamp comparison (server newer = download, local newer = upload)
- File locking: automatic check-out/check-in with a heartbeat every 10s
- Auto-unlock: detects when a file is closed (Windows: write-lock check, Linux: lsof)
- System tray: minimizes to the tray instead of quitting, double-click opens the window
- Auto-login: credentials and sync paths persisted in `%APPDATA%/MiniCloud Sync/config.json`
- Single instance per user (terminal-server compatible, PID-based lock file)
- .cloud file association: double-clicking in Explorer opens via the client
- Start minimized: checkbox in the settings, the client starts directly in the tray
- Token refresh every 10 minutes (prevents lock loss after token expiry)
- Local file browser with a right-click context menu
### Build system
- `build.sh` script: builds clients via Docker for all platforms
- `./build.sh windows` - cross-compile from Linux to Windows (NSIS installer with WebView2)
- `./build.sh linux` - Linux build (.AppImage, .deb)
- `./build.sh mac` - macOS build (.dmg, Mac only)
- Auto-upload: built clients are uploaded automatically to the cloud server
- Client downloads: the login page and user settings show download links
### File locking (backend)
- New FileLock model: check files out/in
- API: lock, unlock, heartbeat, lock status
- Auto-unlock after 15 minutes without a heartbeat
- Lock display in the file list (padlock icon with username)
- Locked files: cannot be opened, error message
- Conflict email to the admin on a forced sync of locked files
---
## [0.8.0] - 2026-04-11 - OnlyOffice + Recycle Bin
### OnlyOffice Document Server Integration
- Edit Word, Excel, and PowerPoint files directly in the browser
- Automatic detection: OnlyOffice present = editor, otherwise simple preview
- WOPI-like endpoints for document access and callbacks
- JWT signing (uses JWT_SECRET_KEY from .env, no extra secret)
- Force save: Ctrl+S writes back to the server immediately
- Document key with timestamp (no caching issues on reopening)
- Configured via .env only (ONLYOFFICE_URL); the admin GUI shows the status
### Recycle Bin
- Deleted files go to the recycle bin instead of being removed immediately
- Recycle bin page in the sidebar with a table of all deleted items
- Restore to the original location or delete permanently
- "Empty recycle bin" button with a confirmation prompt
- Deleting in the share view also uses the recycle bin
### Confirmation Dialogs
- Every delete action now shows a confirmation dialog
- Contacts, calendar events, calendars, emails, share files,
  SFTP backup targets, admin email accounts
### Office Preview
- The preview opens inside the app (no new tab)
- Back button in the toolbar
- PDF inline, images centered, DOCX as HTML, XLSX as a table, PPTX as slides
- Text/code files are editable, with a save function
---
## [0.7.0] - 2026-04-11 - Backup & Restore + SFTP
### Backup & Restore
- Local ZIP backup: database (sqlite3.backup API) + all files + metadata
- Streaming download without memory limits
- Chunked restore upload for large backups (10 MB pieces with a progress bar)
- DB merge strategy: INSERT OR REPLACE on the shared columns
- Single-file restore: browse the backup and download/restore individual files
### SFTP Backup
- Multiple SFTP backup targets configurable
- Automatic background scheduler (every 15 minutes up to weekly)
- Versioning with automatic cleanup of old backups
- Manual backup on demand ("Back up now")
- SFTP connection-test button
- Versions dialog: list all backups on the SFTP server
- Restore directly from SFTP: pick a version and restore it
### Restore Guide
- Detailed instructions directly in the admin UI
- Note that SECRET_KEY/JWT_SECRET_KEY must match
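The "INSERT OR REPLACE on the shared columns" merge can be sketched with plain sqlite3: compute the column intersection of both schemas, then copy rows (a simplified sketch; the actual restore code may differ):

```python
import sqlite3

def merge_table(src: sqlite3.Connection, dst: sqlite3.Connection, table: str) -> None:
    """Copy all rows of `table` from src to dst, restricted to shared columns."""
    cols_src = {row[1] for row in src.execute(f"PRAGMA table_info({table})")}
    cols_dst = {row[1] for row in dst.execute(f"PRAGMA table_info({table})")}
    cols = sorted(cols_src & cols_dst)
    col_list = ", ".join(cols)
    placeholders = ", ".join("?" for _ in cols)
    rows = src.execute(f"SELECT {col_list} FROM {table}").fetchall()
    # INSERT OR REPLACE keeps existing rows and overwrites on primary-key clash
    dst.executemany(
        f"INSERT OR REPLACE INTO {table} ({col_list}) VALUES ({placeholders})", rows
    )
    dst.commit()
```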
---
## [0.6.0] - 2026-04-11 - Drag & Drop + Share Types
### File Upload
- Drag & drop upload: drop files and entire folder structures
- Folder button: upload a complete folder including subfolders (webkitdirectory)
- Upload progress bar with a file counter
- Backend: `/files/ensure-path` creates nested folder structures
### Share Links
- Three permission levels: read-only / read+write / upload-only
- Upload-only: dropbox mode (upload without seeing the contents)
- Folder shares: file list with download/delete, subfolder navigation
- Download a folder as ZIP (recursive)
- Visual share status: a green icon marks shared files
- Upload into shared folders with password protection
### System Notifications
- Email to the recipient when a file/folder is shared
- Email to the creator on share-link downloads (with IP address)
- Email on uploads into a shared folder
- Email on calendar/contacts/password shares
- Email when an admin creates a user
- All notifications are fail-safe (an email error never blocks the operation)
---
## [0.5.0] - 2026-04-11 - Admin Features + Password Import
### Administration
- Create users via the web UI (username, email, password, role, quota)
- Edit/deactivate/delete users with confirmation prompts
- Manage each user's email accounts in the admin area (without logging in as that user)
- User search in the management view
- Public registration: on/off toggle
- Invitation links: one-time tokens even when registration is disabled
- System email (SMTP): configurable, with a connection test
### Password Manager Import
- Firefox CSV import (Settings > Passwords > Export)
- KeePass .kdbx import including the folder structure
- Generic CSV import (Chrome, Bitwarden, 1Password)
- Automatic column detection (German + English)
- All entries are encrypted client-side before being stored
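The automatic column detection can be pictured as matching header names against known aliases in both languages; a simplified sketch (the alias lists here are illustrative, not the app's actual tables):

```python
ALIASES = {
    'title':    {'title', 'name', 'titel'},
    'username': {'username', 'login', 'benutzername', 'login_username'},
    'password': {'password', 'passwort', 'login_password'},
    'url':      {'url', 'website', 'webseite', 'login_uri'},
}

def detect_columns(header: list) -> dict:
    """Map logical fields to column indexes based on the CSV header row."""
    mapping = {}
    for i, raw in enumerate(header):
        key = raw.strip().lower()
        for field, names in ALIASES.items():
            if key in names and field not in mapping:
                mapping[field] = i
    return mapping
```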
---
## [0.4.0] - 2026-04-11 - File Management + Share Links
### File Management
- Upload/download with permissions (read/write/admin)
- Create, move, rename, and delete folders
- Permission system per file/folder and user
### Share Links
- Token-based share links
- Optional password + expiry date + download limit
- Public share page without login
- Share files/folders with users (user search)
### Sync API
- `GET /sync/tree` - complete file tree with checksums
- `GET /sync/changes?since=` - delta sync
- SHA-256 checksums for duplicate detection
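Checksums like these are typically computed in a streaming fashion so large files never have to fit in memory; a sketch:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks and return the hex digest."""
    h = hashlib.sha256()
    with path.open('rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()
```

Two files with the same digest can be treated as duplicates without comparing their bytes.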
---
## [0.3.0] - 2026-04-11 - Calendar + Contacts + Email
### Calendar
- Calendar CRUD with events (month/day view, FullCalendar)
- Share calendars with users (read or read+write)
- iCal export as a read-only .ics link
- CalDAV well-known URLs for auto-discovery (iOS, DAVx5, Thunderbird)
### Contacts
- Address books + contacts CRUD
- vCard export, sharing with users
- Contact search
### Email Web Client
- IMAP/SMTP proxy (no mail server of its own)
- Multi-account: folders grouped by account
- Sender logic: default = active account, dropdown when there are several
- Three-column layout, compose with reply
- No email account = email section hidden
---
## [0.2.0] - 2026-04-11 - Password Manager + Office Viewer
### Password Manager
- AES-256-GCM encrypted client-side (zero knowledge)
- TOTP code generation (Web Crypto API)
- Password generator
- Folder/group hierarchy (like KeePass)
- Share entries and folders (read/write/manage)
### Office Viewer
- PDF: PDF.js directly in the browser
- DOCX: python-docx -> HTML
- XLSX: openpyxl -> JSON -> table
- PPTX: python-pptx -> HTML slides
- Images and text files inline
---
## [0.1.0] - 2026-04-11 - First Release
### Foundation
- Flask backend with SQLAlchemy + SQLite (WAL mode)
- Vue 3 frontend with PrimeVue, Vite, Pinia
- JWT authentication (access + refresh tokens)
- The first user automatically becomes admin
- User management (admin/user roles, storage quotas)
### Project Setup
- `.gitignore`, `.env.example` with token generation
- Dockerfile (multi-stage: Node build + Python production)
- `docker-compose.yml` with bind mounts (no Docker volumes)
- `nginx.example.conf` for a reverse proxy with Let's Encrypt
### Database
- 15 tables: Users, Files, FilePermissions, ShareLinks, Calendars,
  CalendarEvents, CalendarShares, AddressBooks, Contacts,
  AddressBookShares, EmailAccounts, PasswordFolders, PasswordEntries,
  PasswordShares, AppSettings
- Auto-migrate: missing columns are added automatically at app start
  via ALTER TABLE (no manual migrations needed)
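The auto-migrate step amounts to reading the existing columns via PRAGMA table_info and issuing ALTER TABLE ... ADD COLUMN for anything missing. A standalone sketch with plain sqlite3 (the app derives the wanted columns from its model definitions; here they are passed in directly):

```python
import sqlite3

def add_missing_columns(conn: sqlite3.Connection, table: str,
                        wanted: dict) -> list:
    """Add every column from `wanted` (name -> SQL type) that the table lacks."""
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    added = []
    for name, sql_type in wanted.items():
        if name not in existing:
            conn.execute(f"ALTER TABLE {table} ADD COLUMN {name} {sql_type}")
            added.append(name)
    conn.commit()
    return added
```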
+10 -1
@@ -11,6 +11,7 @@ FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
# tzdata is already included in python:3.11-slim - only gcc needs installing.
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc \
&& rm -rf /var/lib/apt/lists/*
@@ -30,9 +31,17 @@ RUN mkdir -p /app/data/files
# Environment
ENV FLASK_ENV=production
ENV TZ=Europe/Berlin
ENV DATABASE_PATH=/app/data/minicloud.db
ENV UPLOAD_PATH=/app/data/files
EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "--timeout", "120", "wsgi:application"]
# Single worker with many threads. The SSE broadcaster lives in process
# memory - with multiple workers, events would never reach the receiver
# whenever sender and receiver land on different workers.
# 32 threads = up to 32 concurrent requests/SSE streams.
CMD ["gunicorn", "--bind", "0.0.0.0:5000", \
"--worker-class", "gthread", "--workers", "1", "--threads", "32", \
"--timeout", "120", "--keep-alive", "65", \
"wsgi:application"]
+229 -30
@@ -93,6 +93,31 @@ docker-compose up --build -d
The database and uploaded files live under `./data/` (bind mount, no Docker volumes).
### Cleaning up Docker (freeing disk space)
After many builds, old images and cache layers pile up:
```bash
# Delete everything unused (images, containers, cache, networks):
docker system prune -a -f
# Delete only old/unused images:
docker image prune -a -f
# Delete only the build cache:
docker builder prune -a -f
# Show disk usage:
docker system df
```
If problems appear after an update (stale frontend version etc.):
```bash
docker-compose down
docker-compose build --no-cache
docker-compose up -d
```
### Nginx Reverse Proxy (example)
The file `nginx.example.conf` contains a complete example configuration:
@@ -121,32 +146,22 @@ Create a Let's Encrypt certificate:
certbot --nginx -d cloud.example.com
```
### OnlyOffice Document Server (optional)
### OnlyOffice Document Server
For editing Word, Excel, and PowerPoint files directly in the browser.
For editing Word, Excel, and PowerPoint files directly in the browser. OnlyOffice needs its own subdomain with HTTPS.
**1. docker-compose.yml - enable the OnlyOffice service:**
```yaml
# Uncomment in docker-compose.yml:
onlyoffice:
image: onlyoffice/documentserver:latest
environment:
- JWT_ENABLED=true
- JWT_SECRET=${ONLYOFFICE_JWT_SECRET}
volumes:
- ./data/onlyoffice/logs:/var/log/onlyoffice
- ./data/onlyoffice/data:/var/www/onlyoffice/Data
restart: unless-stopped
```
**2. .env - configure OnlyOffice:**
**1. .env - set the OnlyOffice URL:**
```bash
ONLYOFFICE_URL=https://office.example.com
ONLYOFFICE_JWT_SECRET=a-secure-secret-here
```
The JWT secret is taken from `JWT_SECRET_KEY` automatically - no extra secret needed.
**2. docker-compose.yml - enable the OnlyOffice service:**
The OnlyOffice service is already prepared in `docker-compose.yml`. It uses the same `JWT_SECRET_KEY` from the `.env`.
**3. Nginx - dedicated subdomain for OnlyOffice:**
```nginx
@@ -167,12 +182,44 @@ server {
}
```
**4. Start:**
```bash
certbot --nginx -d office.example.com
docker-compose up -d
docker-compose up --build -d
```
**Without OnlyOffice**, Office files are shown in a simple read-only preview. **With OnlyOffice**, you get a full-featured editor (like Google Docs).
**Without OnlyOffice** (`ONLYOFFICE_URL` empty), Office files are shown in a simple preview. **With OnlyOffice**, you get a full-featured editor (like Google Docs).
### Time Zone & NTP
The `.env` contains two variables that affect system time:
```env
TZ=Europe/Berlin
NTP_SERVER=ptbtime1.ptb.de
```
**`TZ`** sets the process-wide time zone (affects log timestamps, calendar/task times, `datetime.now()`). IANA format `Region/City`.
Common values:
| Region | Example values |
| ----------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| Germany | `Europe/Berlin` |
| DACH/EU | `Europe/Vienna`, `Europe/Zurich`, `Europe/Amsterdam`, `Europe/Paris`, `Europe/London`, `Europe/Madrid`, `Europe/Rome`, `Europe/Warsaw` |
| Northern EU | `Europe/Copenhagen`, `Europe/Stockholm`, `Europe/Helsinki`, `Europe/Oslo` |
| Other | `UTC`, `America/New_York`, `America/Los_Angeles`, `Asia/Tokyo`, `Australia/Sydney` |
Full list: <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>
**`NTP_SERVER`** is queried at startup to check how far the system clock has drifted. If the drift exceeds 5 s, a warning is logged. **Note:** this does not set the clock inside the container (that would require `CAP_SYS_TIME`) - an NTP daemon should be running on the host. The check exists purely for visibility.
Default: `ptbtime1.ptb.de` (the official German time reference of the Physikalisch-Technische Bundesanstalt, stratum 1, very high availability).
Alternatives: `ptbtime2.ptb.de`, `ptbtime3.ptb.de`, `de.pool.ntp.org`, `time.cloudflare.com`. Leave it empty to disable the check.
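A startup drift check like the one described can be sketched as a minimal SNTP query (packet layout per RFC 4330; the function names here are illustrative and not the app's actual `ntp_check` API):

```python
import socket
import struct
import time

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
NTP_DELTA = 2208988800

def ntp_to_unix(ntp_seconds: int, ntp_fraction: int = 0) -> float:
    """Convert an NTP timestamp (seconds + 32-bit fraction) to Unix time."""
    return ntp_seconds - NTP_DELTA + ntp_fraction / 2**32

def query_offset(server: str, timeout: float = 3.0) -> float:
    """Ask an SNTP server for its time; return (server - local) in seconds."""
    packet = b'\x1b' + 47 * b'\0'  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    secs, frac = struct.unpack('!II', data[40:48])  # Transmit Timestamp field
    return ntp_to_unix(secs, frac) - time.time()

def drift_warning_needed(server: str, threshold: float = 5.0) -> bool:
    """True if the local clock deviates by more than `threshold` seconds."""
    return abs(query_offset(server)) > threshold
```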
## Usage
@@ -193,14 +240,32 @@ docker-compose up -d
### Calendar
- Create calendars, add events (month/day view)
- Month/week/day view (FullCalendar)
- Drag & drop between days, resize event duration by dragging its edge
- Recurring events: daily/weekly/monthly/yearly,
  "every 2nd Wednesday", custom intervals, end date or count
- Recurring series: edit "only this one" or "the whole series"
- Per-calendar visibility via checkbox
- Share calendars with other users (read or read+write)
- Generate an iCal link for read-only import into Google Calendar, Apple Calendar etc.
- CalDAV access for native sync:
  - **iOS**: Settings > Calendar > Accounts > Other > CalDAV
  - **Android (DAVx5)**: server URL: `https://<your-domain>/dav/`
  - **Thunderbird**: New calendar > On the network > CalDAV
  - **Outlook (CalDAV-Synchronizer)**: server URL: `https://<your-domain>/dav/`
- iCal subscription link with optional password (HTTP Basic Auth)
- Full CalDAV server (RFC 4791 subset) - see below
#### CalDAV Access
Native sync with phone/laptop calendars. The server URL is always
`https://<your-domain>/dav/` - username + password are the same as on the web.
| Client | Setup |
|-----------------|-------------|
| **iOS/macOS** | Settings > Calendar > Accounts > Other > CalDAV account, server `cloud.example.com/dav/` |
| **Android (DAVx5)** | Add account > Login with URL and username, URL `https://cloud.example.com/dav/` |
| **Thunderbird** | New calendar > On the network > CalDAV, URL `https://cloud.example.com/dav/` (Thunderbird discovers the calendars on its own) |
| **Outlook** | CalDAV-Synchronizer plugin, server URL `https://cloud.example.com/dav/` |
Supported operations: PROPFIND (auto-discovery via `/.well-known/caldav`),
REPORT (calendar-query / calendar-multiget including time-range filters), GET/PUT/DELETE
for individual events, MKCALENDAR, EXDATE for series exceptions. ETags are
used so clients can tell what has changed.
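One common way to produce such ETags is to derive them from an event's identity plus its last-modified state, so the value changes on every edit. A hypothetical sketch (not necessarily how this server computes them):

```python
import hashlib

def event_etag(uid: str, updated_at_iso: str) -> str:
    """Content fingerprint for a calendar object, quoted as HTTP requires."""
    digest = hashlib.sha256(f"{uid}:{updated_at_iso}".encode()).hexdigest()[:16]
    return f'"{digest}"'
```

A client that cached an ETag can then skip downloading any event whose ETag is unchanged in the REPORT response.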
### Contacts
@@ -276,10 +341,144 @@ data/ # Runtime data (gitignored)
files/ # Uploaded files
```
## Desktop Sync Client
The desktop client (`clients/desktop/`) synchronizes files between the cloud and a local folder. Built with Tauri 2 (Rust + Vue).
### Features
- **Multi-sync paths**: map any number of server folders to local folders (e.g. `/Projekte` -> `~/Projekte`, `/Shared/Team` -> `~/Team`)
- **Virtual files**: `.cloud` placeholders (0 bytes), download only on double-click. No disk usage for files you don't need
- **Full sync**: alternatively mirror all files completely on disk (selectable per path)
- **Offline marking**: mark individual files as available offline (right-click in the file browser)
- **Instant sync**: a filesystem watcher detects local changes immediately (3 s debounce), no polling
- **Smart sync**: checksum tracking detects which side changed (server or local)
- **Conflict detection**: simultaneous changes on both sides produce a conflict copy
- **File locking**: lock on open, heartbeat every 60 s, manual unlock via right-click, auto-unlock after 15 minutes without a heartbeat
- **System tray**: minimizes to the tray instead of exiting; double-click opens the window
- **Start minimized**: optionally start directly in the tray (checkbox in the settings)
- **Auto-login**: credentials and sync paths survive restarts/updates
- **Terminal server**: one instance per user, no conflicts between users
- **.cloud file handler**: double-clicking in Explorer opens the file via the client
### Terminal Server Behavior
| Scenario | Behavior |
|----------|-----------|
| User A starts the client | Runs, own lock file in `%APPDATA%\MiniCloud Sync\` |
| User B starts the client | Runs separately, own lock file in their `%APPDATA%` |
| User A double-clicks a `.cloud` file | User A's running instance opens the file |
| User A starts it a second time | "Already running" -> exits immediately |
| Client crashes | The next start checks whether the PID is still alive -> stale lock -> overwritten |
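The stale-lock handling in the last row boils down to: read the PID from the lock file, test whether that process still exists, and overwrite the lock if not. The client itself is written in Rust; this Python sketch only illustrates the logic, and the JSON lock-file format is an assumption:

```python
import json
import os
from pathlib import Path

def pid_alive(pid: int) -> bool:
    """POSIX liveness check: signal 0 delivers nothing but probes the PID."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # exists, but owned by another user
    return True

def acquire_lock(lock_file: Path) -> bool:
    """Take the single-instance lock, overwriting a stale one."""
    if lock_file.exists():
        try:
            pid = json.loads(lock_file.read_text())['pid']
        except (ValueError, KeyError):
            pid = -1  # unreadable lock file counts as stale
        if pid > 0 and pid_alive(pid):
            return False  # another live instance holds the lock
    lock_file.write_text(json.dumps({'pid': os.getpid()}))
    return True
```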
### Virtual Files vs. Full Sync
| | Virtual Files | Full Sync |
|---|---|---|
| Disk usage | Only .cloud placeholders (0 bytes) | All files fully local |
| Access | Double-click = download + open | Available immediately |
| Offline | Only marked files | Everything offline |
| Upload | New local files are uploaded | Bidirectional sync |
| Recommended for | Large data sets, laptops | Small folders that must always be offline |
### Sync Logic (Checksum Tracking)
The client remembers each file's checksum from the last sync. On the next sync it compares which side changed:
| Changed locally | Changed on server | Action |
|-----------------|------------------|--------|
| No | Yes | **Server -> local** (download) |
| Yes | No | **Local -> server** (upload) |
| Yes | Yes | **Conflict**: the local file becomes `Datei (Konflikt).txt`, the server version is downloaded |
| No | No | Nothing (identical) |
On the first sync (no stored checksum) the server always wins.
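The decision table above, written out as a three-way checksum comparison (a sketch; the names are illustrative):

```python
from enum import Enum

class Action(Enum):
    DOWNLOAD = 'download'
    UPLOAD = 'upload'
    CONFLICT = 'conflict'
    NOOP = 'noop'

def decide(last, local: str, server: str) -> Action:
    """last   = checksum recorded at the previous sync (None on the first sync)
    local  = current checksum of the local file
    server = current checksum reported by the server"""
    if last is None:  # first sync: the server always wins
        return Action.NOOP if local == server else Action.DOWNLOAD
    local_changed = local != last
    server_changed = server != last
    if local_changed and server_changed:
        return Action.CONFLICT
    if server_changed:
        return Action.DOWNLOAD
    if local_changed:
        return Action.UPLOAD
    return Action.NOOP
```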
### File Locking
Files opened via the client are automatically locked on the server. Other users see "File locked by X" and cannot edit it.
| Scenario | What happens |
|----------|-------------|
| Opening a .cloud file | Download + lock + heartbeat every 60 s |
| Done -> right-click "Unlock" | Lock released immediately |
| Right-click "No longer offline" | Lock released + back to .cloud |
| Quitting the client without unlocking | No heartbeat -> lock expires after 15 minutes |
| Laptop lid closed / network gone | No heartbeat -> lock expires after 15 minutes |
| Admin in the web UI | Can release any lock manually at any time |
#### What the lock actually does (and doesn't)
Checking out is an **advisory lock**, not a physical file lock. In short: it blocks every **Mini-Cloud path** to editing, but not Windows Explorer or other programs on the disk.
| Where does the lock apply? | Example |
|---------------------|----------|
| ✅ Web UI | Anna cannot open/edit the file in the browser - "being edited by Adam" |
| ✅ Desktop client | Double-click in the client view -> error message, the file does not open |
| ✅ Automatic upload | If Anna edited the file anyway, the client won't upload it while Adam holds the lock |
| ❌ Windows/Mac file manager | Anna can open the local file in the file manager (it is just a regular file on disk) |
| ❌ External programs | Word, Excel, Notepad etc. don't see the lock - any program can open the file |
**Everyday example:**
1. Adam checks out `Bericht.xlsx` (opens it in the client)
2. Anna has the folder synced too, so the file is on her disk as well
3. Anna tries to open it in the browser -> **blocked**
4. Anna tries to open it in the client -> **blocked**
5. Anna opens it directly in Explorer -> **it opens** (technically it is just a regular file)
6. Anna edits and saves locally -> the client notices the change, sees the foreign lock, and **holds back the upload**
7. Adam checks in: now the client compares - did Adam change it too? If so, Anna's version becomes `Bericht (Konflikt Anna 2026-04-12 143022).xlsx` and Adam's version wins. Nobody loses data, but a human has to merge the versions.
That is the same approach Nextcloud or Dropbox take: **a conflict copy as the safety net**, not a kernel-level file lock. The protection comes from the upload block - an accidental edit never reaches the actual owner.
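The conflict-copy name in step 7 follows the pattern `<stem> (Konflikt <user> <date> <time>).<ext>`; building it can be sketched like this (assuming that exact pattern):

```python
from datetime import datetime
from pathlib import Path

def conflict_name(path: Path, user: str, now: datetime) -> Path:
    """Rename target for the losing side of a sync conflict."""
    stamp = now.strftime('%Y-%m-%d %H%M%S')
    return path.with_name(f"{path.stem} (Konflikt {user} {stamp}){path.suffix}")
```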
### Building
```bash
# Prerequisite: Docker
# Linux:
./build.sh linux
# Windows (cross-compile):
./build.sh windows
# macOS (Mac only):
./build.sh mac
# All desktop platforms:
./build.sh all-desktop
```
### Auto-Upload to the Server
After the build, the client is automatically uploaded to the cloud server, where it is available for download.
**On the development machine** (not on the server!), add to the `.env`:
```bash
# URL of the cloud instance
CLOUD_URL=https://cloud.example.com
# SECRET_KEY of the target server (identical to SECRET_KEY in the server's .env)
BUILD_UPLOAD_TOKEN=the-secret-key-from-the-server
```
After that, `./build.sh linux` (etc.) uploads the build automatically. The login page then shows "Desktop & Mobile Clients herunterladen".
**Important:** `CLOUD_URL` and `BUILD_UPLOAD_TOKEN` belong ONLY in the development machine's `.env`, NOT on the production server!
### Settings
Settings are stored in:
- **Windows**: `%APPDATA%\MiniCloud Sync\config.json`
- **Linux**: `~/.config/MiniCloud Sync/config.json`
- **macOS**: `~/Library/Application Support/MiniCloud Sync/config.json`
Stored values: server URL, username, password (base64), sync paths. They survive updates.
## Roadmap
- Desktop sync client (Windows, Linux, macOS) with full sync + virtual files
- Mobile sync client (iOS, Android) with on-demand download
- Mobile sync client (iOS, Android) with on-demand download + File Provider
- Native password-manager clients with autofill and biometrics
- Radicale integration for the complete CalDAV/CardDAV protocol
+76 -7
@@ -1,13 +1,28 @@
import os
import time
from pathlib import Path
from flask import Flask, redirect, send_from_directory
from flask import Flask, Response, redirect, send_from_directory
from flask_cors import CORS
from app.config import Config
from app.extensions import db, bcrypt, migrate
def _configure_timezone(tz_name: str) -> None:
"""Prozess-Zeitzone setzen, sodass datetime.now(), strftime %Z etc.
die konfigurierte TZ verwenden. Sichere no-op wenn tzdata fehlt."""
if not tz_name:
return
os.environ['TZ'] = tz_name
tzset = getattr(time, 'tzset', None)
if tzset:
try:
tzset()
except Exception:
pass
def _auto_migrate(db):
"""Add missing columns to existing tables by comparing model definitions
with actual database schema. This handles the case where new columns are
@@ -61,6 +76,9 @@ def _auto_migrate(db):
def create_app(config_class=Config):
# Set the time zone as early as possible - before any datetime.now() calls
_configure_timezone(getattr(config_class, 'TIMEZONE', None) or os.environ.get('TZ'))
# Check if static frontend build exists (Docker production mode)
static_dir = Path(__file__).resolve().parent.parent / 'static'
if static_dir.exists():
@@ -69,6 +87,9 @@ def create_app(config_class=Config):
app = Flask(__name__)
app.config.from_object(config_class)
# DAV clients are inconsistent about trailing slashes, so we disable
# strict checking app-wide. This affects all blueprints.
app.url_map.strict_slashes = False
# Ensure data directories exist
Path(app.config['UPLOAD_PATH']).mkdir(parents=True, exist_ok=True)
@@ -88,14 +109,51 @@ def create_app(config_class=Config):
from app.api import api_bp
app.register_blueprint(api_bp)
# Well-known URLs for CalDAV/CardDAV auto-discovery (iOS, DAVx5, etc.)
@app.route('/.well-known/caldav')
def wellknown_caldav():
from app.dav import dav_bp
app.register_blueprint(dav_bp)
# Well-known URLs for CalDAV/CardDAV auto-discovery (iOS, DAVx5, etc.).
# A 301 redirect on PROPFIND trips up some clients, so instead of
# redirecting we delegate internally straight to the DAV handlers.
from flask import request
from app.dav.caldav import propfind as dav_propfind, options as dav_options
def _wellknown_dav():
if request.method == 'PROPFIND':
return dav_propfind(subpath='')
if request.method == 'OPTIONS':
return dav_options()
return redirect('/dav/', code=301)
@app.route('/.well-known/carddav')
def wellknown_carddav():
return redirect('/dav/', code=301)
app.add_url_rule(
'/.well-known/caldav', view_func=_wellknown_dav,
methods=['GET', 'HEAD', 'PROPFIND', 'OPTIONS'],
provide_automatic_options=False,
)
app.add_url_rule(
'/.well-known/carddav', view_func=_wellknown_dav,
endpoint='_wellknown_carddav',
methods=['GET', 'HEAD', 'PROPFIND', 'OPTIONS'],
provide_automatic_options=False,
)
# Root DAV discovery: DAVx5 and some other clients first try
# PROPFIND/OPTIONS on / (hostname only) before using /.well-known.
# We answer with DAV properties here as well.
def _root_dav():
if request.method == 'PROPFIND':
return dav_propfind(subpath='')
if request.method == 'OPTIONS':
return dav_options()
# GET/HEAD: the SPA index is handled elsewhere - this view only matches DAV methods
return Response('', 405)
app.add_url_rule(
'/', view_func=_root_dav,
endpoint='_root_dav',
methods=['PROPFIND', 'OPTIONS'],
provide_automatic_options=False,
)
# iCal export (public, no auth)
@app.route('/ical/<token>')
@@ -131,4 +189,15 @@ def create_app(config_class=Config):
from app.services.backup_scheduler import start_backup_scheduler
start_backup_scheduler(app)
# Check the NTP offset against the configured time server (non-fatal).
ntp_server = app.config.get('NTP_SERVER') or ''
if ntp_server.strip():
import threading
from app.services.ntp_check import check_and_log
threading.Thread(
target=check_and_log,
args=(ntp_server.strip(), app.logger),
daemon=True,
).start()
return app
+1 -1
@@ -2,4 +2,4 @@ from flask import Blueprint
api_bp = Blueprint('api', __name__, url_prefix='/api')
from app.api import auth, users, files, calendar, contacts, email, office, passwords, backup # noqa: E402, F401
from app.api import auth, users, files, calendar, contacts, tasks, email, office, passwords, backup, client_downloads # noqa: E402, F401
+425 -20
@@ -1,14 +1,68 @@
import csv
import io
import re
import secrets
import uuid
from datetime import datetime, timezone
from flask import request, jsonify
from flask import request, jsonify, Response
from app.api import api_bp
from app.api.auth import token_required
from app.extensions import db
from app.extensions import db, bcrypt
from app.models.calendar import Calendar, CalendarEvent, CalendarShare
from app.models.user import User
from app.services.events import notify_calendar_change
def _calendar_recipients(cal: Calendar):
return [s.shared_with_id for s in CalendarShare.query.filter_by(calendar_id=cal.id).all()]
def _redact_if_private(event_dict: dict, is_owner: bool) -> dict:
"""For shared viewers, strip summary/description/location from private
events so only the time slot remains visible."""
if is_owner or not event_dict.get('is_private'):
return event_dict
d = dict(event_dict)
d['summary'] = 'Privat'
d['description'] = None
d['location'] = None
return d
def _redact_vevent(raw: str) -> str:
"""Strip SUMMARY/DESCRIPTION/LOCATION from a VEVENT block and set
CLASS:PRIVATE. Used for shared iCal exports and CalDAV responses."""
if not raw:
return raw
import re as _re
out_lines = []
has_class = False
for line in raw.split('\n'):
stripped = line.rstrip('\r')
upper = stripped.split(':', 1)[0].split(';', 1)[0].upper()
if upper == 'SUMMARY':
out_lines.append('SUMMARY:Privat')
elif upper in ('DESCRIPTION', 'LOCATION'):
continue
elif upper == 'CLASS':
has_class = True
out_lines.append('CLASS:PRIVATE')
else:
out_lines.append(stripped)
if not has_class:
# Inject CLASS right after UID if possible, else before END:VEVENT
for i, l in enumerate(out_lines):
if l.startswith('UID:'):
out_lines.insert(i + 1, 'CLASS:PRIVATE')
break
else:
for i, l in enumerate(out_lines):
if l.upper().startswith('END:VEVENT'):
out_lines.insert(i, 'CLASS:PRIVATE')
break
return '\r\n'.join(out_lines)
def _get_calendar_or_err(cal_id, user, need_write=False):
@@ -49,7 +103,14 @@ def list_calendars():
calendar_id=c.id, shared_with_id=user.id
).first()
d['permission'] = share.permission if share else 'read'
# Per-user color override: the owner's color is kept in 'owner_color'
# so the UI can show both, and 'color' reflects what this user picked.
d['owner_color'] = c.color
if share and share.color:
d['color'] = share.color
d['owner_name'] = c.owner.username
d['owner_full_name'] = c.owner.full_name
d['owner_display_name'] = c.owner.display_name
result.append(d)
return jsonify(result), 200
@@ -95,6 +156,33 @@ def update_calendar(cal_id):
return jsonify(cal.to_dict()), 200
@api_bp.route('/calendars/<int:cal_id>/my-color', methods=['PUT'])
@token_required
def set_my_calendar_color(cal_id):
"""Personal display color for a shared calendar. Doesn't affect the
owner's calendar color or any other user's view."""
user = request.current_user
cal = db.session.get(Calendar, cal_id)
if not cal:
return jsonify({'error': 'Nicht gefunden'}), 404
color = (request.get_json() or {}).get('color', '').strip()
if cal.owner_id == user.id:
# Owner -> update the calendar itself
if color:
cal.color = color
db.session.commit()
return jsonify({'color': cal.color}), 200
share = CalendarShare.query.filter_by(calendar_id=cal_id, shared_with_id=user.id).first()
if not share:
return jsonify({'error': 'Kein Zugriff'}), 403
share.color = color or None
db.session.commit()
return jsonify({'color': share.color or cal.color}), 200
@api_bp.route('/calendars/<int:cal_id>', methods=['DELETE'])
@token_required
def delete_calendar(cal_id):
@@ -103,8 +191,12 @@ def delete_calendar(cal_id):
if not cal or cal.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden oder keine Berechtigung'}), 404
recipients = _calendar_recipients(cal)
owner_id = cal.owner_id
cal_id = cal.id
db.session.delete(cal)
db.session.commit()
notify_calendar_change(owner_id, cal_id, 'deleted', shared_with=recipients)
return jsonify({'message': 'Kalender geloescht'}), 200
@@ -122,21 +214,183 @@ def list_events(cal_id):
end = request.args.get('end')
query = CalendarEvent.query.filter_by(calendar_id=cal_id)
# Recurring events must not be filtered by range - the FullCalendar
# RRULE plugin expansion in the frontend needs the master event even
# when its dtstart lies before the visible range.
if start:
try:
start_dt = datetime.fromisoformat(start)
query = query.filter(CalendarEvent.dtend >= start_dt)
query = query.filter(db.or_(
CalendarEvent.recurrence_rule.isnot(None),
CalendarEvent.dtend >= start_dt,
))
except ValueError:
pass
if end:
try:
end_dt = datetime.fromisoformat(end)
query = query.filter(CalendarEvent.dtstart <= end_dt)
query = query.filter(db.or_(
CalendarEvent.recurrence_rule.isnot(None),
CalendarEvent.dtstart <= end_dt,
))
except ValueError:
pass
events = query.order_by(CalendarEvent.dtstart).all()
return jsonify([e.to_dict() for e in events]), 200
is_owner = (cal.owner_id == user.id)
return jsonify([_redact_if_private(e.to_dict(), is_owner) for e in events]), 200
@api_bp.route('/calendars/<int:cal_id>/export', methods=['GET'])
@token_required
def export_calendar(cal_id):
"""Export VEVENTs als .ics oder .csv."""
user = request.current_user
cal, err = _get_calendar_or_err(cal_id, user)
if err:
return err
fmt = (request.args.get('format') or 'ics').lower()
events = CalendarEvent.query.filter_by(calendar_id=cal_id).order_by(CalendarEvent.dtstart).all()
safe_name = re.sub(r'[^A-Za-z0-9._-]+', '_', cal.name or 'kalender') or 'kalender'
if fmt == 'ics':
lines = ['BEGIN:VCALENDAR', 'VERSION:2.0', 'PRODID:-//Mini-Cloud//DE', 'CALSCALE:GREGORIAN']
for e in events:
block = (e.ical_data or '').strip()
if not block:
block = _build_vevent(e.uid, e.summary or '', e.dtstart, e.dtend,
e.all_day, e.description or '', e.location or '',
e.recurrence_rule or '',
(e.exdates or '').split(',') if e.exdates else None)
# Make sure block contains BEGIN/END VEVENT
if 'BEGIN:VEVENT' not in block.upper():
continue
lines.append(block.strip())
lines.append('END:VCALENDAR')
body = '\r\n'.join(lines) + '\r\n'
return Response(
body, mimetype='text/calendar; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{safe_name}.ics"'},
)
if fmt == 'csv':
out = io.StringIO()
cols = ['summary', 'dtstart', 'dtend', 'all_day', 'location',
'description', 'recurrence_rule', 'uid']
w = csv.writer(out, delimiter=';', quoting=csv.QUOTE_ALL)
w.writerow(cols)
for e in events:
w.writerow([
e.summary or '',
e.dtstart.isoformat() if e.dtstart else '',
e.dtend.isoformat() if e.dtend else '',
'1' if e.all_day else '0',
e.location or '',
(e.description or '').replace('\r\n', ' ').replace('\n', ' '),
e.recurrence_rule or '',
e.uid or '',
])
return Response(
'\ufeff' + out.getvalue(), mimetype='text/csv; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{safe_name}.csv"'},
)
return jsonify({'error': 'Unbekanntes Format'}), 400
@api_bp.route('/calendars/<int:cal_id>/import', methods=['POST'])
@token_required
def import_calendar(cal_id):
"""Import .ics oder .csv -> Termine ins Kalender."""
from app.dav.caldav import _parse_vevent, _extract_vevent_block
user = request.current_user
cal, err = _get_calendar_or_err(cal_id, user, need_write=True)
if err:
return err
file = request.files.get('file')
if not file:
return jsonify({'error': 'Keine Datei'}), 400
raw = file.read()
name = (file.filename or '').lower()
try:
text = raw.decode('utf-8-sig')
except UnicodeDecodeError:
text = raw.decode('latin-1', errors='replace')
imported = 0
skipped = 0
def _save(parsed: dict, ical_block: str | None = None):
nonlocal imported, skipped
if not parsed.get('summary') or not parsed.get('dtstart'):
skipped += 1
return
uid = parsed.get('uid') or str(uuid.uuid4())
existing = CalendarEvent.query.filter_by(calendar_id=cal_id, uid=uid).first()
ev = existing or CalendarEvent(calendar_id=cal_id, uid=uid, ical_data='')
ev.summary = parsed.get('summary') or '(ohne Titel)'
ev.description = parsed.get('description')
ev.location = parsed.get('location')
ev.dtstart = parsed.get('dtstart')
ev.dtend = parsed.get('dtend')
ev.all_day = parsed.get('all_day', False)
ev.recurrence_rule = parsed.get('rrule')
ev.exdates = ','.join(parsed.get('exdates', [])) or None
ev.ical_data = (ical_block or '').strip() or _build_vevent(
uid, ev.summary, ev.dtstart, ev.dtend, ev.all_day,
ev.description or '', ev.location or '', ev.recurrence_rule or '',
(ev.exdates or '').split(',') if ev.exdates else None,
)
ev.updated_at = datetime.now(timezone.utc)
if not existing:
db.session.add(ev)
imported += 1
if name.endswith('.csv') or (b';' in raw[:200] and b'BEGIN:VCALENDAR' not in raw[:200]):
reader = csv.DictReader(io.StringIO(text), delimiter=';')
if not reader.fieldnames or len(reader.fieldnames) < 2:
reader = csv.DictReader(io.StringIO(text), delimiter=',')
for row in reader:
row = {k.strip().lower(): (v or '').strip() for k, v in row.items() if k}
try:
dtstart = datetime.fromisoformat(row.get('dtstart') or row.get('start') or '')
except (ValueError, TypeError):
skipped += 1
continue
try:
dtend = datetime.fromisoformat(row.get('dtend') or row.get('end') or '') if (row.get('dtend') or row.get('end')) else None
except ValueError:
dtend = None
parsed = {
'uid': row.get('uid'),
'summary': row.get('summary') or row.get('titel') or row.get('title'),
'description': row.get('description') or row.get('beschreibung'),
'location': row.get('location') or row.get('ort'),
'dtstart': dtstart,
'dtend': dtend,
'all_day': (row.get('all_day') or '').lower() in ('1', 'true', 'ja', 'yes'),
'rrule': row.get('recurrence_rule') or row.get('rrule'),
'exdates': [],
}
_save(parsed)
else:
# iCal: calendar file with any number of VEVENTs
blocks = re.findall(r'BEGIN:VEVENT.*?END:VEVENT', text, flags=re.DOTALL | re.IGNORECASE)
if not blocks:
return jsonify({'error': 'Keine VEVENT-Daten gefunden'}), 400
for block in blocks:
try:
parsed = _parse_vevent(block)
except Exception:
parsed = None
if not parsed:
skipped += 1
continue
_save(parsed, ical_block=block)
db.session.commit()
if imported:
notify_calendar_change(cal.owner_id, cal.id, 'event',
shared_with=_calendar_recipients(cal))
return jsonify({'imported': imported, 'skipped': skipped}), 200
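The import route above guesses the CSV dialect: it first parses with `;` as the delimiter and falls back to `,` when that yields fewer than two columns. A minimal standalone sketch of that fallback (the sample data and function name are illustrative):

```python
import csv
import io

def read_rows(text: str):
    """Try semicolon-delimited CSV first; fall back to comma."""
    reader = csv.DictReader(io.StringIO(text), delimiter=';')
    if not reader.fieldnames or len(reader.fieldnames) < 2:
        reader = csv.DictReader(io.StringIO(text), delimiter=',')
    return list(reader)

# Comma-delimited input: the ';' parse sees a single column and triggers the fallback.
rows = read_rows('summary,dtstart\nMeeting,2026-01-01T10:00:00\n')
print(rows[0]['summary'])  # Meeting
```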
@api_bp.route('/calendars/<int:cal_id>/events', methods=['POST'])
@@ -166,24 +420,30 @@ def create_event(cal_id):
return jsonify({'error': 'Ungueltiges Datumsformat'}), 400
event_uid = str(uuid.uuid4())
description = (data.get('description') or '').strip()
location = (data.get('location') or '').strip()
rrule = (data.get('recurrence_rule') or '').strip()
# Build simple iCal data
ical_data = _build_ical(event_uid, summary, dtstart_dt, dtend_dt, all_day,
data.get('description', ''), data.get('location', ''),
data.get('recurrence_rule', ''))
description, location, rrule, None)
event = CalendarEvent(
calendar_id=cal_id,
uid=event_uid,
ical_data=ical_data,
summary=summary,
description=description or None,
location=location or None,
dtstart=dtstart_dt,
dtend=dtend_dt,
all_day=all_day,
recurrence_rule=data.get('recurrence_rule'),
recurrence_rule=rrule or None,
is_private=bool(data.get('is_private', False)),
)
db.session.add(event)
db.session.commit()
notify_calendar_change(cal.owner_id, cal.id, 'event',
shared_with=_calendar_recipients(cal))
return jsonify(event.to_dict()), 201
@@ -202,14 +462,20 @@ def update_event(event_id):
data = request.get_json()
if 'summary' in data:
event.summary = data['summary'].strip()
if 'description' in data:
event.description = (data['description'] or '').strip() or None
if 'location' in data:
event.location = (data['location'] or '').strip() or None
if 'dtstart' in data:
event.dtstart = datetime.fromisoformat(data['dtstart'])
if 'dtend' in data:
event.dtend = datetime.fromisoformat(data['dtend'])
event.dtend = datetime.fromisoformat(data['dtend']) if data['dtend'] else None
if 'all_day' in data:
event.all_day = data['all_day']
if 'recurrence_rule' in data:
event.recurrence_rule = data['recurrence_rule']
event.recurrence_rule = (data['recurrence_rule'] or '').strip() or None
if 'is_private' in data:
event.is_private = bool(data['is_private'])
if 'calendar_id' in data:
new_cal, cerr = _get_calendar_or_err(data['calendar_id'], user, need_write=True)
if cerr:
@@ -218,14 +484,90 @@ def update_event(event_id):
event.ical_data = _build_ical(
event.uid, event.summary, event.dtstart, event.dtend,
event.all_day, data.get('description', ''), data.get('location', ''),
event.recurrence_rule or ''
event.all_day, event.description or '', event.location or '',
event.recurrence_rule or '',
event.exdates.split(',') if event.exdates else None,
)
event.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_calendar_change(cal.owner_id, cal.id, 'event',
shared_with=_calendar_recipients(cal))
return jsonify(event.to_dict()), 200
@api_bp.route('/events/<int:event_id>/exception', methods=['POST'])
@token_required
def add_event_exception(event_id):
"""Exclude a single occurrence of a recurring event ("nur dieser Termin").
Optionally creates a standalone replacement event for that date."""
user = request.current_user
event = db.session.get(CalendarEvent, event_id)
if not event:
return jsonify({'error': 'Event nicht gefunden'}), 404
cal, err = _get_calendar_or_err(event.calendar_id, user, need_write=True)
if err:
return err
if not event.recurrence_rule:
return jsonify({'error': 'Kein Serientermin'}), 400
data = request.get_json()
occurrence_date = data.get('occurrence_date') # ISO date or datetime
if not occurrence_date:
return jsonify({'error': 'occurrence_date erforderlich'}), 400
# Normalize to YYYY-MM-DD (all-day) or YYYY-MM-DDTHH:MM:SS for the storage key
try:
parsed = datetime.fromisoformat(occurrence_date.replace('Z', '+00:00'))
key = parsed.strftime('%Y-%m-%d' if event.all_day else '%Y-%m-%dT%H:%M:%S')
except ValueError:
key = occurrence_date
existing = (event.exdates or '').split(',') if event.exdates else []
if key not in existing:
existing.append(key)
event.exdates = ','.join(filter(None, existing))
# Optional: create replacement single event
replacement = None
if data.get('replacement'):
r = data['replacement']
rep_uid = str(uuid.uuid4())
rep_start = datetime.fromisoformat(r['dtstart'])
rep_end = datetime.fromisoformat(r['dtend']) if r.get('dtend') else rep_start
replacement = CalendarEvent(
calendar_id=event.calendar_id,
uid=rep_uid,
summary=r.get('summary', event.summary),
description=r.get('description', event.description),
location=r.get('location', event.location),
dtstart=rep_start,
dtend=rep_end,
all_day=r.get('all_day', event.all_day),
recurrence_rule=None,
ical_data='',
)
replacement.ical_data = _build_ical(
rep_uid, replacement.summary, rep_start, rep_end,
replacement.all_day, replacement.description or '',
replacement.location or '', '',
)
db.session.add(replacement)
event.ical_data = _build_ical(
event.uid, event.summary, event.dtstart, event.dtend,
event.all_day, event.description or '', event.location or '',
event.recurrence_rule or '',
event.exdates.split(',') if event.exdates else None,
)
event.updated_at = datetime.now(timezone.utc)
db.session.commit()
return jsonify({
'event': event.to_dict(),
'replacement': replacement.to_dict() if replacement else None,
}), 200
@api_bp.route('/events/<int:event_id>', methods=['DELETE'])
@token_required
def delete_event(event_id):
@@ -238,8 +580,12 @@ def delete_event(event_id):
if err:
return err
cal = db.session.get(Calendar, event.calendar_id)
db.session.delete(event)
db.session.commit()
if cal:
notify_calendar_change(cal.owner_id, cal.id, 'event',
shared_with=_calendar_recipients(cal))
return jsonify({'message': 'Event geloescht'}), 200
@@ -287,6 +633,9 @@ def share_calendar(cal_id):
except Exception:
pass
notify_calendar_change(cal.owner_id, cal.id, 'share',
shared_with=[target.id, *_calendar_recipients(cal)])
return jsonify({'message': f'Kalender mit {username} geteilt'}), 200
@@ -319,8 +668,11 @@ def remove_calendar_share(cal_id, share_id):
if not share or share.calendar_id != cal_id:
return jsonify({'error': 'Freigabe nicht gefunden'}), 404
target_id = share.shared_with_id
db.session.delete(share)
db.session.commit()
notify_calendar_change(cal.owner_id, cal.id, 'share',
shared_with=[target_id, *_calendar_recipients(cal)])
return jsonify({'message': 'Freigabe entfernt'}), 200
@@ -334,19 +686,58 @@ def generate_ical_link(cal_id):
if not cal or cal.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
cal.ical_token = secrets.token_urlsafe(32)
data = request.get_json(silent=True) or {}
password = (data.get('password') or '').strip()
if not cal.ical_token:
cal.ical_token = secrets.token_urlsafe(32)
if password:
cal.ical_password_hash = bcrypt.generate_password_hash(password).decode('utf-8')
elif data.get('clear_password'):
cal.ical_password_hash = None
db.session.commit()
return jsonify({
'ical_url': f'/ical/{cal.ical_token}',
'token': cal.ical_token,
'has_password': bool(cal.ical_password_hash),
}), 200
@api_bp.route('/calendars/<int:cal_id>/ical-link', methods=['DELETE'])
@token_required
def revoke_ical_link(cal_id):
user = request.current_user
cal = db.session.get(Calendar, cal_id)
if not cal or cal.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
cal.ical_token = None
cal.ical_password_hash = None
db.session.commit()
return jsonify({'message': 'Link zurueckgezogen'}), 200
def _basic_auth_challenge():
return Response(
'Kalender erfordert Passwort', 401,
{'WWW-Authenticate': 'Basic realm="Mini-Cloud Kalender"'}
)
def ical_export(token):
cal = Calendar.query.filter_by(ical_token=token).first()
if not cal:
return jsonify({'error': 'Nicht gefunden'}), 404
# Password protection via HTTP Basic (compatible with DAVx5, Apple Cal,
# Thunderbird, curl, etc.). Username is ignored.
if cal.ical_password_hash:
auth = request.authorization
if not auth or not auth.password:
return _basic_auth_challenge()
if not bcrypt.check_password_hash(cal.ical_password_hash, auth.password):
return _basic_auth_challenge()
events = CalendarEvent.query.filter_by(calendar_id=cal.id).all()
lines = [
@@ -357,13 +748,14 @@ def ical_export(token):
]
for e in events:
if e.ical_data:
# Extract VEVENT from stored ical_data
lines.append(e.ical_data)
block = _redact_vevent(e.ical_data) if e.is_private else e.ical_data
lines.append(block)
elif e.is_private:
lines.append(_build_vevent(e.uid, 'Privat', e.dtstart, e.dtend, e.all_day))
else:
lines.append(_build_vevent(e.uid, e.summary, e.dtstart, e.dtend, e.all_day))
lines.append('END:VCALENDAR')
from flask import Response
return Response(
'\r\n'.join(lines),
mimetype='text/calendar',
@@ -379,7 +771,9 @@ def _format_dt(dt, all_day=False):
return dt.strftime('%Y%m%dT%H%M%SZ')
def _build_vevent(uid, summary, dtstart, dtend, all_day, description='', location='', rrule=''):
def _build_vevent(uid, summary, dtstart, dtend, all_day, description='', location='', rrule='', exdates=None):
if not dtend:
dtend = dtstart
lines = [
'BEGIN:VEVENT',
f'UID:{uid}',
@@ -397,10 +791,21 @@ def _build_vevent(uid, summary, dtstart, dtend, all_day, description='', locatio
lines.append(f'LOCATION:{location}')
if rrule:
lines.append(f'RRULE:{rrule}')
if exdates:
for ex in exdates:
if all_day:
lines.append(f'EXDATE;VALUE=DATE:{ex.replace("-", "")}')
else:
# Convert ISO datetime (with or without TZ) into YYYYMMDDTHHMMSSZ
try:
dt = datetime.fromisoformat(ex.replace('Z', '+00:00'))
lines.append(f'EXDATE:{dt.strftime("%Y%m%dT%H%M%SZ")}')
except ValueError:
pass
lines.append(f'DTSTAMP:{datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")}')
lines.append('END:VEVENT')
return '\r\n'.join(lines)
def _build_ical(uid, summary, dtstart, dtend, all_day, description='', location='', rrule=''):
return _build_vevent(uid, summary, dtstart, dtend, all_day, description, location, rrule)
def _build_ical(uid, summary, dtstart, dtend, all_day, description='', location='', rrule='', exdates=None):
return _build_vevent(uid, summary, dtstart, dtend, all_day, description, location, rrule, exdates)
@@ -0,0 +1,126 @@
"""Client download management - upload builds, serve downloads."""
import os
from pathlib import Path
from flask import request, jsonify, send_from_directory, current_app
from app.api import api_bp
# Supported platforms and their file extensions
PLATFORMS = {
'linux': {'name': 'Linux', 'icon': 'pi-desktop', 'extensions': ['.AppImage', '.deb', '']},
'windows': {'name': 'Windows', 'icon': 'pi-microsoft', 'extensions': ['.msi', '.exe']},
'mac': {'name': 'macOS', 'icon': 'pi-apple', 'extensions': ['.dmg']},
'android': {'name': 'Android', 'icon': 'pi-android', 'extensions': ['.apk']},
'ios': {'name': 'iOS', 'icon': 'pi-apple', 'extensions': ['.ipa']},
}
def _clients_dir():
"""Get the client downloads directory."""
base = Path(current_app.config.get('UPLOAD_PATH', '/app/data/files')).parent
d = base / 'client-downloads'
d.mkdir(parents=True, exist_ok=True)
return d
def _verify_build_token():
"""Verify the build upload token from header or query param."""
token = request.headers.get('X-Build-Token', '') or request.args.get('build_token', '')
if not token:
return False
# Accept SECRET_KEY or JWT_SECRET_KEY
secret = os.environ.get('SECRET_KEY', '')
jwt_secret = os.environ.get('JWT_SECRET_KEY', '')
return token == secret or token == jwt_secret
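A plain `==` on a secret token is, in principle, open to timing side channels; `hmac.compare_digest` is the standard constant-time alternative. A hedged sketch of how the comparison in `_verify_build_token` could be written (helper name is illustrative, behaviour otherwise unchanged):

```python
import hmac

def token_matches(token: str, *accepted: str) -> bool:
    """Constant-time comparison of a token against any accepted secret."""
    return any(
        secret and hmac.compare_digest(token, secret)
        for secret in accepted  # empty secrets are skipped, as in the original
    )

print(token_matches('abc', '', 'abc'))  # True
```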
# --- Public: list available clients ---
@api_bp.route('/clients', methods=['GET'])
def list_clients():
"""List available client downloads (public, no auth needed)."""
clients_dir = _clients_dir()
available = []
for platform, info in PLATFORMS.items():
platform_dir = clients_dir / platform
if not platform_dir.exists():
continue
files = sorted(platform_dir.iterdir(), key=lambda f: f.stat().st_mtime, reverse=True)
if not files:
continue
# Take the newest file
latest = files[0]
available.append({
'platform': platform,
'name': info['name'],
'icon': info['icon'],
'filename': latest.name,
'size': latest.stat().st_size,
'updated_at': latest.stat().st_mtime,
'download_url': f'/api/clients/{platform}/download',
})
return jsonify({
'clients': available,
'has_clients': len(available) > 0,
}), 200
@api_bp.route('/clients/<platform>/download', methods=['GET'])
def download_client(platform):
"""Download the latest client for a platform (public, no auth)."""
if platform not in PLATFORMS:
return jsonify({'error': 'Unbekannte Plattform'}), 404
clients_dir = _clients_dir()
platform_dir = clients_dir / platform
if not platform_dir.exists():
return jsonify({'error': 'Kein Client fuer diese Plattform verfuegbar'}), 404
files = sorted(platform_dir.iterdir(), key=lambda f: f.stat().st_mtime, reverse=True)
if not files:
return jsonify({'error': 'Kein Client verfuegbar'}), 404
latest = files[0]
return send_from_directory(str(platform_dir), latest.name, as_attachment=True)
# --- Build upload (authenticated with BUILD_UPLOAD_TOKEN) ---
@api_bp.route('/clients/<platform>/upload', methods=['POST'])
def upload_client(platform):
"""Upload a new client build. Authenticated with BUILD_UPLOAD_TOKEN."""
if not _verify_build_token():
return jsonify({'error': 'Ungueltiger Build-Token'}), 403
if platform not in PLATFORMS:
return jsonify({'error': 'Unbekannte Plattform'}), 404
if 'file' not in request.files:
return jsonify({'error': 'Keine Datei gesendet'}), 400
upload = request.files['file']
if not upload.filename:
return jsonify({'error': 'Leerer Dateiname'}), 400
clients_dir = _clients_dir()
platform_dir = clients_dir / platform
platform_dir.mkdir(parents=True, exist_ok=True)
# Remove old files for this platform (keep only latest)
for old_file in platform_dir.iterdir():
old_file.unlink()
dest = platform_dir / upload.filename
upload.save(str(dest))
return jsonify({
'message': f'{PLATFORMS[platform]["name"]} Client hochgeladen',
'filename': upload.filename,
'size': dest.stat().st_size,
}), 200
@@ -1,13 +1,35 @@
import csv
import io
import json
import re
import uuid
import zipfile
from datetime import datetime, timezone
from flask import request, jsonify
from flask import request, jsonify, Response
from app.api import api_bp
from app.api.auth import token_required
from app.extensions import db
from app.models.contact import AddressBook, Contact, AddressBookShare
from app.models.user import User
from app.services.events import broadcaster
def _notify_addressbook(owner_id: int, book_id: int, change: str, shared_with=()):
"""SSE event for a vcard or share change. Re-uses the calendar event
infrastructure with a separate 'addressbook' type."""
recipients = [owner_id, *shared_with]
broadcaster.publish(recipients, {
'type': 'addressbook',
'change': change,
'address_book_id': book_id,
})
def _book_recipients(book: AddressBook):
return [s.shared_with_id for s in
AddressBookShare.query.filter_by(address_book_id=book.id).all()]
def _get_addressbook_or_err(book_id, user, need_write=False):
@@ -26,7 +48,224 @@ def _get_addressbook_or_err(book_id, user, need_write=False):
return book, None
# --- Address Books ---
# ---------------------------------------------------------------------------
# vCard helpers
# ---------------------------------------------------------------------------
def _escape(s):
if s is None:
return ''
return str(s).replace('\\', '\\\\').replace(',', '\\,').replace(';', '\\;').replace('\n', '\\n')
def _unescape(s):
if not s:
return ''
return s.replace('\\n', '\n').replace('\\;', ';').replace('\\,', ',').replace('\\\\', '\\')
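`_escape` and `_unescape` implement vCard 3.0 text escaping for commas, semicolons, newlines, and backslashes. A quick round-trip sanity check on a typical value (reimplemented standalone so it runs on its own):

```python
def escape(s):
    # Backslash must be escaped first, or the later replacements get doubled.
    return str(s).replace('\\', '\\\\').replace(',', '\\,').replace(';', '\\;').replace('\n', '\\n')

def unescape(s):
    return s.replace('\\n', '\n').replace('\\;', ';').replace('\\,', ',').replace('\\\\', '\\')

value = 'Meyer; Sohn\nBerlin, DE'
assert unescape(escape(value)) == value
print(escape('a,b'))  # a\,b
```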
def _apply_fields_to_contact(contact: Contact, data: dict):
"""Copy fields from a JSON request into a Contact model instance."""
for field in ('prefix', 'first_name', 'middle_name', 'last_name', 'suffix',
'nickname', 'organization', 'department', 'job_title',
'notes', 'photo', 'birthday', 'anniversary'):
if field in data:
value = data[field]
setattr(contact, field, (value.strip() if isinstance(value, str) else value) or None)
if 'display_name' in data:
contact.display_name = (data['display_name'] or '').strip() or None
for jsonfield in ('emails', 'phones', 'addresses', 'websites', 'impp', 'categories'):
if jsonfield in data:
value = data[jsonfield] or []
setattr(contact, jsonfield, json.dumps(value) if value else None)
# Denormalised primary fields for list display
emails = data.get('emails') if 'emails' in data else json.loads(contact.emails) if contact.emails else []
phones = data.get('phones') if 'phones' in data else json.loads(contact.phones) if contact.phones else []
contact.primary_email = (emails[0]['value'] if emails else None)
contact.primary_phone = (phones[0]['value'] if phones else None)
# Legacy columns
contact.email = contact.primary_email
contact.phone = contact.primary_phone
# Compose display name if not provided
if not contact.display_name:
parts = [contact.prefix, contact.first_name, contact.middle_name,
contact.last_name, contact.suffix]
contact.display_name = ' '.join(p for p in parts if p) or contact.organization or None
def _build_vcard(contact: Contact) -> str:
"""Render a Contact into vCard 3.0 text."""
lines = ['BEGIN:VCARD', 'VERSION:3.0', f'UID:{contact.uid}']
if contact.display_name:
lines.append(f'FN:{_escape(contact.display_name)}')
# N: lastname;firstname;middle;prefix;suffix
n_parts = [_escape(contact.last_name), _escape(contact.first_name),
_escape(contact.middle_name), _escape(contact.prefix),
_escape(contact.suffix)]
if any(n_parts):
lines.append('N:' + ';'.join(n_parts))
if contact.nickname:
lines.append(f'NICKNAME:{_escape(contact.nickname)}')
if contact.organization or contact.department:
lines.append(f'ORG:{_escape(contact.organization or "")};{_escape(contact.department or "")}')
if contact.job_title:
lines.append(f'TITLE:{_escape(contact.job_title)}')
for e in (json.loads(contact.emails) if contact.emails else []):
typ = (e.get('type') or 'home').upper()
lines.append(f'EMAIL;TYPE={typ}:{_escape(e.get("value", ""))}')
for p in (json.loads(contact.phones) if contact.phones else []):
typ = (p.get('type') or 'cell').upper()
lines.append(f'TEL;TYPE={typ}:{_escape(p.get("value", ""))}')
for a in (json.loads(contact.addresses) if contact.addresses else []):
typ = (a.get('type') or 'home').upper()
# ADR: po_box;extended;street;city;region;postal_code;country
parts = [_escape(a.get('po_box', '')), '', _escape(a.get('street', '')),
_escape(a.get('city', '')), _escape(a.get('region', '')),
_escape(a.get('postal_code', '')), _escape(a.get('country', ''))]
lines.append(f'ADR;TYPE={typ}:' + ';'.join(parts))
for w in (json.loads(contact.websites) if contact.websites else []):
typ = (w.get('type') or '').upper()
tag = f'URL;TYPE={typ}' if typ else 'URL'
lines.append(f'{tag}:{_escape(w.get("value", ""))}')
for i in (json.loads(contact.impp) if contact.impp else []):
proto = (i.get('protocol') or 'xmpp').lower()
lines.append(f'IMPP:{proto}:{_escape(i.get("value", ""))}')
if contact.birthday:
lines.append(f'BDAY:{contact.birthday}')
if contact.anniversary:
lines.append(f'ANNIVERSARY:{contact.anniversary}')
cats = json.loads(contact.categories) if contact.categories else []
if cats:
lines.append('CATEGORIES:' + ','.join(_escape(c) for c in cats))
if contact.notes:
lines.append(f'NOTE:{_escape(contact.notes)}')
if contact.photo:
# Photo can be a data: URL or http URL. In vCard 3.0 we use PHOTO;VALUE=uri.
lines.append(f'PHOTO;VALUE=uri:{contact.photo}')
lines.append(f'REV:{datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")}')
lines.append('END:VCARD')
return '\r\n'.join(lines)
def _unfold_vcard(raw: str):
"""Undo RFC 6350 line folding (continuation lines start with space/tab)."""
lines = []
for line in raw.replace('\r\n', '\n').split('\n'):
if line.startswith((' ', '\t')) and lines:
lines[-1] += line[1:]
else:
lines.append(line)
return lines
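Line folding splits long vCard properties across physical lines, with each continuation line indented by one space or tab; the parser must join them back before splitting on `:`. A standalone sketch of the unfolding step with sample input:

```python
def unfold(raw: str):
    """Join folded (space/tab-indented) continuation lines, RFC 6350 style."""
    lines = []
    for line in raw.replace('\r\n', '\n').split('\n'):
        if line.startswith((' ', '\t')) and lines:
            lines[-1] += line[1:]  # drop exactly one leading whitespace char
        else:
            lines.append(line)
    return lines

folded = 'NOTE:This is a long\r\n  note\r\nFN:Alice'
print(unfold(folded))  # ['NOTE:This is a long note', 'FN:Alice']
```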
def parse_vcard(raw: str) -> dict:
"""Parse a VCARD text into a dict of fields usable by _apply_fields_to_contact.
Returns dict with keys matching contact fields + 'uid'."""
result = {
'emails': [], 'phones': [], 'addresses': [],
'websites': [], 'impp': [], 'categories': [],
}
for line in _unfold_vcard(raw):
if ':' not in line:
continue
key, _, value = line.partition(':')
parts = key.split(';')
name = parts[0].upper()
params = {}
for p in parts[1:]:
if '=' in p:
k, v = p.split('=', 1)
params[k.upper()] = v.upper()
if name == 'UID':
result['uid'] = value.strip()
elif name == 'FN':
result['display_name'] = _unescape(value)
elif name == 'N':
fields = value.split(';')
if len(fields) >= 5:
result['last_name'] = _unescape(fields[0]) or None
result['first_name'] = _unescape(fields[1]) or None
result['middle_name'] = _unescape(fields[2]) or None
result['prefix'] = _unescape(fields[3]) or None
result['suffix'] = _unescape(fields[4]) or None
elif name == 'NICKNAME':
result['nickname'] = _unescape(value)
elif name == 'ORG':
fields = value.split(';')
result['organization'] = _unescape(fields[0]) if fields else None
if len(fields) > 1:
result['department'] = _unescape(fields[1]) or None
elif name == 'TITLE':
result['job_title'] = _unescape(value)
elif name == 'EMAIL':
result['emails'].append({
'type': (params.get('TYPE') or 'home').lower(),
'value': _unescape(value),
})
elif name == 'TEL':
result['phones'].append({
'type': (params.get('TYPE') or 'cell').lower(),
'value': _unescape(value),
})
elif name == 'ADR':
fields = value.split(';')
pad = fields + [''] * (7 - len(fields))
result['addresses'].append({
'type': (params.get('TYPE') or 'home').lower(),
'po_box': _unescape(pad[0]),
'street': _unescape(pad[2]),
'city': _unescape(pad[3]),
'region': _unescape(pad[4]),
'postal_code': _unescape(pad[5]),
'country': _unescape(pad[6]),
})
elif name == 'URL':
result['websites'].append({
'type': (params.get('TYPE') or '').lower(),
'value': _unescape(value),
})
elif name == 'IMPP':
proto, _, addr = value.partition(':')
result['impp'].append({'protocol': proto.lower(), 'value': _unescape(addr or value)})
elif name == 'CATEGORIES':
result['categories'] = [_unescape(c).strip() for c in value.split(',') if c.strip()]
elif name == 'BDAY':
result['birthday'] = _normalise_date(value)
elif name == 'ANNIVERSARY':
result['anniversary'] = _normalise_date(value)
elif name == 'NOTE':
result['notes'] = _unescape(value)
elif name == 'PHOTO':
result['photo'] = value.strip() or None
return result
def _normalise_date(s: str):
s = s.strip()
m = re.match(r'^(\d{4})-?(\d{2})-?(\d{2})$', s[:10])
if m:
return f'{m.group(1)}-{m.group(2)}-{m.group(3)}'
return None
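`_normalise_date` accepts both the basic (`YYYYMMDD`) and extended (`YYYY-MM-DD`) ISO 8601 date forms that appear in `BDAY`/`ANNIVERSARY` properties, emits the extended form, and returns `None` for anything else. A self-contained sketch:

```python
import re

def normalise_date(s: str):
    """Accept YYYYMMDD or YYYY-MM-DD; return YYYY-MM-DD, else None."""
    m = re.match(r'^(\d{4})-?(\d{2})-?(\d{2})$', s.strip()[:10])
    if m:
        return f'{m.group(1)}-{m.group(2)}-{m.group(3)}'
    return None

print(normalise_date('19850412'))    # 1985-04-12
print(normalise_date('1985-04-12'))  # 1985-04-12
print(normalise_date('not-a-date'))  # None
```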
# ---------------------------------------------------------------------------
# Address books
# ---------------------------------------------------------------------------
@api_bp.route('/addressbooks', methods=['GET'])
@token_required
@@ -49,7 +288,12 @@ def list_addressbooks():
address_book_id=b.id, shared_with_id=user.id
).first()
d['permission'] = share.permission if share else 'read'
d['owner_color'] = d.get('color')
if share and share.color:
d['color'] = share.color
d['owner_name'] = b.owner.username
d['owner_full_name'] = b.owner.full_name
d['owner_display_name'] = b.owner.display_name
d['contact_count'] = b.contacts.count()
result.append(d)
@@ -61,13 +305,19 @@ def list_addressbooks():
def create_addressbook():
user = request.current_user
data = request.get_json()
name = data.get('name', '').strip()
name = (data.get('name') or '').strip()
if not name:
return jsonify({'error': 'Name erforderlich'}), 400
book = AddressBook(owner_id=user.id, name=name, description=data.get('description', ''))
book = AddressBook(
owner_id=user.id,
name=name,
color=data.get('color', '#3788d8'),
description=data.get('description') or None,
)
db.session.add(book)
db.session.commit()
_notify_addressbook(user.id, book.id, 'created')
return jsonify(book.to_dict()), 201
@@ -77,31 +327,66 @@ def update_addressbook(book_id):
user = request.current_user
book = db.session.get(AddressBook, book_id)
if not book or book.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
return jsonify({'error': 'Nicht gefunden oder keine Berechtigung'}), 404
data = request.get_json()
if 'name' in data:
book.name = data['name'].strip()
if 'description' in data:
book.description = data['description']
book.description = data['description'] or None
if 'color' in data:
book.color = data['color']
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'updated',
shared_with=_book_recipients(book))
return jsonify(book.to_dict()), 200
@api_bp.route('/addressbooks/<int:book_id>/my-color', methods=['PUT'])
@token_required
def set_my_addressbook_color(book_id):
user = request.current_user
book = db.session.get(AddressBook, book_id)
if not book:
return jsonify({'error': 'Nicht gefunden'}), 404
color = ((request.get_json() or {}).get('color') or '').strip()
if book.owner_id == user.id:
if color:
book.color = color
db.session.commit()
return jsonify({'color': book.color}), 200
share = AddressBookShare.query.filter_by(
address_book_id=book_id, shared_with_id=user.id
).first()
if not share:
return jsonify({'error': 'Kein Zugriff'}), 403
share.color = color or None
db.session.commit()
return jsonify({'color': share.color or book.color}), 200
@api_bp.route('/addressbooks/<int:book_id>', methods=['DELETE'])
@token_required
def delete_addressbook(book_id):
user = request.current_user
book = db.session.get(AddressBook, book_id)
if not book or book.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
return jsonify({'error': 'Nicht gefunden oder keine Berechtigung'}), 404
recipients = _book_recipients(book)
owner_id = book.owner_id
bid = book.id
db.session.delete(book)
db.session.commit()
_notify_addressbook(owner_id, bid, 'deleted', shared_with=recipients)
return jsonify({'message': 'Adressbuch geloescht'}), 200
# --- Contacts ---
# ---------------------------------------------------------------------------
# Contacts
# ---------------------------------------------------------------------------
@api_bp.route('/addressbooks/<int:book_id>/contacts', methods=['GET'])
@token_required
@@ -111,14 +396,174 @@ def list_contacts(book_id):
if err:
return err
search = request.args.get('search', '').strip()
query = Contact.query.filter_by(address_book_id=book_id)
search = (request.args.get('q') or '').strip()
q = Contact.query.filter_by(address_book_id=book_id)
if search:
query = query.filter(Contact.display_name.ilike(f'%{search}%'))
contacts = query.order_by(Contact.display_name).all()
like = f'%{search}%'
q = q.filter(
(Contact.display_name.ilike(like)) |
(Contact.primary_email.ilike(like)) |
(Contact.organization.ilike(like))
)
contacts = q.order_by(Contact.display_name).all()
return jsonify([c.to_dict() for c in contacts]), 200
@api_bp.route('/addressbooks/<int:book_id>/export', methods=['GET'])
@token_required
def export_addressbook(book_id):
"""Export contacts as a single .vcf, a .zip with one .vcf per contact, or .csv."""
user = request.current_user
book, err = _get_addressbook_or_err(book_id, user)
if err:
return err
fmt = (request.args.get('format') or 'vcf').lower()
contacts = Contact.query.filter_by(address_book_id=book_id).order_by(Contact.display_name).all()
safe_name = re.sub(r'[^A-Za-z0-9._-]+', '_', book.name or 'kontakte') or 'kontakte'
if fmt == 'vcf':
body = '\r\n'.join((c.vcard_data or _build_vcard(c)).strip() for c in contacts) + '\r\n'
return Response(
body, mimetype='text/vcard; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{safe_name}.vcf"'},
)
if fmt == 'vcf-zip':
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
seen = {}
for c in contacts:
base = re.sub(r'[^A-Za-z0-9._-]+', '_', c.display_name or c.uid) or c.uid
seen[base] = seen.get(base, 0) + 1
fname = f"{base}.vcf" if seen[base] == 1 else f"{base}_{seen[base]}.vcf"
zf.writestr(fname, (c.vcard_data or _build_vcard(c)).strip() + '\r\n')
buf.seek(0)
return Response(
buf.read(), mimetype='application/zip',
headers={'Content-Disposition': f'attachment; filename="{safe_name}.zip"'},
)
if fmt == 'csv':
out = io.StringIO()
cols = ['display_name', 'prefix', 'first_name', 'middle_name', 'last_name', 'suffix',
'nickname', 'organization', 'department', 'job_title',
'primary_email', 'primary_phone', 'birthday', 'anniversary',
'emails', 'phones', 'addresses', 'websites', 'categories', 'notes']
w = csv.writer(out, delimiter=';', quoting=csv.QUOTE_ALL)
w.writerow(cols)
for c in contacts:
d = c.to_dict()
row = []
for col in cols:
v = d.get(col, '')
if isinstance(v, list):
if v and isinstance(v[0], dict):
v = '; '.join(
(x.get('value') or x.get('street') or '') +
(f" ({x.get('type')})" if x.get('type') else '')
for x in v if isinstance(x, dict)
)
else:
v = ', '.join(str(x) for x in v)
row.append('' if v is None else str(v))
w.writerow(row)
return Response(
'\ufeff' + out.getvalue(), mimetype='text/csv; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{safe_name}.csv"'},
)
return jsonify({'error': 'Unbekanntes Format'}), 400
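The CSV branch above flattens list-valued fields (emails, phones, addresses) into a single cell. The same logic in isolation, as a quick sanity check (the helper name is local to this sketch):

```python
# Flatten a contact field to one CSV cell: list-of-dict fields collapse
# to "value (type)" entries joined with "; ", plain lists join with ", ".
def flatten(v):
    if isinstance(v, list):
        if v and isinstance(v[0], dict):
            return '; '.join(
                (x.get('value') or x.get('street') or '') +
                (f" ({x.get('type')})" if x.get('type') else '')
                for x in v if isinstance(x, dict)
            )
        return ', '.join(str(x) for x in v)
    return '' if v is None else str(v)

cell = flatten([{'type': 'home', 'value': 'a@ex.org'}, {'value': 'b@ex.org'}])
# cell == 'a@ex.org (home); b@ex.org'
```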
@api_bp.route('/addressbooks/<int:book_id>/import', methods=['POST'])
@token_required
def import_addressbook(book_id):
"""Import vCard (.vcf, single oder mehrere im File) oder CSV."""
user = request.current_user
book, err = _get_addressbook_or_err(book_id, user, need_write=True)
if err:
return err
file = request.files.get('file')
if not file:
return jsonify({'error': 'Keine Datei'}), 400
raw = file.read()
name = (file.filename or '').lower()
try:
text = raw.decode('utf-8-sig')
except UnicodeDecodeError:
text = raw.decode('latin-1', errors='replace')
imported = 0
skipped = 0
def _add_from_parsed(parsed: dict, raw_text: str | None = None) -> bool:
nonlocal imported, skipped
if not parsed.get('display_name') and not parsed.get('first_name') \
and not parsed.get('last_name') and not parsed.get('organization'):
skipped += 1
return False
uid = parsed.get('uid') or str(uuid.uuid4())
existing = Contact.query.filter_by(address_book_id=book_id, uid=uid).first()
contact = existing or Contact(address_book_id=book_id, uid=uid, vcard_data='')
_apply_fields_to_contact(contact, parsed)
contact.vcard_data = (raw_text or '').strip() or _build_vcard(contact)
contact.updated_at = datetime.now(timezone.utc)
if not existing:
db.session.add(contact)
imported += 1
return True
if name.endswith('.csv') or (b',' in raw[:200] and b'BEGIN:VCARD' not in raw[:200]):
# CSV import
reader = csv.DictReader(io.StringIO(text), delimiter=';')
if not reader.fieldnames or len(reader.fieldnames) < 2:
# retry with comma as the delimiter
reader = csv.DictReader(io.StringIO(text), delimiter=',')
for row in reader:
row = {k.strip().lower(): (v or '').strip() for k, v in row.items() if k}
parsed = {
'display_name': row.get('display_name') or row.get('name')
or row.get('vollname') or row.get('full name'),
'first_name': row.get('first_name') or row.get('vorname'),
'last_name': row.get('last_name') or row.get('nachname'),
'middle_name': row.get('middle_name'),
'prefix': row.get('prefix') or row.get('anrede'),
'suffix': row.get('suffix'),
'nickname': row.get('nickname') or row.get('spitzname'),
'organization': row.get('organization') or row.get('firma') or row.get('company'),
'department': row.get('department') or row.get('abteilung'),
'job_title': row.get('job_title') or row.get('position') or row.get('title'),
'birthday': row.get('birthday') or row.get('geburtstag'),
'notes': row.get('notes') or row.get('notizen'),
'emails': [], 'phones': [], 'addresses': [], 'websites': [], 'categories': [],
}
email = row.get('primary_email') or row.get('email') or row.get('e-mail')
if email:
parsed['emails'].append({'type': 'home', 'value': email})
phone = row.get('primary_phone') or row.get('phone') or row.get('telefon') or row.get('mobil')
if phone:
parsed['phones'].append({'type': 'cell', 'value': phone})
cats = row.get('categories') or row.get('kategorien')
if cats:
parsed['categories'] = [c.strip() for c in cats.split(',') if c.strip()]
_add_from_parsed(parsed)
else:
# vCard - one or more cards per file
parts = re.findall(r'BEGIN:VCARD.*?END:VCARD', text, flags=re.DOTALL | re.IGNORECASE)
if not parts:
return jsonify({'error': 'Keine VCARD-Daten gefunden'}), 400
for vcf in parts:
try:
parsed = parse_vcard(vcf)
except Exception:
skipped += 1
continue
_add_from_parsed(parsed, raw_text=vcf)
db.session.commit()
if imported:
_notify_addressbook(book.owner_id, book.id, 'contact',
shared_with=_book_recipients(book))
return jsonify({'imported': imported, 'skipped': skipped}), 200
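The multi-card handling relies on a single non-greedy regex to split the upload into individual vCards. A standalone check of that split (sample data is made up):

```python
import re

# DOTALL lets a card span lines, IGNORECASE accepts lowercase exports;
# the non-greedy .*? stops each match at the first END:VCARD.
text = (
    "BEGIN:VCARD\r\nVERSION:3.0\r\nFN:Alice\r\nEND:VCARD\r\n"
    "begin:vcard\r\nversion:3.0\r\nfn:Bob\r\nend:vcard\r\n"
)
parts = re.findall(r'BEGIN:VCARD.*?END:VCARD', text,
                   flags=re.DOTALL | re.IGNORECASE)
# parts now holds two complete cards, one per contact
```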
@api_bp.route('/addressbooks/<int:book_id>/contacts', methods=['POST'])
@token_required
def create_contact(book_id):
@@ -127,29 +572,16 @@ def create_contact(book_id):
if err:
return err
data = request.get_json()
display_name = data.get('display_name', '').strip()
if not display_name:
return jsonify({'error': 'Name erforderlich'}), 400
contact_uid = str(uuid.uuid4())
email = data.get('email', '')
phone = data.get('phone', '')
org = data.get('organization', '')
notes = data.get('notes', '')
vcard = _build_vcard(contact_uid, display_name, email, phone, org, notes)
contact = Contact(
address_book_id=book_id,
uid=contact_uid,
vcard_data=vcard,
display_name=display_name,
email=email or None,
phone=phone or None,
)
data = request.get_json() or {}
contact = Contact(address_book_id=book_id, uid=str(uuid.uuid4()), vcard_data='')
_apply_fields_to_contact(contact, data)
if not contact.display_name:
return jsonify({'error': 'Name oder Firma erforderlich'}), 400
contact.vcard_data = _build_vcard(contact)
db.session.add(contact)
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'contact',
shared_with=_book_recipients(book))
return jsonify(contact.to_dict()), 201
@@ -160,11 +592,9 @@ def get_contact(contact_id):
contact = db.session.get(Contact, contact_id)
if not contact:
return jsonify({'error': 'Kontakt nicht gefunden'}), 404
book, err = _get_addressbook_or_err(contact.address_book_id, user)
if err:
return err
result = contact.to_dict()
result['vcard_data'] = contact.vcard_data
return jsonify(result), 200
@@ -177,29 +607,17 @@ def update_contact(contact_id):
contact = db.session.get(Contact, contact_id)
if not contact:
return jsonify({'error': 'Kontakt nicht gefunden'}), 404
book, err = _get_addressbook_or_err(contact.address_book_id, user, need_write=True)
if err:
return err
data = request.get_json()
if 'display_name' in data:
contact.display_name = data['display_name'].strip()
if 'email' in data:
contact.email = data['email'] or None
if 'phone' in data:
contact.phone = data['phone'] or None
contact.vcard_data = _build_vcard(
contact.uid,
contact.display_name,
data.get('email', contact.email or ''),
data.get('phone', contact.phone or ''),
data.get('organization', ''),
data.get('notes', ''),
)
data = request.get_json() or {}
_apply_fields_to_contact(contact, data)
contact.vcard_data = _build_vcard(contact)
contact.updated_at = datetime.now(timezone.utc)
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'contact',
shared_with=_book_recipients(book))
return jsonify(contact.to_dict()), 200
@@ -210,17 +628,19 @@ def delete_contact(contact_id):
contact = db.session.get(Contact, contact_id)
if not contact:
return jsonify({'error': 'Kontakt nicht gefunden'}), 404
book, err = _get_addressbook_or_err(contact.address_book_id, user, need_write=True)
if err:
return err
db.session.delete(contact)
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'contact',
shared_with=_book_recipients(book))
return jsonify({'message': 'Kontakt geloescht'}), 200
# --- Sharing ---
# ---------------------------------------------------------------------------
# Sharing
# ---------------------------------------------------------------------------
@api_bp.route('/addressbooks/<int:book_id>/share', methods=['POST'])
@token_required
@@ -230,10 +650,9 @@ def share_addressbook(book_id):
if not book or book.owner_id != user.id:
return jsonify({'error': 'Nur der Eigentuemer kann teilen'}), 403
data = request.get_json()
username = data.get('username', '').strip()
data = request.get_json() or {}
username = (data.get('username') or '').strip()
permission = data.get('permission', 'read')
if permission not in ('read', 'readwrite'):
return jsonify({'error': 'Ungueltige Berechtigung'}), 400
@@ -246,7 +665,6 @@ def share_addressbook(book_id):
existing = AddressBookShare.query.filter_by(
address_book_id=book_id, shared_with_id=target.id
).first()
is_new = not existing
if existing:
existing.permission = permission
else:
@@ -254,16 +672,9 @@ def share_addressbook(book_id):
address_book_id=book_id, shared_with_id=target.id, permission=permission
)
db.session.add(share)
db.session.commit()
if is_new:
try:
from app.services.system_mail import notify_contacts_shared
notify_contacts_shared(book.name, user.username, target, permission)
except Exception:
pass
_notify_addressbook(book.owner_id, book.id, 'share',
shared_with=[target.id, *_book_recipients(book)])
return jsonify({'message': f'Adressbuch mit {username} geteilt'}), 200
@@ -274,7 +685,6 @@ def list_addressbook_shares(book_id):
book = db.session.get(AddressBook, book_id)
if not book or book.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
shares = AddressBookShare.query.filter_by(address_book_id=book_id).all()
return jsonify([{
'id': s.id,
@@ -291,17 +701,20 @@ def remove_addressbook_share(book_id, share_id):
book = db.session.get(AddressBook, book_id)
if not book or book.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
share = db.session.get(AddressBookShare, share_id)
if not share or share.address_book_id != book_id:
return jsonify({'error': 'Freigabe nicht gefunden'}), 404
target_id = share.shared_with_id
db.session.delete(share)
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'share',
shared_with=[target_id, *_book_recipients(book)])
return jsonify({'message': 'Freigabe entfernt'}), 200
# --- Import/Export ---
# ---------------------------------------------------------------------------
# vCard export (all contacts of a book)
# ---------------------------------------------------------------------------
@api_bp.route('/addressbooks/<int:book_id>/export', methods=['GET'])
@token_required
@@ -310,40 +723,9 @@ def export_contacts(book_id):
book, err = _get_addressbook_or_err(book_id, user)
if err:
return err
contacts = Contact.query.filter_by(address_book_id=book_id).all()
vcards = '\r\n'.join(c.vcard_data for c in contacts)
from flask import Response
parts = [c.vcard_data for c in book.contacts]
return Response(
vcards,
mimetype='text/vcard',
'\r\n'.join(parts),
mimetype='text/vcard; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{book.name}.vcf"'},
)
# --- Helpers ---
def _build_vcard(uid, display_name, email='', phone='', org='', notes=''):
parts = display_name.split(' ', 1)
first = parts[0]
last = parts[1] if len(parts) > 1 else ''
lines = [
'BEGIN:VCARD',
'VERSION:3.0',
f'UID:{uid}',
f'FN:{display_name}',
f'N:{last};{first};;;',
]
if email:
lines.append(f'EMAIL:{email}')
if phone:
lines.append(f'TEL:{phone}')
if org:
lines.append(f'ORG:{org}')
if notes:
lines.append(f'NOTE:{notes}')
lines.append(f'REV:{datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")}')
lines.append('END:VCARD')
return '\r\n'.join(lines)
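The legacy helper removed above can be exercised in isolation; a self-contained copy of the same shape (the `build_vcard` name is local to this sketch, and the `REV` line is timestamp-dependent):

```python
from datetime import datetime, timezone

def build_vcard(uid, display_name, email='', phone='', org='', notes=''):
    # Minimal vCard 3.0: FN plus an N derived by splitting the display
    # name at the first space, optional single-value properties after.
    first, _, last = display_name.partition(' ')
    lines = ['BEGIN:VCARD', 'VERSION:3.0', f'UID:{uid}',
             f'FN:{display_name}', f'N:{last};{first};;;']
    for prop, value in (('EMAIL', email), ('TEL', phone),
                        ('ORG', org), ('NOTE', notes)):
        if value:
            lines.append(f'{prop}:{value}')
    lines.append(f'REV:{datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")}')
    lines.append('END:VCARD')
    return '\r\n'.join(lines)

card = build_vcard('abc-123', 'Max Mustermann', email='max@example.org')
```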
@@ -15,6 +15,45 @@ from app.api import api_bp
from app.api.auth import token_required
from app.extensions import db, bcrypt
from app.models.file import File, FilePermission, ShareLink
from app.models.file_lock import FileLock
from app.services.events import broadcaster, notify_file_change
def _share_recipients(file_obj):
"""Return a list of user ids (besides the owner) that should see changes
to this file because they have a direct share permission on it or on
any of its ancestor folders."""
ids = set()
cur = file_obj
while cur is not None:
for p in FilePermission.query.filter_by(file_id=cur.id).all():
ids.add(p.user_id)
cur = cur.parent
ids.discard(file_obj.owner_id)
return list(ids)
def _effective_permission(file_obj, user):
"""Returns (permission_level, can_reshare) for the given user on this file,
walking up the folder tree. Owner gets ('admin', True). Returns
(None, False) if no access."""
if file_obj.owner_id == user.id:
return ('admin', True)
levels = {'read': 0, 'write': 1, 'admin': 2}
best_level = -1
best_perm = None
best_reshare = False
cur = file_obj
while cur is not None:
perm = FilePermission.query.filter_by(file_id=cur.id, user_id=user.id).first()
if perm:
lvl = levels.get(perm.permission, -1)
if lvl > best_level:
best_level = lvl
best_perm = perm.permission
best_reshare = perm.can_reshare
cur = cur.parent
return (best_perm, best_reshare)
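The ancestor walk in `_effective_permission` can be sketched without the SQLAlchemy models; plain objects stand in for `File` and `FilePermission` (class and function names here are hypothetical):

```python
# Keep the strongest permission found on the node or any ancestor.
LEVELS = {'read': 0, 'write': 1, 'admin': 2}

class Node:
    def __init__(self, parent=None, perm=None, reshare=False):
        self.parent, self.perm, self.reshare = parent, perm, reshare

def effective_permission(node):
    best_level, best_perm, best_reshare = -1, None, False
    cur = node
    while cur is not None:
        if cur.perm is not None:
            lvl = LEVELS.get(cur.perm, -1)
            if lvl > best_level:
                best_level, best_perm, best_reshare = lvl, cur.perm, cur.reshare
        cur = cur.parent
    return best_perm, best_reshare

folder = Node(perm='write', reshare=True)  # share granted on the folder
doc = Node(parent=folder)                  # file inherits via the walk
```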
def _user_upload_dir(user_id):
@@ -25,16 +64,22 @@ def _user_upload_dir(user_id):
def _check_file_access(file_obj, user, permission='read'):
"""Check if user has access to file. Owner always has full access."""
"""Check if user has access to file. Owner always has full access.
A permission on an ancestor folder also grants access to all descendants."""
if file_obj.owner_id == user.id:
return True
perm = FilePermission.query.filter_by(
file_id=file_obj.id, user_id=user.id
).first()
if not perm:
return False
perm_levels = {'read': 0, 'write': 1, 'admin': 2}
return perm_levels.get(perm.permission, -1) >= perm_levels.get(permission, 0)
needed = perm_levels.get(permission, 0)
# Walk up the tree looking for a permission on this file or any ancestor
cur = file_obj
while cur is not None:
perm = FilePermission.query.filter_by(
file_id=cur.id, user_id=user.id
).first()
if perm and perm_levels.get(perm.permission, -1) >= needed:
return True
cur = cur.parent
return False
def _get_file_or_403(file_id, user, permission='read'):
@@ -62,9 +107,25 @@ def list_files():
user = request.current_user
parent_id = request.args.get('parent_id', None, type=int)
# Own files in this folder (exclude trashed)
query = File.query.filter_by(owner_id=user.id, parent_id=parent_id, is_trashed=False)
files = query.order_by(File.is_folder.desc(), File.name).all()
# When browsing into a folder, verify access first. If the folder is
# shared with us (directly or via an ancestor), list ALL its children
# - not just ones owned by us.
if parent_id is not None:
parent_folder, perr = _get_file_or_403(parent_id, user, 'read')
if perr:
return perr
if parent_folder.owner_id == user.id:
files = File.query.filter_by(
owner_id=user.id, parent_id=parent_id, is_trashed=False
).order_by(File.is_folder.desc(), File.name).all()
else:
files = File.query.filter_by(
parent_id=parent_id, is_trashed=False
).order_by(File.is_folder.desc(), File.name).all()
else:
files = File.query.filter_by(
owner_id=user.id, parent_id=None, is_trashed=False
).order_by(File.is_folder.desc(), File.name).all()
# Shared files at root level
shared = []
@@ -74,7 +135,7 @@ def list_files():
if shared_file_ids:
shared = File.query.filter(
File.id.in_(shared_file_ids),
File.parent_id.is_(None)
File.is_trashed == False # noqa: E712
).order_by(File.is_folder.desc(), File.name).all()
result = []
@@ -82,10 +143,21 @@ def list_files():
d = f.to_dict()
d['has_shares'] = ShareLink.query.filter_by(file_id=f.id).count() > 0
d['has_permissions'] = FilePermission.query.filter_by(file_id=f.id).count() > 0
my_perm, my_reshare = _effective_permission(f, user)
d['my_permission'] = my_perm
d['my_can_reshare'] = bool(my_reshare)
lock = FileLock.get_lock(f.id)
if lock:
d['locked'] = True
d['locked_by'] = lock.user.username
d['locked_at'] = lock.locked_at.isoformat()
result.append(d)
for f in shared:
d = f.to_dict()
d['shared'] = True
my_perm, my_reshare = _effective_permission(f, user)
d['my_permission'] = my_perm
d['my_can_reshare'] = bool(my_reshare)
result.append(d)
# Build breadcrumb
@@ -131,6 +203,8 @@ def create_folder():
)
db.session.add(folder)
db.session.commit()
notify_file_change(folder.owner_id, folder.id, 'created',
shared_with=_share_recipients(folder))
return jsonify(folder.to_dict()), 201
@@ -222,6 +296,8 @@ def upload_file():
existing.checksum = checksum
existing.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_file_change(existing.owner_id, existing.id, 'updated',
shared_with=_share_recipients(existing))
return jsonify(existing.to_dict()), 200
file_obj = File(
@@ -236,6 +312,8 @@ def upload_file():
)
db.session.add(file_obj)
db.session.commit()
notify_file_change(file_obj.owner_id, file_obj.id, 'created',
shared_with=_share_recipients(file_obj))
return jsonify(file_obj.to_dict()), 201
@@ -256,8 +334,11 @@ def download_file(file_id):
if not filepath.exists():
return jsonify({'error': 'Datei auf Datentraeger nicht gefunden'}), 404
return send_file(str(filepath), mimetype=f.mime_type, as_attachment=True,
download_name=f.name)
# inline=1 renders the file in-browser (used by PDF/image previews).
# Default is attachment so normal download buttons still save to disk.
inline = request.args.get('inline', '0') == '1'
return send_file(str(filepath), mimetype=f.mime_type,
as_attachment=not inline, download_name=f.name)
def _download_folder_as_zip(folder):
@@ -300,6 +381,11 @@ def update_file(file_id):
if err:
return err
# Lock check: a lock held by someone else blocks changes (admins may pass)
lock = FileLock.get_lock(file_id)
if lock and lock.locked_by != user.id and user.role != 'admin':
return jsonify({'error': f'Datei ist von {lock.user.username} ausgecheckt'}), 423
data = request.get_json()
if 'name' in data:
name = data['name'].strip()
@@ -325,6 +411,8 @@ def update_file(file_id):
f.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_file_change(f.owner_id, f.id, 'updated',
shared_with=_share_recipients(f))
return jsonify(f.to_dict()), 200
@@ -340,9 +428,18 @@ def delete_file(file_id):
if not f or f.owner_id != user.id:
return jsonify({'error': 'Zugriff verweigert'}), 403
# Lock check
lock = FileLock.get_lock(file_id)
if lock and lock.locked_by != user.id and user.role != 'admin':
return jsonify({'error': f'Datei ist von {lock.user.username} ausgecheckt'}), 423
# Capture recipients BEFORE we detach the file from its parent tree
recipients = _share_recipients(f)
owner_id = f.owner_id
# Soft-delete: move to trash
_trash_recursive(f)
db.session.commit()
notify_file_change(owner_id, f.id, 'deleted', shared_with=recipients)
return jsonify({'message': 'In Papierkorb verschoben'}), 200
@@ -475,12 +572,21 @@ def empty_trash():
@token_required
def get_permissions(file_id):
user = request.current_user
f, err = _get_file_or_403(file_id, user, 'admin')
if err:
if not (f := db.session.get(File, file_id)) or f.owner_id != user.id:
return jsonify({'error': 'Zugriff verweigert'}), 403
f = db.session.get(File, file_id)
if not f:
return jsonify({'error': 'Datei nicht gefunden'}), 404
is_owner = (f.owner_id == user.id)
my_perm, my_reshare = _effective_permission(f, user)
if not is_owner and not my_reshare:
return jsonify({'error': 'Zugriff verweigert'}), 403
# Owners see everyone; re-sharers only see perms they granted themselves.
if is_owner:
perms = FilePermission.query.filter_by(file_id=file_id).all()
else:
perms = FilePermission.query.filter_by(file_id=file_id, granted_by=user.id).all()
perms = FilePermission.query.filter_by(file_id=file_id).all()
from app.models.user import User
result = []
for p in perms:
@@ -490,6 +596,8 @@ def get_permissions(file_id):
'user_id': p.user_id,
'username': u.username if u else None,
'permission': p.permission,
'can_reshare': bool(p.can_reshare),
'granted_by': p.granted_by,
})
return jsonify(result), 200
@@ -499,33 +607,69 @@ def get_permissions(file_id):
def set_permission(file_id):
user = request.current_user
f = db.session.get(File, file_id)
if not f or f.owner_id != user.id:
return jsonify({'error': 'Nur der Eigentuemer kann Berechtigungen setzen'}), 403
if not f:
return jsonify({'error': 'Datei nicht gefunden'}), 404
is_owner = (f.owner_id == user.id)
my_perm, my_reshare = _effective_permission(f, user)
if not is_owner and not my_reshare:
return jsonify({'error': 'Keine Berechtigung zum Weiterteilen'}), 403
data = request.get_json()
target_user_id = data.get('user_id')
permission = data.get('permission', 'read')
can_reshare_req = bool(data.get('can_reshare', False))
if permission not in ('read', 'write', 'admin'):
return jsonify({'error': 'Ungueltige Berechtigung'}), 400
# Re-sharers can't hand out more than they have themselves.
levels = {'read': 0, 'write': 1, 'admin': 2}
if not is_owner:
max_allowed = levels.get(my_perm, -1)
if levels.get(permission, -1) > max_allowed:
return jsonify({
'error': f'Du kannst hoechstens "{my_perm}" weiterverteilen'
}), 403
if permission == 'admin':
return jsonify({'error': 'Admin-Recht kann nur der Eigentuemer vergeben'}), 403
from app.models.user import User
target = db.session.get(User, target_user_id)
if not target:
return jsonify({'error': 'Benutzer nicht gefunden'}), 404
if target.id == f.owner_id:
return jsonify({'error': 'Eigentuemer hat bereits Vollzugriff'}), 400
existing = FilePermission.query.filter_by(
file_id=file_id, user_id=target_user_id
).first()
is_new = not existing
if existing:
# Re-sharers may only modify perms they themselves granted
if not is_owner and existing.granted_by != user.id:
return jsonify({'error': 'Diese Freigabe wurde von jemand anderem erstellt'}), 403
existing.permission = permission
existing.can_reshare = can_reshare_req
if is_new or existing.granted_by is None:
existing.granted_by = user.id
else:
perm = FilePermission(file_id=file_id, user_id=target_user_id, permission=permission)
perm = FilePermission(
file_id=file_id,
user_id=target_user_id,
permission=permission,
can_reshare=can_reshare_req,
granted_by=user.id,
)
db.session.add(perm)
db.session.commit()
# SSE: notify target user (they just got/updated access) + owner + other
# share recipients so everyone's file list refreshes.
notify_file_change(f.owner_id, f.id, 'permission',
shared_with=[target.id, *_share_recipients(f)])
# Notify user via email
if is_new:
try:
@@ -542,15 +686,24 @@ def set_permission(file_id):
def remove_permission(file_id, perm_id):
user = request.current_user
f = db.session.get(File, file_id)
if not f or f.owner_id != user.id:
return jsonify({'error': 'Nur der Eigentuemer kann Berechtigungen entfernen'}), 403
if not f:
return jsonify({'error': 'Datei nicht gefunden'}), 404
perm = db.session.get(FilePermission, perm_id)
if not perm or perm.file_id != file_id:
return jsonify({'error': 'Berechtigung nicht gefunden'}), 404
is_owner = (f.owner_id == user.id)
if not is_owner and perm.granted_by != user.id:
return jsonify({'error': 'Du kannst nur selbst erstellte Freigaben entfernen'}), 403
target_user_id = perm.user_id
db.session.delete(perm)
db.session.commit()
notify_file_change(f.owner_id, f.id, 'permission',
shared_with=[target_user_id, *_share_recipients(f)])
return jsonify({'message': 'Berechtigung entfernt'}), 200
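The grant rules enforced in `set_permission` boil down to a small predicate: non-owners may hand out at most their own level and never `admin`. A standalone sketch (the function name is hypothetical; the level table matches the endpoint):

```python
LEVELS = {'read': 0, 'write': 1, 'admin': 2}

def may_grant(is_owner, my_perm, requested):
    # Owners may grant anything; re-sharers are capped at their own level
    # and can never hand out 'admin'.
    if is_owner:
        return True
    if requested == 'admin':
        return False
    return LEVELS.get(requested, -1) <= LEVELS.get(my_perm, -1)
```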
@@ -560,9 +713,14 @@ def remove_permission(file_id, perm_id):
@token_required
def create_share_link(file_id):
user = request.current_user
f, err = _get_file_or_403(file_id, user, 'read')
if err:
return err
f = db.session.get(File, file_id)
if not f:
return jsonify({'error': 'Datei nicht gefunden'}), 404
is_owner = (f.owner_id == user.id)
my_perm, my_reshare = _effective_permission(f, user)
if not is_owner and not my_reshare:
return jsonify({'error': 'Keine Berechtigung zum Weiterteilen'}), 403
data = request.get_json() or {}
password = data.get('password')
@@ -573,6 +731,18 @@ def create_share_link(file_id):
if permission not in ('read', 'write', 'upload_only'):
return jsonify({'error': 'Berechtigung muss "read", "write" oder "upload_only" sein'}), 400
# Re-sharers can only hand out what they have themselves.
if not is_owner:
levels = {'read': 0, 'write': 1}
max_allowed = levels.get(my_perm, -1)
requested = levels.get(permission, 99)
if requested > max_allowed:
return jsonify({
'error': f'Du hast selbst nur "{my_perm}" - kannst nicht schreibend weiterteilen'
}), 403
if permission == 'upload_only' and my_perm not in ('write', 'admin'):
return jsonify({'error': 'Upload-Links nur mit Schreibrecht moeglich'}), 403
token = secrets.token_urlsafe(32)
password_hash = None
if password:
@@ -975,33 +1145,205 @@ def delete_share_link(token):
return jsonify({'message': 'Link geloescht'}), 200
# --- File Locking ---
@api_bp.route('/files/<int:file_id>/lock', methods=['POST'])
@token_required
def lock_file(file_id):
"""Lock a file (check out). Prevents others from opening/editing."""
user = request.current_user
f = db.session.get(File, file_id)
if not f:
return jsonify({'error': 'Datei nicht gefunden'}), 404
# Check existing lock
existing = FileLock.get_lock(file_id)
if existing:
if existing.locked_by == user.id:
# Already locked by this user - refresh heartbeat
existing.heartbeat_at = datetime.now(timezone.utc)
db.session.commit()
return jsonify(existing.to_dict()), 200
return jsonify({
'error': f'Datei wird von {existing.user.username} bearbeitet',
'locked_by': existing.user.username,
'locked_at': existing.locked_at.isoformat(),
}), 423 # 423 Locked
data = request.get_json(silent=True) or {}
lock = FileLock(
file_id=file_id,
locked_by=user.id,
client_info=data.get('client_info', ''),
)
db.session.add(lock)
db.session.commit()
notify_file_change(f.owner_id, f.id, 'locked',
shared_with=_share_recipients(f))
return jsonify(lock.to_dict()), 200
@api_bp.route('/files/<int:file_id>/unlock', methods=['POST'])
@token_required
def unlock_file(file_id):
"""Unlock a file (check in)."""
user = request.current_user
lock = FileLock.get_lock(file_id)
if not lock:
return jsonify({'message': 'Datei war nicht gesperrt'}), 200
if lock.locked_by != user.id and user.role != 'admin':
return jsonify({'error': 'Nur der Sperrer oder ein Admin kann entsperren'}), 403
db.session.delete(lock)
db.session.commit()
f = db.session.get(File, file_id)
if f:
notify_file_change(f.owner_id, f.id, 'unlocked',
shared_with=_share_recipients(f))
return jsonify({'message': 'Datei entsperrt'}), 200
@api_bp.route('/files/<int:file_id>/heartbeat', methods=['POST'])
@token_required
def heartbeat_file(file_id):
"""Heartbeat - signal that the file is still being edited."""
user = request.current_user
lock = FileLock.get_lock(file_id)
if not lock:
return jsonify({'error': 'Keine Sperre vorhanden'}), 404
if lock.locked_by != user.id:
return jsonify({'error': 'Sperre gehoert einem anderen Benutzer'}), 403
lock.heartbeat_at = datetime.now(timezone.utc)
db.session.commit()
return jsonify({'message': 'Heartbeat aktualisiert'}), 200
@api_bp.route('/files/<int:file_id>/lock-status', methods=['GET'])
@token_required
def lock_status(file_id):
"""Check if a file is locked."""
lock = FileLock.get_lock(file_id)
if not lock:
return jsonify({'locked': False}), 200
return jsonify({
'locked': True,
'locked_by': lock.user.username,
'locked_by_id': lock.locked_by,
'locked_at': lock.locked_at.isoformat(),
'client_info': lock.client_info,
}), 200
@api_bp.route('/files/locks', methods=['GET'])
@token_required
def list_locks():
"""List all active locks (for admin overview or sync clients)."""
# Cleanup expired first
FileLock.cleanup_expired()
locks = FileLock.query.all()
return jsonify([l.to_dict() for l in locks]), 200
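The checkout rules above (holder refreshes the heartbeat, everyone else gets 423 Locked) can be modeled in memory; the dict-based store is an assumption for illustration only:

```python
from datetime import datetime, timezone

locks = {}  # file_id -> {'locked_by': user_id, 'heartbeat_at': datetime}

def try_lock(file_id, user_id):
    lock = locks.get(file_id)
    if lock:
        if lock['locked_by'] == user_id:
            lock['heartbeat_at'] = datetime.now(timezone.utc)  # refresh
            return 200
        return 423  # held by someone else
    locks[file_id] = {'locked_by': user_id,
                      'heartbeat_at': datetime.now(timezone.utc)}
    return 200

def unlock(file_id, user_id, is_admin=False):
    # Only the holder or an admin may release; unlocking an unlocked
    # file is treated as success, mirroring the endpoint.
    lock = locks.get(file_id)
    if lock and (lock['locked_by'] == user_id or is_admin):
        del locks[file_id]
        return 200
    return 403 if lock else 200
```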
# --- Sync API ---
@api_bp.route('/sync/tree', methods=['GET'])
@token_required
def sync_tree():
"""Returns complete file tree with checksums for sync clients."""
"""Returns complete file tree with checksums for sync clients.
Includes both files owned by the user (under 'tree') and files
shared WITH the user (as a virtual 'Geteilt mit mir' ("shared with me") folder under
'shared'). The client merges both.
"""
user = request.current_user
def _entry(f):
entry = {
'id': f.id,
'name': f.name,
'is_folder': f.is_folder,
'size': f.size,
'checksum': f.checksum,
'updated_at': f.updated_at.isoformat() if f.updated_at else None,
'modified_at': f.updated_at.isoformat() if f.updated_at else None,
}
lock = FileLock.get_lock(f.id)
if lock:
entry['locked'] = True
entry['locked_by'] = lock.user.username
return entry
def _build_tree(parent_id):
files = File.query.filter_by(owner_id=user.id, parent_id=parent_id)\
files = File.query.filter_by(owner_id=user.id, parent_id=parent_id, is_trashed=False)\
.order_by(File.is_folder.desc(), File.name).all()
result = []
for f in files:
entry = {
'id': f.id,
'name': f.name,
'is_folder': f.is_folder,
'size': f.size,
'checksum': f.checksum,
'updated_at': f.updated_at.isoformat() if f.updated_at else None,
}
e = _entry(f)
if f.is_folder:
entry['children'] = _build_tree(f.id)
result.append(entry)
e['children'] = _build_tree(f.id)
result.append(e)
return result
return jsonify({'tree': _build_tree(None)}), 200
def _build_shared_children(parent_id):
files = File.query.filter_by(parent_id=parent_id, is_trashed=False)\
.order_by(File.is_folder.desc(), File.name).all()
out = []
for f in files:
e = _entry(f)
if f.is_folder:
e['children'] = _build_shared_children(f.id)
out.append(e)
return out
shared_perms = FilePermission.query.filter_by(user_id=user.id).all()
shared_roots = []
seen = set()
for perm in shared_perms:
f = perm.file
if not f or f.is_trashed or f.id in seen:
continue
seen.add(f.id)
# Nur "Top-Level"-Shares: wenn der Eltern-Ordner NICHT auch geteilt
# ist, ist dieses Item die Wurzel des Shares beim Empfaenger.
parent_shared = any(
p.file_id == f.parent_id for p in shared_perms
) if f.parent_id else False
if parent_shared:
continue
e = _entry(f)
owner = f.owner.display_name if hasattr(f, 'owner') and f.owner else None
if owner:
e['name'] = f'{f.name} (von {owner})'
if f.is_folder:
e['children'] = _build_shared_children(f.id)
shared_roots.append(e)
return jsonify({
'tree': _build_tree(None),
'shared': shared_roots,
}), 200
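On the client side, the two halves of the payload merge into one listing, with the shares surfaced under the virtual "Geteilt mit mir" folder. A sketch of that merge; the payload shape follows the endpoint, the sample data is made up:

```python
payload = {
    'tree': [{'id': 1, 'name': 'Docs', 'is_folder': True, 'children': []}],
    'shared': [{'id': 7, 'name': 'Budget.xlsx (von alice)', 'is_folder': False}],
}

def merged_root(payload):
    # Owned items first, then one synthetic folder wrapping all shares.
    items = list(payload['tree'])
    if payload.get('shared'):
        items.append({'id': None, 'name': 'Geteilt mit mir',
                      'is_folder': True, 'children': payload['shared']})
    return items

root = merged_root(payload)
```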
@api_bp.route('/sync/events', methods=['GET'])
@token_required
def sync_events():
"""Server-Sent Events stream: real-time file change notifications."""
user = request.current_user
user_id = user.id
def event_stream():
yield from broadcaster.stream(user_id)
resp = Response(event_stream(), mimetype='text/event-stream')
resp.headers['Cache-Control'] = 'no-cache'
resp.headers['X-Accel-Buffering'] = 'no' # disable nginx buffering
resp.headers['Connection'] = 'keep-alive'
return resp
@api_bp.route('/sync/changes', methods=['GET'])
@@ -1,21 +1,29 @@
import io
import os
import hashlib
from datetime import datetime, timezone
from datetime import datetime, timezone, timedelta
from pathlib import Path
from flask import request, jsonify, current_app, send_file
from app.api import api_bp
from app.api.auth import token_required
from app.api.files import _get_file_or_403
from app.api.files import _get_file_or_403, _share_recipients
from app.extensions import db
from app.models.settings import AppSettings
from app.services.events import notify_file_change
@api_bp.route('/files/<int:file_id>/preview', methods=['GET'])
@token_required
def preview_file(file_id):
from flask import after_this_request
@after_this_request
def add_no_cache(response):
response.headers['Cache-Control'] = 'no-cache, no-store, must-revalidate'
response.headers['Pragma'] = 'no-cache'
return response
user = request.current_user
f, err = _get_file_or_403(file_id, user, 'read')
if err:
@@ -212,6 +220,8 @@ def save_file(file_id):
f.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_file_change(f.owner_id, f.id, 'updated',
shared_with=_share_recipients(f))
return jsonify({'message': 'Gespeichert', 'size': f.size}), 200
except Exception as e:
return jsonify({'error': f'Speichern fehlgeschlagen: {str(e)}'}), 500
@@ -330,7 +340,7 @@ def onlyoffice_config(file_id):
if err:
return err
oo_url = AppSettings.get('onlyoffice_url', os.environ.get('ONLYOFFICE_URL', ''))
oo_url = os.environ.get('ONLYOFFICE_URL', '')
if not oo_url:
return jsonify({'error': 'OnlyOffice nicht konfiguriert', 'available': False}), 200
@@ -353,9 +363,11 @@ def onlyoffice_config(file_id):
AppSettings.set(f'oo_callback_{callback_key}', str(file_id))
# Build the config
# The URLs must be reachable by OnlyOffice server (not the browser)
base_url = request.host_url.rstrip('/')
token = request.args.get('token', '') or request.headers.get('Authorization', '').replace('Bearer ', '')
internal_url = os.environ.get('ONLYOFFICE_INTERNAL_URL', 'http://minicloud:5000')
# Generate a one-time file access key (no JWT needed, simpler for OnlyOffice)
file_access_key = _secrets.token_urlsafe(32)
AppSettings.set(f'oo_file_{file_access_key}', f'{file_id}:{user.id}')
config = {
'available': True,
@@ -363,14 +375,15 @@ def onlyoffice_config(file_id):
'config': {
'document': {
'fileType': ext,
'key': f'{file_id}_{f.checksum or "0"}_{callback_key[:8]}',
'key': f'{file_id}_{int(datetime.now(timezone.utc).timestamp())}_{callback_key[:8]}',
'title': f.name,
'url': f'{base_url}/api/files/{file_id}/download?token={token}',
'url': f'{internal_url}/api/files/oo-download/{file_access_key}',
},
'documentType': doc_type,
'editorConfig': {
'callbackUrl': f'{base_url}/api/files/onlyoffice-callback?key={callback_key}',
'callbackUrl': f'{internal_url}/api/files/onlyoffice-callback?key={callback_key}',
'mode': 'edit' if can_write else 'view',
'forcesavetype': 0,
'lang': 'de',
'user': {
'id': str(user.id),
@@ -380,8 +393,8 @@ def onlyoffice_config(file_id):
},
}
# Sign with JWT if secret is set
jwt_secret = AppSettings.get('onlyoffice_jwt_secret', '')
# Sign config with JWT for OnlyOffice validation
jwt_secret = os.environ.get('JWT_SECRET_KEY', '')
if jwt_secret:
import jwt as pyjwt
config['config']['token'] = pyjwt.encode(config['config'], jwt_secret, algorithm='HS256')
@@ -389,57 +402,109 @@ def onlyoffice_config(file_id):
return jsonify(config), 200
@api_bp.route('/files/oo-download/<access_key>', methods=['GET'])
def oo_download(access_key):
"""Dedicated download endpoint for OnlyOffice - no JWT auth, uses one-time key."""
data = AppSettings.get(f'oo_file_{access_key}', '')
if not data:
return jsonify({'error': 'Ungueltiger Zugangsschluessel'}), 403
parts = data.split(':')
if len(parts) != 2:
return jsonify({'error': 'Ungueltiger Zugangsschluessel'}), 403
file_id = int(parts[0])
from app.models.file import File
f = db.session.get(File, file_id)
if not f:
return jsonify({'error': 'Datei nicht gefunden'}), 404
filepath = Path(current_app.config['UPLOAD_PATH']) / str(f.owner_id) / f.storage_path
if not filepath.exists():
return jsonify({'error': 'Datei nicht auf Datentraeger'}), 404
return send_file(str(filepath), mimetype=f.mime_type or 'application/octet-stream',
as_attachment=False, download_name=f.name)
@api_bp.route('/files/onlyoffice-callback', methods=['POST'])
def onlyoffice_callback():
"""Callback from OnlyOffice when document is saved."""
import urllib.request
"""Callback from OnlyOffice when document is saved.
callback_key = request.args.get('key', '')
file_id_str = AppSettings.get(f'oo_callback_{callback_key}', '')
OnlyOffice sends status codes:
1 = editing, 2 = ready to save, 4 = closed no changes, 6 = force save
Must always return {"error": 0} for success.
"""
try:
import jwt as pyjwt
import urllib.request
import shutil
if not file_id_str:
return jsonify({'error': 1}), 200 # OnlyOffice expects {"error": 0} for success
jwt_secret = os.environ.get('JWT_SECRET_KEY', '')
data = request.get_json()
status = data.get('status', 0)
# Get callback data - may be JWT-wrapped
data = request.get_json(silent=True) or {}
print(f'[OnlyOffice Callback] Raw status={data.get("status")}, key={request.args.get("key", "")}')
# Status 2 = document ready for saving, 6 = force save
if status in (2, 6):
download_url = data.get('url', '')
if download_url:
# If body contains a JWT token, decode it to get the real data
if 'token' in data and jwt_secret:
try:
from app.models.file import File
file_id = int(file_id_str)
f = db.session.get(File, file_id)
if f:
filepath = Path(current_app.config['UPLOAD_PATH']) / str(f.owner_id) / f.storage_path
# Download the saved document from OnlyOffice
urllib.request.urlretrieve(download_url, str(filepath))
# Update metadata
f.size = os.path.getsize(str(filepath))
h = hashlib.sha256()
with open(str(filepath), 'rb') as fh:
for chunk in iter(lambda: fh.read(8192), b''):
h.update(chunk)
f.checksum = h.hexdigest()
f.updated_at = datetime.now(timezone.utc)
db.session.commit()
decoded = pyjwt.decode(data['token'], jwt_secret, algorithms=['HS256'])
data = decoded
except Exception as e:
print(f'[OnlyOffice Callback] Error: {e}')
return jsonify({'error': 1}), 200
print(f'[OnlyOffice Callback] Body JWT decode failed (using raw data): {e}')
# Status 4 = closed without changes
if status in (2, 4, 6):
# Cleanup callback key
try:
setting = db.session.get(AppSettings, f'oo_callback_{callback_key}')
if setting:
db.session.delete(setting)
db.session.commit()
except Exception:
pass
status = data.get('status', 0)
callback_key = request.args.get('key', '')
# Status 2 or 6: save the document
if status in (2, 6):
file_id_str = AppSettings.get(f'oo_callback_{callback_key}', '')
if file_id_str:
download_url = data.get('url', '')
if download_url:
from app.models.file import File
file_id = int(file_id_str)
f = db.session.get(File, file_id)
if f and f.storage_path:
filepath = Path(current_app.config['UPLOAD_PATH']) / str(f.owner_id) / f.storage_path
print(f'[OnlyOffice Callback] Saving file {f.name} from {download_url}')
# Download saved doc from OnlyOffice
req = urllib.request.Request(download_url)
with urllib.request.urlopen(req, timeout=30) as resp, \
open(str(filepath), 'wb') as out:
shutil.copyfileobj(resp, out)
# Update metadata
f.size = os.path.getsize(str(filepath))
h = hashlib.sha256()
with open(str(filepath), 'rb') as fh:
for chunk in iter(lambda: fh.read(8192), b''):
h.update(chunk)
f.checksum = h.hexdigest()
f.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_file_change(f.owner_id, f.id, 'updated',
shared_with=_share_recipients(f))
print(f'[OnlyOffice Callback] File saved: {f.name} ({f.size} bytes)')
# Status 2, 4, 6: cleanup
if status in (2, 4, 6):
try:
setting = db.session.get(AppSettings, f'oo_callback_{callback_key}')
if setting:
db.session.delete(setting)
db.session.commit()
except Exception:
pass
except Exception as e:
print(f'[OnlyOffice Callback] ERROR: {e}')
import traceback
traceback.print_exc()
# Still return error: 0 so OnlyOffice doesn't retry endlessly
return jsonify({'error': 0}), 200
return jsonify({'error': 0}), 200
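The callback above optionally unwraps a JWT-signed body via PyJWT. As a rough stdlib-only illustration of what HS256 signing and verification involve under the hood (secret and payload here are made up; real code should just use PyJWT):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()

def hs256_encode(payload: dict, secret: str) -> str:
    header = _b64url(json.dumps({'alg': 'HS256', 'typ': 'JWT'}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f'{header}.{body}'.encode(), hashlib.sha256).digest()
    return f'{header}.{body}.{_b64url(sig)}'

def hs256_decode(token: str, secret: str) -> dict:
    header, body, sig = token.split('.')
    expected = hmac.new(secret.encode(), f'{header}.{body}'.encode(), hashlib.sha256).digest()
    # constant-time comparison, as real JWT libraries do
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError('bad signature')
    padded = body + '=' * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = hs256_encode({'status': 2}, 'demo-secret')
assert hs256_decode(token, 'demo-secret')['status'] == 2
```

This also shows why the callback falls back to the raw body when decoding fails: a wrong or missing secret raises rather than returning data.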
@@ -448,7 +513,7 @@ def onlyoffice_callback():
@token_required
def onlyoffice_status():
"""Check if OnlyOffice is available."""
oo_url = AppSettings.get('onlyoffice_url', os.environ.get('ONLYOFFICE_URL', ''))
oo_url = os.environ.get('ONLYOFFICE_URL', '')
return jsonify({
'available': bool(oo_url),
'url': oo_url,
+590
@@ -0,0 +1,590 @@
"""REST API for task lists / tasks (VTODO).
Mirrors the calendar.py architecture: a TaskList is a calendar-like
collection, a Task is a VTODO. CalDAV wiring lives in app/dav/caldav.py:
task lists appear as calendar collections whose
supported-calendar-component-set is restricted to VTODO, under the URL
/dav/<user>/tl-<id>/.
"""
from __future__ import annotations
import re
import uuid
from datetime import datetime, timezone
from flask import request, jsonify, Response
from app.api import api_bp
from app.api.auth import token_required
from app.extensions import db
from app.models.task import TaskList, Task, TaskListShare
from app.models.user import User
from app.services.events import notify_tasklist_change
# ---------------------------------------------------------------------------
# Access helpers
# ---------------------------------------------------------------------------
def _list_recipients(tl: TaskList):
return [s.shared_with_id for s in
TaskListShare.query.filter_by(task_list_id=tl.id).all()]
def _get_list_or_err(list_id, user, need_write=False):
tl = db.session.get(TaskList, list_id)
if not tl:
return None, (jsonify({'error': 'Aufgabenliste nicht gefunden'}), 404)
if tl.owner_id == user.id:
return tl, None
share = TaskListShare.query.filter_by(
task_list_id=list_id, shared_with_id=user.id
).first()
if not share:
return None, (jsonify({'error': 'Zugriff verweigert'}), 403)
if need_write and share.permission != 'readwrite':
return None, (jsonify({'error': 'Schreibzugriff verweigert'}), 403)
return tl, None
# ---------------------------------------------------------------------------
# VTODO build / parse
# ---------------------------------------------------------------------------
def _fmt_dt(dt: datetime | None) -> str | None:
if not dt:
return None
if dt.tzinfo is None:
dt = dt.replace(tzinfo=timezone.utc)
return dt.astimezone(timezone.utc).strftime('%Y%m%dT%H%M%SZ')
def build_vtodo(task: Task) -> str:
lines = ['BEGIN:VTODO', f'UID:{task.uid}',
f'DTSTAMP:{_fmt_dt(datetime.now(timezone.utc))}',
f'SUMMARY:{(task.summary or "").replace(chr(10), " ")}']
if task.description:
lines.append(f'DESCRIPTION:{task.description.replace(chr(10), chr(92) + "n")}')
if task.status:
lines.append(f'STATUS:{task.status}')
if task.priority is not None:
lines.append(f'PRIORITY:{task.priority}')
if task.percent_complete is not None:
lines.append(f'PERCENT-COMPLETE:{task.percent_complete}')
if task.due:
lines.append(f'DUE:{_fmt_dt(task.due)}')
if task.dtstart:
lines.append(f'DTSTART:{_fmt_dt(task.dtstart)}')
if task.completed_at:
lines.append(f'COMPLETED:{_fmt_dt(task.completed_at)}')
if task.categories:
lines.append(f'CATEGORIES:{task.categories}')
lines.append('END:VTODO')
return '\r\n'.join(lines)
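build_vtodo emits one property per line; RFC 5545 additionally caps content lines at 75 octets and folds the remainder onto continuation lines starting with a space (which is exactly what _unfold below reverses). A standalone sketch of that folding rule, not part of the diff and naive about multi-byte boundaries:

```python
def fold_line(line: str, limit: int = 75) -> str:
    """Fold one iCalendar content line per RFC 5545 section 3.1."""
    out, rest = [], line
    while len(rest.encode('utf-8')) > limit:
        # back off to a character boundary that fits within the octet limit
        cut = limit
        while len(rest[:cut].encode('utf-8')) > limit:
            cut -= 1
        out.append(rest[:cut])
        rest = ' ' + rest[cut:]  # continuation lines begin with a single space
    out.append(rest)
    return '\r\n'.join(out)

folded = fold_line('DESCRIPTION:' + 'x' * 200)
# every physical line fits in 75 octets, and unfolding restores the original
assert all(len(l.encode('utf-8')) <= 75 for l in folded.split('\r\n'))
assert folded.replace('\r\n ', '') == 'DESCRIPTION:' + 'x' * 200
```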
def _unfold(text: str):
out, current = [], ''
for line in text.replace('\r\n', '\n').split('\n'):
if line.startswith((' ', '\t')) and current:
current += line[1:]
else:
if current:
out.append(current)
current = line
if current:
out.append(current)
return out
def _parse_dt(value: str) -> datetime | None:
value = value.strip()
try:
if value.endswith('Z'):
return datetime.strptime(value, '%Y%m%dT%H%M%SZ').replace(tzinfo=timezone.utc)
if 'T' in value:
return datetime.strptime(value, '%Y%m%dT%H%M%S')
return datetime.strptime(value, '%Y%m%d')
except ValueError:
try:
return datetime.fromisoformat(value)
except ValueError:
return None
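The three branches above correspond to the date-time forms a VTODO property can carry; for reference, the strptime patterns behave like this (assuming the trailing-Z form means UTC, as the code does):

```python
from datetime import datetime, timezone

# UTC form: trailing 'Z' designator
dt = datetime.strptime('20260423T210000Z', '%Y%m%dT%H%M%SZ').replace(tzinfo=timezone.utc)
assert dt.hour == 21 and dt.tzinfo is timezone.utc

# floating local form: no zone designator, naive datetime
assert datetime.strptime('20260423T210000', '%Y%m%dT%H%M%S').tzinfo is None

# DATE value type: whole-day due dates, midnight implied
assert datetime.strptime('20260423', '%Y%m%d').date().isoformat() == '2026-04-23'
```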
def parse_vtodo(raw: str) -> dict | None:
if 'BEGIN:VTODO' not in raw.upper():
return None
result: dict = {}
in_block = False
for line in _unfold(raw):
upper = line.upper()
if upper.startswith('BEGIN:VTODO'):
in_block = True
continue
if upper.startswith('END:VTODO'):
break
if not in_block or ':' not in line:
continue
key, _, value = line.partition(':')
name = key.split(';')[0].upper()
if name == 'UID':
result['uid'] = value.strip()
elif name == 'SUMMARY':
result['summary'] = value.strip()
elif name == 'DESCRIPTION':
result['description'] = value.replace('\\n', '\n').replace('\\,', ',').strip()
elif name == 'STATUS':
result['status'] = value.strip().upper()
elif name == 'PRIORITY':
try:
result['priority'] = int(value.strip())
except ValueError:
pass
elif name == 'PERCENT-COMPLETE':
try:
result['percent_complete'] = int(value.strip())
except ValueError:
pass
elif name == 'DUE':
result['due'] = _parse_dt(value)
elif name == 'DTSTART':
result['dtstart'] = _parse_dt(value)
elif name == 'COMPLETED':
result['completed_at'] = _parse_dt(value)
elif name == 'CATEGORIES':
result['categories'] = value.strip()
return result if result.get('summary') or result.get('uid') else None
def _apply(task: Task, data: dict):
if 'summary' in data:
task.summary = (data.get('summary') or '').strip() or None
if 'description' in data:
task.description = (data.get('description') or '').strip() or None
if 'status' in data:
s = (data.get('status') or '').upper().strip() or None
task.status = s
if s == 'COMPLETED' and not task.completed_at:
task.completed_at = datetime.now(timezone.utc)
task.percent_complete = 100
elif s != 'COMPLETED':
task.completed_at = None
if 'priority' in data:
task.priority = data['priority'] if data['priority'] is not None else None
if 'percent_complete' in data:
task.percent_complete = data['percent_complete']
if 'due' in data:
v = data['due']
task.due = datetime.fromisoformat(v) if v else None
if 'dtstart' in data:
v = data['dtstart']
task.dtstart = datetime.fromisoformat(v) if v else None
if 'completed_at' in data:
v = data['completed_at']
task.completed_at = datetime.fromisoformat(v) if v else None
if 'categories' in data:
cats = data['categories']
if isinstance(cats, list):
task.categories = ','.join(c.strip() for c in cats if c and c.strip()) or None
else:
task.categories = (cats or '').strip() or None
# ---------------------------------------------------------------------------
# REST endpoints - lists
# ---------------------------------------------------------------------------
@api_bp.route('/tasklists', methods=['GET'])
@token_required
def list_tasklists():
user = request.current_user
own = TaskList.query.filter_by(owner_id=user.id).all()
shared = TaskListShare.query.filter_by(shared_with_id=user.id).all()
out = []
for tl in own:
d = tl.to_dict()
d['permission'] = 'owner'
d['task_count'] = tl.tasks.count()
out.append(d)
for s in shared:
tl = s.task_list
if not tl:
continue
d = tl.to_dict()
d['permission'] = s.permission
owner = tl.owner
d['owner_name'] = owner.username if owner else ''
d['owner_full_name'] = owner.full_name if owner else ''
d['owner_display_name'] = owner.display_name if owner else ''
d['task_count'] = tl.tasks.count()
if s.color:
d['color'] = s.color
out.append(d)
return jsonify(out), 200
@api_bp.route('/tasklists', methods=['POST'])
@token_required
def create_tasklist():
user = request.current_user
data = request.get_json() or {}
name = (data.get('name') or '').strip()
if not name:
return jsonify({'error': 'Name erforderlich'}), 400
tl = TaskList(owner_id=user.id, name=name,
color=data.get('color') or '#10b981',
description=(data.get('description') or '').strip() or None)
db.session.add(tl)
db.session.commit()
notify_tasklist_change(user.id, tl.id, 'created')
return jsonify(tl.to_dict()), 201
@api_bp.route('/tasklists/<int:list_id>', methods=['PUT'])
@token_required
def update_tasklist(list_id):
user = request.current_user
tl, err = _get_list_or_err(list_id, user, need_write=True)
if err:
return err
if tl.owner_id != user.id:
return jsonify({'error': 'Nur Eigentuemer kann die Liste umbenennen'}), 403
data = request.get_json() or {}
if 'name' in data:
tl.name = data['name'].strip()
if 'color' in data:
tl.color = data['color']
if 'description' in data:
tl.description = (data['description'] or '').strip() or None
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'updated', shared_with=_list_recipients(tl))
return jsonify(tl.to_dict()), 200
@api_bp.route('/tasklists/<int:list_id>/my-color', methods=['PUT'])
@token_required
def set_my_tasklist_color(list_id):
user = request.current_user
tl = db.session.get(TaskList, list_id)
if not tl:
return jsonify({'error': 'Nicht gefunden'}), 404
color = (request.get_json() or {}).get('color')
if not color:
return jsonify({'error': 'color erforderlich'}), 400
if tl.owner_id == user.id:
tl.color = color
db.session.commit()
return jsonify({'color': tl.color}), 200
share = TaskListShare.query.filter_by(task_list_id=list_id, shared_with_id=user.id).first()
if not share:
return jsonify({'error': 'Zugriff verweigert'}), 403
share.color = color
db.session.commit()
return jsonify({'color': share.color}), 200
@api_bp.route('/tasklists/<int:list_id>', methods=['DELETE'])
@token_required
def delete_tasklist(list_id):
user = request.current_user
tl = db.session.get(TaskList, list_id)
if not tl or tl.owner_id != user.id:
return jsonify({'error': 'Nur Eigentuemer kann loeschen'}), 403
recipients = _list_recipients(tl)
db.session.delete(tl)
db.session.commit()
notify_tasklist_change(user.id, list_id, 'deleted', shared_with=recipients)
return jsonify({'message': 'Aufgabenliste geloescht'}), 200
# ---------------------------------------------------------------------------
# REST endpoints - tasks
# ---------------------------------------------------------------------------
@api_bp.route('/tasklists/<int:list_id>/tasks', methods=['GET'])
@token_required
def list_tasks(list_id):
user = request.current_user
tl, err = _get_list_or_err(list_id, user)
if err:
return err
show_done = (request.args.get('include_done') or 'true').lower() != 'false'
q = Task.query.filter_by(task_list_id=list_id)
if not show_done:
q = q.filter((Task.status.is_(None)) | (Task.status != 'COMPLETED'))
tasks = q.order_by(Task.due.asc().nullslast(), Task.priority.desc().nullslast(), Task.id).all()
return jsonify([t.to_dict() for t in tasks]), 200
@api_bp.route('/tasklists/<int:list_id>/tasks', methods=['POST'])
@token_required
def create_task(list_id):
user = request.current_user
tl, err = _get_list_or_err(list_id, user, need_write=True)
if err:
return err
data = request.get_json() or {}
if not (data.get('summary') or '').strip():
return jsonify({'error': 'Titel erforderlich'}), 400
task = Task(task_list_id=list_id, uid=str(uuid.uuid4()), ical_data='')
_apply(task, data)
if not task.status:
task.status = 'NEEDS-ACTION'
task.ical_data = build_vtodo(task)
db.session.add(task)
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'task', shared_with=_list_recipients(tl))
return jsonify(task.to_dict()), 201
@api_bp.route('/tasks/<int:task_id>', methods=['GET'])
@token_required
def get_task(task_id):
user = request.current_user
task = db.session.get(Task, task_id)
if not task:
return jsonify({'error': 'Aufgabe nicht gefunden'}), 404
tl, err = _get_list_or_err(task.task_list_id, user)
if err:
return err
return jsonify(task.to_dict()), 200
@api_bp.route('/tasks/<int:task_id>', methods=['PUT'])
@token_required
def update_task(task_id):
user = request.current_user
task = db.session.get(Task, task_id)
if not task:
return jsonify({'error': 'Aufgabe nicht gefunden'}), 404
tl, err = _get_list_or_err(task.task_list_id, user, need_write=True)
if err:
return err
data = request.get_json() or {}
if 'task_list_id' in data and data['task_list_id'] != task.task_list_id:
new_tl, e2 = _get_list_or_err(data['task_list_id'], user, need_write=True)
if e2:
return e2
task.task_list_id = data['task_list_id']
_apply(task, data)
task.ical_data = build_vtodo(task)
task.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'task', shared_with=_list_recipients(tl))
return jsonify(task.to_dict()), 200
@api_bp.route('/tasks/<int:task_id>', methods=['DELETE'])
@token_required
def delete_task(task_id):
user = request.current_user
task = db.session.get(Task, task_id)
if not task:
return jsonify({'error': 'Aufgabe nicht gefunden'}), 404
tl, err = _get_list_or_err(task.task_list_id, user, need_write=True)
if err:
return err
db.session.delete(task)
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'task', shared_with=_list_recipients(tl))
return jsonify({'message': 'Aufgabe geloescht'}), 200
# ---------------------------------------------------------------------------
# Sharing
# ---------------------------------------------------------------------------
@api_bp.route('/tasklists/<int:list_id>/share', methods=['POST'])
@token_required
def share_tasklist(list_id):
user = request.current_user
tl = db.session.get(TaskList, list_id)
if not tl or tl.owner_id != user.id:
return jsonify({'error': 'Nur Eigentuemer kann teilen'}), 403
data = request.get_json() or {}
username = (data.get('username') or '').strip()
permission = data.get('permission', 'read')
if permission not in ('read', 'readwrite'):
return jsonify({'error': 'Ungueltige Berechtigung'}), 400
target = User.query.filter_by(username=username).first()
if not target:
return jsonify({'error': 'Benutzer nicht gefunden'}), 404
if target.id == user.id:
return jsonify({'error': 'Kann nicht mit sich selbst teilen'}), 400
existing = TaskListShare.query.filter_by(task_list_id=list_id, shared_with_id=target.id).first()
if existing:
existing.permission = permission
else:
db.session.add(TaskListShare(task_list_id=list_id, shared_with_id=target.id,
permission=permission))
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'share',
shared_with=[target.id, *_list_recipients(tl)])
return jsonify({'message': f'Geteilt mit {username}'}), 200
@api_bp.route('/tasklists/<int:list_id>/shares', methods=['GET'])
@token_required
def list_tasklist_shares(list_id):
user = request.current_user
tl = db.session.get(TaskList, list_id)
if not tl or tl.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
shares = TaskListShare.query.filter_by(task_list_id=list_id).all()
return jsonify([{
'id': s.id, 'user_id': s.shared_with_id,
'username': s.shared_with.username, 'permission': s.permission,
} for s in shares]), 200
@api_bp.route('/tasklists/<int:list_id>/shares/<int:share_id>', methods=['DELETE'])
@token_required
def remove_tasklist_share(list_id, share_id):
user = request.current_user
tl = db.session.get(TaskList, list_id)
if not tl or tl.owner_id != user.id:
return jsonify({'error': 'Nicht gefunden'}), 404
share = db.session.get(TaskListShare, share_id)
if not share or share.task_list_id != list_id:
return jsonify({'error': 'Freigabe nicht gefunden'}), 404
target_id = share.shared_with_id
db.session.delete(share)
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'share',
shared_with=[target_id, *_list_recipients(tl)])
return jsonify({'message': 'Freigabe entfernt'}), 200
# ---------------------------------------------------------------------------
# Import / Export (.ics with VTODO; CSV)
# ---------------------------------------------------------------------------
@api_bp.route('/tasklists/<int:list_id>/export', methods=['GET'])
@token_required
def export_tasklist(list_id):
import csv
import io
user = request.current_user
tl, err = _get_list_or_err(list_id, user)
if err:
return err
fmt = (request.args.get('format') or 'ics').lower()
tasks = Task.query.filter_by(task_list_id=list_id).all()
safe = re.sub(r'[^A-Za-z0-9._-]+', '_', tl.name or 'aufgaben') or 'aufgaben'
if fmt == 'ics':
lines = ['BEGIN:VCALENDAR', 'VERSION:2.0', 'PRODID:-//Mini-Cloud//DE', 'CALSCALE:GREGORIAN']
for t in tasks:
block = (t.ical_data or '').strip() or build_vtodo(t)
lines.append(block)
lines.append('END:VCALENDAR')
return Response(
'\r\n'.join(lines) + '\r\n',
mimetype='text/calendar; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{safe}.ics"'},
)
if fmt == 'csv':
out = io.StringIO()
w = csv.writer(out, delimiter=';', quoting=csv.QUOTE_ALL)
w.writerow(['summary', 'status', 'priority', 'percent_complete',
'due', 'dtstart', 'completed_at', 'categories', 'description', 'uid'])
for t in tasks:
w.writerow([
t.summary or '', t.status or '',
t.priority if t.priority is not None else '',
t.percent_complete if t.percent_complete is not None else '',
t.due.isoformat() if t.due else '',
t.dtstart.isoformat() if t.dtstart else '',
t.completed_at.isoformat() if t.completed_at else '',
t.categories or '',
(t.description or '').replace('\r\n', ' ').replace('\n', ' '),
t.uid or '',
])
return Response(
'\ufeff' + out.getvalue(), mimetype='text/csv; charset=utf-8',
headers={'Content-Disposition': f'attachment; filename="{safe}.csv"'},
)
return jsonify({'error': 'Unbekanntes Format'}), 400
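The CSV branch above prepends a UTF-8 BOM and uses a semicolon delimiter so that German-locale Excel opens the export correctly. The core of that trick in isolation (column values here are made up):

```python
import csv
import io

out = io.StringIO()
w = csv.writer(out, delimiter=';', quoting=csv.QUOTE_ALL)
w.writerow(['summary', 'due'])
w.writerow(['Steuererklaerung', '2026-05-31'])

# the BOM lets Excel detect UTF-8 instead of assuming the ANSI codepage
payload = '\ufeff' + out.getvalue()
assert payload.startswith('\ufeff"summary";"due"')
```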
@api_bp.route('/tasklists/<int:list_id>/import', methods=['POST'])
@token_required
def import_tasklist(list_id):
import csv
import io
user = request.current_user
tl, err = _get_list_or_err(list_id, user, need_write=True)
if err:
return err
file = request.files.get('file')
if not file:
return jsonify({'error': 'Keine Datei'}), 400
raw = file.read()
try:
text = raw.decode('utf-8-sig')
except UnicodeDecodeError:
text = raw.decode('latin-1', errors='replace')
name = (file.filename or '').lower()
imported, skipped = 0, 0
def _save(parsed: dict, ical_block: str | None = None):
nonlocal imported, skipped
if not parsed.get('summary'):
skipped += 1
return
uid = parsed.get('uid') or str(uuid.uuid4())
existing = Task.query.filter_by(task_list_id=list_id, uid=uid).first()
t = existing or Task(task_list_id=list_id, uid=uid, ical_data='')
t.summary = parsed.get('summary')
t.description = parsed.get('description')
t.status = parsed.get('status') or 'NEEDS-ACTION'
t.priority = parsed.get('priority')
t.percent_complete = parsed.get('percent_complete')
t.due = parsed.get('due')
t.dtstart = parsed.get('dtstart')
t.completed_at = parsed.get('completed_at')
cats = parsed.get('categories')
if isinstance(cats, list):
t.categories = ','.join(cats)
elif isinstance(cats, str):
t.categories = cats or None
t.ical_data = (ical_block or '').strip() or build_vtodo(t)
if not existing:
db.session.add(t)
imported += 1
if name.endswith('.csv') or (b';' in raw[:200] and b'BEGIN:VCALENDAR' not in raw[:200]):
reader = csv.DictReader(io.StringIO(text), delimiter=';')
if not reader.fieldnames or len(reader.fieldnames) < 2:
reader = csv.DictReader(io.StringIO(text), delimiter=',')
for row in reader:
row = {k.strip().lower(): (v or '').strip() for k, v in row.items() if k}
try:
due = datetime.fromisoformat(row['due']) if row.get('due') else None
except ValueError:
due = None
_save({
'uid': row.get('uid'),
'summary': row.get('summary') or row.get('titel'),
'description': row.get('description') or row.get('beschreibung'),
'status': (row.get('status') or '').upper() or None,
'priority': int(row['priority']) if row.get('priority', '').isdigit() else None,
'percent_complete': int(row['percent_complete']) if row.get('percent_complete', '').isdigit() else None,
'due': due,
'categories': row.get('categories') or row.get('kategorien'),
})
else:
blocks = re.findall(r'BEGIN:VTODO.*?END:VTODO', text, flags=re.DOTALL | re.IGNORECASE)
if not blocks:
return jsonify({'error': 'Keine VTODO-Daten gefunden'}), 400
for block in blocks:
parsed = parse_vtodo(block)
if not parsed:
skipped += 1
continue
_save(parsed, ical_block=block)
db.session.commit()
if imported:
notify_tasklist_change(tl.owner_id, tl.id, 'task', shared_with=_list_recipients(tl))
return jsonify({'imported': imported, 'skipped': skipped}), 200
+47 -8
@@ -145,6 +145,12 @@ def delete_user(user_id):
@api_bp.route('/settings', methods=['GET'])
@admin_required
def get_settings():
import time as _time
from datetime import datetime as _dt
try:
tzname = _time.strftime('%Z')
except Exception:
tzname = ''
return jsonify({
'public_registration': AppSettings.get_bool('public_registration', default=True),
'system_smtp_host': AppSettings.get('system_smtp_host', ''),
@@ -153,9 +159,13 @@ def get_settings():
'system_smtp_username': AppSettings.get('system_smtp_username', ''),
'system_smtp_password_set': bool(AppSettings.get('system_smtp_password', '')),
'system_email_from': AppSettings.get('system_email_from', ''),
'onlyoffice_url': AppSettings.get('onlyoffice_url', os.environ.get('ONLYOFFICE_URL', '')),
'onlyoffice_jwt_secret': AppSettings.get('onlyoffice_jwt_secret', ''),
'onlyoffice_jwt_secret_set': bool(AppSettings.get('onlyoffice_jwt_secret', '')),
'onlyoffice_url': os.environ.get('ONLYOFFICE_URL', ''),
'onlyoffice_configured': bool(os.environ.get('ONLYOFFICE_URL', '')),
# Read-only system info from the .env
'timezone': os.environ.get('TZ', 'Europe/Berlin'),
'timezone_abbr': tzname,
'server_time': _dt.now().isoformat(timespec='seconds'),
'ntp_server': os.environ.get('NTP_SERVER', ''),
}), 200
@@ -166,13 +176,11 @@ def update_settings():
if 'public_registration' in data:
AppSettings.set('public_registration', str(data['public_registration']).lower())
for key in ['system_smtp_host', 'system_smtp_port', 'system_smtp_ssl',
'system_smtp_username', 'system_email_from', 'onlyoffice_url']:
'system_smtp_username', 'system_email_from']:
if key in data:
AppSettings.set(key, str(data[key]))
if 'system_smtp_password' in data and data['system_smtp_password']:
AppSettings.set('system_smtp_password', data['system_smtp_password'])
if 'onlyoffice_jwt_secret' in data and data['onlyoffice_jwt_secret']:
AppSettings.set('onlyoffice_jwt_secret', data['onlyoffice_jwt_secret'])
return jsonify({'message': 'Einstellungen gespeichert'}), 200
@@ -273,6 +281,31 @@ def create_invite_link():
# --- User search (for sharing dialogs) ---
@api_bp.route('/auth/me', methods=['GET'])
@token_required
def get_me():
return jsonify(request.current_user.to_dict(include_email=True)), 200
@api_bp.route('/auth/me', methods=['PUT'])
@token_required
def update_me():
user = request.current_user
data = request.get_json() or {}
if 'first_name' in data:
user.first_name = (data.get('first_name') or '').strip() or None
if 'last_name' in data:
user.last_name = (data.get('last_name') or '').strip() or None
if 'email' in data:
email = (data.get('email') or '').strip() or None
if email and email != user.email:
if User.query.filter(User.email == email, User.id != user.id).first():
return jsonify({'error': 'E-Mail ist bereits vergeben'}), 409
user.email = email
db.session.commit()
return jsonify(user.to_dict(include_email=True)), 200
@api_bp.route('/users/search', methods=['GET'])
@token_required
def search_users():
@@ -281,13 +314,19 @@ def search_users():
if len(query) < 2:
return jsonify([]), 200
like = f'%{query}%'
users = User.query.filter(
User.username.ilike(f'%{query}%'),
(User.username.ilike(like)) | (User.first_name.ilike(like)) | (User.last_name.ilike(like)),
User.id != request.current_user.id,
User.is_active == True,
).limit(10).all()
return jsonify([{'id': u.id, 'username': u.username} for u in users]), 200
return jsonify([{
'id': u.id,
'username': u.username,
'full_name': u.full_name,
'display_name': u.display_name,
} for u in users]), 200
# --- Change password (non-admin, own account) ---
+23 -10
@@ -2,23 +2,31 @@ import os
from datetime import timedelta
from pathlib import Path
# Project root: backend/app/config.py -> backend/app -> backend -> project_root
basedir = Path(__file__).resolve().parent.parent.parent
def _resolve_path(env_var, default_subpath):
"""Resolve a path from environment variable.
- Absolute paths (/app/data/...) are used as-is
- Relative paths (./data/...) are resolved relative to CWD
- No env var: use default relative to CWD
"""
env_val = os.environ.get(env_var, '').strip()
if not env_val:
return str(Path.cwd() / default_subpath)
if os.path.isabs(env_val):
return env_val
return str(Path.cwd() / env_val)
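The resolution rules from the docstring can be checked in isolation. A sketch that takes the environment value directly instead of reading os.environ (that indirection is the only deviation from the code above):

```python
import os
from pathlib import Path

def resolve_path(env_val: str, default_subpath: str) -> str:
    env_val = (env_val or '').strip()
    if not env_val:
        return str(Path.cwd() / default_subpath)     # no env var: default under CWD
    if os.path.isabs(env_val):
        return env_val                               # absolute: used as-is
    return str(Path.cwd() / env_val)                 # relative: resolved against CWD

assert resolve_path('/app/data/files', 'data/files') == '/app/data/files'
assert resolve_path('', 'data/files') == str(Path.cwd() / 'data/files')
# pathlib collapses the leading './', so both spellings land in the same place
assert resolve_path('./data/files', 'data/files') == str(Path.cwd() / 'data/files')
```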
class Config:
SECRET_KEY = os.environ.get('SECRET_KEY', 'dev-secret-key-change-me')
# Database - always resolve relative to project root
_db_default = str(basedir / 'data' / 'minicloud.db')
_db_env = os.environ.get('DATABASE_PATH', '')
_db_path = str(basedir / _db_env) if _db_env and not os.path.isabs(_db_env) else (_db_env or _db_default)
SQLALCHEMY_DATABASE_URI = f'sqlite:///{_db_path}'
# Database
SQLALCHEMY_DATABASE_URI = f'sqlite:///{_resolve_path("DATABASE_PATH", "data/minicloud.db")}'
SQLALCHEMY_TRACK_MODIFICATIONS = False
# File uploads - always resolve relative to project root
_upload_env = os.environ.get('UPLOAD_PATH', '')
UPLOAD_PATH = str(basedir / _upload_env) if _upload_env and not os.path.isabs(_upload_env) else (_upload_env or str(basedir / 'data' / 'files'))
# File uploads
UPLOAD_PATH = _resolve_path('UPLOAD_PATH', 'data/files')
MAX_CONTENT_LENGTH = int(os.environ.get('MAX_UPLOAD_SIZE_MB', 500)) * 1024 * 1024
# JWT
@@ -32,3 +40,8 @@ class Config:
# CORS
FRONTEND_URL = os.environ.get('FRONTEND_URL', 'http://localhost:3000')
# Timezone (process-wide; takes effect after time.tzset())
TIMEZONE = os.environ.get('TZ', 'Europe/Berlin')
# NTP server for the clock-offset check at startup; an empty string disables the check.
NTP_SERVER = os.environ.get('NTP_SERVER', 'ptbtime1.ptb.de')
+6
@@ -0,0 +1,6 @@
from flask import Blueprint
dav_bp = Blueprint('dav', __name__, url_prefix='/dav')
from . import caldav # noqa: F401,E402
from . import carddav # noqa: F401,E402
+780
@@ -0,0 +1,780 @@
"""Minimal CalDAV server (RFC 4791 subset).
Implements the endpoints that Thunderbird, DAVx5 and Apple Calendar
actually use in practice:
OPTIONS - capability advertisement (DAV: 1, 2, calendar-access)
PROPFIND Depth 0/1 - discovery chain + listings
REPORT calendar-query + calendar-multiget
GET single VCALENDAR resource
PUT create/update VCALENDAR resource
DELETE remove a resource or calendar collection
Non-goals for this revision: ACL reports, free-busy, sync-token based
incremental sync, scheduling (iTIP/iMIP). Clients fall back to full
PROPFIND refresh when sync-token isn't advertised, which is fine for
small personal calendars.
"""
from __future__ import annotations
import re
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from functools import wraps
from flask import Response, request
from app.extensions import db
from app.models.calendar import Calendar, CalendarEvent, CalendarShare
from app.models.user import User
from app.services.events import notify_calendar_change
def _cal_recipients(cal: 'Calendar'):
return [s.shared_with_id for s in
CalendarShare.query.filter_by(calendar_id=cal.id).all()]
from . import dav_bp
# ---------------------------------------------------------------------------
# XML namespace plumbing
# ---------------------------------------------------------------------------
NS = {
'd': 'DAV:',
'c': 'urn:ietf:params:xml:ns:caldav',
'cs': 'http://calendarserver.org/ns/',
'ic': 'http://apple.com/ns/ical/',
}
for prefix, uri in NS.items():
ET.register_namespace('' if prefix == 'd' else prefix, uri)
def _qn(prefix: str, local: str) -> str:
return f'{{{NS[prefix]}}}{local}'
def _xml_response(root: ET.Element, status: int = 207) -> Response:
body = b'<?xml version="1.0" encoding="utf-8"?>\n' + ET.tostring(root, encoding='utf-8')
headers = {
'DAV': '1, 2, 3, calendar-access, addressbook',
'Content-Type': 'application/xml; charset=utf-8',
}
return Response(body, status=status, headers=headers)
# ---------------------------------------------------------------------------
# Authentication (HTTP Basic over the existing user table)
# ---------------------------------------------------------------------------
def _challenge() -> Response:
return Response(
'Authentication required', 401,
{'WWW-Authenticate': 'Basic realm="Mini-Cloud DAV"'}
)
def basic_auth(f):
@wraps(f)
def wrapper(*args, **kwargs):
auth = request.authorization
if not auth or not auth.username or not auth.password:
return _challenge()
user = User.query.filter_by(username=auth.username).first()
if not user or not user.is_active or not user.check_password(auth.password):
return _challenge()
request.dav_user = user
return f(*args, **kwargs)
return wrapper
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
DAV_HEADERS = {
'DAV': '1, 2, 3, calendar-access, addressbook',
}
ALLOW_COLLECTION = 'OPTIONS, PROPFIND, REPORT, DELETE, MKCALENDAR'
ALLOW_RESOURCE = 'OPTIONS, PROPFIND, GET, PUT, DELETE'
def _etag_for_event(event: CalendarEvent) -> str:
ts = int((event.updated_at or event.created_at or datetime.now(timezone.utc)).timestamp() * 1000)
return f'"{event.id}-{ts}"'
def _href_calendar(username: str, cal_id: int) -> str:
return f'/dav/{username}/cal-{cal_id}/'
def _href_event(username: str, cal_id: int, uid: str) -> str:
return f'/dav/{username}/cal-{cal_id}/{uid}.ics'
def _user_calendars(user: User):
return Calendar.query.filter_by(owner_id=user.id).all()
def _parse_calendar_path(path_part: str):
"""Input: "cal-42" -> 42, otherwise None."""
m = re.match(r'cal-(\d+)$', path_part)
return int(m.group(1)) if m else None
def _calendar_for(user: User, cal_id: int):
cal = db.session.get(Calendar, cal_id)
if not cal or cal.owner_id != user.id:
return None
return cal
# ---------------------------------------------------------------------------
# OPTIONS (advertise DAV capabilities on any path)
# ---------------------------------------------------------------------------
@dav_bp.route('/', methods=['OPTIONS'])
@dav_bp.route('/<path:subpath>', methods=['OPTIONS'])
def options(subpath=''):
headers = {
**DAV_HEADERS,
'Allow': 'OPTIONS, PROPFIND, REPORT, GET, PUT, DELETE, MKCALENDAR',
}
return Response('', status=200, headers=headers)
# ---------------------------------------------------------------------------
# PROPFIND
# ---------------------------------------------------------------------------
def _make_response(href: str, populate_prop) -> ET.Element:
"""Build a <response><href/><propstat><prop>...</prop><status>200</status>
</propstat></response> element. `populate_prop` is a callable that gets
the <prop> element and appends the actual property sub-elements to it."""
resp = ET.Element(_qn('d', 'response'))
ET.SubElement(resp, _qn('d', 'href')).text = href
propstat = ET.SubElement(resp, _qn('d', 'propstat'))
prop = ET.SubElement(propstat, _qn('d', 'prop'))
populate_prop(prop)
ET.SubElement(propstat, _qn('d', 'status')).text = 'HTTP/1.1 200 OK'
return resp
def _root_response(href: str, user: User) -> ET.Element:
def populate(prop):
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
ET.SubElement(prop, _qn('d', 'displayname')).text = 'Mini-Cloud DAV'
cup = ET.SubElement(prop, _qn('d', 'current-user-principal'))
ET.SubElement(cup, _qn('d', 'href')).text = f'/dav/{user.username}/'
return _make_response(href, populate)
def _principal_response(user: User) -> ET.Element:
href = f'/dav/{user.username}/'
def populate(prop):
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
ET.SubElement(rt, _qn('d', 'principal'))
ET.SubElement(prop, _qn('d', 'displayname')).text = user.username
cup = ET.SubElement(prop, _qn('d', 'current-user-principal'))
ET.SubElement(cup, _qn('d', 'href')).text = href
pu = ET.SubElement(prop, _qn('d', 'principal-URL'))
ET.SubElement(pu, _qn('d', 'href')).text = href
# Separate home-sets so clients (DAVx5!) don't mix calendars and
# addressbooks in the same listing.
cal_home = ET.SubElement(prop, _qn('c', 'calendar-home-set'))
ET.SubElement(cal_home, _qn('d', 'href')).text = f'/dav/{user.username}/calendars/'
ab_home = ET.SubElement(prop, '{urn:ietf:params:xml:ns:carddav}addressbook-home-set')
ET.SubElement(ab_home, _qn('d', 'href')).text = f'/dav/{user.username}/addressbooks/'
return _make_response(href, populate)
def _calendar_response(user: User, cal: Calendar) -> ET.Element:
href = _href_calendar(user.username, cal.id)
def populate(prop):
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
ET.SubElement(rt, _qn('c', 'calendar'))
ET.SubElement(prop, _qn('d', 'displayname')).text = cal.name
ET.SubElement(prop, _qn('c', 'calendar-description')).text = cal.description or ''
supported = ET.SubElement(prop, _qn('c', 'supported-calendar-component-set'))
comp = ET.SubElement(supported, _qn('c', 'comp'))
comp.set('name', 'VEVENT')
# supported-report-set: advertise which REPORTs this collection handles
srs = ET.SubElement(prop, _qn('d', 'supported-report-set'))
for report_name in ('calendar-query', 'calendar-multiget'):
sup = ET.SubElement(srs, _qn('d', 'supported-report'))
rep = ET.SubElement(sup, _qn('d', 'report'))
ET.SubElement(rep, _qn('c', report_name))
ET.SubElement(prop, _qn('ic', 'calendar-color')).text = cal.color or '#3788d8'
ET.SubElement(prop, _qn('cs', 'getctag')).text = _calendar_ctag(cal)
# current-user-privilege-set: advertise what the authenticated user is
# allowed to do. DAVx5 checks this to decide read-only vs read-write.
cups = ET.SubElement(prop, _qn('d', 'current-user-privilege-set'))
for priv_name in ('read', 'write', 'write-properties', 'write-content', 'bind', 'unbind'):
p = ET.SubElement(cups, _qn('d', 'privilege'))
ET.SubElement(p, _qn('d', priv_name))
return _make_response(href, populate)
def _calendar_ctag(cal: Calendar) -> str:
"""Collection tag: changes when any event in the calendar changes."""
last = db.session.query(db.func.max(CalendarEvent.updated_at)).filter_by(calendar_id=cal.id).scalar()
ts = int((last or cal.updated_at or datetime.now(timezone.utc)).timestamp())
return f'"{cal.id}-{ts}"'
def _event_response(user: User, cal: Calendar, event: CalendarEvent, include_data: bool = False) -> ET.Element:
href = _href_event(user.username, cal.id, event.uid)
def populate(prop):
ET.SubElement(prop, _qn('d', 'getetag')).text = _etag_for_event(event)
ET.SubElement(prop, _qn('d', 'getcontenttype')).text = \
'text/calendar; charset=utf-8; component=VEVENT'
ET.SubElement(prop, _qn('d', 'resourcetype')) # empty -> regular resource
if include_data:
ET.SubElement(prop, _qn('c', 'calendar-data')).text = _wrap_vcalendar(cal, event)
return _make_response(href, populate)
def _wrap_vcalendar(cal: Calendar, event: CalendarEvent) -> str:
"""Return a full VCALENDAR envelope around the event's ical_data."""
lines = [
'BEGIN:VCALENDAR',
'VERSION:2.0',
'PRODID:-//Mini-Cloud//DE',
'CALSCALE:GREGORIAN',
event.ical_data.strip() if event.ical_data else '',
'END:VCALENDAR',
]
return '\r\n'.join(lines)
@dav_bp.route('/', methods=['PROPFIND'])
@dav_bp.route('/<path:subpath>', methods=['PROPFIND'])
@basic_auth
def propfind(subpath=''):
user: User = request.dav_user
depth = request.headers.get('Depth', '0')
multistatus = ET.Element(_qn('d', 'multistatus'))
parts = [p for p in subpath.split('/') if p]
# /dav/ (root) or / (when called via the app-level shortcut for DAVx5)
if not parts:
# Use the actual request path so clients like DAVx5 see an href
# that matches their request.
request_href = request.path if request.path.endswith('/') else request.path + '/'
multistatus.append(_root_response(request_href, user))
if depth != '0':
multistatus.append(_principal_response(user))
return _xml_response(multistatus)
# /dav/<username>/ : principal only. Clients MUST follow the home sets
# (calendar-home-set / addressbook-home-set) - otherwise the containers
# would wrongly show up here as empty calendars (DAVx5).
if len(parts) == 1:
if parts[0] != user.username:
return Response('', 403)
multistatus.append(_principal_response(user))
return _xml_response(multistatus)
# /dav/<username>/calendars/ : calendars + task lists (DAVx5 detects
# VTODO lists automatically via supported-calendar-component-set).
if len(parts) == 2 and parts[1] == 'calendars':
if parts[0] != user.username:
return Response('', 403)
container = ET.Element(_qn('d', 'response'))
ET.SubElement(container, _qn('d', 'href')).text = f'/dav/{user.username}/calendars/'
propstat = ET.SubElement(container, _qn('d', 'propstat'))
prop = ET.SubElement(propstat, _qn('d', 'prop'))
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
ET.SubElement(prop, _qn('d', 'displayname')).text = 'Kalender'
ET.SubElement(propstat, _qn('d', 'status')).text = 'HTTP/1.1 200 OK'
multistatus.append(container)
if depth != '0':
for cal in _user_calendars(user):
multistatus.append(_calendar_response(user, cal))
from .taskdav import user_lists, list_response
for tl in user_lists(user):
multistatus.append(list_response(user, tl))
return _xml_response(multistatus)
# /dav/<username>/addressbooks/ : only addressbook collections
if len(parts) == 2 and parts[1] == 'addressbooks':
if parts[0] != user.username:
return Response('', 403)
from .carddav import _addressbook_response, _user_addressbooks
container = ET.Element(_qn('d', 'response'))
ET.SubElement(container, _qn('d', 'href')).text = f'/dav/{user.username}/addressbooks/'
propstat = ET.SubElement(container, _qn('d', 'propstat'))
prop = ET.SubElement(propstat, _qn('d', 'prop'))
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
ET.SubElement(prop, _qn('d', 'displayname')).text = 'Adressbücher'
ET.SubElement(propstat, _qn('d', 'status')).text = 'HTTP/1.1 200 OK'
multistatus.append(container)
if depth != '0':
for ab in _user_addressbooks(user):
multistatus.append(_addressbook_response(user, ab))
return _xml_response(multistatus)
# /dav/<username>/cal-<id>/ : calendar + events (tl-N is delegated too)
if len(parts) == 2:
if parts[0] != user.username:
return Response('', 403)
if parts[1].startswith('tl-'):
from .taskdav import tl_propfind
return tl_propfind(username=parts[0], tl_part=parts[1])
cal_id = _parse_calendar_path(parts[1])
if cal_id is None:
return Response('Not found', 404)
cal = _calendar_for(user, cal_id)
if not cal:
return Response('Not found', 404)
multistatus.append(_calendar_response(user, cal))
if depth != '0':
for ev in CalendarEvent.query.filter_by(calendar_id=cal.id).all():
multistatus.append(_event_response(user, cal, ev))
return _xml_response(multistatus)
# /dav/<username>/cal-<id>/<uid>.ics : single event (tl-N is delegated)
if len(parts) == 3:
if parts[0] != user.username:
return Response('', 403)
if parts[1].startswith('tl-'):
from .taskdav import tl_task_propfind
return tl_task_propfind(username=parts[0], tl_part=parts[1], filename=parts[2])
cal_id = _parse_calendar_path(parts[1])
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('Not found', 404)
uid = parts[2].removesuffix('.ics')
ev = CalendarEvent.query.filter_by(calendar_id=cal.id, uid=uid).first()
if not ev:
return Response('Not found', 404)
multistatus.append(_event_response(user, cal, ev, include_data=True))
return _xml_response(multistatus)
return Response('Not found', 404)
# ---------------------------------------------------------------------------
# REPORT (calendar-query, calendar-multiget)
# ---------------------------------------------------------------------------
@dav_bp.route('/<path:subpath>', methods=['REPORT'])
@basic_auth
def report(subpath):
user: User = request.dav_user
parts = [p for p in subpath.split('/') if p]
if len(parts) < 2 or parts[0] != user.username:
return Response('', 403)
if parts[1].startswith('tl-'):
from .taskdav import tl_report
return tl_report(username=parts[0], tl_part=parts[1])
cal_id = _parse_calendar_path(parts[1])
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('Not found', 404)
try:
root = ET.fromstring(request.data or b'<x/>')
except ET.ParseError:
return Response('Malformed XML', 400)
multistatus = ET.Element(_qn('d', 'multistatus'))
tag = root.tag
# Check whether the client requested calendar-data. If not, we leave it
# out - stricter per the RFC, and DAVx5 then cleanly decides it needs
# phase 2: a multiget.
wants_data = root.find(f".//{_qn('c', 'calendar-data')}") is not None
if tag == _qn('c', 'calendar-multiget'):
hrefs = [h.text for h in root.findall(_qn('d', 'href')) if h.text]
for href in hrefs:
uid = href.rsplit('/', 1)[-1].removesuffix('.ics')
ev = CalendarEvent.query.filter_by(calendar_id=cal.id, uid=uid).first()
if ev:
multistatus.append(_event_response(user, cal, ev, include_data=True))
return _xml_response(multistatus)
if tag == _qn('c', 'calendar-query'):
start, end = _extract_time_range(root)
q = CalendarEvent.query.filter_by(calendar_id=cal.id)
if end is not None:
q = q.filter(CalendarEvent.dtstart < end)
if start is not None:
q = q.filter(
(CalendarEvent.dtend >= start) | (CalendarEvent.dtstart >= start)
| (CalendarEvent.recurrence_rule.isnot(None))
)
for ev in q.all():
multistatus.append(_event_response(user, cal, ev, include_data=wants_data))
return _xml_response(multistatus)
# Unknown report - return empty multistatus so clients don't break
return _xml_response(multistatus)
def _extract_time_range(root: ET.Element):
tr = root.find(f".//{_qn('c', 'time-range')}")
if tr is None:
return None, None
def parse(s):
if not s:
return None
s = s.replace('Z', '+00:00')
dt = None
try:
dt = datetime.fromisoformat(s)
except ValueError:
try:
dt = datetime.strptime(s, '%Y%m%dT%H%M%S%z')
except ValueError:
try:
dt = datetime.strptime(s[:15], '%Y%m%dT%H%M%S').replace(tzinfo=timezone.utc)
except ValueError:
return None
# Our DB columns are tz-naive (stored in UTC) - comparing would otherwise
# raise TypeError, so strip the tz info.
if dt.tzinfo is not None:
dt = dt.astimezone(timezone.utc).replace(tzinfo=None)
return dt
return parse(tr.get('start')), parse(tr.get('end'))
# ---------------------------------------------------------------------------
# GET single event
# ---------------------------------------------------------------------------
@dav_bp.route('/<username>/<cal_part>/<filename>', methods=['GET', 'HEAD'])
@basic_auth
def get_event(username, cal_part, filename):
if cal_part.startswith('ab-'):
from .carddav import ab_get
return ab_get(username=username, ab_part=cal_part, filename=filename)
if cal_part.startswith('tl-'):
from .taskdav import tl_get
return tl_get(username=username, tl_part=cal_part, filename=filename)
user: User = request.dav_user
if username != user.username:
return Response('', 403)
cal_id = _parse_calendar_path(cal_part)
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
ev = CalendarEvent.query.filter_by(calendar_id=cal.id, uid=uid).first()
if not ev:
return Response('Not found', 404)
return Response(
_wrap_vcalendar(cal, ev),
mimetype='text/calendar; charset=utf-8',
headers={'ETag': _etag_for_event(ev)},
)
# ---------------------------------------------------------------------------
# PUT event (create or update)
# ---------------------------------------------------------------------------
@dav_bp.route('/<username>/<cal_part>/<filename>', methods=['PUT'])
@basic_auth
def put_event(username, cal_part, filename):
if cal_part.startswith('ab-'):
from .carddav import ab_put
return ab_put(username=username, ab_part=cal_part, filename=filename)
if cal_part.startswith('tl-'):
from .taskdav import tl_put
return tl_put(username=username, tl_part=cal_part, filename=filename)
user: User = request.dav_user
if username != user.username:
return Response('', 403)
cal_id = _parse_calendar_path(cal_part)
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
raw = request.get_data(as_text=True) or ''
parsed = _parse_vevent(raw)
if not parsed:
return Response('Cannot parse VEVENT', 400)
# UID inside the body wins over the filename if present
body_uid = parsed.get('uid') or uid
existing = CalendarEvent.query.filter_by(calendar_id=cal.id, uid=body_uid).first()
if_match = request.headers.get('If-Match')
if_none_match = request.headers.get('If-None-Match')
if existing and if_none_match == '*':
return Response('', 412)
if if_match and not existing:
return Response('', 412)
if if_match and existing and if_match.strip() != _etag_for_event(existing):
return Response('', 412)
created = existing is None
if created:
existing = CalendarEvent(calendar_id=cal.id, uid=body_uid, ical_data=raw)
db.session.add(existing)
existing.summary = parsed.get('summary') or '(ohne Titel)'
existing.description = parsed.get('description')
existing.location = parsed.get('location')
existing.dtstart = parsed.get('dtstart')
existing.dtend = parsed.get('dtend')
existing.all_day = parsed.get('all_day', False)
existing.recurrence_rule = parsed.get('rrule')
existing.exdates = ','.join(parsed.get('exdates', [])) or None
# Keep the raw VEVENT as-is so CalDAV clients round-trip faithfully.
existing.ical_data = _extract_vevent_block(raw)
existing.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_calendar_change(cal.owner_id, cal.id, 'event',
shared_with=_cal_recipients(cal))
# 201 only when the resource was newly created, 204 for updates.
status = 201 if created else 204
return Response('', status, {'ETag': _etag_for_event(existing)})
# ---------------------------------------------------------------------------
# DELETE
# ---------------------------------------------------------------------------
@dav_bp.route('/<username>/<cal_part>/<filename>', methods=['DELETE'])
@basic_auth
def delete_event(username, cal_part, filename):
if cal_part.startswith('ab-'):
from .carddav import ab_delete
return ab_delete(username=username, ab_part=cal_part, filename=filename)
if cal_part.startswith('tl-'):
from .taskdav import tl_delete
return tl_delete(username=username, tl_part=cal_part, filename=filename)
user: User = request.dav_user
if username != user.username:
return Response('', 403)
cal_id = _parse_calendar_path(cal_part)
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
ev = CalendarEvent.query.filter_by(calendar_id=cal.id, uid=uid).first()
if not ev:
return Response('', 404)
db.session.delete(ev)
db.session.commit()
notify_calendar_change(cal.owner_id, cal.id, 'event',
shared_with=_cal_recipients(cal))
return Response('', 204)
@dav_bp.route('/<username>/<cal_part>/', methods=['DELETE'])
@dav_bp.route('/<username>/<cal_part>', methods=['DELETE'])
@basic_auth
def delete_calendar(username, cal_part):
if cal_part.startswith('ab-'):
from .carddav import ab_delete_collection
return ab_delete_collection(username=username, ab_part=cal_part)
if cal_part.startswith('tl-'):
from .taskdav import tl_delete_collection
return tl_delete_collection(username=username, tl_part=cal_part)
user: User = request.dav_user
if username != user.username:
return Response('', 403)
cal_id = _parse_calendar_path(cal_part)
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('', 404)
recipients = _cal_recipients(cal)
owner_id = cal.owner_id
cid = cal.id
db.session.delete(cal)
db.session.commit()
notify_calendar_change(owner_id, cid, 'deleted', shared_with=recipients)
return Response('', 204)
# ---------------------------------------------------------------------------
# PROPPATCH (clients like to set a display color/name). We persist the
# calendar color (calendar-color) + displayname; all other properties are
# acknowledged as "applied" to keep DAVx5/Apple happy.
# ---------------------------------------------------------------------------
@dav_bp.route('/<username>/<cal_part>/', methods=['PROPPATCH'])
@dav_bp.route('/<username>/<cal_part>', methods=['PROPPATCH'])
@basic_auth
def proppatch_calendar(username, cal_part):
if cal_part.startswith('tl-'):
from .taskdav import tl_proppatch
return tl_proppatch(username=username, tl_part=cal_part)
user: User = request.dav_user
if username != user.username:
return Response('', 403)
cal_id = _parse_calendar_path(cal_part)
cal = _calendar_for(user, cal_id) if cal_id else None
if not cal:
return Response('Not found', 404)
try:
root = ET.fromstring(request.data or b'<x/>')
except ET.ParseError:
return Response('Malformed XML', 400)
for el in root.iter():
tag = el.tag
if tag == _qn('ic', 'calendar-color') and el.text:
cal.color = el.text.strip()[:7]
elif tag == _qn('d', 'displayname') and el.text:
cal.name = el.text.strip()[:255]
db.session.commit()
# Respond with 207 marking everything as applied so the client is happy.
multistatus = ET.Element(_qn('d', 'multistatus'))
href = _href_calendar(user.username, cal.id)
resp = ET.SubElement(multistatus, _qn('d', 'response'))
ET.SubElement(resp, _qn('d', 'href')).text = href
propstat = ET.SubElement(resp, _qn('d', 'propstat'))
prop = ET.SubElement(propstat, _qn('d', 'prop'))
# Echo back everything the client asked to set
for set_block in root.findall(_qn('d', 'set')):
inner_prop = set_block.find(_qn('d', 'prop'))
if inner_prop is not None:
for child in inner_prop:
ET.SubElement(prop, child.tag)
ET.SubElement(propstat, _qn('d', 'status')).text = 'HTTP/1.1 200 OK'
return _xml_response(multistatus)
# ---------------------------------------------------------------------------
# MKCALENDAR (create a new calendar collection via the DAV URL)
# ---------------------------------------------------------------------------
@dav_bp.route('/<username>/<cal_part>/', methods=['MKCALENDAR'])
@dav_bp.route('/<username>/<cal_part>', methods=['MKCALENDAR'])
@basic_auth
def mkcalendar(username, cal_part):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
# Extract display name from body if present
name = 'Neuer Kalender'
color = '#3788d8'
try:
body = request.get_data()
if body:
root = ET.fromstring(body)
dn = root.find(f".//{_qn('d', 'displayname')}")
if dn is not None and dn.text:
name = dn.text
col = root.find(f".//{_qn('ic', 'calendar-color')}")
if col is not None and col.text:
color = col.text[:7]
except ET.ParseError:
pass
cal = Calendar(owner_id=user.id, name=name, color=color)
db.session.add(cal)
db.session.commit()
return Response('', 201, {'Location': _href_calendar(user.username, cal.id)})
# ---------------------------------------------------------------------------
# VEVENT parser (quick & pragmatic - covers what the major CalDAV clients send)
# ---------------------------------------------------------------------------
def _extract_vevent_block(raw: str) -> str:
"""Return only the VEVENT block from a full VCALENDAR body. If none
is found the input is returned as-is."""
m = re.search(r'BEGIN:VEVENT[\s\S]*?END:VEVENT', raw, flags=re.IGNORECASE)
return m.group(0) if m else raw
def _unfold(raw: str) -> list[str]:
"""Undo RFC 5545 line folding (continuation lines start with space/tab)."""
lines = []
for line in raw.replace('\r\n', '\n').split('\n'):
if line.startswith((' ', '\t')) and lines:
lines[-1] += line[1:]
else:
lines.append(line)
return lines
def _parse_dt(value: str, params: dict) -> tuple[datetime | None, bool]:
"""Parse an iCalendar DATE or DATE-TIME. Returns (datetime, all_day)."""
if not value:
return None, False
is_date = params.get('VALUE', '').upper() == 'DATE' or len(value) == 8
if is_date:
try:
return datetime.strptime(value, '%Y%m%d'), True
except ValueError:
return None, True
# Try Z (UTC), TZID-tagged, or naive floating time
val = value.replace('Z', '')
for fmt in ('%Y%m%dT%H%M%S', '%Y-%m-%dT%H:%M:%S', '%Y-%m-%d %H:%M:%S'):
try:
dt = datetime.strptime(val, fmt)
if value.endswith('Z'):
dt = dt.replace(tzinfo=timezone.utc)
return dt, False
except ValueError:
continue
return None, False
def _parse_vevent(raw: str) -> dict | None:
block = _extract_vevent_block(raw)
if 'BEGIN:VEVENT' not in block.upper():
return None
result: dict = {'exdates': []}
for line in _unfold(block):
if ':' not in line:
continue
key, _, value = line.partition(':')
# Separate parameters: "DTSTART;TZID=Europe/Berlin"
parts = key.split(';')
name = parts[0].upper()
params = {}
for p in parts[1:]:
if '=' in p:
k, v = p.split('=', 1)
params[k.upper()] = v
if name == 'UID':
result['uid'] = value.strip()
elif name == 'SUMMARY':
result['summary'] = _unescape(value)
elif name == 'DESCRIPTION':
result['description'] = _unescape(value)
elif name == 'LOCATION':
result['location'] = _unescape(value)
elif name == 'DTSTART':
dt, all_day = _parse_dt(value, params)
result['dtstart'] = dt
result['all_day'] = all_day
elif name == 'DTEND':
dt, _ = _parse_dt(value, params)
result['dtend'] = dt
elif name == 'RRULE':
result['rrule'] = value.strip()
elif name == 'EXDATE':
dt, all_day = _parse_dt(value, params)
if dt:
result['exdates'].append(
dt.strftime('%Y-%m-%d' if all_day else '%Y-%m-%dT%H:%M:%S')
)
if 'uid' not in result:
result['uid'] = str(uuid.uuid4())
return result
def _unescape(s: str) -> str:
return s.replace('\\n', '\n').replace('\\,', ',').replace('\\;', ';').replace('\\\\', '\\')
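The unfold/unescape pair above implements RFC 5545 line folding and TEXT escaping; this standalone sketch mirrors both helpers and shows the round trip on a folded SUMMARY line:

```python
def unfold(raw: str) -> list[str]:
    # Continuation lines start with a space or tab and belong to the
    # previous line (RFC 5545, section 3.1 line folding).
    lines: list[str] = []
    for line in raw.replace('\r\n', '\n').split('\n'):
        if line.startswith((' ', '\t')) and lines:
            lines[-1] += line[1:]
        else:
            lines.append(line)
    return lines

def unescape(s: str) -> str:
    # Undo TEXT escaping (RFC 5545, section 3.3.11).
    return (s.replace('\\n', '\n').replace('\\,', ',')
             .replace('\\;', ';').replace('\\\\', '\\'))

folded = 'SUMMARY:Team meeting\\, room A\r\n  continued'
value = unescape(unfold(folded)[0].partition(':')[2])
print(value)  # Team meeting, room A continued
```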
@@ -0,0 +1,367 @@
"""Minimal CardDAV server (RFC 6352 subset).
Mirrors the structure of caldav.py - adds addressbook collections under
/dav/<username>/ab-<id>/
and serves vCard 3.0 resources via GET/PUT/DELETE plus addressbook-query
and addressbook-multiget REPORTs.
Reuses the auth + XML helpers from caldav.py to stay consistent.
"""
from __future__ import annotations
import re
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from flask import Response, request
from app.extensions import db
from app.models.contact import AddressBook, Contact, AddressBookShare
from app.models.user import User
from app.api.contacts import (
_apply_fields_to_contact, _build_vcard, parse_vcard,
_notify_addressbook, _book_recipients,
)
from . import dav_bp
from .caldav import (
NS, _qn, _xml_response, basic_auth, _make_response,
_principal_response, # reused - we extend below
)
# ---------------------------------------------------------------------------
# URL helpers
# ---------------------------------------------------------------------------
def _href_addressbook(username: str, book_id: int) -> str:
return f'/dav/{username}/ab-{book_id}/'
def _href_vcard(username: str, book_id: int, uid: str) -> str:
return f'/dav/{username}/ab-{book_id}/{uid}.vcf'
def _parse_addressbook_path(part: str):
m = re.match(r'ab-(\d+)$', part)
return int(m.group(1)) if m else None
def _user_addressbooks(user: User):
return AddressBook.query.filter_by(owner_id=user.id).all()
def _addressbook_for(user: User, book_id: int):
book = db.session.get(AddressBook, book_id)
if not book or book.owner_id != user.id:
return None
return book
# ---------------------------------------------------------------------------
# Property responses
# ---------------------------------------------------------------------------
def _addressbook_ctag(book: AddressBook) -> str:
last = db.session.query(db.func.max(Contact.updated_at)).filter_by(address_book_id=book.id).scalar()
ts = int((last or book.updated_at or datetime.now(timezone.utc)).timestamp())
return f'"ab{book.id}-{ts}"'
def _addressbook_response(user: User, book: AddressBook) -> ET.Element:
href = _href_addressbook(user.username, book.id)
def populate(prop):
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
# urn:ietf:params:xml:ns:carddav addressbook element
ab = ET.SubElement(rt, '{urn:ietf:params:xml:ns:carddav}addressbook') # noqa: F841
ET.SubElement(prop, _qn('d', 'displayname')).text = book.name
ET.SubElement(prop, '{urn:ietf:params:xml:ns:carddav}addressbook-description').text = book.description or ''
srs = ET.SubElement(prop, _qn('d', 'supported-report-set'))
for r in ('addressbook-query', 'addressbook-multiget'):
sup = ET.SubElement(srs, _qn('d', 'supported-report'))
rep = ET.SubElement(sup, _qn('d', 'report'))
ET.SubElement(rep, '{urn:ietf:params:xml:ns:carddav}' + r)
ET.SubElement(prop, _qn('ic', 'calendar-color')).text = book.color or '#3788d8'
ET.SubElement(prop, _qn('cs', 'getctag')).text = _addressbook_ctag(book)
cups = ET.SubElement(prop, _qn('d', 'current-user-privilege-set'))
for priv in ('read', 'write', 'write-properties', 'write-content', 'bind', 'unbind'):
p = ET.SubElement(cups, _qn('d', 'privilege'))
ET.SubElement(p, _qn('d', priv))
return _make_response(href, populate)
def _vcard_response(user: User, book: AddressBook, contact: Contact, include_data: bool = False) -> ET.Element:
href = _href_vcard(user.username, book.id, contact.uid)
def populate(prop):
ts = int((contact.updated_at or datetime.now(timezone.utc)).timestamp() * 1000)
ET.SubElement(prop, _qn('d', 'getetag')).text = f'"{contact.id}-{ts}"'
ET.SubElement(prop, _qn('d', 'getcontenttype')).text = 'text/vcard; charset=utf-8'
ET.SubElement(prop, _qn('d', 'resourcetype'))
if include_data:
ET.SubElement(prop, '{urn:ietf:params:xml:ns:carddav}address-data').text = \
contact.vcard_data or _build_vcard(contact)
return _make_response(href, populate)
def _etag_for_contact(contact: Contact) -> str:
ts = int((contact.updated_at or contact.created_at or datetime.now(timezone.utc)).timestamp() * 1000)
return f'"{contact.id}-{ts}"'
# ---------------------------------------------------------------------------
# Principal handling: caldav._principal_response already advertises both
# calendar-home-set and addressbook-home-set, and /dav/<username>/ is
# served by caldav.propfind, so no extra principal route is needed here.
# The routes below only cover the ab-<id> URL space and delegate
# everything else back to caldav.
# ---------------------------------------------------------------------------
# OPTIONS / PROPFIND / REPORT / GET / PUT / DELETE for /dav/<user>/ab-<id>/...
# ---------------------------------------------------------------------------
_DAV_HEADERS = {'DAV': '1, 2, 3, addressbook'}
@dav_bp.route('/<username>/<ab_part>/', methods=['OPTIONS'])
@dav_bp.route('/<username>/<ab_part>', methods=['OPTIONS'])
def ab_options(username, ab_part):
if not ab_part.startswith('ab-'):
from .caldav import options as _cal_options
return _cal_options(subpath=f'{username}/{ab_part}')
return Response('', 200, {
'DAV': '1, 2, 3, addressbook',
'Allow': 'OPTIONS, PROPFIND, REPORT, GET, PUT, DELETE, PROPPATCH, MKCOL',
})
@dav_bp.route('/<username>/<ab_part>/', methods=['PROPFIND'])
@dav_bp.route('/<username>/<ab_part>', methods=['PROPFIND'])
@basic_auth
def ab_propfind(username, ab_part):
if not ab_part.startswith('ab-'):
from .caldav import propfind as _cal_propfind
return _cal_propfind(subpath=f'{username}/{ab_part}')
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('Not found', 404)
depth = request.headers.get('Depth', '0')
multistatus = ET.Element(_qn('d', 'multistatus'))
multistatus.append(_addressbook_response(user, book))
if depth != '0':
for c in book.contacts.all():
multistatus.append(_vcard_response(user, book, c))
return _xml_response(multistatus)
@dav_bp.route('/<username>/<ab_part>/<filename>', methods=['PROPFIND'])
@basic_auth
def ab_contact_propfind(username, ab_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('Not found', 404)
uid = filename.removesuffix('.vcf')
contact = Contact.query.filter_by(address_book_id=book.id, uid=uid).first()
if not contact:
return Response('Not found', 404)
multistatus = ET.Element(_qn('d', 'multistatus'))
multistatus.append(_vcard_response(user, book, contact, include_data=True))
return _xml_response(multistatus)
@dav_bp.route('/<username>/<ab_part>/', methods=['REPORT'])
@dav_bp.route('/<username>/<ab_part>', methods=['REPORT'])
@basic_auth
def ab_report(username, ab_part):
if not ab_part.startswith('ab-'):
from .caldav import report as _cal_report
return _cal_report(subpath=f'{username}/{ab_part}')
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('Not found', 404)
try:
root = ET.fromstring(request.data or b'<x/>')
except ET.ParseError:
return Response('Malformed XML', 400)
wants_data = root.find(".//{urn:ietf:params:xml:ns:carddav}address-data") is not None
multistatus = ET.Element(_qn('d', 'multistatus'))
if root.tag == '{urn:ietf:params:xml:ns:carddav}addressbook-multiget':
hrefs = [h.text for h in root.findall(_qn('d', 'href')) if h.text]
for href in hrefs:
uid = href.rsplit('/', 1)[-1].removesuffix('.vcf')
contact = Contact.query.filter_by(address_book_id=book.id, uid=uid).first()
if contact:
multistatus.append(_vcard_response(user, book, contact, include_data=True))
return _xml_response(multistatus)
if root.tag == '{urn:ietf:params:xml:ns:carddav}addressbook-query':
# No filter implementation yet - return all
for contact in book.contacts.all():
multistatus.append(_vcard_response(user, book, contact, include_data=wants_data))
return _xml_response(multistatus)
return _xml_response(multistatus)
@dav_bp.route('/<username>/<ab_part>/<filename>', methods=['GET', 'HEAD'])
@basic_auth
def ab_get(username, ab_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('Not found', 404)
uid = filename.removesuffix('.vcf')
contact = Contact.query.filter_by(address_book_id=book.id, uid=uid).first()
if not contact:
return Response('Not found', 404)
return Response(
contact.vcard_data or _build_vcard(contact),
mimetype='text/vcard; charset=utf-8',
headers={'ETag': _etag_for_contact(contact)},
)
@dav_bp.route('/<username>/<ab_part>/<filename>', methods=['PUT'])
@basic_auth
def ab_put(username, ab_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('Not found', 404)
uid = filename.removesuffix('.vcf')
raw = request.get_data(as_text=True) or ''
parsed = parse_vcard(raw)
body_uid = parsed.get('uid') or uid
existing = Contact.query.filter_by(address_book_id=book.id, uid=body_uid).first()
if_match = request.headers.get('If-Match')
if_none_match = request.headers.get('If-None-Match')
if existing and if_none_match == '*':
return Response('', 412)
if if_match and existing and if_match.strip() != _etag_for_contact(existing):
return Response('', 412)
is_new = existing is None
if is_new:
existing = Contact(address_book_id=book.id, uid=body_uid, vcard_data=raw)
db.session.add(existing)
_apply_fields_to_contact(existing, parsed)
# Prefer the client's raw VCARD as the source of truth so round-tripping
# is faithful; fall back to a server-rebuilt VCARD only when the body is
# empty. The structured fields for web UI consumers were applied above.
existing.vcard_data = raw.strip() or _build_vcard(existing)
existing.updated_at = datetime.now(timezone.utc)
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'contact',
shared_with=_book_recipients(book))
status = 201 if is_new else 204
return Response('', status, {'ETag': _etag_for_contact(existing)})
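The precondition handling in `ab_put` above follows RFC 7232: `If-None-Match: *` makes the PUT create-only, and `If-Match` protects against lost updates. A self-contained sketch of that decision (the function name is illustrative):

```python
# Sketch of the RFC 7232 precondition checks applied before writing.
def precondition_failed(exists, current_etag, if_match, if_none_match):
    """Return True when a conditional header blocks the PUT."""
    if exists and if_none_match == '*':
        return True  # create-only PUT, but the resource already exists
    if if_match and exists and if_match.strip() != current_etag:
        return True  # ETag changed under the client: reject the overwrite
    return False
```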
@dav_bp.route('/<username>/<ab_part>/<filename>', methods=['DELETE'])
@basic_auth
def ab_delete(username, ab_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('Not found', 404)
uid = filename.removesuffix('.vcf')
contact = Contact.query.filter_by(address_book_id=book.id, uid=uid).first()
if not contact:
return Response('', 404)
db.session.delete(contact)
db.session.commit()
_notify_addressbook(book.owner_id, book.id, 'contact',
shared_with=_book_recipients(book))
return Response('', 204)
@dav_bp.route('/<username>/<ab_part>/', methods=['DELETE'])
@dav_bp.route('/<username>/<ab_part>', methods=['DELETE'])
@basic_auth
def ab_delete_collection(username, ab_part):
if not ab_part.startswith('ab-'):
return Response('', 404)
user: User = request.dav_user
if username != user.username:
return Response('', 403)
book_id = _parse_addressbook_path(ab_part)
book = _addressbook_for(user, book_id) if book_id else None
if not book:
return Response('', 404)
recipients = _book_recipients(book)
owner_id = book.owner_id
book_id = book.id
db.session.delete(book)
db.session.commit()
_notify_addressbook(owner_id, book_id, 'deleted', shared_with=recipients)
return Response('', 204)
@dav_bp.route('/<username>/<ab_part>/', methods=['MKCOL'])
@dav_bp.route('/<username>/<ab_part>', methods=['MKCOL'])
@basic_auth
def ab_mkcol(username, ab_part):
"""Create a new addressbook collection via MKCOL (RFC 5689 extended).
Some CardDAV clients (Apple) use this instead of MKCALENDAR."""
user: User = request.dav_user
if username != user.username:
return Response('', 403)
name = 'Neues Adressbuch'
try:
body = request.get_data()
if body:
root = ET.fromstring(body)
dn = root.find(f".//{_qn('d', 'displayname')}")
if dn is not None and dn.text:
name = dn.text
except ET.ParseError:
pass
book = AddressBook(owner_id=user.id, name=name)
db.session.add(book)
db.session.commit()
_notify_addressbook(user.id, book.id, 'created')
return Response('', 201, {'Location': _href_addressbook(user.username, book.id)})
+368
@@ -0,0 +1,368 @@
"""CalDAV Task-List Handler (VTODO).
TaskLists werden parallel zu Calendars als Calendar-Collection
ausgeliefert, jedoch mit `<supported-calendar-component-set>` = VTODO
(statt VEVENT). DAVx5/OpenTasks erkennen sie dadurch automatisch als
Aufgabenliste.
URL-Schema:
/dav/<user>/tl-<id>/ Collection
/dav/<user>/tl-<id>/<uid>.ics VTODO-Resource
Diese Funktionen werden aus caldav.py heraus aufgerufen, sobald der
URL-Bestandteil mit `tl-` beginnt - parallel zur ab-/CardDAV-Delegation.
"""
from __future__ import annotations
import re
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from flask import Response, request
from app.extensions import db
from app.models.task import TaskList, Task
from app.models.user import User
from app.api.tasks import build_vtodo, parse_vtodo, _list_recipients
from app.services.events import notify_tasklist_change
# Re-use XML helpers from caldav.py
def _import_caldav_helpers():
from . import caldav
return caldav
def _qn(prefix, name):
return _import_caldav_helpers()._qn(prefix, name)
def _xml_response(elem):
return _import_caldav_helpers()._xml_response(elem)
def _make_response(href, populate):
return _import_caldav_helpers()._make_response(href, populate)
# ---------------------------------------------------------------------------
# Path / URL helpers
# ---------------------------------------------------------------------------
def parse_tl_path(part: str):
m = re.match(r'tl-(\d+)$', part)
return int(m.group(1)) if m else None
def href_list(username, lid):
return f'/dav/{username}/tl-{lid}/'
def href_task(username, lid, uid):
return f'/dav/{username}/tl-{lid}/{uid}.ics'
def user_lists(user: User):
return TaskList.query.filter_by(owner_id=user.id).all()
def list_for(user: User, lid: int):
tl = db.session.get(TaskList, lid)
if not tl or tl.owner_id != user.id:
return None
return tl
def _ctag(tl: TaskList) -> str:
last = db.session.query(db.func.max(Task.updated_at)).filter(Task.task_list_id == tl.id).scalar()
ts = int((last or tl.updated_at or datetime.now(timezone.utc)).timestamp())
return f'"tl{tl.id}-{ts}"'
def _etag(t: Task) -> str:
ts = int((t.updated_at or t.created_at or datetime.now(timezone.utc)).timestamp() * 1000)
return f'"{t.id}-{ts}"'
def _wrap_vcalendar(t: Task) -> str:
block = (t.ical_data or '').strip() or build_vtodo(t)
return '\r\n'.join([
'BEGIN:VCALENDAR', 'VERSION:2.0', 'PRODID:-//Mini-Cloud//DE',
'CALSCALE:GREGORIAN', block, 'END:VCALENDAR',
])
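RFC 5545 requires CRLF line endings, which is why `_wrap_vcalendar` joins with `\r\n`. A standalone sketch of the same wrapping (the PRODID here is a placeholder, not this server's actual value):

```python
# Sketch: embed a bare VTODO block in a minimal VCALENDAR envelope with
# CRLF line endings, as RFC 5545 requires.
def wrap_vtodo(block):
    return '\r\n'.join([
        'BEGIN:VCALENDAR', 'VERSION:2.0', 'PRODID:-//Example//EN',
        'CALSCALE:GREGORIAN', block.strip(), 'END:VCALENDAR',
    ])
```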
# ---------------------------------------------------------------------------
# PROPFIND building blocks
# ---------------------------------------------------------------------------
def list_response(user: User, tl: TaskList) -> ET.Element:
href = href_list(user.username, tl.id)
def populate(prop):
rt = ET.SubElement(prop, _qn('d', 'resourcetype'))
ET.SubElement(rt, _qn('d', 'collection'))
ET.SubElement(rt, _qn('c', 'calendar'))
ET.SubElement(prop, _qn('d', 'displayname')).text = tl.name
ET.SubElement(prop, _qn('c', 'calendar-description')).text = tl.description or ''
supported = ET.SubElement(prop, _qn('c', 'supported-calendar-component-set'))
comp = ET.SubElement(supported, _qn('c', 'comp'))
comp.set('name', 'VTODO')
srs = ET.SubElement(prop, _qn('d', 'supported-report-set'))
for r in ('calendar-query', 'calendar-multiget'):
sup = ET.SubElement(srs, _qn('d', 'supported-report'))
rep = ET.SubElement(sup, _qn('d', 'report'))
ET.SubElement(rep, _qn('c', r))
ET.SubElement(prop, _qn('ic', 'calendar-color')).text = tl.color or '#10b981'
ET.SubElement(prop, _qn('cs', 'getctag')).text = _ctag(tl)
cups = ET.SubElement(prop, _qn('d', 'current-user-privilege-set'))
for priv in ('read', 'write', 'write-properties', 'write-content', 'bind', 'unbind'):
p = ET.SubElement(cups, _qn('d', 'privilege'))
ET.SubElement(p, _qn('d', priv))
return _make_response(href, populate)
def task_response(user: User, tl: TaskList, t: Task, include_data=False) -> ET.Element:
href = href_task(user.username, tl.id, t.uid)
def populate(prop):
ET.SubElement(prop, _qn('d', 'getetag')).text = _etag(t)
ET.SubElement(prop, _qn('d', 'getcontenttype')).text = \
'text/calendar; charset=utf-8; component=VTODO'
ET.SubElement(prop, _qn('d', 'resourcetype'))
if include_data:
ET.SubElement(prop, _qn('c', 'calendar-data')).text = _wrap_vcalendar(t)
return _make_response(href, populate)
# ---------------------------------------------------------------------------
# Handlers (entered from caldav.py when path starts with tl-)
# ---------------------------------------------------------------------------
def tl_propfind(username, tl_part):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
depth = request.headers.get('Depth', '0')
multi = ET.Element(_qn('d', 'multistatus'))
multi.append(list_response(user, tl))
if depth != '0':
for t in tl.tasks.all():
multi.append(task_response(user, tl, t))
return _xml_response(multi)
def tl_task_propfind(username, tl_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
t = Task.query.filter_by(task_list_id=tl.id, uid=uid).first()
if not t:
return Response('Not found', 404)
multi = ET.Element(_qn('d', 'multistatus'))
multi.append(task_response(user, tl, t, include_data=True))
return _xml_response(multi)
def tl_report(username, tl_part):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
try:
root = ET.fromstring(request.data or b'<x/>')
except ET.ParseError:
return Response('Malformed XML', 400)
wants_data = root.find(f".//{_qn('c', 'calendar-data')}") is not None
multi = ET.Element(_qn('d', 'multistatus'))
if root.tag == _qn('c', 'calendar-multiget'):
hrefs = [h.text for h in root.findall(_qn('d', 'href')) if h.text]
for href in hrefs:
uid = href.rsplit('/', 1)[-1].removesuffix('.ics')
t = Task.query.filter_by(task_list_id=tl.id, uid=uid).first()
if t:
multi.append(task_response(user, tl, t, include_data=True))
return _xml_response(multi)
if root.tag == _qn('c', 'calendar-query'):
for t in tl.tasks.all():
multi.append(task_response(user, tl, t, include_data=wants_data))
return _xml_response(multi)
return _xml_response(multi)
def tl_get(username, tl_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
t = Task.query.filter_by(task_list_id=tl.id, uid=uid).first()
if not t:
return Response('Not found', 404)
return Response(_wrap_vcalendar(t),
mimetype='text/calendar; charset=utf-8',
headers={'ETag': _etag(t)})
def tl_put(username, tl_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
raw = request.get_data(as_text=True) or ''
parsed = parse_vtodo(raw)
if not parsed:
return Response('Cannot parse VTODO', 400)
body_uid = parsed.get('uid') or uid
existing = Task.query.filter_by(task_list_id=tl.id, uid=body_uid).first()
if_match = request.headers.get('If-Match')
if_none_match = request.headers.get('If-None-Match')
if existing and if_none_match == '*':
return Response('', 412)
if if_match and existing and if_match.strip() != _etag(existing):
return Response('', 412)
is_new = existing is None
if is_new:
existing = Task(task_list_id=tl.id, uid=body_uid, ical_data=raw)
db.session.add(existing)
existing.summary = parsed.get('summary') or '(ohne Titel)'
existing.description = parsed.get('description')
existing.status = parsed.get('status') or 'NEEDS-ACTION'
existing.priority = parsed.get('priority')
existing.percent_complete = parsed.get('percent_complete')
existing.due = parsed.get('due')
existing.dtstart = parsed.get('dtstart')
existing.completed_at = parsed.get('completed_at')
cats = parsed.get('categories')
if isinstance(cats, str):
existing.categories = cats or None
elif isinstance(cats, list):
existing.categories = ','.join(cats) or None
# Preserve the raw VTODO block for faithful round-tripping
block = re.search(r'BEGIN:VTODO.*?END:VTODO', raw, flags=re.DOTALL | re.IGNORECASE)
existing.ical_data = (block.group(0).strip() if block else raw.strip()) or build_vtodo(existing)
existing.updated_at = datetime.now(timezone.utc)
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'task', shared_with=_list_recipients(tl))
return Response('', 201 if is_new else 204, {'ETag': _etag(existing)})
def tl_delete(username, tl_part, filename):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
uid = filename.removesuffix('.ics')
t = Task.query.filter_by(task_list_id=tl.id, uid=uid).first()
if not t:
return Response('', 404)
db.session.delete(t)
db.session.commit()
notify_tasklist_change(tl.owner_id, tl.id, 'task', shared_with=_list_recipients(tl))
return Response('', 204)
def tl_delete_collection(username, tl_part):
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('', 404)
recipients = _list_recipients(tl)
owner_id = tl.owner_id
list_id = tl.id
db.session.delete(tl)
db.session.commit()
notify_tasklist_change(owner_id, list_id, 'deleted', shared_with=recipients)
return Response('', 204)
def tl_options(username, tl_part):
return Response('', 200, {
'DAV': '1, 2, 3, calendar-access, addressbook',
'Allow': 'OPTIONS, PROPFIND, REPORT, GET, PUT, DELETE, MKCALENDAR, PROPPATCH',
})
def tl_proppatch(username, tl_part):
"""Bestaetige Property-Updates damit Clients zufrieden sind. Wir
persistieren Displayname + Color, alles andere wird stillschweigend
akzeptiert."""
user: User = request.dav_user
if username != user.username:
return Response('', 403)
lid = parse_tl_path(tl_part)
tl = list_for(user, lid) if lid else None
if not tl:
return Response('Not found', 404)
try:
root = ET.fromstring(request.data or b'<x/>')
except ET.ParseError:
return Response('Malformed XML', 400)
changed = False
for el in root.iter():
tag = (el.tag.split('}', 1)[1] if '}' in el.tag else el.tag).lower()
if tag == 'displayname' and el.text:
tl.name = el.text
changed = True
elif tag == 'calendar-color' and el.text:
tl.color = el.text[:7]
changed = True
if changed:
db.session.commit()
multi = ET.Element(_qn('d', 'multistatus'))
resp = ET.SubElement(multi, _qn('d', 'response'))
ET.SubElement(resp, _qn('d', 'href')).text = href_list(user.username, tl.id)
ps = ET.SubElement(resp, _qn('d', 'propstat'))
ET.SubElement(ps, _qn('d', 'status')).text = 'HTTP/1.1 200 OK'
return _xml_response(multi)
def tl_mkcol(username, tl_part):
"""Erstelle eine neue TaskList per MKCOL/MKCALENDAR. Der Pfadteil
`tl-N` ist bei MKCOL aber unbekannt - DAVx5 schickt einen frei
gewaehlten Namen wie `mein-task-uuid`. Daher: wir akzeptieren jeden
Pfadteil und legen eine TaskList an."""
user: User = request.dav_user
if username != user.username:
return Response('', 403)
name = 'Neue Aufgabenliste'
try:
body = request.get_data()
if body:
root = ET.fromstring(body)
for el in root.iter():
tag = (el.tag.split('}', 1)[1] if '}' in el.tag else el.tag).lower()
if tag == 'displayname' and el.text:
name = el.text
except ET.ParseError:
pass
tl = TaskList(owner_id=user.id, name=name)
db.session.add(tl)
db.session.commit()
notify_tasklist_change(user.id, tl.id, 'created')
return Response('', 201, {'Location': href_list(user.username, tl.id)})
+3
@@ -2,16 +2,19 @@ from app.models.user import User
from app.models.file import File, FilePermission, ShareLink
from app.models.calendar import Calendar, CalendarEvent, CalendarShare
from app.models.contact import AddressBook, Contact, AddressBookShare
from app.models.task import TaskList, Task, TaskListShare
from app.models.email_account import EmailAccount
from app.models.password_vault import PasswordFolder, PasswordEntry, PasswordShare
from app.models.settings import AppSettings
from app.models.backup_target import BackupTarget
from app.models.file_lock import FileLock
__all__ = [
'User',
'File', 'FilePermission', 'ShareLink',
'Calendar', 'CalendarEvent', 'CalendarShare',
'AddressBook', 'Contact', 'AddressBookShare',
'TaskList', 'Task', 'TaskListShare',
'EmailAccount',
'PasswordFolder', 'PasswordEntry', 'PasswordShare',
'AppSettings',
+12
@@ -12,6 +12,7 @@ class Calendar(db.Model):
color = db.Column(db.String(7), default='#3788d8')
description = db.Column(db.Text, nullable=True)
ical_token = db.Column(db.String(64), unique=True, nullable=True, index=True)
ical_password_hash = db.Column(db.String(255), nullable=True)
created_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc))
@@ -20,6 +21,7 @@ class Calendar(db.Model):
cascade='all, delete-orphan')
shares = db.relationship('CalendarShare', backref='calendar', lazy='dynamic',
cascade='all, delete-orphan')
# Note: `owner` is auto-created as a backref by User.calendars relationship
def to_dict(self):
return {
@@ -29,6 +31,7 @@ class Calendar(db.Model):
'color': self.color,
'description': self.description,
'ical_token': self.ical_token,
'ical_has_password': bool(self.ical_password_hash),
'created_at': self.created_at.isoformat() if self.created_at else None,
}
@@ -41,10 +44,14 @@ class CalendarEvent(db.Model):
uid = db.Column(db.String(255), unique=True, nullable=False)
ical_data = db.Column(db.Text, nullable=False) # Full VCALENDAR component
summary = db.Column(db.String(500), nullable=True)
description = db.Column(db.Text, nullable=True)
location = db.Column(db.String(500), nullable=True)
dtstart = db.Column(db.DateTime, nullable=True, index=True)
dtend = db.Column(db.DateTime, nullable=True)
all_day = db.Column(db.Boolean, default=False)
recurrence_rule = db.Column(db.Text, nullable=True)
exdates = db.Column(db.Text, nullable=True) # Comma-separated ISO dates (YYYY-MM-DD)
is_private = db.Column(db.Boolean, default=False, nullable=False)
created_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc))
@@ -55,10 +62,14 @@ class CalendarEvent(db.Model):
'calendar_id': self.calendar_id,
'uid': self.uid,
'summary': self.summary,
'description': self.description,
'location': self.location,
'dtstart': self.dtstart.isoformat() if self.dtstart else None,
'dtend': self.dtend.isoformat() if self.dtend else None,
'all_day': self.all_day,
'recurrence_rule': self.recurrence_rule,
'exdates': self.exdates.split(',') if self.exdates else [],
'is_private': bool(self.is_private),
'created_at': self.created_at.isoformat() if self.created_at else None,
'updated_at': self.updated_at.isoformat() if self.updated_at else None,
}
@@ -71,6 +82,7 @@ class CalendarShare(db.Model):
calendar_id = db.Column(db.Integer, db.ForeignKey('calendars.id'), nullable=False, index=True)
shared_with_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False, index=True)
permission = db.Column(db.String(20), nullable=False, default='read') # 'read' or 'readwrite'
color = db.Column(db.String(7), nullable=True) # Personal display color
shared_with = db.relationship('User', backref='shared_calendars')
+77 -3
@@ -10,6 +10,7 @@ class AddressBook(db.Model):
owner_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False, index=True)
name = db.Column(db.String(255), nullable=False)
description = db.Column(db.Text, nullable=True)
color = db.Column(db.String(7), default='#3788d8')
created_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc))
@@ -18,6 +19,7 @@ class AddressBook(db.Model):
cascade='all, delete-orphan')
shares = db.relationship('AddressBookShare', backref='address_book', lazy='dynamic',
cascade='all, delete-orphan')
# `owner` is auto-created as a backref by the User.address_books relationship
def to_dict(self):
return {
@@ -25,6 +27,7 @@ class AddressBook(db.Model):
'owner_id': self.owner_id,
'name': self.name,
'description': self.description,
'color': self.color,
'created_at': self.created_at.isoformat() if self.created_at else None,
}
@@ -36,22 +39,92 @@ class Contact(db.Model):
address_book_id = db.Column(db.Integer, db.ForeignKey('address_books.id'),
nullable=False, index=True)
uid = db.Column(db.String(255), unique=True, nullable=False)
vcard_data = db.Column(db.Text, nullable=False) # Full VCARD
vcard_data = db.Column(db.Text, nullable=False)
# Structured name fields
prefix = db.Column(db.String(64), nullable=True)
first_name = db.Column(db.String(128), nullable=True)
middle_name = db.Column(db.String(128), nullable=True)
last_name = db.Column(db.String(128), nullable=True, index=True)
suffix = db.Column(db.String(64), nullable=True)
display_name = db.Column(db.String(255), nullable=True, index=True)
nickname = db.Column(db.String(128), nullable=True)
# Organisation
organization = db.Column(db.String(255), nullable=True)
department = db.Column(db.String(255), nullable=True)
job_title = db.Column(db.String(255), nullable=True)
# Primary fields for quick listing (denormalised)
primary_email = db.Column(db.String(255), nullable=True, index=True)
primary_phone = db.Column(db.String(50), nullable=True)
# JSON-encoded multi-valued fields
# Each list entry: {"type": "home|work|other|mobile|fax|pager|...", "value": "..."}
emails = db.Column(db.Text, nullable=True)
phones = db.Column(db.Text, nullable=True)
# address: {"type": ..., "street": ..., "po_box": ..., "city": ...,
# "region": ..., "postal_code": ..., "country": ...}
addresses = db.Column(db.Text, nullable=True)
websites = db.Column(db.Text, nullable=True)
impp = db.Column(db.Text, nullable=True) # {"protocol": "skype", "value": "..."}
categories = db.Column(db.Text, nullable=True) # ["family", "work", ...]
# Dates
birthday = db.Column(db.String(10), nullable=True) # YYYY-MM-DD
anniversary = db.Column(db.String(10), nullable=True)
# Free text
notes = db.Column(db.Text, nullable=True)
# Photo: data URL (data:image/jpeg;base64,...) or http(s)://
photo = db.Column(db.Text, nullable=True)
# Legacy column kept for old clients / migrations
email = db.Column(db.String(255), nullable=True)
phone = db.Column(db.String(50), nullable=True)
created_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc))
def to_dict(self):
import json
def _loads(s, default):
if not s:
return default
try:
return json.loads(s)
except (ValueError, TypeError):
return default
return {
'id': self.id,
'address_book_id': self.address_book_id,
'uid': self.uid,
'prefix': self.prefix,
'first_name': self.first_name,
'middle_name': self.middle_name,
'last_name': self.last_name,
'suffix': self.suffix,
'display_name': self.display_name,
'email': self.email,
'phone': self.phone,
'nickname': self.nickname,
'organization': self.organization,
'department': self.department,
'job_title': self.job_title,
'emails': _loads(self.emails, []),
'phones': _loads(self.phones, []),
'addresses': _loads(self.addresses, []),
'websites': _loads(self.websites, []),
'impp': _loads(self.impp, []),
'categories': _loads(self.categories, []),
'birthday': self.birthday,
'anniversary': self.anniversary,
'notes': self.notes,
'photo': self.photo,
'primary_email': self.primary_email or self.email,
'primary_phone': self.primary_phone or self.phone,
'created_at': self.created_at.isoformat() if self.created_at else None,
'updated_at': self.updated_at.isoformat() if self.updated_at else None,
}
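The multi-valued contact fields (`emails`, `phones`, `addresses`, ...) are stored as JSON text columns and decoded leniently in `to_dict`. A self-contained sketch of that round-trip (the helper names `dump_multi`/`load_multi` are illustrative):

```python
# Sketch: lists of {"type": ..., "value": ...} dicts serialised into a
# Text column on write, and decoded on read with a fallback to [] so
# corrupt or empty data never breaks the listing.
import json

def dump_multi(values):
    return json.dumps(values) if values else None

def load_multi(raw):
    if not raw:
        return []
    try:
        return json.loads(raw)
    except (ValueError, TypeError):
        return []
```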
@@ -65,6 +138,7 @@ class AddressBookShare(db.Model):
nullable=False, index=True)
shared_with_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False, index=True)
permission = db.Column(db.String(20), nullable=False, default='read')
color = db.Column(db.String(7), nullable=True) # personal display color
shared_with = db.relationship('User', backref='shared_address_books')
+4 -1
@@ -55,8 +55,11 @@ class FilePermission(db.Model):
file_id = db.Column(db.Integer, db.ForeignKey('files.id'), nullable=False, index=True)
user_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False, index=True)
permission = db.Column(db.String(20), nullable=False) # 'read', 'write', 'admin'
can_reshare = db.Column(db.Boolean, default=False, nullable=False)
granted_by = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=True)
user = db.relationship('User', backref='file_permissions')
user = db.relationship('User', foreign_keys=[user_id], backref='file_permissions')
grantor = db.relationship('User', foreign_keys=[granted_by])
__table_args__ = (
db.UniqueConstraint('file_id', 'user_id', name='uq_file_user_permission'),
+59
@@ -0,0 +1,59 @@
from datetime import datetime, timezone, timedelta
from app.extensions import db
# Lock expires after 15 minutes without heartbeat
# Client sends heartbeat every 10 seconds and refreshes JWT every 10 minutes
LOCK_TIMEOUT_MINUTES = 15
class FileLock(db.Model):
__tablename__ = 'file_locks'
id = db.Column(db.Integer, primary_key=True)
file_id = db.Column(db.Integer, db.ForeignKey('files.id'), unique=True, nullable=False, index=True)
locked_by = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False)
locked_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc), nullable=False)
heartbeat_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc), nullable=False)
client_info = db.Column(db.String(255), nullable=True) # e.g. "Desktop-Client Windows"
file = db.relationship('File', backref=db.backref('lock', uselist=False))
user = db.relationship('User', backref='file_locks')
def is_expired(self):
cutoff = datetime.now(timezone.utc) - timedelta(minutes=LOCK_TIMEOUT_MINUTES)
return self.heartbeat_at.replace(tzinfo=timezone.utc) < cutoff
def to_dict(self):
return {
'id': self.id,
'file_id': self.file_id,
'locked_by': self.locked_by,
'locked_by_username': self.user.username if self.user else None,
'locked_at': self.locked_at.isoformat() if self.locked_at else None,
'heartbeat_at': self.heartbeat_at.isoformat() if self.heartbeat_at else None,
'client_info': self.client_info,
'is_expired': self.is_expired(),
}
@staticmethod
def cleanup_expired():
"""Remove all expired locks."""
cutoff = datetime.now(timezone.utc) - timedelta(minutes=LOCK_TIMEOUT_MINUTES)
expired = FileLock.query.filter(FileLock.heartbeat_at < cutoff).all()
count = len(expired)
for lock in expired:
db.session.delete(lock)
if count:
db.session.commit()
return count
@staticmethod
def get_lock(file_id):
"""Get active (non-expired) lock for a file, cleaning up expired ones."""
lock = FileLock.query.filter_by(file_id=file_id).first()
if lock and lock.is_expired():
db.session.delete(lock)
db.session.commit()
return None
return lock
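The expiry rule behind `is_expired`/`cleanup_expired` is simple: a lock is stale once its last heartbeat is older than the 15-minute timeout. A minimal, ORM-free sketch of that check (the function name is illustrative):

```python
# Sketch of the heartbeat-expiry rule used by FileLock.
from datetime import datetime, timedelta, timezone

LOCK_TIMEOUT = timedelta(minutes=15)

def is_lock_expired(heartbeat_at, now=None):
    # Stale when the last heartbeat predates (now - timeout).
    now = now or datetime.now(timezone.utc)
    return heartbeat_at < now - LOCK_TIMEOUT
```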
+86
@@ -0,0 +1,86 @@
from datetime import datetime, timezone
from app.extensions import db
class TaskList(db.Model):
__tablename__ = 'task_lists'
id = db.Column(db.Integer, primary_key=True)
owner_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False, index=True)
name = db.Column(db.String(255), nullable=False)
color = db.Column(db.String(7), default='#10b981')
description = db.Column(db.Text, nullable=True)
created_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc))
tasks = db.relationship('Task', backref='task_list', lazy='dynamic',
cascade='all, delete-orphan')
shares = db.relationship('TaskListShare', backref='task_list', lazy='dynamic',
cascade='all, delete-orphan')
def to_dict(self):
return {
'id': self.id,
'owner_id': self.owner_id,
'name': self.name,
'color': self.color,
'description': self.description,
'created_at': self.created_at.isoformat() if self.created_at else None,
}
class Task(db.Model):
__tablename__ = 'tasks'
id = db.Column(db.Integer, primary_key=True)
task_list_id = db.Column(db.Integer, db.ForeignKey('task_lists.id'), nullable=False, index=True)
uid = db.Column(db.String(255), unique=True, nullable=False)
ical_data = db.Column(db.Text, nullable=False, default='') # Full VTODO block
summary = db.Column(db.String(500), nullable=True)
description = db.Column(db.Text, nullable=True)
status = db.Column(db.String(32), nullable=True) # NEEDS-ACTION | IN-PROCESS | COMPLETED | CANCELLED
priority = db.Column(db.Integer, nullable=True) # 0 (none) - 9
percent_complete = db.Column(db.Integer, nullable=True) # 0..100
due = db.Column(db.DateTime, nullable=True, index=True)
dtstart = db.Column(db.DateTime, nullable=True)
completed_at = db.Column(db.DateTime, nullable=True)
categories = db.Column(db.Text, nullable=True) # comma-separated
created_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = db.Column(db.DateTime, default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc))
def to_dict(self):
return {
'id': self.id,
'task_list_id': self.task_list_id,
'uid': self.uid,
'summary': self.summary,
'description': self.description,
'status': self.status or 'NEEDS-ACTION',
'priority': self.priority,
'percent_complete': self.percent_complete,
'due': self.due.isoformat() if self.due else None,
'dtstart': self.dtstart.isoformat() if self.dtstart else None,
'completed_at': self.completed_at.isoformat() if self.completed_at else None,
'categories': self.categories.split(',') if self.categories else [],
'created_at': self.created_at.isoformat() if self.created_at else None,
'updated_at': self.updated_at.isoformat() if self.updated_at else None,
}
class TaskListShare(db.Model):
__tablename__ = 'task_list_shares'
id = db.Column(db.Integer, primary_key=True)
task_list_id = db.Column(db.Integer, db.ForeignKey('task_lists.id'), nullable=False, index=True)
shared_with_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False, index=True)
permission = db.Column(db.String(20), nullable=False, default='read')
color = db.Column(db.String(7), nullable=True)
shared_with = db.relationship('User', backref='shared_task_lists')
__table_args__ = (
db.UniqueConstraint('task_list_id', 'shared_with_id', name='uq_task_list_share'),
)
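Since `Task.ical_data` stores a full VTODO block, here is a stripped-down illustration of what such a block looks like for the fields above. This is a hypothetical helper, not the project's actual serializer; real VTODOs carry more properties (DTSTAMP, SEQUENCE, etc.):

```python
from datetime import datetime, timezone

def task_to_vtodo(uid, summary, status="NEEDS-ACTION", due=None, percent=None):
    """Render a minimal VTODO block resembling what Task.ical_data holds.
    Illustrative sketch only -- not the application's real serializer."""
    lines = ["BEGIN:VTODO", f"UID:{uid}", f"SUMMARY:{summary}", f"STATUS:{status}"]
    if due is not None:
        lines.append("DUE:" + due.strftime("%Y%m%dT%H%M%SZ"))
    if percent is not None:
        lines.append(f"PERCENT-COMPLETE:{percent}")
    lines.append("END:VTODO")
    # iCalendar mandates CRLF line endings
    return "\r\n".join(lines)

vtodo = task_to_vtodo("abc-123", "Write docs",
                      due=datetime(2026, 5, 1, tzinfo=timezone.utc))
```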
@@ -9,6 +9,8 @@ class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(80), unique=True, nullable=False, index=True)
email = db.Column(db.String(255), unique=True, nullable=True)
first_name = db.Column(db.String(100), nullable=True)
last_name = db.Column(db.String(100), nullable=True)
password_hash = db.Column(db.String(255), nullable=False)
role = db.Column(db.String(20), default='user', nullable=False) # 'admin' or 'user'
master_key_salt = db.Column(db.LargeBinary, nullable=True) # For password manager
@@ -23,6 +25,7 @@ class User(db.Model):
foreign_keys='File.owner_id')
calendars = db.relationship('Calendar', backref='owner', lazy='dynamic')
address_books = db.relationship('AddressBook', backref='owner', lazy='dynamic')
task_lists = db.relationship('TaskList', backref='owner', lazy='dynamic')
email_accounts = db.relationship('EmailAccount', backref='user', lazy='dynamic',
order_by='EmailAccount.sort_order')
password_folders = db.relationship('PasswordFolder', backref='owner', lazy='dynamic')
@@ -33,10 +36,25 @@ class User(db.Model):
def check_password(self, password):
return bcrypt.check_password_hash(self.password_hash, password)
@property
def full_name(self) -> str:
"""First and last name joined; empty string if neither is set."""
parts = [self.first_name or '', self.last_name or '']
return ' '.join(p.strip() for p in parts if p and p.strip())
@property
def display_name(self) -> str:
"""Full name if available, otherwise the username."""
return self.full_name or self.username
def to_dict(self, include_email=False):
data = {
'id': self.id,
'username': self.username,
'first_name': self.first_name or '',
'last_name': self.last_name or '',
'full_name': self.full_name,
'display_name': self.display_name,
'role': self.role,
'is_active': self.is_active,
'storage_quota_mb': self.storage_quota_mb,
@@ -0,0 +1,104 @@
"""In-memory event broadcaster for SSE clients.
Each logged-in user can have multiple connected clients (desktop, web,
mobile). Every client gets its own queue. Mutating file operations push
an event into the queues of every affected user.
"""
from __future__ import annotations
import json
import queue
import threading
import time
from typing import Iterable
class EventBroadcaster:
def __init__(self) -> None:
self._lock = threading.Lock()
# user_id -> list[queue.Queue]
self._subs: dict[int, list[queue.Queue]] = {}
def subscribe(self, user_id: int) -> queue.Queue:
q: queue.Queue = queue.Queue(maxsize=256)
with self._lock:
self._subs.setdefault(user_id, []).append(q)
return q
def unsubscribe(self, user_id: int, q: queue.Queue) -> None:
with self._lock:
lst = self._subs.get(user_id)
if not lst:
return
try:
lst.remove(q)
except ValueError:
pass
if not lst:
self._subs.pop(user_id, None)
def publish(self, user_ids: Iterable[int], event: dict) -> None:
payload = dict(event)
payload.setdefault('ts', time.time())
with self._lock:
for uid in set(user_ids):
for q in self._subs.get(uid, []):
try:
q.put_nowait(payload)
except queue.Full:
pass # slow client - drop event
def stream(self, user_id: int):
"""Generator yielding SSE-formatted strings for one client."""
q = self.subscribe(user_id)
try:
# Initial hello so the client knows it's connected
yield f"event: hello\ndata: {json.dumps({'user_id': user_id})}\n\n"
while True:
try:
event = q.get(timeout=20.0)
except queue.Empty:
# Heartbeat / keepalive comment - also keeps proxies happy
yield ": keepalive\n\n"
continue
kind = event.get('type', 'change')
yield f"event: {kind}\ndata: {json.dumps(event)}\n\n"
finally:
self.unsubscribe(user_id, q)
broadcaster = EventBroadcaster()
def notify_file_change(owner_id: int, file_id: int | None, change: str,
shared_with: Iterable[int] = ()) -> None:
"""Emit a file change event to the owner plus any users with share access."""
recipients = [owner_id, *shared_with]
broadcaster.publish(recipients, {
'type': 'file',
'change': change, # 'created' | 'updated' | 'deleted' | 'locked' | 'unlocked'
'file_id': file_id,
})
def notify_calendar_change(owner_id: int, calendar_id: int, change: str,
shared_with: Iterable[int] = ()) -> None:
"""Emit a calendar-level change event (event added/changed/deleted or
share membership changed). Sent to owner + all users the calendar is
shared with."""
recipients = [owner_id, *shared_with]
broadcaster.publish(recipients, {
'type': 'calendar',
'change': change, # 'event'|'share'|'deleted'
'calendar_id': calendar_id,
})
def notify_tasklist_change(owner_id: int, list_id: int, change: str,
shared_with: Iterable[int] = ()) -> None:
recipients = [owner_id, *shared_with]
broadcaster.publish(recipients, {
'type': 'tasklist',
'change': change, # 'task'|'share'|'deleted'|'created'
'task_list_id': list_id,
})
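On the receiving end, a client splits the stream into frames on blank lines, skips the `:`-prefixed keepalive comments (as SSE requires), and JSON-decodes the `data:` payload. A small sketch of that parsing, fed with the exact format `stream()` emits:

```python
import json

def parse_sse(raw: str):
    """Parse SSE frames of the shape EventBroadcaster.stream() yields."""
    events = []
    for frame in raw.split("\n\n"):
        event, data = "message", None
        for line in frame.splitlines():
            if line.startswith(":"):
                continue  # keepalive comment, carries no data
            if line.startswith("event:"):
                event = line[6:].strip()
            elif line.startswith("data:"):
                data = json.loads(line[5:].strip())
        if data is not None:
            events.append((event, data))
    return events

raw = ('event: hello\ndata: {"user_id": 7}\n\n'
       ': keepalive\n\n'
       'event: file\ndata: {"change": "updated", "file_id": 3}\n\n')
print(parse_sse(raw))
```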
@@ -0,0 +1,56 @@
"""Lightweight SNTP client for checking the local clock offset.
Inside a container we cannot actually set the system time (that needs
CAP_SYS_TIME). We can, however, determine the offset and log it so the
admin knows whether the host is drifting. A hard sync requires an NTP
daemon running on the host itself.
"""
from __future__ import annotations
import socket
import struct
import time
_NTP_EPOCH_OFFSET = 2208988800 # seconds between 1900 and 1970
def query_ntp(server: str, timeout: float = 3.0, port: int = 123) -> float | None:
"""Query an NTP server and return the offset (server - local) in
seconds, or None on error."""
packet = b'\x1b' + 47 * b'\0' # LI=0, VN=3, Mode=3 (client)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(timeout)
try:
t0 = time.time()
sock.sendto(packet, (server, port))
data, _ = sock.recvfrom(1024)
t3 = time.time()
except (socket.gaierror, socket.timeout, OSError):
return None
finally:
sock.close()
if len(data) < 48:
return None
# Transmit timestamp: Offset 40, 8 bytes, fixed point 32.32
secs, frac = struct.unpack('!II', data[40:48])
if secs == 0:
return None
t2 = secs - _NTP_EPOCH_OFFSET + frac / 2**32
# Simple offset, ignoring round-trip asymmetry: t2 - (t0 + t3) / 2
return t2 - (t0 + t3) / 2
def check_and_log(server: str, logger=None) -> float | None:
import logging
log = logger or logging.getLogger('ntp')
offset = query_ntp(server)
if offset is None:
log.warning('NTP check: server %s unreachable', server)
return None
if abs(offset) > 5.0:
log.warning('NTP check: system time deviates by %.2fs from %s -> synchronize the host clock!',
offset, server)
else:
log.info('NTP check: offset %.3fs against %s (ok)', offset, server)
return offset
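The 32.32 fixed-point conversion in `query_ntp` can be checked in isolation by packing a synthetic transmit timestamp the same way a server would:

```python
import struct

_NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 and 1970

# Fake server response: transmit timestamp at byte offset 40,
# seconds = Unix time 1_700_000_000, fraction = half a second (2**31 of 2**32).
secs = 1_700_000_000 + _NTP_EPOCH_OFFSET
packet = b"\x00" * 40 + struct.pack("!II", secs, 2**31)

s, frac = struct.unpack("!II", packet[40:48])
t2 = s - _NTP_EPOCH_OFFSET + frac / 2**32
print(t2)  # 1700000000.5
```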
@@ -179,3 +179,32 @@ def notify_user_created(user, created_by_username):
f'Deine Mini-Cloud'
)
send_system_email(user.email, subject, body)
def notify_conflict_to_admin(conflict_user, conflict_file_name, conflict_copy_name,
folder_path, lock_user_name, lock_user_email, locked_since):
"""Notify admin about a sync conflict (user edited a locked file)."""
from app.models.settings import AppSettings
admin_email = AppSettings.get('system_email_from', '')
if not admin_email:
return
subject = f'Mini-Cloud: file conflict - {conflict_file_name}'
body = (
f'File conflict in the Mini-Cloud!\n\n'
f'User: {conflict_user.username}'
f'{" (" + conflict_user.email + ")" if conflict_user.email else ""}\n'
f'Edited: {conflict_file_name}\n'
f'Folder: {folder_path}\n'
f'Conflict copy: {conflict_copy_name}\n\n'
f'Locked by: {lock_user_name}'
f'{" (" + lock_user_email + ")" if lock_user_email else ""}\n'
f'Locked since: {locked_since}\n\n'
f'Cause: {conflict_user.username} edited the file locally '
f'while {lock_user_name} had it checked out.\n\n'
f'The changes made by {conflict_user.username} were saved as a '
f'conflict copy and must be merged manually.\n\n'
f'Your Mini-Cloud'
)
send_system_email(admin_email, subject, body)
@@ -0,0 +1,255 @@
#!/bin/bash
#
# Mini-Cloud Client Build Script
# Builds desktop and mobile clients via Docker (no local setup required)
#
# Usage:
# ./build.sh linux # Linux desktop (.deb + .AppImage)
# ./build.sh windows # Windows desktop (.msi + .exe)
# ./build.sh mac # macOS desktop (.dmg) - only possible on macOS
# ./build.sh android # Android app (.apk)
# ./build.sh ios # iOS app (.ipa) - only possible on macOS
# ./build.sh all-desktop # Linux + Windows
# ./build.sh clean # delete the build cache
#
# After the build, the client is automatically uploaded to the server
# if CLOUD_URL and BUILD_UPLOAD_TOKEN are set in .env.
#
set -e
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
DESKTOP_DIR="$SCRIPT_DIR/clients/desktop"
MOBILE_DIR="$SCRIPT_DIR/clients/mobile"
OUTPUT_DIR="$SCRIPT_DIR/build-output"
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
info() { echo -e "${GREEN}[BUILD]${NC} $1"; }
warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
error() { echo -e "${RED}[ERROR]${NC} $1"; exit 1; }
# Load .env if exists
if [ -f "$SCRIPT_DIR/.env" ]; then
export $(grep -v '^#' "$SCRIPT_DIR/.env" | grep -E '^(CLOUD_URL|BUILD_UPLOAD_TOKEN)=' | xargs)
fi
mkdir -p "$OUTPUT_DIR"
upload_to_server() {
local platform="$1"
local filepath="$2"
if [ -z "$CLOUD_URL" ] || [ -z "$BUILD_UPLOAD_TOKEN" ]; then
warn "CLOUD_URL or BUILD_UPLOAD_TOKEN not set - skipping upload"
warn "Set both in .env to enable automatic upload"
return
fi
local filename=$(basename "$filepath")
info "Uploading $filename to $CLOUD_URL..."
local http_code
http_code=$(curl -s -o /tmp/upload_response.txt -w "%{http_code}" \
-X POST "$CLOUD_URL/api/clients/$platform/upload" \
-H "X-Build-Token: $BUILD_UPLOAD_TOKEN" \
-F "file=@$filepath")
if [ "$http_code" = "200" ]; then
info "Upload successful: $filename -> $CLOUD_URL"
cat /tmp/upload_response.txt | python3 -m json.tool 2>/dev/null || cat /tmp/upload_response.txt
elif [ "$http_code" = "403" ]; then
# Print the hints BEFORE calling error - error() exits the script,
# so anything after it would never be shown.
echo ""
echo " The BUILD_UPLOAD_TOKEN in your .env must be the SECRET_KEY"
echo " or JWT_SECRET_KEY of the target server."
echo ""
echo " Check on the server: grep SECRET_KEY /path/to/.env"
echo " Then copy the value into your local .env as BUILD_UPLOAD_TOKEN."
echo ""
error "Upload failed: BUILD_UPLOAD_TOKEN is wrong!"
elif [ "$http_code" = "000" ]; then
error "Upload failed: server unreachable ($CLOUD_URL)"
else
warn "Upload failed (HTTP $http_code)"
cat /tmp/upload_response.txt 2>/dev/null
fi
rm -f /tmp/upload_response.txt
}
build_linux() {
info "Building Linux desktop client..."
cd "$DESKTOP_DIR"
sudo docker build -f Dockerfile.build -t minicloud-desktop-builder .
sudo docker run --rm \
-v "$OUTPUT_DIR:/output" \
minicloud-desktop-builder \
bash -c "npm run tauri build && cp -r src-tauri/target/release/bundle/* /output/ 2>/dev/null; \
cp src-tauri/target/release/minicloud-sync /output/ 2>/dev/null; \
echo 'Linux build done!'"
info "Linux build done! Files in: $OUTPUT_DIR/"
# Upload best file (AppImage > deb > binary)
local upload_file=""
for f in "$OUTPUT_DIR"/*.AppImage "$OUTPUT_DIR"/*.deb "$OUTPUT_DIR"/minicloud-sync; do
if [ -f "$f" ]; then upload_file="$f"; break; fi
done
[ -n "$upload_file" ] && upload_to_server "linux" "$upload_file"
}
build_windows() {
info "Building Windows desktop client (cross-compile)..."
cd "$DESKTOP_DIR"
sudo docker build -f Dockerfile.build -t minicloud-desktop-builder .
sudo docker run --rm \
-v "$OUTPUT_DIR:/output" \
-e CARGO_TARGET_X86_64_PC_WINDOWS_GNU_LINKER=x86_64-w64-mingw32-gcc \
minicloud-desktop-builder \
bash -c "npm run tauri build -- --target x86_64-pc-windows-gnu 2>&1 || true; \
find src-tauri/target -name '*.exe' -o -name '*.msi' | head -5; \
cp src-tauri/target/x86_64-pc-windows-gnu/release/*.exe /output/ 2>/dev/null; \
cp -r src-tauri/target/x86_64-pc-windows-gnu/release/bundle/* /output/ 2>/dev/null; \
echo 'Windows build done!'"
info "Windows build done! Files in: $OUTPUT_DIR/"
# Upload setup installer (NSIS with WebView2 bundled), not the naked .exe
local upload_file=""
for f in "$OUTPUT_DIR"/nsis/*setup*.exe "$OUTPUT_DIR"/*.msi "$OUTPUT_DIR"/nsis/*.exe; do
if [ -f "$f" ]; then upload_file="$f"; break; fi
done
[ -n "$upload_file" ] && upload_to_server "windows" "$upload_file"
}
build_mac() {
# macOS cannot be cross-compiled; this must run on macOS itself
if [[ "$(uname)" != "Darwin" ]]; then
error "macOS builds are only possible on macOS!"
fi
info "Building macOS desktop client..."
cd "$DESKTOP_DIR"
npm install
npm run tauri build
cp -r src-tauri/target/release/bundle/* "$OUTPUT_DIR/" 2>/dev/null
info "macOS build done! Files in: $OUTPUT_DIR/"
local upload_file=""
for f in "$OUTPUT_DIR"/*.dmg; do
if [ -f "$f" ]; then upload_file="$f"; break; fi
done
[ -n "$upload_file" ] && upload_to_server "mac" "$upload_file"
}
build_android() {
if [ ! -d "$MOBILE_DIR" ]; then
error "Mobile client not created yet (clients/mobile/)"
fi
info "Building Android app..."
cd "$MOBILE_DIR"
sudo docker run --rm \
-v "$MOBILE_DIR:/app" \
-v "$OUTPUT_DIR:/output" \
ghcr.io/nickvdyck/flutter-android:latest \
bash -c "cd /app && flutter pub get && flutter build apk --release && \
cp build/app/outputs/flutter-apk/app-release.apk /output/minicloud.apk && \
echo 'Android build done!'"
info "Android APK: $OUTPUT_DIR/minicloud.apk"
[ -f "$OUTPUT_DIR/minicloud.apk" ] && upload_to_server "android" "$OUTPUT_DIR/minicloud.apk"
}
build_ios() {
if [[ "$(uname)" != "Darwin" ]]; then
error "iOS builds are only possible on macOS!"
fi
if [ ! -d "$MOBILE_DIR" ]; then
error "Mobile client not created yet (clients/mobile/)"
fi
info "Building iOS app..."
cd "$MOBILE_DIR"
flutter pub get
flutter build ios --release
info "iOS build done! Open Xcode for signing + archiving."
local upload_file=""
for f in "$MOBILE_DIR"/build/ios/ipa/*.ipa; do
if [ -f "$f" ]; then upload_file="$f"; break; fi
done
[ -n "$upload_file" ] && upload_to_server "ios" "$upload_file"
}
do_clean() {
info "Deleting build cache..."
rm -rf "$OUTPUT_DIR"/*
rm -rf "$DESKTOP_DIR/src-tauri/target"
rm -rf "$MOBILE_DIR/build" 2>/dev/null
sudo docker rmi minicloud-desktop-builder 2>/dev/null || true
info "Build cache deleted."
}
# Main
case "${1:-help}" in
linux)
build_linux
;;
windows)
build_windows
;;
mac|macos)
build_mac
;;
android)
build_android
;;
ios)
build_ios
;;
all-desktop)
build_linux
build_windows
;;
clean)
do_clean
;;
*)
echo ""
echo "Mini-Cloud Client Build Script"
echo ""
echo "Usage: $0 <target>"
echo ""
echo "Desktop:"
echo " linux Linux (.deb + .AppImage + binary)"
echo " windows Windows (.msi + .exe) - cross-compiled via Docker"
echo " mac macOS (.dmg) - macOS only"
echo " all-desktop Linux + Windows"
echo ""
echo "Mobile:"
echo " android Android (.apk) - via Docker"
echo " ios iOS (.ipa) - macOS only"
echo ""
echo "Other:"
echo " clean delete the build cache"
echo ""
echo "All builds (except mac/ios) run inside Docker - no local"
echo "setup required. Output lands in: build-output/"
echo ""
echo "Auto-upload: if CLOUD_URL and BUILD_UPLOAD_TOKEN are set in"
echo ".env, the client is uploaded to the server after the build"
echo "and is available for download."
echo ""
;;
esac
@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
@@ -0,0 +1,70 @@
# Native file provider integration (placeholder mode)
In addition to the classic "copy everything" sync, the desktop client
offers a **OneDrive-style placeholder mode**: files appear in the file
manager as small metadata stubs (placeholders) and are only downloaded
from the server on double-click.
## Status
| Platform | Status | Technology |
| -------- | -------- | -------------------------------------- |
| Windows | **MVP** | Cloud Files API (`cfapi.dll`) |
| Linux | Skeleton | FUSE (libfuse3) - feature `linux_fuse` |
| macOS | Planned | `NSFileProviderExtension` + signing |
## Windows
### Requirements
- Windows 10 1709 (build 16299) or newer
- The client runs as a regular user process (no admin rights required)
### What works
- `CfRegisterSyncRoot` registers a folder as a sync root; Explorer
shows cloud overlay icons.
- `CfCreatePlaceholders` creates a placeholder with the correct size
and modification time for every Mini-Cloud file.
- The `FETCH_DATA` callback downloads via range requests from the
server as soon as Explorer requests file data (e.g. on open).
- `CfSetPinState` allows manual "always keep offline" / "cloud only".
### What is still missing
- Upload callback (`NOTIFY_FILE_CLOSE_COMPLETION`) for locally changed files
- "Check in/out" context menu via shell extension
- Delta updates (new/deleted files on the server -> local placeholders)
- Conflict resolution
### Enabling it
Activate the **"Cloud Files mode"** toggle in the client UI (internally
this calls the `cloud_files_enable` command). Alternatively, on the
command line at build time:
```powershell
# From clients/desktop/src-tauri:
cargo build --release
```
Windows targets need the Windows SDK (but they also cross-compile
cleanly from Linux via `cargo xwin` when `build.sh windows` runs).
## Linux
The FUSE provider is optional and gated behind a feature flag, so that
normal Linux builds do not require `libfuse3-dev`:
```bash
cargo build --features linux_fuse
```
Overlay icons in the file manager (Nautilus / Dolphin / Caja)
additionally need a native extension per desktop environment - coming
in a later commit.
## macOS
Requires an Apple Developer ID + notarization, since the Finder will
not load an `NSFileProviderExtension` otherwise. Will be tackled once
an Apple dev account is available.
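The `FETCH_DATA` path described above ultimately maps an (offset, length) request from the OS onto an HTTP `Range` header. A tiny illustration of that mapping (hypothetical helper, not the client's actual code; note that `Range` is inclusive on both ends):

```python
def range_header(offset: int, length: int) -> str:
    """Build the HTTP Range header for a FETCH_DATA-style partial read.
    The byte range is inclusive on both ends, hence the -1."""
    end = offset + length - 1
    return f"bytes={offset}-{end}"

print(range_header(0, 4096))      # bytes=0-4095
print(range_header(65536, 1024))  # bytes=65536-66559
```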
@@ -0,0 +1,50 @@
# Multi-stage build container for Tauri Desktop Client
# Supports: linux, windows (cross-compile)
FROM rust:1.94-bookworm AS builder
# Install system dependencies for Tauri
RUN apt-get update && apt-get install -y --no-install-recommends \
libwebkit2gtk-4.1-dev \
libgtk-3-dev \
libayatana-appindicator3-dev \
librsvg2-dev \
libcairo2-dev \
libgdk-pixbuf-2.0-dev \
libsoup-3.0-dev \
libjavascriptcoregtk-4.1-dev \
pkg-config \
curl \
wget \
file \
&& rm -rf /var/lib/apt/lists/*
# Install Node.js
RUN curl -fsSL https://deb.nodesource.com/setup_22.x | bash - \
&& apt-get install -y nodejs \
&& rm -rf /var/lib/apt/lists/*
# Windows cross-compile tools
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc-mingw-w64-x86-64 \
nsis \
&& rm -rf /var/lib/apt/lists/* \
&& rustup target add x86_64-pc-windows-gnu || true
WORKDIR /build
# Cache Rust dependencies
COPY src-tauri/Cargo.toml src-tauri/Cargo.lock* ./src-tauri/
COPY src-tauri/build.rs ./src-tauri/
RUN mkdir -p src-tauri/src && echo "pub fn run() {}" > src-tauri/src/lib.rs \
&& echo "fn main() { minicloud_sync_lib::run() }" > src-tauri/src/main.rs \
&& cd src-tauri && cargo fetch 2>/dev/null || true
# Copy full source
COPY . .
# Install npm dependencies
RUN npm ci
# Default: build for linux
CMD ["npm", "run", "tauri", "build"]
@@ -0,0 +1,7 @@
# Tauri + Vue 3
This template should help get you started developing with Tauri + Vue 3 in Vite. The template uses Vue 3 `<script setup>` SFCs; check out the [script setup docs](https://v3.vuejs.org/api/sfc-script-setup.html#sfc-script-setup) to learn more.
## Recommended IDE Setup
- [VS Code](https://code.visualstudio.com/) + [Vue - Official](https://marketplace.visualstudio.com/items?itemName=Vue.volar) + [Tauri](https://marketplace.visualstudio.com/items?itemName=tauri-apps.tauri-vscode) + [rust-analyzer](https://marketplace.visualstudio.com/items?itemName=rust-lang.rust-analyzer)
@@ -0,0 +1,14 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Tauri + Vue 3 App</title>
</head>
<body>
<div id="app"></div>
<script type="module" src="/src/main.js"></script>
</body>
</html>
File diff suppressed because it is too large
@@ -0,0 +1,24 @@
{
"name": "tauri-app",
"private": true,
"version": "0.1.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build",
"preview": "vite preview",
"tauri": "tauri"
},
"dependencies": {
"vue": "^3.5.13",
"@tauri-apps/api": "^2",
"@tauri-apps/plugin-opener": "^2",
"@tauri-apps/plugin-dialog": "^2",
"@tauri-apps/plugin-notification": "^2"
},
"devDependencies": {
"@vitejs/plugin-vue": "^5.2.1",
"vite": "^6.0.3",
"@tauri-apps/cli": "^2"
}
}
@@ -0,0 +1,6 @@
<svg width="206" height="231" viewBox="0 0 206 231" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M143.143 84C143.143 96.1503 133.293 106 121.143 106C108.992 106 99.1426 96.1503 99.1426 84C99.1426 71.8497 108.992 62 121.143 62C133.293 62 143.143 71.8497 143.143 84Z" fill="#FFC131"/>
<ellipse cx="84.1426" cy="147" rx="22" ry="22" transform="rotate(180 84.1426 147)" fill="#24C8DB"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M166.738 154.548C157.86 160.286 148.023 164.269 137.757 166.341C139.858 160.282 141 153.774 141 147C141 144.543 140.85 142.121 140.558 139.743C144.975 138.204 149.215 136.139 153.183 133.575C162.73 127.404 170.292 118.608 174.961 108.244C179.63 97.8797 181.207 86.3876 179.502 75.1487C177.798 63.9098 172.884 53.4021 165.352 44.8883C157.82 36.3744 147.99 30.2165 137.042 27.1546C126.095 24.0926 114.496 24.2568 103.64 27.6274C92.7839 30.998 83.1319 37.4317 75.8437 46.1553C74.9102 47.2727 74.0206 48.4216 73.176 49.5993C61.9292 50.8488 51.0363 54.0318 40.9629 58.9556C44.2417 48.4586 49.5653 38.6591 56.679 30.1442C67.0505 17.7298 80.7861 8.57426 96.2354 3.77762C111.685 -1.01901 128.19 -1.25267 143.769 3.10474C159.348 7.46215 173.337 16.2252 184.056 28.3411C194.775 40.457 201.767 55.4101 204.193 71.404C206.619 87.3978 204.374 103.752 197.73 118.501C191.086 133.25 180.324 145.767 166.738 154.548ZM41.9631 74.275L62.5557 76.8042C63.0459 72.813 63.9401 68.9018 65.2138 65.1274C57.0465 67.0016 49.2088 70.087 41.9631 74.275Z" fill="#FFC131"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M38.4045 76.4519C47.3493 70.6709 57.2677 66.6712 67.6171 64.6132C65.2774 70.9669 64 77.8343 64 85.0001C64 87.1434 64.1143 89.26 64.3371 91.3442C60.0093 92.8732 55.8533 94.9092 51.9599 97.4256C42.4128 103.596 34.8505 112.392 30.1816 122.756C25.5126 133.12 23.9357 144.612 25.6403 155.851C27.3449 167.09 32.2584 177.598 39.7906 186.112C47.3227 194.626 57.153 200.784 68.1003 203.846C79.0476 206.907 90.6462 206.743 101.502 203.373C112.359 200.002 122.011 193.568 129.299 184.845C130.237 183.722 131.131 182.567 131.979 181.383C143.235 180.114 154.132 176.91 164.205 171.962C160.929 182.49 155.596 192.319 148.464 200.856C138.092 213.27 124.357 222.426 108.907 227.222C93.458 232.019 76.9524 232.253 61.3736 227.895C45.7948 223.538 31.8055 214.775 21.0867 202.659C10.3679 190.543 3.37557 175.59 0.949823 159.596C-1.47592 143.602 0.768139 127.248 7.41237 112.499C14.0566 97.7497 24.8183 85.2327 38.4045 76.4519ZM163.062 156.711L163.062 156.711C162.954 156.773 162.846 156.835 162.738 156.897C162.846 156.835 162.954 156.773 163.062 156.711Z" fill="#24C8DB"/>
</svg>
View File
@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="31.88" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 257"><defs><linearGradient id="IconifyId1813088fe1fbc01fb466" x1="-.828%" x2="57.636%" y1="7.652%" y2="78.411%"><stop offset="0%" stop-color="#41D1FF"></stop><stop offset="100%" stop-color="#BD34FE"></stop></linearGradient><linearGradient id="IconifyId1813088fe1fbc01fb467" x1="43.376%" x2="50.316%" y1="2.242%" y2="89.03%"><stop offset="0%" stop-color="#FFEA83"></stop><stop offset="8.333%" stop-color="#FFDD35"></stop><stop offset="100%" stop-color="#FFA800"></stop></linearGradient></defs><path fill="url(#IconifyId1813088fe1fbc01fb466)" d="M255.153 37.938L134.897 252.976c-2.483 4.44-8.862 4.466-11.382.048L.875 37.958c-2.746-4.814 1.371-10.646 6.827-9.67l120.385 21.517a6.537 6.537 0 0 0 2.322-.004l117.867-21.483c5.438-.991 9.574 4.796 6.877 9.62Z"></path><path fill="url(#IconifyId1813088fe1fbc01fb467)" d="M185.432.063L96.44 17.501a3.268 3.268 0 0 0-2.634 3.014l-5.474 92.456a3.268 3.268 0 0 0 3.997 3.378l24.777-5.718c2.318-.535 4.413 1.507 3.936 3.838l-7.361 36.047c-.495 2.426 1.782 4.5 4.151 3.78l15.304-4.649c2.372-.72 4.652 1.36 4.15 3.788l-11.698 56.621c-.732 3.542 3.979 5.473 5.943 2.437l1.313-2.028l72.516-144.72c1.215-2.423-.88-5.186-3.54-4.672l-25.505 4.922c-2.396.462-4.435-1.77-3.759-4.114l16.646-57.705c.677-2.35-1.37-4.583-3.769-4.113Z"></path></svg>
@@ -0,0 +1,7 @@
# Generated by Cargo
# will have compiled files and executables
/target/
# Generated by Tauri
# will have schema files for capabilities auto-completion
/gen/schemas
File diff suppressed because it is too large
@@ -0,0 +1,57 @@
[package]
name = "minicloud-sync"
version = "0.1.0"
description = "Mini-Cloud Desktop Sync Client"
authors = ["Mini-Cloud"]
edition = "2021"
[lib]
name = "minicloud_sync_lib"
crate-type = ["staticlib", "cdylib", "rlib"]
[build-dependencies]
tauri-build = { version = "2", features = [] }
[dependencies]
tauri = { version = "2", features = ["tray-icon"] }
tauri-plugin-opener = "2"
tauri-plugin-dialog = "2"
tauri-plugin-notification = "2"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
reqwest = { version = "0.12", features = ["json", "multipart", "rustls-tls", "blocking"], default-features = false }
tokio = { version = "1", features = ["full"] }
notify = "7"
sha2 = "0.10"
dirs = "6"
rusqlite = { version = "0.34", features = ["bundled"] }
chrono = { version = "0.4", features = ["serde"] }
base64 = "0.22"
open = "5"
once_cell = "1"
# Platform-specific file-provider integration (OneDrive-like).
# Only link against the Cloud Files API (cfapi.dll) on Windows.
[target.'cfg(windows)'.dependencies]
windows = { version = "0.58", features = [
"Win32_Foundation",
"Win32_Storage_FileSystem",
"Win32_Storage_CloudFilters",
"Win32_System_IO",
"Win32_System_Com",
"Win32_System_CorrelationVector", # gates CF_CALLBACK_INFO / CfExecute / CfConnectSyncRoot
"Win32_UI_Shell",
"Win32_Security",
"Win32_System_Registry",
] }
widestring = "1"
winreg = "0.52"
# Linux: FUSE-based virtual filesystem (optional: cargo build --features linux_fuse)
[target.'cfg(target_os = "linux")'.dependencies]
fuser = { version = "0.15", optional = true }
libc = "0.2"
[features]
default = []
linux_fuse = ["fuser"]
@@ -0,0 +1,3 @@
fn main() {
tauri_build::build()
}
@@ -0,0 +1,13 @@
{
"$schema": "../gen/schemas/desktop-schema.json",
"identifier": "default",
"description": "Capability for the main window",
"windows": ["main"],
"permissions": [
"core:default",
"opener:default",
"dialog:default",
"dialog:allow-open",
"notification:default"
]
}
(Binary icon assets added - app and tray icons in several sizes; previews not shown.)
@@ -0,0 +1,25 @@
//! Linux FUSE-based file-provider integration (placeholder mode).
//!
//! Status: skeleton. Only works when built with `--features linux_fuse`
//! and with `libfuse3-dev` installed. Overlay icons in the file manager
//! (Nautilus/Dolphin) will be delivered later as a separate extension -
//! the FUSE filesystem itself cannot set them.
#![cfg(all(target_os = "linux", feature = "linux_fuse"))]
use super::RemoteEntry;
use std::path::PathBuf;
pub fn mount(mount_point: &PathBuf) -> Result<(), String> {
std::fs::create_dir_all(mount_point).map_err(|e| e.to_string())?;
// TODO: fuser::Filesystem impl with on-demand download
Err("Linux FUSE provider: not implemented yet (MVP to follow)".into())
}
pub fn unmount(_mount_point: &PathBuf) -> Result<(), String> {
Err("Linux FUSE provider: not implemented yet".into())
}
pub fn populate(_mount_point: &PathBuf, _entries: &[RemoteEntry]) -> Result<(), String> {
Err("Linux FUSE provider: not implemented yet".into())
}
@@ -0,0 +1,121 @@
//! Native file-provider integration (placeholder files as in OneDrive).
//!
//! On Windows this is implemented via the Cloud Files API (cfapi.dll),
//! on Linux via FUSE (optional, behind the `linux_fuse` feature). macOS
//! will follow later via NSFileProviderExtension (requires Apple signing).
//!
//! The existing `sync::engine` stays untouched and still offers the
//! classic "copy everything locally" mode. Cloud Files mode is, in effect,
//! "files on demand": a file is only downloaded when accessed.
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
/// An entry from the Mini-Cloud sync tree, as delivered by the server.
/// Used by both platforms to create placeholders / FUSE inodes.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RemoteEntry {
pub id: i64,
pub name: String,
pub parent_id: Option<i64>,
pub is_folder: bool,
pub size: i64,
/// UTC, ISO 8601
pub modified_at: String,
/// SHA-256 if delivered by the server, otherwise None.
pub checksum: Option<String>,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SyncState {
/// File exists only as a placeholder (online-only).
Cloud,
/// File is fully present locally and up to date.
InSync,
/// Changed locally, upload pending.
PendingUpload,
/// Locked on the server (by another user).
LockedByOther,
/// Locked by this client.
LockedLocal,
}
#[cfg(windows)]
pub mod windows;
#[cfg(windows)]
pub mod shell_integration;
#[cfg(all(target_os = "linux", feature = "linux_fuse"))]
pub mod linux;
pub mod sync_loop;
pub mod watcher;
/// Register the sync root with the operating system. Depending on the
/// platform this calls cfapi/CfRegisterSyncRoot or mounts a FUSE filesystem.
#[allow(unused_variables)]
pub fn register_sync_root(
mount_point: &PathBuf,
provider_name: &str,
account_id: &str,
) -> Result<(), String> {
#[cfg(windows)]
return windows::register_sync_root(mount_point, provider_name, account_id);
#[cfg(all(target_os = "linux", feature = "linux_fuse"))]
return linux::mount(mount_point);
#[cfg(not(any(windows, all(target_os = "linux", feature = "linux_fuse"))))]
Err("File-Provider-Integration fuer diese Plattform noch nicht verfuegbar".into())
}
#[allow(unused_variables)]
pub fn unregister_sync_root(mount_point: &PathBuf) -> Result<(), String> {
#[cfg(windows)]
return windows::unregister_sync_root(mount_point);
#[cfg(all(target_os = "linux", feature = "linux_fuse"))]
return linux::unmount(mount_point);
#[cfg(not(any(windows, all(target_os = "linux", feature = "linux_fuse"))))]
Err("File-Provider-Integration fuer diese Plattform noch nicht verfuegbar".into())
}
/// Create placeholders (cloud-only files) for all remote entries.
/// Folders are created as real directories, files as placeholders
/// with stored metadata (size, mtime, ID).
#[allow(unused_variables)]
pub fn populate_placeholders(
mount_point: &PathBuf,
entries: &[RemoteEntry],
) -> Result<(), String> {
#[cfg(windows)]
return windows::populate_placeholders(mount_point, entries);
#[cfg(all(target_os = "linux", feature = "linux_fuse"))]
return linux::populate(mount_point, entries);
#[cfg(not(any(windows, all(target_os = "linux", feature = "linux_fuse"))))]
Err("File-Provider-Integration fuer diese Plattform noch nicht verfuegbar".into())
}
/// Is file-provider integration available on this platform at all?
pub fn is_supported() -> bool {
cfg!(windows) || cfg!(all(target_os = "linux", feature = "linux_fuse"))
}
/// Mark a file that already exists locally as "always keep offline".
#[allow(unused_variables)]
pub fn pin_file(path: &PathBuf) -> Result<(), String> {
#[cfg(windows)]
return windows::set_pin_state(path, true);
#[cfg(not(windows))]
Err("Nur auf Windows unterstuetzt".into())
}
#[allow(unused_variables)]
pub fn unpin_file(path: &PathBuf) -> Result<(), String> {
#[cfg(windows)]
return windows::set_pin_state(path, false);
#[cfg(not(windows))]
Err("Nur auf Windows unterstuetzt".into())
}
@@ -0,0 +1,206 @@
//! Explorer sidebar integration for Windows (no admin rights required).
//!
//! Registers the sync folder as a shell namespace extension under
//! HKEY_CURRENT_USER so it appears with its own icon in the File
//! Explorer navigation pane (like OneDrive/Dropbox).
//!
//! Unlike the actual Cloud Files API this is pure registry cosmetics -
//! the folder works without the sidebar entry, you just do not see it
//! in the left-hand pane.
#![cfg(windows)]
use std::path::Path;
use winreg::enums::*;
use winreg::RegKey;
// Stable GUID for Mini-Cloud - same as the ProviderId in windows.rs.
const CLSID_GUID: &str = "{4D696E69-436C-6F75-6444-7566667944AB}";
// Standard CLSID for the "Generic Shell Folder Implementation".
const SHELL_FOLDER_CLSID: &str = "{0E5AAE11-A475-4c5b-AB00-C66DE400274E}";
/// Register the mount folder in the Explorer navigation pane.
/// `icon_source`: path to an ICO, or an EXE with icon index (e.g. "C:\\app.exe,0")
pub fn install(
display_name: &str,
mount_point: &Path,
icon_source: &str,
) -> Result<(), String> {
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
// 1) CLSID entry under Software\Classes\CLSID\{GUID}
let clsid_path = format!("Software\\Classes\\CLSID\\{}", CLSID_GUID);
let (clsid, _) = hkcu
.create_subkey(&clsid_path)
.map_err(|e| format!("create CLSID: {e}"))?;
clsid
.set_value("", &display_name.to_string())
.map_err(|e| format!("set displayname: {e}"))?;
clsid
.set_value("System.IsPinnedToNameSpaceTree", &1u32)
.map_err(|e| format!("set pinned: {e}"))?;
clsid
.set_value("SortOrderIndex", &0x42u32)
.map_err(|e| format!("set sortorder: {e}"))?;
// 2) DefaultIcon
let (icon_key, _) = clsid
.create_subkey("DefaultIcon")
.map_err(|e| format!("create DefaultIcon: {e}"))?;
icon_key
.set_value("", &icon_source.to_string())
.map_err(|e| format!("set icon: {e}"))?;
// 3) InProcServer32 -> shell32.dll (standard shell-folder host)
let (inproc, _) = clsid
.create_subkey("InProcServer32")
.map_err(|e| format!("create InProcServer32: {e}"))?;
inproc
.set_value("", &"%SystemRoot%\\system32\\shell32.dll".to_string())
.map_err(|e| format!("set shell32: {e}"))?;
inproc
.set_value("ThreadingModel", &"Both".to_string())
.map_err(|e| format!("set threading: {e}"))?;
// 4) Instance -> points at the generic shell folder
let (instance, _) = clsid
.create_subkey("Instance")
.map_err(|e| format!("create Instance: {e}"))?;
instance
.set_value("CLSID", &SHELL_FOLDER_CLSID.to_string())
.map_err(|e| format!("set inst clsid: {e}"))?;
let (pb, _) = instance
.create_subkey("InitPropertyBag")
.map_err(|e| format!("create InitPropertyBag: {e}"))?;
pb.set_value("Attributes", &0x11u32)
.map_err(|e| format!("set attrs pb: {e}"))?;
pb.set_value(
"TargetFolderPath",
&mount_point.to_string_lossy().into_owned(),
)
.map_err(|e| format!("set target: {e}"))?;
// 5) ShellFolder flags
let (sf, _) = clsid
.create_subkey("ShellFolder")
.map_err(|e| format!("create ShellFolder: {e}"))?;
sf.set_value("FolderValueFlags", &0x28u32)
.map_err(|e| format!("set folderflags: {e}"))?;
sf.set_value("Attributes", &0xF080004Du32)
.map_err(|e| format!("set attrs sf: {e}"))?;
// 6) Hook into the navigation pane
let ns_path = format!(
"Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Desktop\\NameSpace\\{}",
CLSID_GUID
);
let (ns, _) = hkcu
.create_subkey(&ns_path)
.map_err(|e| format!("create NameSpace: {e}"))?;
ns.set_value("", &display_name.to_string())
.map_err(|e| format!("set ns name: {e}"))?;
// 7) Context-menu verbs (right-click) for items under the mount
install_context_menu(mount_point)?;
// 8) Notify Explorer (SHChangeNotify)
notify_shell();
Ok(())
}
/// Registers "Immer offline verfuegbar" / "Speicher freigeben" as
/// right-click menu entries that are only shown for items below the
/// mount (AppliesTo filter).
fn install_context_menu(mount_point: &Path) -> Result<(), String> {
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
let exe = std::env::current_exe()
.map_err(|e| format!("current_exe: {e}"))?
.to_string_lossy()
.into_owned();
// Strip the trailing backslash, then build a clean AQS query.
// Registry values are plain strings; backslashes stay as they are.
let mount_clean = mount_point
.to_string_lossy()
.trim_end_matches('\\')
.to_string();
// AppliesTo: only items whose path starts with the mount folder.
let applies_to = format!("System.ItemPathDisplay:~< \"{}\"", mount_clean);
for (verb, label, flag) in [
("MiniCloudPin", "Immer offline verfuegbar", "--pin"),
("MiniCloudUnpin", "Speicher freigeben", "--unpin"),
] {
// Under AllFilesystemObjects instead of * - this also covers folders
// and avoids conflicts with file-type-specific verbs.
let key_path = format!("Software\\Classes\\AllFilesystemObjects\\shell\\{}", verb);
let (k, _) = hkcu
.create_subkey(&key_path)
.map_err(|e| format!("verb {verb}: {e}"))?;
k.set_value("", &label.to_string())
.map_err(|e| format!("default: {e}"))?;
k.set_value("MUIVerb", &label.to_string())
.map_err(|e| format!("MUIVerb: {e}"))?;
k.set_value("AppliesTo", &applies_to)
.map_err(|e| format!("AppliesTo: {e}"))?;
k.set_value("Icon", &exe)
.map_err(|e| format!("Icon: {e}"))?;
let (cmd, _) = k
.create_subkey("command")
.map_err(|e| format!("cmd: {e}"))?;
cmd.set_value("", &format!("\"{}\" {} \"%1\"", exe, flag))
.map_err(|e| format!("cmdline: {e}"))?;
}
Ok(())
}
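// A hypothetical sanity check (not part of this change set) for the
// AppliesTo string built above: a trailing backslash on the mount path
// must not leak into the AQS query. The test module and its name are
// illustrative only.
#[cfg(test)]
mod applies_to_tests {
    #[test]
    fn builds_prefix_query_without_trailing_backslash() {
        let mount = "C:\\Users\\sh\\MiniCloud\\";
        // Same trimming + formatting as install_context_menu above.
        let mount_clean = mount.trim_end_matches('\\');
        let applies_to = format!("System.ItemPathDisplay:~< \"{}\"", mount_clean);
        assert_eq!(applies_to, "System.ItemPathDisplay:~< \"C:\\Users\\sh\\MiniCloud\"");
    }
}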
fn uninstall_context_menu() {
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
for verb in ["MiniCloudPin", "MiniCloudUnpin"] {
// clean up the old (wrong) location as well
let _ = hkcu.delete_subkey_all(format!("Software\\Classes\\*\\shell\\{}", verb));
let _ = hkcu.delete_subkey_all(format!(
"Software\\Classes\\AllFilesystemObjects\\shell\\{}",
verb
));
}
}
/// Remove the shell integration again.
pub fn uninstall() -> Result<(), String> {
let hkcu = RegKey::predef(HKEY_CURRENT_USER);
let ns_path = format!(
"Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Desktop\\NameSpace\\{}",
CLSID_GUID
);
let _ = hkcu.delete_subkey_all(&ns_path);
let clsid_path = format!("Software\\Classes\\CLSID\\{}", CLSID_GUID);
let _ = hkcu.delete_subkey_all(&clsid_path);
uninstall_context_menu();
notify_shell();
Ok(())
}
/// Tells Explorer that the shell namespace list has changed. Without
/// this the new entry only shows up after an Explorer restart.
fn notify_shell() {
use windows::Win32::UI::Shell::{SHChangeNotify, SHCNE_ASSOCCHANGED, SHCNF_IDLIST};
unsafe {
SHChangeNotify(SHCNE_ASSOCCHANGED, SHCNF_IDLIST, None, None);
}
}
/// Default icon source: the running .exe with icon index 0.
pub fn default_icon_source() -> String {
std::env::current_exe()
.ok()
.and_then(|p| p.to_str().map(|s| format!("{},0", s)))
.unwrap_or_else(|| "%SystemRoot%\\system32\\imageres.dll,2".to_string())
}
@@ -0,0 +1,221 @@
//! Background synchronization for the cloud-files mode.
//!
//! Two jobs:
//! 1. Watch local changes under the mount point (notify watcher) and
//!    upload changed files. Newly created files are registered with the
//!    server as new files and marked as placeholders.
//! 2. Poll server-side changes (/api/sync/changes?since=...) and create
//!    missing placeholders or delete removed ones.
//!
//! The loop runs in a dedicated Tokio task; a stored stop channel shuts
//! it down cleanly on deactivation.
use super::RemoteEntry;
use serde::Deserialize;
use std::path::PathBuf;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::mpsc;
#[derive(Clone)]
pub struct SyncLoopConfig {
pub server_url: String,
pub access_token: String,
pub mount_point: PathBuf,
pub poll_interval_secs: u64,
}
pub struct SyncLoopHandle {
pub stop_flag: Arc<AtomicBool>,
pub tx: mpsc::UnboundedSender<LoopMessage>,
}
pub enum LoopMessage {
LocalChange(PathBuf),
Shutdown,
}
/// Start the sync loop. Returns a handle that can stop it or feed in
/// external events (e.g. from the watcher).
pub fn start(cfg: SyncLoopConfig) -> SyncLoopHandle {
let stop_flag = Arc::new(AtomicBool::new(false));
let (tx, mut rx) = mpsc::unbounded_channel::<LoopMessage>();
let stop = stop_flag.clone();
let cfg_task = cfg.clone();
tokio::spawn(async move {
let client = reqwest::Client::new();
let mut since: Option<String> = None;
let mut interval = tokio::time::interval(Duration::from_secs(cfg_task.poll_interval_secs));
loop {
if stop.load(Ordering::Relaxed) {
break;
}
tokio::select! {
_ = interval.tick() => {
if let Err(e) = poll_server_changes(&client, &cfg_task, &mut since).await {
eprintln!("[cloud_files] poll error: {e}");
}
}
Some(msg) = rx.recv() => {
match msg {
LoopMessage::Shutdown => break,
LoopMessage::LocalChange(path) => {
if let Err(e) = upload_local_change(&client, &cfg_task, &path).await {
eprintln!("[cloud_files] upload error: {e}");
}
}
}
}
}
}
});
SyncLoopHandle { stop_flag, tx }
}
#[derive(Debug, Deserialize)]
struct ChangesResponse {
#[serde(default)]
created: Vec<RemoteEntry>,
#[serde(default)]
updated: Vec<RemoteEntry>,
#[serde(default)]
deleted: Vec<i64>,
timestamp: Option<String>,
}
async fn poll_server_changes(
client: &reqwest::Client,
cfg: &SyncLoopConfig,
since: &mut Option<String>,
) -> Result<(), String> {
let base = cfg.server_url.trim_end_matches('/');
let mut url = format!("{}/api/sync/changes", base);
if let Some(s) = since.as_deref() {
url.push_str(&format!("?since={}", urlencode(s)));
}
let resp = client
.get(&url)
.bearer_auth(&cfg.access_token)
.send()
.await
.map_err(|e| e.to_string())?;
if !resp.status().is_success() {
return Err(format!("HTTP {}", resp.status()));
}
let body: ChangesResponse = resp.json().await.map_err(|e| e.to_string())?;
// Created + updated: ensure the target directory exists, then create
// the placeholder (fresh). For updates the old placeholder has to be
// deleted first - Windows does not allow a "replace in place".
for e in body.created.iter().chain(body.updated.iter()) {
let rel = build_relative_path(e);
let full = cfg.mount_point.join(&rel);
if e.is_folder {
let _ = std::fs::create_dir_all(&full);
continue;
}
let parent = full.parent().map(|p| p.to_path_buf()).unwrap_or_else(|| cfg.mount_point.clone());
let _ = std::fs::create_dir_all(&parent);
let _ = std::fs::remove_file(&full); // ignored if not present
#[cfg(windows)]
{
let identity = e.id.to_string();
if let Err(err) = super::windows::create_placeholder_at(
&parent,
&e.name,
e.size,
&e.modified_at,
identity.as_bytes(),
) {
eprintln!("[cloud_files] placeholder {}: {}", e.name, err);
}
}
}
// Deleted: the server only sends IDs - we no longer know the path.
// MVP: ignore. Version 2 will keep a local mapping.
let _ = body.deleted;
if let Some(ts) = body.timestamp {
*since = Some(ts);
}
Ok(())
}
async fn upload_local_change(
client: &reqwest::Client,
cfg: &SyncLoopConfig,
path: &PathBuf,
) -> Result<(), String> {
if !path.is_file() {
return Ok(());
}
// Do NOT upload cfapi placeholders or files that are currently
// hydrating - otherwise every cloud file gets fully synced right away
// and the on-demand mode is gone.
#[cfg(windows)]
{
if super::windows::is_cfapi_placeholder(path) {
super::windows::log_msg(
&cfg.mount_point,
&format!("skip upload (placeholder): {}", path.display()),
);
return Ok(());
}
}
// Do not upload our own log file.
if path
.file_name()
.and_then(|n| n.to_str())
.map(|n| n.starts_with(".minicloud-"))
.unwrap_or(false)
{
return Ok(());
}
// Relative path inside the mount = target path on the server
let rel = path
.strip_prefix(&cfg.mount_point)
.map_err(|_| "path outside mount".to_string())?
.to_string_lossy()
.replace('\\', "/");
let bytes = std::fs::read(path).map_err(|e| e.to_string())?;
let base = cfg.server_url.trim_end_matches('/');
let url = format!("{}/api/files/upload", base);
let file_name = path
.file_name()
.and_then(|s| s.to_str())
.unwrap_or("unnamed")
.to_string();
let form = reqwest::multipart::Form::new()
.text("path", rel.clone())
.part(
"file",
reqwest::multipart::Part::bytes(bytes).file_name(file_name),
);
let resp = client
.post(&url)
.bearer_auth(&cfg.access_token)
.multipart(form)
.send()
.await
.map_err(|e| e.to_string())?;
if !resp.status().is_success() {
return Err(format!("HTTP {}", resp.status()));
}
Ok(())
}
fn build_relative_path(e: &RemoteEntry) -> PathBuf {
// Careful: RemoteEntry only carries parent_id, not the full path. For
// this simple case we just use the name. For nested folders the
// hierarchy would have to be preloaded via /api/sync/tree - that
// happens once on activation; delta updates usually arrive flat (or
// under a shared root).
PathBuf::from(&e.name)
}
fn urlencode(s: &str) -> String {
// Very minimalistic: only the problematic characters are replaced.
s.replace(' ', "%20").replace(':', "%3A").replace('+', "%2B")
}
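// A hypothetical regression test (not part of the original change) for
// the minimal urlencode above: it escapes exactly space, colon and plus,
// everything else passes through unchanged.
#[cfg(test)]
mod urlencode_tests {
    use super::urlencode;

    #[test]
    fn escapes_space_colon_plus() {
        assert_eq!(urlencode("2026-04-23 12:00+02"), "2026-04-23%2012%3A00%2B02");
        assert_eq!(urlencode("plain"), "plain");
    }
}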
@@ -0,0 +1,43 @@
//! Lightweight callback-based FS watcher for the cloud-files mode.
//!
//! Unlike `sync::watcher::FileWatcher` this one hands a closure directly
//! to notify, so no channel pumping is needed.
use notify::{Event, EventKind, RecommendedWatcher, RecursiveMode, Watcher, Config};
use std::path::{Path, PathBuf};
pub struct CallbackWatcher {
_watcher: RecommendedWatcher,
}
impl CallbackWatcher {
pub fn new<F>(watch_dir: &Path, mut on_change: F) -> Result<Self, String>
where
F: FnMut(PathBuf, EventKind) + Send + 'static,
{
let mut watcher = RecommendedWatcher::new(
move |res: Result<Event, notify::Error>| {
if let Ok(ev) = res {
for path in ev.paths {
let name = path.file_name().and_then(|n| n.to_str()).unwrap_or("");
if name.starts_with('.')
|| name.starts_with('~')
|| name.ends_with(".tmp")
{
continue;
}
on_change(path, ev.kind.clone());
}
}
},
Config::default(),
)
.map_err(|e| format!("Watcher-Fehler: {e}"))?;
watcher
.watch(watch_dir, RecursiveMode::Recursive)
.map_err(|e| format!("Watch-Fehler: {e}"))?;
Ok(Self { _watcher: watcher })
}
}
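// The skip filter inside the closure above can be pulled out and unit
// tested; a hypothetical sketch (the helper `is_ignored` does not exist
// in this change set, it just mirrors the inline conditions):
#[cfg(test)]
mod filter_tests {
    fn is_ignored(name: &str) -> bool {
        name.starts_with('.') || name.starts_with('~') || name.ends_with(".tmp")
    }

    #[test]
    fn skips_hidden_lock_and_temp_files() {
        assert!(is_ignored(".minicloud-cloudfiles.log"));
        assert!(is_ignored("~$report.docx"));
        assert!(is_ignored("upload.tmp"));
        assert!(!is_ignored("photo.jpg"));
    }
}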
@@ -0,0 +1,639 @@
//! Windows Cloud Files API integration.
//!
//! Registers the sync folder as a sync root, creates placeholder files
//! and serves file-data requests via HTTPS download. Explorer shows the
//! cloud/check-mark overlays automatically as long as the pin states
//! are set correctly.
//!
//! Requires Windows 10 1709+ (cfapi.dll). The account identifier should
//! be stable (e.g. Hash(server URL + username)).
#![cfg(windows)]
use super::RemoteEntry;
use once_cell::sync::Lazy;
use std::path::{Path, PathBuf};
use std::ptr;
use std::sync::{Arc, Mutex};
use widestring::U16CString;
use windows::core::PCWSTR;
use windows::Win32::Storage::CloudFilters as CF;
use windows::Win32::Storage::FileSystem::FILE_ATTRIBUTE_NORMAL;
use windows::Win32::System::Com::{CoInitializeEx, COINIT_MULTITHREADED};
#[derive(Default, Clone)]
pub struct CloudContext {
pub server_url: String,
pub access_token: String,
pub mount_point: PathBuf,
}
static CONTEXT: Lazy<Arc<Mutex<CloudContext>>> =
Lazy::new(|| Arc::new(Mutex::new(CloudContext::default())));
static CONNECTION_KEY: Lazy<Mutex<Option<CF::CF_CONNECTION_KEY>>> =
Lazy::new(|| Mutex::new(None));
pub fn set_context(server_url: String, access_token: String, mount_point: PathBuf) {
let mut ctx = CONTEXT.lock().unwrap();
ctx.server_url = server_url;
ctx.access_token = access_token;
ctx.mount_point = mount_point;
}
fn ctx_snapshot() -> CloudContext {
CONTEXT.lock().unwrap().clone()
}
const PROVIDER_VERSION: &str = "1.0";
// Windows FILETIME: 100 ns ticks since 1601-01-01. The Unix epoch is
// 11_644_473_600 seconds later.
fn unix_to_ft_ticks(unix_secs: i64) -> i64 {
(unix_secs + 11_644_473_600) * 10_000_000
}
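// A quick sanity check for the conversion above (hypothetical test
// module, not part of the original change set): the Unix epoch
// 1970-01-01 maps to 116444736000000000 FILETIME ticks.
#[cfg(test)]
mod filetime_tests {
    use super::unix_to_ft_ticks;

    #[test]
    fn unix_epoch_maps_to_known_tick_count() {
        // 11_644_473_600 s * 10_000_000 ticks/s
        assert_eq!(unix_to_ft_ticks(0), 116_444_736_000_000_000);
        // One second after the epoch adds exactly 10^7 ticks.
        assert_eq!(unix_to_ft_ticks(1), 116_444_736_010_000_000);
    }
}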
// ---------------------------------------------------------------------------
// Sync root registration
// ---------------------------------------------------------------------------
pub fn register_sync_root(
mount_point: &PathBuf,
provider_name: &str,
account_id: &str,
) -> Result<(), String> {
// Initialize COM (cfapi requires an MTA apartment)
unsafe {
let _ = CoInitializeEx(Some(ptr::null()), COINIT_MULTITHREADED);
}
std::fs::create_dir_all(mount_point).map_err(|e| format!("mkdir: {e}"))?;
let display = format!("Mini-Cloud - {}", account_id);
let path_wide = U16CString::from_str(mount_point.to_string_lossy().as_ref())
.map_err(|e| format!("path encode: {e}"))?;
let display_wide = U16CString::from_str(&display).map_err(|e| e.to_string())?;
let provider_wide = U16CString::from_str(provider_name).map_err(|e| e.to_string())?;
let version_wide = U16CString::from_str(PROVIDER_VERSION).map_err(|e| e.to_string())?;
let mut info = CF::CF_SYNC_REGISTRATION::default();
info.StructSize = std::mem::size_of::<CF::CF_SYNC_REGISTRATION>() as u32;
info.ProviderName = PCWSTR(provider_wide.as_ptr());
info.ProviderVersion = PCWSTR(version_wide.as_ptr());
// Stable GUID for "Mini-Cloud" (randomly generated once).
info.ProviderId = windows::core::GUID::from_u128(0x4D696E69_436C_6F75_6444_7566667944ab);
let mut policies = CF::CF_SYNC_POLICIES::default();
policies.StructSize = std::mem::size_of::<CF::CF_SYNC_POLICIES>() as u32;
policies.HardLink = CF::CF_HARDLINK_POLICY::default();
policies.Hydration = CF::CF_HYDRATION_POLICY::default();
policies.Population = CF::CF_POPULATION_POLICY::default();
policies.InSync = CF::CF_INSYNC_POLICY::default();
// Hydration PARTIAL = file content arrives on access via FETCH_DATA.
// Population FULL = the folder contents are fully pre-populated by us
// (populate_placeholders), so Windows does NOT have to call
// FETCH_PLACEHOLDERS, which we do not implement - otherwise opening
// the folder would time out.
policies.Hydration.Primary = CF::CF_HYDRATION_POLICY_PARTIAL;
policies.Population.Primary = CF::CF_POPULATION_POLICY_FULL;
// Keep the display name around in case we later move it into a struct
// of our own. windows-rs does not require anything further here.
let _ = display_wide;
// First tear down any existing registration. Otherwise UPDATE only
// applies part of the policies and stale PARTIAL population settings
// stay active -> Explorer timeout.
unsafe {
let _ = CF::CfUnregisterSyncRoot(PCWSTR(path_wide.as_ptr()));
}
log_msg(mount_point, &format!(
"register_sync_root path={} provider={} account={}",
mount_point.display(), provider_name, account_id
));
unsafe {
if let Err(e) = CF::CfRegisterSyncRoot(
PCWSTR(path_wide.as_ptr()),
&info,
&policies,
CF::CF_REGISTER_FLAG_NONE,
) {
log_err(mount_point, &format!("CfRegisterSyncRoot FAILED: {e:?}"));
// Fall back to the UPDATE flag
CF::CfRegisterSyncRoot(
PCWSTR(path_wide.as_ptr()),
&info,
&policies,
CF::CF_REGISTER_FLAG_UPDATE,
)
.map_err(|e| format!("CfRegisterSyncRoot(UPDATE): {e}"))?;
}
}
log_msg(mount_point, "CfRegisterSyncRoot OK");
connect_callbacks(mount_point)?;
log_msg(mount_point, "callbacks connected");
// Explorer sidebar entry with cloud icon
let icon = super::shell_integration::default_icon_source();
match super::shell_integration::install(provider_name, mount_point, &icon) {
Ok(()) => log_msg(mount_point, "shell integration installed"),
Err(e) => log_err(mount_point, &format!("shell integration FAILED: {e}")),
}
Ok(())
}
pub fn unregister_sync_root(mount_point: &PathBuf) -> Result<(), String> {
// Remove the shell entry first (never fails).
let _ = super::shell_integration::uninstall();
let _ = disconnect_callbacks();
let path_wide = U16CString::from_str(mount_point.to_string_lossy().as_ref())
.map_err(|e| e.to_string())?;
unsafe {
CF::CfUnregisterSyncRoot(PCWSTR(path_wide.as_ptr()))
.map_err(|e| format!("CfUnregisterSyncRoot: {e}"))?;
}
Ok(())
}
// ---------------------------------------------------------------------------
// Callback table
// ---------------------------------------------------------------------------
unsafe extern "system" fn on_fetch_data(
info: *const CF::CF_CALLBACK_INFO,
params: *const CF::CF_CALLBACK_PARAMETERS,
) {
let info = &*info;
let params = &*params;
let fetch = &params.Anonymous.FetchData;
// FileIdentity contains our Mini-Cloud file ID as UTF-8 bytes.
let identity = std::slice::from_raw_parts(
info.FileIdentity as *const u8,
info.FileIdentityLength as usize,
);
let file_id: i64 = std::str::from_utf8(identity)
.ok()
.and_then(|s| s.parse().ok())
.unwrap_or(0);
let offset: i64 = fetch.RequiredFileOffset;
let length: u64 = fetch.RequiredLength as u64;
let connection_key = info.ConnectionKey;
let transfer_key = info.TransferKey;
// HTTPS download on a separate thread (the callback must not block).
let ctx = ctx_snapshot();
std::thread::spawn(move || {
log_msg(&ctx.mount_point, &format!(
"FETCH_DATA file_id={file_id} offset={offset} len={length}"
));
match transfer_range(connection_key, transfer_key, file_id, offset, length, &ctx) {
Ok(()) => log_msg(&ctx.mount_point, &format!(
"fetch file_id={file_id} OK"
)),
Err(e) => {
log_err(&ctx.mount_point, &format!(
"fetch file_id={file_id} offset={offset} len={length} FAILED: {e}"
));
// Guaranteed failure completion so Windows does not run into a timeout.
let _ = complete_transfer(connection_key, transfer_key, None, offset, length);
}
}
});
}
pub fn log_msg(mount: &Path, msg: &str) {
use std::io::Write;
// Keep the log file NEXT TO the mount so it does not itself get
// treated as a placeholder.
let log = mount
.parent()
.map(|p| p.join(".minicloud-cloudfiles.log"))
.unwrap_or_else(|| PathBuf::from(".minicloud-cloudfiles.log"));
if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open(&log) {
let _ = writeln!(f, "[{}] {}", chrono::Utc::now().to_rfc3339(), msg);
}
}
fn log_err(mount: &Path, msg: &str) {
log_msg(mount, msg);
}
/// True if the file is a cfapi placeholder (not yet hydrated) or is
/// currently managed by the cloud filter. We must NOT trigger an
/// upload for such files, otherwise the sync loop instantly turns
/// every placeholder into a fully local file.
pub fn is_cfapi_placeholder(path: &Path) -> bool {
use windows::Win32::Storage::FileSystem::GetFileAttributesW;
let Ok(w) = U16CString::from_str(path.to_string_lossy().as_ref()) else {
return false;
};
let attrs = unsafe { GetFileAttributesW(PCWSTR(w.as_ptr())) };
if attrs == u32::MAX {
return false;
}
// FILE_ATTRIBUTE_OFFLINE (0x1000) or
// FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS (0x400000) or
// FILE_ATTRIBUTE_RECALL_ON_OPEN (0x40000)
(attrs & 0x0040_1000) != 0 || (attrs & 0x0004_0000) != 0
}
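// The attribute test above can be expressed as a pure helper for unit
// testing; a hypothetical sketch (`attrs_look_like_placeholder` is not
// in the original diff, the masks mirror FILE_ATTRIBUTE_OFFLINE,
// _RECALL_ON_DATA_ACCESS and _RECALL_ON_OPEN):
#[cfg(test)]
mod attr_tests {
    fn attrs_look_like_placeholder(attrs: u32) -> bool {
        (attrs & 0x0040_1000) != 0 || (attrs & 0x0004_0000) != 0
    }

    #[test]
    fn recognizes_cloud_attribute_bits() {
        assert!(attrs_look_like_placeholder(0x0000_1000)); // OFFLINE
        assert!(attrs_look_like_placeholder(0x0040_0000)); // RECALL_ON_DATA_ACCESS
        assert!(attrs_look_like_placeholder(0x0004_0000)); // RECALL_ON_OPEN
        assert!(!attrs_look_like_placeholder(0x0000_0020)); // ARCHIVE only
    }
}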
fn transfer_range(
connection_key: CF::CF_CONNECTION_KEY,
transfer_key: i64,
file_id: i64,
offset: i64,
length: u64,
ctx: &CloudContext,
) -> Result<(), String> {
if ctx.server_url.is_empty() || ctx.access_token.is_empty() {
return Err("CloudContext nicht gesetzt (Server/Token leer)".into());
}
let client = reqwest::blocking::Client::builder()
.timeout(std::time::Duration::from_secs(60))
.build()
.map_err(|e| format!("client: {e}"))?;
let url = format!(
"{}/api/files/{}/download",
ctx.server_url.trim_end_matches('/'),
file_id
);
let range = format!("bytes={}-{}", offset, offset as u64 + length - 1);
let resp = client
.get(&url)
.bearer_auth(&ctx.access_token)
.header("Range", &range)
.send()
.map_err(|e| format!("send: {e}"))?;
let status = resp.status();
if !status.is_success() && status.as_u16() != 206 {
return Err(format!("HTTP {}", status));
}
let bytes = resp.bytes().map_err(|e: reqwest::Error| e.to_string())?;
// If the server does not support Range and returns the full file,
// cut the requested region out of the body.
let slice: &[u8] = if status.as_u16() == 206 {
&bytes[..]
} else {
let start = offset as usize;
let end = (start + length as usize).min(bytes.len());
if start >= bytes.len() {
&[]
} else {
&bytes[start..end]
}
};
complete_transfer(connection_key, transfer_key, Some(slice), offset, slice.len() as u64)
}
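// The "server ignored Range" slicing above is easy to get wrong at the
// edges; a hypothetical extracted helper plus test (not part of the
// original change) documents the intended clamping behaviour:
#[cfg(test)]
mod range_tests {
    fn slice_range(body: &[u8], offset: usize, length: usize) -> &[u8] {
        let end = (offset + length).min(body.len());
        if offset >= body.len() { &[] } else { &body[offset..end] }
    }

    #[test]
    fn clamps_to_body_and_handles_out_of_range_offsets() {
        let body = b"abcdefgh";
        assert_eq!(slice_range(body, 2, 3), b"cde");
        assert_eq!(slice_range(body, 6, 10), b"gh"); // clamped at EOF
        assert_eq!(slice_range(body, 9, 4), b""); // offset past EOF
    }
}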
fn complete_transfer(
connection_key: CF::CF_CONNECTION_KEY,
transfer_key: i64,
data: Option<&[u8]>,
offset: i64,
length: u64,
) -> Result<(), String> {
let mut op_info = CF::CF_OPERATION_INFO::default();
op_info.StructSize = std::mem::size_of::<CF::CF_OPERATION_INFO>() as u32;
op_info.Type = CF::CF_OPERATION_TYPE_TRANSFER_DATA;
op_info.ConnectionKey = connection_key;
op_info.TransferKey = transfer_key;
let mut params = CF::CF_OPERATION_PARAMETERS::default();
params.ParamSize = std::mem::size_of::<CF::CF_OPERATION_PARAMETERS>() as u32;
unsafe {
let transfer = &mut params.Anonymous.TransferData;
if let Some(data) = data {
transfer.CompletionStatus = windows::Win32::Foundation::NTSTATUS(0); // STATUS_SUCCESS
transfer.Buffer = data.as_ptr() as _;
transfer.Offset = offset;
transfer.Length = length as i64;
} else {
transfer.CompletionStatus =
windows::Win32::Foundation::NTSTATUS(0xC0000001u32 as i32); // STATUS_UNSUCCESSFUL
}
CF::CfExecute(&op_info, &mut params).map_err(|e| format!("CfExecute: {e}"))?;
}
Ok(())
}
unsafe extern "system" fn on_fetch_placeholders(
info: *const CF::CF_CALLBACK_INFO,
_params: *const CF::CF_CALLBACK_PARAMETERS,
) {
// Safety net: we already populate via populate_placeholders, but if
// Windows calls anyway we return an empty answer.
let info = &*info;
let mut op_info = CF::CF_OPERATION_INFO::default();
op_info.StructSize = std::mem::size_of::<CF::CF_OPERATION_INFO>() as u32;
op_info.Type = CF::CF_OPERATION_TYPE_TRANSFER_PLACEHOLDERS;
op_info.ConnectionKey = info.ConnectionKey;
op_info.TransferKey = info.TransferKey;
let mut params = CF::CF_OPERATION_PARAMETERS::default();
params.ParamSize = std::mem::size_of::<CF::CF_OPERATION_PARAMETERS>() as u32;
let transfer = &mut params.Anonymous.TransferPlaceholders;
transfer.CompletionStatus = windows::Win32::Foundation::NTSTATUS(0);
transfer.PlaceholderTotalCount = 0;
transfer.PlaceholderArray = std::ptr::null_mut();
transfer.PlaceholderCount = 0;
transfer.EntriesProcessed = 0;
transfer.Flags = CF::CF_OPERATION_TRANSFER_PLACEHOLDERS_FLAG_DISABLE_ON_DEMAND_POPULATION;
let _ = CF::CfExecute(&op_info, &mut params);
}
fn connect_callbacks(mount_point: &Path) -> Result<(), String> {
let callbacks = [
CF::CF_CALLBACK_REGISTRATION {
Type: CF::CF_CALLBACK_TYPE_FETCH_DATA,
Callback: Some(on_fetch_data),
},
CF::CF_CALLBACK_REGISTRATION {
Type: CF::CF_CALLBACK_TYPE_FETCH_PLACEHOLDERS,
Callback: Some(on_fetch_placeholders),
},
// Sentinel: Type = CF_CALLBACK_TYPE_NONE terminates the table.
CF::CF_CALLBACK_REGISTRATION {
Type: CF::CF_CALLBACK_TYPE_NONE,
Callback: None,
},
];
let path_wide = U16CString::from_str(mount_point.to_string_lossy().as_ref())
.map_err(|e| e.to_string())?;
let key = unsafe {
CF::CfConnectSyncRoot(
PCWSTR(path_wide.as_ptr()),
callbacks.as_ptr(),
None,
CF::CF_CONNECT_FLAG_REQUIRE_PROCESS_INFO
| CF::CF_CONNECT_FLAG_REQUIRE_FULL_FILE_PATH,
)
.map_err(|e| format!("CfConnectSyncRoot: {e}"))?
};
*CONNECTION_KEY.lock().unwrap() = Some(key);
Ok(())
}
fn disconnect_callbacks() -> Result<(), String> {
if let Some(key) = CONNECTION_KEY.lock().unwrap().take() {
unsafe {
CF::CfDisconnectSyncRoot(key)
.map_err(|e| format!("CfDisconnectSyncRoot: {e}"))?;
}
}
Ok(())
}
// ---------------------------------------------------------------------------
// Placeholder creation
// ---------------------------------------------------------------------------
pub fn populate_placeholders(
mount_point: &PathBuf,
entries: &[RemoteEntry],
) -> Result<(), String> {
use std::collections::HashMap;
log_msg(mount_point, &format!(
"populate_placeholders: {} Eintraege", entries.len()
));
let by_id: HashMap<i64, &RemoteEntry> = entries.iter().map(|e| (e.id, e)).collect();
fn rel_path<'a>(
entry: &'a RemoteEntry,
by_id: &HashMap<i64, &'a RemoteEntry>,
) -> PathBuf {
let mut parts = vec![entry.name.as_str()];
let mut cur = entry.parent_id;
while let Some(id) = cur {
if let Some(p) = by_id.get(&id) {
parts.push(p.name.as_str());
cur = p.parent_id;
} else {
break;
}
}
parts.reverse();
parts.iter().collect()
}
// Create folders first
for e in entries.iter().filter(|e| e.is_folder) {
let p = mount_point.join(rel_path(e, &by_id));
std::fs::create_dir_all(&p).ok();
}
// Then files as placeholders. Existing "normal" files (e.g. left over
// from a previous CfUnregisterSyncRoot) are deleted first, because
// CfCreatePlaceholders would otherwise fail with ERROR_FILE_EXISTS
// and the file would never become a placeholder -> later it could not
// be dehydrated (0x80070178 "not a cloud file").
for e in entries.iter().filter(|e| !e.is_folder) {
let rel = rel_path(e, &by_id);
let full = mount_point.join(&rel);
let parent = rel
.parent()
.map(|p| mount_point.join(p))
.unwrap_or_else(|| mount_point.clone());
let identity = e.id.to_string();
if full.exists() && !is_cfapi_placeholder(&full) {
log_msg(mount_point, &format!(
"deleting non-placeholder {} to recreate",
full.display()
));
if let Err(err) = std::fs::remove_file(&full) {
log_err(mount_point, &format!(
"remove {} failed: {err}", full.display()
));
}
}
match create_placeholder(&parent, &e.name, e.size, &e.modified_at, identity.as_bytes()) {
Ok(()) => log_msg(mount_point, &format!("placeholder created: {}", full.display())),
Err(err) => log_err(mount_point, &format!(
"placeholder {} FAILED: {err}", full.display()
)),
}
}
Ok(())
}
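The inner `rel_path` helper above reconstructs a file's path by walking `parent_id` links up to the root. A minimal standalone model (the `Entry` struct is a hypothetical stand-in for `RemoteEntry` with just the fields the walk needs):

```rust
use std::collections::HashMap;
use std::path::PathBuf;

// Hypothetical stand-in for RemoteEntry.
struct Entry {
    id: i64,
    name: String,
    parent_id: Option<i64>,
}

fn rel_path(entry: &Entry, by_id: &HashMap<i64, &Entry>) -> PathBuf {
    let mut parts = vec![entry.name.as_str()];
    let mut cur = entry.parent_id;
    // Follow parent_id links toward the root, collecting names.
    while let Some(id) = cur {
        match by_id.get(&id) {
            Some(p) => {
                parts.push(p.name.as_str());
                cur = p.parent_id;
            }
            // Unknown parent id: treat this level as the root.
            None => break,
        }
    }
    parts.reverse();
    parts.iter().collect()
}

fn main() {
    let docs = Entry { id: 1, name: "docs".into(), parent_id: None };
    let readme = Entry { id: 2, name: "readme.md".into(), parent_id: Some(1) };
    let by_id: HashMap<i64, &Entry> = [(docs.id, &docs), (readme.id, &readme)].into();
    println!("{}", rel_path(&readme, &by_id).display());
}
```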
pub fn create_placeholder_at(
parent_dir: &Path,
name: &str,
size: i64,
modified_iso: &str,
file_identity: &[u8],
) -> Result<(), String> {
create_placeholder(parent_dir, name, size, modified_iso, file_identity)
}
fn create_placeholder(
parent_dir: &Path,
name: &str,
size: i64,
modified_iso: &str,
file_identity: &[u8],
) -> Result<(), String> {
let parent_wide = U16CString::from_str(parent_dir.to_string_lossy().as_ref())
.map_err(|e| e.to_string())?;
let name_wide = U16CString::from_str(name).map_err(|e| e.to_string())?;
let mtime_unix = chrono::DateTime::parse_from_rfc3339(modified_iso)
.map(|dt| dt.timestamp())
.unwrap_or(0);
let ft_ticks = unix_to_ft_ticks(mtime_unix);
let mut ph = CF::CF_PLACEHOLDER_CREATE_INFO::default();
ph.RelativeFileName = PCWSTR(name_wide.as_ptr());
ph.FsMetadata.FileSize = size;
ph.FsMetadata.BasicInfo.FileAttributes = FILE_ATTRIBUTE_NORMAL.0;
ph.FsMetadata.BasicInfo.LastWriteTime = ft_ticks;
ph.FsMetadata.BasicInfo.CreationTime = ft_ticks;
ph.FsMetadata.BasicInfo.ChangeTime = ft_ticks;
ph.FsMetadata.BasicInfo.LastAccessTime = ft_ticks;
ph.Flags = CF::CF_PLACEHOLDER_CREATE_FLAG_MARK_IN_SYNC;
ph.FileIdentity = file_identity.as_ptr() as _;
ph.FileIdentityLength = file_identity.len() as u32;
// In windows-rs 0.58, CfCreatePlaceholders takes a slice plus an
// Option<*mut u32> that reports how many entries were created.
let mut phs = [ph];
let mut count: u32 = 0;
unsafe {
CF::CfCreatePlaceholders(
PCWSTR(parent_wide.as_ptr()),
&mut phs,
CF::CF_CREATE_FLAG_NONE,
Some(&mut count as *mut u32),
)
.map_err(|e| format!("CfCreatePlaceholders: {e}"))?;
}
Ok(())
}
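`create_placeholder` relies on a `unix_to_ft_ticks` helper that is not shown in this diff. A sketch of what such a conversion presumably looks like: Windows `FILETIME` counts 100-nanosecond ticks since 1601-01-01, Unix time counts seconds since 1970-01-01, and the two epochs are 11,644,473,600 seconds apart.

```rust
// Assumed sketch of the unix_to_ft_ticks helper referenced above
// (not part of this diff). FILETIME = 100 ns ticks since 1601-01-01.
const EPOCH_DIFF_SECS: i64 = 11_644_473_600;
const TICKS_PER_SEC: i64 = 10_000_000;

fn unix_to_ft_ticks(unix_secs: i64) -> i64 {
    (unix_secs + EPOCH_DIFF_SECS) * TICKS_PER_SEC
}

fn main() {
    // The Unix epoch expressed in FILETIME ticks.
    println!("{}", unix_to_ft_ticks(0)); // prints 116444736000000000
}
```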
// ---------------------------------------------------------------------------
// Pin / Unpin (keep offline)
// ---------------------------------------------------------------------------
pub fn set_pin_state(file: &Path, pinned: bool) -> Result<(), String> {
use windows::Win32::Storage::FileSystem::{
CreateFileW, FILE_FLAG_BACKUP_SEMANTICS, FILE_FLAG_OPEN_REPARSE_POINT,
FILE_WRITE_ATTRIBUTES, FILE_READ_ATTRIBUTES,
FILE_SHARE_READ, FILE_SHARE_WRITE, FILE_SHARE_DELETE, OPEN_EXISTING,
};
let path_wide = U16CString::from_str(file.to_string_lossy().as_ref())
.map_err(|e| e.to_string())?;
// CfSetPinState / CfDehydratePlaceholder need WRITE_ATTRIBUTES.
// OPEN_REPARSE_POINT keeps the open itself from triggering a
// hydration (which would make unpinning pointless).
let handle = unsafe {
CreateFileW(
PCWSTR(path_wide.as_ptr()),
(FILE_READ_ATTRIBUTES | FILE_WRITE_ATTRIBUTES).0,
FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
None,
OPEN_EXISTING,
FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OPEN_REPARSE_POINT,
None,
)
}
.map_err(|e| format!("open: {e}"))?;
let state = if pinned {
CF::CF_PIN_STATE_PINNED
} else {
CF::CF_PIN_STATE_UNPINNED
};
let set_res = unsafe {
CF::CfSetPinState(handle, state, CF::CF_SET_PIN_FLAG_NONE, None)
};
// Hydrate on pin / dehydrate on unpin. CfSetPinState only flips the
// flag; without explicit hydrate/dehydrate calls neither the on-disk
// content nor the icon changes visibly.
let (hydrate_err, dehydrate_err) = if set_res.is_ok() {
if pinned {
let r = unsafe {
CF::CfHydratePlaceholder(
handle,
0,
-1,
CF::CF_HYDRATE_FLAG_NONE,
None,
)
};
(r.err().map(|e| format!("{:?}", e)), None)
} else {
let r = unsafe {
CF::CfDehydratePlaceholder(
handle,
0,
-1,
CF::CF_DEHYDRATE_FLAG_NONE,
None,
)
};
(None, r.err().map(|e| format!("{:?}", e)))
}
} else {
(None, None)
};
unsafe {
let _ = windows::Win32::Foundation::CloseHandle(handle);
}
// Refresh the Explorer icon overlay
notify_file_update(file);
// Log into the file's containing folder. Note: the previous
// ancestors().find(|p| p.parent().is_some()) always returned the file
// path itself (ancestors() yields the path first), so use parent().
let log_dir = file
.parent()
.map(|p| p.to_path_buf())
.unwrap_or_else(|| file.to_path_buf());
log_msg(
&log_dir,
&format!(
"set_pin_state file={} pinned={} result={:?} hydrate_err={:?} dehydrate_err={:?}",
file.display(),
pinned,
set_res,
hydrate_err,
dehydrate_err
),
);
set_res.map_err(|e| format!("CfSetPinState: {e}"))?;
Ok(())
}
/// Tells the shell "this file has changed" so the overlay icon
/// (cloud/checkmark) refreshes without the user having to press F5.
fn notify_file_update(file: &Path) {
use windows::Win32::UI::Shell::{SHChangeNotify, SHCNE_UPDATEITEM, SHCNF_PATHW};
let Ok(w) = U16CString::from_str(file.to_string_lossy().as_ref()) else {
return;
};
unsafe {
SHChangeNotify(
SHCNE_UPDATEITEM,
SHCNF_PATHW,
Some(w.as_ptr() as _),
None,
);
}
}
File diff suppressed because it is too large
@@ -0,0 +1,6 @@
// Prevents additional console window on Windows in release, DO NOT REMOVE!!
#![cfg_attr(not(debug_assertions), windows_subsystem = "windows")]
fn main() {
minicloud_sync_lib::run()
}
@@ -0,0 +1,277 @@
use reqwest::Client;
use serde::{Deserialize, Serialize};
use std::path::Path;
#[derive(Clone)]
pub struct MiniCloudApi {
client: Client,
pub server_url: String,
pub access_token: String,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct LoginResponse {
pub access_token: String,
pub user: UserInfo,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct UserInfo {
pub id: i64,
pub username: String,
pub role: String,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct FileEntry {
pub id: i64,
pub name: String,
pub is_folder: bool,
pub size: Option<i64>,
pub checksum: Option<String>,
pub updated_at: Option<String>,
pub children: Option<Vec<FileEntry>>,
pub locked: Option<bool>,
pub locked_by: Option<String>,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct SyncTreeResponse {
pub tree: Vec<FileEntry>,
}
#[derive(Debug, Serialize, Deserialize)]
#[allow(dead_code)]
pub struct SyncChangesResponse {
pub changes: Vec<FileEntry>,
pub server_time: String,
}
#[derive(Debug, Serialize, Deserialize)]
#[allow(dead_code)]
pub struct LockResponse {
pub locked: Option<bool>,
pub locked_by: Option<String>,
pub error: Option<String>,
}
impl MiniCloudApi {
pub fn new(server_url: &str) -> Self {
Self {
client: Client::builder()
.danger_accept_invalid_certs(false)
.build()
.unwrap(),
server_url: server_url.trim_end_matches('/').to_string(),
access_token: String::new(),
}
}
fn auth_header(&self) -> String {
format!("Bearer {}", self.access_token)
}
pub async fn login(&mut self, username: &str, password: &str) -> Result<LoginResponse, String> {
let url = format!("{}/api/auth/login", self.server_url);
let body = serde_json::json!({
"username": username,
"password": password,
});
let resp = self.client.post(&url)
.json(&body)
.send()
.await
.map_err(|e| format!("Verbindungsfehler: {}", e))?;
if !resp.status().is_success() {
let text = resp.text().await.unwrap_or_default();
return Err(format!("Login fehlgeschlagen: {}", text));
}
let data: LoginResponse = resp.json().await
.map_err(|e| format!("Antwort-Fehler: {}", e))?;
self.access_token = data.access_token.clone();
Ok(data)
}
pub async fn refresh_token(&mut self) -> Result<String, String> {
let url = format!("{}/api/auth/refresh", self.server_url);
let resp = self.client.post(&url)
.header("Authorization", self.auth_header())
.send()
.await
.map_err(|e| format!("Refresh fehlgeschlagen: {}", e))?;
if !resp.status().is_success() {
return Err("Token-Refresh fehlgeschlagen".to_string());
}
let data: serde_json::Value = resp.json().await.map_err(|e| e.to_string())?;
if let Some(token) = data.get("access_token").and_then(|t| t.as_str()) {
self.access_token = token.to_string();
Ok(token.to_string())
} else {
Err("Kein Token in Antwort".to_string())
}
}
pub async fn get_sync_tree(&self) -> Result<Vec<FileEntry>, String> {
let url = format!("{}/api/sync/tree", self.server_url);
let resp = self.client.get(&url)
.header("Authorization", self.auth_header())
.send()
.await
.map_err(|e| format!("Sync-Tree Fehler: {}", e))?;
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
return Err(format!("Sync-Tree HTTP {}: {}", status, text));
}
let data: SyncTreeResponse = resp.json().await
.map_err(|e| format!("Sync-Tree Parse-Fehler: {}", e))?;
Ok(data.tree)
}
#[allow(dead_code)]
pub async fn get_changes(&self, since: &str) -> Result<SyncChangesResponse, String> {
let url = format!("{}/api/sync/changes?since={}", self.server_url, since);
let resp = self.client.get(&url)
.header("Authorization", self.auth_header())
.send()
.await
.map_err(|e| format!("Changes Fehler: {}", e))?;
resp.json().await.map_err(|e| format!("Parse-Fehler: {}", e))
}
pub async fn download_file(&self, file_id: i64, dest: &Path) -> Result<(), String> {
let url = format!("{}/api/files/{}/download?token={}",
self.server_url, file_id, self.access_token);
let resp = self.client.get(&url)
.send()
.await
.map_err(|e| format!("Download Fehler: {}", e))?;
if !resp.status().is_success() {
return Err(format!("Download fehlgeschlagen: {}", resp.status()));
}
let bytes = resp.bytes().await.map_err(|e| e.to_string())?;
if let Some(parent) = dest.parent() {
std::fs::create_dir_all(parent).map_err(|e| e.to_string())?;
}
std::fs::write(dest, &bytes).map_err(|e| format!("Schreiben fehlgeschlagen: {}", e))
}
pub async fn upload_file(&self, file_path: &Path, parent_id: Option<i64>) -> Result<FileEntry, String> {
let url = format!("{}/api/files/upload", self.server_url);
let file_name = file_path.file_name()
.and_then(|n| n.to_str())
.unwrap_or("file")
.to_string();
let file_bytes = std::fs::read(file_path)
.map_err(|e| format!("Datei lesen fehlgeschlagen: {}", e))?;
let mut form = reqwest::multipart::Form::new()
.part("file", reqwest::multipart::Part::bytes(file_bytes).file_name(file_name));
if let Some(pid) = parent_id {
form = form.text("parent_id", pid.to_string());
}
let resp = self.client.post(&url)
.header("Authorization", self.auth_header())
.multipart(form)
.send()
.await
.map_err(|e| format!("Upload Fehler: {}", e))?;
if !resp.status().is_success() {
let text = resp.text().await.unwrap_or_default();
return Err(format!("Upload fehlgeschlagen: {}", text));
}
resp.json().await.map_err(|e| format!("Parse-Fehler: {}", e))
}
pub async fn create_folder(&self, name: &str, parent_id: Option<i64>) -> Result<FileEntry, String> {
let url = format!("{}/api/files/folder", self.server_url);
let body = serde_json::json!({
"name": name,
"parent_id": parent_id,
});
let resp = self.client.post(&url)
.header("Authorization", self.auth_header())
.json(&body)
.send()
.await
.map_err(|e| format!("Create-Folder Verbindungsfehler: {}", e))?;
if !resp.status().is_success() {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
return Err(format!("Create-Folder fehlgeschlagen ({}): {}", status, text));
}
resp.json().await.map_err(|e| format!("Create-Folder Parse-Fehler: {}", e))
}
pub async fn lock_file(&self, file_id: i64, client_info: &str) -> Result<(), String> {
let url = format!("{}/api/files/{}/lock", self.server_url, file_id);
let body = serde_json::json!({ "client_info": client_info });
let resp = self.client.post(&url)
.header("Authorization", self.auth_header())
.json(&body)
.send()
.await
.map_err(|e| e.to_string())?;
if resp.status().as_u16() == 423 {
let data: serde_json::Value = resp.json().await.map_err(|e| e.to_string())?;
let by = data.get("locked_by").and_then(|v| v.as_str()).unwrap_or("?");
return Err(format!("Datei gesperrt von {}", by));
}
if !resp.status().is_success() {
return Err("Lock fehlgeschlagen".to_string());
}
Ok(())
}
pub async fn unlock_file(&self, file_id: i64) -> Result<(), String> {
let url = format!("{}/api/files/{}/unlock", self.server_url, file_id);
self.client.post(&url)
.header("Authorization", self.auth_header())
.send()
.await
.map_err(|e| e.to_string())?;
Ok(())
}
pub async fn delete_file(&self, file_id: i64) -> Result<(), String> {
let url = format!("{}/api/files/{}", self.server_url, file_id);
let resp = self.client.delete(&url)
.header("Authorization", self.auth_header())
.send()
.await
.map_err(|e| format!("Delete Fehler: {}", e))?;
if !resp.status().is_success() {
let text = resp.text().await.unwrap_or_default();
return Err(format!("Delete fehlgeschlagen: {}", text));
}
Ok(())
}
pub async fn heartbeat(&self, file_id: i64) -> Result<(), String> {
let url = format!("{}/api/files/{}/heartbeat", self.server_url, file_id);
self.client.post(&url)
.header("Authorization", self.auth_header())
.send()
.await
.map_err(|e| e.to_string())?;
Ok(())
}
}
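Every endpoint above is built as `format!("{}/api/...", self.server_url)`, which only works because `new()` trims trailing slashes from the server URL once up front. A pure-function sketch of that normalization (`endpoint` is a hypothetical helper, not part of the API above):

```rust
// Sketch of the URL normalization MiniCloudApi::new performs:
// trailing slashes are trimmed so "{server_url}/api/..." never
// produces a double slash.
fn endpoint(server_url: &str, path: &str) -> String {
    format!("{}{}", server_url.trim_end_matches('/'), path)
}

fn main() {
    // trim_end_matches removes all trailing occurrences, not just one.
    println!("{}", endpoint("https://cloud.example///", "/api/auth/login"));
    // prints https://cloud.example/api/auth/login
}
```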
@@ -0,0 +1,82 @@
use crate::sync::engine::SyncPath;
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct AppConfig {
pub server_url: String,
pub username: String,
#[serde(default)]
pub password_b64: String, // base64 encoded (obfuscation only, not encryption)
pub sync_paths: Vec<SyncPath>,
#[serde(default)]
pub auto_start: bool,
#[serde(default)]
pub start_minimized: bool,
/// Persisted mount point of the Cloud Files integration.
/// Empty = not active. Re-activated on app start.
#[serde(default)]
pub cloud_files_mount: String,
}
impl AppConfig {
/// Get the config directory
fn config_dir() -> PathBuf {
// Windows: %APPDATA%/MiniCloud Sync
// Linux: ~/.config/MiniCloud Sync
// Mac: ~/Library/Application Support/MiniCloud Sync
let base = dirs::config_dir()
.or_else(|| dirs::home_dir().map(|h| h.join(".config")))
.unwrap_or_else(|| PathBuf::from("."));
let dir = base.join("MiniCloud Sync");
std::fs::create_dir_all(&dir).ok();
dir
}
fn config_path() -> PathBuf {
Self::config_dir().join("config.json")
}
pub fn load() -> Self {
let path = Self::config_path();
eprintln!("[Config] Loading from: {}", path.display());
if path.exists() {
match std::fs::read_to_string(&path) {
Ok(content) => {
match serde_json::from_str(&content) {
Ok(config) => {
eprintln!("[Config] Loaded OK");
return config;
}
Err(e) => eprintln!("[Config] Parse error: {}", e),
}
}
Err(e) => eprintln!("[Config] Read error: {}", e),
}
} else {
eprintln!("[Config] No config file found");
}
Self::default()
}
pub fn save(&self) -> Result<(), String> {
let path = Self::config_path();
eprintln!("[Config] Saving to: {}", path.display());
let json = serde_json::to_string_pretty(self).map_err(|e| e.to_string())?;
std::fs::write(&path, &json).map_err(|e| format!("Config save: {}", e))?;
eprintln!("[Config] Saved OK");
Ok(())
}
pub fn save_password(&mut self, password: &str) {
use base64::Engine;
self.password_b64 = base64::engine::general_purpose::STANDARD.encode(password.as_bytes());
}
pub fn get_password(&self) -> Option<String> {
if self.password_b64.is_empty() { return None; }
use base64::Engine;
let bytes = base64::engine::general_purpose::STANDARD.decode(&self.password_b64).ok()?;
String::from_utf8(bytes).ok()
}
}
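`config_dir()` resolves its location through a fallback chain via the `dirs` crate. A pure-std model of that chain (the `pick_config_dir` function is a simplified assumption, taking the platform lookups as parameters):

```rust
use std::path::PathBuf;

// Model of AppConfig::config_dir's fallback chain: platform config
// dir, else ~/.config, else the current directory.
fn pick_config_dir(config_dir: Option<PathBuf>, home_dir: Option<PathBuf>) -> PathBuf {
    config_dir
        .or_else(|| home_dir.map(|h| h.join(".config")))
        .unwrap_or_else(|| PathBuf::from("."))
        .join("MiniCloud Sync")
}

fn main() {
    // With no platform config dir, fall back to <home>/.config.
    println!("{}", pick_config_dir(None, Some(PathBuf::from("/home/u"))).display());
}
```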
@@ -0,0 +1,487 @@
use crate::sync::api::{FileEntry, MiniCloudApi};
use crate::sync::journal::{Journal, JournalEntry};
use sha2::{Digest, Sha256};
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf};
use std::sync::Arc;
/// A configured sync path: maps a server folder to a local folder.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SyncPath {
pub id: String,
pub server_path: String,
pub server_folder_id: Option<i64>,
pub local_dir: String,
pub mode: SyncMode,
pub enabled: bool,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum SyncMode {
Virtual,
Full,
}
/// `.cloud` placeholder content (JSON payload of the 0-byte-ish placeholder).
#[derive(Debug, Serialize, Deserialize)]
struct CloudPlaceholder {
id: i64,
name: String,
size: i64,
checksum: String,
updated_at: String,
server_path: String,
}
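Serialized, a `foo.txt.cloud` placeholder written from this struct is a small JSON document along these lines (all values illustrative):

```json
{
  "id": 42,
  "name": "report.pdf",
  "size": 1048576,
  "checksum": "deadbeef",
  "updated_at": "2026-04-23T21:00:00Z",
  "server_path": "docs/report.pdf"
}
```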
pub struct SyncEngine {
pub api: MiniCloudApi,
pub sync_paths: Vec<SyncPath>,
pub journal: Arc<Journal>,
pub username: String,
}
impl SyncEngine {
pub fn new(api: MiniCloudApi, journal: Arc<Journal>, username: String) -> Self {
Self { api, sync_paths: Vec::new(), journal, username }
}
/// Sync every configured path.
pub async fn sync_all(&mut self) -> Result<Vec<String>, String> {
let mut log = Vec::new();
let tree = self.api.get_sync_tree().await?;
let sync_paths = self.sync_paths.clone();
for sp in &sync_paths {
if !sp.enabled { continue; }
let local_dir = PathBuf::from(&sp.local_dir);
std::fs::create_dir_all(&local_dir).ok();
let subtree = match sp.server_folder_id {
Some(id) => find_subtree(&tree, id).unwrap_or_default(),
None => tree.clone(),
};
// Phase 1: propagate deletions based on journal history.
self.detect_deletions(sp, &subtree, &local_dir, &mut log).await;
// Phase 2: normal sync (downloads, uploads, conflicts).
self.sync_dir(&subtree, &local_dir, "", sp.server_folder_id, sp, &mut log).await;
}
Ok(log)
}
/// Walks the journal for this sync path and reconciles existence:
/// - file was in journal and is gone locally but still on server -> delete on server
/// - file was in journal and is gone on server but still local -> delete locally
/// - file is gone on both sides -> clean journal entry
async fn detect_deletions(
&self,
sp: &SyncPath,
subtree: &[FileEntry],
local_root: &Path,
log: &mut Vec<String>,
) {
use std::collections::HashMap;
let mut server_files: HashMap<String, i64> = HashMap::new();
collect_server_files(subtree, "", &mut server_files);
for je in self.journal.list_for_sync(&sp.id) {
let local_real = local_root.join(&je.relative_path);
let local_cloud = {
let parent = local_real.parent().map(|p| p.to_path_buf());
let fname = local_real.file_name().map(|n| n.to_string_lossy().to_string());
match (parent, fname) {
(Some(p), Some(n)) => p.join(format!("{}.cloud", n)),
_ => PathBuf::new(),
}
};
let local_exists = local_real.exists() || local_cloud.exists();
let server_id = server_files.get(&je.relative_path).copied();
match (local_exists, server_id) {
(true, Some(_)) => { /* present on both sides - normal sync handles it */ }
(false, None) => {
let _ = self.journal.delete(&sp.id, &je.relative_path);
}
(false, Some(id)) => {
match self.api.delete_file(id).await {
Ok(_) => {
log.push(format!("Server-Papierkorb: {}", je.relative_path));
let _ = self.journal.delete(&sp.id, &je.relative_path);
}
Err(e) => log.push(format!("Server-Delete-Fehler {}: {}", je.relative_path, e)),
}
}
(true, None) => {
std::fs::remove_file(&local_real).ok();
std::fs::remove_file(&local_cloud).ok();
let _ = self.journal.delete(&sp.id, &je.relative_path);
log.push(format!("Lokal geloescht: {}", je.relative_path));
}
}
}
}
/// Recursively sync a single directory level.
/// `rel_prefix` is the journal-relative path prefix (e.g. "", or "sub/dir/").
async fn sync_dir(
&mut self,
server_entries: &[FileEntry],
local_dir: &Path,
rel_prefix: &str,
parent_id: Option<i64>,
sp: &SyncPath,
log: &mut Vec<String>,
) {
use std::collections::HashMap;
let server_by_name: HashMap<String, &FileEntry> = server_entries
.iter().map(|e| (e.name.clone(), e)).collect();
// --- Pass 1: iterate server entries, reconcile each against local/journal ---
for entry in server_entries {
let rel = if rel_prefix.is_empty() {
entry.name.clone()
} else {
format!("{}/{}", rel_prefix, entry.name)
};
if entry.is_folder {
let sub_local = local_dir.join(&entry.name);
std::fs::create_dir_all(&sub_local).ok();
if let Some(children) = &entry.children {
Box::pin(self.sync_dir(children, &sub_local, &rel, Some(entry.id), sp, log)).await;
}
continue;
}
self.reconcile_file(entry, local_dir, &rel, parent_id, sp, log).await;
}
// --- Pass 2: iterate local entries, upload new local files/folders ---
let dir_iter = match std::fs::read_dir(local_dir) {
Ok(d) => d,
Err(_) => return,
};
for e in dir_iter.flatten() {
let name = e.file_name().to_string_lossy().to_string();
if should_skip_name(&name) { continue; }
let path = e.path();
let is_dir = path.is_dir();
// `.cloud` placeholders are stored locally under "foo.txt.cloud"
// but represent the server-side "foo.txt". Use strip_suffix (not
// trim_end_matches) so "a.cloud.cloud" loses only one suffix.
let real_name = name.strip_suffix(".cloud").unwrap_or(name.as_str()).to_string();
let is_placeholder = name.ends_with(".cloud") && !is_dir;
// Already covered by server pass?
if server_by_name.contains_key(&real_name) { continue; }
if is_placeholder { continue; } // orphan placeholder - handled below
let rel = if rel_prefix.is_empty() {
real_name.clone()
} else {
format!("{}/{}", rel_prefix, real_name)
};
if is_dir {
match self.api.create_folder(&real_name, parent_id).await {
Ok(folder) => {
log.push(format!("Ordner erstellt: {}", rel));
self.upload_local_tree(&path, Some(folder.id), &rel, sp, log).await;
}
Err(e) => log.push(format!("Ordner-Fehler {}: {}", rel, e)),
}
} else {
match self.api.upload_file(&path, parent_id).await {
Ok(fe) => {
log.push(format!("Hochgeladen: {}", rel));
let checksum = fe.checksum.unwrap_or_default();
let size = fe.size.unwrap_or(0);
let _ = self.journal.upsert(&JournalEntry {
sync_path_id: sp.id.clone(),
relative_path: rel.clone(),
file_id: Some(fe.id),
synced_checksum: checksum,
synced_size: size,
synced_mtime: fe.updated_at.unwrap_or_default(),
local_state: "offline".to_string(),
});
}
Err(e) => log.push(format!("Upload-Fehler {}: {}", rel, e)),
}
}
}
// --- Pass 3: clean up orphan .cloud placeholders for files gone from server ---
if let Ok(dir_iter) = std::fs::read_dir(local_dir) {
for e in dir_iter.flatten() {
let name = e.file_name().to_string_lossy().to_string();
if !name.ends_with(".cloud") || e.path().is_dir() { continue; }
let real_name = name.strip_suffix(".cloud").unwrap_or(name.as_str());
if server_by_name.contains_key(real_name) { continue; }
std::fs::remove_file(e.path()).ok();
let rel = if rel_prefix.is_empty() {
real_name.to_string()
} else {
format!("{}/{}", rel_prefix, real_name)
};
let _ = self.journal.delete(&sp.id, &rel);
log.push(format!("Entfernt (Server): {}", name));
}
}
}
/// Core 3-way reconciliation for a single server file.
async fn reconcile_file(
&self,
entry: &FileEntry,
local_dir: &Path,
rel: &str,
parent_id: Option<i64>,
sp: &SyncPath,
log: &mut Vec<String>,
) {
let real_path = local_dir.join(&entry.name);
let cloud_path = local_dir.join(format!("{}.cloud", entry.name));
let journal_entry = self.journal.get(&sp.id, rel);
let server_hash = entry.checksum.clone().unwrap_or_default();
let server_size = entry.size.unwrap_or(0);
let server_mtime = entry.updated_at.clone().unwrap_or_default();
// Case A: real file exists locally = offline state
if real_path.exists() && !real_path.is_dir() {
// Avoid race: if placeholder still around, remove it
if cloud_path.exists() { std::fs::remove_file(&cloud_path).ok(); }
let local_hash = compute_file_hash(&real_path);
if local_hash == server_hash {
// In sync - just (re)record journal
self.journal_offline(sp, rel, entry, &server_hash, server_size, &server_mtime);
return;
}
// Hashes differ. Locked by someone else? Hold back.
if entry.locked.unwrap_or(false) {
let by = entry.locked_by.clone().unwrap_or_default();
if by != self.username {
log.push(format!("Zurueckgehalten (gesperrt von {}): {}", by, rel));
return;
}
}
let (local_changed, server_changed) = match &journal_entry {
Some(j) => (local_hash != j.synced_checksum, server_hash != j.synced_checksum),
None => {
// No journal history: this is the first time we're tracking
// this file. Treat the server as authoritative (Nextcloud
// does the same on first sync) so edits made on the web
// GUI or other clients propagate down cleanly.
(false, true)
}
};
if local_changed && !server_changed {
// Upload
match self.api.upload_file(&real_path, parent_id).await {
Ok(fe) => {
log.push(format!("Lokal->Server: {}", rel));
let new_hash = fe.checksum.unwrap_or(local_hash.clone());
self.journal_offline(sp, rel, entry, &new_hash,
fe.size.unwrap_or(server_size),
&fe.updated_at.unwrap_or(server_mtime.clone()));
}
Err(e) => log.push(format!("Upload-Fehler {}: {}", rel, e)),
}
} else if server_changed && !local_changed {
// Download
match self.api.download_file(entry.id, &real_path).await {
Ok(_) => {
log.push(format!("Server->Lokal: {}", rel));
self.journal_offline(sp, rel, entry, &server_hash, server_size, &server_mtime);
}
Err(e) => log.push(format!("Download-Fehler {}: {}", rel, e)),
}
} else {
// Both sides changed since the last sync -> conflict copy
// (the no-journal case was already routed to Download above)
let conflict_path = make_conflict_path(&real_path, &self.username);
std::fs::rename(&real_path, &conflict_path).ok();
match self.api.download_file(entry.id, &real_path).await {
Ok(_) => {
log.push(format!("KONFLIKT: {} (lokal: {})", rel,
conflict_path.file_name().unwrap().to_string_lossy()));
self.journal_offline(sp, rel, entry, &server_hash, server_size, &server_mtime);
}
Err(e) => {
// Restore original
std::fs::rename(&conflict_path, &real_path).ok();
log.push(format!("Download-Fehler {}: {}", rel, e));
}
}
}
return;
}
// Case B: local has a .cloud placeholder (or neither) = virtual state
// Virtual placeholders never have local edits, just keep them fresh.
let needs_write = match std::fs::read_to_string(&cloud_path) {
Ok(content) => match serde_json::from_str::<CloudPlaceholder>(&content) {
Ok(old) => old.checksum != server_hash || old.id != entry.id,
Err(_) => true,
},
Err(_) => true,
};
if needs_write {
let placeholder = CloudPlaceholder {
id: entry.id,
name: entry.name.clone(),
size: server_size,
checksum: server_hash.clone(),
updated_at: server_mtime.clone(),
server_path: rel.to_string(),
};
if let Ok(json) = serde_json::to_string_pretty(&placeholder) {
if !cloud_path.exists() {
log.push(format!("Platzhalter: {}.cloud", entry.name));
} else {
log.push(format!("Platzhalter aktualisiert: {}.cloud", entry.name));
}
std::fs::write(&cloud_path, json).ok();
}
}
self.journal.upsert(&JournalEntry {
sync_path_id: sp.id.clone(),
relative_path: rel.to_string(),
file_id: Some(entry.id),
synced_checksum: server_hash,
synced_size: server_size,
synced_mtime: server_mtime,
local_state: "virtual".to_string(),
}).ok();
// If Full mode and no real file yet, download now
if sp.mode == SyncMode::Full && !real_path.exists() {
if let Err(e) = self.api.download_file(entry.id, &real_path).await {
log.push(format!("Full-Download-Fehler {}: {}", rel, e));
} else {
std::fs::remove_file(&cloud_path).ok();
log.push(format!("Heruntergeladen: {}", rel));
// Update journal to offline
if let Some(mut j) = self.journal.get(&sp.id, rel) {
j.local_state = "offline".to_string();
let _ = self.journal.upsert(&j);
}
}
}
}
fn journal_offline(
&self, sp: &SyncPath, rel: &str, entry: &FileEntry,
hash: &str, size: i64, mtime: &str,
) {
let _ = self.journal.upsert(&JournalEntry {
sync_path_id: sp.id.clone(),
relative_path: rel.to_string(),
file_id: Some(entry.id),
synced_checksum: hash.to_string(),
synced_size: size,
synced_mtime: mtime.to_string(),
local_state: "offline".to_string(),
});
}
/// Walk a freshly-created local tree and upload every file (used after
/// creating a new folder on the server).
async fn upload_local_tree(
&self, dir: &Path, parent_id: Option<i64>, rel_prefix: &str,
sp: &SyncPath, log: &mut Vec<String>,
) {
let iter = match std::fs::read_dir(dir) { Ok(d) => d, Err(_) => return };
for e in iter.flatten() {
let name = e.file_name().to_string_lossy().to_string();
if should_skip_name(&name) { continue; }
let path = e.path();
let rel = format!("{}/{}", rel_prefix, name);
if path.is_dir() {
match self.api.create_folder(&name, parent_id).await {
Ok(folder) => {
log.push(format!("Ordner erstellt: {}", rel));
Box::pin(self.upload_local_tree(&path, Some(folder.id), &rel, sp, log)).await;
}
Err(e) => log.push(format!("Ordner-Fehler {}: {}", rel, e)),
}
} else {
match self.api.upload_file(&path, parent_id).await {
Ok(fe) => {
log.push(format!("Hochgeladen: {}", rel));
self.journal_offline(sp, &rel, &fe,
&fe.checksum.clone().unwrap_or_default(),
fe.size.unwrap_or(0),
&fe.updated_at.clone().unwrap_or_default());
}
Err(e) => log.push(format!("Upload-Fehler {}: {}", rel, e)),
}
}
}
}
}
fn should_skip_name(name: &str) -> bool {
name.starts_with('.') || name.starts_with('~') || name.ends_with(".tmp")
}
fn make_conflict_path(original: &Path, username: &str) -> PathBuf {
let stem = original.file_stem().map(|s| s.to_string_lossy().to_string()).unwrap_or_default();
let ext = original.extension().map(|e| e.to_string_lossy().to_string());
let ts = chrono::Local::now().format("%Y-%m-%d %H%M%S").to_string();
let name = match ext {
Some(e) if !e.is_empty() => format!("{} (Konflikt {} {}).{}", stem, username, ts, e),
_ => format!("{} (Konflikt {} {})", stem, username, ts),
};
original.parent().map(|p| p.join(&name)).unwrap_or_else(|| PathBuf::from(&name))
}
fn collect_server_files(
entries: &[FileEntry],
prefix: &str,
out: &mut std::collections::HashMap<String, i64>,
) {
for e in entries {
let rel = if prefix.is_empty() {
e.name.clone()
} else {
format!("{}/{}", prefix, e.name)
};
if e.is_folder {
if let Some(children) = &e.children {
collect_server_files(children, &rel, out);
}
} else {
out.insert(rel, e.id);
}
}
}
fn find_subtree(tree: &[FileEntry], folder_id: i64) -> Option<Vec<FileEntry>> {
for entry in tree {
if entry.id == folder_id { return entry.children.clone(); }
if let Some(children) = &entry.children {
if let Some(r) = find_subtree(children, folder_id) { return Some(r); }
}
}
None
}
pub fn compute_file_hash(path: &Path) -> String {
let data = match std::fs::read(path) {
Ok(d) => d,
Err(_) => return String::new(),
};
let mut hasher = Sha256::new();
hasher.update(&data);
format!("{:x}", hasher.finalize())
}
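The branch logic in `reconcile_file` (Case A, hashes differ) reduces to a pure decision over three checksums: local, server, and the last-synced journal value. A sketch with hypothetical names, mirroring the match above:

```rust
#[derive(Debug, PartialEq)]
enum Action {
    InSync,
    Upload,
    Download,
    Conflict,
}

// local/server/journal are content checksums; journal is None when the
// file has never been synced before.
fn decide(local: &str, server: &str, journal: Option<&str>) -> Action {
    if local == server {
        return Action::InSync;
    }
    let (local_changed, server_changed) = match journal {
        Some(j) => (local != j, server != j),
        // No history: treat the server as authoritative on first sync,
        // so edits made on the web GUI or other clients propagate down.
        None => (false, true),
    };
    match (local_changed, server_changed) {
        (true, false) => Action::Upload,
        (false, true) => Action::Download,
        // Both sides diverged from the last synced state.
        _ => Action::Conflict,
    }
}

fn main() {
    println!("{:?}", decide("h2", "h1", Some("h1"))); // prints Upload
}
```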
@@ -0,0 +1,120 @@
use rusqlite::{params, Connection};
use std::path::PathBuf;
use std::sync::Mutex;
/// One row of the sync journal. Represents the "last known synced state"
/// for a single file within a sync path. The server and local checksum
/// matched this value at the last successful sync.
#[derive(Debug, Clone)]
pub struct JournalEntry {
pub sync_path_id: String,
pub relative_path: String,
pub file_id: Option<i64>,
pub synced_checksum: String,
pub synced_size: i64,
pub synced_mtime: String,
pub local_state: String, // "virtual" or "offline"
}
pub struct Journal {
conn: Mutex<Connection>,
}
impl Journal {
pub fn open() -> Result<Self, String> {
let dir = dirs::config_dir()
.or_else(|| dirs::home_dir().map(|h| h.join(".config")))
.unwrap_or_else(|| PathBuf::from("."))
.join("MiniCloud Sync");
std::fs::create_dir_all(&dir).ok();
let path = dir.join("journal.db");
let conn = Connection::open(&path).map_err(|e| format!("Journal open: {}", e))?;
conn.execute_batch(
r#"
CREATE TABLE IF NOT EXISTS sync_journal (
sync_path_id TEXT NOT NULL,
relative_path TEXT NOT NULL,
file_id INTEGER,
synced_checksum TEXT NOT NULL DEFAULT '',
synced_size INTEGER NOT NULL DEFAULT 0,
synced_mtime TEXT NOT NULL DEFAULT '',
local_state TEXT NOT NULL DEFAULT 'virtual',
PRIMARY KEY (sync_path_id, relative_path)
);
"#,
).map_err(|e| format!("Journal schema: {}", e))?;
Ok(Self { conn: Mutex::new(conn) })
}
pub fn get(&self, sync_path_id: &str, rel: &str) -> Option<JournalEntry> {
let conn = self.conn.lock().unwrap();
conn.query_row(
"SELECT file_id, synced_checksum, synced_size, synced_mtime, local_state
FROM sync_journal WHERE sync_path_id = ?1 AND relative_path = ?2",
params![sync_path_id, rel],
|row| Ok(JournalEntry {
sync_path_id: sync_path_id.to_string(),
relative_path: rel.to_string(),
file_id: row.get(0)?,
synced_checksum: row.get(1)?,
synced_size: row.get(2)?,
synced_mtime: row.get(3)?,
local_state: row.get(4)?,
}),
).ok()
}
pub fn upsert(&self, e: &JournalEntry) -> Result<(), String> {
let conn = self.conn.lock().unwrap();
conn.execute(
"INSERT INTO sync_journal
(sync_path_id, relative_path, file_id, synced_checksum, synced_size, synced_mtime, local_state)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)
ON CONFLICT(sync_path_id, relative_path) DO UPDATE SET
file_id = excluded.file_id,
synced_checksum = excluded.synced_checksum,
synced_size = excluded.synced_size,
synced_mtime = excluded.synced_mtime,
local_state = excluded.local_state",
params![e.sync_path_id, e.relative_path, e.file_id, e.synced_checksum,
e.synced_size, e.synced_mtime, e.local_state],
).map_err(|e| format!("Journal upsert: {}", e))?;
Ok(())
}
pub fn delete(&self, sync_path_id: &str, rel: &str) -> Result<(), String> {
let conn = self.conn.lock().unwrap();
conn.execute(
"DELETE FROM sync_journal WHERE sync_path_id = ?1 AND relative_path = ?2",
params![sync_path_id, rel],
).map_err(|e| format!("Journal delete: {}", e))?;
Ok(())
}
pub fn list_for_sync(&self, sync_path_id: &str) -> Vec<JournalEntry> {
let conn = self.conn.lock().unwrap();
let mut stmt = match conn.prepare(
"SELECT relative_path, file_id, synced_checksum, synced_size, synced_mtime, local_state
FROM sync_journal WHERE sync_path_id = ?1") {
Ok(s) => s,
Err(_) => return Vec::new(),
};
let rows = stmt.query_map(params![sync_path_id], |row| {
Ok(JournalEntry {
sync_path_id: sync_path_id.to_string(),
relative_path: row.get(0)?,
file_id: row.get(1)?,
synced_checksum: row.get(2)?,
synced_size: row.get(3)?,
synced_mtime: row.get(4)?,
local_state: row.get(5)?,
})
});
match rows {
Ok(it) => it.filter_map(|r| r.ok()).collect(),
Err(_) => Vec::new(),
}
}
}
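The `local_state` column together with the synced checksum lets a sync engine make a three-way decision: last-synced journal state vs. current local file vs. current server file. A minimal, stdlib-only sketch of that decision logic — the names `SyncAction` and `classify` are illustrative assumptions and do not appear in this diff, and only checksums are compared (the real engine also has size/mtime available):

```rust
#[derive(Debug, PartialEq)]
enum SyncAction {
    Upload,       // push local content to the server
    Download,     // fetch server content locally
    DeleteLocal,  // file was removed on the server
    DeleteRemote, // file was removed locally
    Conflict,     // both sides changed independently
    Noop,         // nothing to do (or only a stale journal row remains)
}

/// Three-way comparison of checksums. `journal` is the last-synced state,
/// `local` and `server` are the current states; `None` means "file absent".
fn classify(journal: Option<&str>, local: Option<&str>, server: Option<&str>) -> SyncAction {
    use SyncAction::*;
    match (journal, local, server) {
        // Gone on both sides: at most a stale journal row to clean up.
        (_, None, None) => Noop,
        // Never synced before: whichever side has the file wins.
        (None, Some(_), None) => Upload,
        (None, None, Some(_)) => Download,
        (None, Some(l), Some(s)) => if l == s { Noop } else { Conflict },
        // Deleted on exactly one side since the last sync.
        (Some(j), None, Some(s)) => if j == s { DeleteRemote } else { Download },
        (Some(j), Some(l), None) => if j == l { DeleteLocal } else { Upload },
        // Present everywhere: compare against the journal baseline.
        (Some(j), Some(l), Some(s)) => {
            if l == s { Noop }
            else if l == j { Download } // only the server changed
            else if s == j { Upload }   // only the local file changed
            else { Conflict }           // both diverged from the baseline
        }
    }
}
```

The journal baseline is what makes deletions distinguishable from "never existed": without it, a file missing locally but present on the server always looks like a fresh download.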
@@ -0,0 +1,5 @@
pub mod api;
pub mod config;
pub mod engine;
pub mod journal;
pub mod watcher;
@@ -0,0 +1,59 @@
use notify::{Config, Event, EventKind, RecommendedWatcher, RecursiveMode, Watcher};
use std::path::PathBuf;
use std::sync::mpsc;
pub struct FileWatcher {
_watcher: RecommendedWatcher,
pub receiver: mpsc::Receiver<FileChange>,
pub path: PathBuf,
}
#[derive(Debug, Clone)]
pub struct FileChange {
pub path: PathBuf,
pub kind: ChangeKind,
}
#[derive(Debug, Clone)]
pub enum ChangeKind {
Created,
Modified,
Deleted,
}
impl FileWatcher {
pub fn new(watch_dir: &PathBuf) -> Result<Self, String> {
let (tx, rx) = mpsc::channel();
let mut watcher = RecommendedWatcher::new(
move |result: Result<Event, notify::Error>| {
if let Ok(event) = result {
let kind = match event.kind {
EventKind::Create(_) => Some(ChangeKind::Created),
EventKind::Modify(_) => Some(ChangeKind::Modified),
EventKind::Remove(_) => Some(ChangeKind::Deleted),
_ => None,
};
if let Some(kind) = kind {
for path in event.paths {
// Skip hidden files and temp files
let name = path.file_name()
.and_then(|n| n.to_str())
.unwrap_or("");
if name.starts_with('.') || name.starts_with('~') || name.ends_with(".tmp") {
continue;
}
let _ = tx.send(FileChange { path, kind: kind.clone() });
}
}
}
},
Config::default(),
).map_err(|e| format!("Watcher-Fehler: {}", e))?;
watcher.watch(watch_dir.as_ref(), RecursiveMode::Recursive)
.map_err(|e| format!("Watch-Fehler: {}", e))?;
Ok(Self { _watcher: watcher, receiver: rx, path: watch_dir.clone() })
}
}
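`RecommendedWatcher` forwards raw events, and editors typically fire several `Modify` events per save. A consumer of `FileWatcher::receiver` will usually want to debounce before triggering a sync. A stdlib-only sketch — `drain_batch` is an assumed helper, not part of this diff, and it takes a plain `PathBuf` channel instead of `FileChange` for brevity:

```rust
use std::collections::HashSet;
use std::path::PathBuf;
use std::sync::mpsc::{Receiver, RecvTimeoutError};
use std::time::{Duration, Instant};

/// Collect events for up to `window`, coalescing duplicate paths so a burst
/// of Modify events for one file triggers only a single sync pass.
fn drain_batch(rx: &Receiver<PathBuf>, window: Duration) -> Vec<PathBuf> {
    let deadline = Instant::now() + window;
    let mut seen = HashSet::new();
    let mut batch = Vec::new();
    loop {
        // Zero remaining time makes recv_timeout return immediately.
        let remaining = deadline.saturating_duration_since(Instant::now());
        match rx.recv_timeout(remaining) {
            Ok(path) => {
                if seen.insert(path.clone()) {
                    batch.push(path);
                }
            }
            // Window elapsed, or the watcher side hung up.
            Err(RecvTimeoutError::Timeout) | Err(RecvTimeoutError::Disconnected) => break,
        }
    }
    batch
}
```

A sync loop would call this repeatedly and run one upload pass per non-empty batch instead of one per event.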
@@ -0,0 +1,53 @@
{
"$schema": "https://schema.tauri.app/config/2",
"productName": "MiniCloud Sync",
"version": "0.1.0",
"identifier": "com.minicloud.sync",
"build": {
"beforeDevCommand": "npm run dev",
"devUrl": "http://localhost:1420",
"beforeBuildCommand": "npm run build",
"frontendDist": "../dist"
},
"app": {
"windows": [
{
"title": "Mini-Cloud Sync",
"width": 700,
"height": 550,
"resizable": true,
"center": true
}
],
"security": {
"csp": null
}
},
"bundle": {
"active": true,
"targets": "all",
"icon": [
"icons/32x32.png",
"icons/128x128.png",
"icons/128x128@2x.png",
"icons/icon.icns",
"icons/icon.ico"
],
"fileAssociations": [
{
"ext": ["cloud"],
"mimeType": "application/x-minicloud",
"description": "Mini-Cloud Platzhalter"
}
],
"windows": {
"nsis": {
"installerIcon": "icons/icon.ico",
"headerImage": null,
"sidebarImage": null,
"installMode": "both",
"displayLanguageSelector": false
}
}
}
}
@@ -0,0 +1,752 @@
<script setup>
import { ref, onMounted, onUnmounted } from "vue";
import { invoke } from "@tauri-apps/api/core";
import { listen } from "@tauri-apps/api/event";
import { open as dialogOpen } from "@tauri-apps/plugin-dialog";
const screen = ref("login");
const serverUrl = ref("https://");
const username = ref("");
const password = ref("");
const loginError = ref("");
const loginLoading = ref(false);
const syncPaths = ref([]);
const syncLog = ref([]);
const syncing = ref(false);
const syncStatus = ref("Nicht verbunden");
const userInfo = ref(null);
const fileTree = ref([]);
const fileChanges = ref([]);
const autoSyncActive = ref(false);
const startMinimized = ref(false);
async function saveStartMinimized() {
await invoke("set_start_minimized", { minimized: startMinimized.value });
}
// New sync path form
const showAddPath = ref(false);
const newPathLocal = ref("");
const newPathServerFolder = ref("");
const newPathServerId = ref(null);
const newPathMode = ref("virtual");
// Cloud-Files (Windows cfapi / Linux FUSE)
const cloudFilesSupported = ref(false);
const cloudFilesActive = ref(false);
const cloudFilesBusy = ref(false);
const cloudFilesMountPoint = ref("");
const cloudFilesError = ref("");
async function checkCloudFilesSupport() {
try { cloudFilesSupported.value = await invoke("cloud_files_supported"); }
catch { cloudFilesSupported.value = false; }
try {
const saved = await invoke("cloud_files_get_mount");
if (saved) cloudFilesMountPoint.value = saved;
} catch { /* no saved mount */ }
}
async function forceCleanupCloudFiles() {
if (!cloudFilesMountPoint.value) return;
if (!confirm(`Sync-Root unter ${cloudFilesMountPoint.value} zwangsweise aufraeumen?\n\nDanach kann der Ordner ggf. geloescht werden.`)) return;
cloudFilesError.value = "";
cloudFilesBusy.value = true;
try {
await invoke("cloud_files_force_cleanup", { mountPoint: cloudFilesMountPoint.value });
cloudFilesActive.value = false;
cloudFilesMountPoint.value = "";
syncLog.value = [`[${ts()}] Cloud-Files Zwangsbereinigung durchgefuehrt`, ...syncLog.value].slice(0, 200);
} catch (err) {
cloudFilesError.value = String(err);
} finally {
cloudFilesBusy.value = false;
}
}
async function browseCfMount() {
try {
const selected = await dialogOpen({ directory: true, multiple: false,
title: "Cloud-Files-Ordner waehlen" });
if (selected) cloudFilesMountPoint.value = selected;
} catch { /* cancelled */ }
}
async function enableCloudFiles() {
cloudFilesError.value = "";
cloudFilesBusy.value = true;
try {
await invoke("cloud_files_enable", { mountPoint: cloudFilesMountPoint.value });
cloudFilesActive.value = true;
syncLog.value = [`[${ts()}] Cloud-Files aktiviert: ${cloudFilesMountPoint.value}`, ...syncLog.value].slice(0, 200);
} catch (err) {
cloudFilesError.value = String(err);
} finally {
cloudFilesBusy.value = false;
}
}
async function disableCloudFiles() {
cloudFilesError.value = "";
cloudFilesBusy.value = true;
try {
await invoke("cloud_files_disable", { mountPoint: cloudFilesMountPoint.value });
cloudFilesActive.value = false;
syncLog.value = [`[${ts()}] Cloud-Files deaktiviert`, ...syncLog.value].slice(0, 200);
} catch (err) {
cloudFilesError.value = String(err);
} finally {
cloudFilesBusy.value = false;
}
}
const serverFolders = ref([]);
// Local file browser
const localFiles = ref([]);
const localBreadcrumb = ref([]);
const contextMenu = ref({ show: false, x: 0, y: 0, file: null });
async function loadLocalFiles(subPath = null) {
try {
localFiles.value = await invoke("browse_sync_folder", { subPath });
if (subPath) {
// Build breadcrumb
const sp = syncPaths.value[0];
if (sp) {
const rel = subPath.replace(sp.local_dir, "").replace(/^[/\\]/, "");
const parts = rel.split(/[/\\]/).filter(Boolean);
localBreadcrumb.value = [{ name: "Sync", path: sp.local_dir }];
let current = sp.local_dir;
for (const p of parts) {
current += (current.endsWith("/") || current.endsWith("\\") ? "" : "/") + p;
localBreadcrumb.value.push({ name: p, path: current });
}
}
} else {
localBreadcrumb.value = [];
}
} catch { localFiles.value = []; }
}
function openLocalFolder(file) {
if (file.is_folder) loadLocalFiles(file.path);
}
function showContextMenu(e, file) {
e.preventDefault();
contextMenu.value = { show: true, x: e.clientX, y: e.clientY, file };
}
function hideContextMenu() {
contextMenu.value = { show: false, x: 0, y: 0, file: null };
}
async function doMarkOffline(file) {
hideContextMenu();
try {
const result = await invoke("mark_offline", { cloudPath: file.path });
syncLog.value = [`[${ts()}] ${result}`, ...syncLog.value].slice(0, 200);
await loadLocalFiles(null);
} catch (err) {
syncLog.value = [`[${ts()}] Fehler: ${err}`, ...syncLog.value].slice(0, 200);
}
}
async function doUnlockFile(file) {
hideContextMenu();
const fileId = file.file_id ?? findFileInTree(fileTree.value, file.name)?.id;
if (!fileId) {
syncLog.value = [`[${ts()}] Fehler: Datei nicht auf Server gefunden`, ...syncLog.value];
return;
}
try {
await invoke("unlock_file_cmd", { fileId });
syncLog.value = [`[${ts()}] Entsperrt: ${file.name}`, ...syncLog.value].slice(0, 200);
} catch (err) {
syncLog.value = [`[${ts()}] Fehler: ${err}`, ...syncLog.value].slice(0, 200);
}
}
async function doLockOnly(file) {
hideContextMenu();
const fileId = file.file_id ?? findFileInTree(fileTree.value, file.name)?.id;
if (!fileId) {
syncLog.value = [`[${ts()}] Fehler: Datei nicht auf Server gefunden`, ...syncLog.value];
return;
}
try {
await invoke("lock_file_cmd", { fileId });
syncLog.value = [`[${ts()}] Ausgecheckt: ${file.name}`, ...syncLog.value].slice(0, 200);
} catch (err) {
syncLog.value = [`[${ts()}] Fehler: ${err}`, ...syncLog.value].slice(0, 200);
}
}
function findFileInTree(entries, name) {
for (const e of entries) {
if (e.name === name) return e;
if (e.children) {
const found = findFileInTree(e.children, name);
if (found) return found;
}
}
return null;
}
async function doUnmarkOffline(file) {
hideContextMenu();
try {
const result = await invoke("unmark_offline", { cloudPath: file.path });
syncLog.value = [`[${ts()}] ${result}`, ...syncLog.value].slice(0, 200);
await loadLocalFiles(null);
} catch (err) {
syncLog.value = [`[${ts()}] Fehler: ${err}`, ...syncLog.value].slice(0, 200);
}
}
async function doOpenCloudFile(file) {
hideContextMenu();
try {
const realPath = await invoke("open_cloud_file", { cloudPath: file.path });
syncLog.value = [`[${ts()}] Geoeffnet: ${realPath}`, ...syncLog.value].slice(0, 200);
} catch (err) {
syncLog.value = [`[${ts()}] Fehler: ${err}`, ...syncLog.value].slice(0, 200);
}
}
async function doOpenOfflineFile(file) {
hideContextMenu();
try {
await invoke("open_offline_file", { realPath: file.path });
syncLog.value = [`[${ts()}] Ausgecheckt + geoeffnet: ${file.name}`, ...syncLog.value].slice(0, 200);
} catch (err) {
syncLog.value = [`[${ts()}] Fehler: ${err}`, ...syncLog.value].slice(0, 200);
}
}
let unlistenStatus, unlistenLog, unlistenError, unlistenFileChange, unlistenTrigger, unlistenCloudOpen;
async function handleLogin() {
loginError.value = "";
loginLoading.value = true;
try {
const result = await invoke("login", {
serverUrl: serverUrl.value,
username: username.value,
password: password.value,
});
userInfo.value = result;
screen.value = "main";
syncStatus.value = `Verbunden als ${result.username}`;
startMinimized.value = await invoke("get_start_minimized");
await loadFileTree();
await loadSyncPaths();
} catch (err) {
loginError.value = String(err);
} finally {
loginLoading.value = false;
}
}
async function loadFileTree() {
try {
fileTree.value = await invoke("get_file_tree");
// Build flat folder list for sync path selection
serverFolders.value = [{ id: null, name: "/ (Alle Dateien)", path: "/" }];
flattenFolders(fileTree.value, "", serverFolders.value);
} catch (err) { console.error(err); }
}
function flattenFolders(entries, prefix, list) {
for (const e of entries) {
if (e.is_folder) {
const path = `${prefix}/${e.name}`;
list.push({ id: e.id, name: path, path });
if (e.children) flattenFolders(e.children, path, list);
}
}
}
async function loadSyncPaths() {
try { syncPaths.value = await invoke("get_sync_paths"); }
catch { syncPaths.value = []; }
}
async function browseFolder() {
try {
const selected = await dialogOpen({ directory: true, multiple: false, title: "Sync-Ordner waehlen" });
if (selected) newPathLocal.value = selected;
} catch { /* dialog cancelled */ }
}
async function addSyncPath() {
if (!newPathLocal.value) return;
try {
await invoke("add_sync_path", {
serverPath: newPathServerFolder.value || "/",
serverFolderId: newPathServerId.value,
localDir: newPathLocal.value,
mode: newPathMode.value,
});
showAddPath.value = false;
newPathLocal.value = "";
newPathServerFolder.value = "";
newPathServerId.value = null;
newPathMode.value = "virtual";
await loadSyncPaths();
// Auto-start sync now that we have a path (if not already running)
if (!autoSyncActive.value && syncPaths.value.length > 0) {
await startSync();
}
} catch (err) { alert(err); }
}
async function removeSyncPath(id) {
await invoke("remove_sync_path", { id });
await loadSyncPaths();
// If no paths remain, stop auto-sync
if (syncPaths.value.length === 0) {
autoSyncActive.value = false;
syncStatus.value = "Keine Sync-Pfade konfiguriert";
}
}
async function toggleMode(id) {
await invoke("toggle_sync_mode", { id });
await loadSyncPaths();
}
function selectServerFolder(folder) {
newPathServerFolder.value = folder.path;
newPathServerId.value = folder.id;
}
async function startSync() {
syncing.value = true;
syncStatus.value = "Erster Sync...";
try {
const log = await invoke("start_sync");
syncLog.value = [...log.map(m => `[${ts()}] ${m}`), ...syncLog.value].slice(0, 200);
syncStatus.value = "Synchronisiert";
autoSyncActive.value = true;
await loadFileTree();
await loadLocalFiles(null);
} catch (err) { syncStatus.value = `Fehler: ${err}`; }
finally { syncing.value = false; }
}
async function syncNow() {
syncing.value = true;
try {
const log = await invoke("run_sync_now");
syncLog.value = [...log.map(m => `[${ts()}] ${m}`), ...syncLog.value].slice(0, 200);
await loadFileTree();
} catch (err) { syncStatus.value = `Fehler: ${err}`; }
finally { syncing.value = false; }
}
function ts() {
return new Date().toLocaleTimeString("de-DE", { hour: "2-digit", minute: "2-digit", second: "2-digit" });
}
function formatSize(b) {
if (!b) return "";
const u = ["B","KB","MB","GB"]; let i=0; let s=b;
while (s>=1024 && i<u.length-1) { s/=1024; i++; }
return `${s.toFixed(i>0?1:0)} ${u[i]}`;
}
onMounted(async () => {
await checkCloudFilesSupport();
// Try auto-login with saved credentials
try {
const saved = await invoke("load_saved_config");
if (saved.has_credentials) {
loginLoading.value = true;
serverUrl.value = saved.server_url;
username.value = saved.username;
try {
const result = await invoke("auto_login");
userInfo.value = result;
screen.value = "main";
syncStatus.value = `Verbunden als ${result.username}`;
syncPaths.value = (await invoke("get_sync_paths"));
startMinimized.value = await invoke("get_start_minimized");
await loadFileTree();
// Auto-start sync if paths configured
if (syncPaths.value.length > 0) {
await startSync();
}
// Automatically re-enable Cloud-Files if a mount point was saved.
if (cloudFilesSupported.value && cloudFilesMountPoint.value) {
try {
await invoke("cloud_files_enable", { mountPoint: cloudFilesMountPoint.value });
cloudFilesActive.value = true;
} catch (e) {
cloudFilesError.value = `Auto-Reaktivierung fehlgeschlagen: ${e}`;
}
}
} catch (err) {
syncStatus.value = "Auto-Login fehlgeschlagen";
// Show login screen with pre-filled fields
}
loginLoading.value = false;
} else if (saved.has_config) {
serverUrl.value = saved.server_url;
username.value = saved.username;
}
} catch { /* no saved config */ }
unlistenStatus = await listen("sync-status", e => {
syncing.value = e.payload === "syncing";
syncStatus.value = e.payload === "syncing" ? "Synchronisiere..." : "Synchronisiert";
if (e.payload === "synced") { loadFileTree(); loadLocalFiles(null); }
});
unlistenLog = await listen("sync-log", e => {
syncLog.value = [...e.payload.map(m => `[${ts()}] ${m}`), ...syncLog.value].slice(0, 200);
});
unlistenError = await listen("sync-error", e => {
syncStatus.value = `Fehler: ${e.payload}`;
syncing.value = false;
});
unlistenFileChange = await listen("file-change", e => {
fileChanges.value = [`[${ts()}] ${e.payload}`, ...fileChanges.value].slice(0, 50);
});
unlistenTrigger = await listen("trigger-sync", () => syncNow());
// Server push: on every file event, reload the server tree and the local
// list so lock status and new/deleted files show up immediately.
await listen("sse-event", () => {
loadFileTree();
loadLocalFiles(null);
});
unlistenCloudOpen = await listen("open-cloud-file", async (e) => {
const cloudPath = e.payload;
syncLog.value = [`[${ts()}] Oeffne: ${cloudPath}`, ...syncLog.value].slice(0, 200);
try {
const realPath = await invoke("open_cloud_file", { cloudPath });
syncLog.value = [`[${ts()}] Geoeffnet: ${realPath}`, ...syncLog.value].slice(0, 200);
} catch (err) {
syncLog.value = [`[${ts()}] Fehler: ${err}`, ...syncLog.value].slice(0, 200);
}
});
});
onUnmounted(() => { unlistenStatus?.(); unlistenLog?.(); unlistenError?.(); unlistenFileChange?.(); unlistenTrigger?.(); unlistenCloudOpen?.(); });
</script>
<template>
<!-- Login -->
<div v-if="screen === 'login'" class="login-screen">
<div class="login-card">
<div class="login-header">
<div class="logo-icon"></div>
<h1>Mini-Cloud</h1>
<p>Desktop Sync Client</p>
</div>
<form @submit.prevent="handleLogin">
<div class="field"><label>Server-URL</label><input v-model="serverUrl" placeholder="https://cloud.example.com" /></div>
<div class="field"><label>Benutzername</label><input v-model="username" autofocus /></div>
<div class="field"><label>Passwort</label><input v-model="password" type="password" /></div>
<div v-if="loginError" class="error">{{ loginError }}</div>
<button type="submit" :disabled="loginLoading" class="btn-primary full">{{ loginLoading ? "Verbinde..." : "Anmelden" }}</button>
</form>
</div>
</div>
<!-- Main -->
<div v-else class="main-screen">
<div class="toolbar">
<div class="toolbar-left">
<span class="logo-small"></span><strong>Mini-Cloud Sync</strong>
<span class="status-badge" :class="{ syncing, error: syncStatus.startsWith('Fehler') }">
<span v-if="syncing" class="spin"></span> {{ syncStatus }}
</span>
</div>
<div class="toolbar-right"><span class="user-info">{{ userInfo?.username }}</span></div>
</div>
<div class="content">
<!-- Cloud-Files (Windows Cloud Files API, OneDrive-style) -->
<div class="section">
<div class="section-header">
<h3>Cloud-Files (OneDrive-Style)</h3>
<span v-if="cloudFilesActive" class="status-badge syncing"> aktiv</span>
<span v-else-if="!cloudFilesSupported" class="status-badge error">nicht verfuegbar</span>
</div>
<p class="hint">
Dateien erscheinen als Platzhalter im Explorer mit Wolken-Icon und
werden erst bei Zugriff geladen. Rechtsklick im Explorer &rarr;
"Immer offline halten" oder "Speicher freigeben".
</p>
<p v-if="!cloudFilesSupported" class="hint" style="color:#c62828">
Auf dieser Plattform noch nicht verfuegbar. Aktuell: Windows 10/11.
Linux-FUSE ist in Vorbereitung, macOS folgt mit Apple-Signatur.
</p>
<template v-else>
<div class="cf-row">
<input v-model="cloudFilesMountPoint" placeholder="Ordner waehlen..." />
<button class="btn-secondary" @click="browseCfMount">Durchsuchen</button>
<button v-if="!cloudFilesActive" class="btn-primary"
:disabled="!cloudFilesMountPoint || cloudFilesBusy"
@click="enableCloudFiles">
{{ cloudFilesBusy ? "Aktiviere..." : "Aktivieren" }}
</button>
<button v-else class="btn-secondary" :disabled="cloudFilesBusy"
@click="disableCloudFiles">Deaktivieren</button>
<button v-if="cloudFilesMountPoint && !cloudFilesActive"
class="btn-secondary" :disabled="cloudFilesBusy"
@click="forceCleanupCloudFiles"
title="Toten Sync-Root nach hartem Beenden des Clients aufraeumen">
Aufraeumen
</button>
</div>
<div v-if="cloudFilesError" class="error" style="margin-top:0.5rem">{{ cloudFilesError }}</div>
</template>
</div>
<!-- Sync paths (legacy) - hidden on Windows once Cloud-Files is
active; Cloud-Files fully replaces this view. -->
<div v-if="!cloudFilesActive" class="section">
<div class="section-header">
<h3>Sync-Pfade</h3>
<div class="header-btns">
<button v-if="syncPaths.length && !autoSyncActive" @click="startSync" :disabled="syncing" class="btn-primary">Sync starten</button>
<button v-if="autoSyncActive" @click="syncNow" :disabled="syncing" class="btn-small">Jetzt synchronisieren</button>
<button @click="showAddPath = !showAddPath" class="btn-small">+ Pfad hinzufuegen</button>
</div>
</div>
<div v-if="autoSyncActive" class="auto-info">Auto-Sync alle 30s aktiv</div>
<!-- Add new sync path -->
<div v-if="showAddPath" class="add-path-form">
<div class="field">
<label>Server-Ordner</label>
<select v-model="newPathServerId" @change="selectServerFolder(serverFolders.find(f => f.id === newPathServerId))">
<option v-for="f in serverFolders" :key="f.id ?? 'root'" :value="f.id">{{ f.name }}</option>
</select>
</div>
<div class="field">
<label>Lokaler Ordner</label>
<div class="browse-row">
<input v-model="newPathLocal" placeholder="/home/user/MiniCloud" />
<button @click="browseFolder" class="btn-small">Durchsuchen...</button>
</div>
</div>
<div class="field">
<label>Modus</label>
<div class="mode-select">
<label class="mode-option" :class="{ active: newPathMode === 'virtual' }">
<input type="radio" v-model="newPathMode" value="virtual" /> Virtual Files
<small>Platzhalter, Download bei Bedarf</small>
</label>
<label class="mode-option" :class="{ active: newPathMode === 'full' }">
<input type="radio" v-model="newPathMode" value="full" /> 💾 Full Sync
<small>Alle Dateien lokal spiegeln</small>
</label>
</div>
</div>
<div class="form-actions">
<button @click="showAddPath = false" class="btn-small">Abbrechen</button>
<button @click="addSyncPath" class="btn-primary" :disabled="!newPathLocal">Hinzufuegen</button>
</div>
</div>
<!-- Existing sync paths -->
<div v-for="sp in syncPaths" :key="sp.id" class="sync-path-card">
<div class="sp-info">
<div class="sp-server"> {{ sp.server_path }}</div>
<div class="sp-arrow"></div>
<div class="sp-local">📁 {{ sp.local_dir }}</div>
</div>
<div class="sp-actions">
<span class="sp-mode" :class="sp.mode" @click="toggleMode(sp.id)" :title="'Klicken zum Wechseln'">
{{ sp.mode === 'Full' ? '💾 Full' : '☁ Virtual' }}
</span>
<button @click="removeSyncPath(sp.id)" class="btn-danger" title="Entfernen"></button>
</div>
</div>
<div v-if="!syncPaths.length && !showAddPath" class="empty">
Noch keine Sync-Pfade. Klicke "Pfad hinzufuegen" um loszulegen.
</div>
</div>
<!-- Local file browser (legacy, Full-Sync mode only) -->
<div v-if="autoSyncActive && !cloudFilesActive" class="section" @click="hideContextMenu">
<div class="section-header">
<h3>Lokale Dateien</h3>
<button @click="loadLocalFiles(null)" class="btn-small"></button>
</div>
<div v-if="localBreadcrumb.length" class="local-breadcrumb">
<span v-for="(b, i) in localBreadcrumb" :key="i">
<a @click="loadLocalFiles(b.path)">{{ b.name }}</a>
<span v-if="i < localBreadcrumb.length - 1"> / </span>
</span>
</div>
<div class="local-file-list">
<div v-for="f in localFiles" :key="f.path"
class="local-file-item"
@dblclick="f.is_folder ? openLocalFolder(f) : (f.is_cloud ? doOpenCloudFile(f) : doOpenOfflineFile(f))"
@contextmenu="showContextMenu($event, f)">
<span class="lf-icon">{{ f.is_folder ? '📁' : (f.is_cloud ? '☁' : '📄') }}</span>
<span class="lf-name">{{ f.name }}</span>
<span v-if="f.is_cloud" class="lf-badge cloud">Cloud</span>
<span v-else-if="f.is_offline" class="lf-badge offline">Offline</span>
<span v-if="f.locked" class="lf-badge locked" :title="'Ausgecheckt von ' + f.locked_by">🔒 {{ f.locked_by }}</span>
<span class="lf-size">{{ formatSize(f.cloud_size || f.size) }}</span>
</div>
<div v-if="!localFiles.length" class="empty">Ordner ist leer</div>
</div>
</div>
<!-- Context Menu -->
<div v-if="contextMenu.show" class="context-menu"
:style="{ left: contextMenu.x + 'px', top: contextMenu.y + 'px' }">
<div v-if="contextMenu.file?.is_cloud" class="cm-item" @click="doOpenCloudFile(contextMenu.file)">
📥 Oeffnen (herunterladen)
</div>
<div v-if="contextMenu.file?.is_cloud" class="cm-item" @click="doMarkOffline(contextMenu.file)">
💾 Offline verfuegbar machen
</div>
<div v-if="contextMenu.file?.is_offline" class="cm-item" @click="doOpenOfflineFile(contextMenu.file)">
📂 Oeffnen (auschecken)
</div>
<div v-if="contextMenu.file?.is_offline && !contextMenu.file?.locked" class="cm-item" @click="doLockOnly(contextMenu.file)">
🔒 Auschecken (sperren)
</div>
<div v-if="contextMenu.file?.is_offline && contextMenu.file?.locked" class="cm-item" @click="doUnlockFile(contextMenu.file)">
🔓 Entsperren (einchecken)
</div>
<div v-if="contextMenu.file?.is_offline" class="cm-item" @click="doUnmarkOffline(contextMenu.file)">
Nicht mehr offline (Platzhalter)
</div>
<div class="cm-item" @click="hideContextMenu">Abbrechen</div>
</div>
<!-- File Tree -->
<div class="section">
<div class="section-header">
<h3>Server-Dateien</h3>
<button @click="loadFileTree" class="btn-small"></button>
</div>
<div class="file-tree">
<template v-for="e in fileTree" :key="e.id">
<div class="tree-item">
<span class="tree-icon">{{ e.is_folder ? '📁' : '📄' }}</span>
<span class="tree-name">{{ e.name }}</span>
<span v-if="e.locked" class="tree-lock">🔒 {{ e.locked_by }}</span>
<span v-if="!e.is_folder" class="tree-size">{{ formatSize(e.size) }}</span>
</div>
<template v-if="e.children">
<div v-for="c in e.children" :key="c.id" class="tree-item indent">
<span class="tree-icon">{{ c.is_folder ? '📁' : '📄' }}</span>
<span class="tree-name">{{ c.name }}</span>
<span v-if="c.locked" class="tree-lock">🔒 {{ c.locked_by }}</span>
<span v-if="!c.is_folder" class="tree-size">{{ formatSize(c.size) }}</span>
</div>
</template>
</template>
<div v-if="!fileTree.length" class="empty">Keine Dateien</div>
</div>
</div>
<!-- File Changes -->
<div v-if="fileChanges.length" class="section">
<h3>Lokale Aenderungen</h3>
<div class="log-list"><div v-for="(m,i) in fileChanges" :key="i" class="log-item change">{{ m }}</div></div>
</div>
<!-- Sync Log -->
<div v-if="syncLog.length" class="section">
<h3>Sync-Protokoll</h3>
<div class="log-list"><div v-for="(m,i) in syncLog" :key="i" class="log-item">{{ m }}</div></div>
</div>
<!-- Settings -->
<div class="section">
<h3>Einstellungen</h3>
<label class="checkbox-row">
<input type="checkbox" v-model="startMinimized" @change="saveStartMinimized" />
Minimiert starten (direkt im System-Tray)
</label>
</div>
</div>
</div>
</template>
<style>
*{box-sizing:border-box;margin:0;padding:0}
body{font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,sans-serif;font-size:14px;color:#1a1a1a;background:#f0f2f5}
.login-screen{height:100vh;display:flex;align-items:center;justify-content:center}
.login-card{background:#fff;border-radius:12px;padding:2rem;width:360px;box-shadow:0 2px 12px rgba(0,0,0,.1)}
.login-header{text-align:center;margin-bottom:1.5rem}
.logo-icon{font-size:2.5rem}.login-header h1{font-size:1.3rem;margin:.5rem 0 .25rem}.login-header p{color:#666;font-size:.85rem}
.field{margin-bottom:.75rem}.field label{display:block;margin-bottom:.25rem;font-weight:500;font-size:.85rem}
.field input,.field select{width:100%;padding:.5rem;border:1px solid #ddd;border-radius:6px;font-size:.9rem;background:#fafafa}
.field input:focus,.field select:focus{border-color:#4a90d9;outline:none;background:#fff}
.error{color:#e53e3e;font-size:.85rem;margin-bottom:.75rem}
.btn-primary{padding:.5rem 1rem;background:#4a90d9;color:#fff;border:none;border-radius:6px;font-size:.85rem;cursor:pointer;font-weight:500;white-space:nowrap}
.btn-primary:hover{background:#3a7bc8}.btn-primary:disabled{opacity:.6;cursor:not-allowed}
.btn-primary.full{width:100%}
.btn-small{padding:.25rem .5rem;background:#e8e8e8;border:none;border-radius:4px;font-size:.8rem;cursor:pointer}
.btn-small:hover{background:#ddd}
.btn-danger{padding:.25rem .5rem;background:#fee;color:#c00;border:none;border-radius:4px;font-size:.8rem;cursor:pointer}
.btn-danger:hover{background:#fcc}
.main-screen{height:100vh;display:flex;flex-direction:column}
.toolbar{display:flex;align-items:center;justify-content:space-between;padding:.5rem 1rem;background:#fff;border-bottom:1px solid #e0e0e0}
.toolbar-left{display:flex;align-items:center;gap:.5rem}.logo-small{font-size:1.2rem}
.status-badge{font-size:.8rem;padding:.2rem .5rem;border-radius:4px;background:#e8f5e9;color:#2e7d32}
.status-badge.syncing{background:#fff3e0;color:#e65100}.status-badge.error{background:#ffebee;color:#c62828}
.spin{display:inline-block;animation:spin 1s linear infinite}@keyframes spin{from{transform:rotate(0)}to{transform:rotate(360deg)}}
.user-info{font-size:.85rem;color:#666}
.content{flex:1;overflow-y:auto;padding:1rem}
.section{background:#fff;border-radius:8px;padding:1rem;margin-bottom:.75rem}
.section h3{margin-bottom:.5rem;font-size:.95rem}
.section-header{display:flex;align-items:center;justify-content:space-between;margin-bottom:.5rem}
.section-header h3{margin:0}.header-btns{display:flex;gap:.5rem}
.auto-info{font-size:.8rem;color:#2e7d32;margin-bottom:.5rem}
.add-path-form{border:1px solid #e0e0e0;border-radius:8px;padding:1rem;margin-bottom:.75rem;background:#fafafa}
.browse-row{display:flex;gap:.5rem}
.browse-row input{flex:1}
.mode-select{display:flex;gap:.5rem}
.mode-option{flex:1;display:flex;flex-direction:column;padding:.5rem;border:2px solid #e0e0e0;border-radius:6px;cursor:pointer;font-size:.85rem}
.mode-option.active{border-color:#4a90d9;background:#f0f7ff}
.mode-option input{margin-right:.25rem}
.mode-option small{color:#888;font-size:.75rem;margin-top:.25rem}
.form-actions{display:flex;justify-content:flex-end;gap:.5rem;margin-top:.75rem}
.sync-path-card{display:flex;align-items:center;justify-content:space-between;padding:.5rem .75rem;border:1px solid #e8e8e8;border-radius:6px;margin-bottom:.375rem;font-size:.85rem}
.sp-info{display:flex;align-items:center;gap:.375rem;flex:1;min-width:0}
.sp-server,.sp-local{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}
.sp-server{color:#4a90d9}.sp-arrow{color:#999;flex-shrink:0}.sp-local{color:#555}
.sp-actions{display:flex;align-items:center;gap:.375rem;flex-shrink:0}
.sp-mode{font-size:.75rem;padding:.2rem .4rem;border-radius:4px;cursor:pointer;background:#f0f0f0}
.sp-mode.Full{background:#e3f2fd;color:#1565c0}.sp-mode.Virtual{background:#f3e5f5;color:#7b1fa2}
.cf-row{display:flex;gap:.5rem;align-items:center;flex-wrap:wrap}
.cf-row input{flex:1;min-width:300px}
.file-tree{max-height:250px;overflow-y:auto}
.tree-item{display:flex;align-items:center;gap:.5rem;padding:.3rem 0;border-bottom:1px solid #f5f5f5;font-size:.85rem}
.tree-item.indent{padding-left:1.5rem}.tree-icon{flex-shrink:0}.tree-name{flex:1;overflow:hidden;text-overflow:ellipsis;white-space:nowrap}
.tree-lock{font-size:.75rem;color:#e67e22;flex-shrink:0}.tree-size{font-size:.75rem;color:#999;flex-shrink:0}
.empty{text-align:center;color:#999;padding:1rem;font-size:.85rem}
.log-list{max-height:150px;overflow-y:auto;font-family:monospace;font-size:.78rem}
.log-item{padding:.2rem 0;border-bottom:1px solid #f8f8f8;color:#555}.log-item.change{color:#1565c0}
.local-breadcrumb{font-size:.85rem;margin-bottom:.5rem;color:#666}
.local-breadcrumb a{color:#4a90d9;cursor:pointer;text-decoration:none}
.local-breadcrumb a:hover{text-decoration:underline}
.local-file-list{max-height:300px;overflow-y:auto}
.local-file-item{display:flex;align-items:center;gap:.5rem;padding:.35rem .25rem;border-bottom:1px solid #f5f5f5;font-size:.85rem;cursor:default;user-select:none}
.local-file-item:hover{background:#f8f8f8}
.lf-icon{flex-shrink:0;font-size:1rem}
.lf-name{flex:1;overflow:hidden;text-overflow:ellipsis;white-space:nowrap}
.lf-badge{font-size:.65rem;padding:.1rem .3rem;border-radius:3px;flex-shrink:0}
.lf-badge.cloud{background:#e3f2fd;color:#1565c0}
.lf-badge.offline{background:#e8f5e9;color:#2e7d32}
.lf-badge.locked{background:#fff3e0;color:#e65100}
.lf-size{font-size:.75rem;color:#999;flex-shrink:0}
.checkbox-row{display:flex;align-items:center;gap:.5rem;font-size:.85rem;cursor:pointer}
.context-menu{position:fixed;background:#fff;border:1px solid #ddd;border-radius:6px;box-shadow:0 4px 12px rgba(0,0,0,.15);z-index:9999;min-width:200px;padding:.25rem 0}
.cm-item{padding:.5rem .75rem;cursor:pointer;font-size:.85rem}
.cm-item:hover{background:#f0f0f0}
@media(prefers-color-scheme:dark){body{color:#e0e0e0;background:#1a1a1a}.login-card,.section{background:#2a2a2a}.toolbar{background:#2a2a2a;border-color:#3a3a3a}.field input,.field select{background:#333;border-color:#444;color:#e0e0e0}.status-badge{background:#1b5e20;color:#a5d6a7}.status-badge.syncing{background:#e65100;color:#ffcc80}.add-path-form{background:#333;border-color:#444}.mode-option{border-color:#444}.mode-option.active{border-color:#4a90d9;background:#1a3a5c}.sync-path-card{border-color:#3a3a3a}.tree-item{border-color:#333}.log-item{border-color:#333;color:#aaa}.log-item.change{color:#64b5f6}.local-file-item{border-color:#333}.local-file-item:hover{background:#333}.context-menu{background:#2a2a2a;border-color:#444}.cm-item:hover{background:#3a3a3a}}
</style>
+1
@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="37.07" height="36" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 198"><path fill="#41B883" d="M204.8 0H256L128 220.8L0 0h97.92L128 51.2L157.44 0h47.36Z"></path><path fill="#41B883" d="m0 0l128 220.8L256 0h-51.2L128 132.48L50.56 0H0Z"></path><path fill="#35495E" d="M50.56 0L128 133.12L204.8 0h-47.36L128 51.2L97.92 0H50.56Z"></path></svg>

(new image, 496 B)

+4
@@ -0,0 +1,4 @@
import { createApp } from "vue";
import App from "./App.vue";
createApp(App).mount("#app");
+31
@@ -0,0 +1,31 @@
import { defineConfig } from "vite";
import vue from "@vitejs/plugin-vue";
const host = process.env.TAURI_DEV_HOST;
// https://vite.dev/config/
export default defineConfig(async () => ({
plugins: [vue()],
// Vite options tailored for Tauri development and only applied in `tauri dev` or `tauri build`
//
// 1. prevent Vite from obscuring rust errors
clearScreen: false,
// 2. tauri expects a fixed port, fail if that port is not available
server: {
port: 1420,
strictPort: true,
host: host || false,
hmr: host
? {
protocol: "ws",
host,
port: 1421,
}
: undefined,
watch: {
// 3. tell Vite to ignore watching `src-tauri`
ignored: ["**/src-tauri/**"],
},
},
}));
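The conditional HMR block above can be factored out to make the rule explicit; a minimal sketch (illustrative helper, not part of the actual config):

```javascript
// Mirrors the config above: HMR is only customised when TAURI_DEV_HOST is
// set (e.g. when testing on a mobile device); otherwise returning undefined
// leaves Vite's default HMR behaviour in place.
function hmrConfig(host) {
  return host
    ? { protocol: "ws", host, port: 1421 }
    : undefined;
}
```

With `TAURI_DEV_HOST=192.168.0.5` the HMR websocket targets that host on port 1421; unset, Vite falls back to its defaults.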
+3 -1
@@ -20,7 +20,9 @@ services:
- "8080:80"
environment:
- JWT_ENABLED=true
- JWT_SECRET=${ONLYOFFICE_JWT_SECRET}
- JWT_SECRET=${JWT_SECRET_KEY}
- ALLOW_META_IP_ADDRESS=true
- ALLOW_PRIVATE_IP_ADDRESS=true
volumes:
- ./data/onlyoffice/logs:/var/log/onlyoffice
- ./data/onlyoffice/data:/var/www/onlyoffice/Data
+1 -1
@@ -4,7 +4,7 @@
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/favicon.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>frontend</title>
<title>Mini-Cloud</title>
</head>
<body>
<div id="app"></div>
+86 -3
@@ -8,11 +8,18 @@
"name": "frontend",
"version": "0.0.0",
"dependencies": {
"@fullcalendar/core": "^6.1.15",
"@fullcalendar/daygrid": "^6.1.15",
"@fullcalendar/interaction": "^6.1.15",
"@fullcalendar/rrule": "^6.1.15",
"@fullcalendar/timegrid": "^6.1.15",
"@fullcalendar/vue3": "^6.1.15",
"@primevue/themes": "^4.5.4",
"axios": "^1.15.0",
"pinia": "^3.0.4",
"primeicons": "^7.0.0",
"primevue": "^4.5.5",
"rrule": "^2.8.1",
"vue": "^3.5.32",
"vue-router": "^4.6.4"
},
@@ -101,6 +108,65 @@
"tslib": "^2.4.0"
}
},
"node_modules/@fullcalendar/core": {
"version": "6.1.20",
"resolved": "https://registry.npmjs.org/@fullcalendar/core/-/core-6.1.20.tgz",
"integrity": "sha512-1cukXLlePFiJ8YKXn/4tMKsy0etxYLCkXk8nUCFi11nRONF2Ba2CD5b21/ovtOO2tL6afTJfwmc1ed3HG7eB1g==",
"license": "MIT",
"dependencies": {
"preact": "~10.12.1"
}
},
"node_modules/@fullcalendar/daygrid": {
"version": "6.1.20",
"resolved": "https://registry.npmjs.org/@fullcalendar/daygrid/-/daygrid-6.1.20.tgz",
"integrity": "sha512-AO9vqhkLP77EesmJzuU+IGXgxNulsA8mgQHynclJ8U70vSwAVnbcLG9qftiTAFSlZjiY/NvhE7sflve6cJelyQ==",
"license": "MIT",
"peerDependencies": {
"@fullcalendar/core": "~6.1.20"
}
},
"node_modules/@fullcalendar/interaction": {
"version": "6.1.20",
"resolved": "https://registry.npmjs.org/@fullcalendar/interaction/-/interaction-6.1.20.tgz",
"integrity": "sha512-p6txmc5txL0bMiPaJxe2ip6o0T384TyoD2KGdsU6UjZ5yoBlaY+dg7kxfnYKpYMzEJLG58n+URrHr2PgNL2fyA==",
"license": "MIT",
"peerDependencies": {
"@fullcalendar/core": "~6.1.20"
}
},
"node_modules/@fullcalendar/rrule": {
"version": "6.1.20",
"resolved": "https://registry.npmjs.org/@fullcalendar/rrule/-/rrule-6.1.20.tgz",
"integrity": "sha512-5Awk7bmaA97hSZRpIBehenXkYreVIvx8nnaMFZ/LDGRuK1mgbR4vSUrDTvVU+oEqqKnj/rqMBByWqN5NeehQxw==",
"license": "MIT",
"peerDependencies": {
"@fullcalendar/core": "~6.1.20",
"rrule": "^2.6.0"
}
},
"node_modules/@fullcalendar/timegrid": {
"version": "6.1.20",
"resolved": "https://registry.npmjs.org/@fullcalendar/timegrid/-/timegrid-6.1.20.tgz",
"integrity": "sha512-4H+/MWbz3ntA50lrPif+7TsvMeX3R1GSYjiLULz0+zEJ7/Yfd9pupZmAwUs/PBpA6aAcFmeRr0laWfcz1a9V1A==",
"license": "MIT",
"dependencies": {
"@fullcalendar/daygrid": "~6.1.20"
},
"peerDependencies": {
"@fullcalendar/core": "~6.1.20"
}
},
"node_modules/@fullcalendar/vue3": {
"version": "6.1.20",
"resolved": "https://registry.npmjs.org/@fullcalendar/vue3/-/vue3-6.1.20.tgz",
"integrity": "sha512-8qg6pS27II9QBwFkkJC+7SfflMpWqOe7i3ii5ODq9KpLAjwQAd/zjfq8RvKR1Yryoh5UmMCmvRbMB7i4RGtqog==",
"license": "MIT",
"peerDependencies": {
"@fullcalendar/core": "~6.1.20",
"vue": "^3.0.11"
}
},
"node_modules/@jridgewell/sourcemap-codec": {
"version": "1.5.5",
"resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz",
@@ -1393,6 +1459,16 @@
"node": "^10 || ^12 || >=14"
}
},
"node_modules/preact": {
"version": "10.12.1",
"resolved": "https://registry.npmjs.org/preact/-/preact-10.12.1.tgz",
"integrity": "sha512-l8386ixSsBdbreOAkqtrwqHwdvR35ID8c3rKPa8lCWuO86dBi32QWHV4vfsZK1utLLFMvw+Z5Ad4XLkZzchscg==",
"license": "MIT",
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/preact"
}
},
"node_modules/primeicons": {
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/primeicons/-/primeicons-7.0.0.tgz",
@@ -1471,6 +1547,15 @@
"dev": true,
"license": "MIT"
},
"node_modules/rrule": {
"version": "2.8.1",
"resolved": "https://registry.npmjs.org/rrule/-/rrule-2.8.1.tgz",
"integrity": "sha512-hM3dHSBMeaJ0Ktp7W38BJZ7O1zOgaFEsn41PDk+yHoEtfLV+PoJt9E9xAlZiWgf/iqEqionN0ebHFZIDAp+iGw==",
"license": "BSD-3-Clause",
"dependencies": {
"tslib": "^2.4.0"
}
},
"node_modules/source-map-js": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.1.tgz",
@@ -1522,9 +1607,7 @@
"version": "2.8.1",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
"dev": true,
"license": "0BSD",
"optional": true
"license": "0BSD"
},
"node_modules/vite": {
"version": "8.0.8",
+7
@@ -9,11 +9,18 @@
"preview": "vite preview"
},
"dependencies": {
"@fullcalendar/core": "^6.1.15",
"@fullcalendar/daygrid": "^6.1.15",
"@fullcalendar/interaction": "^6.1.15",
"@fullcalendar/rrule": "^6.1.15",
"@fullcalendar/timegrid": "^6.1.15",
"@fullcalendar/vue3": "^6.1.15",
"@primevue/themes": "^4.5.4",
"axios": "^1.15.0",
"pinia": "^3.0.4",
"primeicons": "^7.0.0",
"primevue": "^4.5.5",
"rrule": "^2.8.1",
"vue": "^3.5.32",
"vue-router": "^4.6.4"
},
File diff suppressed because one or more lines are too long

(image replaced: 9.3 KiB → 337 B)

+13
@@ -1,3 +1,16 @@
<template>
<router-view />
</template>
<script setup>
import { watchEffect } from 'vue'
import { useAuthStore } from './stores/auth'
const auth = useAuthStore()
watchEffect(() => {
document.title = auth.user?.username
? `Mini-Cloud - ${auth.user.username}`
: 'Mini-Cloud'
})
</script>
+10
@@ -48,6 +48,11 @@ const routes = [
name: 'Contacts',
component: () => import('../views/ContactsView.vue'),
},
{
path: 'tasks',
name: 'Tasks',
component: () => import('../views/TasksView.vue'),
},
{
path: 'email',
name: 'Email',
@@ -71,6 +76,11 @@ const routes = [
},
],
},
{
path: '/clients',
name: 'Clients',
component: () => import('../views/ClientsView.vue'),
},
{
path: '/share/:token',
name: 'Share',
+7
@@ -17,6 +17,13 @@ export const useFilesStore = defineStore('files', () => {
const response = await apiClient.get('/files', { params })
files.value = response.data.files
breadcrumb.value = response.data.breadcrumb
} catch (err) {
// Let the caller handle access/deletion errors - just clear the list
if (err.response && (err.response.status === 403 || err.response.status === 404)) {
files.value = []
breadcrumb.value = []
}
throw err
} finally {
loading.value = false
}
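The store change above follows a deliberate split: the store clears its cached list on 403/404 but re-throws, and the view decides how to surface the error. That contract can be sketched synchronously (hypothetical names, not the actual Pinia store API):

```javascript
// Store side: on access errors clear the cached list, then always re-throw.
function loadFiles(fetcher, state) {
  try {
    state.files = fetcher()
  } catch (err) {
    if (err.status === 403 || err.status === 404) state.files = []
    throw err // the caller decides whether to toast, redirect, or ignore
  }
}

// View side: translate access errors into a user-visible reaction.
function safeLoad(fetcher, state, onDenied) {
  try {
    loadFiles(fetcher, state)
  } catch (err) {
    if (err.status === 403 || err.status === 404) onDenied(err)
  }
}
```

The real code is async (`await apiClient.get(...)`), but the error flow is the same: stale entries never linger in the UI, and the redirect logic lives in exactly one place.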
+76 -21
@@ -37,6 +37,28 @@
</div>
</div>
<!-- System-Info: Zeitzone & NTP (read-only) -->
<div class="admin-section">
<h3>System-Zeit</h3>
<p class="hint">Wird in der <code>.env</code> festgelegt (Keys <code>TZ</code> und <code>NTP_SERVER</code>).
Aenderungen erfordern einen Neustart des Backends.</p>
<div class="sysinfo">
<div class="sysinfo-row">
<span class="sysinfo-label">Zeitzone:</span>
<code>{{ settings.timezone || '—' }}</code>
<span v-if="settings.timezone_abbr" class="sysinfo-extra">({{ settings.timezone_abbr }})</span>
</div>
<div class="sysinfo-row">
<span class="sysinfo-label">Aktuelle Server-Zeit:</span>
<code>{{ formatServerTime(settings.server_time) }}</code>
</div>
<div class="sysinfo-row">
<span class="sysinfo-label">NTP-Server:</span>
<code>{{ settings.ntp_server || '(deaktiviert)' }}</code>
</div>
</div>
</div>
<!-- System Email -->
<div class="admin-section">
<h3>System-E-Mail (SMTP)</h3>
@@ -80,28 +102,37 @@
<h3>OnlyOffice Document Server</h3>
<p class="hint">Fuer die Bearbeitung von Word, Excel und PowerPoint Dateien direkt im Browser.
Ohne OnlyOffice werden Dateien in einer einfachen Vorschau angezeigt.</p>
<div class="smtp-form">
<div class="field">
<label>OnlyOffice URL</label>
<InputText v-model="smtpForm.onlyoffice_url" placeholder="http://onlyoffice:80 oder https://office.example.com" fluid />
<div class="setting-row">
<div class="setting-info">
<strong>Status</strong>
</div>
<div class="field">
<label>JWT Secret {{ onlyofficeJwtSet ? '(gesetzt)' : '' }}</label>
<Password v-model="smtpForm.onlyoffice_jwt_secret" :feedback="false" toggle-mask fluid
placeholder="Muss mit JWT_SECRET in docker-compose uebereinstimmen" />
</div>
<Button label="Speichern" icon="pi pi-save" size="small" @click="saveSmtp" />
<Tag v-if="onlyofficeConfigured" value="Konfiguriert" severity="success" />
<Tag v-else value="Nicht konfiguriert" severity="warn" />
</div>
<div v-if="onlyofficeConfigured" class="settings-info" style="margin: 0.75rem 0">
<div class="info-row">
<span class="label">URL:</span>
<code>{{ onlyofficeUrl }}</code>
</div>
<div class="info-row">
<span class="label">JWT:</span>
<span>Nutzt JWT_SECRET_KEY aus .env</span>
</div>
</div>
<div class="restore-instructions" style="margin-top: 1rem">
<strong>Setup:</strong>
<strong>Konfiguration ueber <code>.env</code>:</strong>
<pre style="background: var(--p-surface-100); padding: 0.75rem; border-radius: 4px; font-size: 0.85rem; margin: 0.5rem 0">ONLYOFFICE_URL=https://office.deine-domain.de</pre>
<p class="hint">JWT wird automatisch vom <code>JWT_SECRET_KEY</code> verwendet - kein extra Secret noetig.</p>
<strong>Setup-Schritte:</strong>
<ol>
<li>In <code>docker-compose.yml</code> den <code>onlyoffice</code>-Service auskommentieren</li>
<li>Nginx-Eintrag fuer OnlyOffice anlegen (z.B. <code>office.deine-domain.de</code>) - siehe <code>nginx.example.conf</code></li>
<li>Let's Encrypt Zertifikat fuer die OnlyOffice-Domain erstellen</li>
<li><code>docker-compose up -d</code></li>
<li>Hier die <strong>oeffentliche HTTPS-URL</strong> eintragen (z.B. <code>https://office.deine-domain.de</code>)<br/>
<em>Nicht</em> die interne Docker-URL - der Browser muss OnlyOffice erreichen koennen!</li>
<li>JWT Secret muss mit <code>ONLYOFFICE_JWT_SECRET</code> in <code>docker-compose.yml</code> uebereinstimmen</li>
<li>In <code>docker-compose.yml</code> den <code>onlyoffice</code>-Service aktivieren</li>
<li><code>ONLYOFFICE_URL</code> und <code>ONLYOFFICE_JWT_SECRET</code> in <code>.env</code> setzen</li>
<li>Nginx-Eintrag fuer die OnlyOffice-Domain anlegen (siehe <code>nginx.example.conf</code>)</li>
<li>Let's Encrypt: <code>certbot --nginx -d office.deine-domain.de</code></li>
<li><code>docker-compose up --build -d</code></li>
</ol>
</div>
</div>
@@ -540,7 +571,19 @@ const smtpForm = ref({
system_smtp_username: '', system_smtp_password: '', system_email_from: '',
})
const smtpPasswordSet = ref(false)
const onlyofficeJwtSet = ref(false)
const onlyofficeConfigured = ref(false)
const onlyofficeUrl = ref('')
const settings = ref({ timezone: '', timezone_abbr: '', server_time: '', ntp_server: '' })
function formatServerTime(iso) {
if (!iso) return '—'
try {
return new Date(iso).toLocaleString('de-DE', {
day: '2-digit', month: '2-digit', year: 'numeric',
hour: '2-digit', minute: '2-digit', second: '2-digit',
})
} catch { return iso }
}
const smtpTesting = ref(false)
// Backup & Restore
@@ -648,8 +691,14 @@ async function loadSettings() {
smtpForm.value.system_smtp_username = res.data.system_smtp_username || ''
smtpForm.value.system_email_from = res.data.system_email_from || ''
smtpPasswordSet.value = res.data.system_smtp_password_set
smtpForm.value.onlyoffice_url = res.data.onlyoffice_url || ''
onlyofficeJwtSet.value = res.data.onlyoffice_jwt_secret_set
onlyofficeConfigured.value = res.data.onlyoffice_configured
onlyofficeUrl.value = res.data.onlyoffice_url || ''
settings.value = {
timezone: res.data.timezone || '',
timezone_abbr: res.data.timezone_abbr || '',
server_time: res.data.server_time || '',
ntp_server: res.data.ntp_server || '',
}
} catch { /* first load, defaults */ }
}
@@ -1206,6 +1255,12 @@ onMounted(() => {
.field-row { display: flex; gap: 0.75rem; align-items: flex-end; }
.flex-grow { flex: 1; }
.hint { font-size: 0.85rem; color: var(--p-text-muted-color); margin: 0 0 0.75rem; }
.hint code { background: var(--p-surface-100); padding: 0.05rem 0.35rem; border-radius: 3px; font-size: 0.8rem; }
.sysinfo { display: flex; flex-direction: column; gap: 0.4rem; font-size: 0.875rem; }
.sysinfo-row { display: flex; gap: 0.5rem; align-items: center; flex-wrap: wrap; }
.sysinfo-label { min-width: 180px; color: var(--p-text-muted-color); }
.sysinfo code { background: var(--p-surface-100); padding: 0.15rem 0.5rem; border-radius: 4px; }
.sysinfo-extra { color: var(--p-text-muted-color); font-size: 0.8rem; }
.invite-section { margin-top: 1.5rem; padding-top: 1rem; border-top: 1px solid var(--p-surface-200); }
.invite-section h4 { margin: 0 0 0.25rem; font-size: 0.95rem; }
.invite-row { display: flex; gap: 0.5rem; align-items: flex-start; }
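The `formatServerTime` helper added to SettingsView above is restated here standalone. One caveat worth knowing: `new Date('garbage').toLocaleString()` returns the string `"Invalid Date"` rather than throwing, so the `catch` is only a last resort, not an invalid-input guard:

```javascript
// Standalone copy of the SettingsView helper: German day-first timestamp,
// em dash for missing values. toLocaleString does not throw on an
// unparsable date - it yields "Invalid Date" - so the catch rarely fires.
function formatServerTime(iso) {
  if (!iso) return '—'
  try {
    return new Date(iso).toLocaleString('de-DE', {
      day: '2-digit', month: '2-digit', year: 'numeric',
      hour: '2-digit', minute: '2-digit', second: '2-digit',
    })
  } catch { return iso }
}
```

The exact output depends on the runtime's locale data and timezone, which is why the sketch asserts only the fallback and the presence of the year.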
+6 -1
@@ -22,6 +22,11 @@
<span>Kontakte</span>
</router-link>
<router-link to="/tasks" class="nav-item" active-class="active">
<i class="pi pi-check-square"></i>
<span>Aufgaben</span>
</router-link>
<router-link
v-if="auth.hasEmailAccounts"
to="/email"
@@ -67,7 +72,7 @@
</aside>
<main class="main-content">
<router-view />
<router-view :key="$route.fullPath" />
</main>
</div>
</template>
File diff suppressed because it is too large
+100
@@ -0,0 +1,100 @@
<template>
<div class="clients-container">
<div class="clients-card">
<div class="clients-header">
<i class="pi pi-cloud" style="font-size: 2rem; color: var(--p-primary-color)"></i>
<h1>Mini-Cloud Clients</h1>
<p>Lade den Sync-Client fuer dein Geraet herunter</p>
</div>
<div v-if="loading" class="loading">
<i class="pi pi-spin pi-spinner"></i> Laden...
</div>
<div v-else-if="!clients.length" class="empty">
<p>Noch keine Clients verfuegbar.</p>
</div>
<div v-else class="clients-grid">
<div v-for="client in clients" :key="client.platform" class="client-card">
<div class="client-icon">
<i :class="'pi ' + platformIcon(client.platform)"></i>
</div>
<h3>{{ client.name }}</h3>
<p class="client-meta">{{ client.filename }} ({{ formatSize(client.size) }})</p>
<Button :label="'Download ' + client.name" icon="pi pi-download"
@click="downloadClient(client)" fluid />
</div>
</div>
<div class="clients-footer">
<router-link to="/login">Zurueck zur Anmeldung</router-link>
</div>
</div>
</div>
</template>
<script setup>
import { ref, onMounted } from 'vue'
import axios from 'axios'
import Button from 'primevue/button'
const clients = ref([])
const loading = ref(true)
const platformIcons = {
linux: 'pi-desktop',
windows: 'pi-desktop',
mac: 'pi-desktop',
android: 'pi-mobile',
ios: 'pi-mobile',
}
function platformIcon(platform) {
return platformIcons[platform] || 'pi-download'
}
function formatSize(bytes) {
if (!bytes) return ''
const units = ['B', 'KB', 'MB', 'GB']
let i = 0; let size = bytes
while (size >= 1024 && i < units.length - 1) { size /= 1024; i++ }
return `${size.toFixed(i > 0 ? 1 : 0)} ${units[i]}`
}
function downloadClient(client) {
window.location.href = `/api/clients/${client.platform}/download`
}
onMounted(async () => {
try {
const res = await axios.get('/api/clients')
clients.value = res.data.clients
} catch { clients.value = [] }
loading.value = false
})
</script>
<style scoped>
.clients-container {
min-height: 100vh; display: flex; align-items: center; justify-content: center;
background: var(--p-surface-50); padding: 1rem;
}
.clients-card {
background: var(--p-surface-0); border-radius: 12px; padding: 2.5rem;
max-width: 700px; width: 100%; box-shadow: 0 2px 12px rgba(0,0,0,0.08);
}
.clients-header { text-align: center; margin-bottom: 2rem; }
.clients-header h1 { font-size: 1.5rem; margin: 0.5rem 0 0.25rem; }
.clients-header p { color: var(--p-text-muted-color); margin: 0; }
.clients-grid { display: grid; grid-template-columns: repeat(auto-fill, minmax(200px, 1fr)); gap: 1rem; }
.client-card {
border: 1px solid var(--p-surface-200); border-radius: 8px; padding: 1.25rem; text-align: center;
}
.client-icon { font-size: 2rem; color: var(--p-primary-color); margin-bottom: 0.5rem; }
.client-card h3 { margin: 0 0 0.25rem; font-size: 1rem; }
.client-meta { font-size: 0.8rem; color: var(--p-text-muted-color); margin: 0 0 1rem; }
.loading, .empty { text-align: center; padding: 2rem; color: var(--p-text-muted-color); }
.clients-footer { text-align: center; margin-top: 1.5rem; font-size: 0.875rem; }
.clients-footer a { color: var(--p-primary-color); text-decoration: none; }
</style>
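The `formatSize` helper in the component above is self-contained; restated standalone for illustration:

```javascript
// Human-readable sizes: integer byte counts, one decimal from KB upwards.
// The unit list stops at GB, so terabyte files show as large GB values.
function formatSize(bytes) {
  if (!bytes) return ''
  const units = ['B', 'KB', 'MB', 'GB']
  let i = 0
  let size = bytes
  while (size >= 1024 && i < units.length - 1) { size /= 1024; i++ }
  return `${size.toFixed(i > 0 ? 1 : 0)} ${units[i]}`
}
```

So `formatSize(1536)` yields `"1.5 KB"` and `formatSize(500)` stays `"500 B"`; a falsy byte count (including 0) renders as an empty string, which suits the client list where size may be unknown.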
File diff suppressed because it is too large
+226 -14
@@ -22,7 +22,11 @@
<div class="header-actions">
<Button icon="pi pi-folder-plus" label="Neuer Ordner" size="small" outlined @click="showNewFolder = true" />
<Button icon="pi pi-upload" label="Dateien" size="small" @click="triggerUpload" />
<Button icon="pi pi-folder" label="Ordner" size="small" outlined @click="triggerFolderUpload" />
<Button size="small" outlined @click="triggerFolderUpload">
<i class="pi pi-upload" style="margin-right:0.35rem"></i>
<i class="pi pi-folder" style="margin-right:0.5rem"></i>
Ordner
</Button>
<input ref="fileInput" type="file" multiple hidden @change="handleUpload" />
<input ref="folderInput" type="file" hidden webkitdirectory @change="handleFolderUpload" />
</div>
@@ -58,6 +62,9 @@
<i :class="fileIcon(data)" class="file-icon"></i>
<span>{{ data.name }}</span>
<Tag v-if="data.shared" value="Geteilt" severity="info" class="shared-tag" />
<span v-if="data.locked" class="lock-badge" :title="'Ausgecheckt von ' + data.locked_by + ' seit ' + formatDate(data.locked_at)">
<i class="pi pi-lock"></i> {{ data.locked_by }}
</span>
</div>
</template>
</Column>
@@ -91,6 +98,7 @@
@click.stop="downloadFile(data)"
/>
<Button
v-if="canShare(data)"
:icon="(data.has_shares || data.has_permissions) ? 'pi pi-users' : 'pi pi-share-alt'"
text rounded size="small"
:severity="(data.has_shares || data.has_permissions) ? 'success' : undefined"
@@ -98,14 +106,33 @@
@click.stop="openShare(data)"
/>
<Button
v-if="!data.is_folder && !data.locked"
icon="pi pi-lock-open"
text rounded size="small"
title="Auschecken (sperren)"
@click.stop="lockFile(data)"
/>
<Button
v-if="!data.is_folder && data.locked && (data.locked_by === auth.user?.username || auth.user?.role === 'admin')"
icon="pi pi-lock"
text rounded size="small"
severity="warn"
:title="data.locked_by === auth.user?.username ? 'Einchecken (entsperren)' : 'Lock zwangsweise entfernen (Admin)'"
@click.stop="unlockFile(data)"
/>
<Button
v-if="canWrite(data)"
icon="pi pi-pencil"
text rounded size="small"
:disabled="data.locked && data.locked_by !== auth.user?.username"
@click.stop="openRename(data)"
/>
<Button
v-if="canWrite(data)"
icon="pi pi-trash"
text rounded size="small"
severity="danger"
:disabled="data.locked && data.locked_by !== auth.user?.username"
@click.stop="confirmDelete(data)"
/>
</div>
@@ -147,9 +174,15 @@
<h5>Mit Benutzer teilen</h5>
<div class="user-share-row">
<InputText v-model="shareUserQuery" placeholder="Benutzername suchen..." fluid @input="searchUsers" />
<Select v-model="shareUserPermission" :options="userPermOptions" optionLabel="label" optionValue="value" />
<Select v-model="shareUserPermission" :options="availableUserPermOptions" optionLabel="label" optionValue="value" />
<label class="reshare-check">
<input type="checkbox" v-model="shareUserReshare" /> darf weiterteilen
</label>
<Button label="Teilen" size="small" @click="shareWithUser" :disabled="!selectedShareUser" />
</div>
<div v-if="!isOwner(shareFile) && shareFile" class="share-hint">
Du hast {{ myPermLabel(shareFile) }} - du kannst maximal {{ myPermLabel(shareFile) }} weiterteilen.
</div>
<div v-if="userSearchResults.length" class="user-search-results">
<div v-for="u in userSearchResults" :key="u.id"
class="user-result" :class="{ selected: selectedShareUser?.id === u.id }"
@@ -158,12 +191,26 @@
</div>
</div>
<div v-if="filePermissions.length" class="existing-shares">
<div v-for="perm in filePermissions" :key="perm.id" class="share-perm-item">
<i class="pi pi-user"></i>
<span>{{ perm.username }}</span>
<Tag :value="permLabel(perm.permission)" size="small" />
<Button icon="pi pi-trash" text size="small" severity="danger" @click="removeUserShare(perm.id)" />
</div>
<template v-for="perm in filePermissions" :key="perm.id">
<div v-if="editingPermId !== perm.id" class="share-perm-item">
<i class="pi pi-user"></i>
<span>{{ perm.username }}</span>
<Tag :value="permLabel(perm.permission)" size="small" />
<Tag v-if="perm.can_reshare" value="darf weiterteilen" severity="info" size="small" />
<Button icon="pi pi-pencil" text size="small" @click="startEditPerm(perm)" title="Bearbeiten" />
<Button icon="pi pi-trash" text size="small" severity="danger" @click="removeUserShare(perm.id)" title="Entfernen" />
</div>
<div v-else class="share-perm-item editing">
<i class="pi pi-user"></i>
<span>{{ perm.username }}</span>
<Select v-model="editPermValue" :options="availableUserPermOptions" optionLabel="label" optionValue="value" />
<label class="reshare-check">
<input type="checkbox" v-model="editPermReshare" /> darf weiterteilen
</label>
<Button icon="pi pi-check" text size="small" severity="success" @click="saveEditPerm(perm)" title="Speichern" />
<Button icon="pi pi-times" text size="small" @click="cancelEditPerm" title="Abbrechen" />
</div>
</template>
</div>
</div>
@@ -173,7 +220,7 @@
<div class="share-form">
<div class="field">
<label>Berechtigung</label>
<Select v-model="shareLinkPermission" :options="linkPermOptions" optionLabel="label" optionValue="value" fluid />
<Select v-model="shareLinkPermission" :options="availableLinkPermOptions" optionLabel="label" optionValue="value" fluid />
</div>
<div class="field">
<label>Passwort (optional)</label>
@@ -221,8 +268,9 @@
</template>
<script setup>
import { ref, watch, onMounted } from 'vue'
import { ref, computed, watch, onMounted, onUnmounted } from 'vue'
import { useRoute, useRouter } from 'vue-router'
import { useAuthStore } from '../stores/auth'
import { useFilesStore } from '../stores/files'
import { useToast } from 'primevue/usetoast'
import apiClient from '../api/client'
@@ -238,6 +286,7 @@ import ProgressBar from 'primevue/progressbar'
const route = useRoute()
const router = useRouter()
const auth = useAuthStore()
const filesStore = useFilesStore()
const toast = useToast()
@@ -262,6 +311,10 @@ const filePermissions = ref([])
const shareUserQuery = ref('')
const selectedShareUser = ref(null)
const shareUserPermission = ref('read')
const shareUserReshare = ref(false)
const editingPermId = ref(null)
const editPermValue = ref('read')
const editPermReshare = ref(false)
const userSearchResults = ref([])
const userPermOptions = [{ label: 'Lesen', value: 'read' }, { label: 'Schreiben', value: 'write' }, { label: 'Admin', value: 'admin' }]
const linkPermOptions = [
@@ -269,6 +322,12 @@ const linkPermOptions = [
{ label: 'Lesen + Hochladen (nur Ordner)', value: 'write' },
{ label: 'Nur Upload (Ordner, kein Einblick)', value: 'upload_only' },
]
const availableLinkPermOptions = computed(() => {
const f = shareFile.value
if (!f || isOwner(f)) return linkPermOptions
if (f.my_permission === 'read') return linkPermOptions.filter(o => o.value === 'read')
return linkPermOptions
})
const shareLinkPermission = ref('read')
const currentOrigin = window.location.origin
const shareLoading = ref(false)
@@ -298,6 +357,15 @@ function handleDoubleClick(event) {
}
function openPreview(data) {
if (data.locked && data.locked_by !== auth.user?.username) {
toast.add({
severity: 'warn',
summary: 'Datei gesperrt',
detail: `${data.name} wird von ${data.locked_by} bearbeitet. Oeffnen nicht moeglich.`,
life: 5000,
})
return
}
const previewable = /\.(pdf|docx?|xlsx?|pptx?|txt|md|json|xml|csv|py|js|html|css|yml|yaml|png|jpe?g|gif|svg|webp|bmp|odt|ods|odp|rtf)$/i
if (previewable.test(data.name)) {
router.push(`/preview/${data.id}`)
@@ -539,6 +607,37 @@ function permLabel(perm) {
return { read: 'Lesen', write: 'Schreiben', admin: 'Admin' }[perm] || perm
}
function isOwner(data) {
return data && data.owner_id === auth.user?.id
}
function canWrite(data) {
if (!data) return false
if (isOwner(data)) return true
return data.my_permission === 'write' || data.my_permission === 'admin'
}
function canShare(data) {
if (!data) return false
if (isOwner(data)) return true
return !!data.my_can_reshare
}
function myPermLabel(data) {
if (!data || !data.my_permission) return ''
return permLabel(data.my_permission)
}
// Option list for the "Mit Benutzer teilen" dropdown - re-sharers can only
// hand out permissions up to their own level. Admin is owner-only.
const availableUserPermOptions = computed(() => {
const f = shareFile.value
const levels = { read: 0, write: 1, admin: 2 }
if (!f || isOwner(f)) return userPermOptions
const myLevel = levels[f.my_permission] ?? -1
return userPermOptions.filter(o => levels[o.value] <= myLevel && o.value !== 'admin')
})
async function openShare(data) {
shareFile.value = data
sharePassword.value = ''
@@ -580,10 +679,12 @@ async function shareWithUser() {
await apiClient.post(`/files/${shareFile.value.id}/permissions`, {
user_id: selectedShareUser.value.id,
permission: shareUserPermission.value,
can_reshare: shareUserReshare.value,
})
toast.add({ severity: 'success', summary: `Mit ${selectedShareUser.value.username} geteilt`, life: 3000 })
shareUserQuery.value = ''
selectedShareUser.value = null
shareUserReshare.value = false
const res = await apiClient.get(`/files/${shareFile.value.id}/permissions`)
filePermissions.value = res.data
await filesStore.loadFiles(currentParentId())
@@ -603,6 +704,34 @@ async function removeUserShare(permId) {
}
}
function startEditPerm(perm) {
editingPermId.value = perm.id
editPermValue.value = perm.permission
editPermReshare.value = !!perm.can_reshare
}
function cancelEditPerm() {
editingPermId.value = null
}
async function saveEditPerm(perm) {
if (!shareFile.value) return
try {
await apiClient.post(`/files/${shareFile.value.id}/permissions`, {
user_id: perm.user_id,
permission: editPermValue.value,
can_reshare: editPermReshare.value,
})
const res = await apiClient.get(`/files/${shareFile.value.id}/permissions`)
filePermissions.value = res.data
editingPermId.value = null
toast.add({ severity: 'success', summary: 'Berechtigung aktualisiert', life: 2500 })
await filesStore.loadFiles(currentParentId())
} catch (err) {
toast.add({ severity: 'error', summary: 'Fehler', detail: err.response?.data?.error || err.message, life: 5000 })
}
}
async function createShare() {
console.log('createShare called, shareFile:', shareFile.value?.id, 'permission:', shareLinkPermission.value)
if (!shareFile.value) {
@@ -645,6 +774,28 @@ async function removeShare(token) {
}
}
async function lockFile(data) {
try {
await apiClient.post(`/files/${data.id}/lock`, { client_info: 'Web-GUI' })
toast.add({ severity: 'success', summary: 'Ausgecheckt', detail: `${data.name} ist jetzt fuer dich gesperrt.`, life: 3000 })
await filesStore.loadFiles(currentParentId())
} catch (err) {
toast.add({ severity: 'error', summary: 'Sperren fehlgeschlagen', detail: err.response?.data?.error || err.message, life: 5000 })
}
}
async function unlockFile(data) {
const isAdminOverride = data.locked_by !== auth.user?.username
if (isAdminOverride && !confirm(`Den Lock von ${data.locked_by} zwangsweise entfernen?`)) return
try {
await apiClient.post(`/files/${data.id}/unlock`)
toast.add({ severity: 'success', summary: 'Eingecheckt', detail: `${data.name} ist wieder frei.`, life: 3000 })
await filesStore.loadFiles(currentParentId())
} catch (err) {
toast.add({ severity: 'error', summary: 'Entsperren fehlgeschlagen', detail: err.response?.data?.error || err.message, life: 5000 })
}
}
function confirmDelete(data) {
deleteTarget.value = data
showDeleteConfirm.value = true
@@ -661,12 +812,64 @@ async function doDelete() {
}
}
async function safeLoadCurrentFolder() {
try {
await filesStore.loadFiles(currentParentId())
} catch (err) {
const status = err.response?.status
if (status === 403 || status === 404) {
toast.add({
severity: 'warn',
summary: 'Kein Zugriff',
detail: 'Dieser Ordner wurde geloescht oder die Freigabe wurde entfernt.',
life: 5000,
})
// Redirect to root after short delay so user sees the toast
setTimeout(() => router.push('/files'), 600)
}
}
}
watch(() => route.params.folderId, () => {
filesStore.loadFiles(currentParentId())
safeLoadCurrentFolder()
})
// Live updates: subscribe to server-sent events so that lock changes /
// uploads / deletions by other users or clients refresh the current
// folder automatically.
let eventSource = null
let reloadDebounce = null
function scheduleReload() {
if (reloadDebounce) return
reloadDebounce = setTimeout(() => {
reloadDebounce = null
safeLoadCurrentFolder()
}, 300)
}
onMounted(() => {
filesStore.loadFiles(currentParentId())
safeLoadCurrentFolder()
if (auth.accessToken) {
const url = `/api/sync/events?token=${encodeURIComponent(auth.accessToken)}`
try {
eventSource = new EventSource(url)
const handler = () => scheduleReload()
// Any event from the server triggers a reload. onmessage alone would miss
// typed events (event: file), so explicit listeners are registered for
// both the typed 'file' event and plain untyped messages.
eventSource.addEventListener('file', handler)
eventSource.addEventListener('message', handler)
eventSource.addEventListener('open', () => scheduleReload())
eventSource.onerror = () => { /* browser auto-reconnects */ }
} catch { /* SSE not available - fall back to manual refresh */ }
}
})
onUnmounted(() => {
if (reloadDebounce) { clearTimeout(reloadDebounce); reloadDebounce = null }
if (eventSource) { eventSource.close(); eventSource = null }
})
</script>
@@ -706,6 +909,12 @@ onMounted(() => {
}
.file-icon { font-size: 1.125rem; width: 1.25rem; text-align: center; }
.shared-tag { font-size: 0.7rem; }
.lock-badge {
display: inline-flex; align-items: center; gap: 0.25rem;
font-size: 0.7rem; color: var(--p-orange-600); background: var(--p-orange-50);
padding: 0.125rem 0.375rem; border-radius: 4px; margin-left: 0.25rem;
}
.lock-badge i { font-size: 0.65rem; }
.row-actions { display: flex; gap: 0; }
.empty-state {
display: flex; flex-direction: column; align-items: center;
@@ -718,12 +927,15 @@ onMounted(() => {
.share-section:last-child { border-bottom: none; }
.share-section h5 { margin: 0 0 0.75rem; font-size: 0.9rem; }
.share-form { }
.user-share-row { display: flex; gap: 0.5rem; align-items: flex-start; }
.user-share-row { display: flex; gap: 0.5rem; align-items: center; flex-wrap: wrap; }
.reshare-check { display: flex; align-items: center; gap: 0.25rem; font-size: 0.8rem; white-space: nowrap; }
.share-hint { font-size: 0.75rem; color: var(--p-surface-500); margin-top: 0.35rem; font-style: italic; }
.user-search-results { border: 1px solid var(--p-surface-200); border-radius: 6px; margin-top: 0.25rem; max-height: 150px; overflow-y: auto; }
.user-result { padding: 0.5rem 0.75rem; cursor: pointer; display: flex; align-items: center; gap: 0.5rem; font-size: 0.875rem; }
.user-result:hover, .user-result.selected { background: var(--p-primary-50); }
.existing-shares { margin-top: 0.5rem; }
.share-perm-item { display: flex; align-items: center; gap: 0.5rem; padding: 0.375rem 0; font-size: 0.875rem; }
.share-perm-item { display: flex; align-items: center; gap: 0.5rem; padding: 0.375rem 0; font-size: 0.875rem; flex-wrap: wrap; }
.share-perm-item.editing { background: var(--p-surface-50); padding: 0.5rem; border-radius: 4px; }
.share-link-item {
display: flex; justify-content: space-between; align-items: center;
padding: 0.5rem 0; border-bottom: 1px solid var(--p-surface-100);
+8

@@ -46,6 +46,9 @@
<div v-if="registrationAllowed" class="auth-footer">
<router-link to="/register">Noch kein Konto? Registrieren</router-link>
</div>
<div v-if="hasClients" class="auth-footer">
<router-link to="/clients"><i class="pi pi-download"></i> Desktop & Mobile Clients herunterladen</router-link>
</div>
</div>
</div>
</template>
@@ -68,12 +71,17 @@ const password = ref('')
const error = ref('')
const loading = ref(false)
const registrationAllowed = ref(false)
const hasClients = ref(false)
onMounted(async () => {
try {
const res = await axios.get('/api/auth/registration-status')
registrationAllowed.value = res.data.allowed
} catch { registrationAllowed.value = false }
try {
const res = await axios.get('/api/clients')
hasClients.value = res.data.has_clients
} catch { hasClients.value = false }
})
async function handleLogin() {
+58 -1
@@ -29,9 +29,22 @@
<InputText v-model="searchQuery" placeholder="Passwoerter suchen..." fluid />
</div>
<div v-if="filteredEntries.length" class="selection-bar">
<Checkbox v-model="allSelected" :binary="true" @change="toggleSelectAll" inputId="select-all" />
<label for="select-all" class="select-all-label">
Alle auswaehlen
<span v-if="selectedIds.length" class="selected-count">({{ selectedIds.length }} ausgewaehlt)</span>
</label>
<Button v-if="selectedIds.length" icon="pi pi-trash" :label="`${selectedIds.length} loeschen`"
severity="danger" size="small" @click="deleteSelected" />
</div>
<div class="entries-list">
<div v-for="entry in filteredEntries" :key="entry.id"
class="entry-item" @click="openEntry(entry)">
class="entry-item" :class="{ selected: selectedIds.includes(entry.id) }"
@click="openEntry(entry)">
<Checkbox :modelValue="selectedIds.includes(entry.id)" :binary="true"
@click.stop @update:modelValue="toggleSelect(entry.id)" />
<div class="entry-icon">
<i class="pi pi-key"></i>
</div>
@@ -166,6 +179,7 @@ import InputText from 'primevue/inputtext'
import Password from 'primevue/password'
import Textarea from 'primevue/textarea'
import Select from 'primevue/select'
import Checkbox from 'primevue/checkbox'
const toast = useToast()
const auth = useAuthStore()
@@ -200,6 +214,45 @@ const importAccept = computed(() => {
const showTotpDialog = ref(false)
const totpCode = ref('')
const selectedIds = ref([])
const allSelected = computed({
get: () => filteredEntries.value.length > 0 && filteredEntries.value.every(e => selectedIds.value.includes(e.id)),
set: () => {},
})
function toggleSelectAll() {
const visibleIds = filteredEntries.value.map(e => e.id)
const allSel = visibleIds.every(id => selectedIds.value.includes(id))
if (allSel) {
selectedIds.value = selectedIds.value.filter(id => !visibleIds.includes(id))
} else {
const set = new Set([...selectedIds.value, ...visibleIds])
selectedIds.value = [...set]
}
}
function toggleSelect(id) {
const i = selectedIds.value.indexOf(id)
if (i >= 0) selectedIds.value.splice(i, 1)
else selectedIds.value.push(id)
}
async function deleteSelected() {
const n = selectedIds.value.length
if (!n) return
if (!window.confirm(`${n} Eintrag/Eintraege wirklich loeschen?`)) return
let ok = 0
for (const id of [...selectedIds.value]) {
try {
await apiClient.delete(`/passwords/entries/${id}`)
ok++
} catch { /* skip */ }
}
selectedIds.value = []
toast.add({ severity: 'success', summary: `${ok} Eintrag/Eintraege geloescht`, life: 3000 })
await loadEntries()
}
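`deleteSelected` above deletes sequentially so it can count per-item failures with a simple loop. A concurrent variant (an alternative sketch, not what the diff ships) gets the same accounting from `Promise.allSettled`; the `apiClient` here is a stub standing in for the app's axios instance:

```javascript
// Stubbed client: in the app this is the shared axios apiClient.
// The '/3' failure case is invented for illustration.
const apiClient = {
  delete: async (path) =>
    path.endsWith('/3') ? Promise.reject(new Error('locked')) : { status: 204 },
}

// Fire all deletes at once; allSettled never rejects, so each
// outcome can be tallied without per-item try/catch.
async function deleteSelected(ids) {
  const results = await Promise.allSettled(
    ids.map(id => apiClient.delete(`/passwords/entries/${id}`))
  )
  const ok = results.filter(r => r.status === 'fulfilled').length
  return { ok, failed: results.length - ok }
}
```

The sequential loop in the diff is gentler on the server for large selections; the concurrent form is faster when the backend tolerates parallel writes.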
const folderOptions = computed(() => [{ id: null, name: '(Kein Ordner)' }, ...folders.value])
const filteredEntries = computed(() => {
if (!searchQuery.value) return entries.value
@@ -491,6 +544,10 @@ onMounted(async () => {
.shared-label { color: var(--p-text-muted-color); font-size: 0.75rem; }
.entries-main { flex: 1; }
.search-bar { margin-bottom: 1rem; }
.selection-bar { display: flex; align-items: center; gap: 0.75rem; padding: 0.5rem 0.75rem; margin-bottom: 0.5rem; background: var(--p-surface-50); border-radius: 6px; }
.select-all-label { font-size: 0.875rem; cursor: pointer; flex: 1; }
.selected-count { color: var(--p-text-muted-color); margin-left: 0.5rem; }
.entry-item.selected { background: var(--p-primary-50); }
.entries-list { display: flex; flex-direction: column; gap: 2px; }
.entry-item { display: flex; align-items: center; gap: 0.75rem; padding: 0.75rem; background: var(--p-surface-0); border-radius: 6px; cursor: pointer; }
.entry-item:hover { background: var(--p-surface-100); }
+4 -3
@@ -104,6 +104,7 @@ const auth = useAuthStore()
const toast = useToast()
const fileId = route.params.fileId
const cacheBust = Date.now()
const fileName = ref('')
const previewType = ref('')
const previewUrl = ref('')
@@ -135,12 +136,12 @@ async function loadPreview() {
loading.value = true
try {
// For Office files, try OnlyOffice first
const previewRes = await apiClient.get(`/files/${fileId}/preview`)
const previewRes = await apiClient.get(`/files/${fileId}/preview?_=${cacheBust}`)
fileName.value = previewRes.data.name || ''
if (isOfficeFile(fileName.value)) {
try {
const ooRes = await apiClient.get(`/files/${fileId}/onlyoffice-config`)
const ooRes = await apiClient.get(`/files/${fileId}/onlyoffice-config?_=${cacheBust}`)
if (ooRes.data.available) {
onlyOfficeMode.value = true
loading.value = false
@@ -156,7 +157,7 @@ async function loadPreview() {
previewType.value = data.type
if (data.type === 'pdf' || data.type === 'image') {
previewUrl.value = getTokenUrl(`/api/files/${fileId}/download`)
previewUrl.value = getTokenUrl(`/api/files/${fileId}/download?inline=1`)
canEdit.value = false
} else if (data.type === 'html') {
htmlContent.value = data.content
+95 -5
@@ -12,15 +12,31 @@
<span class="label">Benutzername:</span>
<span>{{ auth.user?.username }}</span>
</div>
<div class="info-row">
<span class="label">E-Mail:</span>
<span>{{ auth.user?.email || 'Nicht angegeben' }}</span>
</div>
<div class="info-row">
<span class="label">Rolle:</span>
<Tag :value="auth.user?.role" :severity="auth.user?.role === 'admin' ? 'danger' : 'info'" />
</div>
</div>
<p class="hint" style="margin:0.75rem 0 0.5rem;font-size:0.8rem;color:var(--p-text-muted-color)">
Vor- und Nachname werden anderen Benutzern angezeigt, wenn du etwas mit ihnen teilst.
</p>
<form @submit.prevent="saveProfile" class="profile-form">
<div class="field-row">
<div class="field">
<label>Vorname</label>
<InputText v-model="profile.first_name" fluid />
</div>
<div class="field">
<label>Nachname</label>
<InputText v-model="profile.last_name" fluid />
</div>
</div>
<div class="field">
<label>E-Mail</label>
<InputText v-model="profile.email" type="email" fluid />
</div>
<Button type="submit" label="Profil speichern" :loading="profileLoading" size="small" />
</form>
</div>
<!-- Change Password -->
@@ -45,6 +61,24 @@
</form>
</div>
<!-- Client Downloads -->
<div v-if="availableClients.length" class="settings-section">
<h3>Desktop & Mobile Clients</h3>
<div class="client-list">
<div v-for="client in availableClients" :key="client.platform" class="client-item">
<div class="client-info">
<i :class="'pi ' + (client.platform === 'linux' || client.platform === 'windows' || client.platform === 'mac' ? 'pi-desktop' : 'pi-mobile')"></i>
<div>
<strong>{{ client.name }}</strong>
<span class="client-meta">{{ client.filename }}</span>
</div>
</div>
<Button icon="pi pi-download" :label="'Download'" size="small" outlined
@click="downloadClient(client)" />
</div>
</div>
</div>
<!-- Email Accounts -->
<div class="settings-section">
<div class="section-header">
@@ -167,6 +201,43 @@ import InputSwitch from 'primevue/inputswitch'
const auth = useAuthStore()
const toast = useToast()
// Client downloads
const availableClients = ref([])
function downloadClient(client) {
window.location.href = `/api/clients/${client.platform}/download`
}
// --- Profile (Vorname/Nachname/E-Mail) ---
const profile = ref({ first_name: '', last_name: '', email: '' })
const profileLoading = ref(false)
async function loadProfile() {
try {
const res = await apiClient.get('/auth/me')
profile.value = {
first_name: res.data.first_name || '',
last_name: res.data.last_name || '',
email: res.data.email || '',
}
auth.user = { ...auth.user, ...res.data }
} catch { /* ignore */ }
}
async function saveProfile() {
profileLoading.value = true
try {
const res = await apiClient.put('/auth/me', profile.value)
auth.user = { ...auth.user, ...res.data }
toast.add({ severity: 'success', summary: 'Profil gespeichert', life: 2500 })
} catch (err) {
toast.add({ severity: 'error', summary: 'Fehler',
detail: err.response?.data?.error || err.message, life: 4000 })
} finally {
profileLoading.value = false
}
}
// --- Password change ---
const currentPassword = ref('')
const newPassword = ref('')
@@ -307,7 +378,14 @@ async function doDeleteAccount() {
}
}
onMounted(loadAccounts)
onMounted(async () => {
loadAccounts()
loadProfile()
try {
const res = await apiClient.get('/clients')
availableClients.value = res.data.clients
} catch { availableClients.value = [] }
})
</script>
<style scoped>
@@ -321,6 +399,10 @@ onMounted(loadAccounts)
.section-header h3 { margin: 0; }
.settings-info { display: flex; flex-direction: column; gap: 0.5rem; }
.info-row { display: flex; align-items: center; gap: 0.5rem; }
.profile-form { display: flex; flex-direction: column; gap: 0.5rem; max-width: 540px; }
.profile-form .field-row { display: flex; gap: 0.75rem; }
.profile-form .field-row .field { flex: 1; }
.profile-form .field label { display: block; font-size: 0.8rem; margin-bottom: 0.25rem; }
.info-row .label { font-weight: 500; min-width: 120px; }
.password-form { max-width: 400px; }
.password-form .field { margin-bottom: 1rem; }
@@ -346,4 +428,12 @@ onMounted(loadAccounts)
.field label { display: block; margin-bottom: 0.5rem; font-weight: 500; font-size: 0.875rem; }
.field-row { display: flex; gap: 0.75rem; align-items: flex-end; }
.flex-grow { flex: 1; }
.client-list { display: flex; flex-direction: column; gap: 0.5rem; }
.client-item {
display: flex; align-items: center; justify-content: space-between;
padding: 0.75rem; border: 1px solid var(--p-surface-200); border-radius: 8px;
}
.client-info { display: flex; align-items: center; gap: 0.75rem; }
.client-info i { font-size: 1.25rem; color: var(--p-primary-color); }
.client-meta { display: block; font-size: 0.8rem; color: var(--p-text-muted-color); }
</style>
+773
View File
@@ -0,0 +1,773 @@
<template>
<div class="view-container">
<div class="view-header">
<h2>Aufgaben</h2>
<div class="header-actions">
<Button icon="pi pi-list" label="Neue Liste" size="small" outlined @click="showNewList = true" />
<Button icon="pi pi-upload" label="Import" size="small" outlined @click="triggerImport" />
<input ref="importInput" type="file" accept=".ics,.ical,.csv" hidden @change="onImportFile" />
<Button icon="pi pi-download" label="Export" size="small" outlined
:disabled="!selectedListId" @click="showExportDialog = true" />
<Button icon="pi pi-plus" label="Neue Aufgabe" size="small"
:disabled="!writableLists.length" @click="openNewTask" />
</div>
</div>
<div class="tasks-layout">
<aside class="lists-sidebar">
<h4>Listen</h4>
<div v-for="tl in lists" :key="tl.id"
class="list-item" :class="{ active: selectedListId === tl.id }"
@click="selectedListId = tl.id">
<span class="list-color" :style="{ background: tl.color }"></span>
<span class="list-name">{{ tl.name }}</span>
<span v-if="tl.permission !== 'owner'" class="shared-label"
:title="`Geteilt von ${tl.owner_display_name || tl.owner_name}`">
(geteilt von {{ tl.owner_display_name || tl.owner_name }})
</span>
<span class="count">{{ tl.task_count }}</span>
<Button icon="pi pi-ellipsis-v" text size="small" class="list-menu"
@click.stop="openListMenu(tl)" />
</div>
</aside>
<div class="tasks-main">
<div class="toolbar">
<InputText v-model="search" placeholder="Aufgaben suchen..." fluid />
<label class="toggle"><Checkbox v-model="hideDone" :binary="true" /> Erledigte ausblenden</label>
</div>
<div v-if="selectedTaskIds.length" class="bulk-bar">
<span>{{ selectedTaskIds.length }} ausgewaehlt</span>
<Button icon="pi pi-trash" :label="`${selectedTaskIds.length} loeschen`"
severity="danger" size="small" @click="bulkDelete" />
<Button label="Auswahl aufheben" size="small" text @click="selectedTaskIds = []" />
</div>
<table class="task-table">
<thead>
<tr>
<th class="col-check">
<Checkbox v-model="allSelected" :binary="true" @change="toggleAll" />
</th>
<th class="col-done"></th>
<th>Titel</th>
<th>Faellig</th>
<th>Prio</th>
<th>Status</th>
<th></th>
</tr>
</thead>
<tbody>
<tr v-for="t in filteredTasks" :key="t.id" class="task-row"
:class="{ done: t.status === 'COMPLETED', selected: selectedTaskIds.includes(t.id) }"
@click="openEditTask(t)">
<td class="col-check" @click.stop>
<Checkbox :modelValue="selectedTaskIds.includes(t.id)" :binary="true"
@update:modelValue="toggleSelect(t.id, $event)" />
</td>
<td class="col-done" @click.stop>
<Checkbox :modelValue="t.status === 'COMPLETED'" :binary="true"
@update:modelValue="toggleDone(t, $event)" title="Erledigt" />
</td>
<td class="col-title">
<span>{{ t.summary || '(ohne Titel)' }}</span>
<small v-if="t.description" class="meta">{{ shortDesc(t.description) }}</small>
</td>
<td class="col-date">{{ formatDue(t.due) }}</td>
<td>{{ formatPrio(t.priority) }}</td>
<td><span class="status-badge" :class="statusClass(t.status)">{{ statusLabel(t.status) }}</span></td>
<td class="col-actions" @click.stop>
<Button icon="pi pi-trash" text size="small" severity="danger" @click="confirmDelete(t)" />
</td>
</tr>
<tr v-if="!filteredTasks.length">
<td colspan="7" class="empty-row">Keine Aufgaben.</td>
</tr>
</tbody>
</table>
</div>
</div>
<!-- New List Dialog -->
<Dialog v-model:visible="showNewList" header="Neue Aufgabenliste" modal :style="{ width: '400px' }">
<div class="field">
<label>Name</label>
<InputText v-model="newListName" fluid autofocus @keyup.enter="createList" />
</div>
<div class="field">
<label>Farbe</label>
<InputText v-model="newListColor" type="color" style="width: 60px; height: 36px" />
</div>
<template #footer>
<Button label="Abbrechen" text @click="showNewList = false" />
<Button label="Erstellen" @click="createList" />
</template>
</Dialog>
<!-- List Menu -->
<Dialog v-model:visible="showListMenu" header="Listen-Optionen" modal :style="{ width: '480px' }">
<div v-if="menuList">
<div class="rename-row">
<template v-if="!isRenaming">
<strong>{{ menuList.name }}</strong>
<Button v-if="menuList.permission === 'owner'"
icon="pi pi-pencil" text size="small" title="Umbenennen"
@click="startRename" />
</template>
<template v-else>
<InputText v-model="renameValue" fluid autofocus
@keyup.enter="saveRename" @keyup.escape="isRenaming = false" />
<Button icon="pi pi-check" text size="small" severity="success"
title="Speichern" @click="saveRename" />
<Button icon="pi pi-times" text size="small"
title="Abbrechen" @click="isRenaming = false" />
</template>
</div>
<div class="field">
<label>Farbe</label>
<InputText :modelValue="menuList.color" @change="onListColor($event)" type="color" style="width:60px; height:36px" />
</div>
<div v-if="menuList.permission === 'owner'" class="field">
<label>Mit Benutzer teilen</label>
<div class="share-row">
<div style="position: relative; flex: 1;">
<InputText v-model="shareUsername" placeholder="Benutzername suchen..."
fluid @input="onShareSearch" />
<div v-if="shareSearchResults.length" class="user-search-popup">
<div v-for="u in shareSearchResults" :key="u.id" class="user-result"
@click="shareUsername = u.username; shareSearchResults = []">
<i class="pi pi-user"></i>
<span>{{ u.username }}</span>
<small v-if="u.full_name" class="user-fullname">{{ u.full_name }}</small>
</div>
</div>
</div>
<Select v-model="sharePermission" :options="permOptions" optionLabel="label" optionValue="value" />
<Button label="Teilen" size="small" @click="doShare" />
</div>
<div v-if="listShares.length" class="existing-shares">
<template v-for="s in listShares" :key="s.id">
<div v-if="editingShareId !== s.id" class="share-perm-item">
<i class="pi pi-user"></i> <span>{{ s.username }}</span>
<span class="perm-label">{{ s.permission === 'readwrite' ? 'Lesen+Schreiben' : 'Lesen' }}</span>
<Button icon="pi pi-pencil" text size="small" title="Bearbeiten" @click="startEditShare(s)" />
<Button icon="pi pi-trash" text size="small" severity="danger" title="Entfernen" @click="removeShare(s.id)" />
</div>
<div v-else class="share-perm-item editing">
<i class="pi pi-user"></i> <span>{{ s.username }}</span>
<Select v-model="editSharePermission" :options="permOptions" optionLabel="label" optionValue="value" />
<Button icon="pi pi-check" text size="small" severity="success" title="Speichern" @click="saveEditShare(s)" />
<Button icon="pi pi-times" text size="small" title="Abbrechen" @click="editingShareId = null" />
</div>
</template>
</div>
</div>
<div v-if="menuList.permission === 'owner'" class="field" style="border-top:1px solid var(--p-surface-200); padding-top:1rem">
<Button label="Liste loeschen" severity="danger" outlined size="small" @click="confirmDeleteList = true" />
</div>
<div class="field" style="border-top:1px solid var(--p-surface-200); padding-top:1rem">
<label><i class="pi pi-info-circle"></i> CalDAV-Zugang (Handy / DAVx5)</label>
<div class="caldav-hint">In DAVx5 unter demselben Konto sichtbar wie Kalender. Aufgabenlisten sind mit "OpenTasks" synchronisierbar.</div>
<div class="url-row">
<strong>Listen-URL:</strong>
<code>{{ origin }}/dav/{{ username }}/tl-{{ menuList.id }}/</code>
<Button icon="pi pi-copy" text size="small" @click="copy(`${origin}/dav/${username}/tl-${menuList.id}/`)" />
</div>
</div>
</div>
</Dialog>
<!-- Task Dialog -->
<Dialog v-model:visible="showTaskDialog" :header="editingTaskId ? 'Aufgabe bearbeiten' : 'Neue Aufgabe'"
modal :style="{ width: '560px' }">
<div v-if="writableLists.length > 1" class="field">
<label>Liste</label>
<Select v-model="taskTargetListId" :options="writableListOptions"
optionLabel="label" optionValue="id" fluid />
</div>
<div class="field">
<label>Titel</label>
<InputText v-model="taskForm.summary" fluid autofocus />
</div>
<div class="field">
<label>Beschreibung</label>
<Textarea v-model="taskForm.description" rows="3" fluid />
</div>
<div class="field-row">
<div class="field">
<label>Faellig</label>
<InputText v-model="taskForm.due" type="datetime-local" fluid />
</div>
<div class="field">
<label>Status</label>
<Select v-model="taskForm.status" :options="statusOptions" optionLabel="label" optionValue="value" fluid />
</div>
</div>
<div class="field-row">
<div class="field">
<label>Prioritaet</label>
<Select v-model="taskForm.priority" :options="prioOptions" optionLabel="label" optionValue="value" fluid />
</div>
<div class="field">
<label>Fortschritt %</label>
<InputText v-model.number="taskForm.percent_complete" type="number" min="0" max="100" fluid />
</div>
</div>
<div class="field">
<label>Kategorien (kommagetrennt)</label>
<InputText v-model="taskForm.categories" fluid />
</div>
<template #footer>
<Button v-if="editingTaskId" label="Loeschen" text severity="danger" @click="deleteCurrent" />
<Button label="Abbrechen" text @click="showTaskDialog = false" />
<Button :label="editingTaskId ? 'Speichern' : 'Erstellen'" @click="saveTask" />
</template>
</Dialog>
<Dialog v-model:visible="confirmDeleteList" header="Liste loeschen" modal :style="{ width: '400px' }">
<p>Liste <strong>{{ menuList?.name }}</strong> mit allen Aufgaben loeschen?</p>
<template #footer>
<Button label="Abbrechen" text @click="confirmDeleteList = false" />
<Button label="Loeschen" severity="danger" @click="deleteList" />
</template>
</Dialog>
<!-- Export Dialog -->
<Dialog v-model:visible="showExportDialog" header="Aufgaben exportieren" modal :style="{ width: '420px' }">
<p>Aus Liste <strong>{{ currentList?.name }}</strong></p>
<div class="field">
<label>Format</label>
<Select v-model="exportFormat" :options="exportFormats" optionLabel="label" optionValue="value" fluid />
</div>
<template #footer>
<Button label="Abbrechen" text @click="showExportDialog = false" />
<Button label="Herunterladen" icon="pi pi-download" @click="doExport" />
</template>
</Dialog>
</div>
</template>
<script setup>
import { ref, reactive, computed, onMounted, onUnmounted, watch } from 'vue'
import { useToast } from 'primevue/usetoast'
import { useAuthStore } from '../stores/auth'
import apiClient from '../api/client'
import Button from 'primevue/button'
import Dialog from 'primevue/dialog'
import InputText from 'primevue/inputtext'
import Textarea from 'primevue/textarea'
import Select from 'primevue/select'
import Checkbox from 'primevue/checkbox'
const toast = useToast()
const auth = useAuthStore()
const origin = computed(() => window.location.origin)
const username = computed(() => auth.user?.username || '')
const lists = ref([])
const selectedListId = ref(null)
const taskTargetListId = ref(null)
const writableLists = computed(() =>
lists.value.filter(l => l.permission === 'owner' || l.permission === 'readwrite')
)
const writableListOptions = computed(() => writableLists.value.map(l => ({
...l,
label: l.permission === 'owner'
? l.name
: `${l.name} (geteilt von ${l.owner_display_name || l.owner_name})`,
})))
const tasks = ref([])
const search = ref('')
const hideDone = ref(false)
const selectedTaskIds = ref([])
const showNewList = ref(false)
const newListName = ref('')
const newListColor = ref('#10b981')
const showListMenu = ref(false)
const menuList = ref(null)
const shareUsername = ref('')
const sharePermission = ref('read')
const listShares = ref([])
const shareSearchResults = ref([])
const editingShareId = ref(null)
const editSharePermission = ref('read')
const isRenaming = ref(false)
const renameValue = ref('')
function startRename() {
renameValue.value = menuList.value?.name || ''
isRenaming.value = true
}
async function saveRename() {
const newName = renameValue.value.trim()
if (!newName || !menuList.value || newName === menuList.value.name) {
isRenaming.value = false
return
}
try {
await apiClient.put(`/tasklists/${menuList.value.id}`, { name: newName })
menuList.value.name = newName
isRenaming.value = false
await loadLists()
toast.add({ severity: 'success', summary: 'Umbenannt', life: 2000 })
} catch (err) {
toast.add({ severity: 'error', summary: 'Fehler',
detail: err.response?.data?.error || err.message, life: 4000 })
}
}
let shareSearchTimer = null
function startEditShare(s) {
editingShareId.value = s.id
editSharePermission.value = s.permission
}
async function saveEditShare(s) {
if (!menuList.value) return
try {
await apiClient.post(`/tasklists/${menuList.value.id}/share`, {
username: s.username,
permission: editSharePermission.value,
})
editingShareId.value = null
await loadShares()
toast.add({ severity: 'success', summary: 'Berechtigung aktualisiert', life: 2500 })
} catch (err) {
toast.add({ severity: 'error', summary: 'Fehler',
detail: err.response?.data?.error || err.message, life: 4000 })
}
}
function onShareSearch() {
clearTimeout(shareSearchTimer)
const q = shareUsername.value.trim()
if (q.length < 2) { shareSearchResults.value = []; return }
shareSearchTimer = setTimeout(async () => {
try {
const res = await apiClient.get('/users/search', { params: { q } })
shareSearchResults.value = res.data
} catch { shareSearchResults.value = [] }
}, 250)
}
const permOptions = [
{ label: 'Lesen', value: 'read' },
{ label: 'Lesen+Schreiben', value: 'readwrite' },
]
const confirmDeleteList = ref(false)
const showTaskDialog = ref(false)
const editingTaskId = ref(null)
const taskForm = reactive({
summary: '', description: '',
due: '', status: 'NEEDS-ACTION', priority: null, percent_complete: null,
categories: '',
})
const statusOptions = [
{ label: 'Offen', value: 'NEEDS-ACTION' },
{ label: 'In Arbeit', value: 'IN-PROCESS' },
{ label: 'Erledigt', value: 'COMPLETED' },
{ label: 'Abgebrochen', value: 'CANCELLED' },
]
const prioOptions = [
{ label: '—', value: null },
{ label: 'Hoch (1)', value: 1 },
{ label: 'Mittel (5)', value: 5 },
{ label: 'Niedrig (9)', value: 9 },
]
const showExportDialog = ref(false)
const exportFormat = ref('ics')
const exportFormats = [
{ label: 'iCalendar (.ics)', value: 'ics' },
{ label: 'CSV (.csv)', value: 'csv' },
]
const importInput = ref(null)
const currentList = computed(() => lists.value.find(l => l.id === selectedListId.value))
const filteredTasks = computed(() => {
const q = search.value.trim().toLowerCase()
return tasks.value.filter(t => {
if (hideDone.value && t.status === 'COMPLETED') return false
if (q && !(t.summary || '').toLowerCase().includes(q)
&& !(t.description || '').toLowerCase().includes(q)) return false
return true
})
})
const allSelected = computed({
get: () => filteredTasks.value.length > 0 && filteredTasks.value.every(t => selectedTaskIds.value.includes(t.id)),
set: () => {},
})
function toggleAll() {
const ids = filteredTasks.value.map(t => t.id)
const allSel = ids.every(id => selectedTaskIds.value.includes(id))
if (allSel) selectedTaskIds.value = selectedTaskIds.value.filter(id => !ids.includes(id))
else {
const set = new Set(selectedTaskIds.value); ids.forEach(id => set.add(id))
selectedTaskIds.value = [...set]
}
}
function toggleSelect(id, checked) {
if (checked && !selectedTaskIds.value.includes(id)) selectedTaskIds.value = [...selectedTaskIds.value, id]
else if (!checked) selectedTaskIds.value = selectedTaskIds.value.filter(x => x !== id)
}
function shortDesc(s) { return s.length > 80 ? s.slice(0, 80) + '…' : s }
function formatDue(d) {
if (!d) return ''
return new Date(d).toLocaleString('de-DE', { day: '2-digit', month: '2-digit', year: 'numeric', hour: '2-digit', minute: '2-digit' })
}
function formatPrio(p) {
if (p === null || p === undefined) return ''
if (p <= 3) return 'Hoch'
if (p >= 7) return 'Niedrig'
return 'Mittel'
}
function statusLabel(s) {
return ({ 'NEEDS-ACTION': 'Offen', 'IN-PROCESS': 'In Arbeit', 'COMPLETED': 'Erledigt', 'CANCELLED': 'Abgebrochen' })[s] || 'Offen'
}
function statusClass(s) {
return { 'NEEDS-ACTION': 'todo', 'IN-PROCESS': 'progress', 'COMPLETED': 'done', 'CANCELLED': 'cancelled' }[s] || 'todo'
}
async function loadLists() {
const res = await apiClient.get('/tasklists')
lists.value = res.data
if (!selectedListId.value && lists.value.length) selectedListId.value = lists.value[0].id
if (!lists.value.length) {
await apiClient.post('/tasklists', { name: 'Meine Aufgaben', color: '#10b981' })
await loadLists()
}
}
async function loadTasks() {
if (!selectedListId.value) { tasks.value = []; return }
try {
const res = await apiClient.get(`/tasklists/${selectedListId.value}/tasks`)
tasks.value = res.data
} catch { tasks.value = [] }
}
async function createList() {
if (!newListName.value.trim()) return
await apiClient.post('/tasklists', { name: newListName.value.trim(), color: newListColor.value })
showNewList.value = false
newListName.value = ''
await loadLists()
}
function openListMenu(tl) {
menuList.value = tl
shareUsername.value = ''
shareSearchResults.value = []
isRenaming.value = false
showListMenu.value = true
loadShares()
}
async function loadShares() {
if (!menuList.value || menuList.value.permission !== 'owner') { listShares.value = []; return }
try {
const res = await apiClient.get(`/tasklists/${menuList.value.id}/shares`)
listShares.value = res.data
} catch { listShares.value = [] }
}
async function doShare() {
if (!menuList.value || !shareUsername.value.trim()) return
try {
await apiClient.post(`/tasklists/${menuList.value.id}/share`, {
username: shareUsername.value.trim(), permission: sharePermission.value,
})
toast.add({ severity: 'success', summary: 'Geteilt', life: 2500 })
shareUsername.value = ''
shareSearchResults.value = []
await loadShares()
} catch (err) {
toast.add({ severity: 'error', summary: err.response?.data?.error || 'Fehler', life: 4000 })
}
}
async function removeShare(id) {
await apiClient.delete(`/tasklists/${menuList.value.id}/shares/${id}`)
await loadShares()
}
async function onListColor(ev) {
const color = ev.target.value
await apiClient.put(`/tasklists/${menuList.value.id}/my-color`, { color })
menuList.value.color = color
await loadLists()
}
async function deleteList() {
if (!menuList.value) return
await apiClient.delete(`/tasklists/${menuList.value.id}`)
confirmDeleteList.value = false
showListMenu.value = false
if (selectedListId.value === menuList.value.id) selectedListId.value = null
await loadLists()
await loadTasks()
}
function openNewTask() {
if (!writableLists.value.length) {
toast.add({ severity: 'warn', summary: 'Keine beschreibbare Liste', life: 3000 })
return
}
editingTaskId.value = null
Object.assign(taskForm, {
summary: '', description: '', due: '',
status: 'NEEDS-ACTION', priority: null, percent_complete: null,
categories: '',
})
// Default list: the currently selected one if writable, otherwise the first writable one
const sel = writableLists.value.find(l => l.id === selectedListId.value)
taskTargetListId.value = sel ? sel.id : writableLists.value[0].id
showTaskDialog.value = true
}
function openEditTask(t) {
editingTaskId.value = t.id
Object.assign(taskForm, {
summary: t.summary || '',
description: t.description || '',
due: t.due ? t.due.slice(0, 16) : '',
status: t.status || 'NEEDS-ACTION',
priority: t.priority,
percent_complete: t.percent_complete,
categories: (t.categories || []).join(', '),
})
showTaskDialog.value = true
}
async function saveTask() {
if (!taskForm.summary.trim()) return
const payload = {
summary: taskForm.summary.trim(),
description: taskForm.description,
due: taskForm.due ? new Date(taskForm.due).toISOString() : null,
status: taskForm.status,
priority: taskForm.priority,
percent_complete: taskForm.percent_complete,
categories: taskForm.categories.split(',').map(s => s.trim()).filter(Boolean),
}
try {
if (editingTaskId.value) {
await apiClient.put(`/tasks/${editingTaskId.value}`, payload)
} else {
const target = taskTargetListId.value || selectedListId.value
if (!target) {
toast.add({ severity: 'error', summary: 'Bitte Liste waehlen', life: 3000 })
return
}
await apiClient.post(`/tasklists/${target}/tasks`, payload)
}
showTaskDialog.value = false
await loadLists()
await loadTasks()
} catch (err) {
toast.add({ severity: 'error', summary: 'Fehler', detail: err.response?.data?.error, life: 4000 })
}
}
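The payload builder in `saveTask` normalizes the free-text categories field with a split/trim/filter chain. Extracted into a standalone helper (the name `parseCategories` is an assumption; the app inlines the expression):

```javascript
// Turn a comma-separated input into trimmed, non-empty category names -
// the same expression saveTask() applies to taskForm.categories.
function parseCategories(input) {
  return input
    .split(',')
    .map(s => s.trim())
    .filter(Boolean) // drops '' left by stray or trailing commas
}

parseCategories('home, , work ,') // → ['home', 'work']
```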
async function toggleDone(t, checked) {
try {
await apiClient.put(`/tasks/${t.id}`, { status: checked ? 'COMPLETED' : 'NEEDS-ACTION' })
await loadTasks()
} catch (err) {
toast.add({ severity: 'error', summary: 'Fehler', life: 3000 })
}
}
async function deleteCurrent() {
if (!editingTaskId.value) return
if (!confirm('Aufgabe wirklich loeschen?')) return
await apiClient.delete(`/tasks/${editingTaskId.value}`)
showTaskDialog.value = false
await loadLists()
await loadTasks()
}
async function confirmDelete(t) {
if (!confirm(`"${t.summary || '(ohne Titel)'}" loeschen?`)) return
await apiClient.delete(`/tasks/${t.id}`)
await loadLists()
await loadTasks()
}
async function bulkDelete() {
const ids = [...selectedTaskIds.value]
if (!ids.length || !confirm(`${ids.length} Aufgabe(n) loeschen?`)) return
let ok = 0, fail = 0
for (const id of ids) {
try { await apiClient.delete(`/tasks/${id}`); ok++ } catch { fail++ }
}
selectedTaskIds.value = []
toast.add({
severity: fail ? 'warn' : 'success',
summary: `${ok} geloescht${fail ? `, ${fail} fehlgeschlagen` : ''}`, life: 3000,
})
await loadLists()
await loadTasks()
}
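bulkDelete issues its DELETE calls one after another, which keeps server load predictable but costs one round trip per task. For larger selections the requests could also be fired concurrently and tallied with Promise.allSettled. A minimal sketch under that assumption (bulkDeleteConcurrent is a name invented here; it expects the same apiClient interface):

```javascript
// Concurrent variant of the sequential loop above: fire all DELETEs at
// once, wait for every promise to settle, then count successes/failures.
async function bulkDeleteConcurrent(apiClient, ids) {
  const results = await Promise.allSettled(
    ids.map((id) => apiClient.delete(`/tasks/${id}`))
  )
  const ok = results.filter((r) => r.status === 'fulfilled').length
  return { ok, fail: results.length - ok }
}
```

The sequential form in the component is the safer default for a small self-hosted server; the concurrent form trades that gentleness for latency.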
function triggerImport() {
if (!selectedListId.value) {
toast.add({ severity: 'warn', summary: 'Keine Liste ausgewaehlt', life: 3000 })
return
}
importInput.value?.click()
}
async function onImportFile(ev) {
const file = ev.target.files?.[0]
ev.target.value = ''
if (!file) return
const fd = new FormData()
fd.append('file', file)
try {
const res = await apiClient.post(`/tasklists/${selectedListId.value}/import`, fd,
{ headers: { 'Content-Type': 'multipart/form-data' } })
toast.add({
severity: 'success',
summary: `${res.data.imported} importiert`,
detail: res.data.skipped ? `${res.data.skipped} uebersprungen` : undefined,
life: 4000,
})
await loadLists()
await loadTasks()
} catch (err) {
toast.add({ severity: 'error', summary: 'Import fehlgeschlagen', detail: err.response?.data?.error, life: 5000 })
}
}
async function doExport() {
if (!selectedListId.value) return
try {
const res = await apiClient.get(`/tasklists/${selectedListId.value}/export`,
{ params: { format: exportFormat.value }, responseType: 'blob' })
const ext = exportFormat.value === 'csv' ? 'csv' : 'ics'
const url = URL.createObjectURL(new Blob([res.data]))
const a = document.createElement('a')
a.href = url
a.download = `${currentList.value?.name || 'aufgaben'}.${ext}`
a.click()
URL.revokeObjectURL(url)
showExportDialog.value = false
} catch (err) {
toast.add({ severity: 'error', summary: 'Export fehlgeschlagen', life: 4000 })
}
}
function copy(text) {
// writeText returns a promise and can reject (permissions, non-secure context)
navigator.clipboard.writeText(text)
.then(() => toast.add({ severity: 'info', summary: 'Kopiert', life: 1500 }))
.catch(() => toast.add({ severity: 'error', summary: 'Kopieren fehlgeschlagen', life: 2000 }))
}
// --- Live refresh via SSE ---
let eventSource = null
let reloadTimer = null
function scheduleReload() {
if (reloadTimer) return
reloadTimer = setTimeout(async () => {
reloadTimer = null
await loadLists()
await loadTasks()
}, 300)
}
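scheduleReload above coalesces bursts of SSE events into a single reload: the first call arms a 300 ms timer and every further call inside that window is dropped. The same pattern as a standalone, reusable helper (a sketch; the name coalesce is made up here):

```javascript
// Coalescing scheduler: the first call arms a timer, subsequent calls
// within delayMs are ignored, and fn runs once when the timer fires.
function coalesce(fn, delayMs) {
  let timer = null
  return (...args) => {
    if (timer) return // already scheduled; drop this call
    timer = setTimeout(() => {
      timer = null
      fn(...args)
    }, delayMs)
  }
}
```

Note this differs from a classic trailing debounce: the timer is not reset on each call, so a steady event stream still triggers a reload every delayMs instead of starving forever.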
onMounted(async () => {
await loadLists()
await loadTasks()
if (auth.accessToken) {
try {
eventSource = new EventSource(`/api/sync/events?token=${encodeURIComponent(auth.accessToken)}`)
eventSource.addEventListener('tasklist', scheduleReload)
eventSource.addEventListener('message', scheduleReload)
eventSource.onerror = () => {} // EventSource reconnects on its own; just suppress console noise
} catch {}
}
})
onUnmounted(() => {
if (reloadTimer) clearTimeout(reloadTimer)
if (eventSource) eventSource.close()
})
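The browser's EventSource retries transient failures by itself, so the empty onerror handler above is tolerable, but it also hides permanent failures such as an expired token. A hedged sketch of a wrapper with a bounded retry counter; the constructor is injected so the pattern can be exercised outside a browser (openEventStream and EventSourceImpl are names invented here, not part of the component):

```javascript
// Bounded-retry wrapper around an EventSource-like constructor.
// On error: close the stream and reconnect with a growing delay,
// giving up after maxRetries consecutive failures.
function openEventStream(url, onEvent, EventSourceImpl, maxRetries = 5) {
  let retries = 0
  let es = null
  function connect() {
    es = new EventSourceImpl(url)
    es.onmessage = onEvent
    es.onopen = () => { retries = 0 } // healthy again: reset the counter
    es.onerror = () => {
      es.close()
      if (retries < maxRetries) setTimeout(connect, 1000 * ++retries)
    }
  }
  connect()
  return () => es && es.close() // caller invokes this on unmount
}
```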
watch(selectedListId, loadTasks)
</script>
<style scoped>
.view-container { padding: 1.5rem; }
.view-header { display: flex; justify-content: space-between; align-items: center; margin-bottom: 1rem; }
.view-header h2 { margin: 0; }
.header-actions { display: flex; gap: 0.5rem; }
.tasks-layout { display: flex; gap: 1rem; align-items: flex-start; }
.lists-sidebar { width: 260px; flex-shrink: 0; }
.lists-sidebar h4 { margin: 0 0 0.5rem; font-size: 0.85rem; text-transform: uppercase; color: var(--p-text-muted-color); }
.list-item { display: flex; align-items: center; gap: 0.5rem; padding: 0.5rem; border-radius: 4px;
cursor: pointer; font-size: 0.875rem; }
.list-item:hover { background: var(--p-surface-50); }
.list-item.active { background: var(--p-primary-50); }
.list-color { width: 12px; height: 12px; border-radius: 3px; flex-shrink: 0; }
.list-name { flex: 1; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.shared-label { color: var(--p-text-muted-color); font-size: 0.7rem; }
.count { color: var(--p-text-muted-color); font-size: 0.8rem; }
.list-menu { opacity: 0; transition: opacity .15s; }
.list-item:hover .list-menu { opacity: 1; }
.tasks-main { flex: 1; min-width: 0; }
.toolbar { display: flex; gap: 0.75rem; align-items: center; margin-bottom: 0.75rem; }
.toggle { display: flex; align-items: center; gap: 0.35rem; font-size: 0.875rem; white-space: nowrap; }
.bulk-bar { display: flex; gap: 0.5rem; align-items: center; padding: 0.5rem 0.75rem;
background: var(--p-primary-50); border-radius: 6px; margin-bottom: 0.5rem; font-size: 0.875rem; }
.task-table { width: 100%; border-collapse: collapse; font-size: 0.875rem; }
.task-table th { text-align: left; padding: 0.5rem; border-bottom: 2px solid var(--p-surface-200); font-weight: 600; }
.task-table td { padding: 0.5rem; border-bottom: 1px solid var(--p-surface-100); vertical-align: top; }
.task-row { cursor: pointer; }
.task-row:hover { background: var(--p-surface-50); }
.task-row.done .col-title span { text-decoration: line-through; color: var(--p-text-muted-color); }
.task-row.selected { background: var(--p-primary-50); }
.col-check, .col-done { width: 36px; }
.col-actions { width: 60px; text-align: right; }
.col-date { white-space: nowrap; }
.col-title { }
.meta { display: block; color: var(--p-text-muted-color); font-size: 0.75rem; margin-top: 0.1rem; }
.empty-row { text-align: center; color: var(--p-text-muted-color); padding: 2rem !important; }
.status-badge { display: inline-block; padding: 0.15rem 0.5rem; border-radius: 10px; font-size: 0.72rem; }
.status-badge.todo { background: var(--p-surface-100); }
.status-badge.progress { background: var(--p-blue-100); color: var(--p-blue-700); }
.status-badge.done { background: var(--p-green-100); color: var(--p-green-700); }
.status-badge.cancelled { background: var(--p-red-100); color: var(--p-red-700); }
.field { margin-bottom: 0.75rem; }
.field label { display: block; margin-bottom: 0.25rem; font-weight: 500; font-size: 0.875rem; }
.field-row { display: flex; gap: 0.75rem; }
.field-row .field { flex: 1; }
.share-row { display: flex; gap: 0.5rem; align-items: center; flex-wrap: wrap; }
.rename-row { display: flex; align-items: center; gap: 0.5rem; margin-bottom: 0.75rem; }
.rename-row strong { font-size: 1rem; }
.user-search-popup { position: absolute; top: 100%; left: 0; right: 0; z-index: 10;
background: white; border: 1px solid var(--p-surface-200);
border-radius: 4px; max-height: 160px; overflow-y: auto;
box-shadow: 0 4px 12px rgba(0,0,0,0.1); }
.user-result { padding: 0.5rem 0.75rem; cursor: pointer; font-size: 0.875rem;
display: flex; gap: 0.5rem; align-items: center; }
.user-result:hover { background: var(--p-primary-50); }
.user-fullname { color: var(--p-text-muted-color); font-size: 0.75rem; margin-left: auto; }
.existing-shares { margin-top: 0.5rem; }
.share-perm-item { display: flex; align-items: center; gap: 0.5rem; padding: 0.375rem 0; font-size: 0.875rem; flex-wrap: wrap; }
.share-perm-item.editing { background: var(--p-surface-50); padding: 0.5rem; border-radius: 4px; }
.perm-label { color: var(--p-text-muted-color); font-size: 0.75rem; }
.url-row { display: flex; gap: 0.5rem; align-items: center; flex-wrap: wrap; }
.url-row strong { min-width: 110px; font-size: 0.8rem; }
.url-row code { background: var(--p-surface-100); padding: 0.25rem 0.5rem; border-radius: 4px; font-size: 0.8rem; flex: 1; word-break: break-all; }
.caldav-hint { font-size: 0.8rem; color: var(--p-text-muted-color); margin: 0 0 0.5rem; }
</style>
@@ -24,15 +24,40 @@ server {
proxy_set_header Connection "upgrade";
}
# CalDAV/CardDAV needs special methods
# Server-Sent Events: buffering off, long read timeouts - otherwise the
# live-refresh connection drops after a few seconds.
location /api/sync/events {
proxy_pass http://127.0.0.1:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_buffering off;
proxy_cache off;
proxy_read_timeout 24h;
proxy_send_timeout 24h;
chunked_transfer_encoding on;
}
# CalDAV/CardDAV needs special methods (PROPFIND, REPORT, MKCALENDAR)
location /dav/ {
# Since 2017 nginx has allowed most WebDAV methods out of the box.
# Important: do not buffer the request body (PUT of larger ICS files) and
# forward the correct headers for HTTP Basic Auth.
proxy_pass http://127.0.0.1:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass_request_headers on;
proxy_request_buffering off;
client_max_body_size 50M;
}
location = /.well-known/caldav { return 301 https://$host/dav/; }
location = /.well-known/carddav { return 301 https://$host/dav/; }
}
# OnlyOffice Document Server (optional)