planner: add auto-detect for bridges and network cards

This commit is contained in:
duffyduck 2026-03-04 22:51:24 +01:00
parent aea2d82f27
commit fff5402226
5 changed files with 366 additions and 123 deletions

View File

@@ -6,11 +6,13 @@ Migrates a complete Proxmox cluster (incl. Ceph) from one network to another
## Features
- Automatic detection of all nodes, IPs, and configurations
- **Auto-detect** of all nodes, bridges, IPs, and networks
- Coordinated migration of all nodes in a single pass
- **Multi-NIC support** — automatically detects management, Ceph public, and Ceph cluster bridges
- Ceph support (public network, cluster network, MON addresses)
- Works even with **broken quorum** (e.g. when a node has already been changed manually)
- **Rescue network** — a temporary emergency network for when nodes can no longer reach each other
- SSH keys are saved before pve-cluster is stopped (prevents SSH disconnects)
- Automatic backups of all configurations before the migration
- Dry-run mode for risk-free testing
- Verification after the migration
@@ -77,22 +79,33 @@ The tool guides you through the process interactively:
[Ceph]
Public Network: 192.168.0.0/24
Cluster Network: 192.168.0.0/24
Cluster Network: 10.0.1.0/24
=== Phase 2: Migration planen ===
Neues Netzwerk (z.B. 172.0.2.0/16): 172.0.2.0/16
Neues Gateway [172.0.0.1]: 172.0.2.1
[Netzwerk-Erkennung]
vmbr0: Management/Corosync (192.168.0.0/24)
vmbr1: Ceph Cluster (10.0.1.0/24)
[IP-Mapping]
[Management-Netzwerk (Corosync)]
Aktuell: 192.168.0.0/24
Neues Management-Netzwerk (z.B. 172.0.2.0/16): 172.0.2.0/16
Neues Gateway [172.0.0.1]: 172.0.2.1
[Management IP-Mapping]
pve1: 192.168.0.101 -> [172.0.2.101]:
pve2: 192.168.0.102 -> [172.0.2.102]:
pve3: 192.168.0.103 -> [172.0.2.103]:
pve4: 192.168.0.104 -> [172.0.2.104]:
[Ceph Netzwerke]
Ceph Public: gleich wie Management -> wird automatisch mit umgezogen
Ceph Cluster (10.0.1.0/24) auf separater NIC -> eigenes Mapping
Migration durchführen? [j/N]: j
```
The tool automatically detects which bridges carry which networks.
If the Ceph public/cluster networks sit on separate NICs, the IPs are prompted for per node.
### Options
| Option | Description |
@@ -108,7 +121,7 @@ Migration durchführen? [j/N]: j
| File | Where | What |
|---|---|---|
| `/etc/network/interfaces` | Every node | Bridge IP, gateway |
| `/etc/network/interfaces` | Every node | All bridge IPs (management, Ceph), gateway |
| `/etc/hosts` | Every node | Hostname-to-IP mapping |
| `/etc/corosync/corosync.conf` | Every node | Corosync ring addresses |
| `/etc/pve/ceph.conf` | Cluster FS | public_network, cluster_network, MON addresses |
@@ -116,12 +129,13 @@ Migration durchführen? [j/N]: j
## Migration procedure (Phase 4)
1. New configurations are distributed to all nodes (staging)
2. Corosync is stopped on all nodes
3. pve-cluster (pmxcfs) is stopped
4. The Corosync config is written directly (`/etc/corosync/corosync.conf`)
5. `/etc/hosts` is updated
6. `/etc/network/interfaces` is updated + network reload (`ifreload -a`)
7. Services are started, quorum is awaited, Ceph is updated
2. SSH keys are saved (`/etc/pve/priv/authorized_keys` → `~/.ssh/authorized_keys`)
3. Corosync is stopped on all nodes
4. pve-cluster (pmxcfs) is stopped → `/etc/pve` is unmounted
5. The Corosync config is written directly (`/etc/corosync/corosync.conf`)
6. `/etc/hosts` is updated
7. `/etc/network/interfaces` is updated + `ifreload -a` (all bridges)
8. Services are started, quorum is awaited, Ceph is updated, SSH keys are cleaned up
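The phase-4 sequence is a fail-fast pipeline: each step runs only if the previous one succeeded. A minimal sketch of that control flow (hypothetical step names, not the tool's actual code):

```python
from typing import Callable

def run_steps(steps: list[tuple[str, Callable[[], bool]]]) -> bool:
    """Run migration steps in order; abort on the first failure."""
    total = len(steps)
    for i, (desc, action) in enumerate(steps, start=1):
        print(f"[{i}/{total}] {desc}...")
        if not action():
            return False  # fail fast, leave remaining steps untouched
    return True

# Stand-ins for the real steps (distribute configs, save SSH keys, ...):
ok = run_steps([
    ("Neue Konfigurationen verteilen", lambda: True),
    ("SSH-Keys sichern", lambda: True),
])
```

This matches the `[1/8] ... [8/8]` progress output shown by the migrator.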
## Rescue network (Emergency Mode)
@@ -244,6 +258,8 @@ systemctl restart corosync
- VMs/CTs are **not** migrated or stopped automatically — the network is changed while the cluster is running
- After the migration, VM networks (bridges in VM configs) should be checked in case they reference specific IPs
- The emergency IPs (`ip addr add`) are temporary and do not survive a reboot — they are only used for SSH communication during the migration
- Bridges are detected automatically — no manual input needed. All affected bridges (management, Ceph public, Ceph cluster) are updated via `ifreload -a`
- SSH keys are saved before pve-cluster is stopped and restored afterwards, so SSH does not drop during the migration
- Tested with Proxmox VE 7.x and 8.x
## Project structure
@@ -254,7 +270,7 @@ proxmox-cluster-network-changer/
├── discovery.py # Phase 1: read & parse cluster config
├── planner.py # Phase 2: IP mapping, generate new configs
├── backup.py # Phase 3: back up all configs
├── migrator.py # Phase 4: perform the migration (7 steps)
├── migrator.py # Phase 4: perform the migration (8 steps)
├── verifier.py # Phase 5: post-migration checks
├── rescue.py # rescue network (Emergency Mode)
├── ssh_manager.py # SSH connections (local + remote)

View File

@@ -254,6 +254,23 @@ def generate_network_interfaces(content: str, old_ip: str, new_ip: str,
return new_content
def generate_network_interfaces_multi(
content: str,
replacements: list[tuple[str, str, int, str | None, str | None]]
) -> str:
"""Update /etc/network/interfaces with multiple IP replacements.
Each replacement is: (old_ip, new_ip, new_cidr, old_gateway, new_gateway)
This handles multiple bridges/NICs (management, ceph public, ceph cluster).
"""
new_content = content
for old_ip, new_ip, new_cidr, old_gw, new_gw in replacements:
new_content = generate_network_interfaces(
new_content, old_ip, new_ip, new_cidr, new_gw, old_gw
)
return new_content
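The multi-replacement helper simply threads the file content through one substitution per bridge. A simplified, self-contained illustration of that idea (plain string handling instead of the real `generate_network_interfaces` logic; gateways are ignored here):

```python
def apply_replacements(content: str,
                       replacements: list[tuple[str, str, int]]) -> str:
    """Rewrite 'address <ip>/<cidr>' lines for several bridges in one pass.

    Simplified: each replacement is (old_ip, new_ip, new_cidr); the real
    helper also rewrites gateways.
    """
    out = []
    for line in content.splitlines():
        stripped = line.strip()
        for old_ip, new_ip, new_cidr in replacements:
            if stripped.startswith("address ") and old_ip in stripped:
                indent = line[: len(line) - len(line.lstrip())]
                line = f"{indent}address {new_ip}/{new_cidr}"
                break
        out.append(line)
    return "\n".join(out)

interfaces = """auto vmbr0
iface vmbr0 inet static
    address 192.168.0.101/24

auto vmbr1
iface vmbr1 inet static
    address 10.0.1.101/24
"""
# Management bridge and a ceph bridge, rewritten in a single call:
out = apply_replacements(interfaces, [
    ("192.168.0.101", "172.0.2.101", 16),
    ("10.0.1.101", "10.1.1.101", 24),
])
```

Chaining one replacement at a time keeps each substitution independent of the others, which is exactly what `generate_network_interfaces_multi` relies on.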
def generate_hosts(content: str, ip_mapping: dict[str, str]) -> str:
"""Update /etc/hosts with new IPs."""
new_content = content

View File

@@ -32,7 +32,7 @@ class Migrator:
return False
# Step 1: Write new configs to all nodes (but don't activate yet)
print("[1/7] Neue Konfigurationen verteilen...")
print("[1/8] Neue Konfigurationen verteilen...")
if not self._distribute_configs(plan, configs, dry_run):
return False
@@ -318,14 +318,15 @@ class Migrator:
print(f" [{node.name}] FEHLER: {err}")
return False
# Reload network - use ifreload if available, otherwise ifdown/ifup
# Reload network - ifreload -a reloads ALL interfaces
rc, _, _ = self.ssh.run_on_node(
node.ssh_host, "which ifreload", node.is_local
)
if rc == 0:
reload_cmd = "ifreload -a"
else:
reload_cmd = f"ifdown {plan.bridge_name} && ifup {plan.bridge_name}"
# Fallback: restart networking service
reload_cmd = "systemctl restart networking"
print(f" [{node.name}] Netzwerk wird neu geladen ({reload_cmd})...")
rc, _, err = self.ssh.run_on_node(

View File

@@ -20,14 +20,17 @@ class NetworkInterface:
class NodeInfo:
"""Represents a single Proxmox node."""
name: str # e.g. pve1
current_ip: str # current IP address
new_ip: Optional[str] = None # planned new IP
current_ip: str # current corosync/management IP
new_ip: Optional[str] = None # planned new management IP
ssh_host: Optional[str] = None # how to reach it (IP or hostname)
is_local: bool = False # is this the node we're running on
is_reachable: bool = False
interfaces: list[NetworkInterface] = field(default_factory=list)
hosts_content: str = ""
network_interfaces_content: str = ""
# Extra IP mappings for this node (e.g. ceph public/cluster on separate NICs)
# {old_ip: new_ip}
extra_ip_mapping: dict[str, str] = field(default_factory=dict)
@dataclass
@@ -56,7 +59,7 @@ class CephConfig:
public_network: str = "" # e.g. 192.168.0.0/24
cluster_network: str = "" # e.g. 192.168.0.0/24
mon_hosts: list[str] = field(default_factory=list)
mon_sections: dict[str, dict[str, str]] = field(default_factory=dict) # [mon.pve1] -> {key: val}
mon_sections: dict[str, dict[str, str]] = field(default_factory=dict)
raw_content: str = ""
@@ -73,4 +76,5 @@ class MigrationPlan:
ceph_config: Optional[CephConfig] = None
dry_run: bool = False
quorum_available: bool = True
bridge_name: str = "vmbr0" # which bridge to modify
# Detected bridges: {bridge_name: subnet}
detected_bridges: dict[str, str] = field(default_factory=dict)

View File

@@ -4,7 +4,7 @@ import ipaddress
from models import NodeInfo, CorosyncConfig, CephConfig, MigrationPlan
from config_parser import (
generate_corosync_conf, generate_ceph_conf,
generate_network_interfaces, generate_hosts,
generate_network_interfaces_multi, generate_hosts,
)
@@ -23,90 +23,36 @@ class Planner:
print("\n=== Phase 2: Migration planen ===\n")
# Get new network
plan.new_network = self._ask_new_network()
# Auto-detect bridges and networks
self._detect_bridges(plan, ceph)
# Get new management network
print("[Management-Netzwerk (Corosync)]")
if plan.old_network:
print(f" Aktuell: {plan.old_network}")
plan.new_network = self._ask_new_network(
" Neues Management-Netzwerk (z.B. 172.0.2.0/16): "
)
if not plan.new_network:
return None
new_net = ipaddress.ip_network(plan.new_network, strict=False)
plan.new_gateway = self._ask_gateway(new_net)
# Detect old network from first node
if nodes:
old_ip = ipaddress.ip_address(nodes[0].current_ip)
# Try to find matching interface
for node in nodes:
for iface in node.interfaces:
if iface.address == str(old_ip) or (
iface.address and iface.cidr and
ipaddress.ip_address(iface.address) in
ipaddress.ip_network(f'{iface.address}/{iface.cidr}', strict=False) and
old_ip in ipaddress.ip_network(f'{iface.address}/{iface.cidr}', strict=False)
):
plan.old_network = str(ipaddress.ip_network(
f'{iface.address}/{iface.cidr}', strict=False
))
plan.bridge_name = iface.name
break
if plan.old_network:
break
# Fallback: try to guess from corosync IPs
if not plan.old_network:
# Find common network from all corosync node IPs
for cidr_guess in [24, 16, 8]:
net = ipaddress.ip_network(
f'{nodes[0].current_ip}/{cidr_guess}', strict=False
)
if all(ipaddress.ip_address(n.current_ip) in net for n in nodes):
plan.old_network = str(net)
break
if plan.old_network:
print(f" Erkanntes altes Netzwerk: {plan.old_network}")
else:
print(" [!] Altes Netzwerk konnte nicht erkannt werden")
# Generate IP mapping suggestions
print("\n[IP-Mapping]")
print("Für jeden Node wird eine neue IP benötigt.\n")
# Management IP mapping
print("\n[Management IP-Mapping]")
print(" Für jeden Node wird eine neue Management-IP benötigt.\n")
for node in nodes:
suggested_ip = self._suggest_new_ip(node.current_ip, plan.new_network)
print(f" {node.name}: {node.current_ip} -> ", end="")
user_input = input(f"[{suggested_ip}]: ").strip()
if user_input:
node.new_ip = user_input
else:
node.new_ip = suggested_ip
node.new_ip = user_input or suggested_ip
print(f" => {node.new_ip}")
# Ceph network planning
if ceph:
print("\n[Ceph Netzwerke]")
print(f" Aktuelles Public Network: {ceph.public_network}")
print(f" Aktuelles Cluster Network: {ceph.cluster_network}")
default_ceph_net = plan.new_network
user_input = input(
f"\n Neues Ceph Public Network [{default_ceph_net}]: "
).strip()
plan.ceph_new_public_network = user_input or default_ceph_net
user_input = input(
f" Neues Ceph Cluster Network [{plan.ceph_new_public_network}]: "
).strip()
plan.ceph_new_cluster_network = user_input or plan.ceph_new_public_network
# Which bridge to modify
print(f"\n[Bridge]")
user_input = input(
f" Welche Bridge soll geändert werden? [{plan.bridge_name}]: "
).strip()
if user_input:
plan.bridge_name = user_input
self._plan_ceph(plan, nodes, ceph)
# Show preview
self._show_preview(plan)
@@ -119,12 +65,220 @@ class Planner:
return plan
def _ask_new_network(self) -> str | None:
"""Ask for the new network."""
def _detect_bridges(self, plan: MigrationPlan, ceph: CephConfig | None):
"""Auto-detect which bridges carry which networks."""
nodes = plan.nodes
if not nodes:
return
print("[Netzwerk-Erkennung]")
# Find management bridge (carries corosync IP)
for node in nodes:
if not node.interfaces:
continue
for iface in node.interfaces:
if not iface.address or not iface.cidr:
continue
try:
iface_net = ipaddress.ip_network(
f'{iface.address}/{iface.cidr}', strict=False
)
mgmt_ip = ipaddress.ip_address(node.current_ip)
if mgmt_ip in iface_net:
plan.old_network = str(iface_net)
plan.detected_bridges[iface.name] = str(iface_net)
print(f" {iface.name}: Management/Corosync ({iface_net})")
break
except ValueError:
continue
if plan.old_network:
break
# Fallback for old_network
if not plan.old_network and nodes:
for cidr_guess in [24, 16, 8]:
net = ipaddress.ip_network(
f'{nodes[0].current_ip}/{cidr_guess}', strict=False
)
if all(
ipaddress.ip_address(n.current_ip) in net for n in nodes
):
plan.old_network = str(net)
print(f" Management-Netzwerk (geschätzt): {net}")
break
if not plan.old_network:
print(" [!] Management-Netzwerk nicht erkannt")
# Find ceph bridges (if on separate NICs)
if ceph and nodes:
for node in nodes:
if not node.interfaces:
continue
for iface in node.interfaces:
if not iface.address or not iface.cidr:
continue
if iface.name in plan.detected_bridges:
continue
try:
iface_net = ipaddress.ip_network(
f'{iface.address}/{iface.cidr}', strict=False
)
label = None
if ceph.public_network:
ceph_pub = ipaddress.ip_network(
ceph.public_network, strict=False
)
if iface_net.overlaps(ceph_pub):
label = "Ceph Public"
if ceph.cluster_network:
ceph_cls = ipaddress.ip_network(
ceph.cluster_network, strict=False
)
if iface_net.overlaps(ceph_cls):
label = (label + " + Cluster") if label else "Ceph Cluster"
if label:
plan.detected_bridges[iface.name] = str(iface_net)
print(f" {iface.name}: {label} ({iface_net})")
except ValueError:
continue
if len(plan.detected_bridges) > 1:
break
if not plan.detected_bridges:
print(" Keine Bridges erkannt (Interfaces nicht lesbar?)")
print()
def _plan_ceph(self, plan: MigrationPlan, nodes: list[NodeInfo],
ceph: CephConfig):
"""Plan Ceph network changes, handling separate networks."""
ceph_pub_net = ceph.public_network
ceph_cls_net = ceph.cluster_network
# Check if ceph networks differ from management
ceph_public_same = True
ceph_cluster_same = True
if ceph_pub_net and plan.old_network:
try:
ceph_pub = ipaddress.ip_network(ceph_pub_net, strict=False)
mgmt = ipaddress.ip_network(plan.old_network, strict=False)
ceph_public_same = ceph_pub.overlaps(mgmt)
except ValueError:
pass
if ceph_cls_net and plan.old_network:
try:
ceph_cls = ipaddress.ip_network(ceph_cls_net, strict=False)
mgmt = ipaddress.ip_network(plan.old_network, strict=False)
ceph_cluster_same = ceph_cls.overlaps(mgmt)
except ValueError:
pass
print("\n[Ceph Netzwerke]")
print(f" Aktuelles Public Network: {ceph_pub_net}")
print(f" Aktuelles Cluster Network: {ceph_cls_net}")
if ceph_public_same and ceph_cluster_same:
print(" -> Ceph nutzt das gleiche Netzwerk wie Management")
print(" Wird automatisch mit umgezogen.\n")
default_pub = plan.new_network
user_input = input(
f" Neues Ceph Public Network [{default_pub}]: "
).strip()
plan.ceph_new_public_network = user_input or default_pub
user_input = input(
f" Neues Ceph Cluster Network [{plan.ceph_new_public_network}]: "
).strip()
plan.ceph_new_cluster_network = user_input or plan.ceph_new_public_network
else:
# Separate Ceph networks
if not ceph_public_same:
print(f"\n [!] Ceph Public Network ({ceph_pub_net}) liegt auf"
f" separater NIC!")
pub_net = self._ask_new_network(
f" Neues Ceph Public Network: "
)
plan.ceph_new_public_network = pub_net or plan.new_network
self._ask_ceph_ips(
nodes, ceph, plan.ceph_new_public_network,
"Public", ceph_pub_net
)
else:
plan.ceph_new_public_network = plan.new_network
if not ceph_cluster_same:
if ceph_cls_net != ceph_pub_net:
print(f"\n [!] Ceph Cluster Network ({ceph_cls_net}) liegt auf"
f" separater NIC!")
cls_net = self._ask_new_network(
f" Neues Ceph Cluster Network: "
)
plan.ceph_new_cluster_network = cls_net or plan.ceph_new_public_network
self._ask_ceph_ips(
nodes, ceph, plan.ceph_new_cluster_network,
"Cluster", ceph_cls_net
)
else:
plan.ceph_new_cluster_network = plan.ceph_new_public_network
else:
plan.ceph_new_cluster_network = plan.new_network
def _ask_ceph_ips(self, nodes: list[NodeInfo], ceph: CephConfig,
new_network: str, network_type: str,
old_network: str):
"""Ask for per-node Ceph IPs when on a separate network."""
print(f"\n [Ceph {network_type} IP-Mapping]")
print(f" Altes Netz: {old_network} -> Neues Netz: {new_network}\n")
old_net = ipaddress.ip_network(old_network, strict=False)
for node in nodes:
# Find the node's current IP on this ceph network
old_ceph_ip = None
for iface in node.interfaces:
if not iface.address:
continue
try:
if ipaddress.ip_address(iface.address) in old_net:
old_ceph_ip = iface.address
break
except ValueError:
continue
if not old_ceph_ip:
# Try MON hosts
for mon_ip in ceph.mon_hosts:
try:
if ipaddress.ip_address(mon_ip) in old_net:
old_ceph_ip = mon_ip
break
except ValueError:
continue
if not old_ceph_ip:
print(f" {node.name}: Keine {network_type}-IP gefunden,"
f" übersprungen")
continue
suggested = self._suggest_new_ip(old_ceph_ip, new_network)
print(f" {node.name}: {old_ceph_ip} -> ", end="")
user_input = input(f"[{suggested}]: ").strip()
new_ceph_ip = user_input or suggested
print(f" => {new_ceph_ip}")
node.extra_ip_mapping[old_ceph_ip] = new_ceph_ip
def _ask_new_network(self, prompt: str) -> str | None:
"""Ask for a new network."""
while True:
network = input("Neues Netzwerk (z.B. 172.0.2.0/16): ").strip()
network = input(prompt).strip()
if not network:
print("Abgebrochen.")
print(" Abgebrochen.")
return None
try:
ipaddress.ip_network(network, strict=False)
@@ -134,9 +288,8 @@ class Planner:
def _ask_gateway(self, network: ipaddress.IPv4Network) -> str:
"""Ask for the gateway in the new network."""
# Suggest first usable IP as gateway
suggested = str(list(network.hosts())[0])
user_input = input(f"Neues Gateway [{suggested}]: ").strip()
user_input = input(f" Neues Gateway [{suggested}]: ").strip()
return user_input or suggested
def _suggest_new_ip(self, old_ip: str, new_network: str) -> str:
@@ -144,40 +297,58 @@ class Planner:
old = ipaddress.ip_address(old_ip)
new_net = ipaddress.ip_network(new_network, strict=False)
# Keep the last octet(s) from the old IP
old_host = int(old) & 0xFF # last octet
old_host = int(old) & 0xFF
if new_net.prefixlen <= 16:
# For /16 or bigger, keep last two octets
old_host = int(old) & 0xFFFF
new_ip = ipaddress.ip_address(int(new_net.network_address) | old_host)
return str(new_ip)
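The suggestion logic keeps the host bits of the old address: for targets narrower than /16 only the last octet survives, for /16 or wider the last two octets do. A self-contained restatement of that arithmetic (mirroring `_suggest_new_ip`):

```python
import ipaddress

def suggest_new_ip(old_ip: str, new_network: str) -> str:
    """Suggest a new IP by grafting the old host bits onto the new network."""
    old = ipaddress.ip_address(old_ip)
    new_net = ipaddress.ip_network(new_network, strict=False)
    # /16 or wider: keep last two octets; otherwise keep only the last octet.
    mask = 0xFFFF if new_net.prefixlen <= 16 else 0xFF
    host_bits = int(old) & mask
    return str(ipaddress.ip_address(int(new_net.network_address) | host_bits))

print(suggest_new_ip("192.168.0.101", "172.0.2.0/24"))   # 172.0.2.101
print(suggest_new_ip("192.168.5.101", "172.16.0.0/16"))  # 172.16.5.101
```

Note that for a /16 target the third octet of the suggestion comes from the old address, not from the address the operator typed for the network.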
def _build_full_ip_mapping(self, plan: MigrationPlan) -> dict[str, str]:
"""Build complete IP mapping including management + ceph IPs."""
ip_mapping = {}
for node in plan.nodes:
if node.new_ip:
ip_mapping[node.current_ip] = node.new_ip
for old_ip, new_ip in node.extra_ip_mapping.items():
ip_mapping[old_ip] = new_ip
return ip_mapping
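Flattening management and extra (Ceph) pairs into a single dict is what lets the MON rewrite and the `/etc/hosts` update treat all addresses uniformly. A tiny standalone version of the merge (tuples stand in for `NodeInfo`):

```python
def build_full_ip_mapping(nodes):
    """Merge management and extra (e.g. Ceph) IP pairs into one dict.

    Each node is (current_ip, new_ip, extra_ip_mapping); nodes without a
    planned new management IP contribute only their extra mappings.
    """
    mapping = {}
    for current_ip, new_ip, extra in nodes:
        if new_ip:
            mapping[current_ip] = new_ip
        mapping.update(extra)
    return mapping

full = build_full_ip_mapping([
    ("192.168.0.101", "172.0.2.101", {"10.0.1.101": "10.1.1.101"}),
    ("192.168.0.102", None, {}),
])
```

Lookups like `ip_mapping.get(mon_host, mon_host)` then work regardless of which NIC an address lives on.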
def _show_preview(self, plan: MigrationPlan):
"""Show a preview of all planned changes."""
print("\n" + "=" * 60)
print(" MIGRATION PREVIEW")
print("=" * 60)
ip_mapping = {n.current_ip: n.new_ip for n in plan.nodes if n.new_ip}
ip_mapping = self._build_full_ip_mapping(plan)
print(f"\n Netzwerk: {plan.old_network} -> {plan.new_network}")
print(f"\n Management: {plan.old_network} -> {plan.new_network}")
print(f" Gateway: {plan.new_gateway}")
print(f" Bridge: {plan.bridge_name}")
print(f" Quorum verfügbar: {'Ja' if plan.quorum_available else 'NEIN'}")
if plan.detected_bridges:
print(f"\n [Erkannte Bridges]")
for bridge, subnet in plan.detected_bridges.items():
print(f" {bridge}: {subnet}")
print("\n [Node IP-Mapping]")
for node in plan.nodes:
status = "erreichbar" if node.is_reachable else "NICHT ERREICHBAR"
print(f" {node.name}: {node.current_ip} -> {node.new_ip} ({status})")
print(f" {node.name}: {node.current_ip} -> {node.new_ip}"
f" ({status})")
for old_ip, new_ip in node.extra_ip_mapping.items():
print(f" + {old_ip} -> {new_ip}")
if plan.ceph_config:
print("\n [Ceph Netzwerke]")
print(f" Public: {plan.ceph_config.public_network} -> {plan.ceph_new_public_network}")
print(f" Cluster: {plan.ceph_config.cluster_network} -> {plan.ceph_new_cluster_network}")
print(f" Public: {plan.ceph_config.public_network}"
f" -> {plan.ceph_new_public_network}")
print(f" Cluster: {plan.ceph_config.cluster_network}"
f" -> {plan.ceph_new_cluster_network}")
if plan.ceph_config.mon_hosts:
print(f" MON Hosts: {', '.join(plan.ceph_config.mon_hosts)}")
new_mons = [ip_mapping.get(h, h) for h in plan.ceph_config.mon_hosts]
new_mons = [ip_mapping.get(h, h)
for h in plan.ceph_config.mon_hosts]
print(f" -> {', '.join(new_mons)}")
print("\n [Dateien die geändert werden]")
@@ -192,7 +363,8 @@ class Planner:
if not plan.quorum_available:
print("\n [!] WARNUNG: Kein Quorum verfügbar!")
print(" Es wird 'pvecm expected 1' verwendet um Quorum zu erzwingen.")
print(" Es wird 'pvecm expected 1' verwendet um Quorum"
" zu erzwingen.")
print(" Ceph-Config wird direkt auf jedem Node geschrieben.")
print("\n" + "=" * 60)
@@ -205,7 +377,7 @@ class Planner:
'ceph': new ceph.conf content (or None)
'nodes': {node_name: {'interfaces': content, 'hosts': content}}
"""
ip_mapping = {n.current_ip: n.new_ip for n in plan.nodes if n.new_ip}
ip_mapping = self._build_full_ip_mapping(plan)
configs = {
'corosync': None,
@@ -228,15 +400,24 @@ class Planner:
)
# Generate per-node configs
new_cidr = ipaddress.ip_network(plan.new_network, strict=False).prefixlen
new_mgmt_cidr = ipaddress.ip_network(
plan.new_network, strict=False
).prefixlen
# Detect old gateway from first reachable node
# Detect old gateway from any reachable node
old_gateway = None
if plan.old_network:
mgmt_net = ipaddress.ip_network(plan.old_network, strict=False)
for node in plan.nodes:
for iface in node.interfaces:
if iface.name == plan.bridge_name and iface.gateway:
if iface.gateway:
try:
gw_ip = ipaddress.ip_address(iface.gateway)
if gw_ip in mgmt_net:
old_gateway = iface.gateway
break
except ValueError:
continue
if old_gateway:
break
@@ -244,13 +425,37 @@ class Planner:
if not node.new_ip or not node.network_interfaces_content:
continue
# Build list of IP replacements for this node
# Each: (old_ip, new_ip, new_cidr, old_gateway, new_gateway)
replacements = []
# Management IP
replacements.append((
node.current_ip, node.new_ip, new_mgmt_cidr,
old_gateway, plan.new_gateway,
))
# Extra IPs (ceph on separate NICs)
for old_ip, new_ip in node.extra_ip_mapping.items():
extra_cidr = new_mgmt_cidr # fallback
# Try to get CIDR from new ceph network
for net_str in [plan.ceph_new_public_network,
plan.ceph_new_cluster_network]:
if net_str:
try:
extra_cidr = ipaddress.ip_network(
net_str, strict=False
).prefixlen
break
except ValueError:
pass
replacements.append((old_ip, new_ip, extra_cidr, None, None))
node_configs = {}
# Network interfaces
node_configs['interfaces'] = generate_network_interfaces(
node.network_interfaces_content,
node.current_ip, node.new_ip,
new_cidr, plan.new_gateway, old_gateway,
# Network interfaces - apply ALL replacements
node_configs['interfaces'] = generate_network_interfaces_multi(
node.network_interfaces_content, replacements
)
# /etc/hosts