How do I change the metadata IP configuration in an HA cluster?
- Take note of current HA configuration
- Current metadata IPs, in case you want the last octet to remain the same on the new network
- Any other service IPs, such as a "floating" service on non-metadata LAN
- Power control BMC/IPMI settings
- MDC currently in control (I believe we left it on MDC1 earlier this week)
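If you do want to keep the last octet on the new subnet, the new addresses can be derived mechanically. A minimal sketch, using entirely hypothetical old/new subnets and addresses (substitute your real MDC and service IPs):

```shell
# Hypothetical example: metadata moves from 10.0.0.x to 192.168.10.x,
# keeping the last octet of each address.
old_ips="10.0.0.11 10.0.0.12 10.0.0.10"   # e.g. MDC1, MDC2, service address
new_prefix="192.168.10"

for ip in $old_ips; do
  last_octet="${ip##*.}"                  # strip everything through the last dot
  echo "$ip -> $new_prefix.$last_octet"
done
```

This is just arithmetic on the notes you took above; nothing is changed on the MDCs at this point.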
- Unmount all clients
- Unmount FC clients
- Stop the SONG cluster from the "SONG" tab of the :82 GUI (this will end all SMB sessions)
- Verify no clients are mounted by running the bwclstat -l command on the active MDC
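The verification can be scripted rather than eyeballed. A sketch, assuming bwclstat -l prints one line per mounted client (check your version's actual output); here bwclstat is stubbed out with an empty result purely so the logic is runnable for illustration:

```shell
# Stub standing in for the real bwclstat on the active MDC (illustration only);
# on the MDC itself, remove this function and use the real command.
bwclstat() { :; }

mounted=$(bwclstat -l | wc -l)
if [ "$mounted" -eq 0 ]; then
  echo "no clients mounted - safe to proceed"
else
  echo "still $mounted client(s) mounted - do not proceed"
fi
```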
- Begin downtime, and start any other network modifications, likely including:
- Modify client IP addresses
- In HyperFS client, delete old MDC service address
- Add new MDC service address
- Modify any switches, routers, VLANs, etc. (you'll know this step better than me)
- Log into the active MDC's :82 GUI using its dedicated IP, not the service address
- Stop all filesystems
- Stop HyperFS HA service
- Delete HyperFS HA configuration
- Change MDC1 metadata IP
- Log into local console via IPMI/BMC, or local VGA console (SSH over non-metadata network is fine, though a little risky)
- Log in to the local desktop session and open a terminal, or press Ctrl+Alt+F2 for a CLI
- Gain root, if not already
- Determine the logical interface being used for metadata using ip a
- Edit the configuration script for the interface, for example /etc/sysconfig/network-scripts/ifcfg-bond0, using whatever Linux text editor you are comfortable with (vim, nano, emacs)
- Modify the value of IPADDR and any other parameters that will be changing
- Save and close the file
- Restart the network service with service network restart
- Ping MDC1 over the new metadata network to verify connectivity
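The IPADDR edit can also be done non-interactively with sed. A sketch using a working copy of the file and the hypothetical addresses from earlier (on the real MDC, back up and then edit the actual ifcfg file; the file contents below are typical but illustrative only):

```shell
cfg="/etc/sysconfig/network-scripts/ifcfg-bond0"   # metadata interface config
work="/tmp/ifcfg-bond0.new"

# Demo copy with typical contents; on a real MDC: cp "$cfg" "$work"
cat > "$work" <<'EOF'
DEVICE=bond0
BOOTPROTO=static
IPADDR=10.0.0.11
NETMASK=255.255.255.0
ONBOOT=yes
EOF

# Swap in the new metadata address (hypothetical value); .bak keeps the original
sed -i.bak 's/^IPADDR=.*/IPADDR=192.168.10.11/' "$work"
grep '^IPADDR' "$work"
```

Review the result with grep before copying it back over the live file and restarting the network service.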
- Change MDC2 metadata IP using the same steps as above
- Ping MDC2 to verify connectivity
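The ping checks for both MDCs can be looped. A sketch using the hypothetical new metadata addresses from earlier (the -W timeout flag is the Linux iputils form):

```shell
# New MDC1/MDC2 metadata IPs (hypothetical values - use your own)
for host in 192.168.10.11 192.168.10.12; do
  if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
    echo "$host reachable"
  else
    echo "$host NOT reachable"
  fi
done
```

Both hosts should report reachable before you move on to reconfiguring HA.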
- Re-open the :82 GUI of the MDC that was last active
- Reconfigure HA using the information gathered in the first step, substituting the new metadata IPs
- Start HA
- Remount clients (using the new MDC service address added during the client modifications) and verify functionality
- End downtime