Prep a cloud-init'ed Ceph OSD host
Preliminary requirements:
- My standard VM make_ceph script, which assumes an LVM logical volume has already been created on the containing host for the OSD data
- Clone and customize the make_ceph script to set the hostname, MAC address, and the path of the data logical volume
- Add the intended IP and MAC address to my master hosts list for Ansible deployment, my DHCP servers and my primary DNS server.
- Run the make_ceph script to spin up the VM. In conjunction with the generic cloud-init user-data, this installs an SSH key for my Ansible server
- Add the new machine to the Ansible hosts file and run Ansible.
- This role installs the Ceph repo, cephadm, and a root SSH key for Ceph administration.
- At this point you should be able to log into the Ceph admin node and run "ceph orch host add" with the new hostname and IP address.
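The final step above can be sketched as follows; the hostname and IP address shown are placeholders, not values from this setup:

```shell
# Run on the Ceph admin node as root.
# "ceph-osd4" and "10.0.0.44" are placeholder values for the new VM.
ceph orch host add ceph-osd4 10.0.0.44

# Confirm the host was added and that cephadm can see its device:
ceph orch host ls
ceph orch device ls ceph-osd4
```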
Note that cephadm will see the allocated logical volume as /dev/sda and automatically create an OSD from it. An OSD crush reweight will then be required to make it an active participant, since the initial crush weight is very low.
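The reweight step might look like this; osd.4 and the weight of 1.0 are placeholder values, not taken from this setup:

```shell
# Find the new OSD's ID and its current (low) crush weight:
ceph osd tree

# Raise the crush weight so the OSD takes its share of data.
# By convention the weight is the device size in TiB; osd.4 and
# 1.0 here stand in for the actual OSD ID and volume size.
ceph osd crush reweight osd.4 1.0
```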