Prep a cloud-init'ed Ceph OSD host

Preliminary requirements and steps:

  1. My standard make_ceph VM-creation script, which assumes I've already created an LVM logical volume for the data on the containing host.
  2. Clone and customize the make_ceph script to set the hostname, the MAC address, and the location of the data LVM partition (logical volume).
  3. Add the intended IP and MAC address to my master hosts list for Ansible deployment, to my DHCP servers, and to my primary DNS server.
  4. Run the make_ceph script to spin up the VM. In conjunction with the generic cloud-init user-data, this should install an SSH key for my Ansible server (a sketch follows this list).
  5. Add the new machine to the Ansible hosts file and run the playbook (see the inventory sketch below).
  6. This role installs the Ceph repo, cephadm, and a root SSH key for Ceph administration (see the task sketch below).
  7. At this point you should be able to log into the Ceph admin node and run `ceph orch host add` with the new hostname and IP address (example below).
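
A minimal sketch of the generic cloud-init user-data from step 4; the key string and comment are placeholders, and the real file may set more than this:

```yaml
#cloud-config
# Authorize the Ansible server's public key for the default user so
# the first Ansible run can connect. The key below is a placeholder.
ssh_authorized_keys:
  - ssh-ed25519 AAAAC3...placeholder ansible@ansible-server
```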
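
For step 5, the new host's inventory entry could look like this (YAML inventory form; the group name, hostname, and address are assumptions):

```yaml
all:
  children:
    ceph_osd_hosts:
      hosts:
        ceph-osd-4:
          ansible_host: 10.0.0.44
```

A plain INI hosts file works just as well; after adding the entry, run the playbook that applies this role.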
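
Step 6's role might reduce to tasks along these lines. This is a sketch assuming a dnf/yum-based distro and the ansible.posix collection; the repo URL, Ceph release, and key filename are illustrative, not the role's actual contents:

```yaml
---
# Sketch of a possible tasks/main.yml for this role.
- name: Install the Ceph release repository
  ansible.builtin.yum_repository:
    name: ceph
    description: Ceph packages
    baseurl: https://download.ceph.com/rpm-reef/el9/$basearch  # assumed release
    gpgcheck: true
    gpgkey: https://download.ceph.com/keys/release.asc

- name: Install cephadm
  ansible.builtin.package:
    name: cephadm
    state: present

- name: Authorize the Ceph admin SSH key for root
  ansible.posix.authorized_key:
    user: root
    state: present
    key: "{{ lookup('file', 'files/ceph_admin.pub') }}"  # assumed filename
```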
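
For step 7, the command takes the new hostname and its address; for example, with placeholder values:

```sh
# Run on the Ceph admin node (hostname and IP are placeholders).
ceph orch host add ceph-osd-4 10.0.0.44
```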

Note that Cephadm will see the allocated logical volume as /dev/sda inside the VM and automatically make an OSD out of it. A `ceph osd crush reweight` will then be required to make the OSD an active participant, since its initial CRUSH weight is very low (an example follows).
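
For example, assuming the new OSD came up as osd.7 and its backing volume is about 1 TiB (both placeholders):

```sh
# CRUSH weights are conventionally the device size in TiB.
ceph osd crush reweight osd.7 1.0
```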