# Prep a cloud-init'ed Ceph OSD host

Preliminary requirements:

1. My standard VM make_ceph script, which assumes I've created an LVM
   logical volume on the containing host for the data.
1. Clone and customize the make_ceph script to set the hostname, MAC
   address, and location of the data LVM logical volume (a sketch of
   such a script follows this list).
1. Add the intended IP and MAC address to my master hosts list for
   Ansible deployment, my DHCP servers, and my primary DNS server.
1. Run the make_ceph script to spin up the VM. In conjunction with the
   generic cloud-init user-data, this should install an SSH key for my
   Ansible server (the user-data is sketched after this list).
1. Add the new machine to the Ansible hosts file and run Ansible
   (inventory example after this list).
1. The Ansible role installs the Ceph repo, cephadm, and a root SSH key
   for Ceph administration.
1. At this point you should be able to log into the Ceph admin node
   and do a `ceph orch host add` with the new hostname and IP address
   (command example after this list).
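
I keep the real make_ceph script elsewhere, so the following is only a
rough sketch of the shape such a script might take, assuming a libvirt
host; the hostname, MAC address, bridge, image paths, and LV path are
all placeholders to be customized per clone:

```sh
#!/bin/sh
# Hypothetical sketch of a make_ceph-style wrapper (not the real script).
HOSTNAME=ceph-osd4                 # per-clone: the new VM's hostname
MAC=52:54:00:12:34:56              # per-clone: must match the DHCP/DNS entries
DATA_LV=/dev/vg0/${HOSTNAME}-data  # per-clone: pre-created LVM logical volume
ROOT_IMG=/var/lib/libvirt/images/${HOSTNAME}.qcow2

# Copy a generic cloud image for the root disk (placeholder path).
cp /var/lib/libvirt/images/base-cloud.qcow2 "$ROOT_IMG"

virt-install \
  --name "$HOSTNAME" \
  --memory 4096 --vcpus 2 \
  --disk path="$ROOT_IMG" \
  --disk path="$DATA_LV" \
  --network bridge=br0,mac="$MAC" \
  --cloud-init user-data=./user-data \
  --os-variant generic \
  --import --noautoconsole
```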
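
The generic user-data itself isn't reproduced in these notes; the part
that installs the key is ordinary #cloud-config. A minimal sketch, with
the user name and key string as placeholders, might be:

```sh
# Hypothetical sketch of the key-installing portion of the generic user-data.
cat > user-data <<'EOF'
#cloud-config
users:
  - name: ansible
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...placeholder-key... ansible@ansible-server
EOF
```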
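
Registering the host with Ansible is one inventory line plus a run; the
inventory path, hostname, IP, and playbook name below are placeholders
for whatever the local setup uses:

```sh
# Hypothetical inventory entry; put it under the appropriate group.
echo 'ceph-osd4 ansible_host=192.168.1.44' >> /etc/ansible/hosts

# Apply the roles to the new host only.
ansible-playbook -l ceph-osd4 site.yml
```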
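
The final step is a single cephadm command; the hostname and IP below
are placeholders for the new machine's:

```sh
# Run on the Ceph admin node.
ceph orch host add ceph-osd4 192.168.1.44
```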

Note that Cephadm will see the allocated LVM logical volume as /dev/sda and
make an OSD out of it automatically. An OSD crush reweight will be required
to make it an active participant, since the initial crush weight is very low.
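
A minimal sketch of that reweight, with the OSD id and weight as
placeholders (the usual convention is a weight equal to the device's
capacity in TiB):

```sh
ceph osd tree                        # find the new OSD's id and its initial crush weight
ceph osd crush reweight osd.12 1.0   # placeholder id and weight; pick the weight
                                     # to match the device's capacity in TiB
```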