Fixed much badspelling
parent 1d81e11179
commit 05cbfd67cb
README.md (25 changed lines)

@@ -55,17 +55,17 @@ Also don't forget to add the new VM's hostname to your ansible inventory file (d

### VM Customization

Now the fun begins. The ``cloud-init.data`` file contains information common to all
VMs you'll create. There should properly be a meta-data file for the VM-specific
stuff, but I haven't been able to get that to work, and thus I dynamically create
a temporary composite cloud-init for the actual VM creation.
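
For illustration, the composite file might be stitched together along these lines (a minimal sketch under my own assumptions; the temp path, variable names, and YAML keys are illustrative, not necessarily what the script really does):

```sh
#!/bin/bash
# Hypothetical sketch: merge the shared cloud-init.data with per-VM
# settings into one temporary user-data file for the VM build.
VM_NAME=mynewosd
TMP_CI=$(mktemp "/tmp/${VM_NAME}-user-data.XXXXXX")
cat cloud-init.data > "${TMP_CI}"
cat >> "${TMP_CI}" <<EOF
hostname: ${VM_NAME}
fqdn: ${VM_NAME}.example.lan
EOF
```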

Clone the ``make_cephxx.sh`` file to make a custom VM. Edit the variables that
define the hostname, MAC address and LVM Logical Volume that will hold the OSD data.

Note that the default MAC address for libvirt is randomly generated, so I manually supply my own to make DHCP assign a predictable IP address.
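
As a sketch, the per-VM variables in a cloned script might look like this (the names and values are illustrative assumptions; check your own copy of ``make_cephxx.sh`` for the real ones):

```sh
# Hypothetical per-VM settings in a cloned make_cephxx.sh.
VM_HOSTNAME=mynewosd
VM_MAC=52:54:00:12:34:56          # fixed MAC so DHCP hands out a predictable IP
OSD_LV=/dev/vg_osd/lv_mynewosd    # Logical Volume that will hold the OSD data
```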

Use LVM's "lvcreate" command to create the Logical Volume you'll reference here,
and edit the script to reference it. As presently configured, the VM will present the
LVM logical volume as device "/dev/sda" - the OS lives on /dev/vda. The device ID
will vary if you use a different VM bus type than "scsi", but since I don't know
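
For example, creating a 100 GiB Logical Volume might look like this (the volume group and LV names are assumptions; use your own):

```sh
# Create a Logical Volume for the OSD data (size is up to you).
sudo lvcreate -L 100G -n lv_mynewosd vg_osd
```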

@@ -78,13 +78,13 @@ Of course, if you are as error-prone as I am, this may require a few tweaks. Fea

If you're extra paranoid, you can delete the VM disk and (if it got created) the VM itself.

Once everything is happy, the boot process will run and log to your command-line
console. At its end, you'll be presented with a login prompt.

***Caution*** It's best to wait a minute or 2, as some setup may still be running even after the login prompt comes up!

As supplied, the login is userid "almalinux" and password "redhat". These are defined in the ``cloud-init.data`` file and, if you like, you can change them.

Now you're ready to run the Ansible stage. Use ctrl-] to return to your VM host's original shell (disconnect from the VM console). You don't need it anymore.

## Ansible provisioning

@@ -98,25 +98,26 @@ prep work:

password is "redhat". "mynewosd" is, of course, the hostname you gave to the new
OSD VM.
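
One plausible form of that prep work (an assumption on my part, since the preceding lines aren't shown in this hunk) is pushing your ssh key so ansible can log in without a password:

```sh
# Hypothetical prep: seed the new VM with your ssh key.
ssh-copy-id almalinux@mynewosd    # it will prompt for the password ("redhat")
```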

Use the ansible-playbook command to run the ceph OSD playbook. This playbook provisions
using the "ceph_osd_host" role you installed.
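
A typical invocation might look like this (the playbook filename and inventory path are assumptions; substitute your own):

```sh
# Run the ceph OSD playbook against just the new host.
ansible-playbook -i inventory ceph_osd.yml --limit mynewosd
```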

It does the following:
1. Install the ceph repository into ``yum.repos.d``.
1. Install the cephadm utility from the ceph repository.
1. Copy in the ``/etc/ceph`` configuration information files from your master copy in the role/files directory.
1. Do an initial run of cephadm to cause it to pull the container(s) needed to run cephadm and the ceph daemons.

Note that, if you like, you can also install the "ceph-common" package so you can run ceph commands directly, without needing "cephadm shell" to run them.
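
A minimal sketch of that option (assuming the ceph repository from step 1 is already in place):

```sh
# Optional: run ceph commands directly on the host, no cephadm shell needed.
sudo dnf install -y ceph-common
ceph -s    # reads the /etc/ceph config and keyring copied in earlier
```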

## Rejoice!

Congratulations! You have just created a new ceph host. You can confirm this, if you like, by using ssh to log in to "almalinux@mynewcephosd", issuing the "sudo cephadm shell" command to enter the cephadm shell, and then typing "ceph orch ps" to
list the running daemons in your system.
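
That confirmation, as a sequence of commands (run each in turn; the hostname is whatever you assigned earlier):

```sh
ssh almalinux@mynewcephosd    # log in to the new host
sudo cephadm shell            # enter the cephadm container shell
ceph orch ps                  # list the daemons running in the cluster
```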

Note that if the above fails, the most likely cause is that your /etc/ceph config files are wrong. You did replace mine with your own in the ansible role file, didn't you?

## Going live

You're now a full-fledged ceph node and you only need to issue the
"ceph orch host add" command to add this new VM to the Ceph host list. Ceph will
automatically see the unused OSD data device (/dev/sdb) and make an OSD out of it.
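
A sketch of that final step, run from an existing admin node (the example IP is an assumption; ceph orch host add takes the hostname and, optionally, the address):

```sh
# From a node that already has cephadm/admin access.
sudo cephadm shell
ceph orch host add mynewcephosd 192.168.122.50
# Ceph will then notice the unused data device and create an OSD on it.
```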