# cifmw_cephadm

Deploys a Ceph cluster on a set of EDPM nodes using cephadm.

The openstack-k8s-operators HCI documentation describes how to run
Ceph on EDPM nodes but leaves it to the reader to install Ceph with
cephadm. The `cifmw_cephadm` role and `ceph.yml` playbook may be used
to automate the Ceph installation.
Before this role is run, the following roles should be run (a sketch
of the full ordering follows below):

- `cifmw_create_admin`: creates a user for cephadm
- `cifmw_block_device`: creates a virtual disk to store data
- `cifmw_ceph_spec`: defines the Ceph cluster layout
After this role is run, the `cifmw_ceph_client` role can generate
a k8s CR which OpenStack can use to connect to the deployed Ceph
cluster.
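
A minimal sketch of that ordering, assuming an `edpm` inventory group
like the one created later in this README; the real wiring lives in
the `ceph.yml` playbook:

```yaml
- name: Deploy Ceph on EDPM nodes (illustrative ordering)
  hosts: edpm            # assumed inventory group, see below
  become: true           # cephadm needs root on the Ceph servers
  roles:
    - cifmw_create_admin # create a user for cephadm
    - cifmw_block_device # create a virtual disk to store data
    - cifmw_ceph_spec    # define the Ceph cluster layout
    - cifmw_cephadm      # install Ceph with cephadm
```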
The `ceph.yml` playbook in the playbooks directory provides a complete
working example which does all of the above and has been tested on
a three-node EDPM deployment from install_yamls.
## Privilege escalation

Requires an Ansible user who can become root in order to install the
Ceph server.
## Parameters

The `ceph.yml` playbook defaults these parameters so that they do not
need to be changed for a typical EDPM deployment.
- `cifmw_cephadm_default_container`: if this value is `true`, then
  `cephadm bootstrap` is not passed the `--image` parameter and
  whatever default Ceph container is defined inside of `cephadm` is
  used. Otherwise use `cifmw_cephadm_container_ns`
  (e.g. "quay.io/ceph"), `cifmw_cephadm_container_image` (e.g. "ceph")
  and `cifmw_cephadm_container_tag` (e.g. "v18"). An illustrative
  combined example follows this parameter list.
- `cifmw_cephadm_spec_ansible_host`: the path to the Ceph spec
  generated by the `cifmw_ceph_spec` role (e.g. `/tmp/ceph_spec.yml`).
- `cifmw_cephadm_bootstrap_conf`: the path to the initial Ceph
  configuration file generated by the `cifmw_ceph_spec` role
  (e.g. `/tmp/initial_ceph.conf`).
- `cifmw_ceph_client_vars`: the path to the Ceph client variables
  passed as input to the `cifmw_ceph_client` role
  (e.g. `/tmp/ceph_client.yml`).
- `cifmw_cephadm_pools`: see below.
- `cifmw_cephadm_keys`: see below.
- `cifmw_cephadm_certs`: the path on the Ceph host where TLS/SSL
  certificates are located. It points to `/etc/pki/tls`.
- `cifmw_cephadm_certificate`: the SSL/TLS certificate signed by a CA;
  an optional parameter. If it is provided, the Ceph dashboard and RGW
  will be configured for SSL automatically. The certificate should be
  made available in the `cifmw_cephadm_certs` path only. To enable SSL
  for the dashboard, both `cifmw_cephadm_certificate` and
  `cifmw_cephadm_key` are needed.
- `cifmw_cephadm_key`: the SSL/TLS certificate key; an optional
  parameter. If it is provided, the Ceph dashboard and RGW will be
  configured for SSL automatically.
- `cifmw_cephadm_monitoring_network`: the Ceph `public_network` where
  the dashboard monitoring stack instances should be bound. The
  network range is gathered from the `cifmw_cephadm_bootstrap_conf`
  file, which represents the initial Ceph configuration file passed at
  bootstrap time.
- `cifmw_cephadm_rgw_network`: the Ceph `public_network` where the
  `radosgw` instances should be bound. The network range is gathered
  from the `cifmw_cephadm_bootstrap_conf` file, which represents the
  initial Ceph configuration file passed at bootstrap time.
- `cifmw_cephadm_rgw_vip`: the ingress daemon deployed along with
  `radosgw` requires a VIP that will be owned by `keepalived`. This IP
  address will be used as the entry point to reach the `radosgw`
  backends through `haproxy`.
- `cifmw_cephadm_nfs_vip`: the ingress daemon deployed along with the
  `nfs` cluster requires a VIP that will be owned by `keepalived`.
  This IP address is the same one used for RGW unless an override is
  passed, and it is used as the entry point to reach the `ganesha`
  backends through an `haproxy` instance where proxy-protocol is
  enabled.
- `cifmw_cephadm_ceph_spec_fqdn`: when true, the Ceph spec should use
  a fully qualified domain name (e.g. server1.bar.com). When false,
  the Ceph spec should use a short hostname (e.g. server1).
- `cifmw_cephadm_ns`: name of the OpenStack controlplane namespace
  used in configuring Swift objects.
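
As a hedged illustration, a variables file combining the container
override and the optional TLS/ingress parameters might look like the
following. The container values repeat the examples above; the file
paths, the VIP address, and the namespace value are hypothetical, and
the plain-IP form of the VIP is an assumption.

```yaml
cifmw_cephadm_default_container: false
cifmw_cephadm_container_ns: quay.io/ceph    # example value from above
cifmw_cephadm_container_image: ceph         # example value from above
cifmw_cephadm_container_tag: v18            # example value from above
cifmw_cephadm_certificate: /etc/pki/tls/certs/ceph.crt  # hypothetical; must live under cifmw_cephadm_certs
cifmw_cephadm_key: /etc/pki/tls/private/ceph.key        # hypothetical
cifmw_cephadm_rgw_vip: 172.18.0.100         # hypothetical VIP on the storage network
cifmw_cephadm_ceph_spec_fqdn: false
cifmw_cephadm_ns: openstack                 # assumed controlplane namespace
```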
Use the `cifmw_cephadm_pools` list of dictionaries to define pools for
Nova (vms), Cinder (volumes), Cinder-backups (backups), and Glance
(images). The `target_size_ratio` values give the placement-group
autoscaler the expected relative capacity consumption of each pool
(below they sum to 1.0).
```yaml
cifmw_cephadm_pools:
  - name: vms
    pg_autoscale_mode: True
    target_size_ratio: 0.3
    application: rbd
  - name: volumes
    pg_autoscale_mode: True
    target_size_ratio: 0.3
    application: rbd
  - name: backups
    pg_autoscale_mode: True
    target_size_ratio: 0.2
    application: rbd
  - name: images
    target_size_ratio: 0.2
    pg_autoscale_mode: True
    application: rbd
```
Use the `cifmw_cephadm_keys` list of dictionaries to define a CephX
key which OpenStack can use to authenticate to Ceph. The `cephx_key`
Ansible module will generate a random value to pass for the key value.
```yaml
cifmw_cephadm_keys:
  - name: client.openstack
    key: "{{ cephx.key }}"
    mode: '0600'
    caps:
      mgr: allow *
      mon: profile rbd
      osd: profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images
```
## Examples

See `ceph.yml` in the playbooks directory.
## Tips for using standalone

### Pick the appropriate storage network

In the `ceph.yml` playbook, set the `storage_network_range` variable.
- If network isolation is not being used, then set the
  `storage_network_range` variable to `192.168.122.0/24` (the default
  EDPM IP address range).
- If network isolation is used, then as per the
  openstack-k8s-operators networking documentation, the default
  storage network is `172.18.0.0/24` and the `storage_network_range`
  variable should be set accordingly, as sketched below. As per the
  openstack-k8s-operators HCI documentation, a shortened
  `OpenStackDataPlane` services list can be used to configure the
  storage network before Ceph and OpenStack are deployed.
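
For example, a hedged way to set the network-isolation value when
running the playbook; the extra-vars file name is hypothetical, while
`storage_network_range` is the variable named above:

```yaml
# storage_net.yml -- hypothetical extra-vars file; pass it with:
#   ansible-playbook playbooks/ceph.yml -e @storage_net.yml
storage_network_range: 172.18.0.0/24
```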
See the README of the `cifmw_ceph_spec` role for more details on how
the `storage_network_range` variable is used.
### Update the Ansible inventory and environment variables

This example assumes the ci-framework and install_yamls git
repositories are in `$HOME` and that EDPM nodes have been provisioned.
The `cifmw_cephadm`, `cifmw_create_admin`, and `cifmw_block_device`
roles need to be able to SSH into all EDPM nodes, but the default
inventory only has localhost. The devsetup process in install_yamls
generates each EDPM node and its IP address sequentially starting at
192.168.122.100. The following command may be used to create an
inventory with the group `edpm` containing `N` EDPM nodes.
```bash
export N=2
echo -e "localhost ansible_connection=local\n[edpm]" > ~/ci-framework/inventory.yml
# seq is inclusive, so N hosts span .100 through .(N+99)
for I in $(seq 100 $((N+99))); do
  echo 192.168.122.${I} >> ~/ci-framework/inventory.yml
done
```
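
With `N=2`, the loop above writes an inventory like this:

```
localhost ansible_connection=local
[edpm]
192.168.122.100
192.168.122.101
```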
install_yamls generates an SSH key
(`install_yamls/out/edpm/ansibleee-ssh-key-id_rsa`) for root on every
EDPM node. Configure the Ansible environment to use this user and key.

```bash
export ANSIBLE_REMOTE_USER=cloud-admin
export ANSIBLE_SSH_PRIVATE_KEY=~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa
export ANSIBLE_HOST_KEY_CHECKING=False
```
### Run the Ceph playbook

```bash
cd ~/ci-framework/
ansible-playbook playbooks/ceph.yml
```