== Keep Calm and Route On ==

Openstack in the Homelab, Part 3: Post-Deploy

Tags: Kolla, Openstack, virtualization, KVM

Deploy Openstack on homelab equipment

In PART 1 we reviewed the prerequisites for getting Kolla-Ansible set up, installed, and configured, along with the associated networking.

In PART 2 we prepared our configuration files and inventories, installed certificates, configured our NFS mounts, and then deployed Openstack using Kolla-Ansible.

In this part (part 3) we will perform a handful of post-deployment tasks to finish setting up Openstack.

It is worth noting that at this point, Openstack IS deployed. You could stop reading right now, perform the post-provisioning steps yourself, create your provider networks and routers, and start bringing in Glance images.

Copy the post-deploy-init script & edit

To ease our post-deployment steps, Kolla-Ansible provides a script called init-runonce, which performs most of the common post-deploy initialization steps.

We will copy this file, and then edit it to better suit our actual deployment:

deploy-user@os01:~$ sudo cp /usr/local/share/kolla-ansible/init-runonce .

Next, open up this file and edit the following items to suit your deployment:

### init-runonce
EXT_NET_CIDR=${EXT_NET_CIDR:-''} #Your external server/provider network
EXT_NET_RANGE=${EXT_NET_RANGE:-'start=,end='} #The range of IPs you want for your VMs in Openstack
EXT_NET_GATEWAY=${EXT_NET_GATEWAY:-''} #Provider network gateway

$KOLLA_OPENSTACK_COMMAND network create name-of-internal-os-network
$KOLLA_OPENSTACK_COMMAND subnet create --subnet-range <internal-cidr> --network name-of-internal-os-network \
    --gateway <internal-gateway> --dns-nameserver <dns-server> name-of-internal-os-network-subnet

$KOLLA_OPENSTACK_COMMAND router create name-of-default-router
$KOLLA_OPENSTACK_COMMAND router add subnet name-of-default-router name-of-internal-os-network-subnet
if [[ $ENABLE_EXT_NET -eq 1 ]]; then
  $KOLLA_OPENSTACK_COMMAND router set --external-gateway public1 name-of-default-router
fi

There are a number of variables above, so let’s break it down:

EXT_NET_CIDR defines the server/provider network that instances can receive floating IPs on. In my case this is the same network as the management/server network the nodes are deployed on.

EXT_NET_RANGE is the block of IPs we are handing to Openstack to allocate as floating IPs. These must not be IPs that an external DHCP server might hand out.

EXT_NET_GATEWAY is more self-explanatory; it is the gateway for the server/management/provider network.

name-of-internal-os-network replace this with what you want the internal Openstack network name to be.

name-of-internal-os-network-subnet replace this with the name of the internal Openstack network subnet.

name-of-default-router replace this with the name you want for your first router.

A completed example might look like this (using example addresses from a 192.168.1.0/24 provider network; substitute your own):

### init-runonce
EXT_NET_CIDR=${EXT_NET_CIDR:-'192.168.1.0/24'} # Your external server/provider network
EXT_NET_RANGE=${EXT_NET_RANGE:-'start=192.168.1.200,end=192.168.1.240'} # The range of IPs you want for your VMs in Openstack
EXT_NET_GATEWAY=${EXT_NET_GATEWAY:-'192.168.1.1'} # Provider network gateway

$KOLLA_OPENSTACK_COMMAND network create int-network1
$KOLLA_OPENSTACK_COMMAND subnet create --subnet-range 10.0.0.0/24 --network int-network1 \
    --gateway 10.0.0.1 --dns-nameserver 8.8.8.8 int-network1-subnet

$KOLLA_OPENSTACK_COMMAND router create router1
$KOLLA_OPENSTACK_COMMAND router add subnet router1 int-network1-subnet
if [[ $ENABLE_EXT_NET -eq 1 ]]; then
  $KOLLA_OPENSTACK_COMMAND router set --external-gateway public1 router1
fi

Once these changes are complete, save and close the file. Then we just have to make it executable and run it:

deploy-user@os01:~$ chmod +x init-runonce
deploy-user@os01:~$ source /etc/kolla/admin-openrc.sh
deploy-user@os01:~$ ./init-runonce

This will take a minute to run, and once completed, you should have your internal & provider networks created, and a basic cirros image in Glance to deploy.
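To sanity-check what the script created, we can list the new resources from the CLI. A quick sketch, assuming the example names used above (int-network1, router1); yours will match whatever you chose:

```shell
# Make sure the admin credentials are loaded
source /etc/kolla/admin-openrc.sh

# List the networks, subnets, and routers init-runonce created
openstack network list
openstack subnet list
openstack router list

# Confirm the cirros image landed in Glance
openstack image list
```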

At this point, we can log into the Horizon dashboard if we want, which is accessible at https://<ip-or-fqdn-of-deployment>. Once there, you will be prompted to log in. Currently admin is the only user, and we can retrieve its password by echoing it on os01:

deploy-user@os01:~$ echo $OS_PASSWORD

It is worth noting that this variable, and a number of others, are set when we source /etc/kolla/admin-openrc.sh.
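If you are curious which variables the openrc file exports, you can inspect them after sourcing it; the names follow the standard OpenStack client conventions (OS_USERNAME, OS_AUTH_URL, and so on):

```shell
source /etc/kolla/admin-openrc.sh

# Show every OpenStack client variable the file exported,
# e.g. OS_USERNAME, OS_PASSWORD, OS_AUTH_URL, OS_PROJECT_NAME
env | grep '^OS_'
```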

Now that we have the admin password, log in to Horizon as admin with the retrieved password. Once logged in, feel free to take a look and see what options (and buttons) are available. If you want to create a user for yourself, this can be done under “Identity > Users”.
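The same can be done from the CLI; a sketch, where the project and user names are hypothetical examples (and note the member role may be named _member_ on older releases):

```shell
# Create a project and a user for yourself (names are examples)
openstack project create --description "Personal homelab project" homelab
openstack user create --project homelab --password-prompt myuser

# Grant the standard member role on the project
openstack role add --project homelab --user myuser member
```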

In Project > Compute > Images, you will see that init-runonce pulled in the cirros image, which can be deployed if we want. Since cirros is very basic, we probably want to pull in some cloud images for something like Ubuntu or even CentOS. To do this, we can go back and run:

# Focal 20.04
deploy-user@os01:~$ curl https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img | \
openstack image create --public --container-format=bare \
--disk-format=qcow2 focal
# Bionic 18.04
deploy-user@os01:~$ curl https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img | \
openstack image create --public --container-format=bare \
--disk-format=qcow2 bionic
# CentOS 7
deploy-user@os01:~$ curl https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-2003.qcow2 | \
openstack image create --public --container-format=bare \
--disk-format=qcow2 centos7
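Piping the image over stdin as above relies on older openstackclient behavior; with newer clients it can be safer to download the file first and pass it explicitly with --file. A sketch for the Focal image:

```shell
# Download the cloud image first (-L follows any redirects)
curl -L -o focal-server-cloudimg-amd64.img \
  https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img

# Then upload it to Glance explicitly from the local file
openstack image create --public \
  --container-format=bare --disk-format=qcow2 \
  --file focal-server-cloudimg-amd64.img focal
```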

Once complete, we can check Images in Horizon to see our new images. We can also check via the openstack CLI:

deploy-user@os01:~$ openstack image list
+--------------------------------------+---------+--------+
| ID                                   | Name    | Status |
+--------------------------------------+---------+--------+
| 2e8b63a4-b3ff-4a49-b272-ceb1c6ae7fc1 | bionic  | active |
| 97ceea86-1510-484b-919b-432204ce1445 | centos7 | active |
| ddbe3d78-dfb2-4f94-bf3d-0aa071c4c696 | cirros  | active |
| 65be43bb-e702-42b5-a39e-ea81bb251c89 | focal   | active |
+--------------------------------------+---------+--------+

Now that we have images, we can easily start deploying instances.

To do this from Horizon, go into Project > Compute > Instances and select “Launch Instance”.

A dialog will appear, allowing us to page through setting the name, source (based on our Glance images), volume size (which will be placed on our Cinder NFS mount), flavor (the size of the VM, as in AWS), network, and key pair.

For our network, typically it makes sense to put the instance in the internal subnet, and then allocate a floating IP for it to access it from outside Openstack.

Key pairs can be uploaded, so I would recommend uploading an SSH key pair you already have and will use to access your instances (or at least to access them initially for provisioning).
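Uploading an existing public key can also be done from the CLI; a sketch assuming the default id_rsa key path and a hypothetical key-pair name:

```shell
# Import an existing SSH public key as a Nova key pair
# (path and name are examples; adjust to your own key)
openstack keypair create --public-key ~/.ssh/id_rsa.pub homelab-key
```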

Once completed, hit “Launch Instance” to launch your instance. You should see your instance (VM) be provisioned and then start running. Once it is running, use the down caret next to the instance to select the floating IP option, allocate a floating IP from the pool (using the “+”), and then add it to the instance.
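The whole launch-and-attach flow can also be driven from the CLI; a sketch assuming the example names from earlier (focal, int-network1, homelab-key, and the flavors init-runonce creates):

```shell
# Launch an instance on the internal network (names are examples)
openstack server create --image focal --flavor m1.small \
  --network int-network1 --key-name homelab-key vm1

# Allocate a floating IP from the provider network pool
openstack floating ip create public1

# Attach it to the instance (substitute the IP that was allocated)
openstack server add floating ip vm1 <allocated-floating-ip>
```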

At this point, you should be able to SSH to your new instance via its floating IP address. If you are using an Ubuntu image, the default user is ubuntu (for CentOS it is centos).

Remember that the OVN router (part of Neutron) enforces its own security-group firewalling, so if you need to reach additional ports on your instances, go into security groups (Project > Network > Security Groups), add those ports to a new security group, and then assign it to your instance.
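The same can be done from the CLI; a sketch opening inbound HTTP, assuming hypothetical group and instance names:

```shell
# Create a security group and allow inbound TCP port 80 from anywhere
openstack security group create web-access
openstack security group rule create --protocol tcp --dst-port 80 web-access

# Attach the group to an existing instance (vm1 is an example name)
openstack server add security group vm1 web-access
```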

Congrats, you now have your own scalable private cloud. Have fun!