Installing OpenShift with Assisted Installer

Introduction

As part of my work, I assist customers with deploying various RedHat-related technologies - sometimes in rather complicated combinations.

We’re going to do a series of blog posts together, starting with OpenShift and then slowly introducing the following technologies:

  • OpenShift GitOps (ArgoCD)
  • OpenShift Data Foundation
  • Service Mesh (Istio, Kiali, Jaeger)
  • 3Scale API Management
  • And no doubt some others along the way

But - we all need to start somewhere, so let’s start at the beginning with OpenShift. We’ll be using the Assisted Installer to facilitate our installation.

Requirements

We’re going to use the RedHat Assisted Installer (SaaS), which is located at https://console.redhat.com/openshift/assisted-installer/clusters/. For this you’ll need a RedHat account; these can be created for free at https://developers.redhat.com/about. This installation method involves the installer creating a customised, bootable ISO for us.

The Assisted Installer is based upon https://github.com/openshift/assisted-service which can be used to host your own instance if required.

The installation type we’ll use is bare-metal IPI - which provides us with native load-balancing services.

Other things we’ll need

  • A Network with DHCP services available & a default route with internet access
  • A total of five IP addresses available (three dynamic for the nodes, two static for the API and Ingress virtual IPs)
  • A DNS server where we can inject some DNS records (this should be given as the DNS server by DHCP above)
  • A Hypervisor to provide 3 Virtual Machines. I’m using Proxmox but any decent hypervisor will suffice.

Planning

Prior to undertaking our installation we need to plan our cluster configuration.

OpenShift clusters are made up of a <cluster-name> and <base-domain>, which together form the full DNS entries for the cluster.

  • Our <cluster-name> will be ‘lab’
  • Our <base-domain> will be ‘home.flumpy.net’

With this information we can formulate the DNS records we require.

| Record | Purpose | IP Address |
|---|---|---|
| api.lab.home.flumpy.net | Kubernetes API services | 192.168.1.55 |
| *.apps.lab.home.flumpy.net | Wildcard for applications | 192.168.1.50 |

As you can see above, I’ve selected two IP addresses to be used as the API and application load-balancers respectively. For now, let’s just take note of these IPs as we’ll use them later on in the process.
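The full DNS names are derived from our two chosen values; a quick shell sketch (using this lab’s names) makes the pattern obvious:

```shell
# Derive the cluster FQDNs from our chosen values
CLUSTER_NAME=lab
BASE_DOMAIN=home.flumpy.net

echo "api.${CLUSTER_NAME}.${BASE_DOMAIN}"     # -> api.lab.home.flumpy.net
echo "*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}"  # -> *.apps.lab.home.flumpy.net
```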

Fix up DNS

In order to access our OpenShift cluster we’ll require working DNS entries.

Everyone’s home setup is different; for my lab I utilise pi-hole ( https://pi-hole.net/ ) at home in order to provide everyone with advert blocking. We’ll step through the process I used to add DNS records - you’ll need to adapt it to suit your own environment.

pi-hole utilises dnsmasq behind the scenes, so we can inject static records into its configuration to be served. We can do this by placing our own configuration file within /etc/dnsmasq.d.

These entries create wildcard DNS records which we will validate shortly.

```shell
# cat <<EOF >/etc/dnsmasq.d/99-openshift-dns.conf
> address=/api.lab.home.flumpy.net/192.168.1.55
> address=/apps.lab.home.flumpy.net/192.168.1.50
> EOF
# cat 99-openshift-dns.conf
address=/api.lab.home.flumpy.net/192.168.1.55
address=/apps.lab.home.flumpy.net/192.168.1.50
# pihole restartdns
  [✓] Restarting DNS server
#
```

We can then verify that these DNS records function as expected.

```shell
$ dig @192.168.1.30 +short A api.lab.home.flumpy.net.
192.168.1.55
$ dig @192.168.1.30 +short A apps.lab.home.flumpy.net.
192.168.1.50
$ dig @192.168.1.30 +short A my-cool-app.apps.lab.home.flumpy.net.
192.168.1.50
```

The last test verifies that our wildcard is working as expected. OpenShift will make heavy use of this record.

Assisted Installer

Now that we’ve selected our cluster name and IPs and configured DNS, we can move on to the Assisted Installer.

/img/AssistedInstaller.png

  • Bash the shiny Create New Cluster button. Here I’ve filled out the details for my cluster (cluster name = lab, base domain = home.flumpy.net). I’ve selected 4.8.X as my version as I don’t want to be on the latest version.

/img/AssistedInstaller-page1.png

  • Once complete, hit Next. On this page just focus on clicking the Generate Discovery ISO button that I’ve beautifully shaded yellow here.

/img/AssistedInstaller-page2.png

  • Here you have a choice. If you have 1990s dialup-style Internet, or are capped, you might want to download the Full Image file. Otherwise you can select the Minimal image file; this means each node will download the necessary data upon bootup (it fetches a rootFS from the OpenShift mirror upon startup).
  • Also provide a public SSH key to assist with any future debugging
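If you don’t already have an SSH keypair to hand, one can be generated with ssh-keygen - the file path and comment below are just examples:

```shell
# Generate a dedicated ed25519 keypair for the lab cluster
# (file path and comment are examples - use whatever suits you)
ssh-keygen -t ed25519 -N "" -C "openshift-lab" -f ~/.ssh/openshift-lab

# The *.pub half is what gets pasted into the Assisted Installer
cat ~/.ssh/openshift-lab.pub
```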

/img/AssistedInstaller-GenerateISO.png

  • Download the ISO it provides and keep it safe for now

Keep the Assisted Installer page open. It’s now periodically checking to see if your machines have checked in, which they’ll do when they boot from the ISO file we’ve downloaded. This moves us nicely onto creating our virtual machines.

Virtual Machines

As we discussed above, we’re going to use virtual machines for building this OpenShift cluster. For these VMs, I’ll be using Proxmox ( https://www.proxmox.com/en/ ), which is a freely available virtualisation solution. Any modern hypervisor will suffice.

Our first step is to upload our discovery ISO into a datastore so we can mount it onto VMs. In Proxmox this is done by uploading the ISO into an available data-store. This can either be done via the GUI as shown below - or by scp-ing the ISO into the data-store location. Your ISO data-store location will be /templates/iso.
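For the scp route, a one-liner along these lines does the job - the hostname and ISO filename here are assumptions, so substitute your own:

```shell
# Copy the discovery ISO into the Proxmox ISO data-store over SSH.
# The hostname and ISO filename below are assumptions - substitute
# your own Proxmox host and the filename you downloaded.
scp discovery_image_lab.iso root@proxmox.example.com:/templates/iso/
```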

/img/Proxmox-upload.png

Now that we’ve made our ISO content available, let’s make some virtual machines. On my hypervisor I have 120GB of memory, so I’ve divided this into 3 large VMs to give me ample room for extra OpenShift services.

As this is a lab - we’re going to only have 3 control-plane nodes and no workers - we’ll use our control-plane nodes for workloads too.

| Name | Memory (MB) | vCPU | Storage |
|---|---|---|---|
| ocp1 | 38912 | 16 | 128GB |
| ocp2 | 38912 | 16 | 128GB |
| ocp3 | 38912 | 16 | 128GB |

Settings I picked for making my virtual machines:

| Name | Value |
|---|---|
| BIOS | OVMF (UEFI) |
| Machine Type | q35 |
| SCSI Controller | VirtIO SCSI Single |
| ISO | <location of our discovery ISO> |
| Hard Disk (scsi0) | 128GB Disk - VirtIO & IO Thread enabled |
| Network Device | VirtIO Network Device |
| CPU Type | Host |

I’ve selected UEFI & q35 for potential future use cases, in case I want to pass through PCI devices (e.g. GPUs).
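If you’d rather script the VM creation than click through the GUI, Proxmox’s qm CLI can express the same settings. The VM ID, storage pool, bridge name and ISO volume name below are assumptions for my environment, and note that OVMF firmware normally also wants a small EFI disk:

```shell
# Sketch of ocp1 using the settings from the table above.
# VM ID (101), storage pool (local-lvm), bridge (vmbr0) and the ISO
# volume name are assumptions - adjust for your environment.
qm create 101 \
  --name ocp1 \
  --memory 38912 \
  --cores 16 \
  --cpu host \
  --bios ovmf \
  --machine q35 \
  --scsihw virtio-scsi-single \
  --scsi0 local-lvm:128,iothread=1 \
  --net0 virtio,bridge=vmbr0 \
  --efidisk0 local-lvm:1 \
  --cdrom local:iso/discovery_image_lab.iso
```

Repeat with fresh VM IDs and names for ocp2 and ocp3.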

Once you’ve created your VMs, boot them up from the ISO. Once they’ve reached a ready state they’ll start appearing in the Assisted Installer window we left open from earlier.

We should now review the discovered settings to ensure they match what we expect.

/img/AssistedInstaller-Discovery.png

We now need to configure the hostnames for each of our 3 machines to match our VM names. We do this by pairing up the MAC address discovered with what our Hypervisor is showing us.

/img/AssistedInstaller-Discovery-Full.png

Using your hypervisor console or CLI, collect the MAC addresses for your 3 VMs and map them back to the appropriate name. Once you’ve mapped a VM back to its name, click ‘localhost’ within the Assisted Installer page and update the hostname to reflect the VM name.
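On Proxmox this can be done from the host’s shell with qm - the VM IDs here are assumptions from my environment:

```shell
# Print the net0 line (which contains the MAC) for each VM.
# The VM IDs 101-103 are assumptions - use your own.
for vmid in 101 102 103; do
  printf '%s: ' "${vmid}"
  qm config "${vmid}" | grep '^net0'
done
```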

| Name | Memory (MB) | vCPU | Storage | MAC |
|---|---|---|---|---|
| ocp1 | 38912 | 16 | 128GB | E6:F4:65:22:6D:FF |
| ocp2 | 38912 | 16 | 128GB | BE:42:B9:26:34:CF |
| ocp3 | 38912 | 16 | 128GB | 4A:67:CE:86:0F:BC |

When you’ve done all 3 VMs, the Assisted Installer page will enable the Next button - go ahead and bash it for the next section.

/img/AssistedInstaller-Discovery-Done.png

Assisted Installer - Networking

We’re almost at the home straight now. All we’ve got left to do is tell the installer about our network configuration.

Earlier we mapped out our IP addresses, so let’s refresh our memory:

| Record | Purpose | IP Address |
|---|---|---|
| api.lab.home.flumpy.net | Kubernetes API services | 192.168.1.55 |
| *.apps.lab.home.flumpy.net | Wildcard for applications | 192.168.1.50 |

So - in this environment our machines fall within the 192.168.1.0/24 CIDR range, as that’s what I’m using here.

Within the Assisted Installer

  • Ensure Cluster-Managed Networking is selected
  • Untick the box to Allocate IPs via DHCP, as we’re going to define them statically
  • Select your network from the drop-down box (e.g. 192.168.1.0/24 for this example)
  • Fill in your details for API Virtual IP & Ingress Virtual IP

Once you’ve filled out the above information - we should be ready to go with green ticks. Bash that Next button.

/img/AssistedInstaller-Network.png

Assisted Installer - Review & Go!

OK! If everything’s gone to plan, you should now be looking at the Review page - this is just a chance to review all the settings we’ve made and confirm the installer is happy to proceed.

/img/AssistedInstaller-review.png

Let’s go ahead and click the Install Cluster button. This will start the actual installation of OpenShift onto our provided infrastructure. This will take a little bit of time so go ahead and fetch a drink/snack.

If you want to monitor what the cluster is doing behind the scenes - you can click the ‘Cluster Events’ button which will give you (more) information on what’s being done.

After approximately 45 minutes my installation had finished - your mileage will vary depending on your hardware stack. Once the installation is complete you’ll be greeted with a page like so:

/img/AssistedInstaller-Done.png

At this stage

  • Download the kubeconfig file and place it somewhere safe
  • Note the console URL (bookmark it)
  • Take note of the kubeadmin password - this is temporary and we’ll replace it in a later post
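To sanity-check the new cluster from your workstation, point oc at the downloaded kubeconfig - the file path here is just an example:

```shell
# Use the kubeconfig downloaded from the Assisted Installer
# (the path is an example - use wherever you saved it)
export KUBECONFIG=~/Downloads/kubeconfig

# All three nodes should report Ready
oc get nodes

# Every cluster operator should settle to Available=True
oc get clusteroperators
```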