- inaBOX |Bare Metal
- Dedicated Server Deployment
- Hypervisor Management Deployment
inaBOX |Bare Metal
Peter Goldthorp, Dito. May 2023
BMS environments can be configured with dedicated servers or with hypervisors. In a dedicated deployment, Oracle is installed directly on each BMS server. In a hypervisor deployment, VMs are created from BMS server resources and Oracle is installed on those VMs.
Dedicated Server Deployment
Google publishes a BMS toolkit that can be used to configure dedicated servers. It includes shell scripts and Ansible playbooks that, together with installation media downloaded from Oracle, set up Oracle on a BMS server.
Setup instructions and best-practice guidelines for a dedicated server deployment are provided in Google's BMS documentation.
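As a hedged sketch of how the toolkit is typically driven (the script name and flag names below come from the toolkit's public README and should be checked against the release you download; the bucket, IP address, and version are placeholders), an install for one server might look like this. The command is printed rather than executed so the sketch is side-effect free:

```shell
#!/usr/bin/env bash
# Sketch: drive Google's bms-toolkit to install Oracle on one BMS server.
# Bucket, IP, and version values are placeholders; verify flag names against
# the toolkit release you are using.
SWLIB_BUCKET="gs://example-oracle-swlib"   # bucket holding the Oracle media
TARGET_IP="172.16.30.10"                   # client-network IP of the server
ORA_VERSION="19.3.0.0.0"

CMD="./install-oracle.sh --ora-swlib-bucket ${SWLIB_BUCKET} --instance-ip-addr ${TARGET_IP} --ora-version ${ORA_VERSION}"
# Print instead of running, so this file can be reviewed safely.
echo "${CMD}"
```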
Hypervisor Management Deployment
Dito’s inaBOX for Bare Metal is a packaged solution that adds hypervisor management to a BMS environment. It uses Oracle Linux Virtualization Manager (OLVM) to organize BMS servers and storage into resources that can be used to create virtual machines. This can improve the environment’s resource utilization and configuration management options. It also supports hard partitioning, an Oracle-approved mechanism for limiting the number of CPU cores that are counted for licensing.
The following diagram shows the components of an inaBOX |Bare Metal hypervisor-managed deployment.
Oracle Linux Virtualization Manager (OLVM)
OLVM is a server virtualization management platform. It is used to install and manage hypervisors on BMS servers. Dito inaBOX for Bare Metal installs an OLVM Engine on a Google Compute Engine VM. Partner Interconnect and firewall rules are configured to allow the OLVM Engine to access BMS servers. Each BMS server is configured to act as a KVM host managed by the OLVM Engine.
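One piece of this wiring can be illustrated with a VPC firewall rule allowing the OLVM Engine VM to reach the KVM hosts across the Interconnect. This is a minimal sketch, assuming a default-deny egress posture; the network name, target tag, and destination CIDR are placeholders, and ports 54321 (VDSM) and 16514 (libvirt TLS) are the usual oVirt host-management ports. The command is printed rather than executed:

```shell
#!/usr/bin/env bash
# Sketch: egress rule letting the OLVM Engine VM reach BMS KVM hosts.
# Network, tag, and CIDR are placeholders; 54321 is VDSM, 16514 is libvirt TLS.
FW_CMD="gcloud compute firewall-rules create olvm-engine-to-bms --network=olvm-vpc --direction=EGRESS --target-tags=olvm-engine --destination-ranges=172.16.30.0/24 --allow=tcp:22,tcp:54321,tcp:16514"
echo "${FW_CMD}"   # printed, not executed: needs gcloud auth and a real VPC
```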
Remote Desktop Access
For security reasons, the OLVM Engine is installed on a VM with no external IP address. A jump server is created on a GCE VM and configured to support Apache Guacamole or Chrome Remote Desktop sessions. It is used to access the OLVM Engine’s browser UI, and ISO images and VM templates are uploaded through it. It also provides SSH and console access to BMS VMs.
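Alongside the remote-desktop options, an SSH port-forward through the jump server is a common way to reach the engine UI. A hedged sketch, in which the instance name, zone, and internal hostname are placeholders (the `--tunnel-through-iap` flag is gcloud's Identity-Aware Proxy tunnel), printed rather than executed:

```shell
#!/usr/bin/env bash
# Sketch: forward the OLVM Engine's HTTPS UI to the local workstation through
# the jump server. Instance, zone, and hostname are placeholders.
TUNNEL_CMD="gcloud compute ssh jump-server --zone=us-central1-a --tunnel-through-iap -- -L 8443:olvm-engine.example.internal:443"
echo "${TUNNEL_CMD}"   # then browse to https://localhost:8443
```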
SSO, IAM and OAuth
SSO access to the OLVM environment is controlled using a Keycloak server together with Google’s identity and access management infrastructure.
Network traffic for VMs provisioned on a KVM host should be isolated from management traffic for the host itself. A user with login access to a VM should not be able to use that access to reconfigure that VM, other VMs on the host, or the host itself.
OLVM supports the use of logical networks to separate traffic with different characteristics. These can be mapped to separate Interconnect VLANs to isolate management traffic from virtual machine traffic. Firewalld rules can also be used to control access to the KVM hosts in an OLVM-configured deployment.
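The firewalld side of this might look like the following sketch, which generates rules confining host-management services to the management VLAN subnet. The subnet, zone, and service list are placeholders (54321 is the usual VDSM port); the commands are printed rather than run so they can be reviewed before applying on a host:

```shell
#!/usr/bin/env bash
# Sketch: firewalld rules restricting KVM-host management services to the
# management VLAN. Subnet and zone are placeholders.
MGMT_SUBNET="10.0.1.0/24"
FW_RULES=(
  "firewall-cmd --permanent --zone=internal --add-source=${MGMT_SUBNET}"
  "firewall-cmd --permanent --zone=internal --add-service=ssh"
  "firewall-cmd --permanent --zone=internal --add-port=54321/tcp"  # VDSM
  "firewall-cmd --reload"
)
printf '%s\n' "${FW_RULES[@]}"   # printed, not executed
```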
Two PostgreSQL databases are created on the OLVM Engine VM. The first stores persistent information about the state of the OLVM environment; the second stores historical configuration information and statistical metrics. Dito-developed shell scripts automate backup and restoration of these databases.
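A minimal backup sketch, assuming the conventional oVirt database names `engine` (current state) and `ovirt_engine_history` (history and metrics); the backup directory is a placeholder, and the `pg_dump` commands are printed rather than run so the sketch has no side effects. (Dito's actual scripts may differ; oVirt-based engines also ship an `engine-backup` utility.)

```shell
#!/usr/bin/env bash
# Sketch: dump both OLVM Engine databases to timestamped files.
# Database names follow oVirt conventions; the path is a placeholder.
BACKUP_DIR="/var/backup/olvm"
stamp() { date +%Y%m%d-%H%M; }
backup_file() { echo "${BACKUP_DIR}/${1}-$(stamp).dump"; }

for DB in engine ovirt_engine_history; do
  # pg_dump would run as the postgres user on the engine VM.
  echo "sudo -u postgres pg_dump -Fc -f $(backup_file "${DB}") ${DB}"
done
```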
An optional Storware environment can be configured to back up OLVM virtual machines.
Shell scripts are provided to back up an Oracle database and transfer backup files to and from a Cloud Storage bucket.
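The transfer step can be sketched as follows. The bucket name and backup-piece path are placeholders, and the copy command is printed rather than executed (running it would require an authenticated gcloud on the host):

```shell
#!/usr/bin/env bash
# Sketch: ship an RMAN backup piece to a Cloud Storage bucket.
# Bucket and path are placeholders.
BUCKET="gs://example-ora-backups"
PIECE="/u01/backup/db_full_01.bkp"
CP_CMD="gcloud storage cp ${PIECE} ${BUCKET}/$(basename "${PIECE}")"
echo "${CP_CMD}"   # printed, not executed
```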
Documented steps are provided for pinning CPUs in an OLVM environment to limit the Oracle license requirement.
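At the KVM level, pinning can be expressed with `virsh vcpupin`; OLVM can also apply a pinning topology through the VM's host settings. The sketch below generates 1:1 vCPU-to-core pin commands; the domain name and core numbers are placeholders, and the commands are printed rather than run:

```shell
#!/usr/bin/env bash
# Sketch: pin a VM's vCPUs 1:1 onto a fixed set of physical cores so only
# those cores count for licensing. Domain and core list are placeholders.
DOMAIN="oracle-vm-01"
PCPUS=(2 3 4 5)                 # physical cores reserved for this VM
PIN_CMDS=()
for VCPU in "${!PCPUS[@]}"; do
  PIN_CMDS+=("virsh vcpupin ${DOMAIN} ${VCPU} ${PCPUS[$VCPU]}")
done
printf '%s\n' "${PIN_CMDS[@]}"  # printed, not executed
```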
By default, BMS servers have no access to the internet, but some processes require it. For example, a systems administrator may need access to a remote repository when running yum or dnf update, and developers may need remote access to npm, Git, or the gcloud SDK.
Dito-developed Terraform scripts can be used to automate the creation of a Squid proxy cluster in GCP. Used together with a NAT gateway, these proxies provide HTTPS access for BMS resources. Separate scripts are available to create Nginx reverse proxies for other protocols.
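For illustration, a minimal squid.conf fragment that admits only the BMS client subnet; the subnet and port are assumptions, not values from Dito's scripts:

```
# /etc/squid/squid.conf (fragment) - subnet and port are placeholders
http_port 3128
acl bms_clients src 172.16.30.0/24   # BMS client-network subnet
http_access allow bms_clients
http_access deny all
```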
DHCP, DNS and NTP
DHCP and DNS for OLVM-provisioned VMs can be provided by a dedicated VM running BIND or dnsmasq. Dito has developed setup steps and configuration rules for using dnsmasq to resolve domain names in an OLVM-deployed BMS environment.
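A hedged dnsmasq.conf fragment showing the shape of such a configuration; the domain, interface, address ranges, and records are placeholders rather than Dito's actual rules (169.254.169.254 is GCP's metadata resolver, used here as the upstream forwarder):

```
# /etc/dnsmasq.conf (fragment) - all names and addresses are placeholders
domain=bms.example.internal
interface=eth0
dhcp-range=172.16.40.50,172.16.40.150,12h
# Forward unmatched queries upstream (GCP metadata resolver)
server=169.254.169.254
# Static record for a local service
address=/olvm-engine.bms.example.internal/10.0.0.10
```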
Network time synchronization can be achieved either through a reverse proxy to a public NTP server or with Dito-developed Terraform code that creates two or more GCE VMs and configures them as NTP servers behind an internal UDP load balancer.
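The client and server sides of the load-balanced option can be sketched with chrony directives; the load-balancer address and allowed subnet are placeholders:

```
# /etc/chrony.conf on a BMS client (fragment)
# 10.0.0.20 is a placeholder for the internal UDP load balancer's address.
server 10.0.0.20 iburst

# /etc/chrony.conf on each GCE NTP VM (fragment)
# Permit queries from the BMS subnets (placeholder range).
allow 172.16.0.0/16
```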
OLVM Virtual Machine Integration with GCP
WebLogic or other application servers running on OLVM-provisioned VMs can be load balanced in GCP using a global external HTTP(S) load balancer with hybrid connectivity. Dito-developed Terraform scripts are available to configure this.
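The hybrid-connectivity piece hinges on a hybrid network endpoint group (NEG) of type `non-gcp-private-ip-port` that holds the VMs' internal addresses. A hedged gcloud sketch, with NEG name, zone, network, IP, and port as placeholders, printed rather than executed:

```shell
#!/usr/bin/env bash
# Sketch: register an OLVM-hosted WebLogic endpoint in a hybrid NEG so a
# global external HTTP(S) load balancer can reach it. All names are
# placeholders.
CREATE_CMD="gcloud compute network-endpoint-groups create weblogic-hybrid-neg --zone=us-central1-a --network=olvm-vpc --network-endpoint-type=non-gcp-private-ip-port"
ADD_CMD="gcloud compute network-endpoint-groups update weblogic-hybrid-neg --zone=us-central1-a --add-endpoint=ip=10.0.0.30,port=7001"
echo "${CREATE_CMD}"   # printed, not executed
echo "${ADD_CMD}"
```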
Oracle Database Development Utilities
Copyright © Dito LLC, 2023