More grammar corrections found by Vale
lovesprung committed Jan 16, 2025
1 parent 3d80d9f commit 7555982
Showing 11 changed files with 41 additions and 40 deletions.
7 changes: 4 additions & 3 deletions .vale/config/vocabularies/Nephio/accept.txt
Original file line number Diff line number Diff line change
@@ -6,12 +6,13 @@ APIs
apiserver
ASN
ASNs
[Aa]utomations
[Aa]utoscaling
backtrackVal
[Bb]ool
[Bb]oolean
cabundle
[Cc]onfigmap
[Cc]onfigMap
Codegen
[Cc]loudified
CNI
@@ -95,13 +96,13 @@ OpenID
objectSelector
[Pp]arameterization
[Pp]ackageVariant
[Pp]ackagerevision
[Pp]ackageRevision
[Pp]ackage[Nn]ames
params
parameterRef
passwordless
[Pp]kgserver
pluggable
[Pp]luggable
Podman
[Pp]orch
[Pp]orchctl
@@ -17,10 +17,10 @@ After the conversion process, all the generated Go code is gathered and compiled
-----
### Flow-1: Helm to YAML
Helm to YAML conversion is achieved by running the following command
`helm template <chart> --namespace <namespace> --output-dir “temp/templated/”` internally. As of now, it retrieves the values from default "values.yaml"
`helm template <chart> --namespace <namespace> --output-dir "temp/templated/"` internally. As of now, it retrieves the values from the default *values.yaml*.

### Flow-2: YAML Split
The SDK iterates over each YAML file in the *converted-yamls* directory. If a .yaml file contains multiple Kubernetes Resource Models (KRM), separated by "---", the SDK splits the .yaml file accordingly to isolate each individual KRM resource. This ensures that each KRM resource is processed independently.
The SDK iterates over each YAML file in the *converted-yamls* directory. If a *.yaml* file contains multiple Kubernetes Resource Models (KRM), separated by "---", the SDK splits the *.yaml* file accordingly to isolate each individual KRM resource. This ensures that each KRM resource is processed independently.
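As a minimal sketch (the helper name `splitDocs` is illustrative, not the SDK's actual function), the split step amounts to the following:

```go
// Hypothetical sketch: split a multi-document YAML string on "---"
// separators so each KRM resource can be processed independently.
package main

import (
	"fmt"
	"strings"
)

// splitDocs returns the non-empty documents of a multi-document YAML string.
func splitDocs(yaml string) []string {
	var docs []string
	for _, d := range strings.Split(yaml, "\n---\n") {
		if strings.TrimSpace(d) != "" {
			docs = append(docs, strings.TrimSpace(d))
		}
	}
	return docs
}

func main() {
	multi := "kind: Service\n---\nkind: Deployment\n"
	for i, d := range splitDocs(multi) {
		fmt.Printf("doc %d: %s\n", i, d)
	}
}
```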

### Runtime-Object and Unstruct-Object
The SDK currently employs the "runtime-object method" to handle Kubernetes resources whose structure is recognized by Kubernetes by default. Examples of such resources include Deployment, Service, and ConfigMap. Conversely, resources that are not inherently known to Kubernetes and require explicit installation or definition, such as Third-Party Custom Resource Definitions (CRDs) like NetworkAttachmentDefinition or PrometheusRule, are processed using the "unstructured-object" method. Such examples are given below:
@@ -69,14 +69,14 @@ networkAttachmentDefinition1 := &unstructured.Unstructured{
```

### Flow-3.1: KRM to Runtime-Object
The conversion process relies on the "k8s.io/apimachinery/pkg/runtime" package. Currently, only the API version "v1" is supported. The supported kinds for the Runtime Object method include: Deployment, Service, Secret, Role, RoleBinding, ClusterRoleBinding, PersistentVolumeClaim, StatefulSet, ServiceAccount, ClusterRole, PriorityClass, ConfigMap
The conversion process relies on the "k8s.io/apimachinery/pkg/runtime" package. Currently, only the API version "v1" is supported. The supported kinds for the Runtime Object method include: Deployment, Service, Secret, Role, RoleBinding, ClusterRoleBinding, PersistentVolumeClaim, StatefulSet, ServiceAccount, ClusterRole, PriorityClass, ConfigMap.

### Flow-3.2: Runtime-Object to JSON
Firstly, the SDK performs a typecast of the runtime object to its actual data type. For instance, if the Kubernetes Kind is "Service," the SDK typecasts the runtime object to the specific data type corev1.Service. Then, it conducts a Depth-First Search (DFS) traversal over the corev1.Service object using reflection. During this traversal, the SDK generates a JSON structure that encapsulates information about the struct hierarchy, including corresponding data types and values. This transformation results in a JSON representation of the corev1.Service object's structure and content.

#### DFS Algorithm Cases

The DFS function iterates over the runtime object, traversing its structure in a Depth-First Search manner. During this traversal, it constructs the JSON structure while inspecting each attribute for its data type and value. Attributes that have default values in the runtime object but are not explicitly set in the .yaml file are omitted from the conversion process. This ensures that only explicitly defined attributes with their corresponding values are included in the resulting JSON structure. The function follows this flow to accurately capture the structure, data types, and values of the Kubernetes resource while excluding default attributes that are not explicitly configured in the .yaml file.
The DFS function iterates over the runtime object, traversing its structure in a Depth-First Search manner. During this traversal, it constructs the JSON structure while inspecting each attribute for its data type and value. Attributes that have default values in the runtime object but are not explicitly set in the *.yaml* file are omitted from the conversion process. This ensures that only explicitly defined attributes with their corresponding values are included in the resulting JSON structure. The function follows this flow to accurately capture the structure, data types, and values of the Kubernetes resource while excluding default attributes that are not explicitly configured in the *.yaml* file.


A) Base-Cases:
@@ -103,7 +103,7 @@ B) Composite-Cases:
C) Special-Cases:
The DFS function assumes that every path (structure) ends at a basic data type (string, int, bool, and so on). However, there are cases where we cannot traverse further because the attributes of the struct are private. Such cases are handled specially: the value is converted to a string and returned appropriately.
1. V1.Time and resource.Quantity
2. []byte/[]uint8: []byte is generally used in kind: Secret. It is seen that we provide 64base encoded secret-value in yaml, but on converting the yaml to runtime-obj, the secret-val automatically get decoded to actual value, Since, It is not good to show decoded/actual secret value in the code, therefore, we encode it again and store this base64-encoded-value as secret-value in json.
2. []byte/[]uint8: []byte is generally used in kind: Secret. The secret value is provided base64-encoded in the YAML, but converting the YAML to a runtime object automatically decodes it to the actual value. Since it is not good to expose the decoded/actual secret value in the generated code, the SDK encodes it again and stores the base64-encoded value as the secret value in the JSON.
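A minimal sketch of this special case (the `encodeSecret` helper is illustrative; the SDK's actual function name may differ):

```go
// Sketch of the Secret special case: the runtime object holds the
// decoded []byte value, so it is re-encoded to base64 before being
// emitted into the generated code.
package main

import (
	"encoding/base64"
	"fmt"
)

// encodeSecret re-encodes a decoded secret value for safe emission.
func encodeSecret(decoded []byte) string {
	return base64.StdEncoding.EncodeToString(decoded)
}

func main() {
	// The runtime object holds the decoded value...
	decoded := []byte("admin-password")
	// ...but the generated code stores the base64 form again.
	fmt.Println(encodeSecret(decoded)) // YWRtaW4tcGFzc3dvcmQ=
}
```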


JSON Conversion Example
@@ -155,9 +155,9 @@ spec:
```

### Flow-3.3: JSON to String (Go-Code)
The SDK reads the .json file containing the information about the Kubernetes resource and then translates this information into a string of Go code. This process involves parsing the JSON structure and generating corresponding Go code strings based on the structure, data types, and values extracted from the JSON representation. Ultimately, this results in a string that represents the Kubernetes resource in a format compatible with Go code.
The SDK reads the *.json* file containing the information about the Kubernetes resource and then translates this information into a string of Go code. This process involves parsing the JSON structure and generating corresponding Go code strings based on the structure, data types, and values extracted from the JSON representation. Ultimately, this results in a string that represents the Kubernetes resource in a format compatible with Go code.

#### TraverseJSON Cases (Json-to-String)
#### TraverseJSON Cases (JSON-to-String)
The traverseJSON function is responsible for converting JSON data into Go code. Here's how it handles base cases:
The JSON structure contains type as well as value information. Based on the type, the following cases are formulated:
A) Base Cases:
@@ -250,15 +250,15 @@ GoCode Conversion Example
}
```

### Significance of Config-Jsons: (Struct_Module_mapping.json & Enum_module_mapping.json)
### Significance of Config-JSONs: (Struct_Module_mapping.json & Enum_module_mapping.json)
Based on the data type, values are formatted accordingly:
| Data-Type | Value | Formatted-Value |
| :---: | :---: | :---: |
| int32 | 5 | 5 |
| string | 5 | \"5\" |
| *int32 | 5 | int32Ptr(5) |

The Config-Jsons are required for more package-specific-types (such as : v1.Service, v1.Deployment)
The Config-JSONs are required for package-specific types (such as v1.Service and v1.Deployment).
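The formatting rule from the table above can be sketched as follows (the `formatValue` helper is hypothetical; only the three table rows are handled):

```go
// Hypothetical sketch of the value-formatting rule: the emitted Go
// literal depends on the attribute's declared type.
package main

import "fmt"

// formatValue renders a raw value as the Go literal for its type.
func formatValue(goType, raw string) string {
	switch goType {
	case "int32":
		return raw // plain numeric literal
	case "string":
		return fmt.Sprintf("%q", raw) // quoted literal
	case "*int32":
		return fmt.Sprintf("int32Ptr(%s)", raw) // pointer helper
	default:
		return raw
	}
}

func main() {
	fmt.Println(formatValue("int32", "5"))  // 5
	fmt.Println(formatValue("string", "5")) // "5"
	fmt.Println(formatValue("*int32", "5")) // int32Ptr(5)
}
```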

#### Struct_Module_mapping.json
In most cases, inspecting the type of a struct (using reflect) only tells us that the struct belongs to a package named "v1"; however, there are multiple v1 packages (appsv1, metav1, rbacv1, etc.), so the actual package remains unknown.
@@ -274,7 +274,7 @@ Structs need to be initialized using curly brackets {}, whereas enums need Parentheses.

Solution: We solve the above problems by building an *enumModuleMapping*, which is a set that stores all data types that are enums; that is, if a data type belongs to the set, then it is an enum.
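The decision can be sketched as follows (the set contents and the `initExpr` helper are illustrative assumptions, not the SDK's actual code):

```go
// Sketch of the enum/struct decision: if a type is found in the enum
// set, emit a parenthesized conversion; otherwise emit a curly-bracket
// struct literal.
package main

import "fmt"

// enumModuleMapping: a set of data types known to be enums (sample entries).
var enumModuleMapping = map[string]bool{
	"v1.ServiceType": true,
	"v1.Protocol":    true,
}

// initExpr renders the Go initialization expression for a type.
func initExpr(typeName, value string) string {
	if enumModuleMapping[typeName] {
		return fmt.Sprintf("%s(%q)", typeName, value) // enum: parentheses
	}
	return fmt.Sprintf("%s{%s}", typeName, value) // struct: curly brackets
}

func main() {
	fmt.Println(initExpr("v1.ServiceType", "ClusterIP")) // v1.ServiceType("ClusterIP")
	fmt.Println(initExpr("v1.ServiceSpec", ""))          // v1.ServiceSpec{}
}
```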

There is an automation-script that takes the *types.go* files of packages and build the config-json. For details, Please refer [here](https://github.com/nephio-project/nephio-sdk/tree/main/helm-to-operator-codegen-sdk/config)
There is an automation script that takes the *types.go* files of the packages and builds the Config-JSON. For details, please refer [here](https://github.com/nephio-project/nephio-sdk/tree/main/helm-to-operator-codegen-sdk/config)


### Flow-4: KRM to Unstruct-Obj to String(Go-code)
@@ -307,7 +307,7 @@ B) Composite Cases:
```


### Flow-5: Go-Codes to Gofile
### Flow-5: Go-Codes to Go file
The process of generating the final Go file consists of the following steps:

1. Collecting Go Code: Go code for each Kubernetes Resource Model (KRM) is collected and stored in a map where the key represents the kind of resource (e.g., "Service", "Deployment"), and the value is a slice containing the corresponding Go code strings.
@@ -12,35 +12,35 @@ folks come up to speed on using testify and mockery.
## How Mockery works

The [mockery documentation](https://vektra.github.io/mockery/latest/#why-mockery) describes why you would use and how to
use Mockery. In a nutshell, Mockery generates mock implementations for interfaces in go, which you can then use instead
use Mockery. In a nutshell, Mockery generates mock implementations for interfaces in Go, which you can then use instead
of real implementations when unit testing.
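To illustrate the idea, here is a hand-rolled miniature of what a generated mock looks like. This is an assumption-laden sketch: `MockGiteaClient` and its canned URL are invented for illustration; Mockery generates a much richer version automatically, and the real GiteaClient interface has more methods.

```go
// Minimal hand-written illustration of what Mockery generates: a
// stand-in type that satisfies an interface, returns canned responses,
// and records the calls made to it.
package main

import "fmt"

// GiteaClient is a simplified, hypothetical version of the interface.
type GiteaClient interface {
	GetRepo(name string) (string, error)
}

// MockGiteaClient is a hand-rolled stand-in for a generated mock.
type MockGiteaClient struct {
	Calls []string // records every GetRepo invocation
}

func (m *MockGiteaClient) GetRepo(name string) (string, error) {
	m.Calls = append(m.Calls, name)
	return "https://example.invalid/" + name, nil // canned response
}

func main() {
	var c GiteaClient = &MockGiteaClient{}
	url, _ := c.GetRepo("mgmt")
	fmt.Println(url) // https://example.invalid/mgmt
}
```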

## Mockery support in Nephio make

The Makefiles in Nephio repos containing go code have targets to support mockery.
The Makefiles in Nephio repositories containing Go code have targets to support mockery.

The [default-mockery.mk](https://github.com/nephio-project/nephio/blob/main/default-mockery.mk) file in the root of
Nephio repository is included in Nephio make runs.

There are two targets in default-mockery.mk:

1. install-mockery: Installs Mockery in Docker, or locally if Docker is not available
2. generate-mocks: Runs generation of the mocks for go interfaces
2. generate-mocks: Runs generation of the mocks for Go interfaces

The targets above must be run explicitly.

Run `make install-mockery` to install Mockery in your container runtime (Docker, Podman, etc.) or locally if you have no
container runtime running. You need only run this target once unless you need to reinstall Mockery for whatever reason.

Run `make generate-mocks` to generate the mocked implementation of the go interfaces specified in *.mockery.yaml* files.
Run `make generate-mocks` to generate the mocked implementation of the Go interfaces specified in *.mockery.yaml* files.
You need to run this target each time an interface that you are mocking changes or whenever you change the contents of a
*.mockery.yaml* file. You can run `make generate-mocks` in the repo root to generate or re-generate all interfaces or in
*.mockery.yaml* file. You can run `make generate-mocks` in the repository root to generate or re-generate all interfaces or in
subdirectories containing a Makefile to generate or regenerate only the interfaces in that subdirectory and its
children.

The generate-mocks target looks for *.mockery.yaml* files in the repo and it runs the mockery mock generator on each
The generate-mocks target looks for *.mockery.yaml* files in the repository and it runs the mockery mock generator on each
*.mockery.yaml* file it finds. This has the nice effect of allowing *.mockery.yaml* files to be in either the root of
the repo or in subdirectories, so the choice of placement of *.mockery.yaml* files is left to the developer.
the repository or in subdirectories, so the choice of placement of *.mockery.yaml* files is left to the developer.

## The .mockery.yaml file

@@ -70,7 +70,7 @@ we want to generate mocks for the GiteaClient interface so we provide the package
6. dir: "{{.InterfaceDir}}"
```

We want mocks to be generated for the GiteaClient go interface (line 4). The {{.InterfaceDir}} parameter (line 6) asks
We want mocks to be generated for the GiteaClient Go interface (line 4). The {{.InterfaceDir}} parameter (line 6) asks
Mockery to generate the mock file in the same directory as the interface is located.

### Example 2
@@ -107,7 +107,7 @@ Generate mocks for the external package *sigs.k8s.io/controller-runtime/pkg/clie
10. Client:
```

Generate a mock implementation of the go interface Client in the external package
Generate a mock implementation of the Go interface Client in the external package
*sigs.k8s.io/controller-runtime/pkg/client*.

```go
@@ -56,7 +56,7 @@ tasks such as
* createService: This function creates a Service resource for the AMF deployment. It defines the desired state of
the service, including the selector for the associated deployment and the ports it exposes.
* createConfigMap: This function creates a ConfigMap resource for the AMF deployment. It generates the
configuration data for the AMF based on the provided template values and renders it into the amfcfg.yaml file.
configuration data for the AMF based on the provided template values and renders it into the *amfcfg.yaml* file.
* createResourceRequirements: This function calculates the resource requirements (CPU and memory limits and
requests) for the AMF deployment based on the specified capacity and sets them in a ResourceRequirements object.
* createNetworkAttachmentDefinitionNetworks: This function creates the network attachment definition networks for
@@ -54,7 +54,7 @@ If you want to use GitHub or GitLab, then follow the steps below.
Get a [GitHub token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#fine-grained-personal-access-tokens) if your repository is private,
to allow Porch to make modifications.

Register the edge repository using *kpt* cli or Nephio WebUI.
Register the edge repository using *kpt* CLI or Nephio WebUI.

```bash
GITHUB_USERNAME=<Github Username>
@@ -82,7 +82,7 @@ kpt live apply <cluster-name> --reconcile-timeout=15m --output=table
{{% alert title="Note" color="primary" %}}

* For the management cluster, you have to name the repository *mgmt*.
* In the *repository* package by default gitea address is *172.18.0.200:3000* in *repository/set-values.yaml*
* In the *repository* package, the default Gitea address is *172.18.0.200:3000* in *repository/set-values.yaml*;
change this to your Git address.
* *repository/token-configsync.yaml* and *repository/token-porch.yaml* are responsible for creating secrets, with the
help of the Nephio token controller, for accessing the Git instance for root-sync. You would need the name of the config-sync token
8 changes: 4 additions & 4 deletions content/en/docs/guides/install-guides/install-on-openshift.md
@@ -25,7 +25,7 @@ In this guide, you will set up Nephio with:
## Prerequisites

- A Red Hat Account and access to https://console.redhat.com/openshift/
- OpenShift cli client *oc*. [Download here](https://console.redhat.com/openshift/downloads)
- OpenShift CLI client *oc*. [Download here](https://console.redhat.com/openshift/downloads)

## Setup the Management Cluster

@@ -58,12 +58,12 @@ Once installed, you need to prepare the management cluster for zero touch provisioning

If using init.sh directly to deploy Nephio management components, as one would for a generic K8s Cluster, there are some prerequisites to consider:
- A default StorageClass must be configured providing persistent storage for PVCs (for instance through the LVMS Operator and an LVMCluster)
- [Security Context Constraits](https://github.com/nephio-project/catalog/tree/main/distros/openshift/security-context-constraints) must be applied for successful Nephio component deployment
- [Security Context Constraints](https://github.com/nephio-project/catalog/tree/main/distros/openshift/security-context-constraints) must be applied for successful Nephio component deployment

Follow the steps present in the [Install Guide](/content/en/docs/guides/install-guides/_index.md) for a Pre-installed K8s Cluster to install manaement components
Follow the steps present in the [Install Guide](/content/en/docs/guides/install-guides/_index.md) for a Pre-installed K8s Cluster to install management components


### Option 2: Using Blueprints Nephio OpenShift Repo OpenShift Package Repository
### Option 2: Using the Nephio OpenShift Blueprints Package Repository

A repository of OpenShift-installation specific packages must be used to deploy Nephio. This repository contains
packages derived from the standard Nephio R1 packages, but with OpenShift-specific modifications.
@@ -5,7 +5,7 @@ description: >
weight: 7
---

If you are not exposing the webui on a load balancer IP address, but are instead using `kubectl port-forward`, you
If you are not exposing the WebUI on a load balancer IP address, but are instead using `kubectl port-forward`, you
should use *localhost* and *7007* for the HOSTNAME and PORT; otherwise, use the DNS name and port as it will be seen
by your browser.

2 changes: 1 addition & 1 deletion content/en/docs/guides/user-guides/controllers.md
@@ -37,7 +37,7 @@ To enable a particular reconciler, you pass an environment variable to the
Nephio Controller at startup. The environment variable is of the form
*ENABLE_\<RECONCILER\>* where *\<RECONCILER\>* is the name of the reconciler to
be enabled in upper case. Therefore, to enable the bootstrap-packages reconciler,
pass the ENABLE_BOOTSTRAPPACKAGES to the nephio controller. Reconcilers are
pass the ENABLE_BOOTSTRAPPACKAGES environment variable to the Nephio Controller. Reconcilers are
disabled by default.


6 changes: 3 additions & 3 deletions content/en/docs/guides/user-guides/exercise-2-oai.md
@@ -93,7 +93,7 @@ packagevariantset.config.porch.kpt.dev/oai-edge-clusters created
```


It will take around 15 mins to create the three clusters. You can check the progress by looking at commits made in gitea
It will take around 15 minutes to create the three clusters. You can check the progress by looking at commits made in Gitea
*mgmt* and *mgmt-staging* repositories. After a couple of minutes, you should see three independent repositories (Core,
Regional and Edge) for each workload cluster.

@@ -507,8 +507,8 @@ packagevariant.config.porch.kpt.dev/oai-upf-edge created

All the NFs will wait for the NRF to come up, and then they will register with the NRF. The SMF has a dependency on the UPF, which is
described by the *dependency.yaml* file in the SMF package; the SMF will wait until the UPF is deployed. It takes around
~800 seconds for the whole core network to come up. NRF is exposing its service via metallb external ip-address. In
case metallb ip-address pool is not properly defined in the previous section, then UPF will not be able to register to
~800 seconds for the whole core network to come up. The NRF exposes its service via a MetalLB external IP address. If
the MetalLB IP address pool is not properly defined in the previous section, then the UPF will not be able to register to
the NRF, and in that case the SMF and UPF will not be able to communicate.

### Check Core Network Deployment
