diff --git a/docs-2.0-en/20.appendix/6.eco-tool-version.md b/docs-2.0-en/20.appendix/6.eco-tool-version.md
index 61a1590b5d4..934fe6d5d49 100644
--- a/docs-2.0-en/20.appendix/6.eco-tool-version.md
+++ b/docs-2.0-en/20.appendix/6.eco-tool-version.md
@@ -23,14 +23,6 @@ NebulaGraph Dashboard Community Edition (Dashboard for short) is a visualization
 | {{ nebula.tag }} | {{dashboard.tag}}|
 
-## NebulaGraph Stats Exporter
-
-[Nebula-stats-exporter](https://github.com/vesoft-inc/nebula-stats-exporter) exports monitor metrics to Promethus.
-
-|NebulaGraph version|Stats Exporter version|
-|:---|:---|
-| {{ nebula.tag }} | {{exporter.tag}}|
-
 ## NebulaGraph Exchange
 
 NebulaGraph Exchange (Exchange for short) is an Apache Spark&trade; application for batch migration of data in a cluster to NebulaGraph in a distributed environment. It can support the migration of batch data and streaming data in a variety of different formats. For details, see [What is NebulaGraph Exchange](../import-export/nebula-exchange/about-exchange/ex-ug-what-is-exchange.md).
diff --git a/docs-2.0-en/3.ngql-guide/9.space-statements/6.clear-space.md b/docs-2.0-en/3.ngql-guide/9.space-statements/6.clear-space.md
index cacb6667d8c..1acc884ede0 100644
--- a/docs-2.0-en/3.ngql-guide/9.space-statements/6.clear-space.md
+++ b/docs-2.0-en/3.ngql-guide/9.space-statements/6.clear-space.md
@@ -4,7 +4,7 @@
 
 !!! note
 
-    It is recommended to execute [`SUBMIT JOB COMPACT`](../../4.job-statements/#submit_job_compact) immediately after executing the `CLEAR SPACE` operation improve the query performance. Note that the COMPACT operation may affect query performance, and it is recommended to perform this operation during low business hours (e.g., early morning).
+    It is recommended to execute [SUBMIT JOB COMPACT](../4.job-statements.md#submit_job_compact) immediately after executing the `CLEAR SPACE` operation to improve the query performance. Note that the COMPACT operation may affect query performance, and it is recommended to perform this operation during low business hours (e.g., early morning).
 
 ## Permission requirements
diff --git a/docs-2.0-en/4.deployment-and-installation/2.compile-and-install-nebula-graph/6.deploy-nebula-graph-with-peripherals.md b/docs-2.0-en/4.deployment-and-installation/2.compile-and-install-nebula-graph/6.deploy-nebula-graph-with-peripherals.md
index 1c013957a8a..8509bdbc2e8 100644
--- a/docs-2.0-en/4.deployment-and-installation/2.compile-and-install-nebula-graph/6.deploy-nebula-graph-with-peripherals.md
+++ b/docs-2.0-en/4.deployment-and-installation/2.compile-and-install-nebula-graph/6.deploy-nebula-graph-with-peripherals.md
@@ -6,11 +6,4 @@ You can install the NebulaGraph Community Edition with the following ecosystem t
 
 ## Installation details
 
-- To install NebulaGraph with **NebulaGraph Dashboard Enterprise Edition**, see [Create a cluster](../../nebula-dashboard-ent/3.create-import-dashboard/1.create-cluster.md).
-
 - To install NebulaGraph with **NebulaGraph Operator**, see [Install NebulaGraph clusters](../../k8s-operator/4.cluster-administration/4.1.installation/4.1.1.cluster-install.md).
-
-
-!!! note
-
-    [Contact us](https://www.nebula-graph.io/contact) to get the installation package for the Enterprise Edition of NebulaGraph. 
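The `CLEAR SPACE` note changed above recommends following the operation with a compaction. A minimal nGQL sketch of that sequence, assuming a space named `basketballplayer` already exists (the space name is illustrative); `SUBMIT JOB COMPACT` runs against the current working space, so a `USE` statement has to come first:

```ngql
nebula> CLEAR SPACE IF EXISTS basketballplayer;
nebula> USE basketballplayer;
nebula> SUBMIT JOB COMPACT;
```

As the note says, the compaction itself can slow down concurrent queries, so run the whole sequence during low business hours.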
diff --git a/docs-2.0-en/k8s-operator/2.get-started/2.3.create-cluster.md b/docs-2.0-en/k8s-operator/2.get-started/2.3.create-cluster.md index baaa3d6cbf1..0d05064206f 100644 --- a/docs-2.0-en/k8s-operator/2.get-started/2.3.create-cluster.md +++ b/docs-2.0-en/k8s-operator/2.get-started/2.3.create-cluster.md @@ -8,7 +8,7 @@ This topic introduces how to create a {{nebula.name}} cluster with the following ## Prerequisites - [NebulaGraph Operator is installed.](2.1.install-operator.md) -- [LM is installed and the License Key is loaded.](2.2.deploy-lm.md) + - [A StorageClass is created.](https://kubernetes.io/docs/concepts/storage/storage-classes/) ## Create a {{nebula.name}} cluster with Helm @@ -43,32 +43,10 @@ This topic introduces how to create a {{nebula.name}} cluster with the following kubectl create namespace "${NEBULA_CLUSTER_NAMESPACE}" ``` -5. Create a Secret for pulling the NebulaGraph cluster image from a private repository. - - ```bash - kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" create secret docker-registry \ - --docker-server=DOCKER_REGISTRY_SERVER \ - --docker-username=DOCKER_USER \ - --docker-password=DOCKER_PASSWORD - ``` - - - ``: Specify the name of the Secret. - - `DOCKER_REGISTRY_SERVER`: Specify the server address of the private repository from which the image will be pulled, such as `reg.example-inc.com`. - - `DOCKER_USER`: The username for the image repository. - - `DOCKER_PASSWORD`: The password for the image repository. - -6. Apply the variables to the Helm chart to create a NebulaGraph cluster. +5. Apply the variables to the Helm chart to create a NebulaGraph cluster. ```bash helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ - # Configure the access address and port (default port is '9119') that points to the LM. You must configure this parameter in order to obtain the license information. Only for NebulaGraph Enterprise Edition clusters. - --set nebula.metad.licenseManagerURL="192.168.8.XXX:9119" \ - # Configure the image addresses for each service in the cluster. - --set nebula.graphd.image="" \ - --set nebula.metad.image="" \ - --set nebula.storaged.image="" \ - # Configure the Secret for pulling images from a private repository. - --set imagePullSecrets[0].name="{}" \ --set nameOverride="${NEBULA_CLUSTER_NAME}" \ --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \ # Specify the version of the NebulaGraph cluster. @@ -79,9 +57,6 @@ This topic introduces how to create a {{nebula.name}} cluster with the following --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ ``` - NebulaGraph Operator supports creating clusters with zones. For more information, see [Install NebulaGraph clusters](../4.cluster-administration/4.1.installation/4.1.1.cluster-install.md). - - ## Create a {{nebula.name}} cluster with Kubectl !!! compatibility "Legacy version compatibility" @@ -97,22 +72,9 @@ The following example shows how to create a NebulaGraph cluster by creating a cl kubectl create namespace nebula ``` -2. Create a Secret for pulling the NebulaGraph Enterprise image from a private repository. - - ```bash - kubectl -n create secret docker-registry \ - --docker-server=DOCKER_REGISTRY_SERVER \ - --docker-username=DOCKER_USER \ - --docker-password=DOCKER_PASSWORD - ``` - - ``: The namespace where this Secret will be stored. - - ``: Specify the name of the Secret. - - `DOCKER_REGISTRY_SERVER`: Specify the server address of the private repository from which the image will be pulled, such as `reg.example-inc.com`. 
- - `DOCKER_USER`: The username for the image repository. - - `DOCKER_PASSWORD`: The password for the image repository. +2. Define the cluster configuration file `nebulacluster.yaml`. -3. Define the cluster configuration file. ??? info "Expand to see an example configuration for the cluster" @@ -128,7 +90,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl whenUnsatisfiable: "ScheduleAnyway" graphd: # Container image for the Graph service. - image: reg.example-inc.com/xxx/xxx + image: vesoft/nebula-graphd logVolumeClaim: resources: requests: @@ -145,14 +107,9 @@ The following example shows how to create a NebulaGraph cluster by creating a cl memory: 500Mi version: v{{nebula.release}} imagePullPolicy: Always - # Secret for pulling images from a private repository. - imagePullSecrets: - - name: secret-name metad: - # LM access address and port number for obtaining License information. - licenseManagerURL: 192.168.x.xxx:9119 # Container image for the Meta service. - image: reg.example-inc.com/xxx/xxx + image: vesoft/nebula-metad logVolumeClaim: resources: requests: @@ -178,7 +135,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl schedulerName: default-scheduler storaged: # Container image for the Storage service. - image: reg.example-inc.com/xxx/xxx + image: vesoft/nebula-storaged logVolumeClaim: resources: requests: @@ -200,22 +157,13 @@ The following example shows how to create a NebulaGraph cluster by creating a cl version: v{{nebula.release}} ``` - The following parameters must be customized: - - - `spec.metad.licenseManagerURL`: Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. - - `spec..image`: Specify the container image of the Graph, Meta, and Storage service respectively. - - `spec.imagePullSecrets`: Specify the Secret for pulling the NebulaGraph Enterprise service images from a private repository. - - `spec..logVolumeClaim.storageClassName`: Specify the log disk storage configurations for the Graph, Meta, and Storage service respectively. - - `spec.metad.dataVolumeClaim.storageClassName`: Specify the data disk storage configurations for the Meta service. - - `spec.storaged.dataVolumeClaims.storageClassName`: Specify the data disk storage configurations for the Storage service. - For more information about the other parameters, see [Install NebulaGraph clusters](../4.cluster-administration/4.1.installation/4.1.1.cluster-install.md). -4. Create a NebulaGraph cluster. +3. Create a NebulaGraph cluster. ```bash - kubectl create -f apps_v1alpha1_nebulacluster.yaml + kubectl create -f nebulacluster.yaml ``` Output: @@ -224,7 +172,7 @@ The following example shows how to create a NebulaGraph cluster by creating a cl nebulacluster.apps.nebula-graph.io/nebula created ``` -5. Check the status of the NebulaGraph cluster. +4. Check the status of the NebulaGraph cluster. ```bash kubectl get nc nebula diff --git a/docs-2.0-en/k8s-operator/3.operator-management/3.1.customize-installation.md b/docs-2.0-en/k8s-operator/3.operator-management/3.1.customize-installation.md index d29c6b35a2f..7eee156d692 100644 --- a/docs-2.0-en/k8s-operator/3.operator-management/3.1.customize-installation.md +++ b/docs-2.0-en/k8s-operator/3.operator-management/3.1.customize-installation.md @@ -66,7 +66,7 @@ Part of the above parameters are described as follows: | `controllerManager.replicas` | `2` | The number of controller-manager replicas. 
|
| `admissionWebhook.create` | `false` | Whether to enable Admission Webhook. This option is disabled. To enable it, set the value to `true` and you will need to install [cert-manager](https://cert-manager.io/docs/installation/helm/). For details, see [Enable admission control](../4.cluster-administration/4.7.security/4.7.2.enable-admission-control.md). |
| `scheduler.create` | `true` | Whether to enable Scheduler. |
-| `shceduler.schedulerName` | `nebula-scheduler` | The name of the scheduler customized by NebulaGraph Operator. It is used to evenly distribute Storage Pods across different [zones](../4.cluster-administration/4.8.ha-and-balancing/4.8.2.enable-zone.md). |
+| `scheduler.schedulerName` | `nebula-scheduler` | The name of the scheduler customized by NebulaGraph Operator. It is used to evenly distribute Storage Pods across different zones, which are only available in the Enterprise Edition. |
| `scheduler.replicas` | `2` | The number of nebula-scheduler replicas. |
diff --git a/docs-2.0-en/k8s-operator/4.cluster-administration/4.1.installation/4.1.1.cluster-install.md b/docs-2.0-en/k8s-operator/4.cluster-administration/4.1.installation/4.1.1.cluster-install.md
index 1390bdbd0db..1ddaca1284b 100644
--- a/docs-2.0-en/k8s-operator/4.cluster-administration/4.1.installation/4.1.1.cluster-install.md
+++ b/docs-2.0-en/k8s-operator/4.cluster-administration/4.1.installation/4.1.1.cluster-install.md
@@ -10,7 +10,6 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu
 
 - [Install NebulaGraph Operator](../../2.get-started/2.1.install-operator.md)
 - [Create a StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/)
-- [Install and Load the License Key](../../2.get-started/2.2.deploy-lm.md)
 
 ## Use `kubectl apply`
 
@@ -20,22 +19,7 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu
    kubectl create namespace nebula
    ```
 
-2. Create a Secret for pulling NebulaGraph images from a private registry.
-
-   ```bash
-   kubectl -n create secret docker-registry \
-   --docker-server=DOCKER_REGISTRY_SERVER \
-   --docker-username=DOCKER_USER \
-   --docker-password=DOCKER_PASSWORD
-   ```
-
-   - ``: Namespace to store the Secret.
-   - ``: Name of the Secret.
-   - `DOCKER_REGISTRY_SERVE`: Private registry server address for pulling images, for example, `reg.example-inc.com`.
-   - `DOCKER_USE`: Username for the image registry.
-   - `DOCKER_PASSWORD`: Password for the image registry.
-
-3. Create a YAML configuration file for the cluster. For example, create a cluster named `nebula`.
+2. Create a YAML configuration file `nebulacluster.yaml` for the cluster. For example, create a cluster named `nebula`.
 
    ??? info "Expand to view an example configuration for the `nebula` cluster"
 
@@ -52,8 +36,6 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu
            whenUnsatisfiable: "ScheduleAnyway"
          # Enable PV recycling.
          enablePVReclaim: false
-         # Enable the backup and restore feature.
-         enableBR: false
          # Enable monitoring.
          exporter:
            image: vesoft/nebula-stats-exporter
@@ -71,9 +53,6 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu
            limits:
              cpu: "200m"
              memory: "256Mi"
-         # Secret for pulling images from a private registry.
-         imagePullSecrets:
-         - name: secret-name
          # Configure the image pull policy.
          imagePullPolicy: Always
          # Select the nodes for Pod scheduling. 
@@ -105,7 +84,7 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu
          #   successThreshold: 1
          #   timeoutSeconds: 10
          # Container image for the Graph service.
-         image: reg.example-inc.com/xxx/xxx
+         image: vesoft/nebula-graphd
          logVolumeClaim:
            resources:
              requests:
@@ -128,8 +107,6 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu
          config: {}
        # Meta service configuration.
        metad:
-         # LM access address and port, used to obtain License information.
-         licenseManagerURL: 192.168.x.xxx:9119
          # readinessProbe:
          #   failureThreshold: 3
          #   httpGet:
@@ -141,7 +118,7 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu
          #   successThreshold: 1
          #   timeoutSeconds: 5
          # Container image for the Meta service.
-         image: reg.example-inc.com/xxx/xxx
+         image: vesoft/nebula-metad
          logVolumeClaim:
            resources:
              requests:
@@ -176,7 +153,7 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu
          #   successThreshold: 1
          #   timeoutSeconds: 5
          # Container image for the Storage service.
-         image: reg.example-inc.com/xxx/xxx
+         image: vesoft/nebula-storaged
          logVolumeClaim:
            resources:
              requests:
@@ -200,18 +177,12 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu
          config: {}
        ```
 
-    When creating the YAML configuration file for the cluster, you must customize the following parameters. For more detailed information about these parameters, see the **Cluster configuration parameters** section below.
-
-    - `spec.metad.licenseManagerURL`
-    - `spec..image`
-    - `spec.imagePullSecrets`
-    - `spec...storageClassName`
-
+    For more detailed information about the configuration parameters, see the **Cluster configuration parameters** section below.
 
-4. Create the NebulaGraph cluster.
+3. Create the NebulaGraph cluster.
 
    ```bash
-   kubectl create -f apps_v1alpha1_nebulacluster.yaml -n nebula
+   kubectl create -f nebulacluster.yaml -n nebula
   ```
 
   Output:
 
@@ -222,7 +193,7 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu
 
   If you don't specify the namespace using `-n`, it will default to the `default` namespace.
 
-5. Check the status of the NebulaGraph cluster.
+4. Check the status of the NebulaGraph cluster.
 
   ```bash
   kubectl get nebulaclusters nebula -n nebula
   ```
 
@@ -232,7 +203,7 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu
   ```bash
   NAME     READY   GRAPHD-DESIRED   GRAPHD-READY   METAD-DESIRED   METAD-READY   STORAGED-DESIRED   STORAGED-READY   AGE
-  nebula2   True    1                1              1               1             1                  1                86s
+  nebula   True    1                1              1               1             1                  1                86s
   ```
 
 ## Use `helm`
 
@@ -263,21 +234,7 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu
   kubectl create namespace "${NEBULA_CLUSTER_NAMESPACE}"
   ```
 
-5. Create a Secret for pulling images from a private repository.
-
-   ```bash
-   kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" create secret docker-registry \
-   --docker-server=DOCKER_REGISTRY_SERVER \
-   --docker-username=DOCKER_USER \
-   --docker-password=DOCKER_PASSWORD
-   ```
-
-   - ``: Specify the name of the Secret.
-   - `DOCKER_REGISTRY_SERVER`: Specify the address of the private image repository (e.g., `reg.example-inc.com`).
-   - `DOCKER_USER`: Username for the image repository.
-   - `DOCKER_PASSWORD`: Password for the image repository.
-
-6. 
Check the customizable configuration parameters for the `nebula-cluster` Helm chart of the `nebula-operator` when creating the cluster. - Run the following command to view all the configurable parameters. @@ -287,7 +244,7 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu - Visit [nebula-cluster/values.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/charts/nebula-cluster/values.yaml) to see all the configuration parameters for the NebulaGraph cluster. Click on [Chart parameters](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/doc/user/nebula_cluster_helm_guide.md#optional-chart-parameters) to see the parameter descriptions and their default values. -7. Create the NebulaGraph cluster. +6. Create the NebulaGraph cluster. You can use the `--set` flag to customize the default values of the NebulaGraph cluster configuration. For example, `--set nebula.storaged.replicas=3` sets the number of replicas for the Storage service to 3. @@ -298,23 +255,14 @@ Using NebulaGraph Operator to install NebulaGraph clusters enables automated clu --version={{operator.release}} \ # Specify the namespace for the NebulaGraph cluster. --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ - # Configure the Secret for pulling images from the private repository. - --set imagePullSecrets[0].name="{}" \ # Customize the chart release name. --set nameOverride="${NEBULA_CLUSTER_NAME}" \ - # Configure the LM (License Manager) access address and port, with the default port being '9119'. - # You must configure this parameter to obtain the License information. - --set nebula.metad.licenseManagerURL="192.168.8.XXX:9119" \ - # Configure the image addresses for various services in the cluster. - --set nebula.graphd.image="" \ - --set nebula.metad.image="" \ - --set nebula.storaged.image="" \ --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \ # Specify the version for the NebulaGraph cluster. --set nebula.version=v{{nebula.release}} ``` -8. Check the status of NebulaGraph cluster pods. +7. Check the status of NebulaGraph cluster pods. ```bash kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" get pod -l "app.kubernetes.io/cluster=${NEBULA_CLUSTER_NAME}" @@ -367,11 +315,3 @@ The table below lists the configurable parameters and their descriptions for cre | `spec.imagePullPolicy` | `Always` | The image pull policy for NebulaGraph images. For more details on pull policies, please see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | | `spec.logRotate` | `{}` | Log rotation configuration. For details, see [Managing Cluster Logs](../4.5.logging.md). | | `spec.enablePVReclaim` | `false` | Defines whether to automatically delete PVCs after deleting the cluster to release data. For details, see [Reclaim PV](../4.4.storage-management/4.4.3.configure-pv-reclaim.md). | -| `spec.metad.licenseManagerURL` | - | Configures the URL pointing to the License Manager (LM), consisting of the access address and port (default port `9119`). For example, `192.168.8.xxx:9119`. **You must configure this parameter to obtain the License information; otherwise, the NebulaGraph cluster will not function.** | -| `spec.storaged.enableAutoBalance` | `false` | Whether to enable automatic balancing. For details, see [Balancing Storage Data After Scaling Out](../4.8.ha-and-balancing/4.8.3.balance-data-after-scale-out.md). | -| `spec.enableBR` | `false` | Defines whether to enable the BR tool. 
For details, see [Backup and Restore](../4.6.backup-and-restore.md). |
-| `spec.imagePullSecrets` | `[]` | Defines the Secret required to pull images from a private repository. |
-
-## Related topics
-
-[Enabling Zones in a Cluster](../4.8.ha-and-balancing/4.8.2.enable-zone.md)
\ No newline at end of file
diff --git a/docs-2.0-en/k8s-operator/4.cluster-administration/4.1.installation/4.1.3.cluster-uninstall.md b/docs-2.0-en/k8s-operator/4.cluster-administration/4.1.installation/4.1.3.cluster-uninstall.md
index c016ed967ba..36ff108e35a 100644
--- a/docs-2.0-en/k8s-operator/4.cluster-administration/4.1.installation/4.1.3.cluster-uninstall.md
+++ b/docs-2.0-en/k8s-operator/4.cluster-administration/4.1.installation/4.1.3.cluster-uninstall.md
@@ -70,7 +70,7 @@ Deletion is only supported for NebulaGraph clusters created with the NebulaGraph
 
   Example output:
 
-  ```bash
+  ```yaml
   USER-SUPPLIED VALUES:
   imagePullSecrets:
   - name: secret_for_pull_image
diff --git a/docs-2.0-en/k8s-operator/4.cluster-administration/4.7.security/4.7.2.enable-admission-control.md b/docs-2.0-en/k8s-operator/4.cluster-administration/4.7.security/4.7.2.enable-admission-control.md
index b5e88d1ef26..d5cb53811d7 100644
--- a/docs-2.0-en/k8s-operator/4.cluster-administration/4.7.security/4.7.2.enable-admission-control.md
+++ b/docs-2.0-en/k8s-operator/4.cluster-administration/4.7.security/4.7.2.enable-admission-control.md
@@ -4,7 +4,9 @@ Kubernetes [Admission Control](https://kubernetes.io/docs/reference/access-authn
 
 ## Prerequisites
 
-You have already created a cluster using Kubernetes. For detailed steps, see [Creating a NebulaGraph Cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md).
+
+A NebulaGraph cluster is created with NebulaGraph Operator. For detailed steps, see [Create a NebulaGraph cluster](../4.1.installation/4.1.1.cluster-install.md).
+
 
 ## Admission control rules
 
@@ -18,7 +20,8 @@ Kubernetes admission control allows you to insert custom logic or policies befor
 
 !!! note
 
-    High availability mode refers to the high availability of NebulaGraph cluster services. Storage and Meta services are stateful, and the number of replicas should be an odd number due to [Raft](../../1.introduction/3.nebula-graph-architecture/4.storage-service.md#raft) protocol requirements for data consistency. In high availability mode, at least 3 Storage services and 3 Meta services are required. Graph services are stateless, so their number of replicas can be even but should be at least 2.
+    High availability mode refers to the high availability of NebulaGraph cluster services. Storage and Meta services are stateful, and the number of replicas should be an odd number due to [Raft](../../../1.introduction/3.nebula-graph-architecture/4.storage-service.md#raft) protocol requirements for data consistency. In high availability mode, at least 3 Storage services and 3 Meta services are required. Graph services are stateless, so their number of replicas can be even but should be at least 2.
+
 
 - Preventing additional PVs from being added to Storage service via `dataVolumeClaims`.
 
diff --git a/docs-2.0-en/nebula-operator/2.deploy-nebula-operator.md b/docs-2.0-en/nebula-operator/2.deploy-nebula-operator.md
deleted file mode 100644
index 4b5e5da0591..00000000000
--- a/docs-2.0-en/nebula-operator/2.deploy-nebula-operator.md
+++ /dev/null
@@ -1,267 +0,0 @@
-# Deploy NebulaGraph Operator
-
-You can deploy NebulaGraph Operator with [Helm](https://helm.sh/). 
- -## Background - -[NebulaGraph Operator](1.introduction-to-nebula-operator.md) automates the management of NebulaGraph clusters, and eliminates the need for you to install, scale, upgrade, and uninstall NebulaGraph clusters, which lightens the burden on managing different application versions. - -## Prerequisites - -Before installing NebulaGraph Operator, you need to install the following software and ensure the correct version of the software : - -| Software | Requirement | -| ------------------------------------------------------------ | --------- | -| [Kubernetes](https://kubernetes.io) | \>= 1.16 | -| [Helm](https://helm.sh) | \>= 3.2.0 | -| [CoreDNS](https://github.com/coredns/coredns) | \>= 1.6.0 | - -!!! note - - - If using a role-based access control policy, you need to enable [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (optional). - - - [CoreDNS](https://coredns.io/) is a flexible and scalable DNS server that is [installed](https://github.com/coredns/helm) for Pods in NebulaGraph clusters. - -## Steps - -### Install NebulaGraph Operator - -1. Add the NebulaGraph Operator Helm repository. - - ```bash - helm repo add nebula-operator https://vesoft-inc.github.io/nebula-operator/charts - ``` - -2. Update information of available charts locally from repositories. - - ```bash - helm repo update - ``` - - For more information about `helm repo`, see [Helm Repo](https://helm.sh/docs/helm/helm_repo/). - -3. Create a namespace for NebulaGraph Operator. - - ```bash - kubectl create namespace - ``` - - For example, run the following command to create a namespace named `nebula-operator-system`. - - ```bash - kubectl create namespace nebula-operator-system - ``` - - All the resources of NebulaGraph Operator are deployed in this namespace. - -4. Install NebulaGraph Operator. - - ```bash - helm install nebula-operator nebula-operator/nebula-operator --namespace= --version=${chart_version} - ``` - - For example, the command to install NebulaGraph Operator of version {{operator.release}} is as follows. - - ```bash - helm install nebula-operator nebula-operator/nebula-operator --namespace=nebula-operator-system --version={{operator.release}} - ``` - - - `nebula-operator-system` is a user-created namespace name. If you have not created this namespace, run `kubectl create namespace nebula-operator-system` to create one. You can also use a different name. - - - `{{operator.release}}` is the version of the nebula-operator chart. When not specifying `--version`, the latest version of the nebula-operator chart is used by default. Run `helm search repo -l nebula-operator` to see chart versions. - - You can customize the configuration items of the NebulaGraph Operator chart before running the installation command. For more information, see **Customize Helm charts** below. - -### Customize Helm charts - -When executing the `helm install [NAME] [CHART] [flags]` command to install a chart, you can specify the chart configuration. For more information, see [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). - -View the related configuration options in the [nebula-operator chart](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/charts/nebula-operator/values.yaml) configuration file. - -Alternatively, you can view the configurable options through the command `helm show values nebula-operator/nebula-operator`, as shown below. 
- - -For example: - -```yaml -[k8s@master ~]$ helm show values nebula-operator/nebula-operator -image: - nebulaOperator: - image: vesoft/nebula-operator:{{operator.tag}} - imagePullPolicy: Always - kubeRBACProxy: - image: bitnami/kube-rbac-proxy:0.14.2 - imagePullPolicy: Always - kubeScheduler: - image: registry.k8s.io/kube-scheduler:v1.24.11 - imagePullPolicy: Always - -imagePullSecrets: [] -kubernetesClusterDomain: "" - -controllerManager: - create: true - replicas: 2 - env: [] - resources: - limits: - cpu: 200m - memory: 200Mi - requests: - cpu: 100m - memory: 100Mi - -admissionWebhook: - create: false - -scheduler: - create: true - schedulerName: nebula-scheduler - replicas: 2 - env: [] - resources: - limits: - cpu: 200m - memory: 20Mi - requests: - cpu: 100m - memory: 100Mi -``` - -Part of the above parameters are described as follows: - -| Parameter | Default value | Description | -| :------------------------------------- | :------------------------------ | :----------------------------------------- | -| `image.nebulaOperator.image` | `vesoft/nebula-operator:{{operator.tag}}` | The image of NebulaGraph Operator, version of which is {{operator.release}}. | -| `image.nebulaOperator.imagePullPolicy` | `IfNotPresent` | The image pull policy in Kubernetes. | -| `imagePullSecrets` | - | The image pull secret in Kubernetes. | -| `kubernetesClusterDomain` | `cluster.local` | The cluster domain. | -| `controllerManager.create` | `true` | Whether to enable the controller-manager component. | -| `controllerManager.replicas` | `2` | The number of controller-manager replicas. | -| `admissionWebhook.create` | `false` | Whether to enable Admission Webhook. This option is disabled. To enable it, set the value to `true` and you will need to install [cert-manager](https://cert-manager.io/docs/installation/helm/). | -| `shceduler.create` | `true` | Whether to enable Scheduler. | -| `shceduler.schedulerName` | `nebula-scheduler` | The Scheduler name. | -| `shceduler.replicas` | `2` | The number of nebula-scheduler replicas. | - -You can run `helm install [NAME] [CHART] [flags]` to specify chart configurations when installing a chart. For more information, see [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). - -The following example shows how to specify the NebulaGraph Operator's AdmissionWebhook mechanism to be turned on when you install NebulaGraph Operator (AdmissionWebhook is disabled by default): - -```bash -helm install nebula-operator nebula-operator/nebula-operator --namespace= --set admissionWebhook.create=true -``` - -For more information about `helm install`, see [Helm Install](https://helm.sh/docs/helm/helm_install/). - -### Update NebulaGraph Operator - -1. Update the information of available charts locally from chart repositories. - - ```bash - helm repo update - ``` - -1. Update NebulaGraph Operator by passing configuration parameters via `--set`. - - - `--set`:Overrides values using the command line. For configurable items, see the above-mentioned section **Customize Helm charts**. - - For example, to enable the AdmissionWebhook, run the following command: - - ```bash - helm upgrade nebula-operator nebula-operator/nebula-operator --namespace=nebula-operator-system --version={{operator.release}} --set admissionWebhook.create=true - ``` - - For more information, see [Helm upgrade](https://helm.sh/docs/helm/helm_update/). - -### Upgrade NebulaGraph Operator - -!!! 
compatibility "Legacy version compatibility" - - - Does not support upgrading 0.9.0 and below version NebulaGraph Operator to 1.x. - - The 1.x version NebulaGraph Operator is not compatible with NebulaGraph of version below v3.x. - -1. Update the information of available charts locally from chart repositories. - - ```bash - helm repo update - ``` - -2. Upgrade Operator to {{operator.tag}}. - - ```bash - helm upgrade nebula-operator nebula-operator/nebula-operator --namespace= --version={{operator.release}} - ``` - - For example: - - ```bash - helm upgrade nebula-operator nebula-operator/nebula-operator --namespace=nebula-operator-system --version={{operator.release}} - ``` - - Output: - - ```bash - Release "nebula-operator" has been upgraded. Happy Helming! - NAME: nebula-operator - LAST DEPLOYED: Tue Apr 16 02:21:08 2022 - NAMESPACE: nebula-operator-system - STATUS: deployed - REVISION: 3 - TEST SUITE: None - NOTES: - NebulaGraph Operator installed! - ``` - -3. Pull the latest CRD configuration file. - - !!! note - You need to upgrade the corresponding CRD configurations after NebulaGraph Operator is upgraded. Otherwise, the creation of NebulaGraph clusters will fail. For information about the CRD configurations, see [apps.nebula-graph.io_nebulaclusters.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.tag}}/config/crd/bases/apps.nebula-graph.io_nebulaclusters.yaml). - - 1. Pull the NebulaGraph Operator chart package. - - ```bash - helm pull nebula-operator/nebula-operator --version={{operator.release}} - ``` - - - `--version`: The NebulaGraph Operator version you want to upgrade to. If not specified, the latest version will be pulled. - - 2. Run `tar -zxvf` to unpack the charts. - - For example: To unpack {{operator.tag}} chart to the `/tmp` path, run the following command: - - ```bash - tar -zxvf nebula-operator-{{operator.release}}.tgz -C /tmp - ``` - - - `-C /tmp`: If not specified, the chart files will be unpacked to the current directory. - - -4. Upgrade the CRD configuration file in the `nebula-operator` directory. - - ```bash - kubectl apply -f crds/nebulacluster.yaml - ``` - - Output: - - ```bash - customresourcedefinition.apiextensions.k8s.io/nebulaclusters.apps.nebula-graph.io configured - ``` - -### Uninstall NebulaGraph Operator - -1. Uninstall the NebulaGraph Operator chart. - - ```bash - helm uninstall nebula-operator --namespace= - ``` - -2. Delete CRD. - - ```bash - kubectl delete crd nebulaclusters.apps.nebula-graph.io - ``` - -## What's next - -Automate the deployment of NebulaGraph clusters with NebulaGraph Operator. For more information, see [Deploy NebulaGraph Clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph Clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). diff --git a/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md deleted file mode 100644 index 18ce95076db..00000000000 --- a/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ /dev/null @@ -1,99 +0,0 @@ -# Deploy NebulaGraph clusters with Kubectl - -!!! compatibility "Legacy version compatibility" - - The 1.x version NebulaGraph Operator is not compatible with NebulaGraph of version below v3.x. 
- -## Prerequisites - -- [You have installed NebulaGraph Operator](../2.deploy-nebula-operator.md) - -- [You have created StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) - -## Create clusters - -The following example shows how to create a NebulaGraph cluster by creating a cluster named `nebula`. - -1. Create a namespace, for example, `nebula`. If not specified, the `default` namespace is used. - - ```bash - kubectl create namespace nebula - ``` - -2. Create a file named `apps_v1alpha1_nebulacluster.yaml`. - - See [community cluster configurations](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/nebulacluster.yaml). - - The following table describes the parameters in the sample configuration file. - - | Parameter | Default value | Description | - | :---------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | - | `metadata.name` | - | The name of the created NebulaGraph cluster. | - | `spec.console` | - | Configuration of the Console service. For details, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). | - | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | - | `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. | - | `spec.graphd.version` | `v3.6.0` | The version of the Graphd service. | - | `spec.graphd.service` | - | The Service configurations for the Graphd service. | - | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. | - | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. | - | `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. | - | `spec.metad.version` | `v3.6.0` | The version of the Metad service. | - | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. | - | `spec.metad.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Metad service. | - | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. | - | `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. | - | `spec.storaged.version` | `v3.6.0` | The version of the Storaged service. | - | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc. | - | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. | - | `spec.storaged.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Storaged service. | - | `spec.storaged.enableAutoBalance` | `true` | Whether to balance data automatically. 
| - | `spec..securityContext` | `{}` | Defines privilege and access control settings for NebulaGraph service containers. For details, see [SecurityContext](https://github.com/vesoft-inc/nebula-operator/blob/release-1.5/doc/user/security_context.md). | - | `spec.agent` | `{}` | Configuration of the Agent service. This is used for backup and recovery as well as log cleanup functions. If you do not customize this configuration, the default configuration will be used. | - | `spec.reference.name` | - | The name of the dependent controller. | - | `spec.schedulerName` | - | The scheduler name. | - | `spec.imagePullPolicy` | The image policy to pull the NebulaGraph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. | - | `spec.logRotate` | - | Log rotation configuration. For more information, see [Manage cluster logs](../8.custom-cluster-configurations/8.4.manage-running-logs.md). | - | `spec.enablePVReclaim` | `false` | Define whether to automatically delete PVCs and release data after deleting the cluster. For more information, see [Reclaim PVs](../8.custom-cluster-configurations/8.2.pv-reclaim.md). | - | | | | - - -3. Create a NebulaGraph cluster. - - ```bash - kubectl create -f apps_v1alpha1_nebulacluster.yaml - ``` - - Output: - - ```bash - nebulacluster.apps.nebula-graph.io/nebula created - ``` - -4. Check the status of the NebulaGraph cluster. - - ```bash - kubectl get nebulaclusters.apps.nebula-graph.io nebula - ``` - - Output: - - ```bash - NAME GRAPHD-DESIRED GRAPHD-READY METAD-DESIRED METAD-READY STORAGED-DESIRED STORAGED-READY AGE - nebula 1 1 1 1 3 3 86s - ``` - -## Scaling clusters - -The cluster scaling feature is for NebulaGraph Enterprise Edition only. - -## Delete clusters - -Run the following command to delete a NebulaGraph cluster with Kubectl: - -```bash -kubectl delete -f apps_v1alpha1_nebulacluster.yaml -``` - -## What's next - -[Connect to NebulaGraph databases](../4.connect-to-nebula-graph-service.md) diff --git a/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md deleted file mode 100644 index bf3c80c80cd..00000000000 --- a/docs-2.0-en/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md +++ /dev/null @@ -1,88 +0,0 @@ -# Deploy NebulaGraph clusters with Helm - -!!! compatibility "Legacy version compatibility" - - The 1.x version NebulaGraph Operator is not compatible with NebulaGraph of version below v3.x. - -## Prerequisite - -- [You have installed NebulaGraph Operator](../2.deploy-nebula-operator.md) - -- [You have created StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) - -## Create clusters - -1. Add the NebulaGraph Operator Helm repository. - - ```bash - helm repo add nebula-operator https://vesoft-inc.github.io/nebula-operator/charts - ``` - -2. Update information of available charts locally from chart repositories. - - ```bash - helm repo update - ``` - -3. Set environment variables to your desired values. - - ```bash - export NEBULA_CLUSTER_NAME=nebula # The desired NebulaGraph cluster name. - export NEBULA_CLUSTER_NAMESPACE=nebula # The desired namespace where your NebulaGraph cluster locates. - export STORAGE_CLASS_NAME=fast-disks # The name of the StorageClass that has been created. - ``` - -4. 
Create a namespace for your NebulaGraph cluster (If you have created one, skip this step). - - ```bash - kubectl create namespace "${NEBULA_CLUSTER_NAMESPACE}" - ``` - -5. Apply the variables to the Helm chart to create a NebulaGraph cluster. - - ```bash - helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ - --set nameOverride=${NEBULA_CLUSTER_NAME} \ - --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \ - # Specify the version of the NebulaGraph cluster. - --set nebula.version=v{{nebula.release}} \ - # Specify the version of the nebula-cluster chart. If not specified, the latest version of the chart is installed by default. - # Run 'helm search repo nebula-operator/nebula-cluster' to view the available versions of the chart. - --version={{operator.release}} \ - --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ - ``` - - To view all configuration parameters of the NebulaGraph cluster, run the `helm show values nebula-operator/nebula-cluster` command or click [nebula-cluster/values.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/charts/nebula-cluster/values.yaml). - - Click [Chart parameters](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/doc/user/nebula_cluster_helm_guide.md#optional-chart-parameters) to see descriptions and default values of the configurable cluster parameters. - - Use the `--set` argument to set configuration parameters for the cluster. For example, `--set nebula.storaged.replicas=3` will set the number of replicas for the Storage service in the cluster to 3. - - -6. Check the status of the NebulaGraph cluster you created. - - ```bash - kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" get pod -l "app.kubernetes.io/cluster=${NEBULA_CLUSTER_NAME}" - ``` - -## Scaling clusters - -The cluster scaling feature is for NebulaGraph Enterprise Edition only. - -## Delete clusters - -Run the following command to delete a NebulaGraph cluster with Helm: - -```bash -helm uninstall "${NEBULA_CLUSTER_NAME}" --namespace="${NEBULA_CLUSTER_NAMESPACE}" -``` - -Or use variable values to delete a NebulaGraph cluster with Helm: - -```bash -helm uninstall nebula --namespace=nebula -``` - -## What's next - -[Connect to NebulaGraph Databases](../4.connect-to-nebula-graph-service.md) diff --git a/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md b/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md deleted file mode 100644 index f75c4564bde..00000000000 --- a/docs-2.0-en/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md +++ /dev/null @@ -1,173 +0,0 @@ -# Customize parameters for a NebulaGraph cluster - -Meta, Storage, and Graph services in a NebulaGraph cluster have their own configuration settings, which are defined in the YAML file of the NebulaGraph cluster instance as `config`. These settings are mapped and loaded into the corresponding service's ConfigMap in Kubernetes. At the time of startup, the configuration present in the ConfigMap is mounted onto the directory `/usr/local/nebula/etc/` for every service. - -!!! note - - It is not available to customize configuration parameters for NebulaGraph Clusters deployed with Helm. - -The structure of `config` is as follows. - -```go -Config map[string]string `json:"config,omitempty"` -``` - -## Prerequisites - -You have created a NebulaGraph cluster. 
For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). - -## Steps - -The following example uses a cluster named `nebula` and the cluster's configuration file named `nebula_cluster.yaml` to show how to set `config` for the Graph service in a NebulaGraph cluster. - -1. Run the following command to access the edit page of the `nebula` cluster. - - ```bash - kubectl edit nebulaclusters.apps.nebula-graph.io nebula - ``` - -2. Customize parameters under the `spec.graphd.config` field. In the following sample, the `enable_authorize` and `auth_type` parameters are used for demonstration purposes. - - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - graphd: - resources: - requests: - cpu: "500m" - memory: "500Mi" - limits: - cpu: "1" - memory: "1Gi" - replicas: 1 - image: vesoft/nebula-graphd - version: {{nebula.tag}} - storageClaim: - resources: - requests: - storage: 2Gi - storageClassName: fast-disks - config: // Custom configuration parameters for the Graph service in a cluster. - "enable_authorize": "true" - "auth_type": "password" - ... - ``` - - The parameters that can be added under the `config` field are listed in detail in the [Meta service configuration parameters](../../5.configurations-and-logs/1.configurations/2.meta-config.md), [Storage service configuration parameters](../../5.configurations-and-logs/1.configurations/4.storage-config.md), and [Graph service configuration parameters](../../5.configurations-and-logs/1.configurations/3.graph-config.md) topics. - - !!! note - - * To update cluster configurations without incurring pod restart, ensure that all parameters added under the `config` field support runtime dynamic modification. Check the **Whether supports runtime dynamic modifications** column of the parameter tables on the aforementioned parameter details pages to see if a parameter supports runtime dynamic modification. - * If one or more parameters that do not support runtime dynamic modification are added under the `config` field, pod restart is required for the parameters to take effect. - - - To add the `config` for the Meta and Storage services, add `spec.metad.config` and `spec.storaged.config` respectively. - -3. Run `kubectl apply -f nebula_cluster.yaml` to push your configuration changes to the cluster. - - After customizing the parameters, the configurations in the corresponding ConfigMap (`nebula-graphd`) of the Graph service will be overwritten. - - -## Customize port configurations - -You can add the `port` and `ws_http_port` parameters under the `config` field to customize port configurations. For details about these two parameters, see the Networking configurations section in [Meta service configuration parameters](../../5.configurations-and-logs/1.configurations/2.meta-config.md), [Storage service configuration parameters](../../5.configurations-and-logs/1.configurations/4.storage-config.md), and [Graph service configuration parameters](../../5.configurations-and-logs/1.configurations/3.graph-config.md). - -!!! note - - * Pod restart is required for the `port` and `ws_http_port` parameters to take effect. - * It is NOT recommnended to modify the `port` parameter after the cluster is started. - -1. Modifiy the cluster configuration file. 
- - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - graphd: - config: - port: "3669" - ws_http_port: "8080" - resources: - requests: - cpu: "200m" - memory: "500Mi" - limits: - cpu: "1" - memory: "1Gi" - replicas: 1 - image: vesoft/nebula-graphd - version: {{nebula.tag}} - metad: - config: - ws_http_port: 8081 - resources: - requests: - cpu: "300m" - memory: "500Mi" - limits: - cpu: "1" - memory: "1Gi" - replicas: 1 - image: vesoft/nebula-metad - version: {{nebula.tag}} - dataVolumeClaim: - resources: - requests: - storage: 2Gi - storageClassName: local-path - storaged: - config: - ws_http_port: 8082 - resources: - requests: - cpu: "300m" - memory: "500Mi" - limits: - cpu: "1" - memory: "1Gi" - replicas: 1 - image: vesoft/nebula-storaged - version: {{nebula.tag}} - dataVolumeClaims: - - resources: - requests: - storage: 2Gi - storageClassName: local-path - enableAutoBalance: true - reference: - name: statefulsets.apps - version: v1 - schedulerName: default-scheduler - imagePullPolicy: IfNotPresent - imagePullSecrets: - - name: nebula-image - enablePVReclaim: true - topologySpreadConstraints: - - topologyKey: kubernetes.io/hostname - whenUnsatisfiable: "ScheduleAnyway" - ``` - -2. Run the `kubectl apply -f nebula_cluster.yaml` to push your configuration changes to the cluster. - -3. Verify that the configuration takes effect. - - ```bash - kubectl get svc - ``` - - Sample response: - - ``` - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - nebula-graphd-headless ClusterIP None 3669/TCP,8080/TCP 10m - nebula-graphd-svc ClusterIP 10.102.13.115 3669/TCP,8080/TCP 10m - nebula-metad-headless ClusterIP None 9559/TCP,8081/TCP 11m - nebula-storaged-headless ClusterIP None 9779/TCP,8082/TCP,9778/TCP 11m - ``` diff --git a/docs-2.0-zh/20.appendix/0.FAQ.md b/docs-2.0-zh/20.appendix/0.FAQ.md index f24d272751b..ceedf5ada7e 100644 --- a/docs-2.0-zh/20.appendix/0.FAQ.md +++ b/docs-2.0-zh/20.appendix/0.FAQ.md @@ -566,8 +566,6 @@ Fail to create a new session from connection pool, fail to authenticate, error: 集群扩缩容功能未正式在社区版中发布。以下涉及`SUBMIT JOB BALANCE DATA REMOVE`和`SUBMIT JOB BALANCE DATA`的操作在社区版中均为实验性功能,功能未稳定。如需使用,请先做好数据备份,并且在 [Graph 配置文件](../5.configurations-and-logs/1.configurations/3.graph-config.md)中将`enable_experimental_feature`和`enable_data_balance`均设置为`true`。 - - ### 如何增加或减少 Meta、Graph、Storage 节点的数量 - {{nebula.name}} {{ nebula.release }} 未提供运维命令以实现自动扩缩容,但可参考以下步骤实现手动扩缩容: @@ -593,11 +591,6 @@ Fail to create a new session from connection pool, fail to authenticate, error: Storage 扩缩容之后,根据需要执行`SUBMIT JOB BALANCE DATA`将当前图空间的分片平均分配到所有 Storage 节点中和执行`SUBMIT JOB BALANCE LEADER`命令均衡分布所有图空间中的 leader。运行命令前,需要选择一个图空间。 - -- 使用悦数运维监控,在可视化页面对 Graph 和 Storage 进行快速扩缩容,详情参见[集群操作-扩缩容](../nebula-dashboard-ent/4.cluster-operator/operator/scale.md)。 - -- 使用 NebulaGraph Operator 扩缩容集群,详情参见[创建{{nebula.name}}集群](../k8s-operator/4.cluster-administration/4.3.scaling/4.3.1.resizing.md)。 - ### 如何在 Storage 节点中增加或减少磁盘 diff --git a/docs-2.0-zh/20.appendix/6.eco-tool-version.md b/docs-2.0-zh/20.appendix/6.eco-tool-version.md index 37738fe8f75..7a50d0ec5c8 100644 --- a/docs-2.0-zh/20.appendix/6.eco-tool-version.md +++ b/docs-2.0-zh/20.appendix/6.eco-tool-version.md @@ -26,13 +26,6 @@ NebulaGraph Studio(简称 Studio)是一款可以通过 Web 访问的图数 |:---|:---| | {{ nebula.tag }} | {{dashboard.tag}}| -## NebulaGraph Stats Exporter - -[nebula-stats-exporter](https://github.com/vesoft-inc/nebula-stats-exporter)将监控数据导入Prometheus。 - -|{{nebula.name}}版本|Stats Exporter 
版本| -|:---|:---| -| {{ nebula.tag }} | {{exporter.tag}}| ## NebulaGraph Exchange diff --git a/docs-2.0-zh/3.ngql-guide/9.space-statements/6.clear-space.md b/docs-2.0-zh/3.ngql-guide/9.space-statements/6.clear-space.md index 4da124f7d5a..f945eec9032 100644 --- a/docs-2.0-zh/3.ngql-guide/9.space-statements/6.clear-space.md +++ b/docs-2.0-zh/3.ngql-guide/9.space-statements/6.clear-space.md @@ -5,7 +5,7 @@ !!! note - 建议在执行`CLEAR SPACE`操作之后,立即执行[`SUBMIT JOB COMPACT`](../../4.job-statements/#submit_job_compact)操作以提升查询性能。需要注意的是,COMPACT 操作可能会影响查询性能,建议在业务低峰期(例如凌晨)执行该操作。 + 建议在执行`CLEAR SPACE`操作之后,立即执行 [SUBMIT JOB COMPACT](../4.job-statements.md#submit_job_compact)操作以提升查询性能。需要注意的是,COMPACT 操作可能会影响查询性能,建议在业务低峰期(例如凌晨)执行该操作。 ## 权限要求 diff --git a/docs-2.0-zh/k8s-operator/2.get-started/2.3.create-cluster.md b/docs-2.0-zh/k8s-operator/2.get-started/2.3.create-cluster.md index a1571878856..2fce4a0b6da 100644 --- a/docs-2.0-zh/k8s-operator/2.get-started/2.3.create-cluster.md +++ b/docs-2.0-zh/k8s-operator/2.get-started/2.3.create-cluster.md @@ -8,7 +8,7 @@ ## 前提条件 - [安装 NebulaGraph Operator](2.1.install-operator.md) -- [已安装 LM 并加载 License Key](2.2.deploy-lm.md) + - [已创建 StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) ## 使用 Helm 创建{{nebula.name}}集群 @@ -43,36 +43,10 @@ kubectl create namespace "${NEBULA_CLUSTER_NAMESPACE}" ``` - -5. 创建 Secret,用于拉取私有仓库中{{nebula.name}}镜像。 - - ```bash - kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" create secret docker-registry \ - --docker-server=DOCKER_REGISTRY_SERVER \ - --docker-username=DOCKER_USER \ - --docker-password=DOCKER_PASSWORD - ``` - - - ``:指定 Secret 的名称。 - - `DOCKER_REGISTRY_SERVE`:指定拉取镜像的私有仓库服务器地址,例如`reg.example-inc.com`。 - - `DOCKER_USE`:镜像仓库用户名。 - - `DOCKER_PASSWORD`:镜像仓库密码。 - - - -6. 创建{{nebula.name}}集群。 +5. 创建{{nebula.name}}集群。 ```bash helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ - - # 配置指向 LM 访问地址和端口,默认端口为`9119`。必须配置此参数以获取 License 信息。 - --set nebula.metad.licenseManagerURL="192.168.8.XXX:9119" \ - # 配置集群中各服务的镜像地址。 - --set nebula.graphd.image="" \ - --set nebula.metad.image="" \ - --set nebula.storaged.image="" \ - # 配置拉取私有仓库中镜像的 Secret。 - --set imagePullSecrets[0].name="{}" \ --set nameOverride="${NEBULA_CLUSTER_NAME}" \ --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \ # 指定{{nebula.name}}集群的版本。 @@ -81,9 +55,7 @@ # 执行 helm search repo - l nebula-operator/nebula-cluster 命令可查看所有 chart 版本。 --version={{operator.release}} \ --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ - ``` - -NebulaGraph Operator 支持通过 Helm 创建带 Zone 的集群,详情请参见[创建{{nebula.name}}集群](../4.cluster-administration/4.1.installation/4.1.1.cluster-install.md)。 + ``` ## 使用 Kubectl 创建{{nebula.name}}集群 @@ -98,24 +70,8 @@ NebulaGraph Operator 支持通过 Helm 创建带 Zone 的集群,详情请参 ```bash kubectl create namespace nebula ``` - -2. 创建 Secret,用于拉取私有仓库中{{nebula.name}}镜像。 - - ```bash - kubectl -n create secret docker-registry \ - --docker-server=DOCKER_REGISTRY_SERVER \ - --docker-username=DOCKER_USER \ - --docker-password=DOCKER_PASSWORD - ``` - - ``:存放该 Secret 的命名空间。 - - ``:指定 Secret 的名称。 - - `DOCKER_REGISTRY_SERVE`:指定拉取镜像的私有仓库服务器地址,例如`reg.example-inc.com`。 - - `DOCKER_USE`:镜像仓库用户名。 - - `DOCKER_PASSWORD`:镜像仓库密码。 - - -3. 创建集群配置文件。 +2. 创建集群配置文件,例如`nebulacluster.yaml`。 ??? 
info "展开查看集群的示例配置" @@ -131,7 +87,7 @@ NebulaGraph Operator 支持通过 Helm 创建带 Zone 的集群,详情请参 whenUnsatisfiable: "ScheduleAnyway" graphd: # Graph 服务的容器镜像。 - image: reg.example-inc.com/xxx/xxx + image: vesoft/nebula-graphd logVolumeClaim: resources: requests: @@ -148,14 +104,9 @@ NebulaGraph Operator 支持通过 Helm 创建带 Zone 的集群,详情请参 memory: 500Mi version: v{{nebula.release}} imagePullPolicy: Always - # 用于从私有仓库拉取镜像的 Secret。 - imagePullSecrets: - - name: secret-name metad: - # LM 访问地址和端口号,用于获取 License 信息。 - licenseManagerURL: 192.168.x.xxx:9119 # Meta 服务的容器镜像。 - image: reg.example-inc.com/xxx/xxx + image: vesoft/nebula-metad logVolumeClaim: resources: requests: @@ -181,7 +132,7 @@ NebulaGraph Operator 支持通过 Helm 创建带 Zone 的集群,详情请参 schedulerName: default-scheduler storaged: # Storage 服务的容器镜像。 - image: reg.example-inc.com/xxx/xxx + image: vesoft/nebula-storaged logVolumeClaim: resources: requests: @@ -201,23 +152,14 @@ NebulaGraph Operator 支持通过 Helm 创建带 Zone 的集群,详情请参 cpu: 500m memory: 500Mi version: v{{nebula.release}} - ``` - - 您必须自定义以下参数: - - - `spec.metad.licenseManagerURL`:指定 [LM](../../9.about-license/2.license-management-suite/3.license-manager.md) 的 URL,由 LM 的访问地址和端口(默认端口`9119`)组成。例如,`192.168.8.100:9119`。 - - `spec..image`:分别指定 Graph,Meta,以及 Storage 服务的容器镜像。 - - `spec.imagePullSecrets`:指定拉取私有仓库中镜像所需的 Secret。 - - `spec..logVolumeClaim.storageClassName`: 分别指定 Graph、Meta 以及 Storage 服务的日志盘存储卷的存储类名称。 - - `spec.metad.dataVolumeClaim.storageClassName`:指定 Meta 服务的数据盘存储配置。 - - `spec.storaged.dataVolumeClaims.storageClassName`:指定 Storage 服务的数据盘存储配置。 + ``` 关于其它参数的详情,请参考[创建{{nebula.name}}集群](../4.cluster-administration/4.1.installation/4.1.1.cluster-install.md)。 -4. 创建{{nebula.name}}集群。 +3. 创建{{nebula.name}}集群。 ```bash - kubectl create -f apps_v1alpha1_nebulacluster.yaml + kubectl create -f nebulacluster.yaml ``` 返回: @@ -225,10 +167,8 @@ NebulaGraph Operator 支持通过 Helm 创建带 Zone 的集群,详情请参 ```bash nebulacluster.apps.nebula-graph.io/nebula created ``` - - {{nebula.name}} Operator 支持通过 Kubectl 创建带 Zone 的集群,详情请参见[创建集群](../4.cluster-administration/4.1.installation/4.1.1.cluster-install.md)。 -5. 查看{{nebula.name}}集群状态。 +4. 
查看{{nebula.name}}集群状态。

    ```bash
    kubectl get nc nebula
diff --git a/docs-2.0-zh/k8s-operator/3.operator-management/3.1.customize-installation.md b/docs-2.0-zh/k8s-operator/3.operator-management/3.1.customize-installation.md
index fb5b3e36f15..d299c2f4755 100644
--- a/docs-2.0-zh/k8s-operator/3.operator-management/3.1.customize-installation.md
+++ b/docs-2.0-zh/k8s-operator/3.operator-management/3.1.customize-installation.md
@@ -68,7 +68,7 @@ scheduler:
 | `controllerManager.replicas` | `2` | controller-manager 副本数。 |
 | `admissionWebhook.create` | `false` | 是否启用 Admission Webhook。默认关闭,如需开启,需设置为`true`并且需要安装 [cert-manager](https://cert-manager.io/docs/installation/helm/)。详情参见[开启准入控制](../4.cluster-administration/4.7.security/4.7.2.enable-admission-control.md)。 |
 | `scheduler.create` | `true` | 是否启用 Scheduler。 |
-| `scheduler.schedulerName` | `nebula-scheduler` | NebulaGraph Operator 自定义的调度器名称。用于开启 [Zone](../4.cluster-administration/4.8.ha-and-balancing/4.8.2.enable-zone.md)时,均匀调度 Storage Pods 到不同的 Zone 中。|
+| `scheduler.schedulerName` | `nebula-scheduler` | NebulaGraph Operator 自定义的调度器名称。开启 Zone 时,用于将 Storage Pods 均匀调度到不同的 Zone 中。仅适用于企业版集群。|
 | `scheduler.replicas` | `2` | nebula-scheduler 副本数。 |

 ## 示例
diff --git a/docs-2.0-zh/k8s-operator/4.cluster-administration/4.1.installation/4.1.1.cluster-install.md b/docs-2.0-zh/k8s-operator/4.cluster-administration/4.1.installation/4.1.1.cluster-install.md
index d9b76247f95..7e86fc3242a 100644
--- a/docs-2.0-zh/k8s-operator/4.cluster-administration/4.1.installation/4.1.1.cluster-install.md
+++ b/docs-2.0-zh/k8s-operator/4.cluster-administration/4.1.installation/4.1.1.cluster-install.md
@@ -10,7 +10,7 @@

 - [安装 NebulaGraph Operator](../../2.get-started/2.1.install-operator.md)
 - [已创建 StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/)
-- [已安装 LM 并加载 License Key](../../2.get-started/2.2.deploy-lm.md)
+

 ## 使用`kubectl apply`

@@ -20,22 +20,7 @@
    kubectl create namespace nebula
    ```

-2. 创建 Secret,用于拉取私有仓库中{{nebula.name}}镜像。
-
-    ```bash
-    kubectl -n  create secret docker-registry \
-    --docker-server=DOCKER_REGISTRY_SERVER \
-    --docker-username=DOCKER_USER \
-    --docker-password=DOCKER_PASSWORD
-    ```
-
-    - ``:存放该 Secret 的命名空间。
-    - ``:指定 Secret 的名称。
-    - `DOCKER_REGISTRY_SERVE`:指定拉取镜像的私有仓库服务器地址,例如`reg.example-inc.com`。
-    - `DOCKER_USE`:镜像仓库用户名。
-    - `DOCKER_PASSWORD`:镜像仓库密码。
-
-3. 创建集群的 YAML 配置文件。例如,创建名为`nebula`的集群。
+2. 创建集群的 YAML 配置文件`nebulacluster.yaml`。例如,创建名为`nebula`的集群。

 ??? 
info "展开查看`nebula`集群的示例配置" @@ -53,8 +38,6 @@ # 是否回收 PV。 enablePVReclaim: false # 是否启用备份和恢复功能。 - enableBR: false - # 是否启用监控功能。 exporter: image: vesoft/nebula-stats-exporter version: v3.3.0 @@ -71,9 +54,6 @@ limits: cpu: "200m" memory: "256Mi" - # 从私有仓库拉取镜像的 Secret。 - imagePullSecrets: - - name: secret-name # 配置镜像拉取策略。 imagePullPolicy: Always # 选择 Pod 被调度的节点。 @@ -105,7 +85,7 @@ # successThreshold: 1 # timeoutSeconds: 10 # Graph 服务的容器镜像。 - image: reg.example-inc.com/xxx/xxx + image: vesoft/nebula-graphd logVolumeClaim: resources: requests: @@ -128,8 +108,6 @@ config: {} # Meta 服务的相关配置。 metad: - # LM 访问地址和端口号,用于获取 License 信息。 - licenseManagerURL: 192.168.x.xxx:9119 # readinessProbe: # failureThreshold: 3 # httpGet: @@ -141,7 +119,7 @@ # successThreshold: 1 # timeoutSeconds: 5 # Meta 服务的容器镜像。 - image: reg.example-inc.com/xxx/xxx + image: vesoft/nebula-metad logVolumeClaim: resources: requests: @@ -176,7 +154,7 @@ # successThreshold: 1 # timeoutSeconds: 5 # Storage 服务的容器镜像。 - image: reg.example-inc.com/xxx/xxx + image: vesoft/nebula-storaged logVolumeClaim: resources: requests: @@ -200,15 +178,10 @@ config: {} ``` - 创建集群的 YAML 配置文件时,必须自定义配置以下参数,其他参数可按需配置。有关参数的详细说明,请参见下文的**集群配置参数**。 - - - `spec.metad.licenseManagerURL` - - `spec..image` - - `spec.imagePullSecrets` - - `spec...storageClassName` + 有关参数的详细说明,请参见下文的**集群配置参数**。 -4. 创建{{nebula.name}}集群。 +3. 创建{{nebula.name}}集群。 ```bash kubectl create -f apps_v1alpha1_nebulacluster.yaml -n nebula @@ -223,7 +196,7 @@ 如果不通过`-n`指定命名空间,默认使用`default`命名空间。 -5. 查看{{nebula.name}}集群状态。 +4. 查看{{nebula.name}}集群状态。 ```bash kubectl get nebulaclusters nebula -n nebula @@ -233,7 +206,7 @@ ```bash NAME READY GRAPHD-DESIRED GRAPHD-READY METAD-DESIRED METAD-READY STORAGED-DESIRED STORAGED-READY AGE - nebula2 True 1 1 1 1 1 1 86s + nebula True 1 1 1 1 1 1 86s ``` ## 使用`helm` @@ -264,23 +237,7 @@ kubectl create namespace "${NEBULA_CLUSTER_NAMESPACE}" ``` - -5. 创建 Secret,用于拉取私有仓库中{{nebula.name}}镜像。 - - ```bash - kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" create secret docker-registry \ - --docker-server=DOCKER_REGISTRY_SERVER \ - --docker-username=DOCKER_USER \ - --docker-password=DOCKER_PASSWORD - ``` - - - ``:指定 Secret 的名称。 - - `DOCKER_REGISTRY_SERVE`:指定拉取镜像的私有仓库服务器地址,例如`reg.example-inc.com`。 - - `DOCKER_USE`:镜像仓库用户名。 - - `DOCKER_PASSWORD`:镜像仓库密码。 - - -6. 查看创建集群时,`nebula-operator`的`nebula-cluster` chart 的可自定义的配置项。 +5. 查看创建集群时,`nebula-operator`的`nebula-cluster` chart 的可自定义的配置项。 - 执行以下命令查看所有可以配置的参数。 @@ -291,7 +248,7 @@ - 单击 [nebula-cluster/values.yaml ](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/charts/nebula-cluster/values.yaml) 查看{{nebula.name}}集群的所有配置参数。单击 [Chart parameters](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/doc/user/nebula_cluster_helm_guide.md#optional-chart-parameters) 查看参数的描述及默认值。 -7. 创建{{nebula.name}}集群。 +6. 
创建{{nebula.name}}集群。 通过`--set`参数自定义{{nebula.name}}集群配置项的默认值,例如,`--set nebula.storaged.replicas=3`可设置{{nebula.name}}集群中 Storage 服务的副本数为 3。 @@ -303,22 +260,15 @@ --version={{operator.release}} \ # 指定{{nebula.name}}集群所处的命名空间。 --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ - # 配置拉取私有仓库中镜像的 Secret。 - --set imagePullSecrets[0].name="{}" \ # 自定义 chart 发布名称。 --set nameOverride="${NEBULA_CLUSTER_NAME}" \ - # 配置指向 LM 访问地址和端口,默认端口为`9119`。必须配置此参数以获取 License 信息。 - --set nebula.metad.licenseManagerURL="192.168.8.XXX:9119" \ - # 配置集群中各服务的镜像地址。 - --set nebula.graphd.image="" \ - --set nebula.metad.image="" \ - --set nebula.storaged.image="" \ --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \ # 指定{{nebula.name}}集群的版本。 --set nebula.version=v{{nebula.release}} ``` -8. 查看{{nebula.name}}集群 Pod 的启动状态。 + +7. 查看{{nebula.name}}集群 Pod 的启动状态。 ```bash kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" get pod -l "app.kubernetes.io/cluster=${NEBULA_CLUSTER_NAME}" @@ -371,12 +321,3 @@ | `spec.imagePullPolicy` | ` Always` | {{nebula.name}}镜像的拉取策略。关于拉取策略详情,请参考 [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy)。 | | `spec.logRotate` | `{}` | 日志轮转配置。详情参见[管理集群日志](../4.5.logging.md)。 | | `spec.enablePVReclaim` | `false` | 定义是否在删除集群后自动删除 PVC 以释放数据。详情参见[回收 PV](../4.4.storage-management/4.4.3.configure-pv-reclaim.md)。 | - | `spec.metad.licenseManagerURL` | - | 配置指向 [LM](../../2.get-started/2.2.deploy-lm.md) 的 URL,由 LM 的访问地址和端口(默认端口`9119`)组成。例如,`192.168.8.xxx:9119`。**必须配置此参数以获取 License 信息,否则无法使用{{nebula.name}}集群。** | - | `spec.storaged.enableAutoBalance` | `false` | 是否启用自动均衡。详情参见[均衡扩容后的 Storage 数据](../4.8.ha-and-balancing/4.8.3.balance-data-after-scale-out.md)。 | - | `spec.enableBR` | `false` | 定义是否启用 BR 工具。详情参见[备份与恢复](../4.6.backup-and-restore.md)。 | - | `spec.imagePullSecrets` | `[]` | 定义拉取私有仓库中镜像所需的 Secret。 | - - -## 相关链接 - -[集群中开启 Zone](../4.8.ha-and-balancing/4.8.2.enable-zone.md) \ No newline at end of file diff --git a/docs-2.0-zh/k8s-operator/4.cluster-administration/4.1.installation/4.1.3.cluster-uninstall.md b/docs-2.0-zh/k8s-operator/4.cluster-administration/4.1.installation/4.1.3.cluster-uninstall.md index 7fd3fc91d32..ae38740f63d 100644 --- a/docs-2.0-zh/k8s-operator/4.cluster-administration/4.1.installation/4.1.3.cluster-uninstall.md +++ b/docs-2.0-zh/k8s-operator/4.cluster-administration/4.1.installation/4.1.3.cluster-uninstall.md @@ -70,7 +70,7 @@ 返回示例: - ```bash + ```yaml USER-SUPPLIED VALUES: imagePullSecrets: - name: secret_for_pull_image diff --git a/docs-2.0-zh/k8s-operator/4.cluster-administration/4.7.security/4.7.2.enable-admission-control.md b/docs-2.0-zh/k8s-operator/4.cluster-administration/4.7.security/4.7.2.enable-admission-control.md index 2b5d916831b..915103bfee2 100644 --- a/docs-2.0-zh/k8s-operator/4.cluster-administration/4.7.security/4.7.2.enable-admission-control.md +++ b/docs-2.0-zh/k8s-operator/4.cluster-administration/4.7.security/4.7.2.enable-admission-control.md @@ -4,7 +4,7 @@ K8s 的[准入控制(Admission Control)](https://kubernetes.io/docs/referenc ## 前提条件 -已使用 K8s 创建一个集群。具体步骤,参见[使用 Kubectl 创建{{nebula.name}}集群](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md)。 +已使用 K8s 创建一个集群。具体步骤,参见[创建{{nebula.name}}集群](../4.1.installation/4.1.1.cluster-install.md)。 ## 准入控制规则 @@ -18,7 +18,7 @@ K8s 的准入控制允许用户在 Kubernetes API Server 处理请求之前, !!! 
note - 高可用模式是指{{nebula.name}}集群服务的高可用。Storage 服务和 Meta 服务是有状态的服务,其副本数据通过 [Raft](../../1.introduction/3.nebula-graph-architecture/4.storage-service.md#raft) 协议保持一致性且副本数量不能为偶数。因此,高可用模式下,至少需要 3 个 Storage 服务和 3 个 Meta 服务。Graph 服务为无状态的服务,因此其副本数量可以为偶数,但至少需要 2 个副本。 + 高可用模式是指{{nebula.name}}集群服务的高可用。Storage 服务和 Meta 服务是有状态的服务,其副本数据通过 [Raft](../../../1.introduction/3.nebula-graph-architecture/4.storage-service.md#raft) 协议保持一致性且副本数量不能为偶数。因此,高可用模式下,至少需要 3 个 Storage 服务和 3 个 Meta 服务。Graph 服务为无状态的服务,因此其副本数量可以为偶数,但至少需要 2 个副本。 - 禁止通过`dataVolumeClaims`为 Storage 服务追加额外的 PV。 diff --git a/docs-2.0-zh/nebula-operator/2.deploy-nebula-operator.md b/docs-2.0-zh/nebula-operator/2.deploy-nebula-operator.md deleted file mode 100644 index 78d30380be3..00000000000 --- a/docs-2.0-zh/nebula-operator/2.deploy-nebula-operator.md +++ /dev/null @@ -1,290 +0,0 @@ -# 部署 NebulaGraph Operator - -用户可使用 [Helm](https://helm.sh/) 工具部署 NebulaGraph Operator。 - -## 背景信息 - -[NebulaGraph Operator](1.introduction-to-nebula-operator.md) 为用户管理{{nebula.name}}集群,使用户无需在生产环境中手动安装、扩展、升级和卸载 NebulaGraph,减轻用户管理不同应用版本的负担。 - -## 前提条件 - -安装 NebulaGraph Operator 前,用户需要安装以下软件并确保安装版本的正确性。 - -| 软件 | 版本要求 | -| ------------------------------------------------------------ | --------- | -| [Kubernetes](https://kubernetes.io) | \>= 1.16 | -| [Helm](https://helm.sh) | \>= 3.2.0 | -| [CoreDNS](https://github.com/coredns/coredns) | \>= 1.6.0 | - -!!! note - - - 如果使用基于角色的访问控制的策略,用户需开启 [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)(可选)。 - - [CoreDNS](https://coredns.io/) 是一个灵活的、可扩展的 DNS 服务器,被[安装](https://github.com/coredns/helm)在集群内作为集群内 Pods 的 DNS 服务器。{{nebula.name}}集群中的每个组件通过 DNS 解析类似`x.default.svc.cluster.local`这样的域名相互通信。 - -## 操作步骤 - -### 安装 NebulaGraph Operator - -1. 添加 NebulaGraph Operator Helm 仓库。 - - ```bash - helm repo add nebula-operator https://vesoft-inc.github.io/nebula-operator/charts - ``` - -2. 拉取最新的 Operator Helm 仓库。 - - ```bash - helm repo update - ``` - - 参考 [Helm 仓库](https://helm.sh/docs/helm/helm_repo/)获取更多`helm repo`相关信息。 - -3. 创建命名空间用于安装 NebulaGraph Operator。 - - ```bash - kubectl create namespace - ``` - - 例如,创建`nebula-operator-system`命名空间。 - - ```bash - kubectl create namespace nebula-operator-system - ``` - nebula-operator chart 中的所有资源都会安装在该命名空间下。 - -4. 
安装 NebulaGraph Operator。 - - ```bash - helm install nebula-operator nebula-operator/nebula-operator --namespace= --version=${chart_version} - ``` - - 例如,安装{{operator.release}}版的 Operator 命令如下。 - - ```bash - helm install nebula-operator nebula-operator/nebula-operator --namespace=nebula-operator-system --version={{operator.release}} - ``` - - - `{{operator.release}}`为 nebula-operator chart 的版本,不指定`--version`时默认使用最新版的 chart。执行`helm search repo -l nebula-operator`查看 chart 版本。 - - - 用户可在执行安装 NebulaGraph Operator chart 命令时自定义 Operator 的配置。更多信息,查看下文**自定义配置 Chart**。 - -### 自定义配置 Chart - -执行`helm install [NAME] [CHART] [flags]`命令安装 Chart 时,可指定 Chart 配置。更多信息,参考[安装前自定义 Chart](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing)。 - -在 [nebula-operator chart](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/charts/nebula-operator/values.yaml) 配置文件中查看相关的配置选项。 - -或者通过命令`helm show values nebula-operator/nebula-operator`查看可配置的选项,如下所示。 - -```yaml -[abby@master ~]$ helm show values nebula-operator/nebula-operator -image: - nebulaOperator: - image: vesoft/nebula-operator:{{operator.tag}} - imagePullPolicy: Always - kubeRBACProxy: - image: bitnami/kube-rbac-proxy:0.14.2 - imagePullPolicy: Always - kubeScheduler: - image: registry.k8s.io/kube-scheduler:v1.24.11 - imagePullPolicy: Always - -imagePullSecrets: [] -kubernetesClusterDomain: "" - -controllerManager: - create: true - replicas: 2 - env: [] - resources: - limits: - cpu: 200m - memory: 200Mi - requests: - cpu: 100m - memory: 100Mi - -admissionWebhook: - create: false - -scheduler: - create: true - schedulerName: nebula-scheduler - replicas: 2 - env: [] - resources: - limits: - cpu: 200m - memory: 200Mi - requests: - cpu: 100m - memory: 100Mi -... -``` - -部分参数描述如下: - -| 参数 | 默认值 | 描述 | -| :------------------------------------- | :------------------------------ | :----------------------------------------- | -| `image.nebulaOperator.image` | `vesoft/nebula-operator:{{operator.tag}}` | NebulaGraph Operator 的镜像,版本为{{operator.release}}。 | -| `image.nebulaOperator.imagePullPolicy` | `IfNotPresent` | 镜像拉取策略。 | -| `imagePullSecrets` | - | 镜像拉取密钥。 | -| `kubernetesClusterDomain` | `cluster.local` | 集群域名。 | -| `controllerManager.create` | `true` | 是否启用 controller-manager。 | -| `controllerManager.replicas` | `2` | controller-manager 副本数。 | -| `admissionWebhook.create` | `false` | 是否启用 Admission Webhook。默认关闭,如需开启,需设置为`true`并且需要安装 [cert-manager](https://cert-manager.io/docs/installation/helm/)。 | -| `shceduler.create` | `true` | 是否启用 Scheduler。 | -| `shceduler.schedulerName` | `nebula-scheduler` | 调度器名称。 | -| `shceduler.replicas` | `2` | nebula-scheduler 副本数。 | - - -以下示例为在安装 NebulaGraph Operator 时,指定 NebulaGraph Operator 的 AdmissionWebhook 机制为开启状态(默认关闭 AdmissionWebhook): - -```bash -helm install nebula-operator nebula-operator/nebula-operator --namespace= --set admissionWebhook.create=true -``` - -### 更新 NebulaGraph Operator - -1. 拉取最新的 Helm 仓库。 - - ```bash - helm repo update - ``` - -2. 通过`--set`传递配置参数,更新 NebulaGraph Operator。 - - - `--set`:通过命令行的方式新增或覆盖指定项。有关可以更新的配置项,查看上文**自定义配置 Chart**。 - - 例如,更新 NebulaGraph Operator 的 AdmissionWebhook 机制为开启状态。 - - ```bash - helm upgrade nebula-operator nebula-operator/nebula-operator --namespace=nebula-operator-system --version={{operator.release}} --set admissionWebhook.create=true - ``` - - 更多信息,参考 [Helm 升级](https://helm.sh/docs/helm/helm_upgrade/)。 - - -### 升级 NebulaGraph Operator - -!!! 
compatibility "历史版本兼容性" - - - 不支持升级 0.9.0 及以下版本的 NebulaGraph Operator 至 1.x 版本。 - - 1.x 版本的 NebulaGraph Operator 不兼容 3.x 以下版本的 NebulaGraph。 - -1. 拉取最新的 Helm 仓库。 - - ```bash - helm repo update - ``` - -2. 升级 NebulaGraph Operator 至 {{operator.release}} 版本。 - - ```bash - helm upgrade nebula-operator nebula-operator/nebula-operator --namespace= --version={{operator.release}} - ``` - - 示例: - - ```bash - helm upgrade nebula-operator nebula-operator/nebula-operator --namespace=nebula-operator-system --version={{operator.release}} - ``` - - 输出: - - ```bash - Release "nebula-operator" has been upgraded. Happy Helming! - NAME: nebula-operator - LAST DEPLOYED: Tue Nov 16 02:21:08 2021 - NAMESPACE: nebula-operator-system - STATUS: deployed - REVISION: 3 - TEST SUITE: None - NOTES: - NebulaGraph Operator installed! - ``` - -3. 拉取最新的 CRD 配置文件。 - - - !!! note - - 升级 Operator 后,需要同时升级相应的 CRD 配置,否则{{nebula.name}}集群创建会失败。有关 CRD 的配置,参见 [apps.nebula-graph.io_nebulaclusters.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.tag}}/config/crd/bases/apps.nebula-graph.io_nebulaclusters.yaml)。 - - 1. 下载 NebulaGraph Operator chart 至本地。 - - ```bash - helm pull nebula-operator/nebula-operator --version={{operator.release}} - ``` - - - `--version`: 升级版本号。如不指定,则默认为最新版本。 - - 2. 执行`tar -zxvf`解压安装包。 - - 例如:解压 {{operator.release}} chart 包至`/tmp`路径下。 - - ```bash - tar -zxvf nebula-operator-{{operator.release}}.tgz -C /tmp - ``` - - - `-C /tmp`: 如不指定,则默认解压至当前路径。 - - -4. 在`nebula-operator`目录下升级 CRD 配置文件。 - - ```bash - kubectl apply -f crds/nebulacluster.yaml - ``` - - 输出: - - ```bash - customresourcedefinition.apiextensions.k8s.io/nebulaclusters.apps.nebula-graph.io configured - ``` - - - -### 卸载 NebulaGraph Operator - -1. 卸载 NebulaGraph Operator chart。 - - ```bash - helm uninstall nebula-operator --namespace= - ``` - -2. 删除 CRD。 - - ```bash - kubectl delete crd nebulaclusters.apps.nebula-graph.io - ``` - -## 后续操作 - - - -- 使用 NebulaGraph Operator 自动化部署{{nebula.name}}集群。更多信息,请参考[使用 Kubectl 部署{{nebula.name}}集群](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md)或者[使用 Helm 部署{{nebula.name}}集群](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md)。 - - diff --git a/docs-2.0-zh/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0-zh/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md deleted file mode 100644 index f9d694216a6..00000000000 --- a/docs-2.0-zh/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ /dev/null @@ -1,108 +0,0 @@ -# 使用 Kubectl 部署{{nebula.name}}集群 - -!!! compatibility "历史版本兼容性" - - 1.x 版本的 NebulaGraph Operator 不兼容 3.x 以下版本的{{nebula.name}}。 - -## 前提条件 - -- [安装 NebulaGraph Operator](../2.deploy-nebula-operator.md) -- [已创建 StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) - - -## 创建集群 - -本文以创建名为`nebula`的集群为例,说明如何部署{{nebula.name}}集群。 - -1. 创建命名空间,例如`nebula`。如果不指定命名空间,默认使用`default`命名空间。 - - ```bash - kubectl create namespace nebula - ``` - - - -2. 
创建集群配置文件。 - - - 创建名为`apps_v1alpha1_nebulacluster.yaml`的文件。文件内容参见[示例配置](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/nebulacluster.yaml)。 - - - - 示例配置的参数描述如下: - - | 参数 | 默认值 | 描述 | - | :---- | :--- | :--- | - | `metadata.name` | - | 创建的{{nebula.name}}集群名称。 | - |`spec.console`|-| 启动 Console 容器用于连接 Graph 服务。配置详情,参见 [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console).| - | `spec.graphd.replicas` | `1` | Graphd 服务的副本数。 | - | `spec.graphd.image` | `vesoft/nebula-graphd` | Graphd 服务的容器镜像。 | - | `spec.graphd.version` | `{{nebula.tag}}` | Graphd 服务的版本号。 | - | `spec.graphd.service` | | 访问 Graphd 服务的 Service 配置。 | - | `spec.graphd.logVolumeClaim.storageClassName` | - | Graphd 服务的日志盘存储卷的存储类名称。使用示例配置时需要将其替换为事先创建的存储类名称,参见 [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) 查看创建存储类详情。 | - | `spec.metad.replicas` | `1` | Metad 服务的副本数。 | - | `spec.metad.image` | `vesoft/nebula-metad` | Metad 服务的容器镜像。 | - | `spec.metad.version` | `{{nebula.tag}}` | Metad 服务的版本号。 | - | `spec.metad.dataVolumeClaim.storageClassName` | - | Metad 服务的数据盘存储配置。使用示例配置时需要将其替换为事先创建的存储类名称,参见 [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) 查看创建存储类详情。 | - | `spec.metad.logVolumeClaim.storageClassName`|-|Metad 服务的日志盘存储配置。使用示例配置时需要将其替换为事先创建的存储类名称,参见 [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) 查看创建存储类详情。 | - | `spec.storaged.replicas` | `3` | Storaged 服务的副本数。 | - | `spec.storaged.image` | `vesoft/nebula-storaged` | Storaged 服务的容器镜像。 | - | `spec.storaged.version` | `{{nebula.tag}}` | Storaged 服务的版本号。 | - | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Storaged 服务的数据盘存储大小,可指定多块数据盘存储数据。当指定多块数据盘时,路径为:`/usr/local/nebula/data1`、`/usr/local/nebula/data2`等。 | - | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | Storaged 服务的数据盘存储配置。使用示例配置时需要将其替换为事先创建的存储类名称,参见 [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) 查看创建存储类详情。 | - | `spec.storaged.logVolumeClaim.storageClassName`|-|Storaged 服务的日志盘存储配置。使用示例配置时需要将其替换为事先创建的存储类名称,参见 [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) 查看创建存储类详情。 | - |`spec..securityContext`|`{}`|定义集群容器的权限和访问控制,以控制访问和执行容器的操作。详情参见 [SecurityContext](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/doc/user/security_context.md)。 | - |`spec.agent`|`{}`| Agent 服务的配置。用于备份和恢复及日志清理功能,如果不自定义该配置,将使用默认配置。| - | `spec.reference.name` | - | 依赖的控制器名称。 | - | `spec.schedulerName` | - | 调度器名称。 | - | `spec.imagePullPolicy` | {{nebula.name}}镜像的拉取策略。关于拉取策略详情,请参考 [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy)。 | 镜像拉取策略。 | - |`spec.logRotate`| - |日志轮转配置。详情参见[管理集群日志](../8.custom-cluster-configurations/8.4.manage-running-logs.md)。| - |`spec.enablePVReclaim`|`false`|定义是否在删除集群后自动删除 PVC 以释放数据。详情参见[回收 PV](../8.custom-cluster-configurations/8.2.pv-reclaim.md)。| - - - -3. 创建{{nebula.name}}集群。 - - ```bash - kubectl create -f apps_v1alpha1_nebulacluster.yaml - ``` - - 返回: - - ```bash - nebulacluster.apps.nebula-graph.io/nebula created - ``` - - -4. 
查看{{nebula.name}}集群状态。 - - ```bash - kubectl get nebulaclusters nebula - ``` - - 返回: - - ```bash - NAME GRAPHD-DESIRED GRAPHD-READY METAD-DESIRED METAD-READY STORAGED-DESIRED STORAGED-READY AGE - nebula 1 1 1 1 3 3 86s - ``` - - -## 扩缩容集群 - - -不支持扩缩容社区版的{{nebula.name}}集群。 - - -## 删除集群 - -使用 Kubectl 删除{{nebula.name}}集群的命令如下: - -```bash -kubectl delete -f apps_v1alpha1_nebulacluster.yaml -``` - -## 后续操作 - -[连接{{nebula.name}}数据库](../4.connect-to-nebula-graph-service.md) - diff --git a/docs-2.0-zh/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0-zh/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md deleted file mode 100644 index 4fdfdded4f8..00000000000 --- a/docs-2.0-zh/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md +++ /dev/null @@ -1,99 +0,0 @@ -# 使用 Helm 部署{{nebula.name}}集群 - -!!! compatibility "历史版本兼容性" - - 1.x 版本的 NebulaGraph Operator 不兼容 3.x 以下版本的{{nebula.name}}。 - -## 前提条件 - -- [安装 NebulaGraph Operator](../2.deploy-nebula-operator.md) -- [已创建 StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) - - -## 创建{{nebula.name}}集群 - -1. 添加 NebulaGraph Operator Helm 仓库 - - ```bash - helm repo add nebula-operator https://vesoft-inc.github.io/nebula-operator/charts - ``` - -2. 更新 Helm 仓库,拉取最新仓库资源。 - - ```bash - helm repo update - ``` - -3. 为安装集群所需的配置参数设置环境变量。 - - ```bash - export NEBULA_CLUSTER_NAME=nebula #{{nebula.name}}集群的名字。 - export NEBULA_CLUSTER_NAMESPACE=nebula #{{nebula.name}}集群所处的命名空间的名字。 - export STORAGE_CLASS_NAME=fast-disks #{{nebula.name}}集群的 StorageClass。 - ``` - -4. 为{{nebula.name}}集群创建命名空间(如已创建,略过此步)。 - - ```bash - kubectl create namespace "${NEBULA_CLUSTER_NAMESPACE}" - ``` - - - -6. 创建{{nebula.name}}集群。 - - ```bash - helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ - - --set nameOverride=${NEBULA_CLUSTER_NAME} \ - --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \ - # 指定{{nebula.name}}集群的版本。 - --set nebula.version=v{{nebula.release}} \ - # 指定集群 chart 的版本,不指定则默认安装最新版本 chart。 - # 执行 helm search repo nebula-operator/nebula-cluster 命令可查看所有 chart 版本。 - --version={{operator.release}} \ - --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ - ``` - - - - 执行`helm show values nebula-operator/nebula-cluster`命令,或者单击 [nebula-cluster/values.yaml -](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/charts/nebula-cluster/values.yaml) 可查看{{nebula.name}}集群的所有配置参数。 - - 单击 [Chart parameters](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/doc/user/nebula_cluster_helm_guide.md#optional-chart-parameters) 查看可配置的集群参数的描述及默认值。 - - 通过`--set`参数设置{{nebula.name}}集群的配置参数,例如,`--set nebula.storaged.replicas=3`可设置{{nebula.name}}集群中 Storage 服务的副本数为 3。 - - -7. 
查看{{nebula.name}}集群创建状态。 - - ```bash - kubectl -n "${NEBULA_CLUSTER_NAMESPACE}" get pod -l "app.kubernetes.io/cluster=${NEBULA_CLUSTER_NAME}" - ``` - -## 扩缩容集群 - - -不支持扩缩容社区版的{{nebula.name}}集群。 - - - - -## 删除集群 - -使用 Helm 删除集群的命令如下: - -```bash -helm uninstall "${NEBULA_CLUSTER_NAME}" --namespace="${NEBULA_CLUSTER_NAMESPACE}" -``` - -或者使用真实值删除集群,例如: - -```bash -helm uninstall nebula --namespace=nebula -``` - -## 后续操作 - -[连接{{nebula.name}}](../4.connect-to-nebula-graph-service.md) - diff --git a/docs-2.0-zh/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md b/docs-2.0-zh/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md deleted file mode 100644 index 2c83dbc0683..00000000000 --- a/docs-2.0-zh/nebula-operator/8.custom-cluster-configurations/8.1.custom-conf-parameter.md +++ /dev/null @@ -1,170 +0,0 @@ -# 自定义{{nebula.name}}集群的配置参数 - -{{nebula.name}}集群中 Meta、Storage、Graph 服务都有各自的配置,其在用户创建的{{nebula.name}}集群实例的 YAML 文件中被定义为`config`。`config`中的设置会被映射并加载到对应服务的 ConfigMap 中。各个服务在启动时会挂载 ConfigMap 中的配置到`/usr/local/nebula/etc/`目录下。 - -!!! note - - 暂不支持通过 Helm 自定义{{nebula.name}}集群的配置参数。 - -`config`结构如下: - -```go -Config map[string]string `json:"config,omitempty"` -``` - -## 前提条件 - -已使用 K8s 创建一个集群。具体步骤,参见[使用 Kubectl 创建{{nebula.name}}集群](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md)。 - -## 操作步骤 - -以下示例使用名为`nebula`的集群、名为`nebula_cluster.yaml`的 YAML 配置文件,说明如何为集群的 Graph 服务配置`config`: - -1. 执行以下命令进入`nebula`集群的编辑页面。 - - ```bash - kubectl edit nebulaclusters.apps.nebula-graph.io nebula - ``` - -2. 在 YAML 文件的`spec.graphd.config`配置项中,添加需要修改的参数。下文以 `enable_authorize`和`auth_type`为例。 - - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - graphd: - resources: - requests: - cpu: "500m" - memory: "500Mi" - limits: - cpu: "1" - memory: "1Gi" - replicas: 1 - image: vesoft/nebula-graphd - version: {{nebula.tag}} - storageClaim: - resources: - requests: - storage: 2Gi - storageClassName: fast-disks - config: //为 Graph 服务自定义参数。 - "enable_authorize": "true" - "auth_type": "password" - ... - ``` - 在`config`字段下可配置的参数详情,请分别参见 [Meta 服务配置参数](../../5.configurations-and-logs/1.configurations/2.meta-config.md)、[Storage 服务配置参数](../../5.configurations-and-logs/1.configurations/4.storage-config.md)、[Graph 服务配置参数](../../5.configurations-and-logs/1.configurations/3.graph-config.md)。 - - !!! note - - * 若要在集群运行时动态修改参数配置且不触发 Pod 重启,请确保当前修改的参数全部支持运行时动态修改。参数是否支持运行时动态修改,请查看上述参数详情页各个表格中**是否支持运行时动态修改**一列。 - * 若本次修改的参数包含一个或多个不支持运行时动态修改的参数,则会触发 Pod 重启。 - - 如果需要为 Meta 服务和 Storage 服务配置`config`,则在`spec.metad.config`和`spec.storaged.config`中添加对应的配置项。 - -3. 执行`kubectl apply -f nebula_cluster.yaml`使上述更新生效。 - - 在修改参数值后,Graph 服务对应的 ConfigMap(`nebula-graphd`)中的配置将被覆盖。 - -### 配置自定义端口 - -您可以在`config`字段中添加`port`和`ws_http_port`参数,从而配置自定义的端口。这两个参数的详细信息,请参见[Meta 服务配置参数](../../5.configurations-and-logs/1.configurations/2.meta-config.md)、[Storage 服务配置参数](../../5.configurations-and-logs/1.configurations/4.storage-config.md)、[Graph 服务配置参数](../../5.configurations-and-logs/1.configurations/3.graph-config.md)的 networking 配置一节。 - -!!! note - - * 自定义`port`和`ws_http_port`参数配置后,会触发 Pod 重启,并在重启后生效。 - * 在集群启动后,不建议修改`port`参数。 - -1. 
修改集群配置文件。 - - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - graphd: - config: - port: "3669" - ws_http_port: "8080" - resources: - requests: - cpu: "200m" - memory: "500Mi" - limits: - cpu: "1" - memory: "1Gi" - replicas: 1 - image: vesoft/nebula-graphd - version: {{nebula.tag}} - metad: - config: - ws_http_port: 8081 - resources: - requests: - cpu: "300m" - memory: "500Mi" - limits: - cpu: "1" - memory: "1Gi" - replicas: 1 - image: vesoft/nebula-metad - version: {{nebula.tag}} - dataVolumeClaim: - resources: - requests: - storage: 2Gi - storageClassName: local-path - storaged: - config: - ws_http_port: 8082 - resources: - requests: - cpu: "300m" - memory: "500Mi" - limits: - cpu: "1" - memory: "1Gi" - replicas: 1 - image: vesoft/nebula-storaged - version: {{nebula.tag}} - dataVolumeClaims: - - resources: - requests: - storage: 2Gi - storageClassName: local-path - enableAutoBalance: true - reference: - name: statefulsets.apps - version: v1 - schedulerName: default-scheduler - imagePullPolicy: IfNotPresent - imagePullSecrets: - - name: nebula-image - enablePVReclaim: true - topologySpreadConstraints: - - topologyKey: kubernetes.io/hostname - whenUnsatisfiable: "ScheduleAnyway" - ``` - -2. 执行`kubectl apply -f nebula_cluster.yaml`使上述更新生效。 - -3. 验证配置已经生效。 - - ```bash - kubectl get svc - ``` - - 返回示例: - - ``` - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - nebula-graphd-headless ClusterIP None 3669/TCP,8080/TCP 10m - nebula-graphd-svc ClusterIP 10.102.13.115 3669/TCP,8080/TCP 10m - nebula-metad-headless ClusterIP None 9559/TCP,8081/TCP 11m - nebula-storaged-headless ClusterIP None 9779/TCP,8082/TCP,9778/TCP 11m - ``` \ No newline at end of file
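
A practical caveat for the multi-line `helm install` examples that remain in the Helm sections above: in bash, a `#` comment line cannot end with `\`, so the first inline comment terminates the escaped-newline chain and the remaining `--set` lines are executed as separate, invalid commands. The sketch below is a comment-free equivalent of the documented command; it assumes the `nebula-operator` Helm repo has already been added and that the `NEBULA_CLUSTER_NAME`, `NEBULA_CLUSTER_NAMESPACE`, and `STORAGE_CLASS_NAME` variables have been exported as in the preceding steps.

```bash
# Same flags as the documented command, with the explanatory comments kept
# out of the continuation so the whole command reaches the shell intact.
# Assumed to be set beforehand, e.g.:
#   export NEBULA_CLUSTER_NAME=nebula
#   export NEBULA_CLUSTER_NAMESPACE=nebula
#   export STORAGE_CLASS_NAME=fast-disks
helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \
    --set nameOverride="${NEBULA_CLUSTER_NAME}" \
    --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \
    --set nebula.version=v{{nebula.release}} \
    --version={{operator.release}} \
    --namespace="${NEBULA_CLUSTER_NAMESPACE}"
```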