From 2396e80f33b5431ef512722e6c0b110cd0d93833 Mon Sep 17 00:00:00 2001 From: Gergely Csatari Date: Wed, 30 Oct 2024 16:06:06 +0200 Subject: [PATCH 01/32] Add license file Signed-off-by: Gergely Csatari --- LICENSE | 344 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ README.md | 2 +- 2 files changed, 345 insertions(+), 1 deletion(-) create mode 100644 LICENSE diff --git a/LICENSE b/LICENSE new file mode 100644 index 00000000..e32c13b4 --- /dev/null +++ b/LICENSE @@ -0,0 +1,344 @@ +SPDX-License-Identifier: CC-BY-4.0 + +======================================================================= + +Creative Commons Attribution 4.0 International Public License + +By exercising the Licensed Rights (defined below), You accept and agree +to be bound by the terms and conditions of this Creative Commons +Attribution 4.0 International Public License ("Public License"). To the +extent this Public License may be interpreted as a contract, You are +granted the Licensed Rights in consideration of Your acceptance of +these terms and conditions, and the Licensor grants You such rights in +consideration of benefits the Licensor receives from making the +Licensed Material available under these terms and conditions. + + +Section 1 -- Definitions. + + a. Adapted Material means material subject to Copyright and Similar + Rights that is derived from or based upon the Licensed Material + and in which the Licensed Material is translated, altered, + arranged, transformed, or otherwise modified in a manner requiring + permission under the Copyright and Similar Rights held by the + Licensor. For purposes of this Public License, where the Licensed + Material is a musical work, performance, or sound recording, + Adapted Material is always produced where the Licensed Material is + synched in timed relation with a moving image. + + b. 
Adapter's License means the license You apply to Your Copyright + and Similar Rights in Your contributions to Adapted Material in + accordance with the terms and conditions of this Public License. + + c. Copyright and Similar Rights means copyright and/or similar rights + closely related to copyright including, without limitation, + performance, broadcast, sound recording, and Sui Generis Database + Rights, without regard to how the rights are labeled or + categorized. For purposes of this Public License, the rights + specified in Section 2(b)(1)-(2) are not Copyright and Similar + Rights. + + d. Effective Technological Measures means those measures that, in the + absence of proper authority, may not be circumvented under laws + fulfilling obligations under Article 11 of the WIPO Copyright + Treaty adopted on December 20, 1996, and/or similar international + agreements. + + e. Exceptions and Limitations means fair use, fair dealing, and/or + any other exception or limitation to Copyright and Similar Rights + that applies to Your use of the Licensed Material. + + f. Licensed Material means the artistic or literary work, database, + or other material to which the Licensor applied this Public + License. + + g. Licensed Rights means the rights granted to You subject to the + terms and conditions of this Public License, which are limited to + all Copyright and Similar Rights that apply to Your use of the + Licensed Material and that the Licensor has authority to license. + + h. Licensor means the individual(s) or entity(ies) granting rights + under this Public License. + + i. 
Share means to provide material to the public by any means or + process that requires permission under the Licensed Rights, such + as reproduction, public display, public performance, distribution, + dissemination, communication, or importation, and to make material + available to the public including in ways that members of the + public may access the material from a place and at a time + individually chosen by them. + + j. Sui Generis Database Rights means rights other than copyright + resulting from Directive 96/9/EC of the European Parliament and of + the Council of 11 March 1996 on the legal protection of databases, + as amended and/or succeeded, as well as other essentially + equivalent rights anywhere in the world. + + k. You means the individual or entity exercising the Licensed Rights + under this Public License. Your has a corresponding meaning. + + +Section 2 -- Scope. + + a. License grant. + + 1. Subject to the terms and conditions of this Public License, + the Licensor hereby grants You a worldwide, royalty-free, + non-sublicensable, non-exclusive, irrevocable license to + exercise the Licensed Rights in the Licensed Material to: + + a. reproduce and Share the Licensed Material, in whole or + in part; and + + b. produce, reproduce, and Share Adapted Material. + + 2. Exceptions and Limitations. For the avoidance of doubt, where + Exceptions and Limitations apply to Your use, this Public + License does not apply, and You do not need to comply with + its terms and conditions. + + 3. Term. The term of this Public License is specified in Section + 6(a). + + 4. Media and formats; technical modifications allowed. The + Licensor authorizes You to exercise the Licensed Rights in + all media and formats whether now known or hereafter created, + and to make technical modifications necessary to do so. 
The + Licensor waives and/or agrees not to assert any right or + authority to forbid You from making technical modifications + necessary to exercise the Licensed Rights, including + technical modifications necessary to circumvent Effective + Technological Measures. For purposes of this Public License, + simply making modifications authorized by this Section 2(a) + (4) never produces Adapted Material. + + 5. Downstream recipients. + + a. Offer from the Licensor -- Licensed Material. Every + recipient of the Licensed Material automatically + receives an offer from the Licensor to exercise the + Licensed Rights under the terms and conditions of this + Public License. + + b. No downstream restrictions. You may not offer or impose + any additional or different terms or conditions on, or + apply any Effective Technological Measures to, the + Licensed Material if doing so restricts exercise of the + Licensed Rights by any recipient of the Licensed + Material. + + 6. No endorsement. Nothing in this Public License constitutes or + may be construed as permission to assert or imply that You + are, or that Your use of the Licensed Material is, connected + with, or sponsored, endorsed, or granted official status by, + the Licensor or others designated to receive attribution as + provided in Section 3(a)(1)(A)(i). + + b. Other rights. + + 1. Moral rights, such as the right of integrity, are not + licensed under this Public License, nor are publicity, + privacy, and/or other similar personality rights; however, to + the extent possible, the Licensor waives and/or agrees not to + assert any such rights held by the Licensor to the limited + extent necessary to allow You to exercise the Licensed + Rights, but not otherwise. + + 2. Patent and trademark rights are not licensed under this + Public License. + + 3. 
To the extent possible, the Licensor waives any right to + collect royalties from You for the exercise of the Licensed + Rights, whether directly or through a collecting society + under any voluntary or waivable statutory or compulsory + licensing scheme. In all other cases the Licensor expressly + reserves any right to collect such royalties. + + +Section 3 -- License Conditions. + +Your exercise of the Licensed Rights is expressly made subject to the +following conditions. + + a. Attribution. + + 1. If You Share the Licensed Material (including in modified + form), You must: + + a. retain the following if it is supplied by the Licensor + with the Licensed Material: + + i. identification of the creator(s) of the Licensed + Material and any others designated to receive + attribution, in any reasonable manner requested by + the Licensor (including by pseudonym if + designated); + + ii. a copyright notice; + + iii. a notice that refers to this Public License; + + iv. a notice that refers to the disclaimer of + warranties; + + v. a URI or hyperlink to the Licensed Material to the + extent reasonably practicable; + + b. indicate if You modified the Licensed Material and + retain an indication of any previous modifications; and + + c. indicate the Licensed Material is licensed under this + Public License, and include the text of, or the URI or + hyperlink to, this Public License. + + 2. You may satisfy the conditions in Section 3(a)(1) in any + reasonable manner based on the medium, means, and context in + which You Share the Licensed Material. For example, it may be + reasonable to satisfy the conditions by providing a URI or + hyperlink to a resource that includes the required + information. + + 3. If requested by the Licensor, You must remove any of the + information required by Section 3(a)(1)(A) to the extent + reasonably practicable. + + 4. 
If You Share Adapted Material You produce, the Adapter's + License You apply must not prevent recipients of the Adapted + Material from complying with this Public License. + + +Section 4 -- Sui Generis Database Rights. + +Where the Licensed Rights include Sui Generis Database Rights that +apply to Your use of the Licensed Material: + + a. for the avoidance of doubt, Section 2(a)(1) grants You the right + to extract, reuse, reproduce, and Share all or a substantial + portion of the contents of the database; + + b. if You include all or a substantial portion of the database + contents in a database in which You have Sui Generis Database + Rights, then the database in which You have Sui Generis Database + Rights (but not its individual contents) is Adapted Material; and + + c. You must comply with the conditions in Section 3(a) if You Share + all or a substantial portion of the contents of the database. + +For the avoidance of doubt, this Section 4 supplements and does not +replace Your obligations under this Public License where the Licensed +Rights include other Copyright and Similar Rights. + + +Section 5 -- Disclaimer of Warranties and Limitation of Liability. + + a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE + EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS + AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF + ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, + IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, + WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR + PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, + ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT + KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT + ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. + + b. 
TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE + TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, + NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, + INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, + COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR + USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN + ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR + DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR + IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. + + c. The disclaimer of warranties and limitation of liability provided + above shall be interpreted in a manner that, to the extent + possible, most closely approximates an absolute disclaimer and + waiver of all liability. + + +Section 6 -- Term and Termination. + + a. This Public License applies for the term of the Copyright and + Similar Rights licensed here. However, if You fail to comply with + this Public License, then Your rights under this Public License + terminate automatically. + + b. Where Your right to use the Licensed Material has terminated under + Section 6(a), it reinstates: + + 1. automatically as of the date the violation is cured, provided + it is cured within 30 days of Your discovery of the + violation; or + + 2. upon express reinstatement by the Licensor. + + For the avoidance of doubt, this Section 6(b) does not affect any + right the Licensor may have to seek remedies for Your violations + of this Public License. + + c. For the avoidance of doubt, the Licensor may also offer the + Licensed Material under separate terms or conditions or stop + distributing the Licensed Material at any time; however, doing so + will not terminate this Public License. + + d. Sections 1, 5, 6, 7, and 8 survive termination of this Public + License. + + +Section 7 -- Other Terms and Conditions. + + a. 
The Licensor shall not be bound by any additional or different + terms or conditions communicated by You unless expressly agreed. + + b. Any arrangements, understandings, or agreements regarding the + Licensed Material not stated herein are separate from and + independent of the terms and conditions of this Public License. + + +Section 8 -- Interpretation. + + a. For the avoidance of doubt, this Public License does not, and + shall not be interpreted to, reduce, limit, restrict, or impose + conditions on any use of the Licensed Material that could lawfully + be made without permission under this Public License. + + b. To the extent possible, if any provision of this Public License is + deemed unenforceable, it shall be automatically reformed to the + minimum extent necessary to make it enforceable. If the provision + cannot be reformed, it shall be severed from this Public License + without affecting the enforceability of the remaining terms and + conditions. + + c. No term or condition of this Public License will be waived and no + failure to comply consented to unless expressly agreed to by the + Licensor. + + d. Nothing in this Public License constitutes or may be interpreted + as a limitation upon, or waiver of, any privileges and immunities + that apply to the Licensor or You, including from the legal + processes of any jurisdiction or authority. + + +======================================================================= + +Creative Commons is not a party to its public +licenses. Notwithstanding, Creative Commons may elect to apply one of +its public licenses to material it publishes and in those instances +will be considered the “Licensor.” The text of the Creative Commons +public licenses is dedicated to the public domain under the CC0 Public +Domain Dedication. 
Except for the limited purpose of indicating that +material is shared under a Creative Commons public license or as +otherwise permitted by the Creative Commons policies published at +creativecommons.org/policies, Creative Commons does not authorize the +use of the trademark "Creative Commons" or any other trademark or logo +of Creative Commons without its prior written consent including, +without limitation, in connection with any unauthorized modifications +to any of its public licenses or any other arrangements, +understandings, or agreements concerning use of licensed material. For +the avoidance of doubt, this paragraph does not form part of the +public licenses. + +Creative Commons may be contacted at creativecommons.org. + diff --git a/README.md b/README.md index fc264ae9..504b8cec 100644 --- a/README.md +++ b/README.md @@ -17,5 +17,5 @@ please follow [this](https://github.com/anuket-project/anuket-specifications/blo - [Terms of Reference](https://github.com/anuket-project/anuket-specifications/blob/master/doc/GSMA_CNTT_Terms_of_Reference.pdf) - [Code of Conduct](https://github.com/anuket-project/anuket-specifications/blob/master/doc/CODE_OF_CONDUCT.rst) -- License of the Anuket Specifications project is [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) +- License of the Anuket Specifications project is [Creative Commons Attribution 4.0 International](LICENSE) From 68c9a2129b315a2a9bc56912b816580c9bd9c6f0 Mon Sep 17 00:00:00 2001 From: Gergely Csatari Date: Wed, 30 Oct 2024 17:45:22 +0200 Subject: [PATCH 02/32] Adding OpenSSF Scorecard badge Signed-off-by: Gergely Csatari --- README.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/README.md b/README.md index fc264ae9..3198152d 100644 --- a/README.md +++ b/README.md @@ -19,3 +19,6 @@ please follow [this](https://github.com/anuket-project/anuket-specifications/blo - [Code of 
Conduct](https://github.com/anuket-project/anuket-specifications/blob/master/doc/CODE_OF_CONDUCT.rst) - License of the Anuket Specifications project is [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) +# Badges + +[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/anuket-project/RM/badge)](https://scorecard.dev/viewer/?uri=github.com/anuket-project/RM) From 3bb88953d79b6780673279ada911e86fd1999d82 Mon Sep 17 00:00:00 2001 From: Michael Greaves Date: Thu, 31 Oct 2024 11:16:53 +0100 Subject: [PATCH 03/32] Language corrections for chapter 2. Signed-off-by: Michael Greaves --- doc/ref_model/chapters/chapter02.rst | 689 ++++++++++++++------------- 1 file changed, 358 insertions(+), 331 deletions(-) diff --git a/doc/ref_model/chapters/chapter02.rst b/doc/ref_model/chapters/chapter02.rst index bf0eefdd..a849e58d 100644 --- a/doc/ref_model/chapters/chapter02.rst +++ b/doc/ref_model/chapters/chapter02.rst @@ -1,558 +1,585 @@ .. _workload-requirements--analysis: -Workload Requirements & Analysis -================================ +Workload Requirements and Analysis +================================== -The Cloud Infrastructure is the totality of all hardware and software components which build up the environment in which -VNFs/CNFs (workloads) are deployed, managed and executed. It is, therefore, inevitable that different workloads would -require different capabilities and have different expectations from it. +The cloud infrastructure is the totality of all the hardware and software components which build up the environment in +which the VNFs and CNFs (workloads) are deployed, managed, and executed. It is, therefore, inevitable that different +workloads will require different capabilities and have different expectations from the cloud infrastructure. 
-One of the main targets of the Reference Model is to define an agnostic cloud infrastructure, to remove any dependencies -between workloads and the deployed cloud infrastructure, and offer infrastructure resources to workloads in an -abstracted way with defined capabilities and metrics. +Among the main targets of the Reference Model is to define an agnostic cloud infrastructure, to remove any dependencies +between the workloads and the deployed cloud infrastructure, and to offer infrastructure resources to the workloads in +an abstracted way, with defined capabilities and metrics. -This means, operators will be able to host their Telco workloads (VNFs/CNFs) with different traffic types, behaviour and -from any vendor on a unified consistent cloud infrastructure. +This means that operators will be able to host their Telco workloads (VNFs/CNFs) with different traffic types and +behaviours, and will be able to buy from any vendor on a unified consistent cloud infrastructure. -Additionally, a well-defined cloud infrastructure is also needed for other type of workloads such as IT, Machine -Learning, and Artificial Intelligence. +Additionally, a well-defined cloud infrastructure is needed for other types of workloads such as IT, machine +learning, and artificial intelligence. This chapter analyses various Telco workloads and their requirements, and recommends certain cloud infrastructure -parameters needed to specify the desired performance expected by these workloads. +parameters necessary to specify the desired performance expected by these workloads. 
-Workloads Collateral +Workloads collateral -------------------- -There are different ways that workloads can be classified, for example: +There are different ways in which workloads can be classified, for example: - **By function type:** - - Data Plane (a.k.a., User Plane, Media Plane, Forwarding Plane) - - Control Plane (a.k.a., Signalling Plane) - - Management Plane + - Data plane (also known as user plane, media plane, and forwarding plane) + - Control plane (also known as signalling plane) + - Management plane .. - **Note**\ *: Data plane workloads also include control and management plane functions; control plane workloads - also include management plane functions.* + **Note**\ *: Data plane workloads also include control and management plane functions. Control plane + workloads also include management plane functions.* - **By service offered:** - - Mobile broadband service - - Fixed broadband Service - - Voice Service - - Value-Added-Services + - mobile broadband service + - fixed broadband service + - voice service + - value-added services -- **By technology:** 2G, 3G, 4G, 5G, IMS, FTTx, Wi-Fi... +- **By technology:** 2G, 3G, 4G, 5G, IMS, FTTx, WiFi, and so on. -The list of, most likely to be virtualised, Network Functions below, covering almost **95%** of the Telco workloads, is -organised by network segment and function type. +The following list of network functions most likely to be virtualised, covering almost 95% of the Telco workloads, +is organised by network segment and function type. 
- **Radio Access Network (RAN)** - - Data Plane + - Data plane - - BBU: BaseBand Unit - - CU: Centralised Unit - - DU: Distributed Unit + - BaseBand Unit (BBU) + - Centralised Unit (CU) + - Distributed Unit (DU) - **2G/3G/4G mobile core network** - - Control Plane - - - MME: Mobility Management Entity - - 3GPP AAA: Authentication, Authorisation, and Accounting - - PCRF: Policy and Charging Rules Function - - OCS: Online Charging system - - OFCS: Offline Charging System - - HSS: Home Subscriber Server - - DRA: Diameter Routing Agent - - HLR: Home Location Register - - SGW-C: Serving GateWay Control plane - - PGW-C: Packet data network GateWay Control plane - - - Data Plane - - - SGW: Serving GateWay - - SGW-U: Serving GateWay User plane - - PGW: Packet data network GateWay - - PGW-U: Packet data network GateWay User plane - - ePDG: Evolved Packet Data GateWay - - MSC: Mobile Switching Center - - SGSN: Serving GPRS Support Node - - GGSN: Gateway GPRS Support Node - - SMSC : SMS Center + - Control plane + + - Mobility Management Entity (MME) + - Authentication, Authorisation, and Accounting (3GPP AAA) + - Policy and Charging Rules Function (PCRF) + - Online Charging system (OCS) + - Offline Charging System (OFCS) + - Home Subscriber Server (HSS) + - Diameter Routing Agent (DRA) + - Home Location Register (HLR) + - Serving GateWay Control plane (SGW-C) + - Packet data network GateWay Control plane (PGW-C) + + - Data plane + + - Serving GateWay (SGW) + - Serving GateWay User plane (SGW-U) + - Packet data network GateWay (PGW) + - Packet data network GateWay User plane (PGW-U) + - Evolved Packet Data GateWay (ePDG) + - Mobile Switching Center (MSC) + - Serving GPRS Support Node (SGSN) + - Gateway GPRS Support Node (GGSN) + - Short Message Service Center (SMSC) - **5G core network** - 5G core nodes are virtualisable by design and strong candidate to be onboarded onto Telco Cloud as "cloud native - application" + 5G core nodes are virtualisable by design and are a strong 
candidate for onboarding into the Telco cloud as + cloud-native applications. - - Data Plane + - Data plane - - UPF: User Plane Function + - User Plane Function (UPF) - - Control Plane + - Control plane - - AMF: Access and Mobility management Function - - SMF: Session Management Function - - PCF: Policy Control Function - - AUSF: Authentication Server Function - - NSSF: Network Slice Selection Function - - UDM: Unified Data Management - - UDR: Unified Data Repository - - NRF: Network Repository Function - - NEF: Network Exposure Function - - CHF: Charging Function part of the Converged Charging System (CCS) + - Access and Mobility management Function (AMF) + - Session Management Function (SMF) + - Policy Control Function (PCF) + - Authentication Server Function (AUSF) + - Network Slice Selection Function (NSSF) + - Unified Data Management (UDM) + - Unified Data Repository (UDR) + - Network Repository Function (NRF) + - Network Exposure Function (NEF) + - Charging Function part of the Converged Charging System (CHF) .. - **Note:**\ *for Service-based Architecture (SBA) all Network Functions are stateless (store all sessions/ state - on unified data repository UDR)* + **Note:**\ *for Service-based Architecture (SBA), all network functions are stateless. 
That is, they + store all sessions or states on unified data repository (UDR).* - **IP Multimedia Subsystem (IMS)** - - Data Plane + - Data plane - - MGW: Media GateWay - - SBC: Session Border Controller - - MRF: Media Resource Function + - Media GateWay (MGW) + - Session Border Controller (SBC) + - Media Resource Function (MRF) - - Control Plane + - Control plane - - CSCF: Call Session Control Function - - MTAS: Mobile Telephony Application Server - - BGCF: Border Gateway Control Function - - MGCF: Media Gateway Control Function + - Call Session Control Function (CSCF) + - Mobile Telephony Application Server (MTAS) + - Border Gateway Control Function (BGCF) + - Media Gateway Control Function (MGCF) - **Fixed network** - - Data Plane + - Data plane - - MSAN: MultiService Access Node - - OLT: Optical Line Termination - - WLC: WLAN Controller - - BNG: Broadband Network Gateway - - BRAS: Broadband Remote Access Server - - RGW: Residential GateWay - - CPE: Customer Premises Equipment + - MultiService Access Node (MSAN) + - Optical Line Termination (OLT) + - WLAN Controller (WLC) + - Broadband Network Gateway (BNG) + - Broadband Remote Access Server (BRAS) + - Residential GateWay (RGW) + - Customer Premises Equipment (CPE) - - Control Plane + - Control plane - - AAA: Authentication, Authorisation, and Accounting + - Authentication, Authorisation, and Accounting (AAA) - **Other network functions** - - Data Plane + - Data plane - - LSR: Label Switching Router - - DPI: Deep Packet Inspection - - CG-NAT: Carrier-Grade Network Address Translation - - ADC: Application Delivery Controller - - FW: FireWall - - Sec-GW: Security GateWay - - CDN: Content Delivery Network + - Label Switching Router (LSR) + - Deep Packet Inspection (DPI) + - Carrier-Grade Network Address Translation (CG-NAT) + - Application Delivery Controller (ADC) + - FireWall (FW) + - Security GateWay (Sec-GW) + - Content Delivery Network (CDN) - Control plane - - RR: Route Reflector - - DNS: Domain Name System + 
- Route Reflector (RR) + - Domain Name System (DNS) - - Management Plane + - Management plane - - NMS: Network Management System + - Network Management System (NMS) Use cases --------- -The intent of this section is to describe some important use cases that are pertinent to this Reference Model. We start -with some typical Edge related use cases. The list of use cases will be extended in the future releases. +The intent of this section is to describe some important use cases that are pertinent to this Reference Model. We will +start with some typical Edge-related use cases. The list of use cases will be extended in future releases. Telco Edge is commonly coupled with 5G use cases, seen as one of the ingredients of the Ultra-Reliable Low-latency -Communication (URLLC) and Enhanced Mobile Broadband (eMBB) Network Slicing. The requirements for user plane Local -Breakout / Termination are common mandating that Value Added Services (VASs) & Any Gi-LAN applications are locally -hosted at the Edge. The Telco Edge is a perfect fit for centralized vRAN deployment and vDU/vCU hosting that satisfy the -latency requirements. +Communication (URLLC) and Enhanced Mobile Broadband (eMBB) network slicing. The requirements for user plane local +breakout/termination are common. They stipulate that value-added services (VASs) and any Gi-LAN applications are +locally hosted at the Edge. The Telco Edge is a perfect fit for centralized vRAN deployments and vDU/vCU hosting that +satisfy the latency requirements. -It is expected that with the technology evolution (e.g. 6G) the use cases will be more demanding. For instance, -either to meet less than 1 ms latency, or ultrafast data rate, it will be required to evolve the architecture. 
-These use cases, once available, can be used for life saving decisions, for instance for the remote automation in -environments not supporting life (e.g., in deep space communication), to ensure that the car autonomous -driving can be done in real time, and even for holographic communications. Such use cases can be seen as the -evolution of 5G use cases, where such requirements could not be met due to the technology constrains. +It is expected that with the technology evolution (for example, 6G), the use cases will be more demanding. For +example, to achieve either less than 1 ms latency or an ultrafast data rate, it will be required to evolve the +architecture. These use cases, once available, can be used for life saving decisions, such as for remote +automation in environments not supporting life (for example, in deep space communication), to ensure that the car +autonomous driving can be done in real time, and even for holographic communications. Such use cases can be seen as +the evolution of 5G use cases, where such requirements cannot be met due to technology constraints. -- **Use Case #1 - Edge CDN with eMBB Core Network Slicing** +Use case no. 1: Edge CDN with eMBB core network slicing +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - **Business Objectives** +- **Business objectives** - Monetizing 5G by provisioning eMBB network slice with distributed Content Delivery Network (CDN) as a service, that - enables Ultra-HD (UHD) streaming, Video Optimization, caching for large files, and other capabilities that can - either bundled by the Network Slice offering or implicitly enabled by the operator. + Monetizing 5G by provisioning eMBB network slices with a distributed content delivery network (CDN) as a service + that enables ultra-HD (UHD) streaming, video optimization, caching for large files, and other capabilities that can + either be bundled by the network slice offering or implicitly enabled by the operator. 
- - **Targeted Segments** +- **Targeted segments** - - B2C (Targeting high Tier Packages & Bundles) - - Content Owners (Potential revenue sharing model) - - Mobile Virtual Network Operators (MVNOs - Wholesale) - - Stadiums and Venues. + - B2C: targeting high-tier packages and bundles + - content owners: potential revenue sharing model + - mobile virtual network operators (MVNOs): wholesale + - stadiums and venues - - **Architecture** +- **Architecture** .. figure:: ../figures/Fig2-1-uc1.png - :alt: Edge CDN with eMBB Core Network Slicing + :alt: Edge CDN with eMBB Core Network Slicing - Edge CDN with eMBB Core Network Slicing + Edge CDN with eMBB core network slicing -- **Use Case #2 - Edge Private 5G with Core Network Slicing** +Use case no. 2: Edge private 5G with core network slicing +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - **Business Objectives** +- **Business objectives** - Private 5G is considered one of the most anticipated Business use cases in the coming few years enabling Mobile - Operators to provide a standalone private Mobile Network to enterprises that may include all the ingredients of PLMN - such as Radio, Core, Infrastructure & Services covering the business requirements in terms of security, performance, - reliability, & availability. + Private 5G is considered to be one of the most eagerly awaited business use cases in the coming years. It will + enable mobile operators to provide a standalone private mobile network to enterprises that may include all the + ingredients of the PLMN, such as radio, core, infrastructure, and services covering business requirements in terms + of security, performance, reliability, and availability. - - **Targeted Segments** +- **Targeted segments** - - Governmental Sectors & Public Safety (Mission critical applications) - - Factories and Industry sector. - - Enterprises with Business-critical applications. - - Enterprises with strict security requirements with respect to assets reachability. 
-    - Enterprises with strict KPIs requirements that mandate the on-premise deployment.
+  - governmental sectors and public safety (mission-critical applications)
+  - factories and the industry sector
+  - enterprises with business-critical applications
+  - enterprises with strict security requirements, with respect to the reachability of assets
+  - enterprises with strict KPI requirements that mandate the on-premise deployment

-  - **Architecture**
+- **Architecture**

-    - There are multiple flavours for Private 5G deployments or NPN, Non-Public Network as defined by 3GPP.
-    - The use case addresses the technical realization of NPN as a Network Slice of a PLMN as per Annex D –
-      3GPP TS 23.501 R16 and not covering the other scenarios of deployment.
-    - The use case assumes a network slice that is constructed from a single UPF deployed on Customer premises while
-      sharing the 5G Control Plane (AMF, SMF, & other CP Network Functions) with the PLMN.
-    - The use case doesn’t cover the requirements of the private Application Servers (ASs) as they may vary with each
-      customer setup.
-    - Hosting the CU/DU on-Customer Infrastructure depends on the enterprise offering by the Mobile Operator and the
-      selected Private 5G setup.
-    - The Edge Cloud Infrastructure can be governed by the client or handled by the Service Provider (Mobile Operator)
-      as part of Managed-services model.
+  - There are multiple flavours for private 5G deployments or for the non-public network (NPN), as defined by 3GPP.
+  - This use case addresses the technical realization of the NPN as a network slice of a PLMN, according to Annex D –
+    3GPP TS 23.501 R16. It does not cover the other scenarios of deployment.
+  - This use case assumes a network slice that is constructed from a single UPF deployed on customer premises, while
+    sharing the 5G control plane (AMF, SMF, and other CP network functions) with the PLMN.
+  - This use case does not cover the requirements of the private application servers (ASs), as they may vary with
+    each customer setup.
+  - Hosting the CU/DU on-customer infrastructure depends on the enterprise offering by the mobile operator and the
+    selected private 5G setup.
+  - The Edge cloud infrastructure can be governed by the client or handled by the service provider (mobile operator)
+    as part of a managed-services model.

 .. figure:: ../figures/Fig2-2-uc2.png
-   :alt: Edge Private 5G with Core Network Slicing
+   :alt: Edge private 5G with core network slicing

-   Edge Private 5G with Core Network Slicing.
+   Edge private 5G with core network slicing.

-- **Use Case #3 - Edge Automotive (V2X) with uRLLC Core Network Slicing**
+Use case no. 3: Edge automotive (V2X) with uRLLC core network slicing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-  - **Business Objectives**
+- **Business objectives**

-    The V2X (Vehicle-to-everything) set of use cases provides a 5G monetization framework for Mobile Operators
-    developing 5G URLLC business use cases targeting the Automotive Industry, Smart City Regulators, & Public Safety.
+  The vehicle-to-everything (V2X) set of use cases provides a 5G monetization framework for mobile operators
+  developing 5G URLLC business use cases targeting the automotive industry, smart city regulators, and public
+  safety.

-  - **Targeted Segments**
+- **Targeted segments**

-    - Automotive Industry.
-    - Governmental Departments (Smart Cities, Transport, Police, Emergency Services, etc.).
-    - Private residencies (Compounds, Hotels and Resorts).
-    - Enterprise and Industrial Campuses.
+  - the automotive industry
+  - governmental departments (smart cities, transport, police, emergency services, and so on)
+  - private residences (compounds, hotels, and resorts)
+  - enterprise and industrial campuses

-  - **Architecture**
+- **Architecture**

-    - 5G NR-V2X is a work item in 3GPP Release 16 that is not completed yet by the time of writing this document.
+  - 5G NR-V2X is a work item in 3GPP Release 16 that has not been completed at the time of writing this document.

-    - C-V2X, Cellular V2X has two modes of communications
+  - Cellular V2X (C-V2X) has two modes of communication:

-      - Direct Mode (Commonly described by SL, Sidelink by 3GPP): This includes the V2V, V2I, & V2P using a direct
-        Interface (PC5) operating in ITS, Intelligent Transport Bands (e.g. 5.9 GHZ).
-      - Network Mode (UL/DL): This covers the V2N while operating in the common telecom licensed spectrum. This use
-        case is capitalizing on this mode.
+    - Direct mode (commonly described as Sidelink (SL) by 3GPP): this includes the V2V, V2I, and V2P using a
+      direct interface (PC5) operating in ITS and intelligent transport bands (for example, 5.9 GHz).
+    - Network mode (UL/DL): this covers the V2N while operating in the common telecom-licensed spectrum. This use
+      case capitalizes on this mode.

-    - The potential use cases that may consume services from Edge is the Network Model (V2N) and potentially the V2I
-      (According on how the Infrastructure will be mapped to an Edge level)
+  - The potential use cases that may consume services from the Edge are the network model (V2N) and potentially
+    the V2I (according to how the infrastructure will be mapped to an Edge level).

 .. figure:: ../figures/Fig2-3-uc3.png
-   :alt: Edge Automotive (V2X) with uRLLC Core Network Slicing
+   :alt: Edge automotive (V2X) with uRLLC core network slicing

-   Edge Automotive (V2X) with uRLLC Core Network Slicing
+   Edge automotive (V2X) with uRLLC core network slicing

-- **Use Case #4 – Edge vRAN Deployments**
+Use case no. 4: Edge vRAN deployments
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-  - **Business Objectives**
-    vRAN is one of the trending technologies of RAN deployment that fits for all Radio Access Technologies. vRAN helps
-    to provide coverage for rural & uncovered areas with a compelling CAPEX reduction compared to Traditional and legacy
-    RAN deployments. This coverage can be extended to all area types with 5G greenfield deployment as a typical example.
+- **Business objectives**
+  vRAN is one of the trending RAN deployment technologies that fits all radio access technologies. vRAN helps to
+  provide coverage for rural and uncovered areas with a compelling CAPEX reduction, compared to traditional and
+  legacy RAN deployments. This coverage can be extended to all area types with 5G greenfield deployments as a
+  typical example.

-  - **Targeted Segments**
+- **Targeted segments**

-    - Private 5G Customers (vRAN Can be part of the Non-Public Network, NPN)
-    - B2B Customers & MVNOs (vRAN Can be part of an E2E Network Slicing)
-    - B2C (Mobile Consumers Segment).
+  - Private 5G customers: vRAN can be part of the non-public network (NPN).
+  - B2B customers and MVNOs: vRAN can be part of E2E network slicing.
+  - B2C: the mobile consumers segment.

-  - **Architecture**
+- **Architecture**

-    - There are multiple deployment models for Centralized Unit (CU) & Distributed Unit (DU). This use case covers the
-      placement case of having the DU & CU collocated & deployed on Telco Edge, see NGMN Overview on 5GRAN Functional
-      Decomposition ver 1.0 :cite:p:`ngmn5granfnldecomp`.
-    - The use case covers the 5G vRAN deployment. However, this can be extended to cover 4G vRAN as well.
-    - Following Split Option 7.2, the average market latency for RU-DU (Fronthaul) is 100 microsec – 200 microsec while
-      the latency for DU-CU (Midhaul) is tens of milliseconds, see ORAN-WG4.IOT.0-v01.00 :cite:p:`oranwg4iot0`.
+  - There are multiple deployment models for the centralized unit (CU) and the distributed unit (DU). This use
+    case covers the placement case of having the DU and the CU co-located and deployed on the Telco Edge. For
+    details, see the NGMN Overview on 5GRAN Functional Decomposition ver 1.0 :cite:p:`ngmn5granfnldecomp`.
+  - This use case covers the 5G vRAN deployment. However, this can be extended to cover 4G vRAN, as well.
+  - Following Split Option 7.2, the average market latency for RU-DU (fronthaul) is 100 microseconds to 200
+    microseconds, while the latency for DU-CU (midhaul) is tens of milliseconds. For details, see
+    ORAN-WG4.IOT.0-v01.00 :cite:p:`oranwg4iot0`.

 .. figure:: ../figures/Fig2-4-uc4.png
-   :alt: Edge vRAN Deployments
+   :alt: Edge vRAN deployments

-   Edge vRAN Deployments
+   Edge vRAN deployments

-- **Use Case #5 - Telepresence Experience**
+Use case no. 5: Telepresence experience
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-  - **Business Objectives**
+- **Business objectives**

-    This service would allow the communication between one or more persons with the feeling to be present in a location without being physically in a virtual environment. This service will make use of eMBB, and URLLC network slices and a distributed deployment which would offload processing.
+  This service allows communication with one or more persons and creates the impression of being present in the
+  same location without being physically in a virtual environment. This service makes use of eMBB and URLLC
+  network slices, and a distributed deployment which would offload processing.

-  - **Targeted Segments**
+- **Targeted segments**

-    - B2B Customers & MVNOs
-    - B2C (Mobile Consumers Segment)
-    - Enterpises which make use of Communication platforms
-
-  - **Architecture**
+  - B2B customers and MVNOs
+  - B2C: the mobile consumers segment
+  - enterprises which make use of communication platforms
+
+- **Architecture**

-    - Distributed deployment model across the ecosystem. It should be possible to deploy workload at the extreme edge, which would allow real-time processing for video, and offload processing for network load prediction, which would support the Quality of Experience that is defined for such a use case
-    - The use case covers should allow the placement at the management plane and control plane (e.g. Core, Edge domain)
-    - There are high-level requirements requirement for such a use case (e.g. latency of 1ms, available bandwidth 8 Gbps)
+  - The architecture takes the form of distributed deployment models across the ecosystem. It should be possible
+    to deploy the workloads at the extreme edge, which would allow real-time processing for video, and would
+    offload processing for network load prediction. This would in turn support the quality of experience that is
+    defined for such a use case.
+  - This use case covers the placement at the management plane and the control plane (for example, the Core and
+    the Edge domain).
+  - There are high-level requirements for such a use case, such as a latency of 1 ms, and an available bandwidth
+    of 8 Gbps.

-- **Use Case #6 - Digital Twins for Manufacturing**
+Use case no. 6: Digital twins for manufacturing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-  - **Business Objectives**
+- **Business objectives**

-    Providing the capability to design and create a product/service as a Digital Twin which can be tested before moving into Production environment. Therefore, only once acceptance is achieved in the digital world, the service would be available. This leads to extreme reduction of Total Cost of Ownership (TCO), and minimize the risks that are commonly associated to a design and testing of a service for industrial environment.
+  The business objectives are to provide the ability to design and create a product/service as a digital twin
+  which can be tested before moving into the production environment. Therefore, only once acceptance is achieved
+  in the digital world does the service become available. This leads to an extreme reduction of the total cost of
+  ownership (TCO), and minimizes the risks that are commonly associated with the design and testing of a service
+  for the industrial environment.

-  - **Targeted Segments**
+- **Targeted segments**

-    - Private Networks
-    - Enterprise
-    - Factory (make use of high level of automation).
-
-  - **Architecture**
+  - private networks
+  - enterprise
+  - factories (making extensive use of automation)
+
+- **Architecture**

-    - Demands very low latency (<<1ms) and high reliability.
-    - Trustworthiness needs to be guaranteed, which are usually associated to performance, security and resource efficiency/cost and subsequently productivity
-    - Processing capability of massive volumes of data.
+  - This use case demands very low latency (much less than 1 ms) and high reliability.
+  - The trustworthiness of the service needs to be guaranteed. This is usually associated with performance,
+    security, and resource efficiency/cost, and, subsequently, productivity.
+  - This use case can process massive volumes of data.

 Analysis
 --------

-Studying various requirements of workloads helps understanding what expectation they will have from the underlying cloud
-infrastructure. Following are *some* of the requirement types on which various workloads might have different
-expectation levels:
+Studying the various requirements of the workloads helps us to understand what expectations they will have
+from the underlying cloud infrastructure. Some of the requirement types on which various workloads may have
+different expectation levels are set out below:

 - **Computing**

-  - Speed (e.g., CPU clock and physical cores number)
-  - Predictability (e.g., CPU and RAM sharing level)
-  - Specific processing (e.g., cryptography, transcoding)
+  - speed (for example, the CPU clock and the number of physical cores)
+  - predictability (for example, the CPU and RAM sharing levels)
+  - specific processing (for example, cryptography and transcoding)

 - **Networking**

-  - Throughput (i.e., bit rate and/or packet rate)
-  - Latency
-  - Connection points / interfaces number (i.e., vNIC and VLAN)
-  - Specific traffic control (e.g., firewalling, NAT, cyphering)
-  - Specific external network connectivity (e.g., MPLS, VXLAN)
+  - throughput (that is, bit rate or packet rate, or both)
+  - latency
+  - the number of connection points or interfaces (that is, vNICs and VLANs)
+  - specific traffic control (for example, firewalling, NAT, and ciphering)
+  - specific external network connectivity (for example, MPLS and VXLAN)

 - **Storage**

-  - IOPS (i.e., input/output rate and/or byte rate)
-  - Volume
-  - Ephemeral or Persistent
-  - Specific features (e.g., object storage, local storage)
+  - IOPS (that is, input/output rate or byte rate, or both)
+  - volume
+  - ephemeral or persistent
+  - specific features (for example, object storage and local storage)

-By trying to sort workloads into different categories based on the requirements observed, below are the different
-profiles concluded, which are mainly driven by expected performance levels:
+In trying to sort workloads into different categories based on the observed requirements, we have identified
+two different profiles, detailed below. These profiles are mainly driven by the expected performance levels.
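
The two-profile split that this analysis arrives at can be sketched as a simple classification rule. The following is a purely illustrative Python sketch, not part of the Reference Model; the requirement flag names are invented for the example:

```python
# Illustrative only: map a workload's declared requirement flags to one of the
# two proposed infrastructure profiles. The flag names are hypothetical.

def assign_profile(requirements):
    """Return the profile for a workload, based on its declared needs."""
    demanding = ("predictable_computing",
                 "high_network_throughput",
                 "low_network_latency")
    # Any data-plane-style need pushes the workload to the second profile.
    if any(requirements.get(need) for need in demanding):
        return "high-performance"
    # Control/management plane functions with no specific needs.
    return "basic"

print(assign_profile({}))                             # basic
print(assign_profile({"low_network_latency": True}))  # high-performance
```

For example, a management-plane NMS with no specific needs lands in the first profile, while a UPF declaring low-latency and high-throughput needs lands in the second.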
- **Profile One** - - Workload types + - Workload types: + + - control plane functions without specific needs, and management plane functions + - *examples: OFCS, AAA, and NMS* - - Control plane functions without specific need, and management plane functions - - *Examples: OFCS, AAA, NMS* + - Requirements: - - No specific requirement + - There are no specific requirements. - **Profile Two** - Workload types - - Data plane functions (i.e., functions with specific networking and computing needs) - - *Examples: BNG, S/PGW, UPF, Sec-GW, DPI, CDN, SBC, MME, AMF, IMS-CSCF, UDR* + - data plane functions (that is, functions with specific networking and computing needs) + - *examples: BNG, S/PGW, UPF, Sec-GW, DPI, CDN, SBC, MME, AMF, IMS-CSCF, and UDR* - - Requirements + - Requirements: - - Predictable computing - - High network throughput - - Low network latency + - predictable computing + - high network throughput + - low network latency .. _profiles-profile-extensions--flavours: -Profiles, Profile Extensions & Flavours ---------------------------------------- +Profiles, profile extensions, and flavours +------------------------------------------ -**Profiles** are used to tag infrastructure (such as hypervisor hosts, or Kubernetes worker nodes) and associate it with +**Profiles** are used to tag infrastructure, such as hypervisor hosts or Kubernetes worker nodes, and associate it with a set of capabilities that are exploitable by the workloads. -Two profile *layers* are proposed: - -- The top level **profiles** represent macro-characteristics that partition infrastructure into separate pools, i.e.: an - infrastructure object can belong to one and only one profile, and workloads can only be created using a single - profile. Workloads requesting a given profile **must** be instantiated on infrastructure of that same profile. 
-- For a given profile, **profile extensions** represent small deviations from (or further qualification, such as
-  infrastructure sizing differences (e.g. memory size)) the profile that do not require partitioning the infrastructure
-  into separate pools, but that have specifications with a finer granularity of the profile. Profile Extensions can be
-  *optionally* requested by workloads that want a more granular control over what infrastructure they run on, i.e.: an
-  infrastructure resource can have **more than one profile extension label** attached to it, and workloads can request
+There are two profile *layers*:
+
+- Top-level **profiles**: The top-level profiles represent macro-characteristics that partition the infrastructure into
+  separate pools. This means that an infrastructure object can belong to one profile only, and workloads can only be
+  created using a single profile. Workloads requesting a given profile **must** be instantiated on the infrastructure of
+  that same profile.
+- Profile extensions: For a given profile, **profile extensions** represent small variations of the profile, such as
+  infrastructure sizing differences (for example, memory size), that do not require the partitioning of the infrastructure
+  into separate pools, but that have specifications with a finer granularity than the profile. Profile extensions can be
+  *optionally* requested by workloads that want a more granular control over the infrastructure on which they run, that is,
+  an infrastructure resource can have **more than one profile extension label** attached to it, and workloads can request
   resources to be instantiated on infrastructure with a certain profile extension. Workloads requesting a given profile
-  extension **must** be instantiated on infrastructure with that same profile extension. It is allowed to instantiate
-  workloads on infrastructure tagged with more profile extensions than requested, as long as the minimum requirements
-  are satisfied.
- -Workloads specify infrastructure capability requirements as workload metadata, indicating what kind of infrastructure -they must run on to achieve functionality and/or the intended level of performance. Workloads request resources -specifying the Profiles and Profile Extensions, and a set of sizing metadata that maybe expressed as flavours that are -required for the workload to run as intended. -A resource request by a workload can be met by any infrastructure node that has the same or a more specialised profile + extension **must** be instantiated on infrastructure with the same profile extension. The operator may instantiate + workloads on infrastructure tagged with more profile extensions than requested, as long as the minimum requirements are + satisfied. + +The workloads specify infrastructure capability requirements as workload metadata, indicating what kind of +infrastructure they must run on to achieve functionality or the intended level of performance, or both. The workloads +request resources, specifying the profiles and profile extensions, and a set of sizing metadata that may be expressed +as flavours that are required for the workload to run as intended. +A resource request by a workload can be met by any infrastructure node that has the same or a more specialised profile, and the necessary capacity to support the requested flavour or resource size. -Profiles, Profile Extensions and Flavours will be considered in greater detail in +Profiles, profile extensions, and flavours are considered in greater detail in :ref:`chapters/chapter04:profile extensions`. Profiles (top-level partitions) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Based on the above analysis, the following cloud infrastructure profiles are proposed (also shown in -:numref:`Infrastructure profiles proposed based on VNFs categorisation` below) -- **Basic**: for Workloads that can tolerate resource over-subscription and variable latency. 
-- **High Performance**: for Workloads that require predictable computing performance, high network throughput and low
-  network latency.
+Based on the above analysis, the following cloud infrastructure profiles are proposed
+(see also :numref:`Infrastructure profiles based on the categorisation of the VNFs`):
+
+- **Basic**: this is for workloads that can tolerate resource over-subscription and variable latency.
+- **High-performance**: this is for workloads that require predictable computing performance, high network throughput,
+  and low network latency.

 .. figure:: ../figures/RM-ch02-node-profiles.png
-   :alt: Infrastructure profiles proposed based on VNFs categorisation
-   :name: Infrastructure profiles proposed based on VNFs categorisation
+   :alt: Infrastructure profiles based on the categorisation of the VNFs
+   :name: Infrastructure profiles based on the categorisation of the VNFs

-   Infrastructure profiles proposed based on VNFs categorisation
+   Infrastructure profiles based on the categorisation of the VNFs

 In :ref:`chapters/chapter04:infrastructure capabilities, measurements and catalogue`
-these **B (Basic)** and **H (High) Performance** infrastructure profiles will be
-defined in greater detail for use by workloads.
+these **Basic (B)** and **High-performance (H)** infrastructure profiles are defined in greater detail for use by the
+workloads.

-Profiles partition the infrastructure: an infrastructure object (host/node) **must** have one and only one profile
-associated to it.
+Profiles partition the infrastructure: an infrastructure object (host/node) **must** have only one profile associated
+to it.

-Profile Extensions (specialisations)
+Profile extensions (specialisations)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Profile Extensions are meant to be used as labels for infrastructure, identifying the nodes that implement special
-capabilities that go beyond the profile baseline. 
Certain profile extensions may be relevant only for some profiles. -The following **profile extensions** are proposed: +Profile extensions are intended to be used as labels for infrastructure. They identify the nodes that implement +special capabilities that go beyond the profile baseline. Certain profile extensions may only be relevant for some +profiles. The **profile extensions** are detailed in the following table. +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Profile Extension | Mnemonic | Applicable to | Applicable to | Description | Notes | -| Name | | Basic Profile | High | | | -| | | | Performance | | | -| | | | Profile | | | +| Profile extension | Mnemonic | Applicable to | Applicable to | Description | Notes | +| name | | the basic | the high- | | | +| | | profile | performance | | | +| | | | profile | | | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Compute Intensive | compute-high-perf-cpu | ❌ | ✅ | Nodes that have | May use | -| High-performance | | | | predictable computing | vanilla | +| Compute-intensive | compute-high-perf-cpu | ❌ | ✅ | Nodes that have | May use | +| high-performance | | | | predictable computing | vanilla | | CPU | | | | performance and higher | VIM/K8S | | | | | | clock speeds. | scheduling | | | | | | | instead. | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Storage Intensive | storage-high-perf | ❌ | ✅ | Nodes that have low | | -| High-performance | | | | storage latency and/or | | -| storage | | | | high storage IOPS | | +| Storage-intensive | storage-high-perf | ❌ | ✅ | Nodes that have low | | +| high-performance | | | | storage latency or | | +| storage | | | | high storage IOPS, or | | +| | | | | both. 
| | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Compute Intensive | compute-high-memory | ❌ | ✅ | Nodes that have high | May use | -| High memory | | | | amounts of RAM. | vanilla | +| Compute-intensive | compute-high-memory | ❌ | ✅ | Nodes that have high | May use | +| high memory | | | | amounts of RAM. | vanilla | | | | | | | VIM/K8S | | | | | | | scheduling | | | | | | | instead. | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Compute Intensive | compute-gpu | ❌ | ✅ | for compute intensive | May use Node | -| GPU | | | | Workloads that | Feature | -| | | | | requires GPU compute | Discovery. | -| | | | | resource on the node | | +| Compute-intensive | compute-gpu | ❌ | ✅ | For compute-intensive | May use node | +| GPU | | | | workloads that | feature | +| | | | | require GPU compute | discovery. | +| | | | | resources on the node. | | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Network Intensive | high-speed-network | ❌ | ✅ | Denotes the presence | | -| High speed | | | | of network links (to | | -| network (25G) | | | | the DC network) of | | +| Network-intensive | high-speed-network | ❌ | ✅ | Denotes the presence | | +| high-speed | | | | of network links (to | | +| network (25G) | | | | the DC network) with a | | | | | | | speed of 25 Gbps or | | | | | | | greater on the node. 
| | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Network Intensive | very-high-speed-network | ❌ | ✅ | Denotes the presence | | -| Very High speed | | | | of network links (to | | -| network (100G) | | | | the DC network) of | | +| Network-intensive | very-high-speed-network | ❌ | ✅ | Denotes the presence | | +| very-high-speed | | | | of network links (to | | +| network (100G) | | | | the DC network) with a | | | | | | | speed of 100 Gbps or | | | | | | | greater on the node. | | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Low Latency - | low-latency-edge | ✅ | ✅ | Labels a host/node as | | -| Edge Sites | | | | located in an edge | | +| Low latency Edge | low-latency-edge | ✅ | ✅ | Labels a host/node as | | +| sites | | | | located in an Edge | | | | | | | site, for workloads | | | | | | | requiring low latency | | -| | | | | (specify value) to | | +| | | | | (specify value), to | | | | | | | final users or | | | | | | | geographical | | | | | | | distribution. | | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Very Low Latency | very-low-latency-edge | ✅ | ✅ | Labels a host/node as | | -| - Edge Sites | | | | located in an edge | | +| Very low latency | very-low-latency-edge | ✅ | ✅ | Labels a host/node as | | +| Edge sites | | | | located in an Edge | | | | | | | site, for workloads | | | | | | | requiring low latency | | -| | | | | (specify value) to | | +| | | | | (specify value), to | | | | | | | final users or | | | | | | | geographical | | | | | | | distribution. 
| | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Ultra Low Latency | ultra-low-latency-edge | ✅ | ✅ | Labels a host/node as | | -| - Edge Sites | | | | located in an edge | | +| Ultra low latency | ultra-low-latency-edge | ✅ | ✅ | Labels a host/node as | | +| Edge sites | | | | located in an Edge | | | | | | | site, for workloads | | | | | | | requiring low latency | | -| | | | | (specify value) to | | +| | | | | (specify value), to | | | | | | | final users or | | | | | | | geographical | | | | | | | distribution. | | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Fixed function | compute-ffa | ❌ | ✅ | Labels a host/node | | +| Fixed-function | compute-ffa | ❌ | ✅ | Labels a host/node | | | accelerator | | | | that includes a | | -| | | | | consumable fixed | | +| | | | | consumable fixed- | | | | | | | function accelerator | | | | | | | (non-programmable, | | -| | | | | e.g. Crypto, | | +| | | | | such as a Crypto- or | | | | | | | vRAN-specific | | | | | | | adapter). | | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Firmware- | compute-fpga | ❌ | ✅ | Labels a host/node | | +| Firmware- | compute-fpga | ❌ | ✅ | Labels a host/node | | | programmable | | | | that includes a | | | adapter | | | | consumable | | -| | | | | Firmware-programmable | | +| | | | | firmware-programmable | | | | | | | adapter (programmable, | | -| | | | | e.g. Network/storage | | -| | | | | FPGA with programmable | | -| | | | | part of firmware | | -| | | | | image). | | +| | | | | such as a network/ | | +| | | | | storage FPGA with a | | +| | | | | programmable part of | | +| | | | | the firmware image). 
| | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| SmartNIC enabled | network-smartnic | ❌ | ✅ | Labels a host/node | | +| SmartNIC enabled | network-smartnic | ❌ | ✅ | Labels a host/node | | | | | | | that includes a | | -| | | | | Programmable | | +| | | | | programmable | | | | | | | accelerator for | | | | | | | vSwitch/vRouter, | | -| | | | | Network Function | | -| | | | | and/or Hardware | | -| | | | | Infrastructure. | | +| | | | | network function, | | +| | | | | and/or hardware | | +| | | | | infrastructure. | | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| SmartSwitch | network-smartswitch | ❌ | ✅ | Labels a host/node | | +| SmartSwitch- | network-smartswitch | ❌ | ✅ | Labels a host/node | | | enabled | | | | that is connected to a | | -| | | | | Programmable Switch | | -| | | | | Fabric or TOR switch | | +| | | | | programmable switch | | +| | | | | fabric or a TOR | | +| | | | | switch. | | +-------------------+-------------------------+---------------+---------------+------------------------+---------------+ **Table 2-1:** Profile extensions - \*\ **Note:** This is an initial set of proposed profiles and profile extensions and it is expected that more - profiles and/or profile extensions will be added as more requirements are gathered and as technology enhances and + \*\ **Note:** This is an initial set of proposed profiles and profile extensions. It is expected that more profiles + or profile extensions, or both, will be added as more requirements are gathered, and as technology evolves and matures. 
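
The placement rules of this section (one profile per node; a node may carry more profile-extension labels than a workload requests, as long as the minimum is satisfied) can be sketched in a few lines. This is an illustrative sketch only; the dictionary representation of node labels is invented, not mandated by the Reference Model:

```python
# Illustrative only: a node is eligible for a workload when it carries exactly
# the requested profile AND at least the requested profile extensions.
# Extra extension labels on the node are allowed.

def node_is_eligible(node, profile, requested_extensions):
    return (node["profile"] == profile
            and set(requested_extensions) <= set(node["extensions"]))

node = {"profile": "high-performance",
        "extensions": {"compute-high-perf-cpu", "high-speed-network"}}

# Extra labels on the node are fine, as long as the minimum is satisfied:
print(node_is_eligible(node, "high-performance", {"compute-high-perf-cpu"}))  # True
# Profiles partition the infrastructure, so a basic request cannot land here:
print(node_is_eligible(node, "basic", set()))                                 # False
```

The subset check mirrors the "minimum requirements" wording: requesting `compute-high-perf-cpu` matches any high-performance node labelled with that extension, regardless of what other extensions the node also carries.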
From bfc75ce13d0a948c474dc0a30c5a73d56100517e Mon Sep 17 00:00:00 2001 From: Gergely Csatari Date: Mon, 18 Nov 2024 14:55:58 +0200 Subject: [PATCH 04/32] Add manual build trigger Signed-off-by: Gergely Csatari --- .github/workflows/build-docs.yml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/.github/workflows/build-docs.yml b/.github/workflows/build-docs.yml index fbc3d6e5..b539eb25 100644 --- a/.github/workflows/build-docs.yml +++ b/.github/workflows/build-docs.yml @@ -1,6 +1,7 @@ name: "Pull Request Docs Check" on: -- pull_request + pull_request: + workflow_dispatch: jobs: docs: From 685ce4cc3af2cd0f4ccb1739eeee7f986a59b65b Mon Sep 17 00:00:00 2001 From: Gergely Csatari Date: Mon, 18 Nov 2024 15:49:23 +0200 Subject: [PATCH 05/32] Adding links to the link ignore table and transforming Table 9-4 to a link-table Signed-off-by: Gergely Csatari --- doc/ref_model/chapters/chapter09.rst | 137 ++++++++++++--------------- doc/ref_model/conf.py | 4 +- 2 files changed, 64 insertions(+), 77 deletions(-) diff --git a/doc/ref_model/chapters/chapter09.rst b/doc/ref_model/chapters/chapter09.rst index 8059d413..38fd2947 100644 --- a/doc/ref_model/chapters/chapter09.rst +++ b/doc/ref_model/chapters/chapter09.rst @@ -645,82 +645,67 @@ and so on, prior to deployment, are listed in Table 9-4 (below). The tenant processes for application LCM, such as updates, are out of scope. For the purpose of these requirements, CI includes Continuous Delivery, and CD refers to Continuous Deployment. 
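
The requirements in Table 9-4 below describe event-driven task automation (auto.cicd.002) in which a new software release landing in a repository triggers the pipeline (auto.cicd.004). A minimal, purely illustrative Python sketch of that behaviour, with invented stage and event names:

```python
# Illustrative only: event-driven pipeline triggering. A release arriving in a
# repository starts the pipeline stages in order; nothing runs on a schedule.
# Stage and event names are hypothetical.

PIPELINE_STAGES = ("scan-source", "build-and-package", "scan-artifacts",
                   "validate", "promote")

def on_repository_event(event):
    """Start the pipeline only for release events; ignore everything else."""
    if event.get("type") != "release-uploaded":
        return []  # event-driven: no polling, no scheduled tasks
    return [(stage, event["artifact"]) for stage in PIPELINE_STAGES]

runs = on_repository_event({"type": "release-uploaded", "artifact": "upf:1.2.0"})
print([stage for stage, _ in runs])
print(on_repository_event({"type": "heartbeat"}))  # []
```

In a real deployment this dispatch would be handled by a CI/CD framework such as the Tekton project cited in the table, with the repository emitting the triggering events.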
-+---------------+-----------------------------------+-----------------------------------------------------------------+ -| Ref # | Description | Comments/Notes | -+===============+===================================+=================================================================+ -| auto.cicd.001 | The CI/CD pipeline must support | CI/CD pipelines automate CI/CD best practices into repeatable | -| | deployment on any cloud and cloud | workflows for integrating code and configurations into builds, | -| | infrastructures, including | testing builds including validation against design and | -| | different hardware accelerators. | operator-specific criteria, and delivery of the product onto a | -| | | runtime environment. Example of an open-source cloud native | -| | | CI/CD framework is the Tekton project | -| | | (:cite:p:`tekton-project`) | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.002 | The CI/CD pipelines must use | | -| | event-driven task automation | | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.003 | The CI/CD pipelines should avoid | | -| | scheduling tasks | | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.004 | The CI/CD pipeline is triggered | The software release can be source code files, configuration | -| | by a new or updated software | files, images, manifests. Operators may support a single or | -| | release being loaded into a | multiple repositories and may specify which repository is to be | -| | repository | used for these releases. 
An example of an open source | -| | | repository is the CNCF Harbor (:cite:p:`cncf-harbor`) | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.005 | The CI pipeline must scan source | | -| | code and manifests to validate | | -| | compliance with design and coding | | -| | best practices. | | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.006 | The CI pipeline must support the | | -| | build and packaging of images and | | -| | deployment manifests from source | | -| | code and configuration files. | | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.007 | The CI pipeline must scan images | See section 7.10 | -| | and manifests to validate for | (:ref:`chapters/chapter07:consolidated security requirements`). | -| | compliance with security | Examples of such security requirements include only ingesting | -| | requirements. | images, source code, configuration files, etc., only from | -| | | trusted sources. | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.008 | The CI pipeline must validate | Example: different tests | -| | images and manifests | | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.009 | The CI pipeline must validate | | -| | with all hardware offload | | -| | permutations and without hardware | | -| | offload | | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.010 | The CI pipeline must promote | Example: promote from a development repository to a production | -| | validated images and manifests to | repository | -| | be deployable. 
| | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.011 | The CD pipeline must verify and | Example: RBAC, request is within quota limits, | -| | validate the tenant request | affinity/anti-affinity, etc. | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.012 | The CD pipeline after all | | -| | validations must turn over | | -| | control to orchestration of the | | -| | software | | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.013 | The CD pipeline must be able to | | -| | deploy into Development, Test, | | -| | and Production environments | | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.014 | The CD pipeline must be able to | | -| | automatically promote software | | -| | from Development to Test and | | -| | Production environments | | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.015 | The CI pipeline must run all | | -| | relevant Reference Conformance | | -| | test suites | | -+---------------+-----------------------------------+-----------------------------------------------------------------+ -| auto.cicd.016 | The CD pipeline must run all | | -| | relevant Reference Conformance | | -| | test suites | | -+---------------+-----------------------------------+-----------------------------------------------------------------+ +.. list-table:: Automation CI/CD + :widths: 10 20 30 + :header-rows: 1 + + * - Ref # + - Description + - Comments/Notes + * - auto.cicd.001 + - The CI/CD pipeline must support deployment on any cloud and cloud infrastructures, including different hardware + accelerators. 
+     - CI/CD pipelines automate CI/CD best practices into repeatable workflows for integrating code and
+       configurations into builds, testing builds including validation against design and operator-specific
+       criteria, and delivery of the product onto a runtime environment. An example of an open-source cloud-native
+       CI/CD framework is the Tekton project (:cite:p:`tekton-project`).
+   * - auto.cicd.002
+     - The CI/CD pipelines must use event-driven task automation.
+     -
+   * - auto.cicd.003
+     - The CI/CD pipelines should avoid scheduling tasks.
+     -
+   * - auto.cicd.004
+     - The CI/CD pipeline is triggered by a new or updated software release being loaded into a repository.
+     - The software release can be source code files, configuration files, images, or manifests. Operators may
+       support a single repository or multiple repositories, and may specify which repository is to be used for
+       these releases. An example of an open-source repository is the CNCF Harbor (:cite:p:`cncf-harbor`).
+   * - auto.cicd.005
+     - The CI pipeline must scan source code and manifests to validate compliance with design and coding best
+       practices.
+     -
+   * - auto.cicd.006
+     - The CI pipeline must support the build and packaging of images and deployment manifests from source code and
+       configuration files.
+     -
+   * - auto.cicd.007
+     - The CI pipeline must scan images and manifests to validate compliance with security requirements.
+     - See section 7.10 (:ref:`chapters/chapter07:consolidated security requirements`). Examples of such security
+       requirements include ingesting images, source code, configuration files, and so on, only from trusted
+       sources.
+   * - auto.cicd.008
+     - The CI pipeline must validate images and manifests.
+     - Example: different tests.
+   * - auto.cicd.009
+     - The CI pipeline must validate with all hardware offload permutations and without hardware offload.
+     -
+   * - auto.cicd.010
+     - The CI pipeline must promote validated images and manifests to be deployable.
+     - Example: promote from a development repository to a production repository.
+   * - auto.cicd.011
+     - The CD pipeline must verify and validate the tenant request.
+     - Example: RBAC, request is within quota limits, affinity/anti-affinity, etc.
+   * - auto.cicd.012
+     - The CD pipeline, after all validations, must turn over control to the orchestration of the software.
+     -
+   * - auto.cicd.013
+     - The CD pipeline must be able to deploy into Development, Test, and Production environments.
+     -
+   * - auto.cicd.014
+     - The CD pipeline must be able to automatically promote software from Development to Test and Production
+       environments.
+     -
+   * - auto.cicd.015
+     - The CI pipeline must run all relevant Reference Conformance test suites.
+     -
+   * - auto.cicd.016
+     - The CD pipeline must run all relevant Reference Conformance test suites.
+     -

 **Table 9-4:** Automation CI/CD

diff --git a/doc/ref_model/conf.py b/doc/ref_model/conf.py
index e29065b3..5a4b526e 100644
--- a/doc/ref_model/conf.py
+++ b/doc/ref_model/conf.py
@@ -32,7 +32,9 @@
     "https://ntia.gov/files/ntia/publications/sbom_minimum_elements_report.pdf",
     "https://www.fcc.gov/",
     "https://gdpr-info.eu/",
-    "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/ism"
+    "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/ism",
+    "https://sourceforge.net/p/linux-ima/wiki/Home",
+    "https://sourceforge.net/projects/tboot/"
 ]

 linkcheck_timeout = 10

From 9042d191070d74851352c658d5d37756c0bd6c8b Mon Sep 17 00:00:00 2001
From: Petar Torre
Date: Wed, 20 Nov 2024 11:33:52 +0100
Subject: [PATCH 06/32] Update chapter02.rst

To pass action fail
https://github.com/anuket-project/RM/actions/runs/11930908117/job/33252646132
---
 doc/ref_model/chapters/chapter02.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/ref_model/chapters/chapter02.rst b/doc/ref_model/chapters/chapter02.rst
index ac54718e..592cdcb9 100644
--- a/doc/ref_model/chapters/chapter02.rst
+++ b/doc/ref_model/chapters/chapter02.rst
@@ -456,7 +456,7 @@ follows (see also :numref:`Infrastructure profiles proposed based on VNFs catego

 - **Basic**: this is for workloads that can tolerate resource over-subscription and variable latency.
 - **High-performance**: this is for workloads that require predictable computing performance, high network throughput,
-and low network latency.
+  and low network latency.

 .. figure:: ../figures/RM-ch02-node-profiles.png
    :alt: Infrastructure profiles based on the categorisation of the VNFs
@@ -483,7 +483,7 @@ profiles. The **profile extensions** are detailed in the following table.
 | name              |                         | the basic     | the high-     |                        |               |
 |                   |                         | profile       | performance   |                        |               |
 |                   |                         |               | profile       |                        |               |
-+-------------------+-------------------------+---------------+---------------+------------------------+---------------+
++===================+=========================+===============+===============+========================+===============+
 | Compute-intensive | compute-high-perf-cpu   | ❌            | ✅            | Nodes that have        | May use       |
 | high-performance  |                         |               |               | predictable computing  | vanilla       |
 | CPU               |                         |               |               | performance and higher | VIM/K8S       |

From 94b33567c65dd3c48a20e7acdcae7445cd3cdfba Mon Sep 17 00:00:00 2001
From: Petar Torre
Date: Thu, 21 Nov 2024 19:47:52 +0100
Subject: [PATCH 07/32] Table 2-1 in new format

---
 doc/ref_model/chapters/chapter02.rst | 207 +++++++++++++--------------
 1 file changed, 100 insertions(+), 107 deletions(-)

diff --git a/doc/ref_model/chapters/chapter02.rst b/doc/ref_model/chapters/chapter02.rst
index 592cdcb9..cc8f68b1 100644
--- a/doc/ref_model/chapters/chapter02.rst
+++ b/doc/ref_model/chapters/chapter02.rst
@@ -478,113 +478,106 @@ Profile extensions are intended to be used as labels for infrastructure. They id
 special capabilities that go beyond the profile baseline. Certain profile extensions may only be relevant for some
 profiles. The **profile extensions** are detailed in the following table.
-+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Profile extension | Mnemonic | Applicable to | Applicable to | Description | Notes | -| name | | the basic | the high- | | | -| | | profile | performance | | | -| | | | profile | | | -+===================+=========================+===============+===============+========================+===============+ -| Compute-intensive | compute-high-perf-cpu | ❌ | ✅ | Nodes that have | May use | -| high-performance | | | | predictable computing | vanilla | -| CPU | | | | performance and higher | VIM/K8S | -| | | | | clock speeds. | scheduling | -| | | | | | instead. | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Storage-intensive | storage-high-perf | ❌ | ✅ | Nodes that have low | | -| high-performance | | | | storage latency or | | -| storage | | | | high storage IOPS, or | | -| | | | | both. | | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Compute-intensive | compute-high-memory | ❌ | ✅ | Nodes that have high | May use | -| high memory | | | | amounts of RAM. | vanilla | -| | | | | | VIM/K8S | -| | | | | | scheduling | -| | | | | | instead. | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Compute-intensive | compute-gpu | ❌ | ✅ | For compute-intensive | May use node | -| GPU | | | | workloads that | feature | -| | | | | require GPU compute | discovery. | -| | | | | resources on the node. 
| | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Network-intensive | high-speed-network | ❌ | ✅ | Denotes the presence | | -| high-speed | | | | of network links (to | | -| network (25G) | | | | the DC network) with a | | -| | | | | speed of 25 Gbps or | | -| | | | | greater on the node. | | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Network-intensive | very-high-speed-network | ❌ | ✅ | Denotes the presence | | -| very-high-speed | | | | of network links (to | | -| network (100G) | | | | the DC network) with a | | -| | | | | speed of 100 Gbps or | | -| | | | | greater on the node. | | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Low latency Edge | low-latency-edge | ✅ | ✅ | Labels a host/node as | | -| sites | | | | located in an Edge | | -| | | | | site, for workloads | | -| | | | | requiring low latency | | -| | | | | (specify value), to | | -| | | | | final users or | | -| | | | | geographical | | -| | | | | distribution. | | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Very low latency | very-low-latency-edge | ✅ | ✅ | Labels a host/node as | | -| Edge sites | | | | located in an Edge | | -| | | | | site, for workloads | | -| | | | | requiring low latency | | -| | | | | (specify value), to | | -| | | | | final users or | | -| | | | | geographical | | -| | | | | distribution. 
| | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Ultra low latency | ultra-low-latency-edge | ✅ | ✅ | Labels a host/node as | | -| Edge sites | | | | located in an Edge | | -| | | | | site, for workloads | | -| | | | | requiring low latency | | -| | | | | (specify value), to | | -| | | | | final users or | | -| | | | | geographical | | -| | | | | distribution. | | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Real-time and | rt-tsn | ❌ | ✅ | Labels a host/node | For example, | -| time-sensitive | | | | configured for Real- | nodes to run | -| networking - RAN | | | | -Time predictability | vDU | -| cell sites | | | | and Time Sensitive | | -| | | | | Networking | | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Fixed-function | compute-ffa | ❌ | ✅ | Labels a host/node | | -| accelerator | | | | that includes a | | -| | | | | consumable fixed- | | -| | | | | function accelerator | | -| | | | | (non-programmable, | | -| | | | | such as a Crypto- or | | -| | | | | vRAN-specific | | -| | | | | adapter). | | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| Firmware- | compute-fpga | ❌ | ✅ | Labels a host/node | | -| programmable | | | | that includes a | | -| adapter | | | | consumable | | -| | | | | firmware-programmable | | -| | | | | adapter (programmable, | | -| | | | | such as a network/ | | -| | | | | storage FPGA with a | | -| | | | | programmable part of | | -| | | | | the firmware image). 
| | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| SmartNIC enabled | network-smartnic | ❌ | ✅ | Labels a host/node | | -| | | | | that includes a | | -| | | | | programmable | | -| | | | | accelerator for | | -| | | | | vSwitch/vRouter, | | -| | | | | network function, | | -| | | | | and/or hardware | | -| | | | | infrastructure. | | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ -| SmartSwitch- | network-smartswitch | ❌ | ✅ | Labels a host/node | | -| enabled | | | | that is connected to a | | -| | | | | programmable switch | | -| | | | | fabric or a TOR | | -| | | | | switch. | | -+-------------------+-------------------------+---------------+---------------+------------------------+---------------+ +.. list-table:: Profile extensions + :widths: 20 25 10 10 25 10 + :header-rows: 1 + + * - Profile extension name + - Mnemonic + - Applicable to the basic profile + - Applicable to the high-performance profile + - Description + - Notes + * - Compute-intensive high-performance CPU + - compute-high-perf-cpu + - ❌ + - ✅ + - Nodes that have predictable computing performance and higher clock speeds. + - May use vanilla VIM/K8S scheduling instead. + * - Storage-intensive high-performance storage + - storage-high-perf + - ❌ + - ✅ + - Nodes that have low storage latency or high storage IOPS, or both. + - + * - Compute-intensive high memory + - compute-high-memory + - ❌ + - ✅ + - Nodes that have high amounts of RAM. + - May use vanilla VIM/K8S scheduling instead. + * - Compute-intensive GPU + - compute-gpu + - ❌ + - ✅ + - For compute-intensive workloads that require GPU compute resources on the node. + - May use Node Feature Discovery. + * - Network-intensive high-speed network (25G) + - high-speed-network + - ❌ + - ✅ + - Denotes the presence of network links (to the DC network) with a speed of 25 Gbps or greater on the node. 
+ - + * - Network-intensive very-high-speed network (100G) + - very-high-speed-network + - ❌ + - ✅ + - Denotes the presence of network links (to the DC network) with a speed of 100 Gbps or greater on the node. + - + * - Low latency Edge sites + - low-latency-edge + - ✅ + - ✅ + - Labels a host/node as located in an Edge site, for workloads requiring low latency (specify value), to final + users or geographical distribution. + - + * - Very low latency Edge sites + - very-low-latency-edge + - ✅ + - ✅ + - Labels a host/node as located in an Edge site, for workloads requiring low latency (specify value), to final + users or geographical distribution. + - + * - Ultra low latency Edge sites + - ultra-low-latency-edge + - ✅ + - ✅ + - Labels a host/node as located in an Edge site, for workloads requiring low latency (specify value), to final + users or geographical distribution. + - + * - Real-time and time-sensitive networking - RAN cell sites + - rt-tsn + - ❌ + - ✅ + - Labels a host/node configured for Real-Time predictability and Time Sensitive Networking + - For example, nodes to run vDU. + * - Fixed-function accelerator + - compute-ffa + - ❌ + - ✅ + - Labels a host/node that includes a consumable fixed-function accelerator (non-programmable, such as a Crypto- + or vRAN-specific adapter). + - + * - Firmware-programmable adapter + - compute-fpga + - ❌ + - ✅ + - Labels a host/node that includes a consumable firmware-programmable adapter (programmable, such as a + network/storage FPGA with a programmable part of the firmware image). + - + * - SmartNIC enabled + - network-smartnic + - ❌ + - ✅ + - Labels a host/node that includes a programmable accelerator for vSwitch/vRouter, network function, and/or + hardware infrastructure. + - + * - SmartSwitch-enabled + - network-smartswitch + - ❌ + - ✅ + - Labels a host/node that is connected to a programmable switch fabric or a TOR switch. 
+ - **Table 2-1:** Profile extensions From b925bc295086ec5f1da4d6562ec7336852a00100 Mon Sep 17 00:00:00 2001 From: Gergely Csatari Date: Thu, 21 Nov 2024 21:29:12 +0200 Subject: [PATCH 08/32] Fix build errors - Ignoring non working link - Fixing references - Remving debug logs from togsma.py Signed-off-by: Gergely Csatari --- doc/ref_model/chapters/chapter02.rst | 2 +- doc/ref_model/chapters/chapter04.rst | 4 ++-- doc/ref_model/chapters/chapter05.rst | 4 ++-- doc/ref_model/conf.py | 3 ++- doc/ref_model/togsma.py | 2 +- 5 files changed, 8 insertions(+), 7 deletions(-) diff --git a/doc/ref_model/chapters/chapter02.rst b/doc/ref_model/chapters/chapter02.rst index cc8f68b1..acb59ae7 100644 --- a/doc/ref_model/chapters/chapter02.rst +++ b/doc/ref_model/chapters/chapter02.rst @@ -452,7 +452,7 @@ Profiles (top-level partitions) Based on the analysis in Profiles, profile extensions, and flavours, the following cloud infrastructure profiles are as -follows (see also :numref:`Infrastructure profiles proposed based on VNFs categorisation`): +follows (see also :numref:`Infrastructure profiles based on the categorisation of the VNFs`): - **Basic**: this is for workloads that can tolerate resource over-subscription and variable latency. - **High-performance**: this is for workloads that require predictable computing performance, high network throughput, diff --git a/doc/ref_model/chapters/chapter04.rst b/doc/ref_model/chapters/chapter04.rst index 6f0d05c5..cc64713b 100644 --- a/doc/ref_model/chapters/chapter04.rst +++ b/doc/ref_model/chapters/chapter04.rst @@ -615,7 +615,7 @@ extensions to build its overall functionality, as discussed below. Cloud infrastructure profiles -The two :ref:`chapters/chapter02:profiles, profile extensions & flavours` are as follows: +The two :ref:`profiles-profile-extensions--flavours` are as follows: :: @@ -631,7 +631,7 @@ capabilities. The Cloud Infrastructure will have nodes configured as with option storage extensions, and acceleration extensions. 
The justification for defining these two profiles, and a set of extensible profile extensions, is provided in the -section :ref:`chapters/chapter02:profiles, profile extensions & flavours`. It includes the following: +section :ref:`profiles-profile-extensions--flavours`. It includes the following: - Workloads can be deployed by requesting compute hosts configured according to a specific profile (basic or high performance). diff --git a/doc/ref_model/chapters/chapter05.rst b/doc/ref_model/chapters/chapter05.rst index 70e1d255..3ede30a4 100644 --- a/doc/ref_model/chapters/chapter05.rst +++ b/doc/ref_model/chapters/chapter05.rst @@ -1,7 +1,7 @@ Feature Set and Requirements from Infrastructure ================================================ -A profile :ref:`chapters/chapter02:profiles, profile extensions & flavours` specifies the configuration of a +A profile :ref:`profiles-profile-extensions--flavours` specifies the configuration of a Cloud Infrastructure node (host or server). :ref:`chapters/chapter02:profile extensions (specialisations)` may specify additional configurations. Workloads use profiles to describe the configuration of nodes on which they can be hosted to execute on. Workload flavours provide a mechanism to specify the VM or Pod sizing information to host @@ -571,7 +571,7 @@ as accelerators, the underlay networking, and storage. This chapter defines a simplified host, profile, and related capabilities model associated with each of the different Cloud Infrastructure hardware profile and related capabilities. The two -:ref:`chapters/chapter02:profiles, profile extensions & flavours` (also known as host profiles, node profiles, and +:ref:`profiles-profile-extensions--flavours` (also known as host profiles, node profiles, and hardware profiles), and some of their associated capabilities, are shown in :numref:`Cloud Infrastructure Hardware Profiles and host-associated capabilities`. 
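An aside on the cross-reference rework above: replacing the autosectionlabel form ``:ref:`chapters/chapter02:profiles, profile extensions & flavours``` with ``:ref:`profiles-profile-extensions--flavours``` presupposes an explicit label placed directly above the target section in chapter02.rst. The following is an illustrative sketch of that pattern only; the heading text is inferred from the old cross-reference, and the label name is taken from the diff above.

```rst
.. _profiles-profile-extensions--flavours:

Profiles, profile extensions & flavours
---------------------------------------

Elsewhere in the documentation, the section can then be linked with
:ref:`profiles-profile-extensions--flavours`. An explicit label survives
file moves and heading edits, whereas the autosectionlabel form depends on
the file path and the exact heading text.
```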
diff --git a/doc/ref_model/conf.py b/doc/ref_model/conf.py
index 5a4b526e..9a298fb3 100644
--- a/doc/ref_model/conf.py
+++ b/doc/ref_model/conf.py
@@ -34,7 +34,8 @@
     "https://gdpr-info.eu/",
     "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/ism",
     "https://sourceforge.net/p/linux-ima/wiki/Home",
-    "https://sourceforge.net/projects/tboot/"
+    "https://sourceforge.net/projects/tboot/",
+    "https://www.cisa.gov/news-events/news/apache-log4j-vulnerability-guidance"
 ]

 linkcheck_timeout = 10

diff --git a/doc/ref_model/togsma.py b/doc/ref_model/togsma.py
index ab444cca..3141ea21 100644
--- a/doc/ref_model/togsma.py
+++ b/doc/ref_model/togsma.py
@@ -114,7 +114,7 @@ def warn(self, msg: str) -> None:
                     f"{bib_data.entries[key].fields['title']} `[{i}] <#references>`_")
             else:
-                print(f"ref: changing {bib_data.entries[key].key} to {bib_data.entries[key].fields['url']}")
+                # print(f"ref: changing {bib_data.entries[key].key} to {bib_data.entries[key].fields['url']}")
                 filedata = filedata.replace(
                     f":cite:p:`{bib_data.entries[key].key}`",
                     f"`[{i}] <{bib_data.entries[key].fields['url']}>`__")

From b4566f53c150eae29d62cafebae9876b5b750fcc Mon Sep 17 00:00:00 2001
From: Petar Torre
Date: Thu, 28 Nov 2024 12:05:22 +0100
Subject: [PATCH 09/32] Tables 3-6, 3-9 and 3-10

- fixed formatting of 3-6 and 3-9
- added number for table 3-10 and fixed formatting
- changed number for previous 3-10 to 3-11
---
 doc/ref_model/chapters/chapter03.rst | 574 ++++++++++++++------------
 1 file changed, 300 insertions(+), 274 deletions(-)

diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst
index 63794ec0..37dd453c 100644
--- a/doc/ref_model/chapters/chapter03.rst
+++ b/doc/ref_model/chapters/chapter03.rst
@@ -796,28 +796,34 @@ below.

 The table below highlights areas under which common SFC functional components can be categorized.
- -+------------+------------------------+--------------------------------------------------------------------------------+ -| Components | Example | Responsibilities | -+============+========================+================================================================================+ -| Management | ``SFC orchestrator`` | High Level of orchestrator Orchestrate the SFC based on SFC Models/Policies | -| | | with help of control components. | -| +------------------------+--------------------------------------------------------------------------------+ -| | ``SFC OAM Components`` | Responsible for SFC OAM functions | -| +------------------------+--------------------------------------------------------------------------------+ -| | ``VNF MANO`` | NFVO, VNFM, and VIM Responsible for SFC Data components lifecycle | -| +------------------------+--------------------------------------------------------------------------------+ -| | ``CNF MANO`` | CNF DevOps Components Responsible for SFC data components lifecycle | -+------------+------------------------+--------------------------------------------------------------------------------+ -| Control | ``SFC SDN Controller`` | SDNC responsible to create the service specific overlay network. Deploy | -| | | different techniques to stitch the wiring but provide the same functionality, | -| | | for example l2xconn, SRv6 , Segment routing etc. | -| +------------------------+--------------------------------------------------------------------------------+ -| | ``SFC Renderer`` | Creates and wires ports/interfaces for SF data path | -+------------+------------------------+--------------------------------------------------------------------------------+ -| Data | ``Core Components``\ | Responsible for steering the traffic for intended service functionalities | -| | SF, SFF, SF Proxy | based on Policies | -+------------+------------------------+--------------------------------------------------------------------------------+ +.. 
list-table:: SFC Architecture Components + :widths: 10 20 90 + :header-rows: 1 + + * - Components + - Example + - Responsibilities + * - Management + - ``SFC orchestrator`` + - High Level of orchestrator Orchestrate the SFC based on SFC Models/Policies with help of control components. + * - + - ``SFC OAM Components`` + - Responsible for SFC OAM functions + * - + - ``VNF MANO`` + - NFVO, VNFM, and VIM Responsible for SFC Data components lifecycle + * - + - ``CNF MANO`` + - CNF DevOps Components Responsible for SFC data components lifecycle + * - Control + - ``SFC SDN Controller`` + - SDNC responsible to create the service specific overlay network. Deploy different techniques to stitch the wiring but provide the same functionality, for example l2xconn, SRv6, Segment routing etc. + * - + - ``SFC Renderer`` + - Creates and wires ports/interfaces for SF data path + * - Data + - ``Core Components``\ SF, SFF, SF Proxy + - Responsible for steering the traffic for intended service functionalities based on Policies **Table 3-6:** SFC Architecture Components @@ -1302,34 +1308,166 @@ Where: - "N" - No, not available - "NA" - Not Applicable for this Use Case / Stereotype -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| | Tenant / User | -+===============================+=====================================+======+======+======+============+============+========+=======+=====+======+========+ -| | Infra / Ctrl / Mgt | Platform Native | Shared File | Object | -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| Use Case | Stereotype | Boot | Ctrl | Mgt | Hypervisor | Container | Within | Cross | Ext | vNAS | Object | -| | | | | | Attached | Persistent | | | | | | 
-+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| Data-centre Storage | Dedicated Network Storage Appliance | Y | Y | Y | Y | Y | O | O | O | O | O | -| +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| | Dedicated Software Defined Storage | O | O | O | Y | Y | O | O | O | O | O | -| +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| | Traditional SAN | Y | Y | Y | N | N | N | N | N | N | N | -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| Satellite data-centre Storage | Small Software Defined Storage | O | O | O | Y | Y | O | O | O | O | O | -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| Small data-centre Storage | Converged Software Defined Storage | O | O | O | Y | Y | O | O | O | O | O | -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| Edge Cloud | Edge Cloud for VNF/CNF Storage | NA | O | NA | Y | Y | O | O | O | O | O | -| +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| | Edge Cloud for Apps Storage | NA | O | NA | Y | Y | O | O | O | O | Y | -| +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| | Edge Cloud for Content Mgt Storage | NA | O | NA | Y | Y | O | O | O | O | Y | 
-+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| Split Control/User Plane | Split Edge Ctrl Plane Storage | NA | N | NA | Y | Y | O | O | O | O | O | -| Edge Cloud +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| + Split Edge User Plane Storage + NA | N | NA | N | N | N | N | N | N | N | -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +.. list-table:: Storage Use Cases and Stereotypes + :widths: 30 30 5 5 5 10 10 7 7 5 5 7 + :header-rows: 3 + + * - + - + - + - + - + - Tenant / User + - + - + - + - + - + - + * - + - + - Infra / Ctrl / Mgmt + - + - + - Platform Native + - + - Shared File + - + - + - + - Object + * - Use Case + - Stereotype + - Boot + - Ctrl + - Mgt + - Hypervisor Attached + - Container Persistent + - Within + - Cross + - Ext + - vNAS + - Object + * - Data-centre Storage + - Dedicated Network Storage Appliance + - Y + - Y + - Y + - Y + - Y + - O + - O + - O + - O + - O + * - + - Dedicated Software Defined Storage + - O + - O + - O + - Y + - Y + - O + - O + - O + - O + - O + * - + - Traditional SAN + - Y + - Y + - Y + - N + - N + - N + - N + - N + - N + - N + * - Satellite data-centre Storage + - Small Software Defined Storage + - O + - O + - O + - Y + - Y + - O + - O + - O + - O + - O + * - Small data-centre Storage + - Converged Software Defined Storage + - O + - O + - O + - Y + - Y + - O + - O + - O + - O + - O + * - Edge Cloud + - Edge Cloud for VNF/CNF Storage + - NA + - O + - NA + - Y + - Y + - O + - O + - O + - O + - O + * - + - Edge Cloud for Apps Storage + - NA + - O + - NA + - Y + - Y + - O + - O + - O + - O + - Y + * - + - Edge Cloud for Content Mgt Storage + - NA + - O + - NA + - Y + - Y + - O + - O + - O + - O + - Y + * - Split Control/User 
Plane Edge Cloud + - Split Edge Ctrl Plane Storage + - NA + - N + - NA + - Y + - Y + - O + - O + - O + - O + - O + * - + - Split Edge User Plane Storage + - NA + - N + - NA + - N + - N + - N + - N + - N + - N + - N **Table 3-9:** Storage Use Cases and Stereotypes @@ -1339,229 +1477,117 @@ at inception. This will allow the right set of considerations to be addressed fo - for various use cases to meet functional and performance needs and - to avoid the need for significant rework of the storage solution and the likely ripple through impact on the broader Cloud Infrastructure. -The considerations will help to guide the build and deployment of the Storage solution for the various Use Cases and Stereotypes outlined in the summary table. - -+----+----+----+----------+-----------------------------------------------------------+ -| Use Case | Description | -+====+====+====+==========+===========================================================+ -| **Data-centre** | Provide a highly reliable and scalable storage capability | -| **Storage** | that has flexibility to meet diverse needs | -+----+----+----+----------+-----------------------------------------------------------+ -| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | -| | | Machine and Container life-cycle management and control) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Management Plane (Cloud | -| | | Infrastructure fault and performance management and | -| | | platform automation) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Tenant / User Plane | -+----+----+----+----------+-----------------------------------------------------------+ -| | General Considerations: What are the general considerations, irrespective of | -| | the deployment stereotype/technology used in the storage sub-system? 
| -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | Can storage support Virtual Machine (RA-1) and Container (RA-2) Hosting | -| | | cases from single instance? Noting that if you wish to have single | -| | | storage instance providing storage across multiple clusters and/or | -| | | availability zones within the same data-centre then this needs to be | -| | | factored into the underlay network design. | -+----+----+----+----------+-----------------------------------------------------------+ -| | 2 | Can the storage system support Live Migration/Multi-Attach within and | -| | | across Availability Zones (applicable to Virtual Machine hosting (RA-1)) | -| | | and how does the Cloud Infrastructure solution support migration of | -| | | Virtual Machines between availability zones in general? | -+----+----+----+----------+-----------------------------------------------------------+ -| | 3 | Can the storage system support the full range of Shared File Storage use | -| | | cases: including the ability to control how network exposed Share File | -| | | Storage is visible: Within Tenancy, Across Tenancy (noting that a Tenancy | -| | | can operate across availability zones) and Externally? | -+----+----+----+----------+-----------------------------------------------------------+ -| | 4 | Can the storage system support alternate performance tiers to allow | -| | | tenant selection of best Cost/Performance option? For very high | -| | | performance storage provision, meeting throughput and IOP needs can be | -| | | achieved by using: very high IOP flash storage, higher bandwidth | -| | | networking,performance optimised replication design and storage pool host | -| | | distribution, while achieving very low latency targets require careful | -| | | planning of underlay storage VLAN/switch networking. 
| -+----+----+----+----------+-----------------------------------------------------------+ -| | Specific Considerations: In selecting a particular stereotype/technology this | -| | can bring with it considerations that are specific to this choice | -+----+----+----+----------+-----------------------------------------------------------+ -| | Dedicated Software Defined Storage | -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | Need to establish the physical disk data layout / encoding scheme | -| | | choice, options could be: replication / mirroring of data across | -| | | multiple storage hosts or CRC-based redundancy management encoding | -| | | (such as "erasure encoding"). This typically has performance/cost | -| | | implications as replication has a lower performance impact, but | -| | | consumes larger number of physical disks. If using replication then | -| | | increasing the number of replicas provide greater data loss | -| | | prevention, but consumes more disk system backend network bandwidth, | -| | | with bandwidth need proportional to number of replicas. | -| +----+----------+-----------------------------------------------------------+ -| | 2 | In general with Software Defined Storage solution it is not | -| | | to use hardware RAID controllers, as this impacts the scope of | -| | | recovery on failure as the failed device replacement can only be | -| | | managed within the RAID volume that disk is part of. With Software | -| | | Defined Storage failure recovering can be managed within the host | -| | | that the disk failed in, but also across physical storage hosts. | -| +----+----------+-----------------------------------------------------------+ -| | 3 | Can storage be consumed optimally irrespective of whether this is at | -| | | Control, Management or Tenant / User Plane? 
Example is iSCSI/NFS, | -| | | which while available and providing a common technical capability, | -| | | does not provide best achievable performance. Best performance is | -| | | achieved using provided OS layer driver that matches the particular | -| | | software defined storage implementation (example is using RADOS | -| | | driver in Ceph case vs. Ceph ability to expose iSCSI). | -+----+----+----+----------+-----------------------------------------------------------+ -| | Dedicated Network Storage Appliance | -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | Macro choice is made based on vendor / model selection and | -| | | configuration choices available | -+----+----+----+----------+-----------------------------------------------------------+ -| | Traditional SAN | -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | This is generally made available via Fiber Channel Arbitrated Loop | -| | | (FC-AL)/SCSI connectivity and hence has a need for very specific | -| | | connectivity. To provide the features required for Cloud | -| | | Infrastructure (Shared File Storage, Object Storage and | -| | | Multi-tenancy support), a SAN storage systems needs to be augmented | -| | | with other gateway/s to provide an IP Network consumable capability. | -| | | This is often seen with current deployments where NFS/CIFS (NAS) | -| | | Gateway is connected by FC-AL (for storage back-end) and IP Network | -| | | for Cloud Infrastructure consumption (front-end). This model helps | -| | | to extent use of SAN storage investment. NOTE: This applies to SANs | -| | | which use SAS/SATA physical disk devices, as direct connect FC-AL | -| | | disk devices are no longer manufactured. 
| -+----+----+----+----------+-----------------------------------------------------------+ -| **Satellite** | Satellite data-centre is a smaller regional deployment | -| **Data-centre Storage** | which has connectivity to and utilises resources | -| | available from the main Data-centre, so only provides | -| | support for subset of needs | -+----+----+----+----------+-----------------------------------------------------------+ -| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | -| | | Machine and Container life-cycle management and control) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Tenant/User Plane | -| +----+----+----------+-----------------------------------------------------------+ -| | General Considerations: What are the general considerations, irrespective | -| | of the deployment stereotype/technology used in storage sub-system? | -| +----+----+----------+-----------------------------------------------------------+ -| | 1 | Is there a need to support multiple clusters/availability zones at the | -| | | same site? If so then use "Data-Centre Storage" use case, otherwise, | -| | | consider how to put Virtual Machine & Container Hosting control plane | -| | | and Storage control plane on the same set of hosts to reduce footprint. | -| +----+----+----------+-----------------------------------------------------------+ -| | 2 | Can Shared File Storage establishment be avoided by using capabilities | -| | | provided by large Data-Centre Storage? | -| +----+----+----------+-----------------------------------------------------------+ -| | 3 | Can very large capacity storage needs be moved to larger Data-Centre | -| | | Storage capabilities? 
| -| +----+----+----------+-----------------------------------------------------------+ -| | Specific Considerations: In selecting a particular stereotype/technology this | -| | can bring with it considerations that are specific to this choice | -+----+----+----+----------+-----------------------------------------------------------+ -| | Small Software Defined Storage | -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | Leverage same technology as "Dedicated Software Defined Storage" | -| | | scenarios, but avoid/limit Infrastructure boot and Management plane | -| | | support and Network Storage support | -| +----+----------+-----------------------------------------------------------+ -| | 2 | Avoid having dedicated storage instance per cluster/availability | -| | | zone | -| +----+----------+-----------------------------------------------------------+ -| | 3 | Resilience through rapid rebuild (N + 1 failure scenario) | -+----+----+----+----------+-----------------------------------------------------------+ -| **Small Data-centre** | Small data-centre storage deployment is used in cases | -| **Storage** | where software-defined storage and virtual machine / | -| | container hosting are running on a converged | -| | infrastructure footprint with the aim of reducing the | -| | overall size of the platform. This solution behaves as a | -| | standalone Infrastructure Cloud platform. 
| -+----+----+----+----------+-----------------------------------------------------------+ -| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | -| | | Machine and Container life-cycle management and control) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Management Plane (Cloud | -| | | Infrastructure fault and performance management and | -| | | platform automation) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Tenant / User Plane | -| +----+----+----------+-----------------------------------------------------------+ -| | General Considerations: What are the general considerations, irrespective of | -| | the deployment stereotype/technology used in the storagesub-system? | -| +----+----+----------+-----------------------------------------------------------+ -| | 1 | Is there need to support multiple clusters / availability zones at same | -| | | site? See guidance for "Satellite Data-centre Storage" use case(1). | -| +----+----+----------+-----------------------------------------------------------+ -| | 2 | Is Shared File Storage required? Check sharing scope carefully as fully | -| | | virtualised NFs solution adds complexity and increases resources needs. | -| +----+----+----------+-----------------------------------------------------------+ -| | 3 | Is there need for large local capacity? With large capacity flash (15-30 | -| | | TB/device), the solution can hold significant storage capacity, but need | -| | | to consider carefully data loss prevention need and impact on | -| | | rebuilt/recovery times. 
| -| +----+----+----------+-----------------------------------------------------------+ -| | Specific Considerations: In selecting a particular stereotype/technology this | -| | can bring with it considerations that are specific to this choice | -+----+----+----+----------+-----------------------------------------------------------+ -| | Converged Software Defined Storage | -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | Leverage same technology as "Dedicated Software-Defined Storage" | -| | | scenarios, but on converged infrastructure. To meet capacity needs | -| | | provision three hosts for storage and the rest for virtual | -| | | infrastructure and storage control and management and tenant | -| | | workload hosting. | -| +----+----------+-----------------------------------------------------------+ -| | 2 | If the solution needs to host two clusters/availability zones then | -| | | have sharable storage instances. | -| +----+----------+-----------------------------------------------------------+ -| | 3 | Resilience through rapid rebuild (N + 0 or N + 1) | -+----+----+----+----------+-----------------------------------------------------------+ -| **Edge Cloud for App** | Support the deployment of Applications at the edge, which | -| **Storage** | tend to have greater storage needs than a network VNF/CNF | -+----+----+----+----------+-----------------------------------------------------------+ -| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | -| | | Machine and Container life-cycle management and control) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Tenant / User Plane - very limited | -| | | configuration storage | -+----+----+----+----------+-----------------------------------------------------------+ -| **Edge Cloud for** | Support the deployment of VNF / CNF at the edge. 
| -| **VNF/CNF Storage** | | -+----+----+----+----------+-----------------------------------------------------------+ -| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | -| | | Machine and Container life-cycle management and control) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Tenant / User Plane - limited | -| | | configuration storage | -+----+----+----+----------+-----------------------------------------------------------+ -| **Edge Cloud for** | Support the deployment of deployment of media content | -| **Content Storage** | cache at the edge. This is a very common Content | -| | Distribution Network (CDN) use case | -+----+----+----+----------+-----------------------------------------------------------+ -| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | -| | | Machine and Container life-cycle management and control) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Tenant / User Plane - Media Content | -| | | storage | -| +----+----+----------+-----------------------------------------------------------+ -| | General Considerations: What are the general considerations, irrespective of | -| | the deployment stereotype/technology used in the storage sub-system? 
| -| +----+----+----------+-----------------------------------------------------------+ -| | 1  | Consuming and exposing Object storage through Tenant application        | -| +----+----+----------+-----------------------------------------------------------+ -| | 2  | Use Embedded Shared File Storage for Control and Tenant Storage Needs   | -| +----+----+----------+-----------------------------------------------------------+ -| | Specific Considerations: In selecting a particular stereotype/technology this | -| | can bring with it considerations that are specific to this choice             | -| +----+----+----------+-----------------------------------------------------------+ -| | Embedded Shared File Storage                                                  | -+----+----+----+----------+-----------------------------------------------------------+ -|    | 1  | What is the best way to achieve some level of data resilience, while  | -|    |    | minimising required infrastructure? (i.e do not have luxury of        | -|    |    | having host (VMs) dedicated to supporting storage control and         | -|    |    | storage data needs)                                                   | -+----+----+----+----------+-----------------------------------------------------------+ +The considerations will help to guide the build and deployment of the Storage solution for the various Use Cases and Stereotypes outlined in Table 3-10. + +.. 
list-table:: Storage Considerations +   :widths: 20 20 60 +   :header-rows: 1 + +   * - Use Case +     - +     - Description +   * - **Data-centre Storage** +     - +     - Provide a highly reliable and scalable storage capability that has flexibility to meet diverse needs +   * - +     - Meets Needs of +     - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) +       - Cloud Infrastructure Management Plane (Cloud Infrastructure fault and performance management and platform automation) +       - Cloud Infrastructure Tenant / User Plane +   * - +     - General Considerations: What are the general considerations, irrespective of the deployment stereotype/technology used in the storage sub-system? +     - 1. Can storage support Virtual Machine (RA-1) and Container (RA-2) Hosting cases from a single instance? Noting that if you wish to have a single storage instance providing storage across multiple clusters and/or availability zones within the same data-centre then this needs to be factored into the underlay network design. +       2. Can the storage system support Live Migration/Multi-Attach within and across Availability Zones (applicable to Virtual Machine hosting (RA-1)) and how does the Cloud Infrastructure solution support migration of Virtual Machines between availability zones in general? +       3. Can the storage system support the full range of Shared File Storage use cases: including the ability to control how network-exposed Shared File Storage is visible: Within Tenancy, Across Tenancy (noting that a Tenancy can operate across availability zones) and Externally? +       4. Can the storage system support alternate performance tiers to allow tenant selection of the best Cost/Performance option? 
For very high performance storage provision, meeting throughput and IOP needs can be achieved by using: very high IOP flash storage, higher bandwidth networking, performance optimised replication design and storage pool host distribution, while achieving very low latency targets requires careful planning of the underlay storage VLAN/switch networking. +   * - +     - Specific Considerations: Selecting a particular stereotype/technology can bring with it considerations that are specific to that choice +     - +   * - +     - Dedicated Software Defined Storage +     - 1. Need to establish the physical disk data layout / encoding scheme choice; options could be: replication / mirroring of data across multiple storage hosts or CRC-based redundancy management encoding (such as "erasure coding"). This typically has performance/cost implications as replication has a lower performance impact, but consumes a larger number of physical disks. If using replication then increasing the number of replicas provides greater data loss prevention, but consumes more disk system backend network bandwidth, with the bandwidth need proportional to the number of replicas. +       2. In general with a Software Defined Storage solution it is not recommended to use hardware RAID controllers, as this impacts the scope of recovery on failure as the failed device replacement can only be managed within the RAID volume that the disk is part of. With Software Defined Storage, failure recovery can be managed within the host that the disk failed in, but also across physical storage hosts. +       3. Can storage be consumed optimally irrespective of whether this is at Control, Management or Tenant / User Plane? An example is iSCSI/NFS, which while available and providing a common technical capability, does not provide the best achievable performance. Best performance is achieved using the provided OS layer driver that matches the particular software defined storage implementation (an example is using the RADOS driver in the Ceph case vs. Ceph's ability to expose iSCSI). 
+   * - +     - Dedicated Network Storage Appliance +     - 1. Macro choice is made based on vendor / model selection and configuration choices available. +   * - +     - Traditional SAN +     - 1. This is generally made available via Fibre Channel Arbitrated Loop (FC-AL)/SCSI connectivity and hence has a need for very specific connectivity. To provide the features required for Cloud Infrastructure (Shared File Storage, Object Storage and Multi-tenancy support), a SAN storage system needs to be augmented with other gateways to provide an IP Network consumable capability. This is often seen with current deployments where an NFS/CIFS (NAS) Gateway is connected by FC-AL (for the storage back-end) and an IP Network for Cloud Infrastructure consumption (front-end). This model helps to extend the use of the SAN storage investment. NOTE: This applies to SANs which use SAS/SATA physical disk devices, as direct connect FC-AL disk devices are no longer manufactured. +   * - **Satellite Data-centre Storage** +     - +     - Satellite data-centre is a smaller regional deployment which has connectivity to and utilises resources available from the main Data-centre, so only provides support for a subset of needs +   * - +     - Meets Needs of +     - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) +       - Cloud Infrastructure Tenant/User Plane +   * - +     - General Considerations: What are the general considerations, irrespective of the deployment stereotype/technology used in the storage sub-system? +     - 1. Is there a need to support multiple clusters/availability zones at the same site? If so then use the "Data-Centre Storage" use case; otherwise, consider how to put the Virtual Machine & Container Hosting control plane and the Storage control plane on the same set of hosts to reduce the footprint. +       2. Can Shared File Storage establishment be avoided by using capabilities provided by large Data-Centre Storage? +       3. 
Can very large capacity storage needs be moved to larger Data-Centre Storage capabilities? +   * - +     - Specific Considerations: Selecting a particular stereotype/technology can bring with it considerations that are specific to that choice +     - +   * - +     - Small Software Defined Storage +     - 1. Leverage the same technology as the "Dedicated Software Defined Storage" scenarios, but avoid/limit Infrastructure boot and Management plane support and Network Storage support +       2. Avoid having a dedicated storage instance per cluster/availability zone +       3. Resilience through rapid rebuild (N + 1 failure scenario) +   * - **Small Data-centre Storage** +     - +     - Small data-centre storage deployment is used in cases where software-defined storage and virtual machine / container hosting are running on a converged infrastructure footprint with the aim of reducing the overall size of the platform. This solution behaves as a standalone Infrastructure Cloud platform. +   * - +     - Meets Needs of +     - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) +       - Cloud Infrastructure Management Plane (Cloud Infrastructure fault and performance management and platform automation) +       - Cloud Infrastructure Tenant / User Plane +   * - +     - General Considerations: What are the general considerations, irrespective of the deployment stereotype/technology used in the storage sub-system? +     - 1. Is there a need to support multiple clusters / availability zones at the same site? See the guidance for the "Satellite Data-centre Storage" use case (1). +       2. Is Shared File Storage required? Check the sharing scope carefully, as a fully virtualised NFs solution adds complexity and increases resource needs. +       3. Is there a need for large local capacity? With large capacity flash (15-30 TB/device), the solution can hold significant storage capacity, but the data loss prevention needs and the impact on rebuild/recovery times need to be considered carefully. 
+   * - +     - Specific Considerations: Selecting a particular stereotype/technology can bring with it considerations that are specific to that choice +     - +   * - +     - Converged Software Defined Storage +     - 1. Leverage the same technology as the "Dedicated Software-Defined Storage" scenarios, but on converged infrastructure. To meet capacity needs, provision three hosts for storage and the rest for virtual infrastructure, storage control and management, and tenant workload hosting. +       2. If the solution needs to host two clusters/availability zones then have sharable storage instances. +       3. Resilience through rapid rebuild (N + 0 or N + 1) +   * - **Edge Cloud for App Storage** +     - +     - Support the deployment of Applications at the edge, which tend to have greater storage needs than a network VNF/CNF +   * - +     - Meets Needs of +     - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) +       - Cloud Infrastructure Tenant / User Plane - very limited configuration storage +   * - **Edge Cloud for VNF/CNF Storage** +     - +     - Support the deployment of VNF / CNF at the edge. +   * - +     - Meets Needs of +     - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) +       - Cloud Infrastructure Tenant / User Plane - limited configuration storage +   * - **Edge Cloud for Content Storage** +     - +     - Support the deployment of a media content cache at the edge. This is a very common Content Distribution Network (CDN) use case +   * - +     - Meets Needs of +     - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) +       - Cloud Infrastructure Tenant / User Plane - Media Content storage +   * - +     - General Considerations: What are the general considerations, irrespective of the deployment stereotype/technology used in the storage sub-system? +     - 1. Consuming and exposing Object storage through the Tenant application +       2. 
Use Embedded Shared File Storage for Control and Tenant Storage Needs +   * - +     - Specific Considerations: Selecting a particular stereotype/technology can bring with it considerations that are specific to that choice +     - +   * - +     - Embedded Shared File Storage +     - 1. What is the best way to achieve some level of data resilience, while minimising required infrastructure? (i.e. do not have the luxury of having hosts (VMs) dedicated to supporting storage control and storage data needs) + +**Table 3-10:** Storage Considerations  The General Storage Model illustrates that at the bottom of any storage solution there is always the physical storage layer and a storage operating system of some sort. In a Cloud Infrastructure environment what is generally consumed is @@ -1673,7 +1699,7 @@ get activated, life cycle managed and supported in running infrastructure.  |                             |                             |                            | 3. Reprogrammable           | +-----------------------------+-----------------------------+----------------------------+-----------------------------+  -**Table 3-10:** Hardware acceleration categories, implementation, activation/LCM/support and usage +**Table 3-11:** Hardware acceleration categories, implementation, activation/LCM/support and usage  .. figure:: ../figures/ch03-examples-of-server-and-smartswitch-based-nodes.png    :alt: Examples of server- and SmartSwitch-based nodes (for illustration only)  From 799184024bd84a626615f7d01899a31d8c3c2688 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 15:58:14 +0100 Subject: [PATCH 10/32] unroll Table 3-9  previous commit results in rm.docx that cannot be opened by Word ---  doc/ref_model/chapters/chapter03.rst | 328 ++++++++++++++++++---------  1 file changed, 221 insertions(+), 107 deletions(-)  diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index 37dd453c..f624b3d2 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -1479,113 +1479,227 @@ at inception. 
This will allow the right set of considerations to be addressed fo  The considerations will help to guide the build and deployment of the Storage solution for the various Use Cases and Stereotypes outlined in Table 3-10.  -.. list-table:: Storage Considerations -   :widths: 20 20 60 -   :header-rows: 1 - -   * - Use Case -     - -     - Description -   * - **Data-centre Storage** -     - -     - Provide a highly reliable and scalable storage capability that has flexibility to meet diverse needs -   * - -     - Meets Needs of -     - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) -       - Cloud Infrastructure Management Plane (Cloud Infrastructure fault and performance management and platform automation) -       - Cloud Infrastructure Tenant / User Plane -   * - -     - General Considerations: What are the general considerations, irrespective of the deployment stereotype/technology used in the storage sub-system? -     - 1. Can storage support Virtual Machine (RA-1) and Container (RA-2) Hosting cases from a single instance? Noting that if you wish to have a single storage instance providing storage across multiple clusters and/or availability zones within the same data-centre then this needs to be factored into the underlay network design. -       2. Can the storage system support Live Migration/Multi-Attach within and across Availability Zones (applicable to Virtual Machine hosting (RA-1)) and how does the Cloud Infrastructure solution support migration of Virtual Machines between availability zones in general? -       3. Can the storage system support the full range of Shared File Storage use cases: including the ability to control how network-exposed Shared File Storage is visible: Within Tenancy, Across Tenancy (noting that a Tenancy can operate across availability zones) and Externally? -       4. Can the storage system support alternate performance tiers to allow tenant selection of the best Cost/Performance option? 
For very high performance storage provision, meeting throughput and IOP needs can be achieved by using: very high IOP flash storage, higher bandwidth networking,performance optimised replication design and storage pool host distribution, while achieving very low latency targets require careful planning of underlay storage VLAN/switch networking. - * - - - Specific Considerations: In selecting a particular stereotype/technology this can bring with it considerations that are specific to this choice - - - * - - - Dedicated Software Defined Storage - - 1. Need to establish the physical disk data layout / encoding scheme choice, options could be: replication / mirroring of data across multiple storage hosts or CRC-based redundancy management encoding (such as "erasure encoding"). This typically has performance/cost implications as replication has a lower performance impact, but consumes larger number of physical disks. If using replication then increasing the number of replicas provide greater data loss prevention, but consumes more disk system backend network bandwidth, with bandwidth need proportional to number of replicas. - 2. In general with Software Defined Storage solution it is not to use hardware RAID controllers, as this impacts the scope of recovery on failure as the failed device replacement can only be managed within the RAID volume that disk is part of. With Software Defined Storage failure recovering can be managed within the host that the disk failed in, but also across physical storage hosts. - 3. Can storage be consumed optimally irrespective of whether this is at Control, Management or Tenant / User Plane? Example is iSCSI/NFS, which while available and providing a common technical capability, does not provide best achievable performance. Best performance is achieved using provided OS layer driver that matches the particular software defined storage implementation (example is using RADOS driver in Ceph case vs. Ceph ability to expose iSCSI). 
- * - - - Dedicated Network Storage Appliance - - 1. Macro choice is made based on vendor / model selection and configuration choices available. - * - - - Traditional SAN - - 1. This is generally made available via Fiber Channel Arbitrated Loop (FC-AL)/SCSI connectivity and hence has a need for very specific connectivity. To provide the features required for Cloud Infrastructure (Shared File Storage, Object Storage and Multi-tenancy support), a SAN storage systems needs to be augmented with other gateway/s to provide an IP Network consumable capability. This is often seen with current deployments where NFS/CIFS (NAS) Gateway is connected by FC-AL (for storage back-end) and IP Network for Cloud Infrastructure consumption (front-end). This model helps to extent use of SAN storage investment. NOTE: This applies to SANs which use SAS/SATA physical disk devices, as direct connect FC-AL disk devices are no longer manufactured. - * - **Satellite Data-centre Storage** - - - - Satellite data-centre is a smaller regional deployment which has connectivity to and utilises resources available from the main Data-centre, so only provides support for subset of needs - * - - - Meets Needs of - - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) - - Cloud Infrastructure Tenant/User Plane - * - - - General Considerations: What are the general considerations, irrespective of the deployment stereotype/technology used in storage sub-system? - - 1. Is there a need to support multiple clusters/availability zones at the same site? If so then use "Data-Centre Storage" use case, otherwise, consider how to put Virtual Machine & Container Hosting control plane and Storage control plane on the same set of hosts to reduce footprint. - 2. Can Shared File Storage establishment be avoided by using capabilities provided by large Data-Centre Storage? - 3. 
Can very large capacity storage needs be moved to larger Data-Centre Storage capabilities? - * - - - Specific Considerations: In selecting a particular stereotype/technology this can bring with it considerations that are specific to this choice - - - * - - - Small Software Defined Storage - - 1. Leverage same technology as "Dedicated Software Defined Storage" scenarios, but avoid/limit Infrastructure boot and Management plane support and Network Storage support - 2. Avoid having dedicated storage instance per cluster/availability zone - 3. Resilience through rapid rebuild (N + 1 failure scenario) - * - **Small Data-centre Storage** - - - - Small data-centre storage deployment is used in cases where software-defined storage and virtual machine / container hosting are running on a converged infrastructure footprint with the aim of reducing the overall size of the platform. This solution behaves as a standalone Infrastructure Cloud platform. - * - - - Meets Needs of - - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) - - Cloud Infrastructure Management Plane (Cloud Infrastructure fault and performance management and platform automation) - - Cloud Infrastructure Tenant / User Plane - * - - - General Considerations: What are the general considerations, irrespective of the deployment stereotype/technology used in the storagesub-system? - - 1. Is there need to support multiple clusters / availability zones at same site? See guidance for "Satellite Data-centre Storage" use case(1). - 2. Is Shared File Storage required? Check sharing scope carefully as fully virtualised NFs solution adds complexity and increases resources needs. - 3. Is there need for large local capacity? With large capacity flash (15-30 TB/device), the solution can hold significant storage capacity, but need to consider carefully data loss prevention need and impact on rebuilt/recovery times. 
- * - - - Specific Considerations: In selecting a particular stereotype/technology this can bring with it considerations that are specific to this choice - - - * - - - Converged Software Defined Storage - - 1. Leverage same technology as "Dedicated Software-Defined Storage" scenarios, but on converged infrastructure. To meet capacity needs provision three hosts for storage and the rest for virtual infrastructure and storage control and management and tenant workload hosting. - 2. If the solution needs to host two clusters/availability zones then have sharable storage instances. - 3. Resilience through rapid rebuild (N + 0 or N + 1) - * - **Edge Cloud for App Storage** - - - - Support the deployment of Applications at the edge, which tend to have greater storage needs than a network VNF/CNF - * - - - Meets Needs of - - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) - - Cloud Infrastructure Tenant / User Plane - very limited configuration storage - * - **Edge Cloud for VNF/CNF Storage** - - - - Support the deployment of VNF / CNF at the edge. - * - - - Meets Needs of - - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) - - Cloud Infrastructure Tenant / User Plane - limited configuration storage - * - **Edge Cloud for Content Storage** - - - - Support the deployment of deployment of media content cache at the edge. This is a very common Content Distribution Network (CDN) use case - * - - - Meets Needs of - - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) - - Cloud Infrastructure Tenant / User Plane - Media Content storage - * - - - General Considerations: What are the general considerations, irrespective of the deployment stereotype/technology used in the storage sub-system? - - 1. Consuming and exposing Object storage through Tenant application - 2. 
Use Embedded Shared File Storage for Control and Tenant Storage Needs - * - - - Specific Considerations: In selecting a particular stereotype/technology this can bring with it considerations that are specific to this choice - - - * - - - Embedded Shared File Storage - - 1. What is the best way to achieve some level of data resilience, while minimising required infrastructure? (i.e do not have luxury of having host (VMs) dedicated to supporting storage control and storage data needs) ++----+----+----+----------+-----------------------------------------------------------+ +| Use Case | Description | ++====+====+====+==========+===========================================================+ +| **Data-centre** | Provide a highly reliable and scalable storage capability | +| **Storage** | that has flexibility to meet diverse needs | ++----+----+----+----------+-----------------------------------------------------------+ +| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | +| | | Machine and Container life-cycle management and control) | +| | +-----------------------------------------------------------+ +| | | Cloud Infrastructure Management Plane (Cloud | +| | | Infrastructure fault and performance management and | +| | | platform automation) | +| | +-----------------------------------------------------------+ +| | | Cloud Infrastructure Tenant / User Plane | ++----+----+----+----------+-----------------------------------------------------------+ +| | General Considerations: What are the general considerations, irrespective of | +| | the deployment stereotype/technology used in the storage sub-system? | ++----+----+----+----------+-----------------------------------------------------------+ +| | 1 | Can storage support Virtual Machine (RA-1) and Container (RA-2) Hosting | +| | | cases from single instance? 
Noting that if you wish to have single |
+| | | storage instance providing storage across multiple clusters and/or |
+| | | availability zones within the same data-centre then this needs to be |
+| | | factored into the underlay network design. |
++----+----+----+----------+-----------------------------------------------------------+
+| | 2 | Can the storage system support Live Migration/Multi-Attach within and |
+| | | across Availability Zones (applicable to Virtual Machine hosting (RA-1)) |
+| | | and how does the Cloud Infrastructure solution support migration of |
+| | | Virtual Machines between availability zones in general? |
++----+----+----+----------+-----------------------------------------------------------+
+| | 3 | Can the storage system support the full range of Shared File Storage use |
+| | | cases, including the ability to control how network-exposed Shared File |
+| | | Storage is visible: Within Tenancy, Across Tenancy (noting that a Tenancy |
+| | | can operate across availability zones) and Externally? |
++----+----+----+----------+-----------------------------------------------------------+
+| | 4 | Can the storage system support alternate performance tiers to allow |
+| | | tenant selection of the best Cost/Performance option? For very high |
+| | | performance storage provision, meeting throughput and IOP needs can be |
+| | | achieved by using very high IOP flash storage, higher bandwidth |
+| | | networking, performance-optimised replication design and storage pool |
+| | | host distribution, while achieving very low latency targets requires |
+| | | careful planning of the underlay storage VLAN/switch networking. 
|
++----+----+----+----------+-----------------------------------------------------------+
+| | Specific Considerations: In selecting a particular stereotype/technology this |
+| | can bring with it considerations that are specific to this choice |
++----+----+----+----------+-----------------------------------------------------------+
+| | Dedicated Software Defined Storage |
++----+----+----+----------+-----------------------------------------------------------+
+| | 1 | Need to establish the physical disk data layout / encoding scheme |
+| | | choice; options include replication / mirroring of data across |
+| | | multiple storage hosts or CRC-based redundancy management encoding |
+| | | (such as "erasure coding"). This typically has performance/cost |
+| | | implications as replication has a lower performance impact, but |
+| | | consumes a larger number of physical disks. If using replication, |
+| | | increasing the number of replicas provides greater data loss |
+| | | prevention, but consumes more disk system backend network bandwidth, |
+| | | with the bandwidth need proportional to the number of replicas. |
+| +----+----------+-----------------------------------------------------------+
+| | 2 | In general, with a Software Defined Storage solution it is not |
+| | | recommended to use hardware RAID controllers, as this limits the |
+| | | scope of recovery on failure: a failed device replacement can only |
+| | | be managed within the RAID volume that the disk is part of. With |
+| | | Software Defined Storage, failure recovery can be managed within |
+| | | the host in which the disk failed, but also across physical |
+| | | storage hosts. |
+| +----+----------+-----------------------------------------------------------+
+| | 3 | Can storage be consumed optimally irrespective of whether this is at |
+| | | Control, Management or Tenant / User Plane? 
Example is iSCSI/NFS, |
+| | | which, while available and providing a common technical capability, |
+| | | does not provide the best achievable performance. Best performance is |
+| | | achieved using the provided OS-layer driver that matches the particular |
+| | | software defined storage implementation (an example is using the RADOS |
+| | | driver in the Ceph case vs. the Ceph ability to expose iSCSI). |
++----+----+----+----------+-----------------------------------------------------------+
+| | Dedicated Network Storage Appliance |
++----+----+----+----------+-----------------------------------------------------------+
+| | 1 | Macro choice is made based on the vendor / model selection and the |
+| | | configuration choices available. |
++----+----+----+----------+-----------------------------------------------------------+
+| | Traditional SAN |
++----+----+----+----------+-----------------------------------------------------------+
+| | 1 | This is generally made available via Fiber Channel Arbitrated Loop |
+| | | (FC-AL)/SCSI connectivity and hence has a need for very specific |
+| | | connectivity. To provide the features required for Cloud |
+| | | Infrastructure (Shared File Storage, Object Storage and |
+| | | Multi-tenancy support), a SAN storage system needs to be augmented |
+| | | with other gateways to provide an IP Network consumable capability. |
+| | | This is often seen with current deployments where an NFS/CIFS (NAS) |
+| | | Gateway is connected by FC-AL (for storage back-end) and IP Network |
+| | | for Cloud Infrastructure consumption (front-end). This model helps |
+| | | to extend use of SAN storage investment. NOTE: This applies to SANs |
+| | | which use SAS/SATA physical disk devices, as direct connect FC-AL |
+| | | disk devices are no longer manufactured. 
| ++----+----+----+----------+-----------------------------------------------------------+ +| **Satellite** | Satellite data-centre is a smaller regional deployment | +| **Data-centre Storage** | which has connectivity to and utilises resources | +| | available from the main Data-centre, so only provides | +| | support for subset of needs | ++----+----+----+----------+-----------------------------------------------------------+ +| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | +| | | Machine and Container life-cycle management and control) | +| | +-----------------------------------------------------------+ +| | | Cloud Infrastructure Tenant/User Plane | +| +----+----+----------+-----------------------------------------------------------+ +| | General Considerations: What are the general considerations, irrespective | +| | of the deployment stereotype/technology used in storage sub-system? | +| +----+----+----------+-----------------------------------------------------------+ +| | 1 | Is there a need to support multiple clusters/availability zones at the | +| | | same site? If so then use "Data-Centre Storage" use case, otherwise, | +| | | consider how to put Virtual Machine & Container Hosting control plane | +| | | and Storage control plane on the same set of hosts to reduce footprint. | +| +----+----+----------+-----------------------------------------------------------+ +| | 2 | Can Shared File Storage establishment be avoided by using capabilities | +| | | provided by large Data-Centre Storage? | +| +----+----+----------+-----------------------------------------------------------+ +| | 3 | Can very large capacity storage needs be moved to larger Data-Centre | +| | | Storage capabilities? 
| +| +----+----+----------+-----------------------------------------------------------+ +| | Specific Considerations: In selecting a particular stereotype/technology this | +| | can bring with it considerations that are specific to this choice | ++----+----+----+----------+-----------------------------------------------------------+ +| | Small Software Defined Storage | ++----+----+----+----------+-----------------------------------------------------------+ +| | 1 | Leverage same technology as "Dedicated Software Defined Storage" | +| | | scenarios, but avoid/limit Infrastructure boot and Management plane | +| | | support and Network Storage support | +| +----+----------+-----------------------------------------------------------+ +| | 2 | Avoid having dedicated storage instance per cluster/availability | +| | | zone | +| +----+----------+-----------------------------------------------------------+ +| | 3 | Resilience through rapid rebuild (N + 1 failure scenario) | ++----+----+----+----------+-----------------------------------------------------------+ +| **Small Data-centre** | Small data-centre storage deployment is used in cases | +| **Storage** | where software-defined storage and virtual machine / | +| | container hosting are running on a converged | +| | infrastructure footprint with the aim of reducing the | +| | overall size of the platform. This solution behaves as a | +| | standalone Infrastructure Cloud platform. 
|
++----+----+----+----------+-----------------------------------------------------------+
+| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual |
+| | | Machine and Container life-cycle management and control) |
+| | +-----------------------------------------------------------+
+| | | Cloud Infrastructure Management Plane (Cloud |
+| | | Infrastructure fault and performance management and |
+| | | platform automation) |
+| | +-----------------------------------------------------------+
+| | | Cloud Infrastructure Tenant / User Plane |
+| +----+----+----------+-----------------------------------------------------------+
+| | General Considerations: What are the general considerations, irrespective of |
+| | the deployment stereotype/technology used in the storage sub-system? |
+| +----+----+----------+-----------------------------------------------------------+
+| | 1 | Is there a need to support multiple clusters / availability zones at |
+| | | the same site? See guidance for the "Satellite Data-centre Storage" |
+| | | use case (1). |
+| +----+----+----------+-----------------------------------------------------------+
+| | 2 | Is Shared File Storage required? Check the sharing scope carefully, |
+| | | as a fully virtualised NFs solution adds complexity and increases |
+| | | resource needs. |
+| +----+----+----------+-----------------------------------------------------------+
+| | 3 | Is there a need for large local capacity? With large-capacity flash |
+| | | (15-30 TB/device), the solution can hold significant storage capacity, |
+| | | but data loss prevention and the impact on |
+| | | rebuild/recovery times. 
| +| +----+----+----------+-----------------------------------------------------------+ +| | Specific Considerations: In selecting a particular stereotype/technology this | +| | can bring with it considerations that are specific to this choice | ++----+----+----+----------+-----------------------------------------------------------+ +| | Converged Software Defined Storage | ++----+----+----+----------+-----------------------------------------------------------+ +| | 1 | Leverage same technology as "Dedicated Software-Defined Storage" | +| | | scenarios, but on converged infrastructure. To meet capacity needs | +| | | provision three hosts for storage and the rest for virtual | +| | | infrastructure and storage control and management and tenant | +| | | workload hosting. | +| +----+----------+-----------------------------------------------------------+ +| | 2 | If the solution needs to host two clusters/availability zones then | +| | | have sharable storage instances. | +| +----+----------+-----------------------------------------------------------+ +| | 3 | Resilience through rapid rebuild (N + 0 or N + 1) | ++----+----+----+----------+-----------------------------------------------------------+ +| **Edge Cloud for App** | Support the deployment of Applications at the edge, which | +| **Storage** | tend to have greater storage needs than a network VNF/CNF | ++----+----+----+----------+-----------------------------------------------------------+ +| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | +| | | Machine and Container life-cycle management and control) | +| | +-----------------------------------------------------------+ +| | | Cloud Infrastructure Tenant / User Plane - very limited | +| | | configuration storage | ++----+----+----+----------+-----------------------------------------------------------+ +| **Edge Cloud for** | Support the deployment of VNF / CNF at the edge. 
|
+| **VNF/CNF Storage** | |
++----+----+----+----------+-----------------------------------------------------------+
+| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual |
+| | | Machine and Container life-cycle management and control) |
+| | +-----------------------------------------------------------+
+| | | Cloud Infrastructure Tenant / User Plane - limited |
+| | | configuration storage |
++----+----+----+----------+-----------------------------------------------------------+
+| **Edge Cloud for** | Support the deployment of media content cache at the |
+| **Content Storage** | edge. This is a very common Content Distribution |
+| | Network (CDN) use case. |
++----+----+----+----------+-----------------------------------------------------------+
+| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual |
+| | | Machine and Container life-cycle management and control) |
+| | +-----------------------------------------------------------+
+| | | Cloud Infrastructure Tenant / User Plane - Media Content |
+| | | storage |
+| +----+----+----------+-----------------------------------------------------------+
+| | General Considerations: What are the general considerations, irrespective of |
+| | the deployment stereotype/technology used in the storage sub-system? 
| +| +----+----+----------+-----------------------------------------------------------+ +| | 1 | Consuming and exposing Object storage through Tenant application | +| +----+----+----------+-----------------------------------------------------------+ +| | 2 | Use Embedded Shared File Storage for Control and Tenant Storage Needs | +| +----+----+----------+-----------------------------------------------------------+ +| | Specific Considerations: In selecting a particular stereotype/technology this | +| | can bring with it considerations that are specific to this choice | +| +----+----+----------+-----------------------------------------------------------+ +| | Embedded Shared File Storage | ++----+----+----+----------+-----------------------------------------------------------+ +| | 1 | What is the best way to achieve some level of data resilience, while | +| | | minimising required infrastructure? (i.e do not have luxury of | +| | | having host (VMs) dedicated to supporting storage control and | +| | | storage data needs) | ++----+----+----+----------+-----------------------------------------------------------+ **Table 3-10:** Storage Considerations From 778c9f840d17487cec334f730e1a0ab97ed1bfb8 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 16:14:07 +0100 Subject: [PATCH 11/32] unroll Table 3-9 previous commit was to unroll Table 3-10 this is to unroll Table 3-9 --- doc/ref_model/chapters/chapter03.rst | 184 ++++----------------------- 1 file changed, 28 insertions(+), 156 deletions(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index f624b3d2..d16e1215 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -1312,162 +1312,34 @@ Where: :widths: 30 30 5 5 5 10 10 7 7 5 5 7 :header-rows: 3 - * - - - - - - - - - - - Tenant / User - - - - - - - - - - - - - * - - - - - Infra / Ctrl / Mgmt - - - - - - Platform Native - - - - Shared File - - - - - - - - Object - * - Use 
Case - - Stereotype - - Boot - - Ctrl - - Mgt - - Hypervisor Attached - - Container Persistent - - Within - - Cross - - Ext - - vNAS - - Object - * - Data-centre Storage - - Dedicated Network Storage Appliance - - Y - - Y - - Y - - Y - - Y - - O - - O - - O - - O - - O - * - - - Dedicated Software Defined Storage - - O - - O - - O - - Y - - Y - - O - - O - - O - - O - - O - * - - - Traditional SAN - - Y - - Y - - Y - - N - - N - - N - - N - - N - - N - - N - * - Satellite data-centre Storage - - Small Software Defined Storage - - O - - O - - O - - Y - - Y - - O - - O - - O - - O - - O - * - Small data-centre Storage - - Converged Software Defined Storage - - O - - O - - O - - Y - - Y - - O - - O - - O - - O - - O - * - Edge Cloud - - Edge Cloud for VNF/CNF Storage - - NA - - O - - NA - - Y - - Y - - O - - O - - O - - O - - O - * - - - Edge Cloud for Apps Storage - - NA - - O - - NA - - Y - - Y - - O - - O - - O - - O - - Y - * - - - Edge Cloud for Content Mgt Storage - - NA - - O - - NA - - Y - - Y - - O - - O - - O - - O - - Y - * - Split Control/User Plane Edge Cloud - - Split Edge Ctrl Plane Storage - - NA - - N - - NA - - Y - - Y - - O - - O - - O - - O - - O - * - - - Split Edge User Plane Storage - - NA - - N - - NA - - N - - N - - N - - N - - N - - N - - N ++-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +| | Tenant / User | ++===============================+=====================================+======+======+======+============+============+========+=======+=====+======+========+ +| | Infra / Ctrl / Mgt | Platform Native | Shared File | Object | ++-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +| Use Case | Stereotype | Boot | Ctrl | Mgt | Hypervisor | Container | Within | Cross | Ext | vNAS | Object | +| | | | | | Attached | Persistent | | 
| | | | ++-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +| Data-centre Storage | Dedicated Network Storage Appliance | Y | Y | Y | Y | Y | O | O | O | O | O | +| +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +| | Dedicated Software Defined Storage | O | O | O | Y | Y | O | O | O | O | O | +| +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +| | Traditional SAN | Y | Y | Y | N | N | N | N | N | N | N | ++-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +| Satellite data-centre Storage | Small Software Defined Storage | O | O | O | Y | Y | O | O | O | O | O | ++-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +| Small data-centre Storage | Converged Software Defined Storage | O | O | O | Y | Y | O | O | O | O | O | ++-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +| Edge Cloud | Edge Cloud for VNF/CNF Storage | NA | O | NA | Y | Y | O | O | O | O | O | +| +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +| | Edge Cloud for Apps Storage | NA | O | NA | Y | Y | O | O | O | O | Y | +| +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +| | Edge Cloud for Content Mgt Storage | NA | O | NA | Y | Y | O | O | O | O | Y | 
++-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +| Split Control/User Plane | Split Edge Ctrl Plane Storage | NA | N | NA | Y | Y | O | O | O | O | O | +| Edge Cloud +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +| + Split Edge User Plane Storage + NA | N | NA | N | N | N | N | N | N | N | ++-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ **Table 3-9:** Storage Use Cases and Stereotypes From e38fd87f23c715b828a1426fea8af1acfe6d855b Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 16:29:30 +0100 Subject: [PATCH 12/32] delete list-table before Table 3-9 ./chapters/chapter03.rst:1311: D000 The "list-table" directive is empty; content required. --- doc/ref_model/chapters/chapter03.rst | 4 ---- 1 file changed, 4 deletions(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index d16e1215..d530c71c 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -1308,10 +1308,6 @@ Where: - "N" - No, not available - "NA" - Not Applicable for this Use Case / Stereotype -.. 
list-table:: Storage Use Cases and Stereotypes - :widths: 30 30 5 5 5 10 10 7 7 5 5 7 - :header-rows: 3 - +-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ | | Tenant / User | +===============================+=====================================+======+======+======+============+============+========+=======+=====+======+========+ From 2accaa2b370c191d37caf3815d51f9b09f487c5b Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 16:38:34 +0100 Subject: [PATCH 13/32] Table 3-9 with header-rows = 2 --- doc/ref_model/chapters/chapter03.rst | 188 +++++++++++++++++++++++---- 1 file changed, 160 insertions(+), 28 deletions(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index d530c71c..a9b1bc59 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -1308,34 +1308,166 @@ Where: - "N" - No, not available - "NA" - Not Applicable for this Use Case / Stereotype -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| | Tenant / User | -+===============================+=====================================+======+======+======+============+============+========+=======+=====+======+========+ -| | Infra / Ctrl / Mgt | Platform Native | Shared File | Object | -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| Use Case | Stereotype | Boot | Ctrl | Mgt | Hypervisor | Container | Within | Cross | Ext | vNAS | Object | -| | | | | | Attached | Persistent | | | | | | -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| Data-centre Storage | Dedicated Network 
Storage Appliance | Y | Y | Y | Y | Y | O | O | O | O | O | -| +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| | Dedicated Software Defined Storage | O | O | O | Y | Y | O | O | O | O | O | -| +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| | Traditional SAN | Y | Y | Y | N | N | N | N | N | N | N | -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| Satellite data-centre Storage | Small Software Defined Storage | O | O | O | Y | Y | O | O | O | O | O | -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| Small data-centre Storage | Converged Software Defined Storage | O | O | O | Y | Y | O | O | O | O | O | -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| Edge Cloud | Edge Cloud for VNF/CNF Storage | NA | O | NA | Y | Y | O | O | O | O | O | -| +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| | Edge Cloud for Apps Storage | NA | O | NA | Y | Y | O | O | O | O | Y | -| +-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| | Edge Cloud for Content Mgt Storage | NA | O | NA | Y | Y | O | O | O | O | Y | -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| Split Control/User Plane | Split Edge Ctrl Plane Storage | NA | N | NA | Y | Y | O | O | O | O | O | -| Edge Cloud 
+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ -| + Split Edge User Plane Storage + NA | N | NA | N | N | N | N | N | N | N | -+-------------------------------+-------------------------------------+------+------+------+------------+------------+--------+-------+-----+------+--------+ +.. list-table:: Storage Use Cases and Stereotypes + :widths: 30 30 5 5 5 10 10 7 7 5 5 7 + :header-rows: 2 + + * - + - + - + - + - + - Tenant / User + - + - + - + - + - + - + * - + - + - Infra / Ctrl / Mgmt + - + - + - Platform Native + - + - Shared File + - + - + - + - Object + * - Use Case + - Stereotype + - Boot + - Ctrl + - Mgt + - Hypervisor Attached + - Container Persistent + - Within + - Cross + - Ext + - vNAS + - Object + * - Data-centre Storage + - Dedicated Network Storage Appliance + - Y + - Y + - Y + - Y + - Y + - O + - O + - O + - O + - O + * - + - Dedicated Software Defined Storage + - O + - O + - O + - Y + - Y + - O + - O + - O + - O + - O + * - + - Traditional SAN + - Y + - Y + - Y + - N + - N + - N + - N + - N + - N + - N + * - Satellite data-centre Storage + - Small Software Defined Storage + - O + - O + - O + - Y + - Y + - O + - O + - O + - O + - O + * - Small data-centre Storage + - Converged Software Defined Storage + - O + - O + - O + - Y + - Y + - O + - O + - O + - O + - O + * - Edge Cloud + - Edge Cloud for VNF/CNF Storage + - NA + - O + - NA + - Y + - Y + - O + - O + - O + - O + - O + * - + - Edge Cloud for Apps Storage + - NA + - O + - NA + - Y + - Y + - O + - O + - O + - O + - Y + * - + - Edge Cloud for Content Mgt Storage + - NA + - O + - NA + - Y + - Y + - O + - O + - O + - O + - Y + * - Split Control/User Plane Edge Cloud + - Split Edge Ctrl Plane Storage + - NA + - N + - NA + - Y + - Y + - O + - O + - O + - O + - O + * - + - Split Edge User Plane Storage + - NA + - N + - NA + - N + - N + - N + - N + - N + - N + - N **Table 3-9:** Storage Use Cases and Stereotypes From 
9bfc1e0bf92d4c8a3bd6442e536802757cc5c8ed Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 20:18:04 +0100 Subject: [PATCH 14/32] Update doc/ref_model/chapters/chapter03.rst Co-authored-by: Gergely Csatari --- doc/ref_model/chapters/chapter03.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index a9b1bc59..7612ef3a 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -806,7 +806,7 @@ The table below highlights areas under which common SFC functional components ca * - Management - ``SFC orchestrator`` - High Level of orchestrator Orchestrate the SFC based on SFC Models/Policies with help of control components. - * - + * - Management - ``SFC OAM Components`` - Responsible for SFC OAM functions * - From a53f50c8bfb70c0f696ffe36b9d5eb51a8eb98f7 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 20:18:11 +0100 Subject: [PATCH 15/32] Update doc/ref_model/chapters/chapter03.rst Co-authored-by: Gergely Csatari --- doc/ref_model/chapters/chapter03.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index 7612ef3a..513d8a73 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -809,7 +809,7 @@ The table below highlights areas under which common SFC functional components ca * - Management - ``SFC OAM Components`` - Responsible for SFC OAM functions - * - + * - Management - ``VNF MANO`` - NFVO, VNFM, and VIM Responsible for SFC Data components lifecycle * - From 068baeb697c24673af22255ef5c9b202d5b362fb Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 20:18:21 +0100 Subject: [PATCH 16/32] Update doc/ref_model/chapters/chapter03.rst Co-authored-by: Gergely Csatari --- doc/ref_model/chapters/chapter03.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff 
--git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index 513d8a73..18e75d19 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -812,7 +812,7 @@ The table below highlights areas under which common SFC functional components ca * - Management - ``VNF MANO`` - NFVO, VNFM, and VIM Responsible for SFC Data components lifecycle - * - + * - Management - ``CNF MANO`` - CNF DevOps Components Responsible for SFC data components lifecycle * - Control From 5b73cfff9595c1078bcac0c2cb059b00686e22a5 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 20:18:28 +0100 Subject: [PATCH 17/32] Update doc/ref_model/chapters/chapter03.rst Co-authored-by: Gergely Csatari --- doc/ref_model/chapters/chapter03.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index 18e75d19..aec7254d 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -818,7 +818,7 @@ The table below highlights areas under which common SFC functional components ca * - Control - ``SFC SDN Controller`` - SDNC responsible to create the service specific overlay network. Deploy different techniques to stitch the wiring but provide the same functionality, for example l2xconn, SRv6, Segment routing etc. 
- * - + * - Control - ``SFC Renderer`` - Creates and wires ports/interfaces for SF data path * - Data From efbda73ad666f7ed005628781381fbf41feb021d Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 20:18:36 +0100 Subject: [PATCH 18/32] Update doc/ref_model/chapters/chapter03.rst Co-authored-by: Gergely Csatari --- doc/ref_model/chapters/chapter03.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index aec7254d..c2f7b46e 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -1360,7 +1360,7 @@ Where: - O - O - O - * - + * - Data-centre Storage - Dedicated Software Defined Storage - O - O From 3a9ced10053f991497641c243449e530ee3e646e Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 20:18:47 +0100 Subject: [PATCH 19/32] Update doc/ref_model/chapters/chapter03.rst Co-authored-by: Gergely Csatari --- doc/ref_model/chapters/chapter03.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index c2f7b46e..a2dc65a8 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -1372,7 +1372,7 @@ Where: - O - O - O - * - + * - Data-centre Storage - Traditional SAN - Y - Y From 1c4973f72a1da31996c12ed0a65ba2c0f7f046d8 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 20:19:01 +0100 Subject: [PATCH 20/32] Update doc/ref_model/chapters/chapter03.rst Co-authored-by: Gergely Csatari --- doc/ref_model/chapters/chapter03.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index a2dc65a8..d7b5d6aa 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -1420,7 +1420,7 @@ Where: - O - O - O - * - + * - Edge Cloud - Edge Cloud for Apps Storage - NA 
- O From 68775b54efd795f3a28d8de809309a070fdf20d0 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 20:19:06 +0100 Subject: [PATCH 21/32] Update doc/ref_model/chapters/chapter03.rst Co-authored-by: Gergely Csatari --- doc/ref_model/chapters/chapter03.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index d7b5d6aa..cd3ec354 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -1456,7 +1456,7 @@ Where: - O - O - O - * - + * - Split Control/User Plane Edge Cloud - Split Edge User Plane Storage - NA - N From d58959df622f8ebbcca72b2b40e4373fdd128432 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 20:20:30 +0100 Subject: [PATCH 22/32] Update doc/ref_model/chapters/chapter03.rst Co-authored-by: Gergely Csatari --- doc/ref_model/chapters/chapter03.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index cd3ec354..edb14f6f 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -1432,7 +1432,7 @@ Where: - O - O - Y - * - + * - Edge Cloud - Edge Cloud for Content Mgt Storage - NA - O From 407f3cdad17818c78865094375de6775fbe5c3b5 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 20:43:42 +0100 Subject: [PATCH 23/32] Table 3-9 without empty cells in first two rows --- doc/ref_model/chapters/chapter03.rst | 52 ++++++++++++++-------------- 1 file changed, 26 insertions(+), 26 deletions(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index edb14f6f..640c45e2 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -1308,34 +1308,34 @@ Where: - "N" - No, not available - "NA" - Not Applicable for this Use Case / Stereotype +Columns relevant for Tenant / User are: + + - Platform 
Native + - Shared File + - Object + +Columns relevant for Infra / Ctrl / Mgmt are: + + - Boot + - Ctrl + - Mgt + +Columns relevant for Platform Native are: + + - Hypervisor Attached + - Container Persistent + +Columns relevant for Shared File are: + + - Within + - Cross + - Ext + - vNAS + .. list-table:: Storage Use Cases and Stereotypes :widths: 30 30 5 5 5 10 10 7 7 5 5 7 - :header-rows: 2 - - * - - - - - - - - - - - Tenant / User - - - - - - - - - - - - - * - - - - - Infra / Ctrl / Mgmt - - - - - - Platform Native - - - - Shared File - - - - - - - - Object + :header-rows: 1 + * - Use Case - Stereotype - Boot From 5e233aa4c5a056d37600ada1fd1ba5dd14debb72 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 21:19:33 +0100 Subject: [PATCH 24/32] fixed Table 3-10 --- doc/ref_model/chapters/chapter03.rst | 323 ++++++++------------------- 1 file changed, 88 insertions(+), 235 deletions(-) diff --git a/doc/ref_model/chapters/chapter03.rst b/doc/ref_model/chapters/chapter03.rst index 640c45e2..915a66d7 100644 --- a/doc/ref_model/chapters/chapter03.rst +++ b/doc/ref_model/chapters/chapter03.rst @@ -1308,29 +1308,27 @@ Where: - "N" - No, not available - "NA" - Not Applicable for this Use Case / Stereotype -Columns relevant for Tenant / User are: - - - Platform Native - - Shared File - - Object - Columns relevant for Infra / Ctrl / Mgmt are: - Boot - Ctrl - Mgt -Columns relevant for Platform Native are: +Columns relevant for Tenant / User are: + + - Platform Native: - - Hypervisor Attached - - Container Persistent + - Hypervisor Attached + - Container Persistent -Columns relevant for Shared File are: + - Shared File: - - Within - - Cross - - Ext - - vNAS + - Within + - Cross + - Ext + - vNAS + + - Object .. list-table:: Storage Use Cases and Stereotypes :widths: 30 30 5 5 5 10 10 7 7 5 5 7 @@ -1479,227 +1477,82 @@ at inception. 
This will allow the right set of considerations to be addressed fo The considerations will help to guide the build and deployment of the Storage solution for the various Use Cases and Stereotypes outlined in the Table 3-10. -+----+----+----+----------+-----------------------------------------------------------+ -| Use Case | Description | -+====+====+====+==========+===========================================================+ -| **Data-centre** | Provide a highly reliable and scalable storage capability | -| **Storage** | that has flexibility to meet diverse needs | -+----+----+----+----------+-----------------------------------------------------------+ -| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | -| | | Machine and Container life-cycle management and control) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Management Plane (Cloud | -| | | Infrastructure fault and performance management and | -| | | platform automation) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Tenant / User Plane | -+----+----+----+----------+-----------------------------------------------------------+ -| | General Considerations: What are the general considerations, irrespective of | -| | the deployment stereotype/technology used in the storage sub-system? | -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | Can storage support Virtual Machine (RA-1) and Container (RA-2) Hosting | -| | | cases from single instance? Noting that if you wish to have single | -| | | storage instance providing storage across multiple clusters and/or | -| | | availability zones within the same data-centre then this needs to be | -| | | factored into the underlay network design. 
| -+----+----+----+----------+-----------------------------------------------------------+ -| | 2 | Can the storage system support Live Migration/Multi-Attach within and | -| | | across Availability Zones (applicable to Virtual Machine hosting (RA-1)) | -| | | and how does the Cloud Infrastructure solution support migration of | -| | | Virtual Machines between availability zones in general? | -+----+----+----+----------+-----------------------------------------------------------+ -| | 3 | Can the storage system support the full range of Shared File Storage use | -| | | cases: including the ability to control how network exposed Share File | -| | | Storage is visible: Within Tenancy, Across Tenancy (noting that a Tenancy | -| | | can operate across availability zones) and Externally? | -+----+----+----+----------+-----------------------------------------------------------+ -| | 4 | Can the storage system support alternate performance tiers to allow | -| | | tenant selection of best Cost/Performance option? For very high | -| | | performance storage provision, meeting throughput and IOP needs can be | -| | | achieved by using: very high IOP flash storage, higher bandwidth | -| | | networking,performance optimised replication design and storage pool host | -| | | distribution, while achieving very low latency targets require careful | -| | | planning of underlay storage VLAN/switch networking. 
| -+----+----+----+----------+-----------------------------------------------------------+ -| | Specific Considerations: In selecting a particular stereotype/technology this | -| | can bring with it considerations that are specific to this choice | -+----+----+----+----------+-----------------------------------------------------------+ -| | Dedicated Software Defined Storage | -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | Need to establish the physical disk data layout / encoding scheme | -| | | choice, options could be: replication / mirroring of data across | -| | | multiple storage hosts or CRC-based redundancy management encoding | -| | | (such as "erasure encoding"). This typically has performance/cost | -| | | implications as replication has a lower performance impact, but | -| | | consumes larger number of physical disks. If using replication then | -| | | increasing the number of replicas provide greater data loss | -| | | prevention, but consumes more disk system backend network bandwidth, | -| | | with bandwidth need proportional to number of replicas. | -| +----+----------+-----------------------------------------------------------+ -| | 2 | In general with Software Defined Storage solution it is not | -| | | to use hardware RAID controllers, as this impacts the scope of | -| | | recovery on failure as the failed device replacement can only be | -| | | managed within the RAID volume that disk is part of. With Software | -| | | Defined Storage failure recovering can be managed within the host | -| | | that the disk failed in, but also across physical storage hosts. | -| +----+----------+-----------------------------------------------------------+ -| | 3 | Can storage be consumed optimally irrespective of whether this is at | -| | | Control, Management or Tenant / User Plane? 
Example is iSCSI/NFS, | -| | | which while available and providing a common technical capability, | -| | | does not provide best achievable performance. Best performance is | -| | | achieved using provided OS layer driver that matches the particular | -| | | software defined storage implementation (example is using RADOS | -| | | driver in Ceph case vs. Ceph ability to expose iSCSI). | -+----+----+----+----------+-----------------------------------------------------------+ -| | Dedicated Network Storage Appliance | -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | Macro choice is made based on vendor / model selection and | -| | | configuration choices available | -+----+----+----+----------+-----------------------------------------------------------+ -| | Traditional SAN | -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | This is generally made available via Fiber Channel Arbitrated Loop | -| | | (FC-AL)/SCSI connectivity and hence has a need for very specific | -| | | connectivity. To provide the features required for Cloud | -| | | Infrastructure (Shared File Storage, Object Storage and | -| | | Multi-tenancy support), a SAN storage systems needs to be augmented | -| | | with other gateway/s to provide an IP Network consumable capability. | -| | | This is often seen with current deployments where NFS/CIFS (NAS) | -| | | Gateway is connected by FC-AL (for storage back-end) and IP Network | -| | | for Cloud Infrastructure consumption (front-end). This model helps | -| | | to extent use of SAN storage investment. NOTE: This applies to SANs | -| | | which use SAS/SATA physical disk devices, as direct connect FC-AL | -| | | disk devices are no longer manufactured. 
| -+----+----+----+----------+-----------------------------------------------------------+ -| **Satellite** | Satellite data-centre is a smaller regional deployment | -| **Data-centre Storage** | which has connectivity to and utilises resources | -| | available from the main Data-centre, so only provides | -| | support for subset of needs | -+----+----+----+----------+-----------------------------------------------------------+ -| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | -| | | Machine and Container life-cycle management and control) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Tenant/User Plane | -| +----+----+----------+-----------------------------------------------------------+ -| | General Considerations: What are the general considerations, irrespective | -| | of the deployment stereotype/technology used in storage sub-system? | -| +----+----+----------+-----------------------------------------------------------+ -| | 1 | Is there a need to support multiple clusters/availability zones at the | -| | | same site? If so then use "Data-Centre Storage" use case, otherwise, | -| | | consider how to put Virtual Machine & Container Hosting control plane | -| | | and Storage control plane on the same set of hosts to reduce footprint. | -| +----+----+----------+-----------------------------------------------------------+ -| | 2 | Can Shared File Storage establishment be avoided by using capabilities | -| | | provided by large Data-Centre Storage? | -| +----+----+----------+-----------------------------------------------------------+ -| | 3 | Can very large capacity storage needs be moved to larger Data-Centre | -| | | Storage capabilities? 
| -| +----+----+----------+-----------------------------------------------------------+ -| | Specific Considerations: In selecting a particular stereotype/technology this | -| | can bring with it considerations that are specific to this choice | -+----+----+----+----------+-----------------------------------------------------------+ -| | Small Software Defined Storage | -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | Leverage same technology as "Dedicated Software Defined Storage" | -| | | scenarios, but avoid/limit Infrastructure boot and Management plane | -| | | support and Network Storage support | -| +----+----------+-----------------------------------------------------------+ -| | 2 | Avoid having dedicated storage instance per cluster/availability | -| | | zone | -| +----+----------+-----------------------------------------------------------+ -| | 3 | Resilience through rapid rebuild (N + 1 failure scenario) | -+----+----+----+----------+-----------------------------------------------------------+ -| **Small Data-centre** | Small data-centre storage deployment is used in cases | -| **Storage** | where software-defined storage and virtual machine / | -| | container hosting are running on a converged | -| | infrastructure footprint with the aim of reducing the | -| | overall size of the platform. This solution behaves as a | -| | standalone Infrastructure Cloud platform. 
| -+----+----+----+----------+-----------------------------------------------------------+ -| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | -| | | Machine and Container life-cycle management and control) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Management Plane (Cloud | -| | | Infrastructure fault and performance management and | -| | | platform automation) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Tenant / User Plane | -| +----+----+----------+-----------------------------------------------------------+ -| | General Considerations: What are the general considerations, irrespective of | -| | the deployment stereotype/technology used in the storagesub-system? | -| +----+----+----------+-----------------------------------------------------------+ -| | 1 | Is there need to support multiple clusters / availability zones at same | -| | | site? See guidance for "Satellite Data-centre Storage" use case(1). | -| +----+----+----------+-----------------------------------------------------------+ -| | 2 | Is Shared File Storage required? Check sharing scope carefully as fully | -| | | virtualised NFs solution adds complexity and increases resources needs. | -| +----+----+----------+-----------------------------------------------------------+ -| | 3 | Is there need for large local capacity? With large capacity flash (15-30 | -| | | TB/device), the solution can hold significant storage capacity, but need | -| | | to consider carefully data loss prevention need and impact on | -| | | rebuilt/recovery times. 
| -| +----+----+----------+-----------------------------------------------------------+ -| | Specific Considerations: In selecting a particular stereotype/technology this | -| | can bring with it considerations that are specific to this choice | -+----+----+----+----------+-----------------------------------------------------------+ -| | Converged Software Defined Storage | -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | Leverage same technology as "Dedicated Software-Defined Storage" | -| | | scenarios, but on converged infrastructure. To meet capacity needs | -| | | provision three hosts for storage and the rest for virtual | -| | | infrastructure and storage control and management and tenant | -| | | workload hosting. | -| +----+----------+-----------------------------------------------------------+ -| | 2 | If the solution needs to host two clusters/availability zones then | -| | | have sharable storage instances. | -| +----+----------+-----------------------------------------------------------+ -| | 3 | Resilience through rapid rebuild (N + 0 or N + 1) | -+----+----+----+----------+-----------------------------------------------------------+ -| **Edge Cloud for App** | Support the deployment of Applications at the edge, which | -| **Storage** | tend to have greater storage needs than a network VNF/CNF | -+----+----+----+----------+-----------------------------------------------------------+ -| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | -| | | Machine and Container life-cycle management and control) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Tenant / User Plane - very limited | -| | | configuration storage | -+----+----+----+----------+-----------------------------------------------------------+ -| **Edge Cloud for** | Support the deployment of VNF / CNF at the edge. 
| -| **VNF/CNF Storage** | | -+----+----+----+----------+-----------------------------------------------------------+ -| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | -| | | Machine and Container life-cycle management and control) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Tenant / User Plane - limited | -| | | configuration storage | -+----+----+----+----------+-----------------------------------------------------------+ -| **Edge Cloud for** | Support the deployment of deployment of media content | -| **Content Storage** | cache at the edge. This is a very common Content | -| | Distribution Network (CDN) use case | -+----+----+----+----------+-----------------------------------------------------------+ -| | Meets Needs of | Cloud Infrastructure Control Plane (tenant Virtual | -| | | Machine and Container life-cycle management and control) | -| | +-----------------------------------------------------------+ -| | | Cloud Infrastructure Tenant / User Plane - Media Content | -| | | storage | -| +----+----+----------+-----------------------------------------------------------+ -| | General Considerations: What are the general considerations, irrespective of | -| | the deployment stereotype/technology used in the storage sub-system? 
| -| +----+----+----------+-----------------------------------------------------------+ -| | 1 | Consuming and exposing Object storage through Tenant application | -| +----+----+----------+-----------------------------------------------------------+ -| | 2 | Use Embedded Shared File Storage for Control and Tenant Storage Needs | -| +----+----+----------+-----------------------------------------------------------+ -| | Specific Considerations: In selecting a particular stereotype/technology this | -| | can bring with it considerations that are specific to this choice | -| +----+----+----------+-----------------------------------------------------------+ -| | Embedded Shared File Storage | -+----+----+----+----------+-----------------------------------------------------------+ -| | 1 | What is the best way to achieve some level of data resilience, while | -| | | minimising required infrastructure? (i.e do not have luxury of | -| | | having host (VMs) dedicated to supporting storage control and | -| | | storage data needs) | -+----+----+----+----------+-----------------------------------------------------------+ +.. list-table:: Storage Considerations + :widths: 30 70 + :header-rows: 1 + + * - Use Case + - Description + * - **Data-centre Storage** + - Provide a highly reliable and scalable storage capability that has flexibility to meet diverse needs + * - Meets Needs of + - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) + - Cloud Infrastructure Management Plane (Cloud Infrastructure fault and performance management and platform automation) + - Cloud Infrastructure Tenant / User Plane + * - General Considerations: What are the general considerations, irrespective of the deployment stereotype/technology used in the storage sub-system? + - 1. Can storage support Virtual Machine (RA-1) and Container (RA-2) Hosting cases from single instance? 
Noting that if you wish to have a single storage instance providing storage across multiple clusters and/or availability zones within the same data-centre, then this needs to be factored into the underlay network design. + 2. Can the storage system support Live Migration/Multi-Attach within and across Availability Zones (applicable to Virtual Machine hosting (RA-1)) and how does the Cloud Infrastructure solution support migration of Virtual Machines between availability zones in general? + 3. Can the storage system support the full range of Shared File Storage use cases: including the ability to control how network-exposed Shared File Storage is visible: Within Tenancy, Across Tenancy (noting that a Tenancy can operate across availability zones) and Externally? + 4. Can the storage system support alternate performance tiers to allow tenant selection of the best Cost/Performance option? For very high performance storage provision, meeting throughput and IOP needs can be achieved by using: very high IOP flash storage, higher bandwidth networking, performance optimised replication design and storage pool host distribution, while achieving very low latency targets requires careful planning of underlay storage VLAN/switch networking. + * - Specific Considerations: In selecting a particular stereotype/technology this can bring with it considerations that are specific to this choice + - Dedicated Software Defined Storage: + 1. Need to establish the physical disk data layout / encoding scheme choice; options could be: replication / mirroring of data across multiple storage hosts or CRC-based redundancy management encoding (such as "erasure encoding"). This typically has performance/cost implications as replication has a lower performance impact, but consumes a larger number of physical disks. 
If using replication, then increasing the number of replicas provides greater data loss prevention, but consumes more disk system backend network bandwidth, with the bandwidth need proportional to the number of replicas. + 2. In general, with a Software Defined Storage solution it is not recommended to use hardware RAID controllers, as this impacts the scope of recovery on failure as the failed device replacement can only be managed within the RAID volume that the disk is part of. With Software Defined Storage, failure recovery can be managed within the host that the disk failed in, but also across physical storage hosts. + 3. Can storage be consumed optimally irrespective of whether this is at Control, Management or Tenant / User Plane? An example is iSCSI/NFS, which, while available and providing a common technical capability, does not provide the best achievable performance. Best performance is achieved using the provided OS layer driver that matches the particular software defined storage implementation (an example is using the RADOS driver in the Ceph case vs. Ceph's ability to expose iSCSI). + Dedicated Network Storage Appliance: + 1. The macro choice is made based on vendor / model selection and the configuration choices available + Traditional SAN: + 1. This is generally made available via Fibre Channel Arbitrated Loop (FC-AL)/SCSI connectivity and hence has a need for very specific connectivity. To provide the features required for Cloud Infrastructure (Shared File Storage, Object Storage and Multi-tenancy support), a SAN storage system needs to be augmented with other gateways to provide an IP Network consumable capability. This is often seen with current deployments where an NFS/CIFS (NAS) Gateway is connected by FC-AL (for the storage back-end) and IP Network for Cloud Infrastructure consumption (front-end). This model helps to extend the use of SAN storage investment. NOTE: This applies to SANs which use SAS/SATA physical disk devices, as direct connect FC-AL disk devices are no longer manufactured. 
+ + * - **Satellite Data-centre Storage** + - A satellite data-centre is a smaller regional deployment which has connectivity to, and utilises resources available from, the main Data-centre, so it only provides support for a subset of needs + * - Meets Needs of + - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) + - Cloud Infrastructure Tenant/User Plane + * - General Considerations: What are the general considerations, irrespective of the deployment stereotype/technology used in the storage sub-system? + - 1. Is there a need to support multiple clusters/availability zones at the same site? If so, then use the "Data-Centre Storage" use case; otherwise, consider how to put the Virtual Machine & Container Hosting control plane and the Storage control plane on the same set of hosts to reduce the footprint. + 2. Can the establishment of Shared File Storage be avoided by using capabilities provided by large Data-Centre Storage? + 3. Can very large capacity storage needs be moved to larger Data-Centre Storage capabilities? + * - Specific Considerations: Selecting a particular stereotype/technology can bring with it considerations that are specific to this choice + - Small Software Defined Storage: + 1. Leverage the same technology as in the "Dedicated Software Defined Storage" scenarios, but avoid/limit Infrastructure boot and Management plane support and Network Storage support + 2. Avoid having a dedicated storage instance per cluster/availability zone + 3. Resilience through rapid rebuild (N + 1 failure scenario) + * - **Small Data-centre Storage** + - A small data-centre storage deployment is used in cases where software-defined storage and virtual machine / container hosting are running on a converged infrastructure footprint with the aim of reducing the overall size of the platform. This solution behaves as a standalone Infrastructure Cloud platform.
+ + * - Meets Needs of + - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) + - Cloud Infrastructure Management Plane (Cloud Infrastructure fault and performance management and platform automation) + - Cloud Infrastructure Tenant / User Plane + * - General Considerations: What are the general considerations, irrespective of the deployment stereotype/technology used in the storage sub-system? + - 1. Is there a need to support multiple clusters / availability zones at the same site? See the guidance for the "Satellite Data-centre Storage" use case (1). + 2. Is Shared File Storage required? Check the sharing scope carefully, as a fully virtualised NFs solution adds complexity and increases resource needs. + 3. Is there a need for large local capacity? With large-capacity flash (15-30 TB/device), the solution can hold significant storage capacity, but the data loss prevention needs and the impact on rebuild/recovery times must be considered carefully + * - Specific Considerations: Selecting a particular stereotype/technology can bring with it considerations that are specific to this choice + - Converged Software Defined Storage: + 1. Leverage the same technology as in the "Dedicated Software-Defined Storage" scenarios, but on converged infrastructure. To meet capacity needs, provision three hosts for storage, and the rest for virtual infrastructure, storage control and management, and tenant workload hosting. + 2. If the solution needs to host two clusters/availability zones, then have sharable storage instances. + 3. 
Resilience through rapid rebuild (N + 0 or N + 1) + * - **Edge Cloud for App Storage** + - Support the deployment of Applications at the edge, which tend to have greater storage needs than a network VNF/CNF + * - Meets Needs of + - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) + - Cloud Infrastructure Tenant / User Plane - very limited configuration storage + * - **Edge Cloud for VNF/CNF Storage** + - Support the deployment of VNF / CNF at the edge. + * - Meets Needs of + - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) + - Cloud Infrastructure Tenant / User Plane - limited configuration storage + * - **Edge Cloud for Content Storage** + - Support the deployment of a media content cache at the edge. This is a very common Content Distribution Network (CDN) use case + * - Meets Needs of + - - Cloud Infrastructure Control Plane (tenant Virtual Machine and Container life-cycle management and control) + - Cloud Infrastructure Tenant / User Plane - Media Content storage + * - General Considerations: What are the general considerations, irrespective of the deployment stereotype/technology used in the storage sub-system? + - 1. Consuming and exposing Object storage through the Tenant application + 2. Use Embedded Shared File Storage for Control and Tenant Storage Needs + * - Specific Considerations: Selecting a particular stereotype/technology can bring with it considerations that are specific to this choice + - Embedded Shared File Storage: + 1. What is the best way to achieve some level of data resilience, while minimising the required infrastructure? 
(i.e., there is not the luxury of having hosts (VMs) dedicated to supporting storage control and storage data needs) **Table 3-10:** Storage Considerations From adb151aac4fcc8a9d403a1fb69b2454fdf381985 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 21:38:55 +0100 Subject: [PATCH 25/32] fixed Table 4-8 and renumbered Tables 4-* --- doc/ref_model/chapters/chapter04.rst | 309 ++++++++++++++------------- 1 file changed, 158 insertions(+), 151 deletions(-) diff --git a/doc/ref_model/chapters/chapter04.rst b/doc/ref_model/chapters/chapter04.rst index cc64713b..21e73f2b 100644 --- a/doc/ref_model/chapters/chapter04.rst +++ b/doc/ref_model/chapters/chapter04.rst @@ -275,151 +275,154 @@ Internal Performance Measurement Capabilities these capabilities will be determined by the Cloud Infrastructure Profile used by the workloads. These measurements or events should be collected and monitored by monitoring tools. -* - Ref - - Cloud Infrastructure Capability - - Unit - - Definition/Notes -* - i.pm.001 - - Host CPU usage - - nanoseconds - - Per Compute node. It maps to ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008` processor usage metric (Cloud Infrastructure internal). -* - i.pm.002 - - Virtual compute resource (vCPU) usage - - nanoseconds - - Per VM or Pod. It maps to ETSI GS NFV-IFA 027 v2.4.1 :cite:p:`etsigsnfvifa027` Mean vCPU usage and Peak vCPU usage (Cloud Infrastructure external). -* - i.pm.003 - - Host CPU utilisation - - % - - Per Compute node. It maps to ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008` processor usage metric (Cloud Infrastructure internal). -* - i.pm.004 - - Virtual compute resource (vCPU) utilisation - - % - - Per VM or Pod. It maps to ETSI GS NFV-IFA 027 v2.4.1 :cite:p:`etsigsnfvifa027` Mean vCPU usage and Peak vCPU usage (Cloud Infrastructure external). 
-* - i.pm.005 - - Network metric, Packet count - - Number of packets - - Number of successfully transmitted or received packets per physical or virtual interface, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.006 - - Network metric, Octet count - - 8-bit bytes - - Number of 8-bit bytes that constitute successfully transmitted or received packets per physical or virtual interface, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.007 - - Network metric, Dropped Packet count - - Number of packets - - Number of discarded packets per physical or virtual interface, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.008 - - Network metric, Errored Packet count - - Number of packets - - Number of erroneous packets per physical or virtual interface, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.009 - - Memory buffered - - KiB - - Amount of temporary storage for raw disk blocks, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.010 - - Memory cached - - KiB - - Amount of RAM used as cache memory, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.011 - - Memory free - - KiB - - Amount of RAM unused, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.012 - - Memory slab - - KiB - - Amount of memory used as a data structure cache by the kernel, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.013 - - Memory total - - KiB - - Amount of usable RAM, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.014 - - Storage free space - - Bytes - - Amount of unused storage for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.015 - - Storage used space - - Bytes - - Amount of storage used for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. 
-* - i.pm.016 - - Storage reserved space - - Bytes - - Amount of storage reserved for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.017 - - Storage Read latency - - Milliseconds - - Average amount of time to perform a Read operation for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.018 - - Storage Read IOPS - - Operations per second - - Average rate of performing Read operations for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.019 - - Storage Read Throughput - - Bytes per second - - Average rate of performing Read operations for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1. -* - i.pm.020 - - Storage Write latency - - Milliseconds - - Average amount of time to perform a Write operation for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1. -* - i.pm.021 - - Storage Write IOPS - - Operations per second - - Average rate of performing Write operations for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.022 - - Storage Write Throughput - - Bytes per second - - Average rate of performing Write operations for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.023 - - Host power utilization - - Watt (Joule/s) - - Real-time electrical power used by a node (1). -* - i.pm.024 - - Host energy consumption - - Watt.hour (Joule) - - Electrical energy consumption of a node since the related counter was last reset (2). -* - i.pm.025 - - CPU power utilization - - Watt (Joule/s) - - Real-time electrical power used by the processor(s) of a node (1). -* - i.pm.026 - - CPU energy consumption - - Watt.hour (Joule) - - Electrical energy consumption of the processor(s) of a node since the related counter was last reset (2). 
-* - i.pm.027 - - PCIe device power utilization - - Watt (Joule/s) - - Real-time electrical power used by a specific PCI device of a node (1). -* - i.pm.028 - - PCIe device energy consumption - - Watt.hour (Joule) - - Electrical energy consumption of a specific PCI device of a node since the related counter was last reset (2). -* - i.pm.029 - - RAM power utilization - - Watt (Joule/s) - - Real-time electrical power used by the memory of a node (1). -* - i.pm.030 - - RAM energy consumption - - Watt.hour (Joule) - - Electrical energy consumption of the memory of a node since the related counter was last reset (2). -* - i.pm.031 - - Disk power utilization - - Watt (Joule/s) - - Real-time electrical power used by a specific storage device of a node (1). -* - i.pm.032 - - Disk energy consumption - - Watt.hour (Joule) - - Electrical energy consumption of a specific storage device of a node since the related counter was last reset (2). -* - i.pm.033 - - Hugepages pool total - - Integer - - The number of Hugepages currently configured in the pool, which is the total of pages available, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.034 - - Hugepages used - - Integer - - The number of used pages in the Hugepage Pool, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. -* - i.pm.035 - - Hugepages free - - Integer - - The number of free pages in the Hugepage Pool, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. +.. list-table:: Internal Measurement Capabilities of Cloud Infrastructure + :widths: 25 25 25 25 + :header-rows: 1 + * - Ref + - Cloud Infrastructure Capability + - Unit + - Definition/Notes + * - i.pm.001 + - Host CPU usage + - nanoseconds + - Per Compute node. It maps to ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008` processor usage metric (Cloud Infrastructure internal). + * - i.pm.002 + - Virtual compute resource (vCPU) usage + - nanoseconds + - Per VM or Pod. 
It maps to ETSI GS NFV-IFA 027 v2.4.1 :cite:p:`etsigsnfvifa027` Mean vCPU usage and Peak vCPU usage (Cloud Infrastructure external). + * - i.pm.003 + - Host CPU utilisation + - % + - Per Compute node. It maps to ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008` processor usage metric (Cloud Infrastructure internal). + * - i.pm.004 + - Virtual compute resource (vCPU) utilisation + - % + - Per VM or Pod. It maps to ETSI GS NFV-IFA 027 v2.4.1 :cite:p:`etsigsnfvifa027` Mean vCPU usage and Peak vCPU usage (Cloud Infrastructure external). + * - i.pm.005 + - Network metric, Packet count + - Number of packets + - Number of successfully transmitted or received packets per physical or virtual interface, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.006 + - Network metric, Octet count + - 8-bit bytes + - Number of 8-bit bytes that constitute successfully transmitted or received packets per physical or virtual interface, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.007 + - Network metric, Dropped Packet count + - Number of packets + - Number of discarded packets per physical or virtual interface, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.008 + - Network metric, Errored Packet count + - Number of packets + - Number of erroneous packets per physical or virtual interface, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.009 + - Memory buffered + - KiB + - Amount of temporary storage for raw disk blocks, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.010 + - Memory cached + - KiB + - Amount of RAM used as cache memory, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.011 + - Memory free + - KiB + - Amount of RAM unused, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. 
+ * - i.pm.012 + - Memory slab + - KiB + - Amount of memory used as a data structure cache by the kernel, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.013 + - Memory total + - KiB + - Amount of usable RAM, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.014 + - Storage free space + - Bytes + - Amount of unused storage for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.015 + - Storage used space + - Bytes + - Amount of storage used for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.016 + - Storage reserved space + - Bytes + - Amount of storage reserved for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.017 + - Storage Read latency + - Milliseconds + - Average amount of time to perform a Read operation for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.018 + - Storage Read IOPS + - Operations per second + - Average rate of performing Read operations for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.019 + - Storage Read Throughput + - Bytes per second + - Average rate of performing Read operations for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1. + * - i.pm.020 + - Storage Write latency + - Milliseconds + - Average amount of time to perform a Write operation for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1. + * - i.pm.021 + - Storage Write IOPS + - Operations per second + - Average rate of performing Write operations for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. 
+ * - i.pm.022 + - Storage Write Throughput + - Bytes per second + - Average rate of performing Write operations for a given storage system, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.023 + - Host power utilization + - Watt (Joule/s) + - Real-time electrical power used by a node (1). + * - i.pm.024 + - Host energy consumption + - Watt.hour (Joule) + - Electrical energy consumption of a node since the related counter was last reset (2). + * - i.pm.025 + - CPU power utilization + - Watt (Joule/s) + - Real-time electrical power used by the processor(s) of a node (1). + * - i.pm.026 + - CPU energy consumption + - Watt.hour (Joule) + - Electrical energy consumption of the processor(s) of a node since the related counter was last reset (2). + * - i.pm.027 + - PCIe device power utilization + - Watt (Joule/s) + - Real-time electrical power used by a specific PCI device of a node (1). + * - i.pm.028 + - PCIe device energy consumption + - Watt.hour (Joule) + - Electrical energy consumption of a specific PCI device of a node since the related counter was last reset (2). + * - i.pm.029 + - RAM power utilization + - Watt (Joule/s) + - Real-time electrical power used by the memory of a node (1). + * - i.pm.030 + - RAM energy consumption + - Watt.hour (Joule) + - Electrical energy consumption of the memory of a node since the related counter was last reset (2). + * - i.pm.031 + - Disk power utilization + - Watt (Joule/s) + - Real-time electrical power used by a specific storage device of a node (1). + * - i.pm.032 + - Disk energy consumption + - Watt.hour (Joule) + - Electrical energy consumption of a specific storage device of a node since the related counter was last reset (2). + * - i.pm.033 + - Hugepages pool total + - Integer + - The number of Hugepages currently configured in the pool, which is the total of pages available, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. 
+ * - i.pm.034 + - Hugepages used + - Integer + - The number of used pages in the Hugepage Pool, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. + * - i.pm.035 + - Hugepages free + - Integer + - The number of free pages in the Hugepage Pool, as defined in ETSI GS NFV-TST 008 V3.5.1 :cite:p:`etsigsnfvtst008`. **Table 4-8:** Internal Measurement Capabilities of Cloud Infrastructure @@ -728,7 +731,7 @@ Profiles specifications and capability mapping | | | | | depending on technology. | +---------+----------------------------------------+------------+-------------+----------------------------------------+ |e.cap.023| Huge page support according to | No | Yes | Internal performance capabilities, | -| | Table 4-7. | | | according to Table 4-7. | +| | Table 4-2. | | | according to Table 4-2. | +---------+----------------------------------------+------------+-------------+----------------------------------------+ |e.cap.025| AF_XDP | No | Optional | These capabilities require workload | | | | | | support for the AF_XDP socket type. | @@ -741,6 +744,8 @@ Profiles specifications and capability mapping | | | | | storage encryption. | +---------+----------------------------------------+------------+-------------+----------------------------------------+ +**Table 4-13:** Profiles specifications and capability mapping + .. 1 See Figure 5-1 :ref:`chapters/chapter05:cloud infrastructure software profile description`. @@ -853,6 +858,8 @@ they can run on. - Labels a host or node that is connected to a programmable switch fabric or a TOR switch. - +**Table 4-14:** Profile extensions + Workload flavours and specifications of other capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -898,7 +905,7 @@ Workload flavour sizing consists of the following: | persistent | | | +-------------+----------+---------------------------------------------------------------------------------------------+ -**Table 4-12:** Workload flavour geometry specification. 
+**Table 4-15:** Workload flavour geometry specification. The flavours' syntax consists of pairs, separated by a colon (“:”), for example: {cpu: 4; memory: 8 Gi; storage-permanent: 80 Gi}. @@ -1057,7 +1064,7 @@ The following table shows a complete list of the specifications that need to be extensions. - Optional -**Table 4-13:** Specifications of resource flavours (complete list of workload capabilities) +**Table 4-16:** Specifications of resource flavours (complete list of workload capabilities) Virtual network interface specifications ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -1088,7 +1095,7 @@ n50, n100, n150, n200, n250, n300 50, 100, 150, 200, 250, 300 Gbps n100, n200, n300, n400, n500, n600 100, 200, 300, 400, 500, 600 Gbps ================================== ================================= -**Table 4-14:** Virtual network interface specification examples +**Table 4-17:** Virtual network interface specification examples Storage extensions ~~~~~~~~~~~~~~~~~~ @@ -1096,8 +1103,8 @@ Storage extensions Persistent storage is associated with workloads via storage extensions. The storage qualities specified by the Storage Extension pertain to the "Platform Native - Hypervisor Attached" and "Platform Native - Container Persistent" storage types, as defined in "3.6.3 Storage for Tenant Consumption". The size of an extension can be specified explicitly in -increments of 100 GB (Table 4-15), ranging from a minimum of 100 GB to a maximum of 16 TB. Extensions are configured -with the required performance category, in accordance with Table 4-15. Multiple persistent storage extensions can be +increments of 100 GB (Table 4-18), ranging from a minimum of 100 GB to a maximum of 16 TB. Extensions are configured +with the required performance category, in accordance with Table 4-18. Multiple persistent storage extensions can be attached to virtual compute instances. *Note:* This specification uses GB and GiB to refer to a Gibibyte (2^30 bytes), except where otherwise stated. 
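The flavour pair syntax described above (for example, {cpu: 4; memory: 8 Gi; storage-permanent: 80 Gi}) can be parsed mechanically. The sketch below is a minimal, illustrative parser; the function name and the unit table are assumptions for demonstration, since the Reference Model specifies only the syntax itself, not a parser:

```python
import re

# Minimal, illustrative parser for the flavour geometry pair syntax,
# e.g. "{cpu: 4; memory: 8 Gi; storage-permanent: 80 Gi}". The unit table
# and function name are assumptions for demonstration only.

_UNITS = {None: 1, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def parse_flavour(spec: str) -> dict:
    """Parse '{key: value [unit]; ...}' into a dict of key -> base units."""
    body = spec.strip()
    if not (body.startswith("{") and body.endswith("}")):
        raise ValueError("flavour spec must be enclosed in braces")
    result = {}
    for pair in body[1:-1].split(";"):
        key, sep, value = pair.partition(":")
        m = re.fullmatch(r"\s*(\d+)\s*(Mi|Gi|Ti)?\s*", value)
        if not sep or not m:
            raise ValueError(f"malformed pair: {pair!r}")
        result[key.strip()] = int(m.group(1)) * _UNITS[m.group(2)]
    return result

flavour = parse_flavour("{cpu: 4; memory: 8 Gi; storage-permanent: 80 Gi}")
assert flavour["cpu"] == 4 and flavour["memory"] == 8 * 2**30
```

A validator along these lines could also enforce the storage-extension sizing rules quoted above (100 GB increments, from 100 GB up to 16 TB) before accepting a flavour.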
@@ -1110,6 +1117,6 @@ attached to virtual compute instances. .gold Up to 680K Up to 360K Up to 2650 Up to 1400 1TB ======= ========== ========== ====================== ======================= ============== -**Table 4-15:** Storage Extensions +**Table 4-18:** Storage Extensions *Note:* Performance is based on a block size of 256 KB or larger. From 0b75cd773a989ba6bfcc6c36373c4ce19a8f1189 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 22:30:10 +0100 Subject: [PATCH 26/32] fixed Table 6-2 and 6-3 --- doc/ref_model/chapters/chapter06.rst | 533 +++++++++++++++++---------- 1 file changed, 345 insertions(+), 188 deletions(-) diff --git a/doc/ref_model/chapters/chapter06.rst b/doc/ref_model/chapters/chapter06.rst index bdd7af6d..2d9d6b6b 100644 --- a/doc/ref_model/chapters/chapter06.rst +++ b/doc/ref_model/chapters/chapter06.rst @@ -55,7 +55,6 @@ points are shown in **Table 6-1**. ETSI NFV architecture mapping - +-----------+----------------+---------------------------------------+-------------------------------------------------+ | Interface | Cloud | Interface between | Description | | point | Infrastructure | | | @@ -132,34 +131,34 @@ capacity parameters are specified. IP addresses and storage volumes can be attac +--------------+------+----+------+------+------+----------------------------------------------------------------------+ | Resource |Create|List|Attach|Detach|Delete| Notes | +==============+======+====+======+======+======+======================================================================+ -| Flavour | + | + | | | + | | +| Flavour | x | x | | | x | | +--------------+------+----+------+------+------+----------------------------------------------------------------------+ -| Image | + | + | | | + | Created and deleted by the appropriate administrators. | +| Image | x | x | | | x | Created and deleted by the appropriate administrators. 
| +--------------+------+----+------+------+------+----------------------------------------------------------------------+ -| Key pairs | + | + | | | + | | +| Key pairs | x | x | | | x | | +--------------+------+----+------+------+------+----------------------------------------------------------------------+ | Privileges | | | | | | Created and managed by the Cloud Service Provider (CSP) | | | | | | | | administrators. | +--------------+------+----+------+------+------+----------------------------------------------------------------------+ -| Role | + | + | | | + | Created and deleted by authorized administrators where roles are | +| Role | x | x | | | x | Created and deleted by authorized administrators where roles are | | | | | | | | assigned privileges and mapped to the users in scope. | +--------------+------+----+------+------+------+----------------------------------------------------------------------+ -| Security | + | + | | | + | Created and deleted only by the VDC administrators. | +| Security | x | x | | | x | Created and deleted only by the VDC administrators. | | groups | | | | | | | +--------------+------+----+------+------+------+----------------------------------------------------------------------+ -| Stack | + | + | | | + | Created and deleted by VDC users with the appropriate role. | +| Stack | x | x | | | x | Created and deleted by VDC users with the appropriate role. | +--------------+------+----+------+------+------+----------------------------------------------------------------------+ -| Virtual | + | + | + | + | + | Created and deleted by VDC users with the appropriate role. | +| Virtual | x | x | x | x | x | Created and deleted by VDC users with the appropriate role. | | storage | | | | | | | +--------------+------+----+------+------+------+----------------------------------------------------------------------+ -| User | + | + | | + | + | Created and deleted only by the VDC administrators. 
| +| User | x | x | | x | x | Created and deleted only by the VDC administrators. | +--------------+------+----+------+------+------+----------------------------------------------------------------------+ -| Tenant | + | + | | + | + | Created and deleted only by the Cloud Zone administrators. | +| Tenant | x | x | | x | x | Created and deleted only by the Cloud Zone administrators. | +--------------+------+----+------+------+------+----------------------------------------------------------------------+ -| Virtual | + | + | | + | + | Created and deleted by VDC users with the appropriate role. | +| Virtual | x | x | | x | x | Created and deleted by VDC users with the appropriate role. | | compute | | | | | | Additional operations include suspend and unsuspend. | +--------------+------+----+------+------+------+----------------------------------------------------------------------+ -| Virtual | + | + | + | + | + | Created and deleted by VDC users with the appropriate role. | +| Virtual | x | x | x | x | x | Created and deleted by VDC users with the appropriate role. | | network | | | | | | | +--------------+------+----+------+------+------+----------------------------------------------------------------------+ @@ -187,182 +186,340 @@ alarm information. These acceleration interfaces are summarized here in Table 6.3 for your convenience only. - - -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| Request | Response | From, | Type | Parameter | Description | -| | | To | | | | -+=======================+========================+=======+========+===============+====================================+ -| InitAccRequest | InitAccResponse | VNF → | Input | accFilter | The accelerator subsystems to | -| | | NFVI | | | initialize and retrieve their | -| | | | | | capabilities. 
| -| | | +--------+---------------+------------------------------------+ -| | | | Filter | accAttributeS | The attribute names of the | -| | | | | elector | accelerator capabilities. | -| | | +--------+---------------+------------------------------------+ -| | | | Output | accCapabiliti | The acceleration subsystem | -| | | | | es | capabilities. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| RegisterForAccEventRe | RegisterForAccEventRes | VNF → | Input | accEvent | The event in which the VNF is | -| | | | | | interested. | -| quest | ponse | NFVI +--------+---------------+------------------------------------+ -| | | | Input | vnfEventHandl | The handler for the NFVI to use | -| | | | | erId | when notifying the VNF of the | -| | | | | | event. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| AccEventNotificationR | AccEventNotificationRe | NFVI | Input | vnfEventHandl | The handler used by the VNF | -| equest | sponse | → VNF | | erId | registering for this event. | -| | | +--------+---------------+------------------------------------+ -| | | | Input | accEventMetaD | | -| | | | | ata | | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| DeRegisterForAccEvent | DeRegisterForAccEventR | VNF → | Input | accEvent | The event from which the VNF is | -| Request | esponse | NFVI | | | deregistering. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| ReleaseAccRequest | ReleaseAccResponse | VNF → | | | | -| | | NFVI | | | | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| | | VNF → | Input | accConfigurat | The configuration data for the | -| | | NFVI | | ionData | accelerator. 
| -| ModifyAccConfiguratio | ModifyAccConfiguration | +--------+---------------+------------------------------------+ -| nRequest | Response | | Input | accSubSysConf | The configuration data for the | -| | | | | igurationData | accelerator subsystem. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| | | | Input | accFilter | The filter for the subsystems from | -| | | | | | which the configuration data is | -| | | | | | requested. | -| | | +--------+---------------+------------------------------------+ -| GetAccConfigsRequest | GetAccConfigsResponse | VNF → | Input | accConfigSele | The attributes of the | -| | | | | ctor | configuration types. | -| | | NFVI +--------+---------------+------------------------------------+ -| | | | Output | accConfigs | The configuration information | -| | | | | | (only for the specified | -| | | | | | attributes) for the specified | -| | | | | | subsystems. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| | | | Input | accFilter | The filter for the subsystems for | -| | | VNF → | | | which the configuration is to be | -| | | | | | reset. | -| ResetAccConfigsReque | ResetAccConfigsRespon | NFVI +--------+---------------+------------------------------------+ -| st | se | | Input | accConfigSele | The attributes of the | -| | | | | ctor | configuration types whose values | -| | | | | | will be reset. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| | | | Input | accData | The data (metadata) sent to | -| | | | | | the accelerator. | -| | | +--------+---------------+------------------------------------+ -| AccDataRequest | AccDataResponse | VNF → | Input | accChannel | The channel to which the data is | -| | | | | | to be sent. 
| -| | | NFVI +--------+---------------+------------------------------------+ -| | | | Output | accData | The data from the accelerator. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| AccSendDataRequest | AccSendDataResponse | VNF → | Input | accData | The data (metadata) sent to the | -| | | NFVI | | | accelerator. | -| | | +--------+---------------+------------------------------------+ -| | | | Input | accChannel | The channel to which the data is | -| | | | | | to be sent. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| | | | Input | maxNumberOfDa | The maximum number of data items | -| | | | | taItems | to be received. | -| | | +--------+---------------+------------------------------------+ -| AccReceiveDataRequest | AccReceiveDataResponse | VNF → | Input | accChannel | Channel data is requested from the | -| | | | | | accelerator. | -| | | NFVI +--------+---------------+------------------------------------+ -| | | | Output | accData | Data is received from the | -| | | | | | accelerator. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| RegisterForAccDataAva | RegisterForAccDataAvai | VNF → | Input | regHandlerId | Registration identifier. | -| ilableEventRequest | lableEventResponse | NFVI +--------+---------------+------------------------------------+ -| | | | Input | accChannel | Channel where the event is | -| | | | | | requested. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| AccDataAvailableEvent | AccDataAvailableEventN | NFVI | Input | regHandlerId | Reference used by the VNF when | -| NotificationRequest | otificationResponse | → VNF | | | registering for the event. 
| -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| DeRegisterForAccDataA | DeRegisterForAccDataAv | VNF → | Input | accChannel | Channel related to the event. | -| vailableEventRequest | ailableEventResponse | NFVI | | | | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| | | | Input | attachTarget | The resource to which the | -| | | | | Info | accelerator is to be attached | -| | | | | | (for example, VM). | -| | | +--------+---------------+------------------------------------+ -| AllocateAccResourceRe | AllocateAccResourceRes | VIM → | Input | accResourceI | Accelerator information. | -| quest | ponse | NFVI | | nfo | | -| | | +--------+---------------+------------------------------------+ -| | | | Output | accResourceId | ID, if successful. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| ReleaseAccResourceReq | ReleaseAccResourceResp | VIM → | Input | accResourceId | ID of the resource to be released. | -| uest | onse | NFVI | | | | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| | | | Input | hostId | ID of the specified host. | -| | | +--------+---------------+------------------------------------+ -| QueryAccResourceReque | QueryAccResourceRespon | VIM → | Input | Filter | Specifies the accelerators to | -| st | se | NFVI | | | which the query applies. | -| | | +--------+---------------+------------------------------------+ -| | | | Output | accQueryResu | Details of the accelerators | -| | | | | lt | matching the input filter located | -| | | | | | in the selected host. 
| -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| | | | Input | accFilter | Accelerator subsystems from which | -| | | | | | data is requested. | -| | | +--------+---------------+------------------------------------+ -| GetAccStatisticsReque | GetAccStatisticsRespon | VIM → | Input | accStatSelect | Attributes of AccStatistics whose | -| st | se | NFVI | | or | data is returned. | -| | | +--------+---------------+------------------------------------+ -| | | | Output | accStatistics | Statistics data of the | -| | | | | | accelerators matching the input | -| | | | | | filter located in the selected | -| | | | | | host. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| ResetAccStatisticsReq | ResetAccStatisticsResp | VIM → | Input | accFilter | Accelerator subsystems for which | -| uest | onse | NFVI | | | the data is to be reset. | -| | | +--------+---------------+------------------------------------+ -| | | | Input | accStatSelect | Attributes of AccStatistics whose | -| | | | | or | data will be reset. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| | | | Input | hostId | ID of the specified host. | -| | | +--------+---------------+------------------------------------+ -| SubscribeRequest | SubscribeResponse | VIM → | Input | Filter | Specifies the accelerators and | -| | | NFVI | | | the related alarms. The filter can | -| | | | | | include accelerator information, | -| | | | | | severity of the alarm, and so on. | -| | | +--------+---------------+------------------------------------+ -| | | | Output | Subscriptio | Identifier of the successfully | -| | | | | nId | created subscription. 
| -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| UnsubscribeRequest | UnsubscribeResponse | VIM → | Input | hostId | ID of the specified host. | -| | | NFVI +--------+---------------+------------------------------------+ -| | | | Input | Subscription | Identifier of the subscription to | -| | | | | Id | be unsubscribed. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| Notify | | NFVI | | | NFVI notifies an alarm to VIM. | -| | | → VIM | | | | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| | | | Input | hostId | ID of the specified host. | -| | | +--------+---------------+------------------------------------+ -| GetAlarmInfoRequest | GetAlarmInfoResponse | VIM → | Input | Filter | Specifies the accelerators and | -| | | NFVI | | | the related alarms. The filter can | -| | | | | | include accelerator information, | -| | | | | | severity of the alarm, and so on. | -| | | +--------+---------------+------------------------------------+ -| | | | Output | Alarm | Information about the alarms, if | -| | | | | | the filter matches an alarm. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| AccResourcesDiscovery | AccResourcesDiscoveryR | VIM → | Input | hostId | ID of the specified host. | -| Request | esponse | NFVI +--------+---------------+------------------------------------+ -| | | | Output | discoveredAcc | Information on the acceleration | -| | | | | ResourceInfo | resources discovered within the | -| | | | | | NFVI. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ -| | | | Input | accResourceId | Identifier of the chosen | -| | | | | | accelerator in the NFVI. 
| -| | | +--------+---------------+------------------------------------+ -| OnloadAccImageRequest | OnloadAccImageResponse | VIM → | Input | accImageInfo | Information about the acceleration | -| | | NFVI | | | image. | -| | | +--------+---------------+------------------------------------+ -| | | | Input | accImage | The binary file of the | -| | | | | | acceleration image. | -+-----------------------+------------------------+-------+--------+---------------+------------------------------------+ +.. list-table:: Hardware acceleration interfaces in the ETSI NFV architecture + :widths: 20 20 10 10 20 20 + :header-rows: 1 + + * - Request + - Response + - From, To + - Type + - Parameter + - Description + * - InitAccRequest + - InitAccResponse + - VNF → NFVI + - Input + - accFilter + - The accelerator subsystems to initialize and retrieve their capabilities. + * - + - + - + - Filter + - accAttributeSelector + - The attribute names of the accelerator capabilities. + * - + - + - + - Output + - accCapabilities + - The acceleration subsystem capabilities. + * - RegisterForAccEventRequest + - RegisterForAccEventResponse + - VNF → NFVI + - Input + - accEvent + - The event in which the VNF is interested. + * - + - + - + - Input + - vnfEventHandlerId + - The handler for the NFVI to use when notifying the VNF of the event. + * - AccEventNotificationRequest + - AccEventNotificationResponse + - NFVI → VNF + - Input + - vnfEventHandlerId + - The handler used by the VNF registering for this event. + * - + - + - + - Input + - accEventMetaData + - + * - DeRegisterForAccEventRequest + - DeRegisterForAccEventResponse + - VNF → NFVI + - Input + - accEvent + - The event from which the VNF is deregistering. + * - ReleaseAccRequest + - ReleaseAccResponse + - VNF → NFVI + - + - + - + * - ModifyAccConfigurationRequest + - ModifyAccConfigurationResponse + - VNF → NFVI + - Input + - accConfigurationData + - The configuration data for the accelerator. 
+ * - + - + - + - Input + - accSubSysConfigurationData + - The configuration data for the accelerator subsystem. + * - GetAccConfigsRequest + - GetAccConfigsResponse + - VNF → NFVI + - Input + - accFilter + - The filter for the subsystems from which the configuration data is requested. + * - + - + - + - Input + - accConfigSelector + - The attributes of the configuration types. + * - + - + - + - Output + - accConfigs + - The configuration information (only for the specified attributes) for the specified subsystems. + * - ResetAccConfigsRequest + - ResetAccConfigsResponse + - VNF → NFVI + - Input + - accFilter + - The filter for the subsystems for which the configuration is to be reset. + * - + - + - + - Input + - accConfigSelector + - The attributes of the configuration types whose values will be reset. + * - AccDataRequest + - AccDataResponse + - VNF → NFVI + - Input + - accData + - The data (metadata) sent to the accelerator. + * - + - + - + - Input + - accChannel + - The channel to which the data is to be sent. + * - + - + - + - Output + - accData + - The data from the accelerator. + * - AccSendDataRequest + - AccSendDataResponse + - VNF → NFVI + - Input + - accData + - The data (metadata) sent to the accelerator. + * - + - + - + - Input + - accChannel + - The channel to which the data is to be sent. + * - AccReceiveDataRequest + - AccReceiveDataResponse + - VNF → NFVI + - Input + - maxNumberOfDataItems + - The maximum number of data items to be received. + * - + - + - + - Input + - accChannel + - Channel data is requested from the accelerator. + * - + - + - + - Output + - accData + - Data is received from the accelerator. + * - RegisterForAccDataAvailableEventRequest + - RegisterForAccDataAvailableEventResponse + - VNF → NFVI + - Input + - regHandlerId + - Registration identifier. + * - + - + - + - Input + - accChannel + - Channel where the event is requested. 
+ * - AccDataAvailableEventNotificationRequest + - AccDataAvailableEventNotificationResponse + - NFVI → VNF + - Input + - regHandlerId + - Reference used by the VNF when registering for the event. + * - DeRegisterForAccDataAvailableEventRequest + - DeRegisterForAccDataAvailableEventResponse + - VNF → NFVI + - Input + - accChannel + - Channel related to the event. + * - AllocateAccResourceRequest + - AllocateAccResourceResponse + - VIM → NFVI + - Input + - attachTargetInfo + - The resource to which the accelerator is to be attached (for example, VM). + * - + - + - + - Input + - accResourceInfo + - Accelerator information. + * - + - + - + - Output + - accResourceId + - ID, if successful. + * - ReleaseAccResourceRequest + - ReleaseAccResourceResponse + - VIM → NFVI + - Input + - accResourceId + - ID of the resource to be released. + * - QueryAccResourceRequest + - QueryAccResourceResponse + - VIM → NFVI + - Input + - hostId + - ID of the specified host. + * - + - + - + - Input + - Filter + - Specifies the accelerators to which the query applies. + * - + - + - + - Output + - accQueryResult + - Details of the accelerators matching the input filter located in the selected host. + * - GetAccStatisticsRequest + - GetAccStatisticsResponse + - VIM → NFVI + - Input + - accFilter + - Accelerator subsystems from which data is requested. + * - + - + - + - Input + - accStatSelector + - Attributes of AccStatistics whose data is returned. + * - + - + - + - Output + - accStatistics + - Statistics data of the accelerators matching the input filter located in the selected host. + * - ResetAccStatisticsRequest + - ResetAccStatisticsResponse + - VIM → NFVI + - Input + - accFilter + - Accelerator subsystems for which the data is to be reset. + * - + - + - + - Input + - accStatSelector + - Attributes of AccStatistics whose data will be reset. + * - SubscribeRequest + - SubscribeResponse + - VIM → NFVI + - Input + - hostId + - ID of the specified host. 
+ * - + - + - + - Input + - Filter + - Specifies the accelerators and the related alarms. The filter can include accelerator information, severity of the alarm, and so on. + * - + - + - + - Output + - SubscriptionId + - Identifier of the successfully created subscription. + * - UnsubscribeRequest + - UnsubscribeResponse + - VIM → NFVI + - Input + - hostId + - ID of the specified host. + * - + - + - + - Input + - SubscriptionId + - Identifier of the subscription to be unsubscribed. + * - Notify + - + - NFVI → VIM + - + - + - NFVI notifies an alarm to VIM. + * - GetAlarmInfoRequest + - GetAlarmInfoResponse + - VIM → NFVI + - Input + - hostId + - ID of the specified host. + * - + - + - + - Input + - Filter + - Specifies the accelerators and the related alarms. The filter can include accelerator information, severity of the alarm, and so on. + * - + - + - + - Output + - Alarm + - Information about the alarms, if the filter matches an alarm. + * - AccResourcesDiscoveryRequest + - AccResourcesDiscoveryResponse + - VIM → NFVI + - Input + - hostId + - ID of the specified host. + * - + - + - + - Output + - discoveredAccResourceInfo + - Information on the acceleration resources discovered within the NFVI. + * - OnloadAccImageRequest + - OnloadAccImageResponse + - VIM → NFVI + - Input + - accResourceId + - Identifier of the chosen accelerator in the NFVI. + * - + - + - + - Input + - accImageInfo + - Information about the acceleration image. + * - + - + - + - Input + - accImage + - The binary file of the acceleration image. 
**Table 6-3:** Hardware acceleration interfaces in the ETSI NFV architecture

From c47131192dd4f68c44132d64f4fc7b38e3b8046a Mon Sep 17 00:00:00 2001
From: Petar Torre
Date: Thu, 28 Nov 2024 22:38:53 +0100
Subject: [PATCH 27/32] fixed Table 7-10 and 7-16

---
 doc/ref_model/chapters/chapter07.rst | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/doc/ref_model/chapters/chapter07.rst b/doc/ref_model/chapters/chapter07.rst
index 9303426b..923fb021 100644
--- a/doc/ref_model/chapters/chapter07.rst
+++ b/doc/ref_model/chapters/chapter07.rst
@@ -1492,6 +1492,10 @@ for both the Prod-Platform and the NonProd-Platform.
 Open-source software
 ~~~~~~~~~~~~~~~~~~~~
 
+.. list-table:: Open-source software requirements
+   :widths: 20 50 30
+   :header-rows: 1
+
 * - Ref
   - Requirement
   - Definition/Note
@@ -1673,6 +1677,10 @@ IaaC - Runtime defence and monitoring requirements
 Compliance with standards
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
+.. list-table:: Compliance with standards requirements
+   :widths: 30 40 30
+   :header-rows: 1
+
 * - Ref
   - Requirement
   - Definition/Note

From 472a7a44a870ee387a31949d8913956916912b05 Mon Sep 17 00:00:00 2001
From: Petar Torre
Date: Thu, 28 Nov 2024 22:54:47 +0100
Subject: [PATCH 28/32] renumbered and fixed Tables 8-*

---
 doc/ref_model/chapters/chapter08.rst | 79 +++++++++++++---------------
 1 file changed, 37 insertions(+), 42 deletions(-)

diff --git a/doc/ref_model/chapters/chapter08.rst b/doc/ref_model/chapters/chapter08.rst
index fce7c909..86482042 100644
--- a/doc/ref_model/chapters/chapter08.rst
+++ b/doc/ref_model/chapters/chapter08.rst
@@ -419,7 +419,6 @@ These requirements are in addition to the requirements in other chapters of this
 
 **HEMP general requirements**
 
-
 .. list-table:: General requirements of the Hybrid, Edge, and Multicloud operator Platform (HEMP)
    :widths: 10 20 20
    :header-rows: 1
@@ -511,24 +510,25 @@ These requirements are in addition to the requirements in other chapters of this
| | centralised analysis of all logs. 
| | +-------------+--------------------------------------------------------+-----------------------------------------------+ -Table : Lifecycle Management (LCM) requirements of the Hybrid, Edge, and Multicloud operator Platform (HEMP) +**Table 8-4:** Lifecycle Management (LCM) requirements of the Hybrid, Edge, and Multicloud operator Platform (HEMP) **HEMP security requirements** -* hem.sec.001 - - Requirement: The HEMP should provide capabilities for the centralised management of all security policies. - - Definition/Note: (empty) - -* hem.sec.002 - - Requirement: The HEMP should provide capabilities for the centralised tracking of compliance of all security requirements (:ref:`chapters/chapter07:consolidated security requirements`). - - Definition/Note: (empty) - -* hem.sec.003 - - Requirement: The HEMP should provide capabilities for insights into the changes that resulted from resource non-compliance. - - Definition/Note: (empty) +.. list-table:: HEMP security requirements + :widths: 20 60 20 + :header-rows: 1 + * - hem.sec.001 + - Requirement: The HEMP should provide capabilities for the centralised management of all security policies. + - Definition/Note: (empty) + * - hem.sec.002 + - Requirement: The HEMP should provide capabilities for the centralised tracking of compliance of all security requirements (:ref:`chapters/chapter07:consolidated security requirements`). + - Definition/Note: (empty) + * - hem.sec.003 + - Requirement: The HEMP should provide capabilities for insights into the changes that resulted from resource non-compliance. + - Definition/Note: (empty) -**Table 8-4:** Hybrid, Edge, and Multicloud operator Platform (HEMP) security requirements +**Table 8-5:** Hybrid, Edge, and Multicloud operator Platform (HEMP) security requirements Aspects of multicloud security @@ -586,7 +586,7 @@ Security Group (FASG) and the "5G security Guide", FS.40 v2.0 document :cite:p:` | | established overall security operations model. 
| +--------------------------------+-------------------------------------------------------------------------------------+ -**Table 8-5:** Multicloud security principles +**Table 8-6:** Multicloud security principles For Telco operators to run their network functions in a multicloud environment, specifically, in public clouds, the industry will need a set of new standards and new security tools to manage and regulate the interactions between @@ -637,13 +637,13 @@ Telco Edge Cloud (TEC) deployment locations can be in any of the following envir - Harsh environments: places where there is a likelihood of chemical, heat, or electromagnetic exposure, such as factories, power stations, processing plants, and so on. -Some of the more salient characteristics can be seen in Table 8-2. +Some of the more salient characteristics can be seen in Table 8-7. .. list-table:: TEC deployment location characteristics and capabilities - :widths: 10 10 10 10 10 10 10 + :widths: 10 20 10 10 10 20 20 :header-rows: 1 - * - + * - Environmental type - Facility type - Environmental characteristics - Capabilities @@ -653,10 +653,9 @@ Some of the more salient characteristics can be seen in Table 8-2. * - Environmentally friendly - Indoors: typically commercial or residential buildings. - Protected, and therefore safe for common infrastructure. - - - * Easy access to a continuous electricity supply. - * High/medium bandwidth. - * Fixed and/or wireless network access. + - - Easy access to a continuous electricity supply. + - High/medium bandwidth. + - Fixed and/or wireless network access. - Controlled access - Commoditised infrastructure with minimal need or no need for hardening or ruggedisation. Operational benefits for installation and maintenance. @@ -664,20 +663,17 @@ Some of the more salient characteristics can be seen in Table 8-2. facilities, vendor premises, customer premises. * - Environmentally challenging - Outdoors and/or exposed to environmentally harsh conditions. 
- - - * Lack of protection. - * Exposure to abnormally high levels of noise, vibration, heat, chemical, and electromagnetic pollution. - - - * Possibility of devices having to rely on battery power only. - * Low/medium bandwidth. - * Fixed and/or mobile network access. + - - Lack of protection. + - Exposure to abnormally high levels of noise, vibration, heat, chemical, and electromagnetic pollution. + - - Possibility of devices having to rely on battery power only. + - Low/medium bandwidth. + - Fixed and/or mobile network access. - Little or no access control. - - - * Ruggedisation is likely to be expensive. - * The system is likely to be complex to operate. + - - Ruggedisation is likely to be expensive. + - The system is likely to be complex to operate. - Example locations: curb side, near cellular radios. -**Table 8-6:** TEC deployment location characteristics and capabilities** +**Table 8-7:** TEC deployment location characteristics and capabilities** Telco Edge Cloud: infrastructure characteristics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -713,10 +709,14 @@ The High-Performance profile can specify extensions for hardware offloading. For :ref:`chapters/chapter03:hardware acceleration abstraction`. The Reference Model High-Performance profile includes an initial set of :ref:`chapters/chapter04:profile extensions`. -Based on the infrastructure deployed at the edge, Table 8-3 specifies the +Based on the infrastructure deployed at the edge, Table 8-8 specifies the :ref:`chapters/chapter05:feature set and requirements from infrastructure` that would need to be relaxed. +.. list-table:: TEC exceptions to infrastructure profile features and requirements + :widths: 10 10 10 20 20 20 10 + :header-rows: 1 + * - Reference - Feature - Description @@ -724,7 +724,6 @@ need to be relaxed. 
- As specified in RM Chapter 05 - High performance - Exception for edge - Basic type - Exception for edge - High performance - * - infra.stg.cfg.003 - Storage with replication - @@ -732,7 +731,6 @@ need to be relaxed. - Y - N - Optional - * - infra.stg.cfg.004 - Storage with encryption - @@ -740,7 +738,6 @@ need to be relaxed. - Y - N - Optional - * - infra.hw.cpu.cfg.001 - Minimum number of CPU sockets - This determines the minimum number of CPU sockets within each host. @@ -748,7 +745,6 @@ need to be relaxed. - 2 - 1 - 1 - * - infra.hw.cpu.cfg.002 - Minimum Number of cores per CPU - This determines the minimum number of cores needed per CPU. @@ -756,7 +752,6 @@ need to be relaxed. - 20 - 1 - 1 - * - infra.hw.cpu.cfg.003 - NUMA alignment - NUMA alignment support and BIOS configured to enable NUMA. @@ -765,7 +760,7 @@ need to be relaxed. - N - Y (*) -**Table 8-4. TEC exceptions to infrastructure profile features and requirements** +**Table 8-8. TEC exceptions to infrastructure profile features and requirements** * This is immaterial if the number of CPU sockets (infra.hw.cpu.cfg.001) is 1. @@ -803,7 +798,7 @@ on the infrastructure. | nodes | | | | | | | | | | | | | | +-----------+-------+-------+-------+-------+-------+-------+--------+--------+--------+-------+-------+-------+-------+ -**Table 8-5. Characteristics of infrastructure nodes** +**Table 8-9. 
Characteristics of infrastructure nodes** Depending on the facility capabilities, deployments at the edge may be similar to one of the following: @@ -891,7 +886,7 @@ Comparison of deployment topologies and edge terms - Small Edge - Access Edge -**Table 8-6:** Comparison of Deployment Topologies +**Table 8-10:** Comparison of Deployment Topologies O-RAN alignment and interaction ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From 7c59dbc8ebefadd0ee86001971c45ac1b6edee39 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 23:25:48 +0100 Subject: [PATCH 29/32] fixed Table 8-9 --- doc/ref_model/chapters/chapter08.rst | 110 +++++++++++++++++++++------ 1 file changed, 87 insertions(+), 23 deletions(-) diff --git a/doc/ref_model/chapters/chapter08.rst b/doc/ref_model/chapters/chapter08.rst index 86482042..68445085 100644 --- a/doc/ref_model/chapters/chapter08.rst +++ b/doc/ref_model/chapters/chapter08.rst @@ -673,7 +673,7 @@ Some of the more salient characteristics can be seen in Table 8-7. - The system is likely to be complex to operate. - Example locations: curb side, near cellular radios. -**Table 8-7:** TEC deployment location characteristics and capabilities** +**Table 8-7:** TEC deployment location characteristics and capabilities Telco Edge Cloud: infrastructure characteristics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -760,7 +760,7 @@ need to be relaxed. - N - Y (*) -**Table 8-8. TEC exceptions to infrastructure profile features and requirements** +**Table 8-8:** TEC exceptions to infrastructure profile features and requirements * This is immaterial if the number of CPU sockets (infra.hw.cpu.cfg.001) is 1. @@ -778,27 +778,91 @@ on the infrastructure. The platform services are containerised to save resources, and benefit from intrinsic availability and autoscaling capabilities. 
-+-----------+--------------------------------------------------------+-------------------------+-----------------------+ -| | Platform services | Storage | Network services | -| +-------+-------+-------+-------+-------+-------+--------+--------+--------+-------+-------+-------+-------+ -| | Iden- | Image | Plac- | Comp- | Netw- | Mess- | DB | Ephem- | Persi- | Pers- | Mana- | Unde- | Over- | -| | tity | | ement | ute | orki- | age | Server | eral | stent | iste- | geme- | rlay | lay | -| | | | | | ng | Queue | | | Block | nt | nt | (Pro- | | -| | | | | | | | | | | Obje- | | vid- | | -| | | | | | | | | | | ct | | er) | | -+===========+=======+=======+=======+=======+=======+=======+========+========+========+=======+=======+=======+=======+ -| Control | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | | ✅ | ✅ | ✅ | -| nodes | | | | | | | | | | | | | | -+-----------+-------+-------+-------+-------+-------+-------+--------+--------+--------+-------+-------+-------+-------+ -| Workload | | | | ✅ | ✅ | | | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | -| nodes | | | | | | | | | | | | | | -| (compute) | | | | | | | | | | | | | | -+-----------+-------+-------+-------+-------+-------+-------+--------+--------+--------+-------+-------+-------+-------+ -| Storage | | | | | | | | | ✅ | ✅ | ✅ | ✅ | ✅ | -| nodes | | | | | | | | | | | | | | -+-----------+-------+-------+-------+-------+-------+-------+--------+--------+--------+-------+-------+-------+-------+ - -**Table 8-9. Characteristics of infrastructure nodes** +Platform services are: + +- Identity +- Image +- Placement +- Compute +- Networking +- Message Queue +- DB Server + +Storage services are: + +- Ephemeral +- Persistent Block +- Persistent Object + +Network services are: + +- Management +- Underlay (Provider) +- Overlay + + +.. 
list-table:: Characteristics of infrastructure nodes + :widths: 20 20 5 5 5 5 5 5 5 5 5 5 5 5 + :header-rows: 1 + + * - Node type + - Identity + - Image + - Placement + - Compute + - Networking + - Message Queue + - DB Server + - Ephemeral + - Persistent Block + - Persistent Object + - Management + - Underlay (Provider) + - Overlay + * - Control nodes + - ✅ + - ✅ + - ✅ + - ✅ + - ✅ + - ✅ + - ✅ + - + - ✅ + - + - ✅ + - ✅ + - ✅ + * - Workload nodes (Compute) + - + - + - + - ✅ + - ✅ + - + - + - ✅ + - ✅ + - ✅ + - ✅ + - ✅ + - ✅ + * - Storage nodes + - + - + - + - + - + - + - + - + - ✅ + - ✅ + - ✅ + - ✅ + - ✅ + +**Table 8-9:** Characteristics of infrastructure nodes Depending on the facility capabilities, deployments at the edge may be similar to one of the following: From b60347dbe6733452b69f95be155f3b76c800dd91 Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 23:34:55 +0100 Subject: [PATCH 30/32] fixed Table 9-2 --- doc/ref_model/chapters/chapter09.rst | 116 +++++++++++++++++---------- 1 file changed, 72 insertions(+), 44 deletions(-) diff --git a/doc/ref_model/chapters/chapter09.rst b/doc/ref_model/chapters/chapter09.rst index 38fd2947..9e13dab7 100644 --- a/doc/ref_model/chapters/chapter09.rst +++ b/doc/ref_model/chapters/chapter09.rst @@ -124,50 +124,78 @@ However, the key requirements for the infrastructure and for infrastructure mana reference points in the red box, where the configuration is **set**, and where it is **observed**. Table 9-2 lists the main components and capabilities required to manage the configuration and lifecycle of those components. 
-+---------------------------------+---------------+---------------------------------+-----------------------------+ -| Component | Set/Observe | Capability | Example | -+=================================+===============+=================================+=============================+ -| Cloud infrastructure management | Set | Target software/firmware | Software: v1.2.1 | -| software | | version | | -| | +---------------------------------+-----------------------------+ -| | | Desired configuration attribute | dhcp_lease_time: 86400 | -| | +---------------------------------+-----------------------------+ -| | | Desired component quantities | # hypervisor hosts: 10 | -| +---------------+---------------------------------+-----------------------------+ -| | Observe | Observed software/firmware | Software: v1.2.1 | -| | | version | | -| | +---------------------------------+-----------------------------+ -| | | Observed configuration | dhcp_lease_time: 86400 | -| | | attribute | | -| | +---------------------------------+-----------------------------+ -| | | Observed component quantities | # hypervisor hosts: 10 | -+---------------------------------+---------------+---------------------------------+-----------------------------+ -| Cloud infrastructure software | Set | Target software version | Hypervisor software: v3.4.1 | -| | +---------------------------------+-----------------------------+ -| | | Desired configuration attribute | management_int: eth0 | -| | +---------------------------------+-----------------------------+ -| | | Desired component quantities | # NICs for data: 6 | -| +---------------+---------------------------------+-----------------------------+ -| | Observe | Observed software/firmware | Hypervisor software: v3.4.1 | -| | | version | | -| | +---------------------------------+-----------------------------+ -| | | Observed configuration | management_int: eth0 | -| | | attribute | | -| | +---------------------------------+-----------------------------+ 
| | | Observed component quantities | # NICs for data: 6 |
+---------------------------------+---------------+---------------------------------+-----------------------------+
| Infrastructure hardware | Set | Target software/firmware | Storage controller |
| | | version | firmware: v10.3.4 |
| | +---------------------------------+-----------------------------+
| | | Desired configuration attribute | Virtual disk 1: RAID1 |
| | | | [HDD1,HDD2] |
| +---------------+---------------------------------+-----------------------------+
| | Observe | Observed software/firmware | Storage controller |
| | | version | firmware: v10.3.4 |
| | +---------------------------------+-----------------------------+
| | | Observed configuration | Virtual disk 1: RAID1 |
| | | attribute | [HDD1,HDD2] |
+---------------------------------+---------------+---------------------------------+-----------------------------+
+.. list-table:: Configuration and lifecycle management capabilities
+   :widths: 25 15 35 25
+   :header-rows: 1
+
+   * - Component
+     - Set/Observe
+     - Capability
+     - Example
+   * - Cloud infrastructure management software
+     - Set
+     - Target software/firmware version
+     - Software: v1.2.1
+   * -
+     -
+     - Desired configuration attribute
+     - dhcp_lease_time: 86400
+   * -
+     -
+     - Desired component quantities
+     - # hypervisor hosts: 10
+   * -
+     - Observe
+     - Observed software/firmware version
+     - Software: v1.2.1
+   * -
+     -
+     - Observed configuration attribute
+     - dhcp_lease_time: 86400
+   * -
+     -
+     - Observed component quantities
+     - # hypervisor hosts: 10
+   * - Cloud infrastructure software
+     - Set
+     - Target software version
+     - Hypervisor software: v3.4.1
+   * -
+     -
+     - Desired configuration attribute
+     - management_int: eth0
+   * -
+     -
+     - Desired component quantities
+     - # NICs for data: 6
+   * -
+     - Observe
+     - Observed software/firmware version
+     - Hypervisor software: v3.4.1
+   * -
+     -
+     - Observed configuration attribute
+     - management_int: eth0
+   * -
+     -
+     - Observed component quantities
+     - # NICs for 
data: 6 + * - Infrastructure hardware + - Set + - Target software/firmware version + - Storage controller firmware: v10.3.4 + * - + - + - Desired configuration attribute + - Virtual disk 1: RAID1 [HDD1,HDD2] + * - + - Observe + - Observed software/firmware version + - Storage controller firmware: v10.3.4 + * - + - + - Observed configuration attribute + - Virtual disk 1: RAID1 [HDD1,HDD2] **Table 9-2:** Configuration and lifecycle management capabilities From 6d70ad4c5f1c4b2e3a801cc022541e643bb05beb Mon Sep 17 00:00:00 2001 From: Petar Torre Date: Thu, 28 Nov 2024 23:45:56 +0100 Subject: [PATCH 31/32] fixed Table 7-3 --- doc/ref_model/chapters/chapter07.rst | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/doc/ref_model/chapters/chapter07.rst b/doc/ref_model/chapters/chapter07.rst index 923fb021..d353bfee 100644 --- a/doc/ref_model/chapters/chapter07.rst +++ b/doc/ref_model/chapters/chapter07.rst @@ -1068,6 +1068,10 @@ Consolidated security requirements System hardening ~~~~~~~~~~~~~~~~ +.. list-table:: Profile extensions + :widths: 20 50 30 + :header-rows: 1 + * - Ref - Requirement - Definition/Note From b175267800987c830d32e54dbed449b1724564b7 Mon Sep 17 00:00:00 2001 From: Pankaj Goyal <52107136+pgoyal01@users.noreply.github.com> Date: Thu, 16 Jan 2025 14:02:58 -0700 Subject: [PATCH 32/32] Update chapter01.rst --- doc/ref_model/chapters/chapter01.rst | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/doc/ref_model/chapters/chapter01.rst b/doc/ref_model/chapters/chapter01.rst index 832eeb40..1735de54 100644 --- a/doc/ref_model/chapters/chapter01.rst +++ b/doc/ref_model/chapters/chapter01.rst @@ -12,21 +12,22 @@ interfaces required by the workloads. 
 This document has been developed by the Li

 **Problem statement:** Based on community consultations, including Telco operators, technology suppliers, and software
 developers, it is understood that there are significant technical, operational, and business challenges to the
 development and deployment of Virtual Network Functions (VNF) and Cloud-Native Network Functions (CNF), due to the
-lack of a common cloud infrastructure platform. These include, but are not limited to, the following:
+lack of standard cloud infrastructure specifications, viz. cloud infrastructure platform characteristics and features.
+These include, but are not limited to, the following:

-- Higher development costs, due to the need to develop virtualised/containerised network applications on multiple custom
-  platforms for each operator.
+- Higher development costs, due to the need to develop virtualised/cloud-native network applications on operator-specific custom
+  platforms.
 - Increased complexities, due to the need to maintain multiple versions of applications to support each custom
   environment.
 - Lack of testing and validation commonalities, leading to inefficiencies and increased time to market. While the
   operators will still perform internal testing, the application developers utilising an industry standard verification
   program on a common cloud infrastructure would lead to efficiencies and faster time to market.
-- Slower adoption of cloud-native applications and architectures. A common Telco cloud may provide an easier path to
+- Slower adoption of cloud-native applications and architectures. A common Telco cloud specification may provide an easier path to
   methodologies that will drive faster cloud-native development.
 - Increased operational overheads, due to the need for operators to integrate diverse and sometimes conflicting cloud
   platform requirements.

-One of the main challenges holding back the more rapid and widespread adoption of virtualised/containerised network
+One of the main challenges holding back the more rapid and widespread adoption of virtualised/cloud-native network
 applications is when, while building or designing their virtualised services, specific infrastructure assumptions and
 requirements are implied, often with custom design parameters. This forces the operators to build complex integrations of
 various vendor-/function-specific silos which are incompatible with each other and might possibly have differing and
@@ -38,7 +39,7 @@ This document starts from the abstract and, as it progresses, becomes more detai
 process where you start from the core principles, progress to abstract concepts and models, and then finish with
 operational considerations, such as security and lifecycle management.

-- **Chapter 01 - Introduction**: This provides an overall scope of the Reference Model document, including the goals
+- **Chapter 01 - Introduction**: This chapter provides the scope of the Reference Model document, including the goals
   and objectives of the project.

   - **Audience**: This chapter is aimed at a general technical audience with an interest in this topic.
@@ -49,7 +50,7 @@ operational considerations, such as security and lifecycle management.

   - **Audience**: This chapter is aimed at architects and others with an interest in how the decisions were made.

-- **Chapter 03 - Modelling**: This chapter covers the high-level cloud infrastructure model itself.
+- **Chapter 03 - Modelling**: This chapter covers the high-level cloud infrastructure model.

   - **Audience**: This chapter is aimed at architects and others who want to gain a quick high-level understanding of
     the model.
@@ -61,7 +62,7 @@ operational considerations, such as security and lifecycle management.

   - **Audience**: This chapter is aimed at architects, developers, and others who need to deploy infrastructure or
     develop applications.

-- **Chapter 05 - Feature set and Requirements from Infrastructure**: This chapter goes into more detail on what
+- **Chapter 05 - Feature sets and Requirements for Infrastructure**: This chapter goes into more detail on what
   needs to be part of the cloud infrastructure. It describes the software and hardware capabilities, and the
   configurations recommended for the different types of cloud infrastructure profiles.
@@ -78,7 +79,7 @@ operational considerations, such as security and lifecycle management.
   when designing and implementing a cloud infrastructure environment. It does not cover details related to
   company-specific requirements to meet regulatory requirements.

-  - **Audience**: This chapter is aimed at security professional, architects, developers, and others who need to
+  - **Audience**: This chapter is aimed at security professionals, architects, developers, and others who need to
     understand the role of security in the cloud infrastructure environment.

 - **Chapter 08 - Hybrid Multicloud: Data Centre to Edge**: A generic Telco cloud is a hybrid multicloud, or a federated
@@ -117,7 +118,7 @@ This document specifies the following:
 - **Cloud infrastructure abstraction**: In context with how it interacts with the other components required to build a
   complete cloud system that supports workloads deployed in virtual machines (VMs) or containers. Network function
   workloads that are deployed on virtual machines and containers are referred to as virtual network functions (VNFs)
-  and containerised network functions (CNFs), respectively.
+  and cloud-native network functions (CNFs), respectively.

   **Note:** CNFs are now more commonly referred to as cloud-native network functions.
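The table fixes in this series all apply the same RST pattern: rows written as `* -` / `-` items only render as a table when they are nested (indented) under a `.. list-table::` directive with matching options. A minimal sketch of the pattern, with illustrative caption and values rather than content from the Reference Model:

```rst
.. list-table:: Example capabilities table
   :widths: 25 15 35 25
   :header-rows: 1

   * - Component
     - Set/Observe
     - Capability
     - Example
   * - Cloud infrastructure software
     - Set
     - Target software version
     - Hypervisor software: v3.4.1
   * -
     -
     - Desired configuration attribute
     - management_int: eth0
```

Note that `list-table` cannot merge cells, so the empty `* -` and `-` items above stand in for the row-spanning cells of the old grid tables: the component and Set/Observe columns are simply left blank on continuation rows.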