diff --git a/.changesets/feat_add_dns_resolution_strategy.md b/.changesets/feat_add_dns_resolution_strategy.md new file mode 100644 index 0000000000..cfaa9aaf74 --- /dev/null +++ b/.changesets/feat_add_dns_resolution_strategy.md @@ -0,0 +1,28 @@ +### Add ability to configure DNS resolution strategy ([PR #6109](https://github.com/apollographql/router/pull/6109)) + +The router now supports choosing a DNS resolution strategy for coprocessor and subgraph URLs. +The new option is called `dns_resolution_strategy` and supports the following values: +* `ipv4_only` - Only query for `A` (IPv4) records. +* `ipv6_only` - Only query for `AAAA` (IPv6) records. +* `ipv4_and_ipv6` - Query for both `A` (IPv4) and `AAAA` (IPv6) records in parallel. +* `ipv6_then_ipv4` - Query for `AAAA` (IPv6) records first; if that fails, query for `A` (IPv4) records. +* `ipv4_then_ipv6` (default) - Query for `A` (IPv4) records first; if that fails, query for `AAAA` (IPv6) records. + +To change the DNS resolution strategy applied to the subgraph's URL: +```yaml title="router.yaml" +traffic_shaping: + all: + dns_resolution_strategy: ipv4_then_ipv6 + +``` + +You can also change the DNS resolution strategy applied to the coprocessor's URL: +```yaml title="router.yaml" +coprocessor: + url: http://coprocessor.example.com:8081 + client: + dns_resolution_strategy: ipv4_then_ipv6 + +``` + +By [@IvanGoncharov](https://github.com/IvanGoncharov) in https://github.com/apollographql/router/pull/6109 diff --git a/.changesets/feat_geal_subgraph_request_id.md b/.changesets/feat_geal_subgraph_request_id.md new file mode 100644 index 0000000000..b5e0934132 --- /dev/null +++ b/.changesets/feat_geal_subgraph_request_id.md @@ -0,0 +1,5 @@ +### Add a subgraph request id ([PR #5858](https://github.com/apollographql/router/pull/5858)) + +This is a unique string identifying a subgraph request and response, allowing plugins and coprocessors to keep some state per subgraph request by 
matching on this id. It is available in coprocessors as `subgraphRequestId` and Rhai scripts as `request.subgraph.id` and `response.subgraph.id`. + +By [@Geal](https://github.com/Geal) in https://github.com/apollographql/router/pull/5858 \ No newline at end of file diff --git a/.changesets/fix_garypen_log_less_error_for_subgraph_batching.md b/.changesets/fix_garypen_log_less_error_for_subgraph_batching.md new file mode 100644 index 0000000000..f81796d01a --- /dev/null +++ b/.changesets/fix_garypen_log_less_error_for_subgraph_batching.md @@ -0,0 +1,7 @@ +### If subgraph batching, do not log response data for notification failure ([PR #6150](https://github.com/apollographql/router/pull/6150)) + +A subgraph response may contain a large amount of data and/or PII. + +For a subgraph batching operation, we should not log the entire subgraph response when failing to notify a waiting batch participant. + +By [@garypen](https://github.com/garypen) in https://github.com/apollographql/router/pull/6150 \ No newline at end of file diff --git a/.changesets/fix_tninesling_remove_demand_control_warnings.md b/.changesets/fix_tninesling_remove_demand_control_warnings.md new file mode 100644 index 0000000000..195ad7d04c --- /dev/null +++ b/.changesets/fix_tninesling_remove_demand_control_warnings.md @@ -0,0 +1,5 @@ +### Remove noisy demand control logs ([PR #6192](https://github.com/apollographql/router/pull/6192)) + +Demand control no longer logs warnings when a subgraph response is missing a requested field. 
+ +By [@tninesling](https://github.com/tninesling) in https://github.com/apollographql/router/pull/6192 diff --git a/.changesets/maint_feature_rhaitelemetry.md b/.changesets/maint_feature_rhaitelemetry.md deleted file mode 100644 index b8d780b491..0000000000 --- a/.changesets/maint_feature_rhaitelemetry.md +++ /dev/null @@ -1,5 +0,0 @@ -### Added telemetry for Rhai usage ([PR #6027](https://github.com/apollographql/router/pull/6027)) - -Added telemetry for Rhai usage - -By [@andrewmcgivery](https://github.com/andrewmcgivery) in https://github.com/apollographql/router/pull/6027 diff --git a/.github/workflows/docs-publish.yml b/.github/workflows/docs-publish.yml deleted file mode 100644 index 7da79e0f58..0000000000 --- a/.github/workflows/docs-publish.yml +++ /dev/null @@ -1,15 +0,0 @@ -name: Deploy docs to production - -on: - push: - branches: - - main - paths: - - docs/** - -jobs: - publish: - uses: apollographql/docs/.github/workflows/publish.yml@main - secrets: - NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }} - NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }} diff --git a/.github/workflows/github_projects_tagger.yml b/.github/workflows/github_projects_tagger.yml deleted file mode 100644 index 17bed2cf9b..0000000000 --- a/.github/workflows/github_projects_tagger.yml +++ /dev/null @@ -1,23 +0,0 @@ -name: GitHub Projects Tagger - -on: - issues: - types: - - opened - pull_request: - types: - - opened - - reopened -jobs: - tag: - runs-on: ubuntu-latest - steps: - - uses: abernix/github-issue-pull-api-hook@v2.0.1 - with: - # These are both set in Org level secrets and work on a safe-listed - # set of repositories that affect the Polaris team. This allows them to - # be updated in a single place. If you want a NEW repo to work, you'll - # need an org admin to add it as a repo to (both) the existing secrets. 
- # https://github.com/organizations/apollographql/settings/secrets/actions - api_url: ${{ secrets.POLARIS_TAGGER_API_URL }} - bearer_token: ${{ secrets.POLARIS_TAGGER_BEARER_TOKEN }} diff --git a/CHANGELOG.md b/CHANGELOG.md index d25a47da5f..beedac4cb6 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -45,6 +45,141 @@ Default values of some GraphOS reporting metrics have been changed from v1.x to * `telemetry.apollo.signature_normalization_algorithm` now defaults to `enhanced`. (In v1.x the default is `legacy`.) * `telemetry.apollo.metrics_reference_mode` now defaults to `extended`. (In v1.x the default is `standard`.) +# [1.57.0] - 2024-10-22 + +> [!IMPORTANT] +> If you have enabled [Distributed query plan caching](https://www.apollographql.com/docs/router/configuration/distributed-caching/#distributed-query-plan-caching), updates to the query planner in this release will result in query plan caches being re-generated rather than re-used. On account of this, you should anticipate additional cache regeneration cost when updating between these versions while the new query plans come into service. + +## 🚀 Features + +### Remove legacy schema introspection ([PR #6139](https://github.com/apollographql/router/pull/6139)) + +Schema introspection in the router now runs natively without JavaScript. We have high confidence that the new native implementation returns responses that match the previous JavaScript implementation, based on differential testing: fuzzing arbitrary queries against a large schema, and testing a corpus of customer schemas against a comprehensive query. + +Changes to the router's YAML configuration: + +* The `experimental_introspection_mode` key has been removed, with the `new` mode as the only behavior in this release. 
+* The `supergraph.query_planning.legacy_introspection_caching` key is removed, with the behavior in this release now similar to what was `false`: introspection responses are not part of the query plan cache but instead in a separate, small in-memory-only cache. + +When using the above deprecated configuration options, the router's automatic configuration migration will ensure that existing configurations continue to work until the next major version of the router. To simplify major upgrades, we recommend reviewing incremental updates to your YAML configuration by comparing the output of `./router config upgrade --config path/to/config.yaml` with your existing configuration. + +By [@SimonSapin](https://github.com/SimonSapin) in https://github.com/apollographql/router/pull/6139 + +### Support new `request_context` selector for telemetry ([PR #6160](https://github.com/apollographql/router/pull/6160)) + +The router supports a new `request_context` selector for telemetry that enables access to the supergraph schema ID. + +You can configure the context to access the supergraph schema ID at the router service level: + +```yaml +telemetry: + instrumentation: + events: + router: + my.request_event: + message: "my request event message" + level: info + on: request + attributes: + schema.id: + request_context: "apollo::supergraph_schema_id" # The key containing the supergraph schema id +``` + +You can use the selector in any service at any stage. While this example applies to `events` attributes, the selector can also be used on spans and instruments. + +By [@bnjjj](https://github.com/bnjjj) in https://github.com/apollographql/router/pull/6160 + +### Support reading and setting `port` on request URIs using Rhai ([Issue #5437](https://github.com/apollographql/router/issues/5437)) + +Custom Rhai scripts in the router now support the `request.uri.port` and `request.subgraph.uri.port` functions for reading and setting URI ports. 
These functions enable you to update the full URI for subgraph fetches. For example: + +```rust +fn subgraph_service(service, subgraph){ + service.map_request(|request|{ + log_info(`${request.subgraph.uri}`); + if request.subgraph.uri.port == {} { + log_info("Port is not explicitly set"); + } + request.subgraph.uri.host = "api.apollographql.com"; + request.subgraph.uri.path = "/api/graphql"; + request.subgraph.uri.port = 1234; + log_info(`${request.subgraph.uri}`); + }); +} +``` + +By [@lleadbet](https://github.com/lleadbet) in https://github.com/apollographql/router/pull/5439 + +## 🐛 Fixes + +### Fix various edge cases for `__typename` field ([PR #6009](https://github.com/apollographql/router/pull/6009)) + +The router now correctly handles the `__typename` field used on operation root types, even when the subgraph's root type has a name that differs from the supergraph's root type. + +For example, given a query like this: + +```graphql +{ + ...RootFragment +} + +fragment RootFragment on Query { + __typename + me { + name + } +} +``` + +Even if the subgraph's root type returns a `__typename` that differs from `Query`, the router will still use `Query` as the value of the `__typename` field. + +This change also includes fixes for other edge cases related to the handling of `__typename` fields. For a detailed technical description of the edge cases that were fixed, please see [this description](https://github.com/apollographql/router/pull/6009#issue-2529717207). + +By [@IvanGoncharov](https://github.com/IvanGoncharov) in https://github.com/apollographql/router/pull/6009 + +### Support `uri` and `method` properties on router "request" objects in Rhai ([PR #6147](https://github.com/apollographql/router/pull/6147)) + +The router now supports accessing `request.uri` and `request.method` properties from custom Rhai scripts. 
Previously, when trying to access `request.uri` and `request.method` on a router request in Rhai, the router would return error messages stating the properties were undefined. + +An example Rhai script using these properties: + +```rhai +fn router_service(service) { + let router_request_callback = Fn("router_request_callback"); + service.map_request(router_request_callback); +} + +fn router_request_callback (request) { + log_info(`Router Request... Host: ${request.uri.host}, Path: ${request.uri.path}`); +} +``` + +By [@andrewmcgivery](https://github.com/andrewmcgivery) in https://github.com/apollographql/router/pull/6114 + +### Cost calculation for subgraph requests with named fragments ([PR #6162](https://github.com/apollographql/router/issues/6162)) + +In some cases where subgraph GraphQL operations contain named fragments and abstract types, demand control used the wrong type for cost calculation, and could reject valid operations. Now, the correct type is used. + +This fixes errors of the form: + +``` +Attempted to look up a field on type MyInterface, but the field does not exist +``` + +By [@goto-bus-stop](https://github.com/goto-bus-stop) in https://github.com/apollographql/router/pull/6162 + +### Federation v2.9.3 ([PR #6161](https://github.com/apollographql/router/pull/6161)) + +This release updates to Federation v2.9.3, with query planner fixes: + +- Fixes a query planning bug where operation variables for a subgraph query wouldn't match what's used in that query. +- Fixes a query planning bug where directives applied to `__typename` may be omitted in the subgraph query. +- Fixes a query planning inefficiency where some redundant subgraph queries were not removed. +- Fixes a query planning inefficiency where some redundant inline fragments in `@key`/`@requires` selection sets were not optimized away. +- Fixes a query planning inefficiency where unnecessary subgraph jumps were being added when using `@context`/`@fromContext`. 
+ +By [@sachindshinde](https://github.com/sachindshinde) in https://github.com/apollographql/router/pull/6161 + # [1.56.0] - 2024-10-01 > [!IMPORTANT] @@ -131,6 +266,41 @@ By [@goto-bus-stop](https://github.com/goto-bus-stop) in https://github.com/apol ## 🚀 Features +### Entity cache invalidation preview ([PR #5889](https://github.com/apollographql/router/pull/5889)) + +> ⚠️ This is a preview for an [Enterprise feature](https://www.apollographql.com/blog/platform/evaluating-apollo-router-understanding-free-and-open-vs-commercial-features/) of the Apollo Router. It requires an organization with a [GraphOS Enterprise plan](https://www.apollographql.com/pricing/). If your organization doesn't currently have an Enterprise plan, you can test out this functionality with a [free Enterprise trial](https://studio.apollographql.com/signup?type=enterprise-trial). +> +> As a preview feature, it's subject to our [Preview launch stage](https://www.apollographql.com/docs/resources/product-launch-stages/#preview) expectations and configuration and performance may change in future releases. + +As a follow-up to the Entity cache preview that was published in the router 1.46.0 release, we're introducing a new feature that allows you to invalidate cached entries. 
+ +This introduces two ways to invalidate cached entries: +- through an HTTP endpoint exposed by the router +- via GraphQL response `extensions` returned from subgraph requests + +The invalidation endpoint can be defined in the router's configuration, as follows: + +```yaml +preview_entity_cache: + enabled: true + + # global invalidation configuration + invalidation: + # address of the invalidation endpoint + # this should only be exposed to internal networks + listen: "127.0.0.1:3000" + path: "/invalidation" +``` + +Invalidation requests can target cached entries by: +- subgraph name +- subgraph name and type name +- subgraph name, type name and entity key + +You can learn more about invalidation in the [documentation](https://www.apollographql.com/docs/router/configuration/entity-caching#entity-cache-invalidation). + +By [@bnjjj](https://github.com/bnjjj),[@bryncooke](https://github.com/bryncooke), [@garypen](https://github.com/garypen), [@Geal](https://github.com/Geal), [@IvanGoncharov](https://github.com/IvanGoncharov) in https://github.com/apollographql/router/pull/5889 + ### Support aliasing standard attributes for telemetry ([Issue #5930](https://github.com/apollographql/router/issues/5930)) The router now supports creating aliases for standard attributes for telemetry. 
diff --git a/Cargo.lock b/Cargo.lock index 4e48110849..6c75a59919 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -224,9 +224,9 @@ dependencies = [ [[package]] name = "apollo-parser" -version = "0.8.2" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9692c1bfa7e0628e5c46bd0538571dd469138a0f062290fd4bf90e20e46f05da" +checksum = "b64257011a999f2e22275cf7a118f651e58dc9170e11b775d435de768fad0387" dependencies = [ "memchr", "rowan", @@ -282,6 +282,7 @@ dependencies = [ "graphql_client", "heck 0.5.0", "hex", + "hickory-resolver", "hmac", "http 0.2.12", "http-body 0.4.6", @@ -391,7 +392,6 @@ dependencies = [ "tracing-serde", "tracing-subscriber", "tracing-test", - "trust-dns-resolver", "uname", "url", "urlencoding", @@ -3176,6 +3176,51 @@ dependencies = [ "serde", ] +[[package]] +name = "hickory-proto" +version = "0.24.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "07698b8420e2f0d6447a436ba999ec85d8fbf2a398bbd737b82cac4a2e96e512" +dependencies = [ + "async-trait", + "cfg-if", + "data-encoding", + "enum-as-inner", + "futures-channel", + "futures-io", + "futures-util", + "idna 0.4.0", + "ipnet", + "once_cell", + "rand 0.8.5", + "thiserror", + "tinyvec", + "tokio", + "tracing", + "url", +] + +[[package]] +name = "hickory-resolver" +version = "0.24.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "28757f23aa75c98f254cf0405e6d8c25b831b32921b050a66692427679b1f243" +dependencies = [ + "cfg-if", + "futures-util", + "hickory-proto", + "ipconfig", + "lru-cache", + "once_cell", + "parking_lot", + "rand 0.8.5", + "resolv-conf", + "smallvec", + "thiserror", + "tokio", + "tracing", +] + [[package]] name = "hmac" version = "0.12.1" @@ -5545,9 +5590,9 @@ dependencies = [ [[package]] name = "router-bridge" -version = "0.6.3+v2.9.2" +version = "0.6.4+v2.9.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"2f183e217b4010e7d37d581b7919ca5e0136a46b6d6b1ff297c52e702bce1089" +checksum = "0bcc6f2aa0c619a4fb74ce271873a500f5640c257ca2e7aa8ea6be6226262855" dependencies = [ "anyhow", "async-channel 1.9.0", @@ -7076,52 +7121,6 @@ dependencies = [ "stable_deref_trait", ] -[[package]] -name = "trust-dns-proto" -version = "0.23.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3119112651c157f4488931a01e586aa459736e9d6046d3bd9105ffb69352d374" -dependencies = [ - "async-trait", - "cfg-if", - "data-encoding", - "enum-as-inner", - "futures-channel", - "futures-io", - "futures-util", - "idna 0.4.0", - "ipnet", - "once_cell", - "rand 0.8.5", - "smallvec", - "thiserror", - "tinyvec", - "tokio", - "tracing", - "url", -] - -[[package]] -name = "trust-dns-resolver" -version = "0.23.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "10a3e6c3aff1718b3c73e395d1f35202ba2ffa847c6a62eea0db8fb4cfe30be6" -dependencies = [ - "cfg-if", - "futures-util", - "ipconfig", - "lru-cache", - "once_cell", - "parking_lot", - "rand 0.8.5", - "resolv-conf", - "smallvec", - "thiserror", - "tokio", - "tracing", - "trust-dns-proto", -] - [[package]] name = "try-lock" version = "0.2.5" diff --git a/Cargo.toml b/Cargo.toml index cbfe763b1b..40062ddc7a 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -50,7 +50,7 @@ debug = 1 # https://doc.rust-lang.org/cargo/reference/workspaces.html#the-dependencies-table [workspace.dependencies] apollo-compiler = "=1.0.0-beta.24" -apollo-parser = "0.8.0" +apollo-parser = "0.8.3" apollo-smith = "0.14.0" async-trait = "0.1.77" hex = { version = "0.4.3", features = ["serde"] } diff --git a/apollo-federation/Cargo.toml b/apollo-federation/Cargo.toml index 93f06587b4..74edae2bce 100644 --- a/apollo-federation/Cargo.toml +++ b/apollo-federation/Cargo.toml @@ -37,7 +37,7 @@ strum = "0.26.0" strum_macros = "0.26.0" thiserror = "1.0" url = "2" -either = "1.12" +either = "1.13" tracing = "0.1.40" ron = { version = 
"0.8.1", optional = true } nom_locate = "4.2.0" diff --git a/apollo-federation/src/compat.rs b/apollo-federation/src/compat.rs index bd571f3f5c..d3d8caa537 100644 --- a/apollo-federation/src/compat.rs +++ b/apollo-federation/src/compat.rs @@ -207,6 +207,7 @@ fn coerce_value( // Custom scalars accept any value, even objects and lists. (Value::Object(_), Some(ExtendedType::Scalar(scalar))) if !scalar.is_built_in() => {} (Value::List(_), Some(ExtendedType::Scalar(scalar))) if !scalar.is_built_in() => {} + (Value::Enum(_), Some(ExtendedType::Scalar(scalar))) if !scalar.is_built_in() => {} // Enums must match the type. (Value::Enum(value), Some(ExtendedType::Enum(enum_))) if enum_.values.contains_key(value) => {} @@ -364,6 +365,11 @@ pub(crate) fn coerce_executable_values(schema: &Valid, document: &mut Ex for operation in document.operations.named.values_mut() { coerce_operation_values(schema, operation); } + for fragment in document.fragments.values_mut() { + let fragment = fragment.make_mut(); + coerce_directive_application_values(schema, &mut fragment.directives); + coerce_selection_set_values(schema, &mut fragment.selection_set); + } } /// Applies default value coercion and removes non-semantic directives so that @@ -415,4 +421,72 @@ mod tests { } "#); } + + #[test] + fn coerces_enum_values() { + let schema = Schema::parse_and_validate( + r#" + scalar CustomScalar + type Query { + test( + string: String!, + strings: [String!]!, + custom: CustomScalar!, + customList: [CustomScalar!]!, + ): Int + } + "#, + "schema.graphql", + ) + .unwrap(); + + // Enum literals are only coerced into lists if the item type is a custom scalar type. 
+ insta::assert_snapshot!(parse_and_coerce(&schema, r#" + { + test(string: enumVal1, strings: enumVal2, custom: enumVal1, customList: enumVal2) + } + "#), @r###" + { + test(string: enumVal1, strings: enumVal2, custom: enumVal1, customList: [enumVal2]) + } + "###); + } + + #[test] + fn coerces_in_fragment_definitions() { + let schema = Schema::parse_and_validate( + r#" + type T { + get(bools: [Boolean!]!): Int + } + type Query { + test: T + } + "#, + "schema.graphql", + ) + .unwrap(); + + insta::assert_snapshot!(parse_and_coerce(&schema, r#" + { + test { + ...f + } + } + + fragment f on T { + get(bools: true) + } + "#), @r###" + { + test { + ...f + } + } + + fragment f on T { + get(bools: [true]) + } + "###); + } } diff --git a/apollo-federation/src/error/mod.rs b/apollo-federation/src/error/mod.rs index d56c9783a6..97b7d2757d 100644 --- a/apollo-federation/src/error/mod.rs +++ b/apollo-federation/src/error/mod.rs @@ -13,6 +13,37 @@ use lazy_static::lazy_static; use crate::subgraph::spec::FederationSpecError; +/// Break out of the current function, returning an internal error. +#[macro_export] +macro_rules! internal_error { + ( $( $arg:tt )+ ) => { + return Err($crate::error::FederationError::internal(format!( $( $arg )+ )).into()); + } +} + +/// A safe assertion: in debug mode, it panics on failure, and in production, it returns an +/// internal error. +/// +/// Treat this as an assertion. It must only be used for conditions that *should never happen* +/// in normal operation. +#[macro_export] +macro_rules! 
ensure { + ( $expr:expr, $( $arg:tt )+ ) => { + #[cfg(debug_assertions)] + { + if false { + return Err($crate::error::FederationError::internal("ensure!() must be used in a function that returns a Result").into()); + } + assert!($expr, $( $arg )+); + } + + #[cfg(not(debug_assertions))] + if !$expr { + $crate::internal_error!( $( $arg )+ ); + } + } +} + // What we really needed here was the string representations in enum form, this isn't meant to // replace AST components. #[derive(Clone, Debug, strum_macros::Display)] diff --git a/apollo-federation/src/operation/contains.rs b/apollo-federation/src/operation/contains.rs index 6cc257e7d6..b734dd304d 100644 --- a/apollo-federation/src/operation/contains.rs +++ b/apollo-federation/src/operation/contains.rs @@ -14,8 +14,9 @@ pub(super) fn is_deferred_selection(directives: &executable::DirectiveList) -> b /// Options for the `.containment()` family of selection functions. #[derive(Debug, Clone, Copy)] pub(crate) struct ContainmentOptions { - /// If the right-hand side has a __typename selection but the left-hand side does not, - /// still consider the left-hand side to contain the right-hand side. + /// During query planning, we may add `__typename` selections to sets that did not have it + /// initially. If the right-hand side has a `__typename` selection but the left-hand side + /// does not, this option still considers the left-hand side to contain the right-hand side. pub(crate) ignore_missing_typename: bool, } diff --git a/apollo-federation/src/operation/directive_list.rs b/apollo-federation/src/operation/directive_list.rs index df03995b53..ad716dd1b4 100644 --- a/apollo-federation/src/operation/directive_list.rs +++ b/apollo-federation/src/operation/directive_list.rs @@ -288,13 +288,6 @@ impl DirectiveList { .iter() } - /// Iterate the directives in a consistent sort order. 
- pub(crate) fn iter_sorted(&self) -> DirectiveIterSorted<'_> { - self.inner - .as_ref() - .map_or_else(DirectiveIterSorted::empty, |inner| inner.iter_sorted()) - } - /// Remove one directive application by name. /// /// To remove a repeatable directive, you may need to call this multiple times. @@ -336,7 +329,7 @@ impl DirectiveList { } /// Iterate over a [`DirectiveList`] in a consistent sort order. -pub(crate) struct DirectiveIterSorted<'a> { +struct DirectiveIterSorted<'a> { directives: &'a [Node], inner: std::slice::Iter<'a, usize>, } @@ -354,15 +347,6 @@ impl ExactSizeIterator for DirectiveIterSorted<'_> { } } -impl DirectiveIterSorted<'_> { - fn empty() -> Self { - Self { - directives: &[], - inner: [].iter(), - } - } -} - #[cfg(test)] mod tests { use std::collections::HashSet; diff --git a/apollo-federation/src/operation/merging.rs b/apollo-federation/src/operation/merging.rs index 4c2b31cbd3..c56ff5e66e 100644 --- a/apollo-federation/src/operation/merging.rs +++ b/apollo-federation/src/operation/merging.rs @@ -15,7 +15,9 @@ use super::NamedFragments; use super::Selection; use super::SelectionSet; use super::SelectionValue; +use crate::ensure; use crate::error::FederationError; +use crate::internal_error; impl<'a> FieldSelectionValue<'a> { /// Merges the given field selections into this one. 
@@ -36,31 +38,29 @@ impl<'a> FieldSelectionValue<'a> { let mut selection_sets = vec![]; for other in others { let other_field = &other.field; - if other_field.schema != self_field.schema { - return Err(FederationError::internal( - "Cannot merge field selections from different schemas", - )); - } - if other_field.field_position != self_field.field_position { - return Err(FederationError::internal(format!( - "Cannot merge field selection for field \"{}\" into a field selection for field \"{}\"", - other_field.field_position, - self_field.field_position, - ))); - } + ensure!( + other_field.schema == self_field.schema, + "Cannot merge field selections from different schemas", + ); + ensure!( + other_field.field_position == self_field.field_position, + "Cannot merge field selection for field \"{}\" into a field selection for field \"{}\"", + other_field.field_position, + self_field.field_position, + ); if self.get().selection_set.is_some() { let Some(other_selection_set) = &other.selection_set else { - return Err(FederationError::internal(format!( + internal_error!( "Field \"{}\" has composite type but not a selection set", other_field.field_position, - ))); + ); }; selection_sets.push(other_selection_set); } else if other.selection_set.is_some() { - return Err(FederationError::internal(format!( + internal_error!( "Field \"{}\" has non-composite type but also has a selection set", other_field.field_position, - ))); + ); } } if let Some(self_selection_set) = self.get_selection_set_mut() { @@ -87,22 +87,16 @@ impl<'a> InlineFragmentSelectionValue<'a> { let mut selection_sets = vec![]; for other in others { let other_inline_fragment = &other.inline_fragment; - if other_inline_fragment.schema != self_inline_fragment.schema { - return Err(FederationError::internal( - "Cannot merge inline fragment from different schemas", - )); - } - if other_inline_fragment.parent_type_position - != self_inline_fragment.parent_type_position - { - return Err(FederationError::internal( - 
format!( - "Cannot merge inline fragment of parent type \"{}\" into an inline fragment of parent type \"{}\"", - other_inline_fragment.parent_type_position, - self_inline_fragment.parent_type_position, - ), - )); - } + ensure!( + other_inline_fragment.schema == self_inline_fragment.schema, + "Cannot merge inline fragment from different schemas", + ); + ensure!( + other_inline_fragment.parent_type_position == self_inline_fragment.parent_type_position, + "Cannot merge inline fragment of parent type \"{}\" into an inline fragment of parent type \"{}\"", + other_inline_fragment.parent_type_position, + self_inline_fragment.parent_type_position, + ); selection_sets.push(&other.selection_set); } self.get_selection_set_mut() @@ -127,11 +121,10 @@ impl<'a> FragmentSpreadSelectionValue<'a> { let self_fragment_spread = &self.get().spread; for other in others { let other_fragment_spread = &other.spread; - if other_fragment_spread.schema != self_fragment_spread.schema { - return Err(FederationError::internal( - "Cannot merge fragment spread from different schemas", - )); - } + ensure!( + other_fragment_spread.schema == self_fragment_spread.schema, + "Cannot merge fragment spread from different schemas", + ); // Nothing to do since the fragment spread is already part of the selection set. // Fragment spreads are uniquely identified by fragment name and applied directives. 
// Since there is already an entry for the same fragment spread, there is no point @@ -157,20 +150,16 @@ impl SelectionSet { ) -> Result<(), FederationError> { let mut selections_to_merge = vec![]; for other in others { - if other.schema != self.schema { - return Err(FederationError::internal( - "Cannot merge selection sets from different schemas", - )); - } - if other.type_position != self.type_position { - return Err(FederationError::internal( - format!( - "Cannot merge selection set for type \"{}\" into a selection set for type \"{}\"", - other.type_position, - self.type_position, - ), - )); - } + ensure!( + other.schema == self.schema, + "Cannot merge selection sets from different schemas", + ); + ensure!( + other.type_position == self.type_position, + "Cannot merge selection set for type \"{}\" into a selection set for type \"{}\"", + other.type_position, + self.type_position, + ); selections_to_merge.extend(other.selections.values()); } self.merge_selections_into(selections_to_merge.into_iter()) @@ -198,12 +187,10 @@ impl SelectionSet { selection_map::Entry::Occupied(existing) => match existing.get() { Selection::Field(self_field_selection) => { let Selection::Field(other_field_selection) = other_selection else { - return Err(FederationError::internal( - format!( - "Field selection key for field \"{}\" references non-field selection", - self_field_selection.field.field_position, - ), - )); + internal_error!( + "Field selection key for field \"{}\" references non-field selection", + self_field_selection.field.field_position, + ); }; fields .entry(other_key) @@ -214,12 +201,10 @@ impl SelectionSet { let Selection::FragmentSpread(other_fragment_spread_selection) = other_selection else { - return Err(FederationError::internal( - format!( - "Fragment spread selection key for fragment \"{}\" references non-field selection", - self_fragment_spread_selection.spread.fragment_name, - ), - )); + internal_error!( + "Fragment spread selection key for fragment \"{}\" 
references non-field selection", + self_fragment_spread_selection.spread.fragment_name, + ); }; fragment_spreads .entry(other_key) @@ -230,17 +215,15 @@ impl SelectionSet { let Selection::InlineFragment(other_inline_fragment_selection) = other_selection else { - return Err(FederationError::internal( - format!( - "Inline fragment selection key under parent type \"{}\" {}references non-field selection", - self_inline_fragment_selection.inline_fragment.parent_type_position, - self_inline_fragment_selection.inline_fragment.type_condition_position.clone() - .map_or_else( - String::new, - |cond| format!("(type condition: {}) ", cond), - ), - ), - )); + internal_error!( + "Inline fragment selection key under parent type \"{}\" {}references non-field selection", + self_inline_fragment_selection.inline_fragment.parent_type_position, + self_inline_fragment_selection.inline_fragment.type_condition_position.clone() + .map_or_else( + String::new, + |cond| format!("(type condition: {}) ", cond), + ), + ); }; inline_fragments .entry(other_key) @@ -306,9 +289,8 @@ impl SelectionSet { &mut self, selection: &Selection, ) -> Result<(), FederationError> { - debug_assert_eq!( - &self.schema, - selection.schema(), + ensure!( + self.schema == *selection.schema(), "In order to add selection it needs to point to the same schema" ); self.merge_selections_into(std::iter::once(selection)) @@ -328,12 +310,12 @@ impl SelectionSet { &mut self, selection_set: &SelectionSet, ) -> Result<(), FederationError> { - debug_assert_eq!( - self.schema, selection_set.schema, + ensure!( + self.schema == selection_set.schema, "In order to add selection set it needs to point to the same schema." 
); - debug_assert_eq!( - self.type_position, selection_set.type_position, + ensure!( + self.type_position == selection_set.type_position, "In order to add selection set it needs to point to the same type position" ); self.merge_into(std::iter::once(selection_set)) @@ -386,9 +368,7 @@ pub(crate) fn merge_selection_sets( mut selection_sets: Vec, ) -> Result { let Some((first, remainder)) = selection_sets.split_first_mut() else { - return Err(FederationError::internal( - "merge_selection_sets(): must have at least one selection set", - )); + internal_error!("merge_selection_sets(): must have at least one selection set"); }; first.merge_into(remainder.iter())?; diff --git a/apollo-federation/src/operation/mod.rs b/apollo-federation/src/operation/mod.rs index f9ae87afa4..2eaacfd4e4 100644 --- a/apollo-federation/src/operation/mod.rs +++ b/apollo-federation/src/operation/mod.rs @@ -822,6 +822,9 @@ impl Selection { } } + /// # Errors + /// Returns an error if the selection contains a fragment spread, or if any of the + /// @skip/@include directives are invalid (per GraphQL validation rules). pub(crate) fn conditions(&self) -> Result { let self_conditions = Conditions::from_directives(self.directives())?; if let Conditions::Boolean(false) = self_conditions { @@ -992,6 +995,7 @@ mod field_selection { use apollo_compiler::Name; use serde::Serialize; + use super::TYPENAME_FIELD; use crate::error::FederationError; use crate::operation::ArgumentList; use crate::operation::DirectiveList; @@ -1158,6 +1162,13 @@ mod field_selection { &self.data } + // Is this a plain simple __typename without any directive or alias? 
+ pub(crate) fn is_plain_typename_field(&self) -> bool { + *self.data.field_position.field_name() == TYPENAME_FIELD + && self.data.directives.is_empty() + && self.data.alias.is_none() + } + pub(crate) fn sibling_typename(&self) -> Option<&SiblingTypename> { self.data.sibling_typename.as_ref() } @@ -1735,38 +1746,63 @@ impl SelectionSet { } } - // TODO: Ideally, this method returns a proper, recursive iterator. As is, there is a lot of - // overhead due to indirection, both from over allocation and from v-table lookups. - pub(crate) fn split_top_level_fields(self) -> Box> { - let parent_type = self.type_position.clone(); - let selections: IndexMap = (**self.selections).clone(); - Box::new(selections.into_values().flat_map(move |sel| { - let digest: Box> = if sel.is_field() { - Box::new(std::iter::once(SelectionSet::from_selection( - parent_type.clone(), - sel.clone(), - ))) - } else { - let Some(ele) = sel.element().ok() else { - let digest: Box> = - Box::new(std::iter::empty()); - return digest; - }; - Box::new( - sel.selection_set() - .cloned() - .into_iter() - .flat_map(SelectionSet::split_top_level_fields) - .filter_map(move |set| { - let parent_type = ele.parent_type_position(); - Selection::from_element(ele.clone(), Some(set)) - .ok() - .map(|sel| SelectionSet::from_selection(parent_type, sel)) - }), - ) - }; - digest - })) + pub(crate) fn split_top_level_fields(self) -> impl Iterator { + // NOTE: Ideally, we could just use a generator but, instead, we have to manually implement + // one :( + struct TopLevelFieldSplitter { + parent_type: CompositeTypeDefinitionPosition, + starting_set: ::IntoIter, + stack: Vec<(OpPathElement, Self)>, + } + + impl TopLevelFieldSplitter { + fn new(selection_set: SelectionSet) -> Self { + Self { + parent_type: selection_set.type_position, + starting_set: Arc::unwrap_or_clone(selection_set.selections).into_iter(), + stack: Vec::new(), + } + } + } + + impl Iterator for TopLevelFieldSplitter { + type Item = SelectionSet; + + fn 
next(&mut self) -> Option { + loop { + match self.stack.last_mut() { + None => { + let selection = self.starting_set.next()?.1; + if selection.is_field() { + return Some(SelectionSet::from_selection( + self.parent_type.clone(), + selection, + )); + } else if let Ok(element) = selection.element() { + if let Some(set) = selection.selection_set().cloned() { + self.stack.push((element, Self::new(set))); + } + } + } + Some((element, top)) => { + match top.find_map(|set| { + let parent_type = element.parent_type_position(); + Selection::from_element(element.clone(), Some(set)) + .ok() + .map(|sel| SelectionSet::from_selection(parent_type, sel)) + }) { + Some(set) => return Some(set), + None => { + self.stack.pop(); + } + } + } + } + } + } + } + + TopLevelFieldSplitter::new(self) } /// PORT_NOTE: JS calls this `newCompositeTypeSelectionSet` @@ -2075,7 +2111,7 @@ impl SelectionSet { for (key, entry) in mutable_selection_map.iter_mut() { match entry { SelectionValue::Field(mut field_selection) => { - if field_selection.get().field.name() == &TYPENAME_FIELD + if field_selection.get().field.is_plain_typename_field() && !is_interface_object && typename_field_key.is_none() { @@ -2163,6 +2199,9 @@ impl SelectionSet { } } + /// # Errors + /// Returns an error if the selection set contains a fragment spread, or if any of the + /// @skip/@include directives are invalid (per GraphQL validation rules). pub(crate) fn conditions(&self) -> Result { // If the conditions of all the selections within the set are the same, // then those are conditions of the whole set and we return it. 
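The `TopLevelFieldSplitter` above replaces the old chain of boxed `flat_map` iterators with an explicit stack that is driven manually in `next()`, avoiding per-item allocation and v-table dispatch. The same pattern can be sketched in isolation with toy stand-in types (`Sel` and `TopLevelFields` below are hypothetical, not router types): an explicit stack of sub-iterators replaces recursion and yields leaf fields in depth-first order.

```rust
/// A toy selection: either a leaf field or a nested fragment of selections.
enum Sel {
    Field(&'static str),
    Fragment(Vec<Sel>),
}

/// Stack-based "manual generator": each stack entry is the iterator over one
/// nesting level; exhausting an entry pops it and resumes the parent level.
struct TopLevelFields {
    stack: Vec<std::vec::IntoIter<Sel>>,
}

impl TopLevelFields {
    fn new(sels: Vec<Sel>) -> Self {
        Self { stack: vec![sels.into_iter()] }
    }
}

impl Iterator for TopLevelFields {
    type Item = &'static str;

    fn next(&mut self) -> Option<Self::Item> {
        loop {
            // Empty stack means the whole tree is exhausted.
            let top = self.stack.last_mut()?;
            match top.next() {
                Some(Sel::Field(name)) => return Some(name),
                // Descend into a fragment by pushing its sub-iterator.
                Some(Sel::Fragment(inner)) => self.stack.push(inner.into_iter()),
                // Current level exhausted; resume its parent.
                None => {
                    self.stack.pop();
                }
            }
        }
    }
}
```

Because the stack is part of the iterator's own state, the return type can be a concrete `impl Iterator` rather than `Box<dyn Iterator>`, mirroring the signature change in the diff.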
@@ -4090,9 +4129,13 @@ impl TryFrom<&SelectionSet> for executable::SelectionSet { for normalized_selection in val.selections.values() { let selection: executable::Selection = normalized_selection.try_into()?; if let executable::Selection::Field(field) = &selection { - if field.name == *INTROSPECTION_TYPENAME_FIELD_NAME && field.alias.is_none() { - // Move unaliased __typename to the start of the selection set. + if field.name == *INTROSPECTION_TYPENAME_FIELD_NAME + && field.directives.is_empty() + && field.alias.is_none() + { + // Move the plain __typename to the start of the selection set. // This looks nicer, and matches existing tests. + // Note: The plain-ness is also defined in `Field::is_plain_typename_field`. // PORT_NOTE: JS does this in `selectionsInPrintOrder` flattened.insert(0, selection); continue; diff --git a/apollo-federation/src/operation/optimize.rs b/apollo-federation/src/operation/optimize.rs index e9e897304f..bde0b6dfa9 100644 --- a/apollo-federation/src/operation/optimize.rs +++ b/apollo-federation/src/operation/optimize.rs @@ -1684,6 +1684,20 @@ impl FragmentGenerator { SelectionValue::InlineFragment(mut candidate) => { self.visit_selection_set(candidate.get_selection_set_mut())?; + // XXX(@goto-bus-stop): This is temporary to support mismatch testing with JS! + // JS federation does not consider fragments without a type condition. 
+ if candidate + .get() + .inline_fragment + .type_condition_position + .is_none() + { + new_selection_set.add_local_selection(&Selection::InlineFragment( + Arc::clone(candidate.get()), + ))?; + continue; + } + let directives = &candidate.get().inline_fragment.directives; let skip_include = directives .iter() diff --git a/apollo-federation/src/operation/rebase.rs b/apollo-federation/src/operation/rebase.rs index 25ac19ec5a..5e2d29c75d 100644 --- a/apollo-federation/src/operation/rebase.rs +++ b/apollo-federation/src/operation/rebase.rs @@ -22,6 +22,7 @@ use super::Selection; use super::SelectionId; use super::SelectionSet; use super::TYPENAME_FIELD; +use crate::ensure; use crate::error::FederationError; use crate::schema::position::CompositeTypeDefinitionPosition; use crate::schema::position::OutputTypeDefinitionPosition; @@ -376,12 +377,12 @@ impl FragmentSpread { } .into()); }; - debug_assert_eq!( - *schema, self.schema, + ensure!( + *schema == self.schema, "Fragment spread should only be rebased within the same subgraph" ); - debug_assert_eq!( - *schema, named_fragment.schema, + ensure!( + *schema == named_fragment.schema, "Referenced named fragment should've been rebased for the subgraph" ); if runtime_types_intersect( diff --git a/apollo-federation/src/operation/simplify.rs b/apollo-federation/src/operation/simplify.rs index 6ada508a6e..f7c339d210 100644 --- a/apollo-federation/src/operation/simplify.rs +++ b/apollo-federation/src/operation/simplify.rs @@ -14,6 +14,7 @@ use super::NamedFragments; use super::Selection; use super::SelectionMap; use super::SelectionSet; +use crate::ensure; use crate::error::FederationError; use crate::schema::position::CompositeTypeDefinitionPosition; use crate::schema::ValidFederationSchema; @@ -136,11 +137,10 @@ impl FragmentSpreadSelection { // We must update the spread parent type if necessary since we're not going deeper, // or we'll be fundamentally losing context. 
- if self.spread.schema != *schema { - return Err(FederationError::internal( - "Should not try to flatten_unnecessary_fragments using a type from another schema", - )); - } + ensure!( + self.spread.schema == *schema, + "Should not try to flatten_unnecessary_fragments using a type from another schema", + ); let rebased_fragment_spread = self.rebase_on(parent_type, named_fragments, schema)?; Ok(Some(SelectionOrSet::Selection(rebased_fragment_spread))) diff --git a/apollo-federation/src/query_graph/build_query_graph.rs b/apollo-federation/src/query_graph/build_query_graph.rs index 1bb2faa41d..fa8da0cec8 100644 --- a/apollo-federation/src/query_graph/build_query_graph.rs +++ b/apollo-federation/src/query_graph/build_query_graph.rs @@ -1572,8 +1572,8 @@ impl FederatedQueryGraphBuilder { Selection::Field(field_selection) => { let existing_edge_info = base .query_graph - .graph - .edges_directed(node, Direction::Outgoing) + .out_edges_with_federation_self_edges(node) + .into_iter() .find_map(|edge_ref| { let edge_weight = edge_ref.weight(); let QueryGraphEdgeTransition::FieldCollection { @@ -1697,8 +1697,8 @@ impl FederatedQueryGraphBuilder { // construction. 
let (edge, tail) = base .query_graph - .graph - .edges_directed(node, Direction::Outgoing) + .out_edges_with_federation_self_edges(node) + .into_iter() .find_map(|edge_ref| { let edge_weight = edge_ref.weight(); let QueryGraphEdgeTransition::Downcast { @@ -1810,11 +1810,7 @@ impl FederatedQueryGraphBuilder { new_node_weight.has_reachable_cross_subgraph_edges = has_reachable_cross_subgraph_edges; let mut new_edges = Vec::new(); - for edge_ref in base - .query_graph - .graph - .edges_directed(node, Direction::Outgoing) - { + for edge_ref in base.query_graph.out_edges_with_federation_self_edges(node) { let edge_tail = edge_ref.target(); let edge_weight = edge_ref.weight(); new_edges.push(QueryGraphEdgeData { diff --git a/apollo-federation/src/query_graph/graph_path.rs b/apollo-federation/src/query_graph/graph_path.rs index 6d18e07e4a..da06fffcdf 100644 --- a/apollo-federation/src/query_graph/graph_path.rs +++ b/apollo-federation/src/query_graph/graph_path.rs @@ -1,5 +1,4 @@ use std::cmp::Ordering; -use std::collections::BinaryHeap; use std::fmt::Display; use std::fmt::Formatter; use std::fmt::Write; @@ -11,8 +10,10 @@ use std::sync::Arc; use apollo_compiler::ast::Value; use apollo_compiler::collections::IndexMap; use apollo_compiler::collections::IndexSet; +use either::Either; use itertools::Itertools; use petgraph::graph::EdgeIndex; +use petgraph::graph::EdgeReference; use petgraph::graph::NodeIndex; use petgraph::visit::EdgeRef; use tracing::debug; @@ -50,6 +51,7 @@ use crate::query_plan::query_planner::EnabledOverrideConditions; use crate::query_plan::FetchDataPathElement; use crate::query_plan::QueryPlanCost; use crate::schema::position::AbstractTypeDefinitionPosition; +use crate::schema::position::Captures; use crate::schema::position::CompositeTypeDefinitionPosition; use crate::schema::position::InterfaceFieldDefinitionPosition; use crate::schema::position::ObjectOrInterfaceTypeDefinitionPosition; @@ -589,6 +591,12 @@ pub(crate) struct 
SimultaneousPathsWithLazyIndirectPaths { pub(crate) lazily_computed_indirect_paths: Vec>, } +impl Display for SimultaneousPathsWithLazyIndirectPaths { + fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result { + write!(f, "{}", self.paths) + } +} + /// A "set" of excluded destinations (i.e. subgraph names). Note that we use a `Vec` instead of set /// because this is used in pretty hot paths (the whole path computation is CPU intensive) and will /// basically always be tiny (it's bounded by the number of distinct key on a given type, so usually @@ -815,6 +823,39 @@ pub(crate) struct ClosedBranch(pub(crate) Vec>); #[derive(Debug, serde::Serialize)] pub(crate) struct OpenBranch(pub(crate) Vec); +// A drop-in replacement for `BinaryHeap`, but behaves more like JS QP's `popMin` method. +struct MaxHeap +where + T: Ord, +{ + items: Vec, +} + +impl MaxHeap +where + T: Ord, +{ + fn new() -> Self { + Self { items: Vec::new() } + } + + fn push(&mut self, item: T) { + self.items.push(item); + } + + fn pop(&mut self) -> Option { + // PORT_NOTE: JS QP returns the max item, but favors the first inserted one if there are + // multiple maximum items. + // Note: `position_max` returns the last of the equally maximum items. Thus, we use + // `position_min_by` by reversing the ordering. 
+ let pos = self.items.iter().position_min_by(|a, b| b.cmp(a)); + let Some(pos) = pos else { + return None; + }; + Some(self.items.remove(pos)) + } +} + impl GraphPath where TTrigger: Eq + Hash + std::fmt::Debug, @@ -1168,20 +1209,22 @@ where .map(|((edge, trigger), condition)| (edge, trigger, condition)) } - pub(crate) fn next_edges<'a>( - &'a self, - ) -> Result + 'a>, FederationError> { + pub(crate) fn next_edges( + &self, + ) -> Result + Iterator, FederationError> { + let get_id = |edge_ref: EdgeReference<_>| edge_ref.id(); + if self.defer_on_tail.is_some() { // If the path enters a `@defer` (meaning that what comes after needs to be deferred), // then it's the one special case where we explicitly need to ask for edges to self, // as we will force the use of a `@key` edge (so we can send the non-deferred part // immediately) and we may have to resume the deferred part in the same subgraph as // the one in which we were (hence the need for edges to self). - return Ok(Box::new( + return Ok(Either::Left( self.graph .out_edges_with_federation_self_edges(self.tail) .into_iter() - .map(|edge_ref| edge_ref.id()), + .map(get_id), )); } @@ -1199,15 +1242,12 @@ where "Unexpectedly missing entry for non-trivial followup edges map", )); }; - return Ok(Box::new(non_trivial_followup_edges.iter().copied())); + return Ok(Either::Right(non_trivial_followup_edges.iter().copied())); } } - Ok(Box::new( - self.graph - .out_edges(self.tail) - .into_iter() - .map(|edge_ref| edge_ref.id()), + Ok(Either::Left( + self.graph.out_edges(self.tail).into_iter().map(get_id), )) } @@ -1456,7 +1496,7 @@ where // that means it's important we try the smallest paths first. That is, if we could in theory // have path A -> B and A -> C -> B, and we can do B -> D, then we want to keep A -> B -> D, // not A -> C -> B -> D.
- let mut heap: BinaryHeap> = BinaryHeap::new(); + let mut heap: MaxHeap> = MaxHeap::new(); heap.push(HeapElement(self.clone())); while let Some(HeapElement(to_advance)) = heap.pop() { diff --git a/apollo-federation/src/query_graph/path_tree.rs b/apollo-federation/src/query_graph/path_tree.rs index 8711376a45..cb6b9fef18 100644 --- a/apollo-federation/src/query_graph/path_tree.rs +++ b/apollo-federation/src/query_graph/path_tree.rs @@ -226,11 +226,20 @@ where struct ByUniqueEdge<'inputs, TTrigger, GraphPathIter> { target_node: NodeIndex, - by_unique_trigger: - IndexMap<&'inputs Arc, PathTreeChildInputs<'inputs, GraphPathIter>>, + by_unique_trigger: IndexMap< + &'inputs Arc, + PathTreeChildInputs<'inputs, TTrigger, GraphPathIter>, + >, } - struct PathTreeChildInputs<'inputs, GraphPathIter> { + struct PathTreeChildInputs<'inputs, TTrigger, GraphPathIter> { + /// trigger: the final trigger value + /// - Two equivalent triggers can have minor differences in the sibling_typename. + /// This field holds the final trigger value that will be used. + /// PORT_NOTE: The JS QP used the last trigger value. So, we are following that + /// to avoid mismatches. But, it can be revisited. + /// We may want to keep or merge the sibling_typename values. 
+ trigger: &'inputs Arc, conditions: Option>, sub_paths_and_selections: Vec<(GraphPathIter, Option<&'inputs Arc>)>, } @@ -263,6 +272,7 @@ where match for_edge.by_unique_trigger.entry(trigger) { Entry::Occupied(entry) => { let existing = entry.into_mut(); + existing.trigger = trigger; existing.conditions = merge_conditions(&existing.conditions, conditions); existing .sub_paths_and_selections @@ -271,6 +281,7 @@ where } Entry::Vacant(entry) => { entry.insert(PathTreeChildInputs { + trigger, conditions: conditions.clone(), sub_paths_and_selections: vec![(graph_path_iter, selection)], }); @@ -280,10 +291,10 @@ where let mut childs = Vec::new(); for (edge, by_unique_edge) in merged { - for (trigger, child) in by_unique_edge.by_unique_trigger { + for (_, child) in by_unique_edge.by_unique_trigger { childs.push(Arc::new(PathTreeChild { edge, - trigger: trigger.clone(), + trigger: child.trigger.clone(), conditions: child.conditions.clone(), tree: Arc::new(Self::from_paths( graph.clone(), diff --git a/apollo-federation/src/query_plan/conditions.rs b/apollo-federation/src/query_plan/conditions.rs index 7017c53a8a..cf248c8b61 100644 --- a/apollo-federation/src/query_plan/conditions.rs +++ b/apollo-federation/src/query_plan/conditions.rs @@ -9,12 +9,32 @@ use indexmap::map::Entry; use serde::Serialize; use crate::error::FederationError; +use crate::internal_error; use crate::operation::DirectiveList; +use crate::operation::NamedFragments; use crate::operation::Selection; use crate::operation::SelectionMap; +use crate::operation::SelectionMapperReturn; use crate::operation::SelectionSet; use crate::query_graph::graph_path::OpPathElement; +#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize)] +pub(crate) enum ConditionKind { + /// A `@skip(if:)` condition. + Skip, + /// An `@include(if:)` condition. 
+ Include, +} + +impl ConditionKind { + fn as_str(self) -> &'static str { + match self { + Self::Skip => "skip", + Self::Include => "include", + } + } +} + /// This struct is meant for tracking whether a selection set in a `FetchDependencyGraphNode` needs /// to be queried, based on the `@skip`/`@include` applications on the selections within. /// Accordingly, there is much logic around merging and short-circuiting; `OperationConditional` is @@ -30,37 +50,62 @@ pub(crate) enum Conditions { /// is negated in the condition. We maintain the invariant that there's at least one condition (i.e. /// the map is non-empty), and that there's at most one condition per variable name. #[derive(Debug, Clone, PartialEq, Serialize)] -pub(crate) struct VariableConditions(Arc>); +pub(crate) struct VariableConditions( + // TODO(@goto-bus-stop): does it really make sense for this to be an indexmap? we normally only + // have 1 or 2. Can we ever get so many conditions on the same node that it makes sense to use + // a map over a vec? + Arc>, +); impl VariableConditions { /// Construct VariableConditions from a non-empty map of variable names. /// /// In release builds, this does not check if the map is empty. - fn new_unchecked(map: IndexMap) -> Self { + fn new_unchecked(map: IndexMap) -> Self { debug_assert!(!map.is_empty()); Self(Arc::new(map)) } - /// Returns whether a variable condition is negated, or None if there is no condition for the variable name. - pub(crate) fn is_negated(&self, name: &str) -> Option { + /// Returns the condition kind of a variable, or None if there is no condition for the variable name. + fn condition_kind(&self, name: &str) -> Option { self.0.get(name).copied() } - pub(crate) fn iter(&self) -> impl Iterator { - self.0.iter().map(|(name, &negated)| (name, negated)) + /// Iterate all variable conditions and their kinds. 
+ pub(crate) fn iter(&self) -> impl Iterator { + self.0.iter().map(|(name, &kind)| (name, kind)) + } + + /// Merge with another set of variable conditions. If the conditions conflict, returns `None`. + fn merge(mut self, other: Self) -> Option { + let vars = Arc::make_mut(&mut self.0); + for (name, other_kind) in other.0.iter() { + match vars.entry(name.clone()) { + // `@skip(if: $var)` and `@include(if: $var)` on the same selection always means + // it's not included. + Entry::Occupied(self_kind) if self_kind.get() != other_kind => { + return None; + } + Entry::Occupied(_entry) => {} + Entry::Vacant(entry) => { + entry.insert(*other_kind); + } + } + } + Some(self) } } #[derive(Debug, Clone, PartialEq)] pub(crate) struct VariableCondition { variable: Name, - negated: bool, + kind: ConditionKind, } impl Conditions { /// Create conditions from a map of variable conditions. If empty, instead returns a /// condition that always evaluates to true. - fn from_variables(map: IndexMap) -> Self { + fn from_variables(map: IndexMap) -> Self { if map.is_empty() { Self::Boolean(true) } else { @@ -68,53 +113,69 @@ impl Conditions { } } + /// Parse @skip and @include conditions from a directive list. + /// + /// # Errors + /// Returns an error if a @skip/@include directive is invalid (per GraphQL validation rules). 
pub(crate) fn from_directives(directives: &DirectiveList) -> Result { let mut variables = IndexMap::default(); - for directive in directives.iter_sorted() { - let negated = match directive.name.as_str() { - "include" => false, - "skip" => true, - _ => continue, + + if let Some(skip) = directives.get("skip") { + let Some(value) = skip.specified_argument_by_name("if") else { + internal_error!("missing @skip(if:) argument"); }; - let value = directive.specified_argument_by_name("if").ok_or_else(|| { - FederationError::internal(format!( - "missing if argument on @{}", - if negated { "skip" } else { "include" }, - )) - })?; - match &**value { - Value::Boolean(false) if !negated => return Ok(Self::Boolean(false)), - Value::Boolean(true) if negated => return Ok(Self::Boolean(false)), + + match value.as_ref() { + // Constant @skip(if: true) can never match + Value::Boolean(true) => return Ok(Self::Boolean(false)), + // Constant @skip(if: false) always matches Value::Boolean(_) => {} - Value::Variable(name) => match variables.entry(name.clone()) { - Entry::Occupied(entry) => { - let previous_negated = *entry.get(); - if previous_negated != negated { - return Ok(Self::Boolean(false)); - } - } - Entry::Vacant(entry) => { - entry.insert(negated); + Value::Variable(name) => { + variables.insert(name.clone(), ConditionKind::Skip); + } + _ => { + internal_error!("expected boolean or variable `if` argument, got {value}"); + } + } + } + + if let Some(include) = directives.get("include") { + let Some(value) = include.specified_argument_by_name("if") else { + internal_error!("missing @include(if:) argument"); + }; + + match value.as_ref() { + // Constant @include(if: false) can never match + Value::Boolean(false) => return Ok(Self::Boolean(false)), + // Constant @include(if: true) always matches + Value::Boolean(true) => {} + // If both @skip(if: $var) and @include(if: $var) exist, the condition can also + // never match + Value::Variable(name) => { + if 
variables.insert(name.clone(), ConditionKind::Include) + == Some(ConditionKind::Skip) + { + return Ok(Self::Boolean(false)); } - }, + } _ => { - return Err(FederationError::internal(format!( - "expected boolean or variable `if` argument, got {value}", - ))) + internal_error!("expected boolean or variable `if` argument, got {value}"); } } } + Ok(Self::from_variables(variables)) } + // TODO(@goto-bus-stop): what exactly is the difference between this and `Self::merge`? pub(crate) fn update_with(&self, new_conditions: &Self) -> Self { match (new_conditions, self) { (Conditions::Boolean(_), _) | (_, Conditions::Boolean(_)) => new_conditions.clone(), (Conditions::Variables(new_conditions), Conditions::Variables(handled_conditions)) => { let mut filtered = IndexMap::default(); - for (cond_name, &cond_negated) in new_conditions.0.iter() { - match handled_conditions.is_negated(cond_name) { - Some(handled_cond) if cond_negated != handled_cond => { + for (cond_name, &cond_kind) in new_conditions.0.iter() { + match handled_conditions.condition_kind(cond_name) { + Some(handled_cond_kind) if cond_kind != handled_cond_kind => { // If we've already handled that exact condition, we can skip it. // But if we've already handled the _negation_ of this condition, then this mean the overall conditions // are unreachable and we can just return `false` directly. @@ -122,7 +183,7 @@ impl Conditions { } Some(_) => {} None => { - filtered.insert(cond_name.clone(), cond_negated); + filtered.insert(cond_name.clone(), cond_kind); } } } @@ -131,6 +192,8 @@ impl Conditions { } } + /// Merge two sets of conditions. The new conditions evaluate to true only if both input + /// conditions evaluate to true. 
pub(crate) fn merge(self, other: Self) -> Self { match (self, other) { // Absorbing element @@ -141,22 +204,11 @@ impl Conditions { // Neutral element (Conditions::Boolean(true), x) | (x, Conditions::Boolean(true)) => x, - (Conditions::Variables(mut self_vars), Conditions::Variables(other_vars)) => { - let vars = Arc::make_mut(&mut self_vars.0); - for (name, other_negated) in other_vars.0.iter() { - match vars.entry(name.clone()) { - Entry::Occupied(entry) => { - let self_negated = entry.get(); - if self_negated != other_negated { - return Conditions::Boolean(false); - } - } - Entry::Vacant(entry) => { - entry.insert(*other_negated); - } - } + (Conditions::Variables(self_vars), Conditions::Variables(other_vars)) => { + match self_vars.merge(other_vars) { + Some(vars) => Conditions::Variables(vars), + None => Conditions::Boolean(false), } - Conditions::Variables(self_vars) } } } @@ -176,37 +228,37 @@ pub(crate) fn remove_conditions_from_selection_set( Ok(selection_set.clone()) } Conditions::Variables(variable_conditions) => { - let mut selection_map = SelectionMap::new(); - - for selection in selection_set.selections.values() { + selection_set.lazy_map(&NamedFragments::default(), |selection| { let element = selection.element()?; // We remove any of the conditions on the element and recurse. let updated_element = remove_conditions_of_element(element.clone(), variable_conditions); - let new_selection = if let Some(selection_set) = selection.selection_set() { + if let Some(selection_set) = selection.selection_set() { let updated_selection_set = remove_conditions_from_selection_set(selection_set, conditions)?; if updated_element == element { if *selection_set == updated_selection_set { - selection.clone() + Ok(SelectionMapperReturn::Selection(selection.clone())) } else { - selection.with_updated_selection_set(Some(updated_selection_set))? 
+ Ok(SelectionMapperReturn::Selection( + selection + .with_updated_selection_set(Some(updated_selection_set))?, + )) } } else { - Selection::from_element(updated_element, Some(updated_selection_set))? + Ok(SelectionMapperReturn::Selection(Selection::from_element( + updated_element, + Some(updated_selection_set), + )?)) } } else if updated_element == element { - selection.clone() + Ok(SelectionMapperReturn::Selection(selection.clone())) } else { - Selection::from_element(updated_element, None)? - }; - selection_map.insert(new_selection); - } - - Ok(SelectionSet { - schema: selection_set.schema.clone(), - type_position: selection_set.type_position.clone(), - selections: Arc::new(selection_map), + Ok(SelectionMapperReturn::Selection(Selection::from_element( + updated_element, + None, + )?)) + } }) } } @@ -296,39 +348,21 @@ fn remove_conditions_of_element( } } -#[derive(PartialEq)] -enum ConditionKind { - Include, - Skip, -} - fn matches_condition_for_kind( directive: &Directive, conditions: &VariableConditions, kind: ConditionKind, ) -> bool { - let kind_str = match kind { - ConditionKind::Include => "include", - ConditionKind::Skip => "skip", - }; - - if directive.name != kind_str { + if directive.name != kind.as_str() { return false; } - let value = directive.specified_argument_by_name("if"); - - let matches_if_negated = match kind { - ConditionKind::Include => false, - ConditionKind::Skip => true, - }; - match value { - None => false, + match directive.specified_argument_by_name("if") { Some(v) => match v.as_variable() { - Some(directive_var) => conditions.0.iter().any(|(cond_name, cond_is_negated)| { - cond_name == directive_var && *cond_is_negated == matches_if_negated - }), + Some(directive_var) => conditions.condition_kind(directive_var) == Some(kind), None => true, }, + // Directive without argument: unreachable in a valid document. 
+ None => false, } } diff --git a/apollo-federation/src/query_plan/fetch_dependency_graph.rs b/apollo-federation/src/query_plan/fetch_dependency_graph.rs index f31bd0d2ff..5629183fd5 100644 --- a/apollo-federation/src/query_plan/fetch_dependency_graph.rs +++ b/apollo-federation/src/query_plan/fetch_dependency_graph.rs @@ -3352,11 +3352,7 @@ pub(crate) fn compute_nodes_for_tree( initial_defer_context: DeferContext, initial_conditions: &OpGraphPathContext, ) -> Result, FederationError> { - snapshot!( - "OpPathTree", - serde_json_bytes::json!(initial_tree.to_string()).to_string(), - "path_tree" - ); + snapshot!("OpPathTree", initial_tree.to_string(), "path_tree"); let mut stack = vec![ComputeNodesStackItem { tree: initial_tree, node_id: initial_node_id, @@ -3443,7 +3439,11 @@ pub(crate) fn compute_nodes_for_tree( } } } - snapshot!(dependency_graph, "updated_dependency_graph"); + snapshot!( + "FetchDependencyGraph", + dependency_graph.to_dot(), + "Fetch dependency graph updated by compute_nodes_for_tree" + ); Ok(created_nodes) } diff --git a/apollo-federation/src/query_plan/fetch_dependency_graph_processor.rs b/apollo-federation/src/query_plan/fetch_dependency_graph_processor.rs index 37982b3ee2..0a3c6465d3 100644 --- a/apollo-federation/src/query_plan/fetch_dependency_graph_processor.rs +++ b/apollo-federation/src/query_plan/fetch_dependency_graph_processor.rs @@ -5,6 +5,7 @@ use apollo_compiler::executable::VariableDefinition; use apollo_compiler::Name; use apollo_compiler::Node; +use super::conditions::ConditionKind; use super::query_planner::SubgraphOperationCompression; use super::QueryPathElement; use crate::error::FederationError; @@ -304,11 +305,10 @@ impl FetchDependencyGraphProcessor, DeferredDeferBlock> condition.then_some(value) } Conditions::Variables(variables) => { - for (name, negated) in variables.iter() { - let (if_clause, else_clause) = if negated { - (None, Some(Box::new(value))) - } else { - (Some(Box::new(value)), None) + for (name, kind) in 
variables.iter() { + let (if_clause, else_clause) = match kind { + ConditionKind::Skip => (None, Some(Box::new(value))), + ConditionKind::Include => (Some(Box::new(value)), None), }; value = PlanNode::from(ConditionNode { condition_variable: name.clone(), diff --git a/apollo-federation/src/query_plan/query_planner.rs b/apollo-federation/src/query_plan/query_planner.rs index 500076ce4c..24e0972944 100644 --- a/apollo-federation/src/query_plan/query_planner.rs +++ b/apollo-federation/src/query_plan/query_planner.rs @@ -11,6 +11,7 @@ use apollo_compiler::ExecutableDocument; use apollo_compiler::Name; use itertools::Itertools; use serde::Serialize; +use tracing::trace; use super::fetch_dependency_graph::FetchIdGenerator; use super::ConditionNode; @@ -551,7 +552,15 @@ impl QueryPlanner { statistics, }; - snapshot!(plan, "query plan"); + snapshot!( + "QueryPlan", + plan.to_string(), + "QueryPlan from build_query_plan" + ); + snapshot!( + plan.statistics, + "QueryPlanningStatistics from build_query_plan" + ); Ok(plan) } @@ -716,7 +725,11 @@ pub(crate) fn compute_root_fetch_groups( root_kind, root_type.clone(), )?; - snapshot!(dependency_graph, "tree_with_root_node"); + snapshot!( + "FetchDependencyGraph", + dependency_graph.to_dot(), + "tree_with_root_node" + ); compute_nodes_for_tree( dependency_graph, &child.tree, @@ -737,16 +750,13 @@ fn compute_root_parallel_dependency_graph( parameters: &QueryPlanningParameters, has_defers: bool, ) -> Result { - snapshot!( - "FetchDependencyGraph", - "Empty", - "Starting process to construct a parallel fetch dependency graph" - ); + trace!("Starting process to construct a parallel fetch dependency graph"); let selection_set = parameters.operation.selection_set.clone(); let best_plan = compute_root_parallel_best_plan(parameters, selection_set, has_defers)?; snapshot!( - best_plan.fetch_dependency_graph, - "Plan returned from compute_root_parallel_best_plan" + "FetchDependencyGraph", + best_plan.fetch_dependency_graph.to_dot(), + 
"Fetch dependency graph returned from compute_root_parallel_best_plan" ); Ok(best_plan.fetch_dependency_graph) } @@ -807,7 +817,8 @@ fn compute_plan_internal( let (main, deferred) = dependency_graph.process(&mut *processor, root_kind)?; snapshot!( - dependency_graph, + "FetchDependencyGraph", + dependency_graph.to_dot(), "Plan after calling FetchDependencyGraph::process" ); // XXX(@goto-bus-stop) Maybe `.defer_tracking` should be on the return value of `process()`..? diff --git a/apollo-federation/src/query_plan/query_planning_traversal.rs b/apollo-federation/src/query_plan/query_planning_traversal.rs index 787013471a..055498218f 100644 --- a/apollo-federation/src/query_plan/query_planning_traversal.rs +++ b/apollo-federation/src/query_plan/query_planning_traversal.rs @@ -46,8 +46,17 @@ use crate::schema::position::CompositeTypeDefinitionPosition; use crate::schema::position::ObjectTypeDefinitionPosition; use crate::schema::position::SchemaRootDefinitionKind; use crate::schema::ValidFederationSchema; +use crate::utils::logging::format_open_branch; use crate::utils::logging::snapshot; +#[cfg(feature = "snapshot_tracing")] +mod snapshot_helper { + // A module to import functions only used within `snapshot!(...)` macros. + pub(crate) use crate::utils::logging::closed_branches_to_string; + pub(crate) use crate::utils::logging::open_branch_to_string; + pub(crate) use crate::utils::logging::open_branches_to_string; +} + // PORT_NOTE: Named `PlanningParameters` in the JS codebase, but there was no particular reason to // leave out the `Query` prefix, so it's been added for consistency. Similar to `GraphPath`, we // don't have a distinguished type for when the head is a root vertex, so we instead check this at @@ -113,13 +122,33 @@ pub(crate) struct QueryPlanningTraversal<'a, 'b> { } #[derive(Debug, Serialize)] -struct OpenBranchAndSelections { +pub(crate) struct OpenBranchAndSelections { /// The options for this open branch.
open_branch: OpenBranch, /// A stack of the remaining selections to plan from the node this open branch ends on. selections: Vec, } +impl std::fmt::Display for OpenBranchAndSelections { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + let Some((current_selection, remaining_selections)) = self.selections.split_last() else { + return Ok(()); + }; + format_open_branch(f, &(current_selection, &self.open_branch.0))?; + write!(f, " * Remaining selections:")?; + if remaining_selections.is_empty() { + writeln!(f, " (none)")?; + } else { + // Print in reverse order since remaining selections are processed in that order. + writeln!(f)?; // newline + for selection in remaining_selections.iter().rev() { + writeln!(f, " - {selection}")?; + } + } + Ok(()) + } +} + struct PlanInfo { fetch_dependency_graph: FetchDependencyGraph, path_tree: Arc, @@ -284,7 +313,17 @@ impl<'a: 'b, 'b> QueryPlanningTraversal<'a, 'b> { ) )] fn find_best_plan_inner(&mut self) -> Result, FederationError> { - while let Some(mut current_branch) = self.open_branches.pop() { + while !self.open_branches.is_empty() { + snapshot!( + "OpenBranches", + snapshot_helper::open_branches_to_string(&self.open_branches), + "Query planning open branches" + ); + let Some(mut current_branch) = self.open_branches.pop() else { + return Err(FederationError::internal( + "Branch stack unexpectedly empty during query plan traversal", + )); + }; let Some(current_selection) = current_branch.selections.pop() else { return Err(FederationError::internal( "Sub-stack unexpectedly empty during query plan traversal", @@ -293,7 +332,7 @@ impl<'a: 'b, 'b> QueryPlanningTraversal<'a, 'b> { let (terminate_planning, new_branch) = self.handle_open_branch(¤t_selection, &mut current_branch.open_branch.0)?; if terminate_planning { - trace!("Planning termianted!"); + trace!("Planning terminated!"); // We clear both open branches and closed ones as a means to terminate the plan // computation with no plan. 
self.open_branches = vec![]; @@ -330,12 +369,10 @@ impl<'a: 'b, 'b> QueryPlanningTraversal<'a, 'b> { let mut new_options = vec![]; let mut no_followups: bool = false; - snapshot!(name = "Options", options, "options"); - snapshot!( - "OperationElement", - operation_element.to_string(), - "operation_element" + "OpenBranch", + snapshot_helper::open_branch_to_string(selection, options), + "open branch" ); for option in options.iter_mut() { @@ -368,7 +405,11 @@ } } - snapshot!(new_options, "new_options"); + snapshot!( + "OpenBranch", + snapshot_helper::open_branch_to_string(selection, &new_options), + "new_options" + ); if no_followups { // This operation element is valid from this option, but is guaranteed to yield no result @@ -610,8 +651,8 @@ impl<'a: 'b, 'b> QueryPlanningTraversal<'a, 'b> { )] fn compute_best_plan_from_closed_branches(&mut self) -> Result<(), FederationError> { snapshot!( - name = "ClosedBranches", - self.closed_branches, + "ClosedBranches", + snapshot_helper::closed_branches_to_string(&self.closed_branches), "closed_branches" ); @@ -622,8 +663,8 @@ self.reduce_options_if_needed(); snapshot!( - name = "ClosedBranches", - self.closed_branches, + "ClosedBranches", + snapshot_helper::closed_branches_to_string(&self.closed_branches), "closed_branches_after_reduce" ); @@ -653,7 +694,7 @@ let (first_group, second_group) = self.closed_branches.split_at(sole_path_branch_index); let initial_tree; - snapshot!("FetchDependencyGraph", "", "Generating initial dep graph"); + trace!("Generating initial fetch dependency graph"); let mut initial_dependency_graph = self.new_dependency_graph(); let federated_query_graph = &self.parameters.federated_query_graph; let root = &self.parameters.head; @@ -678,21 +719,32 @@ impl<'a: 'b, 'b> QueryPlanningTraversal<'a, 'b> { self.parameters.config.type_conditioned_fetching, )?;
snapshot!( - initial_dependency_graph, + "FetchDependencyGraph", + initial_dependency_graph.to_dot(), "Updated dep graph with initial tree" ); if first_group.is_empty() { // Well, we have the only possible plan; it's also the best. let cost = self.cost(&mut initial_dependency_graph)?; - self.best_plan = BestQueryPlanInfo { + let best_plan = BestQueryPlanInfo { fetch_dependency_graph: initial_dependency_graph, path_tree: initial_tree.into(), cost, - } - .into(); + }; - snapshot!(self.best_plan, "best_plan"); + snapshot!( + "FetchDependencyGraph", + best_plan.fetch_dependency_graph.to_dot(), + "best_plan.fetch_dependency_graph" + ); + snapshot!( + "OpPathTree", + best_plan.path_tree.to_string(), + "best_plan.path_tree" + ); + snapshot!(best_plan.cost, "best_plan.cost"); + self.best_plan = best_plan.into(); return Ok(()); } } @@ -723,14 +775,25 @@ impl<'a: 'b, 'b> QueryPlanningTraversal<'a, 'b> { other_trees, /*plan_builder*/ self, )?; - self.best_plan = BestQueryPlanInfo { + let best_plan = BestQueryPlanInfo { fetch_dependency_graph: best.fetch_dependency_graph, path_tree: best.path_tree, cost, - } - .into(); + }; + + snapshot!( + "FetchDependencyGraph", + best_plan.fetch_dependency_graph.to_dot(), + "best_plan.fetch_dependency_graph" + ); + snapshot!( + "OpPathTree", + best_plan.path_tree.to_string(), + "best_plan.path_tree" + ); + snapshot!(best_plan.cost, "best_plan.cost"); - snapshot!(self.best_plan, "best_plan"); + self.best_plan = best_plan.into(); Ok(()) } @@ -975,7 +1038,11 @@ impl<'a: 'b, 'b> QueryPlanningTraversal<'a, 'b> { )?; } - snapshot!(dependency_graph, "updated_dependency_graph"); + snapshot!( + "FetchDependencyGraph", + dependency_graph.to_dot(), + "updated_dependency_graph" + ); Ok(()) } diff --git a/apollo-federation/src/schema/position.rs b/apollo-federation/src/schema/position.rs index 43e1aa634d..67a2074ce9 100644 --- a/apollo-federation/src/schema/position.rs +++ b/apollo-federation/src/schema/position.rs @@ -24,6 +24,7 @@ use 
apollo_compiler::schema::UnionType; use apollo_compiler::Name; use apollo_compiler::Node; use apollo_compiler::Schema; +use either::Either; use lazy_static::lazy_static; use serde::Serialize; use strum::IntoEnumIterator; @@ -485,16 +486,16 @@ impl ObjectOrInterfaceTypeDefinitionPosition { &'a self, schema: &'a Schema, ) -> Result< - Box + 'a>, + impl Iterator + Captures<&'a ()>, FederationError, > { match self { - ObjectOrInterfaceTypeDefinitionPosition::Object(type_) => { - Ok(Box::new(type_.fields(schema)?.map(|field| field.into()))) - } - ObjectOrInterfaceTypeDefinitionPosition::Interface(type_) => { - Ok(Box::new(type_.fields(schema)?.map(|field| field.into()))) - } + ObjectOrInterfaceTypeDefinitionPosition::Object(type_) => Ok(Either::Left( + type_.fields(schema)?.map(|field| field.into()), + )), + ObjectOrInterfaceTypeDefinitionPosition::Interface(type_) => Ok(Either::Right( + type_.fields(schema)?.map(|field| field.into()), + )), } } } diff --git a/apollo-federation/src/utils/logging.rs b/apollo-federation/src/utils/logging.rs index c7a07c2ef2..0b247c1698 100644 --- a/apollo-federation/src/utils/logging.rs +++ b/apollo-federation/src/utils/logging.rs @@ -1,3 +1,10 @@ +#![allow(dead_code)] + +use crate::operation::Selection; +use crate::query_graph::graph_path::ClosedBranch; +use crate::query_graph::graph_path::SimultaneousPathsWithLazyIndirectPaths; +use crate::query_plan::query_planning_traversal::OpenBranchAndSelections; + /// This macro is a wrapper around `tracing::trace!` and should not be confused with our snapshot /// testing. The primary goal of this macro is to add the necessary context to logging statements /// so that external tools (like the snapshot log visualizer) can show how various key data @@ -32,17 +39,6 @@ macro_rules!
snapshot { $msg ); }; - (name = $name:literal, $value:expr, $msg:literal) => { - #[cfg(feature = "snapshot_tracing")] - tracing::trace!( - snapshot = std::any::type_name_of_val(&$value), - data = ron::ser::to_string(&$value).expect(concat!( - "Could not serialize value for a snapshot with message: ", - $msg - )), - $msg - ); - }; ($name:literal, $value:expr, $msg:literal) => { #[cfg(feature = "snapshot_tracing")] tracing::trace!(snapshot = $name, data = $value, $msg); @@ -50,3 +46,76 @@ macro_rules! snapshot { } pub(crate) use snapshot; + +pub(crate) fn make_string( + data: &T, + writer: fn(&mut std::fmt::Formatter<'_>, &T) -> std::fmt::Result, +) -> String { + // One-off struct to implement `Display` for `data` using `writer`. + struct Stringify<'a, T: ?Sized> { + data: &'a T, + writer: fn(&mut std::fmt::Formatter<'_>, &T) -> std::fmt::Result, + } + + impl<'a, T: ?Sized> std::fmt::Display for Stringify<'a, T> { + fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result { + (self.writer)(f, self.data) + } + } + + Stringify { data, writer }.to_string() +} + +// PORT_NOTE: This is a (partial) port of `QueryPlanningTraversal.debugStack` JS method. +pub(crate) fn format_open_branch( + f: &mut std::fmt::Formatter<'_>, + (selection, options): &(&Selection, &[SimultaneousPathsWithLazyIndirectPaths]), +) -> std::fmt::Result { + writeln!(f, "{selection}")?; + writeln!(f, " * Options:")?; + for option in *options { + writeln!(f, " - {option}")?; + } + Ok(()) +} + +pub(crate) fn open_branch_to_string( + selection: &Selection, + options: &[SimultaneousPathsWithLazyIndirectPaths], +) -> String { + make_string(&(selection, options), format_open_branch) +} + +// PORT_NOTE: This is a port of `QueryPlanningTraversal.debugStack` JS method. +pub(crate) fn format_open_branches( + f: &mut std::fmt::Formatter<'_>, + open_branches: &[OpenBranchAndSelections], +) -> std::fmt::Result { + // Print from the stack top to the bottom. 
+ for branch in open_branches.iter().rev() { + writeln!(f, "{branch}")?; + } + Ok(()) +} + +pub(crate) fn open_branches_to_string(open_branches: &[OpenBranchAndSelections]) -> String { + make_string(open_branches, format_open_branches) +} + +pub(crate) fn format_closed_branches( + f: &mut std::fmt::Formatter<'_>, + closed_branches: &[ClosedBranch], +) -> std::fmt::Result { + writeln!(f, "All branches:")?; + for (i, closed_branch) in closed_branches.iter().enumerate() { + writeln!(f, "{i}:")?; + for closed_path in &closed_branch.0 { + writeln!(f, " - {closed_path}")?; + } + } + Ok(()) +} + +pub(crate) fn closed_branches_to_string(closed_branches: &[ClosedBranch]) -> String { + make_string(closed_branches, format_closed_branches) +} diff --git a/apollo-federation/tests/query_plan/build_query_plan_tests.rs b/apollo-federation/tests/query_plan/build_query_plan_tests.rs index 1f9fd8587c..0e93f5a229 100644 --- a/apollo-federation/tests/query_plan/build_query_plan_tests.rs +++ b/apollo-federation/tests/query_plan/build_query_plan_tests.rs @@ -1294,3 +1294,70 @@ fn handles_multiple_conditions_on_abstract_types() { "### ); } + +#[test] +fn condition_order_router799() { + let planner = planner!( + books: r#" + type Query { + bookName: String! + } + type Mutation { + bookName(name: String!): Int! + } + "#, + ); + + assert_plan!( + &planner, + r#" + mutation($var0: Boolean! = true, $var1: Boolean!) { + ... on Mutation @skip(if: $var0) @include(if: $var1) { + field0: __typename + } + } + "#, + @r###" + QueryPlan { + Include(if: $var1) { + Skip(if: $var0) { + Fetch(service: "books") { + { + ... on Mutation { + field0: __typename + } + } + }, + }, + }, + } + "### + ); + + // Reordering @skip/@include should produce the same plan. + assert_plan!( + &planner, + r#" + mutation($var0: Boolean! = true, $var1: Boolean!) { + ... 
on Mutation @include(if: $var1) @skip(if: $var0) { + field0: __typename + } + } + "#, + @r###" + QueryPlan { + Include(if: $var1) { + Skip(if: $var0) { + Fetch(service: "books") { + { + ... on Mutation { + field0: __typename + } + } + }, + }, + }, + } + "### + ); +} diff --git a/apollo-federation/tests/query_plan/build_query_plan_tests/field_merging_with_skip_and_include.rs b/apollo-federation/tests/query_plan/build_query_plan_tests/field_merging_with_skip_and_include.rs index 852ce06728..7b6f874cef 100644 --- a/apollo-federation/tests/query_plan/build_query_plan_tests/field_merging_with_skip_and_include.rs +++ b/apollo-federation/tests/query_plan/build_query_plan_tests/field_merging_with_skip_and_include.rs @@ -146,7 +146,7 @@ fn merging_skip_and_include_directives_multiple_applications_differing_order() { hello: Hello! extraFieldToPreventSkipIncludeNodes: String! } - + type Hello { world: String! goodbye: String! @@ -227,3 +227,85 @@ fn merging_skip_and_include_directives_multiple_applications_differing_quantity( "### ); } + +#[test] +fn fields_are_not_overwritten_when_directives_are_removed() { + let planner = planner!( + SubgraphSkip: r#" + type Query { + foo: Foo + } + + type Foo { + bar: Bar + } + + type Bar { + things: String + name: String + } + "#, + ); + assert_plan!( + &planner, + r#" + query Test($b: Boolean!) { + foo @include(if: $b) { + bar { + name + } + bar @include(if: $b) { + things + } + } + } + "#, + @r###" + QueryPlan { + Include(if: $b) { + Fetch(service: "SubgraphSkip") { + { + foo { + bar { + name + things + } + } + } + }, + }, + } + "### + ); + assert_plan!( + &planner, + r#" + query Test($b: Boolean!) 
{ + foo @skip(if: $b) { + bar { + name + } + bar @skip(if: $b) { + things + } + } + } + "#, + @r###" + QueryPlan { + Skip(if: $b) { + Fetch(service: "SubgraphSkip") { + { + foo { + bar { + name + things + } + } + } + }, + }, + } + "### + ); +} diff --git a/apollo-federation/tests/query_plan/build_query_plan_tests/fragment_autogeneration.rs b/apollo-federation/tests/query_plan/build_query_plan_tests/fragment_autogeneration.rs index 876334fa5d..03b43c6245 100644 --- a/apollo-federation/tests/query_plan/build_query_plan_tests/fragment_autogeneration.rs +++ b/apollo-federation/tests/query_plan/build_query_plan_tests/fragment_autogeneration.rs @@ -177,6 +177,9 @@ fn it_handles_fragments_with_one_non_leaf_field() { ); } +/// XXX(@goto-bus-stop): this test is meant to check that fragments with @skip and @include *are* +/// migrated. But we are currently matching JS behavior, where they are not. This test should be +/// updated when we remove JS compatibility. #[test] fn it_migrates_skip_include() { let planner = planner!( @@ -348,6 +351,50 @@ fn fragments_that_share_a_hash_but_are_not_identical_generate_their_own_fragment ); } +#[test] +fn same_as_js_router798() { + let planner = planner!( + config = QueryPlannerConfig { generate_query_fragments: true, reuse_query_fragments: false, ..Default::default() }, + Subgraph1: r#" + interface Interface { a: Int } + type Y implements Interface { a: Int b: Int } + type Z implements Interface { a: Int c: Int } + + type Query { + interfaces(id: Int!): Interface + } + "#, + ); + assert_plan!( + &planner, + r#" + query($var0: Boolean! = true) { + ... @skip(if: $var0) { + field0: interfaces(id: 0) { + field1: __typename + } + } + } + "#, + @r###" + QueryPlan { + Skip(if: $var0) { + Fetch(service: "Subgraph1") { + { + ... 
{ + field0: interfaces(id: 0) { + __typename + field1: __typename + } + } + } + }, + }, + } + "### + ); +} + #[test] fn works_with_key_chains() { let planner = planner!( diff --git a/apollo-federation/tests/query_plan/build_query_plan_tests/interface_object.rs b/apollo-federation/tests/query_plan/build_query_plan_tests/interface_object.rs index d15e05bf6f..2f8ec2a798 100644 --- a/apollo-federation/tests/query_plan/build_query_plan_tests/interface_object.rs +++ b/apollo-federation/tests/query_plan/build_query_plan_tests/interface_object.rs @@ -826,3 +826,133 @@ fn it_handles_interface_object_input_rewrites_when_cloning_dependency_graph() { "### ); } + +#[test] +fn test_interface_object_advance_with_non_collecting_and_type_preserving_transitions_ordering() { + let planner = planner!( + S1: r#" + type A @key(fields: "id") { + id: ID! + } + + type Query { + test: A + } + "#, + S2: r#" + type A @key(fields: "id") { + id: ID! + } + "#, + S3: r#" + type A @key(fields: "id") { + id: ID! + } + "#, + S4: r#" + type A @key(fields: "id") { + id: ID! + } + "#, + Y1: r#" + interface I { + id: ID! + } + + type A implements I @key(fields: "id") @key(fields: "alt_id { id }") { + id: ID! + alt_id: AltID! + } + + type AltID { + id: ID! + } + "#, + Y2: r#" + interface I { + id: ID! + } + + type A implements I @key(fields: "id") @key(fields: "alt_id { id }") { + id: ID! + alt_id: AltID! + } + + type AltID { + id: ID! + } + "#, + Z: r#" + type I @interfaceObject @key(fields: "alt_id { id }") { + alt_id: AltID! + data: String! + } + + type AltID { + id: ID! + } + "#, + ); + assert_plan!( + &planner, + r#" + { + test { + data + } + } + "#, + + // Make sure we fetch S1 -> Y1 -> Z, not S1 -> Y2 -> Z. + // That's following JS QP's behavior. + @r###" + QueryPlan { + Sequence { + Fetch(service: "S1") { + { + test { + __typename + id + } + } + }, + Flatten(path: "test") { + Fetch(service: "Y1") { + { + ... on A { + __typename + id + } + } => + { + ... 
on A { + __typename + alt_id { + id + } + } + } + }, + }, + Flatten(path: "test") { + Fetch(service: "Z") { + { + ... on A { + __typename + alt_id { + id + } + } + } => + { + ... on I { + data + } + } + }, + }, + }, + } + "### + ); +} diff --git a/apollo-federation/tests/query_plan/build_query_plan_tests/introspection_typename_handling.rs b/apollo-federation/tests/query_plan/build_query_plan_tests/introspection_typename_handling.rs index ccf37d9c29..73d6567f98 100644 --- a/apollo-federation/tests/query_plan/build_query_plan_tests/introspection_typename_handling.rs +++ b/apollo-federation/tests/query_plan/build_query_plan_tests/introspection_typename_handling.rs @@ -63,6 +63,68 @@ fn it_preservers_aliased_typename() { ); } +#[test] +fn it_preserves_typename_with_directives() { + let planner = planner!( + Subgraph1: r#" + type Query { + t: T + } + + type T @key(fields: "id") { + id: ID! + } + "#, + ); + assert_plan!( + &planner, + r#" + query($v: Boolean!) { + t { + __typename + __typename @skip(if: $v) + } + } + "#, + @r###" + QueryPlan { + Fetch(service: "Subgraph1") { + { + t { + __typename + __typename @skip(if: $v) + } + } + }, + } + "### + ); + + assert_plan!( + &planner, + r#" + query($v: Boolean!) { + t { + __typename @skip(if: $v) + __typename + } + } + "#, + @r###" + QueryPlan { + Fetch(service: "Subgraph1") { + { + t { + __typename + __typename @skip(if: $v) + } + } + }, + } + "### + ); +} + #[test] fn it_does_not_needlessly_consider_options_for_typename() { let planner = planner!( @@ -295,3 +357,104 @@ fn add_back_sibling_typename_to_interface_object() { "### ); } + +#[test] +fn test_indirect_branch_merging_with_typename_sibling() { + let planner = planner!( + Subgraph1: r#" + type Query { + test: T + } + + interface T { + id: ID! + } + + type A implements T @key(fields: "id") { + id: ID! + } + + type B implements T @key(fields: "id") { + id: ID! + } + "#, + Subgraph2: r#" + interface T { + id: ID! + f: Int! 
+ } + + type A implements T @key(fields: "id") { + id: ID! + f: Int! + } + + type B implements T @key(fields: "id") { + id: ID! + f: Int! + } + "#, + ); + // This operation has two `f` selection instances: one with a __typename sibling and one without. + // It creates multiple identical branches in the form of `... on A { f }` with different `f`. + // The query plan must choose one over the other, which is implementation-specific. + // Currently, the last one is chosen. + assert_plan!( + &planner, + r#" + { + test { + __typename + f # <= This will have a sibling typename value. + ... on A { + f # <= This one will have no sibling typename. + } + } + } + "#, + @r###" + QueryPlan { + Sequence { + Fetch(service: "Subgraph1") { + { + test { + __typename + ... on A { + __typename + id + } + ... on B { + __typename + id + } + } + } + }, + Flatten(path: "test") { + Fetch(service: "Subgraph2") { + { + ... on A { + __typename + id + } + ... on B { + __typename + id + } + } => + { + ... on A { + f + } + ... on B { + __typename + f + } + } + }, + }, + }, + } + "### + ); +} diff --git a/apollo-federation/tests/query_plan/build_query_plan_tests/provides.rs b/apollo-federation/tests/query_plan/build_query_plan_tests/provides.rs index d992cf72e6..71809d9109 100644 --- a/apollo-federation/tests/query_plan/build_query_plan_tests/provides.rs +++ b/apollo-federation/tests/query_plan/build_query_plan_tests/provides.rs @@ -846,3 +846,73 @@ fn it_works_with_type_condition_even_for_types_only_reachable_by_the_at_provides "### ); } + +#[test] +fn test_provides_edge_ordering() { + let planner = planner!( + SubgraphQ: r#" +type A { + id: ID! @external +} + +type Query { + test: A @provides(fields: "id") +} + "#, + SubgraphX: r#" +type A @key(fields: "id") { + id: ID! + data: String! @shareable +} + "#, + SubgraphY: r#" +type A @key(fields: "id") { + id: ID! + data: String!
@shareable +} + "#, + ); + + assert_plan!( + &planner, + r#" +{ + test { # provides id + data + } +} + "#, + + // Make sure @provides edges are ordered as expected. + // `data` is expected to be fetched from SubgraphX, not SubgraphY. + @r###" + QueryPlan { + Sequence { + Fetch(service: "SubgraphQ") { + { + test { + __typename + id + } + } + }, + Flatten(path: "test") { + Fetch(service: "SubgraphX") { + { + ... on A { + __typename + id + } + } => + { + ... on A { + data + } + } + }, + }, + }, + } + "### + ); +} diff --git a/apollo-federation/tests/query_plan/supergraphs/condition_order_router799.graphql b/apollo-federation/tests/query_plan/supergraphs/condition_order_router799.graphql new file mode 100644 index 0000000000..69ef1b485a --- /dev/null +++ b/apollo-federation/tests/query_plan/supergraphs/condition_order_router799.graphql @@ -0,0 +1,58 @@ +# Composed from subgraphs with hash: 3ebe0e8c55ad21763879da6341617f14953e80d7 +schema + @link(url: "https://specs.apollo.dev/link/v1.0") + @link(url: "https://specs.apollo.dev/join/v0.4", for: EXECUTION) +{ + query: Query + mutation: Mutation +} + +directive @join__directive(graphs: [join__Graph!], name: String!, args: join__DirectiveArguments) repeatable on SCHEMA | OBJECT | INTERFACE | FIELD_DEFINITION + +directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE + +directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean, overrideLabel: String) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION + +directive @join__graph(name: String!, url: String!) on ENUM_VALUE + +directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE + +directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true, isInterfaceObject: Boolean! 
= false) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + +directive @join__unionMember(graph: join__Graph!, member: String!) repeatable on UNION + +directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + +scalar join__DirectiveArguments + +scalar join__FieldSet + +enum join__Graph { + BOOKS @join__graph(name: "books", url: "none") +} + +scalar link__Import + +enum link__Purpose { + """ + `SECURITY` features provide metadata necessary to securely resolve fields. + """ + SECURITY + + """ + `EXECUTION` features provide metadata necessary for operation execution. + """ + EXECUTION +} + +type Mutation + @join__type(graph: BOOKS) +{ + bookName(name: String!): Int! +} + +type Query + @join__type(graph: BOOKS) +{ + bookName: String! +} diff --git a/apollo-federation/tests/query_plan/supergraphs/fields_are_not_overwritten_when_directives_are_removed.graphql b/apollo-federation/tests/query_plan/supergraphs/fields_are_not_overwritten_when_directives_are_removed.graphql new file mode 100644 index 0000000000..1b369c434a --- /dev/null +++ b/apollo-federation/tests/query_plan/supergraphs/fields_are_not_overwritten_when_directives_are_removed.graphql @@ -0,0 +1,64 @@ +# Composed from subgraphs with hash: 654c7bcba86c6f5845a7cb520710cfa37b678e3d +schema + @link(url: "https://specs.apollo.dev/link/v1.0") + @link(url: "https://specs.apollo.dev/join/v0.4", for: EXECUTION) +{ + query: Query +} + +directive @join__directive(graphs: [join__Graph!], name: String!, args: join__DirectiveArguments) repeatable on SCHEMA | OBJECT | INTERFACE | FIELD_DEFINITION + +directive @join__enumValue(graph: join__Graph!) 
repeatable on ENUM_VALUE + +directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean, overrideLabel: String) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION + +directive @join__graph(name: String!, url: String!) on ENUM_VALUE + +directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE + +directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true, isInterfaceObject: Boolean! = false) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + +directive @join__unionMember(graph: join__Graph!, member: String!) repeatable on UNION + +directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + +type Bar + @join__type(graph: SUBGRAPHSKIP) +{ + things: String + name: String +} + +type Foo + @join__type(graph: SUBGRAPHSKIP) +{ + bar: Bar +} + +scalar join__DirectiveArguments + +scalar join__FieldSet + +enum join__Graph { + SUBGRAPHSKIP @join__graph(name: "SubgraphSkip", url: "none") +} + +scalar link__Import + +enum link__Purpose { + """ + `SECURITY` features provide metadata necessary to securely resolve fields. + """ + SECURITY + + """ + `EXECUTION` features provide metadata necessary for operation execution. 
+ """ + EXECUTION +} + +type Query + @join__type(graph: SUBGRAPHSKIP) +{ + foo: Foo +} diff --git a/apollo-federation/tests/query_plan/supergraphs/it_preserves_typename_with_directives.graphql b/apollo-federation/tests/query_plan/supergraphs/it_preserves_typename_with_directives.graphql new file mode 100644 index 0000000000..79dbbde767 --- /dev/null +++ b/apollo-federation/tests/query_plan/supergraphs/it_preserves_typename_with_directives.graphql @@ -0,0 +1,57 @@ +# Composed from subgraphs with hash: 5781ed69d05761a7c894a8aac04728581a2475d9 +schema + @link(url: "https://specs.apollo.dev/link/v1.0") + @link(url: "https://specs.apollo.dev/join/v0.4", for: EXECUTION) +{ + query: Query +} + +directive @join__directive(graphs: [join__Graph!], name: String!, args: join__DirectiveArguments) repeatable on SCHEMA | OBJECT | INTERFACE | FIELD_DEFINITION + +directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE + +directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean, overrideLabel: String) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION + +directive @join__graph(name: String!, url: String!) on ENUM_VALUE + +directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE + +directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true, isInterfaceObject: Boolean! = false) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + +directive @join__unionMember(graph: join__Graph!, member: String!) 
repeatable on UNION + +directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + +scalar join__DirectiveArguments + +scalar join__FieldSet + +enum join__Graph { + SUBGRAPH1 @join__graph(name: "Subgraph1", url: "none") +} + +scalar link__Import + +enum link__Purpose { + """ + `SECURITY` features provide metadata necessary to securely resolve fields. + """ + SECURITY + + """ + `EXECUTION` features provide metadata necessary for operation execution. + """ + EXECUTION +} + +type Query + @join__type(graph: SUBGRAPH1) +{ + t: T +} + +type T + @join__type(graph: SUBGRAPH1, key: "id") +{ + id: ID! +} diff --git a/apollo-federation/tests/query_plan/supergraphs/merging_skip_and_include_directives_multiple_applications_differing_order.graphql b/apollo-federation/tests/query_plan/supergraphs/merging_skip_and_include_directives_multiple_applications_differing_order.graphql index 9d3b36f76a..80f6562f18 100644 --- a/apollo-federation/tests/query_plan/supergraphs/merging_skip_and_include_directives_multiple_applications_differing_order.graphql +++ b/apollo-federation/tests/query_plan/supergraphs/merging_skip_and_include_directives_multiple_applications_differing_order.graphql @@ -1,4 +1,4 @@ -# Composed from subgraphs with hash: 1cce0a8dc674e651027f8d368504edb388f9b239 +# Composed from subgraphs with hash: fbbe470b5e40af130de42043f74772ec30ee60ff schema @link(url: "https://specs.apollo.dev/link/v1.0") @link(url: "https://specs.apollo.dev/join/v0.4", for: EXECUTION) diff --git a/apollo-federation/tests/query_plan/supergraphs/same_as_js_router798.graphql b/apollo-federation/tests/query_plan/supergraphs/same_as_js_router798.graphql new file mode 100644 index 0000000000..6895c4bf96 --- /dev/null +++ b/apollo-federation/tests/query_plan/supergraphs/same_as_js_router798.graphql @@ -0,0 +1,73 @@ +# Composed from subgraphs with hash: ee7fce9eb672edf9b036a25bcae0b056ccf5f451 +schema + @link(url: "https://specs.apollo.dev/link/v1.0") + 
@link(url: "https://specs.apollo.dev/join/v0.4", for: EXECUTION) +{ + query: Query +} + +directive @join__directive(graphs: [join__Graph!], name: String!, args: join__DirectiveArguments) repeatable on SCHEMA | OBJECT | INTERFACE | FIELD_DEFINITION + +directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE + +directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean, overrideLabel: String) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION + +directive @join__graph(name: String!, url: String!) on ENUM_VALUE + +directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE + +directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true, isInterfaceObject: Boolean! = false) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + +directive @join__unionMember(graph: join__Graph!, member: String!) repeatable on UNION + +directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + +interface Interface + @join__type(graph: SUBGRAPH1) +{ + a: Int +} + +scalar join__DirectiveArguments + +scalar join__FieldSet + +enum join__Graph { + SUBGRAPH1 @join__graph(name: "Subgraph1", url: "none") +} + +scalar link__Import + +enum link__Purpose { + """ + `SECURITY` features provide metadata necessary to securely resolve fields. + """ + SECURITY + + """ + `EXECUTION` features provide metadata necessary for operation execution. 
+ """ + EXECUTION +} + +type Query + @join__type(graph: SUBGRAPH1) +{ + interfaces(id: Int!): Interface +} + +type Y implements Interface + @join__implements(graph: SUBGRAPH1, interface: "Interface") + @join__type(graph: SUBGRAPH1) +{ + a: Int + b: Int +} + +type Z implements Interface + @join__implements(graph: SUBGRAPH1, interface: "Interface") + @join__type(graph: SUBGRAPH1) +{ + a: Int + c: Int +} diff --git a/apollo-federation/tests/query_plan/supergraphs/test_indirect_branch_merging_with_typename_sibling.graphql b/apollo-federation/tests/query_plan/supergraphs/test_indirect_branch_merging_with_typename_sibling.graphql new file mode 100644 index 0000000000..462629b77a --- /dev/null +++ b/apollo-federation/tests/query_plan/supergraphs/test_indirect_branch_merging_with_typename_sibling.graphql @@ -0,0 +1,81 @@ +# Composed from subgraphs with hash: 9be0826e3b911556466c2c410f7df8b53c241774 +schema + @link(url: "https://specs.apollo.dev/link/v1.0") + @link(url: "https://specs.apollo.dev/join/v0.4", for: EXECUTION) +{ + query: Query +} + +directive @join__directive(graphs: [join__Graph!], name: String!, args: join__DirectiveArguments) repeatable on SCHEMA | OBJECT | INTERFACE | FIELD_DEFINITION + +directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE + +directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean, overrideLabel: String) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION + +directive @join__graph(name: String!, url: String!) on ENUM_VALUE + +directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE + +directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true, isInterfaceObject: Boolean! 
= false) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + +directive @join__unionMember(graph: join__Graph!, member: String!) repeatable on UNION + +directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + +type A implements T + @join__implements(graph: SUBGRAPH1, interface: "T") + @join__implements(graph: SUBGRAPH2, interface: "T") + @join__type(graph: SUBGRAPH1, key: "id") + @join__type(graph: SUBGRAPH2, key: "id") +{ + id: ID! + f: Int! @join__field(graph: SUBGRAPH2) +} + +type B implements T + @join__implements(graph: SUBGRAPH1, interface: "T") + @join__implements(graph: SUBGRAPH2, interface: "T") + @join__type(graph: SUBGRAPH1, key: "id") + @join__type(graph: SUBGRAPH2, key: "id") +{ + id: ID! + f: Int! @join__field(graph: SUBGRAPH2) +} + +scalar join__DirectiveArguments + +scalar join__FieldSet + +enum join__Graph { + SUBGRAPH1 @join__graph(name: "Subgraph1", url: "none") + SUBGRAPH2 @join__graph(name: "Subgraph2", url: "none") +} + +scalar link__Import + +enum link__Purpose { + """ + `SECURITY` features provide metadata necessary to securely resolve fields. + """ + SECURITY + + """ + `EXECUTION` features provide metadata necessary for operation execution. + """ + EXECUTION +} + +type Query + @join__type(graph: SUBGRAPH1) + @join__type(graph: SUBGRAPH2) +{ + test: T @join__field(graph: SUBGRAPH1) +} + +interface T + @join__type(graph: SUBGRAPH1) + @join__type(graph: SUBGRAPH2) +{ + id: ID! + f: Int! 
@join__field(graph: SUBGRAPH2) +} diff --git a/apollo-federation/tests/query_plan/supergraphs/test_interface_object_advance_with_non_collecting_and_type_preserving_transitions_ordering.graphql b/apollo-federation/tests/query_plan/supergraphs/test_interface_object_advance_with_non_collecting_and_type_preserving_transitions_ordering.graphql new file mode 100644 index 0000000000..da08220702 --- /dev/null +++ b/apollo-federation/tests/query_plan/supergraphs/test_interface_object_advance_with_non_collecting_and_type_preserving_transitions_ordering.graphql @@ -0,0 +1,98 @@ +# Composed from subgraphs with hash: 415e22ad50245441330e67dc6637cf09714a4831 +schema + @link(url: "https://specs.apollo.dev/link/v1.0") + @link(url: "https://specs.apollo.dev/join/v0.4", for: EXECUTION) +{ + query: Query +} + +directive @join__directive(graphs: [join__Graph!], name: String!, args: join__DirectiveArguments) repeatable on SCHEMA | OBJECT | INTERFACE | FIELD_DEFINITION + +directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE + +directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean, overrideLabel: String) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION + +directive @join__graph(name: String!, url: String!) on ENUM_VALUE + +directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE + +directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true, isInterfaceObject: Boolean! = false) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + +directive @join__unionMember(graph: join__Graph!, member: String!) 
repeatable on UNION + +directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + +type A implements I + @join__implements(graph: Y1, interface: "I") + @join__implements(graph: Y2, interface: "I") + @join__type(graph: S1, key: "id") + @join__type(graph: S2, key: "id") + @join__type(graph: S3, key: "id") + @join__type(graph: S4, key: "id") + @join__type(graph: Y1, key: "id") + @join__type(graph: Y1, key: "alt_id { id }") + @join__type(graph: Y2, key: "id") + @join__type(graph: Y2, key: "alt_id { id }") +{ + id: ID! + alt_id: AltID! @join__field(graph: Y1) @join__field(graph: Y2) + data: String! @join__field +} + +type AltID + @join__type(graph: Y1) + @join__type(graph: Y2) + @join__type(graph: Z) +{ + id: ID! +} + +interface I + @join__type(graph: Y1) + @join__type(graph: Y2) + @join__type(graph: Z, key: "alt_id { id }", isInterfaceObject: true) +{ + id: ID! @join__field(graph: Y1) @join__field(graph: Y2) + alt_id: AltID! @join__field(graph: Z) + data: String! @join__field(graph: Z) +} + +scalar join__DirectiveArguments + +scalar join__FieldSet + +enum join__Graph { + S1 @join__graph(name: "S1", url: "none") + S2 @join__graph(name: "S2", url: "none") + S3 @join__graph(name: "S3", url: "none") + S4 @join__graph(name: "S4", url: "none") + Y1 @join__graph(name: "Y1", url: "none") + Y2 @join__graph(name: "Y2", url: "none") + Z @join__graph(name: "Z", url: "none") +} + +scalar link__Import + +enum link__Purpose { + """ + `SECURITY` features provide metadata necessary to securely resolve fields. + """ + SECURITY + + """ + `EXECUTION` features provide metadata necessary for operation execution. 
+ """ + EXECUTION +} + +type Query + @join__type(graph: S1) + @join__type(graph: S2) + @join__type(graph: S3) + @join__type(graph: S4) + @join__type(graph: Y1) + @join__type(graph: Y2) + @join__type(graph: Z) +{ + test: A @join__field(graph: S1) +} diff --git a/apollo-federation/tests/query_plan/supergraphs/test_provides_edge_ordering.graphql b/apollo-federation/tests/query_plan/supergraphs/test_provides_edge_ordering.graphql new file mode 100644 index 0000000000..6e2401f7d4 --- /dev/null +++ b/apollo-federation/tests/query_plan/supergraphs/test_provides_edge_ordering.graphql @@ -0,0 +1,64 @@ +# Composed from subgraphs with hash: f5cb1210587d45fee11b9c57247d6c570d0ae7fd +schema + @link(url: "https://specs.apollo.dev/link/v1.0") + @link(url: "https://specs.apollo.dev/join/v0.4", for: EXECUTION) +{ + query: Query +} + +directive @join__directive(graphs: [join__Graph!], name: String!, args: join__DirectiveArguments) repeatable on SCHEMA | OBJECT | INTERFACE | FIELD_DEFINITION + +directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE + +directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean, overrideLabel: String) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION + +directive @join__graph(name: String!, url: String!) on ENUM_VALUE + +directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE + +directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true, isInterfaceObject: Boolean! = false) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + +directive @join__unionMember(graph: join__Graph!, member: String!) 
repeatable on UNION + +directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + +type A + @join__type(graph: SUBGRAPHQ) + @join__type(graph: SUBGRAPHX, key: "id") + @join__type(graph: SUBGRAPHY, key: "id") +{ + id: ID! @join__field(graph: SUBGRAPHQ, external: true) @join__field(graph: SUBGRAPHX) @join__field(graph: SUBGRAPHY) + data: String! @join__field(graph: SUBGRAPHX) @join__field(graph: SUBGRAPHY) +} + +scalar join__DirectiveArguments + +scalar join__FieldSet + +enum join__Graph { + SUBGRAPHQ @join__graph(name: "SubgraphQ", url: "none") + SUBGRAPHX @join__graph(name: "SubgraphX", url: "none") + SUBGRAPHY @join__graph(name: "SubgraphY", url: "none") +} + +scalar link__Import + +enum link__Purpose { + """ + `SECURITY` features provide metadata necessary to securely resolve fields. + """ + SECURITY + + """ + `EXECUTION` features provide metadata necessary for operation execution. + """ + EXECUTION +} + +type Query + @join__type(graph: SUBGRAPHQ) + @join__type(graph: SUBGRAPHX) + @join__type(graph: SUBGRAPHY) +{ + test: A @join__field(graph: SUBGRAPHQ, provides: "id") +} diff --git a/apollo-router/Cargo.toml b/apollo-router/Cargo.toml index 246ba6b68c..15fc64ac23 100644 --- a/apollo-router/Cargo.toml +++ b/apollo-router/Cargo.toml @@ -189,7 +189,7 @@ regex = "1.10.5" reqwest.workspace = true # note: this dependency should _always_ be pinned, prefix the version with an `=` -router-bridge = "=0.6.3+v2.9.2" +router-bridge = "=0.6.4+v2.9.3" rust-embed = { version = "8.4.0", features = ["include-exclude"] } rustls = "0.21.12" @@ -236,7 +236,6 @@ tracing = "0.1.40" tracing-core = "0.1.32" tracing-futures = { version = "0.2.5", features = ["futures-03"] } tracing-subscriber = { version = "0.3.18", features = ["env-filter", "json"] } -trust-dns-resolver = "0.23.2" url = { version = "2.5.2", features = ["serde"] } urlencoding = "2.1.3" uuid = { version = "1.9.1", features = ["serde", "v4"] } @@ -247,6 +246,7 @@ 
tokio-tungstenite = { version = "0.20.1", features = [ "rustls-tls-native-roots", ] } tokio-rustls = "0.24.1" +hickory-resolver = "0.24.1" http-serde = "1.1.3" hmac = "0.12.1" parking_lot = { version = "0.12.3", features = ["serde"] } diff --git a/apollo-router/src/apollo_studio_interop/mod.rs b/apollo-router/src/apollo_studio_interop/mod.rs index 9aeaee61f6..4499a9b557 100644 --- a/apollo-router/src/apollo_studio_interop/mod.rs +++ b/apollo-router/src/apollo_studio_interop/mod.rs @@ -223,20 +223,17 @@ pub(crate) fn generate_extended_references( pub(crate) fn extract_enums_from_response( query: Arc<Query>, - operation_name: Option<&str>, schema: &Valid<Schema>, response_body: &Object, existing_refs: &mut ReferencedEnums, ) { - if let Some(operation) = query.operation(operation_name) { - extract_enums_from_selection_set( - &operation.selection_set, - &query.fragments, - schema, - response_body, - existing_refs, - ); - } + extract_enums_from_selection_set( + &query.operation.selection_set, + &query.fragments, + schema, + response_body, + existing_refs, + ); } fn add_enum_value_to_map( diff --git a/apollo-router/src/apollo_studio_interop/tests.rs b/apollo-router/src/apollo_studio_interop/tests.rs index 466ca301b0..754951983c 100644 --- a/apollo-router/src/apollo_studio_interop/tests.rs +++ b/apollo-router/src/apollo_studio_interop/tests.rs @@ -133,7 +133,6 @@ fn enums_from_response( let mut result = ReferencedEnums::new(); extract_enums_from_response( Arc::new(query), - operation_name, schema.supergraph_schema(), &response_body, &mut result, diff --git a/apollo-router/src/batching.rs b/apollo-router/src/batching.rs index b570587b6f..a66aca8d87 100644 --- a/apollo-router/src/batching.rs +++ b/apollo-router/src/batching.rs @@ -28,6 +28,7 @@ use crate::services::http::HttpClientServiceFactory; use crate::services::process_batches; use crate::services::router::body::get_body_bytes; use crate::services::router::body::RouterBody; +use crate::services::subgraph::SubgraphRequestId; use
crate::services::SubgraphRequest; use crate::services::SubgraphResponse; use crate::Context; @@ -426,7 +427,7 @@ pub(crate) async fn assemble_batch( ) -> Result< ( String, - Vec<Context>, + Vec<(Context, SubgraphRequestId)>, http::Request<RouterBody>, Vec<oneshot::Sender<Result<SubgraphResponse, BoxError>>>, ), @@ -445,8 +446,8 @@ pub(crate) async fn assemble_batch( // Retain the various contexts for later use let contexts = requests .iter() - .map(|x| x.context.clone()) - .collect::<Vec<_>>(); + .map(|request| (request.context.clone(), request.id.clone())) + .collect::<Vec<_>>(); // Grab the common info from the first request let first_request = requests .into_iter() @@ -470,19 +471,30 @@ mod tests { use std::sync::Arc; use std::time::Duration; + use http::header::ACCEPT; + use http::header::CONTENT_TYPE; use tokio::sync::oneshot; + use tower::ServiceExt; + use wiremock::matchers; + use wiremock::MockServer; + use wiremock::ResponseTemplate; use super::assemble_batch; use super::Batch; use super::BatchQueryInfo; use crate::graphql; - use crate::plugins::traffic_shaping::Http2Config; + use crate::graphql::Request; + use crate::layers::ServiceExt as LayerExt; use crate::query_planner::fetch::QueryHash; use crate::services::http::HttpClientServiceFactory; + use crate::services::router; + use crate::services::subgraph; + use crate::services::subgraph::SubgraphRequestId; use crate::services::SubgraphRequest; use crate::services::SubgraphResponse; use crate::Configuration; use crate::Context; + use crate::TestHarness; #[tokio::test(flavor = "multi_thread")] async fn it_assembles_batch() { @@ -523,7 +535,7 @@ mod tests { let output_context_ids = contexts .iter() - .map(|r| r.id.clone()) + .map(|r| r.0.id.clone()) .collect::<Vec<_>>(); // Make sure all of our contexts are preserved during assembly assert_eq!(input_context_ids, output_context_ids); @@ -562,6 +574,7 @@ mod tests { .unwrap(), context: Context::new(), subgraph_name: None, + id: SubgraphRequestId(String::new()), }; tx.send(Ok(response)).unwrap(); @@ -620,7 +633,7 @@ mod tests { let factory =
HttpClientServiceFactory::from_config( "testbatch", &Configuration::default(), - Http2Config::Disable, + crate::configuration::shared::Client::default(), ); let request = SubgraphRequest::fake_builder() .subgraph_request( @@ -659,7 +672,7 @@ mod tests { let factory = HttpClientServiceFactory::from_config( "testbatch", &Configuration::default(), - Http2Config::Disable, + crate::configuration::shared::Client::default(), ); let request = SubgraphRequest::fake_builder() .subgraph_request( @@ -694,7 +707,7 @@ mod tests { let factory = HttpClientServiceFactory::from_config( "testbatch", &Configuration::default(), - Http2Config::Disable, + crate::configuration::shared::Client::default(), ); let request = SubgraphRequest::fake_builder() .subgraph_request( @@ -722,4 +735,130 @@ mod tests { .await .is_err()); } + + fn expect_batch(request: &wiremock::Request) -> ResponseTemplate { + let requests: Vec<Request> = request.body_json().unwrap(); + + // Extract info about this operation + let (subgraph, count): (String, usize) = { + let re = regex::Regex::new(r"entry([AB])\(count:([0-9]+)\)").unwrap(); + let captures = re.captures(requests[0].query.as_ref().unwrap()).unwrap(); + + (captures[1].to_string(), captures[2].parse().unwrap()) + }; + + // We should have gotten `count` elements + assert_eq!(requests.len(), count); + + // Each element should be for the specified subgraph and should have a field selection + // of index.
+ // Note: The router appends info to the query, so we append it at this check + for (index, request) in requests.into_iter().enumerate() { + assert_eq!( + request.query, + Some(format!( + "query op{index}__{}__0{{entry{}(count:{count}){{index}}}}", + subgraph.to_lowercase(), + subgraph + )) + ); + } + + ResponseTemplate::new(200).set_body_json( + (0..count) + .map(|index| { + serde_json::json!({ + "data": { + format!("entry{subgraph}"): { + "index": index + } + } + }) + }) + .collect::<Vec<_>>(), + ) + } + + #[tokio::test(flavor = "multi_thread")] + async fn it_matches_subgraph_request_ids_to_responses() { + // Create a wiremock server for each handler + let mock_server = MockServer::start().await; + mock_server + .register( + wiremock::Mock::given(matchers::method("POST")) + .and(matchers::path("/a")) + .respond_with(expect_batch) + .expect(1), + ) + .await; + + let schema = include_str!("../tests/fixtures/batching/schema.graphql"); + let service = TestHarness::builder() + .configuration_json(serde_json::json!({ + "include_subgraph_errors": { + "all": true + }, + "batching": { + "enabled": true, + "mode": "batch_http_link", + "subgraph": { + "all": { + "enabled": true + } + } + }, + "override_subgraph_url": { + "a": format!("{}/a", mock_server.uri()) + }})) + .unwrap() + .schema(schema) + .subgraph_hook(move |_subgraph_name, service| { + service + .map_future_with_request_data( + |r: &subgraph::Request| r.id.clone(), + |id, f| async move { + let r: subgraph::ServiceResult = f.await; + assert_eq!(id, r.as_ref().map(|r| r.id.clone()).unwrap()); + r + }, + ) + .boxed() + }) + .with_subgraph_network_requests() + .build_router() + .await + .unwrap(); + + let requests: Vec<_> = (0..3) + .map(|index| { + Request::fake_builder() + .query(format!("query op{index}{{ entryA(count: 3) {{ index }} }}")) + .build() + }) + .collect(); + let request = serde_json::to_value(requests).unwrap(); + + let context = Context::new(); + let request = router::Request { + context, + 
router_request: http::Request::builder() + .method("POST") + .header(CONTENT_TYPE, "application/json") + .header(ACCEPT, "application/json") + .body(serde_json::to_vec(&request).unwrap().into()) + .unwrap(), + }; + + let response = service + .oneshot(request) + .await + .unwrap() + .next_response() + .await + .unwrap() + .unwrap(); + + let response: serde_json::Value = serde_json::from_slice(&response).unwrap(); + insta::assert_json_snapshot!(response); + } } diff --git a/apollo-router/src/cache/storage.rs b/apollo-router/src/cache/storage.rs index 15f3b3dc8a..15f452ed28 100644 --- a/apollo-router/src/cache/storage.rs +++ b/apollo-router/src/cache/storage.rs @@ -101,6 +101,18 @@ where }) } + pub(crate) fn new_in_memory(max_capacity: NonZeroUsize, caller: &'static str) -> Self { + Self { + cache_size_gauge: Default::default(), + cache_estimated_storage_gauge: Default::default(), + cache_size: Default::default(), + cache_estimated_storage: Default::default(), + caller, + inner: Arc::new(Mutex::new(LruCache::new(max_capacity))), + redis: None, + } + } + fn create_cache_size_gauge(&self) -> ObservableGauge<u64> { let meter: opentelemetry::metrics::Meter = metrics::meter_provider().meter(METER_NAME); let current_cache_size_for_gauge = self.cache_size.clone(); diff --git a/apollo-router/src/configuration/metrics.rs b/apollo-router/src/configuration/metrics.rs index b4a89d6476..221e1003e8 100644 --- a/apollo-router/src/configuration/metrics.rs +++ b/apollo-router/src/configuration/metrics.rs @@ -554,11 +554,6 @@ impl InstrumentData { super::QueryPlannerMode::BothBestEffort => "both_best_effort", super::QueryPlannerMode::New => "new", }; - let experimental_introspection_mode = match configuration.experimental_introspection_mode { - super::IntrospectionMode::Legacy => "legacy", - super::IntrospectionMode::Both => "both", - super::IntrospectionMode::New => "new", - }; self.data.insert( "apollo.router.config.experimental_query_planner_mode".to_string(), @@ -567,13 +562,6 @@ impl
InstrumentData { HashMap::from_iter([("mode".to_string(), experimental_query_planner_mode.into())]), ), ); - self.data.insert( - "apollo.router.config.experimental_introspection_mode".to_string(), - ( - 1, - HashMap::from_iter([("mode".to_string(), experimental_introspection_mode.into())]), - ), - ); } } @@ -607,7 +595,6 @@ mod test { use crate::configuration::metrics::InstrumentData; use crate::configuration::metrics::Metrics; - use crate::configuration::IntrospectionMode; use crate::configuration::QueryPlannerMode; use crate::uplink::license_enforcement::LicenseState; use crate::Configuration; @@ -688,7 +675,6 @@ mod test { fn test_experimental_mode_metrics() { let mut data = InstrumentData::default(); data.populate_deno_or_rust_mode_instruments(&Configuration { - experimental_introspection_mode: IntrospectionMode::Legacy, experimental_query_planner_mode: QueryPlannerMode::Both, ..Default::default() }); @@ -700,10 +686,7 @@ mod test { fn test_experimental_mode_metrics_2() { let mut data = InstrumentData::default(); // Default query planner value should still be reported - data.populate_deno_or_rust_mode_instruments(&Configuration { - experimental_introspection_mode: IntrospectionMode::New, - ..Default::default() - }); + data.populate_deno_or_rust_mode_instruments(&Configuration::default()); let _metrics: Metrics = data.into(); assert_non_zero_metrics_snapshot!(); } @@ -712,7 +695,6 @@ mod test { fn test_experimental_mode_metrics_3() { let mut data = InstrumentData::default(); data.populate_deno_or_rust_mode_instruments(&Configuration { - experimental_introspection_mode: IntrospectionMode::New, experimental_query_planner_mode: QueryPlannerMode::New, ..Default::default() }); diff --git a/apollo-router/src/configuration/migrations/0030-experimental_introspection_mode.yaml b/apollo-router/src/configuration/migrations/0030-experimental_introspection_mode.yaml new file mode 100644 index 0000000000..feb3fdafc2 --- /dev/null +++ 
b/apollo-router/src/configuration/migrations/0030-experimental_introspection_mode.yaml @@ -0,0 +1,4 @@ +description: experimental_introspection_mode was removed +actions: + - type: delete + path: experimental_introspection_mode diff --git a/apollo-router/src/configuration/migrations/0030-legacy-introspection-caching.yaml b/apollo-router/src/configuration/migrations/0030-legacy-introspection-caching.yaml new file mode 100644 index 0000000000..37c6f28180 --- /dev/null +++ b/apollo-router/src/configuration/migrations/0030-legacy-introspection-caching.yaml @@ -0,0 +1,4 @@ +description: supergraph.query_planning.legacy_introspection_caching was removed +actions: + - type: delete + path: supergraph.query_planning.legacy_introspection_caching diff --git a/apollo-router/src/configuration/mod.rs b/apollo-router/src/configuration/mod.rs index a1f0129603..df8f7b40f4 100644 --- a/apollo-router/src/configuration/mod.rs +++ b/apollo-router/src/configuration/mod.rs @@ -166,10 +166,6 @@ pub struct Configuration { #[serde(default)] pub(crate) experimental_query_planner_mode: QueryPlannerMode, - /// Set the GraphQL schema introspection implementation to use. - #[serde(default)] - pub(crate) experimental_introspection_mode: IntrospectionMode, - /// Plugin configuration #[serde(default)] pub(crate) plugins: UserPlugins, @@ -232,21 +228,6 @@ pub(crate) enum QueryPlannerMode { BothBestEffort, } -/// Which implementation of GraphQL schema introspection to use, if enabled -#[derive(Copy, Clone, PartialEq, Eq, Default, Derivative, Serialize, Deserialize, JsonSchema)] -#[derivative(Debug)] -#[serde(rename_all = "lowercase")] -pub(crate) enum IntrospectionMode { - /// Use the new Rust-based implementation. - New, - /// Use the old JavaScript-based implementation. - Legacy, - /// Use Rust-based and Javascript-based implementations side by side, - /// logging warnings if the implementations disagree. 
- #[default] - Both, -} - impl<'de> serde::Deserialize<'de> for Configuration { fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where @@ -273,7 +254,6 @@ impl<'de> serde::Deserialize<'de> for Configuration { batching: Batching, experimental_type_conditioned_fetching: bool, experimental_query_planner_mode: QueryPlannerMode, - experimental_introspection_mode: IntrospectionMode, } let mut ad_hoc: AdHocConfiguration = serde::Deserialize::deserialize(deserializer)?; @@ -301,7 +281,6 @@ impl<'de> serde::Deserialize<'de> for Configuration { experimental_chaos: ad_hoc.experimental_chaos, experimental_type_conditioned_fetching: ad_hoc.experimental_type_conditioned_fetching, experimental_query_planner_mode: ad_hoc.experimental_query_planner_mode, - experimental_introspection_mode: ad_hoc.experimental_introspection_mode, plugins: ad_hoc.plugins, apollo_plugins: ad_hoc.apollo_plugins, batching: ad_hoc.batching, @@ -348,7 +327,6 @@ impl Configuration { experimental_type_conditioned_fetching: Option<bool>, batching: Option<Batching>, experimental_query_planner_mode: Option<QueryPlannerMode>, - experimental_introspection_mode: Option<IntrospectionMode>, ) -> Result<Self, ConfigurationError> { let notify = Self::notify(&apollo_plugins)?; @@ -364,7 +342,6 @@ impl Configuration { limits: operation_limits.unwrap_or_default(), experimental_chaos: chaos.unwrap_or_default(), experimental_query_planner_mode: experimental_query_planner_mode.unwrap_or_default(), - experimental_introspection_mode: experimental_introspection_mode.unwrap_or_default(), plugins: UserPlugins { plugins: Some(plugins), }, @@ -467,7 +444,6 @@ impl Configuration { batching: Option<Batching>, experimental_type_conditioned_fetching: Option<bool>, experimental_query_planner_mode: Option<QueryPlannerMode>, - experimental_introspection_mode: Option<IntrospectionMode>, ) -> Result<Self, ConfigurationError> { let configuration = Self { validated_yaml: Default::default(), @@ -479,7 +455,6 @@ impl Configuration { limits: operation_limits.unwrap_or_default(), experimental_chaos: chaos.unwrap_or_default(), experimental_query_planner_mode: 
experimental_query_planner_mode.unwrap_or_default(), - experimental_introspection_mode: experimental_introspection_mode.unwrap_or_default(), plugins: UserPlugins { plugins: Some(plugins), }, @@ -885,7 +860,7 @@ impl Default for Apq { } /// Query planning cache configuration -#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema)] +#[derive(Debug, Clone, Deserialize, Serialize, JsonSchema, Default)] #[serde(deny_unknown_fields, default)] pub(crate) struct QueryPlanning { /// Cache configuration @@ -926,32 +901,6 @@ pub(crate) struct QueryPlanning { /// Set the size of a pool of workers to enable query planning parallelism. /// Default: 1. pub(crate) experimental_parallelism: AvailableParallelism, - - /// Activates introspection response caching - /// Historically, the Router has executed introspection queries in the query planner, and cached their - /// response in its cache because they were expensive. This will change soon as introspection will be - /// removed from the query planner. In the meantime, since storing introspection responses can fill up - /// the cache, this option can be used to deactivate it. 
- /// Default: true pub(crate) legacy_introspection_caching: bool, } - -impl Default for QueryPlanning { - fn default() -> Self { - Self { - cache: QueryPlanCache::default(), - warmed_up_queries: Default::default(), - experimental_plans_limit: Default::default(), - experimental_parallelism: Default::default(), - experimental_paths_limit: Default::default(), - experimental_reuse_query_plans: Default::default(), - legacy_introspection_caching: default_legacy_introspection_caching(), - } - } -} - -const fn default_legacy_introspection_caching() -> bool { - true } impl QueryPlanning { diff --git a/apollo-router/src/configuration/shared/mod.rs b/apollo-router/src/configuration/shared/mod.rs index 9186b65a3d..ab164ec13f 100644 --- a/apollo-router/src/configuration/shared/mod.rs +++ b/apollo-router/src/configuration/shared/mod.rs @@ -3,8 +3,25 @@ use serde::Deserialize; use crate::plugins::traffic_shaping::Http2Config; -#[derive(Debug, Clone, Default, Deserialize, JsonSchema)] +#[derive(PartialEq, Debug, Clone, Default, Deserialize, JsonSchema, buildstructor::Builder)] #[serde(deny_unknown_fields, default)] pub(crate) struct Client { pub(crate) experimental_http2: Option<Http2Config>, + pub(crate) dns_resolution_strategy: Option<DnsResolutionStrategy>, +} + +#[derive(PartialEq, Default, Debug, Clone, Copy, Deserialize, JsonSchema)] +#[serde(rename_all = "snake_case")] +pub(crate) enum DnsResolutionStrategy { + /// Only query for `A` (IPv4) records + Ipv4Only, + /// Only query for `AAAA` (IPv6) records + Ipv6Only, + /// Query for both `A` (IPv4) and `AAAA` (IPv6) records in parallel + Ipv4AndIpv6, + /// Query for `AAAA` (IPv6) records first; if that fails, query for `A` (IPv4) records + Ipv6ThenIpv4, + #[default] + /// Default: Query for `A` (IPv4) records first; if that fails, query for `AAAA` (IPv6) records + Ipv4ThenIpv6, } diff --git a/apollo-router/src/configuration/snapshots/apollo_router__configuration__metrics__test__experimental_mode_metrics.snap 
b/apollo-router/src/configuration/snapshots/apollo_router__configuration__metrics__test__experimental_mode_metrics.snap index 3bd3c58289..513298fd51 100644 --- a/apollo-router/src/configuration/snapshots/apollo_router__configuration__metrics__test__experimental_mode_metrics.snap +++ b/apollo-router/src/configuration/snapshots/apollo_router__configuration__metrics__test__experimental_mode_metrics.snap @@ -2,12 +2,6 @@ source: apollo-router/src/configuration/metrics.rs expression: "&metrics.non_zero()" --- -- name: apollo.router.config.experimental_introspection_mode - data: - datapoints: - - value: 1 - attributes: - mode: legacy - name: apollo.router.config.experimental_query_planner_mode data: datapoints: diff --git a/apollo-router/src/configuration/snapshots/apollo_router__configuration__metrics__test__experimental_mode_metrics_2.snap b/apollo-router/src/configuration/snapshots/apollo_router__configuration__metrics__test__experimental_mode_metrics_2.snap index 660542eeba..43cb1b8568 100644 --- a/apollo-router/src/configuration/snapshots/apollo_router__configuration__metrics__test__experimental_mode_metrics_2.snap +++ b/apollo-router/src/configuration/snapshots/apollo_router__configuration__metrics__test__experimental_mode_metrics_2.snap @@ -2,12 +2,6 @@ source: apollo-router/src/configuration/metrics.rs expression: "&metrics.non_zero()" --- -- name: apollo.router.config.experimental_introspection_mode - data: - datapoints: - - value: 1 - attributes: - mode: new - name: apollo.router.config.experimental_query_planner_mode data: datapoints: diff --git a/apollo-router/src/configuration/snapshots/apollo_router__configuration__metrics__test__experimental_mode_metrics_3.snap b/apollo-router/src/configuration/snapshots/apollo_router__configuration__metrics__test__experimental_mode_metrics_3.snap index ba8bdf43af..34d0b7fe07 100644 --- a/apollo-router/src/configuration/snapshots/apollo_router__configuration__metrics__test__experimental_mode_metrics_3.snap +++ 
b/apollo-router/src/configuration/snapshots/apollo_router__configuration__metrics__test__experimental_mode_metrics_3.snap @@ -2,12 +2,6 @@ source: apollo-router/src/configuration/metrics.rs expression: "&metrics.non_zero()" --- -- name: apollo.router.config.experimental_introspection_mode - data: - datapoints: - - value: 1 - attributes: - mode: new - name: apollo.router.config.experimental_query_planner_mode data: datapoints: diff --git a/apollo-router/src/configuration/snapshots/apollo_router__configuration__tests__schema_generation.snap b/apollo-router/src/configuration/snapshots/apollo_router__configuration__tests__schema_generation.snap index 0cc1c7cf4c..a05e3d0831 100644 --- a/apollo-router/src/configuration/snapshots/apollo_router__configuration__tests__schema_generation.snap +++ b/apollo-router/src/configuration/snapshots/apollo_router__configuration__tests__schema_generation.snap @@ -549,6 +549,11 @@ expression: "&schema" "Client": { "additionalProperties": false, "properties": { + "dns_resolution_strategy": { + "$ref": "#/definitions/DnsResolutionStrategy", + "description": "#/definitions/DnsResolutionStrategy", + "nullable": true + }, "experimental_http2": { "$ref": "#/definitions/Http2Config", "description": "#/definitions/Http2Config", @@ -2596,6 +2601,45 @@ expression: "&schema" } ] }, + "DnsResolutionStrategy": { + "oneOf": [ + { + "description": "Only query for `A` (IPv4) records", + "enum": [ + "ipv4_only" + ], + "type": "string" + }, + { + "description": "Only query for `AAAA` (IPv6) records", + "enum": [ + "ipv6_only" + ], + "type": "string" + }, + { + "description": "Query for both `A` (IPv4) and `AAAA` (IPv6) records in parallel", + "enum": [ + "ipv4_and_ipv6" + ], + "type": "string" + }, + { + "description": "Query for `AAAA` (IPv6) records first; if that fails, query for `A` (IPv4) records", + "enum": [ + "ipv6_then_ipv4" + ], + "type": "string" + }, + { + "description": "Default: Query for `A` (IPv4) records first; if that fails, query for 
`AAAA` (IPv6) records", + "enum": [ + "ipv4_then_ipv6" + ], + "type": "string" + } + ] + }, "Enabled": { "enum": [ "enabled" @@ -4070,32 +4114,6 @@ expression: "&schema" }, "type": "object" }, - "IntrospectionMode": { - "description": "Which implementation of GraphQL schema introspection to use, if enabled", - "oneOf": [ - { - "description": "Use the new Rust-based implementation.", - "enum": [ - "new" - ], - "type": "string" - }, - { - "description": "Use the old JavaScript-based implementation.", - "enum": [ - "legacy" - ], - "type": "string" - }, - { - "description": "Use Rust-based and Javascript-based implementations side by side, logging warnings if the implementations disagree.", - "enum": [ - "both" - ], - "type": "string" - } - ] - }, "InvalidationEndpointConfig": { "additionalProperties": false, "properties": { @@ -4987,11 +5005,6 @@ expression: "&schema" "description": "If cache warm up is configured, this will allow the router to keep a query plan created with the old schema, if it determines that the schema update does not affect the corresponding query", "type": "boolean" }, - "legacy_introspection_caching": { - "default": true, - "description": "Activates introspection response caching Historically, the Router has executed introspection queries in the query planner, and cached their response in its cache because they were expensive. This will change soon as introspection will be removed from the query planner. In the meantime, since storing introspection responses can fill up the cache, this option can be used to deactivate it. Default: true", - "type": "boolean" - }, "warmed_up_queries": { "default": null, "description": "Warms up the cache on reloads by running the query plan over a list of the most used queries (from the in memory cache) Configures the number of queries warmed up. 
Defaults to 1/3 of the in memory cache", @@ -5534,6 +5547,25 @@ expression: "&schema" ], "type": "object" }, + { + "additionalProperties": false, + "description": "A value from context.", + "properties": { + "default": { + "$ref": "#/definitions/AttributeValue", + "description": "#/definitions/AttributeValue", + "nullable": true + }, + "request_context": { + "description": "The request context key.", + "type": "string" + } + }, + "required": [ + "request_context" + ], + "type": "object" + }, { "additionalProperties": false, "description": "A header from the response", @@ -6535,6 +6567,11 @@ expression: "&schema" "description": "Send the service name", "type": "boolean" }, + "subgraph_request_id": { + "default": false, + "description": "Send the subgraph request id", + "type": "boolean" + }, "uri": { "default": false, "description": "Send the subgraph URI", @@ -6576,6 +6613,11 @@ expression: "&schema" "default": false, "description": "Send the http status", "type": "boolean" + }, + "subgraph_request_id": { + "default": false, + "description": "Send the subgraph request id", + "type": "boolean" } }, "type": "object" @@ -7000,6 +7042,11 @@ expression: "&schema" "nullable": true, "type": "boolean" }, + "dns_resolution_strategy": { + "$ref": "#/definitions/DnsResolutionStrategy", + "description": "#/definitions/DnsResolutionStrategy", + "nullable": true + }, "experimental_http2": { "$ref": "#/definitions/Http2Config", "description": "#/definitions/Http2Config", @@ -8976,10 +9023,6 @@ expression: "&schema" "$ref": "#/definitions/Chaos", "description": "#/definitions/Chaos" }, - "experimental_introspection_mode": { - "$ref": "#/definitions/IntrospectionMode", - "description": "#/definitions/IntrospectionMode" - }, "experimental_query_planner_mode": { "$ref": "#/definitions/QueryPlannerMode", "description": "#/definitions/QueryPlannerMode" diff --git a/apollo-router/src/graphql/mod.rs b/apollo-router/src/graphql/mod.rs index a205648f4a..7ec24d7dd7 100644 --- 
a/apollo-router/src/graphql/mod.rs +++ b/apollo-router/src/graphql/mod.rs @@ -7,6 +7,8 @@ mod visitor; use std::fmt; use std::pin::Pin; +use apollo_compiler::execution::GraphQLError as CompilerExecutionError; +use apollo_compiler::execution::ResponseDataPathElement; use futures::Stream; use heck::ToShoutySnakeCase; pub use request::Request; @@ -276,6 +278,44 @@ impl From for Error { } } +impl From for Error { + fn from(error: CompilerExecutionError) -> Self { + let CompilerExecutionError { + message, + locations, + path, + extensions, + } = error; + let locations = locations + .into_iter() + .map(|location| Location { + line: location.line as u32, + column: location.column as u32, + }) + .collect::>(); + let path = if !path.is_empty() { + let elements = path + .into_iter() + .map(|element| match element { + ResponseDataPathElement::Field(name) => { + JsonPathElement::Key(name.as_str().to_owned(), None) + } + ResponseDataPathElement::ListIndex(i) => JsonPathElement::Index(i), + }) + .collect(); + Some(Path(elements)) + } else { + None + }; + Self { + message, + locations, + path, + extensions, + } + } +} + impl ErrorExtension for WorkerGraphQLError {} impl From for Error { diff --git a/apollo-router/src/graphql/visitor.rs b/apollo-router/src/graphql/visitor.rs index b344393821..aa901508db 100644 --- a/apollo-router/src/graphql/visitor.rs +++ b/apollo-router/src/graphql/visitor.rs @@ -92,18 +92,11 @@ pub(crate) trait ResponseVisitor { inner_field.as_ref(), value, ); - } else { - tracing::warn!("The response did not include a field corresponding to query field {:?}", inner_field); } } apollo_compiler::executable::Selection::FragmentSpread(fragment_spread) => { if let Some(fragment) = fragment_spread.fragment_def(request) { self.visit_selections(request, variables, &fragment.selection_set, fields); - } else { - tracing::warn!( - "The fragment {} was not found in the query document.", - fragment_spread.fragment_name - ); } } 
apollo_compiler::executable::Selection::InlineFragment(inline_fragment) => { diff --git a/apollo-router/src/introspection.rs b/apollo-router/src/introspection.rs index be0e4a1d88..a0a539895d 100644 --- a/apollo-router/src/introspection.rs +++ b/apollo-router/src/introspection.rs @@ -1,142 +1,163 @@ -#[cfg(test)] -use std::collections::HashMap; use std::num::NonZeroUsize; +use std::ops::ControlFlow; use std::sync::Arc; -use router_bridge::introspect::IntrospectionError; -use router_bridge::planner::Planner; -use tower::BoxError; +use apollo_compiler::executable::Selection; +use serde_json_bytes::json; use crate::cache::storage::CacheStorage; -use crate::graphql::Response; -use crate::query_planner::QueryPlanResult; +use crate::graphql; +use crate::query_planner::QueryKey; +use crate::services::layers::query_analysis::ParsedDocument; +use crate::spec; +use crate::Configuration; const DEFAULT_INTROSPECTION_CACHE_CAPACITY: NonZeroUsize = unsafe { NonZeroUsize::new_unchecked(5) }; -pub(crate) async fn default_cache_storage() -> CacheStorage { - // This cannot fail as redis is not used. - CacheStorage::new(DEFAULT_INTROSPECTION_CACHE_CAPACITY, None, "introspection") - .await - .expect("failed to create cache storage") +#[derive(Clone)] +pub(crate) enum IntrospectionCache { + Disabled, + Enabled { + storage: Arc>, + }, } -/// A cache containing our well known introspection queries. 
-pub(crate) struct Introspection { - cache: CacheStorage, - pub(crate) planner: Arc>, -} - -impl Introspection { - pub(crate) async fn with_cache( - planner: Arc>, - cache: CacheStorage, - ) -> Result { - Ok(Self { cache, planner }) +impl IntrospectionCache { + pub(crate) fn new(configuration: &Configuration) -> Self { + if configuration.supergraph.introspection { + let storage = Arc::new(CacheStorage::new_in_memory( + DEFAULT_INTROSPECTION_CACHE_CAPACITY, + "introspection", + )); + storage.activate(); + Self::Enabled { storage } + } else { + Self::Disabled + } } - #[cfg(test)] - pub(crate) async fn from_cache( - planner: Arc>, - cache: HashMap, - ) -> Result { - let this = Self::with_cache( - planner, - CacheStorage::new(cache.len().try_into().unwrap(), None, "introspection").await?, - ) - .await?; - - for (query, response) in cache.into_iter() { - this.cache.insert(query, response).await; + pub(crate) fn activate(&self) { + match self { + IntrospectionCache::Disabled => {} + IntrospectionCache::Enabled { storage } => storage.activate(), } - Ok(this) } - /// Execute an introspection and cache the response. - pub(crate) async fn execute(&self, query: String) -> Result { - if let Some(response) = self.cache.get(&query, |_| Ok(())).await { - return Ok(response); + /// If `request` is a query with only introspection fields, + /// execute it and return a (cached) response + pub(crate) async fn maybe_execute( + &self, + schema: &Arc, + key: &QueryKey, + doc: &ParsedDocument, + ) -> ControlFlow { + Self::maybe_lone_root_typename(schema, doc)?; + if doc.operation.is_query() { + if doc.has_explicit_root_fields && doc.has_schema_introspection { + ControlFlow::Break(Self::mixed_fields_error())?; + } else if !doc.has_explicit_root_fields { + ControlFlow::Break(self.cached_introspection(schema, key, doc).await)? 
+ } } + ControlFlow::Continue(()) + } - // Do the introspection query and cache it - let response = - self.planner - .introspect(query.clone()) - .await - .map_err(|_e| IntrospectionError { - message: String::from("cannot find the introspection response").into(), - })?; + /// A `{ __typename }` query is often used as a ping or health check. + /// Handle it without touching the cache. + /// + /// This fast path only applies if no fragment or directive is used, + /// so that we donโ€™t have to deal with `@skip` or `@include` here. + fn maybe_lone_root_typename( + schema: &Arc, + doc: &ParsedDocument, + ) -> ControlFlow { + if doc.operation.selection_set.selections.len() == 1 { + if let Selection::Field(field) = &doc.operation.selection_set.selections[0] { + if field.name == "__typename" && field.directives.is_empty() { + // `{ alias: __typename }` is much less common so handling it here is not essential + // but easier than a conditional to reject it + let key = field.response_key().as_str(); + let object_type_name = schema + .api_schema() + .root_operation(doc.operation.operation_type) + .expect("validation should have caught undefined root operation") + .as_str(); + let data = json!({key: object_type_name}); + ControlFlow::Break(graphql::Response::builder().data(data).build())? 
+ } + } + } + ControlFlow::Continue(()) + } - let introspection_result = response.into_result().map_err(|err| IntrospectionError { - message: format!( - "introspection error : {}", - err.into_iter() - .map(|err| err.to_string()) - .collect::>() - .join(", "), + fn mixed_fields_error() -> graphql::Response { + let error = graphql::Error::builder() + .message( + "\ + Mixed queries with both schema introspection and concrete fields \ + are not supported yet: https://github.com/apollographql/router/issues/2789\ + ", ) - .into(), - })?; - - let response = Response::builder().data(introspection_result).build(); - - self.cache.insert(query, response.clone()).await; - - Ok(response) + .extension_code("MIXED_INTROSPECTION") + .build(); + graphql::Response::builder().error(error).build() } -} - -#[cfg(test)] -mod introspection_tests { - use std::sync::Arc; - - use router_bridge::planner::IncrementalDeliverySupport; - use router_bridge::planner::QueryPlannerConfig; - - use super::*; - #[tokio::test] - async fn test_plan_cache() { - let query_to_test = r#"{ - __schema { - types { - name - } + async fn cached_introspection( + &self, + schema: &Arc, + key: &QueryKey, + doc: &ParsedDocument, + ) -> graphql::Response { + let storage = match self { + IntrospectionCache::Enabled { storage } => storage, + IntrospectionCache::Disabled => { + let error = graphql::Error::builder() + .message(String::from("introspection has been disabled")) + .extension_code("INTROSPECTION_DISABLED") + .build(); + return graphql::Response::builder().error(error).build(); } - }"#; - let schema = include_str!("../tests/fixtures/supergraph.graphql"); - let expected_data = Response::builder().data(42).build(); + }; + let query = key.filtered_query.clone(); + // TODO: when adding support for variables in introspection queries, + // variable values should become part of the cache key. 
+ // https://github.com/apollographql/router/issues/3831 + let cache_key = query; + if let Some(response) = storage.get(&cache_key, |_| unreachable!()).await { + return response; + } + let schema = schema.clone(); + let doc = doc.clone(); + let response = + tokio::task::spawn_blocking(move || Self::execute_introspection(&schema, &doc)) + .await + .expect("Introspection panicked"); + storage.insert(cache_key, response.clone()).await; + response + } - let planner = Arc::new( - Planner::new( - schema.to_string(), - QueryPlannerConfig { - incremental_delivery: Some(IncrementalDeliverySupport { - enable_defer: Some(true), - }), - graphql_validation: true, - reuse_query_fragments: Some(false), - generate_query_fragments: None, - debug: None, - type_conditioned_fetching: false, - }, + fn execute_introspection(schema: &spec::Schema, doc: &ParsedDocument) -> graphql::Response { + let schema = schema.api_schema(); + let operation = &doc.operation; + let variable_values = Default::default(); + match apollo_compiler::execution::coerce_variable_values( + schema, + operation, + &variable_values, + ) { + Ok(variable_values) => apollo_compiler::execution::execute_introspection_only_query( + schema, + &doc.executable, + operation, + &variable_values, ) - .await - .unwrap(), - ); - - let cache = [(query_to_test.to_string(), expected_data.clone())] - .iter() - .cloned() - .collect(); - let introspection = Introspection::from_cache(planner, cache).await.unwrap(); - - assert_eq!( - expected_data, - introspection - .execute(query_to_test.to_string()) - .await - .unwrap() - ); + .into(), + Err(e) => { + let error = e.into_graphql_error(&doc.executable.sources); + graphql::Response::builder().error(error).build() + } + } } } diff --git a/apollo-router/src/plugin/mod.rs b/apollo-router/src/plugin/mod.rs index 46dfbb1539..85110d91c3 100644 --- a/apollo-router/src/plugin/mod.rs +++ b/apollo-router/src/plugin/mod.rs @@ -69,6 +69,8 @@ pub struct PluginInit { pub config: T, /// Router 
Supergraph Schema (schema definition language) pub supergraph_sdl: Arc, + /// Router Supergraph Schema ID (SHA256 of the SDL) + pub(crate) supergraph_schema_id: Arc, /// The supergraph schema (parsed) pub(crate) supergraph_schema: Arc>, @@ -91,6 +93,7 @@ where Schema::parse_and_validate(supergraph_sdl.to_string(), PathBuf::from("synthetic")) .expect("failed to parse supergraph schema"), )) + .supergraph_schema_id(crate::spec::Schema::schema_id(&supergraph_sdl).into()) .supergraph_sdl(supergraph_sdl) .notify(Notify::builder().build()) .build() @@ -114,6 +117,7 @@ where BoxError::from(e.errors.to_string()) })?, )) + .supergraph_schema_id(crate::spec::Schema::schema_id(&supergraph_sdl).into()) .supergraph_sdl(supergraph_sdl) .notify(Notify::builder().build()) .build() @@ -130,6 +134,7 @@ where PluginInit::fake_builder() .config(config) + .supergraph_schema_id(crate::spec::Schema::schema_id(&supergraph_sdl).into()) .supergraph_sdl(supergraph_sdl) .supergraph_schema(supergraph_schema) .notify(Notify::for_tests()) @@ -165,6 +170,7 @@ where pub(crate) fn new_builder( config: T, supergraph_sdl: Arc, + supergraph_schema_id: Arc, supergraph_schema: Arc>, subgraph_schemas: Option>, notify: Notify, @@ -172,6 +178,7 @@ where PluginInit { config, supergraph_sdl, + supergraph_schema_id, supergraph_schema, subgraph_schemas: subgraph_schemas.unwrap_or_default(), notify, @@ -186,6 +193,7 @@ where pub(crate) fn try_new_builder( config: serde_json::Value, supergraph_sdl: Arc, + supergraph_schema_id: Arc, supergraph_schema: Arc>, subgraph_schemas: Option>, notify: Notify, @@ -195,6 +203,7 @@ where config, supergraph_sdl, supergraph_schema, + supergraph_schema_id, subgraph_schemas: subgraph_schemas.unwrap_or_default(), notify, }) @@ -205,6 +214,7 @@ where fn fake_new_builder( config: T, supergraph_sdl: Option>, + supergraph_schema_id: Option>, supergraph_schema: Option>>, subgraph_schemas: Option>, notify: Option>, @@ -212,6 +222,7 @@ where PluginInit { config, supergraph_sdl: 
supergraph_sdl.unwrap_or_default(), + supergraph_schema_id: supergraph_schema_id.unwrap_or_default(), supergraph_schema: supergraph_schema .unwrap_or_else(|| Arc::new(Valid::assume_valid(Schema::new()))), subgraph_schemas: subgraph_schemas.unwrap_or_default(), @@ -229,6 +240,7 @@ impl PluginInit { PluginInit::try_builder() .config(self.config) .supergraph_schema(self.supergraph_schema) + .supergraph_schema_id(self.supergraph_schema_id) .supergraph_sdl(self.supergraph_sdl) .subgraph_schemas(self.subgraph_schemas) .notify(self.notify.clone()) diff --git a/apollo-router/src/plugin/test/mock/subgraph.rs b/apollo-router/src/plugin/test/mock/subgraph.rs index 46321b585b..ee38234606 100644 --- a/apollo-router/src/plugin/test/mock/subgraph.rs +++ b/apollo-router/src/plugin/test/mock/subgraph.rs @@ -209,7 +209,12 @@ impl Service for MockSubgraph { let http_response = http_response_builder .body(response.clone()) .expect("Response is serializable; qed"); - SubgraphResponse::new_from_response(http_response, req.context, "test".to_string()) + SubgraphResponse::new_from_response( + http_response, + req.context, + "test".to_string(), + req.id, + ) } else { let error = crate::error::Error::builder() .message(format!( @@ -222,6 +227,7 @@ impl Service for MockSubgraph { SubgraphResponse::fake_builder() .error(error) .context(req.context) + .id(req.id) .build() }; future::ok(response) diff --git a/apollo-router/src/plugins/authentication/subgraph.rs b/apollo-router/src/plugins/authentication/subgraph.rs index dccf2e3c37..fd4cca045a 100644 --- a/apollo-router/src/plugins/authentication/subgraph.rs +++ b/apollo-router/src/plugins/authentication/subgraph.rs @@ -509,6 +509,7 @@ mod test { use crate::graphql::Request; use crate::plugin::test::MockSubgraphService; use crate::query_planner::fetch::OperationKind; + use crate::services::subgraph::SubgraphRequestId; use crate::services::SubgraphRequest; use crate::services::SubgraphResponse; use crate::Context; @@ -807,6 +808,7 @@ mod test { 
http::Response::default(), Context::new(), req.subgraph_name.unwrap_or_else(|| String::from("test")), + SubgraphRequestId(String::new()), )) } diff --git a/apollo-router/src/plugins/authorization/authenticated.rs b/apollo-router/src/plugins/authorization/authenticated.rs index 15bcd7a969..bfcb28c51a 100644 --- a/apollo-router/src/plugins/authorization/authenticated.rs +++ b/apollo-router/src/plugins/authorization/authenticated.rs @@ -560,7 +560,7 @@ mod tests { `SECURITY` features provide metadata necessary to securely resolve fields. """ SECURITY - + """ `EXECUTION` features provide metadata necessary for operation execution. """ @@ -891,7 +891,7 @@ mod tests { `SECURITY` features provide metadata necessary to securely resolve fields. """ SECURITY - + """ `EXECUTION` features provide metadata necessary for operation execution. """ @@ -978,7 +978,7 @@ mod tests { `SECURITY` features provide metadata necessary to securely resolve fields. """ SECURITY - + """ `EXECUTION` features provide metadata necessary for operation execution. """ @@ -1073,7 +1073,7 @@ mod tests { `SECURITY` features provide metadata necessary to securely resolve fields. """ SECURITY - + """ `EXECUTION` features provide metadata necessary for operation execution. """ @@ -1137,7 +1137,7 @@ mod tests { `SECURITY` features provide metadata necessary to securely resolve fields. """ SECURITY - + """ `EXECUTION` features provide metadata necessary for operation execution. """ @@ -1225,7 +1225,7 @@ mod tests { `SECURITY` features provide metadata necessary to securely resolve fields. """ SECURITY - + """ `EXECUTION` features provide metadata necessary for operation execution. """ @@ -1310,7 +1310,7 @@ mod tests { `SECURITY` features provide metadata necessary to securely resolve fields. """ SECURITY - + """ `EXECUTION` features provide metadata necessary for operation execution. """ @@ -1319,18 +1319,18 @@ mod tests { type Query { post(id: ID!): Post } - + interface Post { id: ID! author: String! 
title: String! content: String! } - + type Stats { views: Int } - + type PublicBlog implements Post { id: ID! author: String! @@ -1338,7 +1338,7 @@ mod tests { content: String! stats: Stats @authenticated } - + type PrivateBlog implements Post @authenticated { id: ID! author: String! @@ -1410,14 +1410,14 @@ mod tests { `SECURITY` features provide metadata necessary to securely resolve fields. """ SECURITY - + """ `EXECUTION` features provide metadata necessary for operation execution. """ EXECUTION } - + scalar join__FieldSet enum join__Graph { USER @join__graph(name: "user", url: "http://localhost:4001/graphql") @@ -1704,6 +1704,7 @@ mod tests { } directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA directive @authenticated on OBJECT | FIELD_DEFINITION | INTERFACE | SCALAR | ENUM + directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE directive @join__field( graph: join__Graph requires: join__FieldSet @@ -1725,6 +1726,10 @@ mod tests { resolvable: Boolean! = true isInterfaceObject: Boolean! = false ) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + directive @join__unionMember( + graph: join__Graph! + member: String! + ) repeatable on UNION scalar join__FieldSet scalar link__Import @@ -1787,6 +1792,7 @@ mod tests { } directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA directive @authenticated on OBJECT | FIELD_DEFINITION | INTERFACE | SCALAR | ENUM + directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE directive @join__field( graph: join__Graph requires: join__FieldSet @@ -1808,6 +1814,10 @@ mod tests { resolvable: Boolean! = true isInterfaceObject: Boolean! = false ) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + directive @join__unionMember( + graph: join__Graph! + member: String! 
+ ) repeatable on UNION scalar join__FieldSet scalar link__Import diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__introspection_fragment_with_authenticated_root_query.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__introspection_fragment_with_authenticated_root_query.snap index b7f895cc65..6ba6efb168 100644 --- a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__introspection_fragment_with_authenticated_root_query.snap +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__introspection_fragment_with_authenticated_root_query.snap @@ -6,18 +6,6 @@ expression: first_response "data": { "__schema": { "types": [ - { - "name": "Query" - }, - { - "name": "T" - }, - { - "name": "String" - }, - { - "name": "Boolean" - }, { "name": "__Schema" }, @@ -41,6 +29,18 @@ expression: first_response }, { "name": "__DirectiveLocation" + }, + { + "name": "String" + }, + { + "name": "Boolean" + }, + { + "name": "Query" + }, + { + "name": "T" } ] } diff --git a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__introspection_mixed_with_authenticated_fields.snap b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__introspection_mixed_with_authenticated_fields.snap index 90938cf936..59e47ba51e 100644 --- a/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__introspection_mixed_with_authenticated_fields.snap +++ b/apollo-router/src/plugins/authorization/snapshots/apollo_router__plugins__authorization__authenticated__tests__introspection_mixed_with_authenticated_fields.snap @@ -5,7 +5,7 @@ expression: first_response { "errors": [ { - "message": "Mixed 
queries with both schema introspection and concrete fields are not supported", + "message": "Mixed queries with both schema introspection and concrete fields are not supported yet: https://github.com/apollographql/router/issues/2789", "extensions": { "code": "MIXED_INTROSPECTION" } diff --git a/apollo-router/src/plugins/coprocessor/mod.rs b/apollo-router/src/plugins/coprocessor/mod.rs index ccaaf61223..cbc5b88213 100644 --- a/apollo-router/src/plugins/coprocessor/mod.rs +++ b/apollo-router/src/plugins/coprocessor/mod.rs @@ -48,13 +48,13 @@ use crate::services::external::Externalizable; use crate::services::external::PipelineStep; use crate::services::external::DEFAULT_EXTERNALIZATION_TIMEOUT; use crate::services::external::EXTERNALIZABLE_VERSION; +use crate::services::hickory_dns_connector::new_async_http_connector; +use crate::services::hickory_dns_connector::AsyncHyperResolver; use crate::services::router; use crate::services::router::body::get_body_bytes; use crate::services::router::body::RouterBody; use crate::services::router::body::RouterBodyConverter; use crate::services::subgraph; -use crate::services::trust_dns_connector::new_async_http_connector; -use crate::services::trust_dns_connector::AsyncHyperResolver; #[cfg(test)] mod test; @@ -78,7 +78,9 @@ impl Plugin for CoprocessorPlugin { type Config = Conf; async fn new(init: PluginInit) -> Result { - let mut http_connector = new_async_http_connector()?; + let client_config = init.config.client.clone().unwrap_or_default(); + let mut http_connector = + new_async_http_connector(client_config.dns_resolution_strategy.unwrap_or_default())?; http_connector.set_nodelay(true); http_connector.set_keepalive(Some(std::time::Duration::from_secs(60))); http_connector.enforce_http(false); @@ -93,9 +95,8 @@ impl Plugin for CoprocessorPlugin { .https_or_http() .enable_http1(); - let connector = if init.config.client.is_none() - || init.config.client.as_ref().unwrap().experimental_http2 != Some(Http2Config::Disable) - 
{ + let experimental_http2 = client_config.experimental_http2.unwrap_or_default(); + let connector = if experimental_http2 != Http2Config::Disable { builder.enable_http2().wrap_connector(http_connector) } else { builder.wrap_connector(http_connector) @@ -106,11 +107,7 @@ impl Plugin for CoprocessorPlugin { .layer(TimeoutLayer::new(init.config.timeout)) .service( hyper::Client::builder() - .http2_only( - init.config.client.is_some() - && init.config.client.as_ref().unwrap().experimental_http2 - == Some(Http2Config::Http2Only), - ) + .http2_only(experimental_http2 == Http2Config::Http2Only) .pool_idle_timeout(POOL_IDLE_TIMEOUT_DURATION) .build(connector), ), @@ -291,6 +288,8 @@ pub(super) struct SubgraphRequestConf { pub(super) method: bool, /// Send the service name pub(super) service_name: bool, + /// Send the subgraph request id + pub(super) subgraph_request_id: bool, } /// What information is passed to a subgraph request/response stage @@ -310,6 +309,8 @@ pub(super) struct SubgraphResponseConf { pub(super) service_name: bool, /// Send the http status pub(super) status_code: bool, + /// Send the subgraph request id + pub(super) subgraph_request_id: bool, } /// Configures the externalization plugin @@ -1012,6 +1013,9 @@ where let uri = request_config.uri.then(|| parts.uri.to_string()); let subgraph_name = service_name.clone(); let service_name = request_config.service_name.then_some(service_name); + let subgraph_request_id = request_config + .subgraph_request_id + .then_some(request.id.clone()); let payload = Externalizable::subgraph_builder() .stage(PipelineStep::SubgraphRequest) @@ -1023,6 +1027,7 @@ where .method(parts.method.to_string()) .and_service_name(service_name) .and_uri(uri) + .and_subgraph_request_id(subgraph_request_id) .build(); tracing::debug!(?payload, "externalized output"); @@ -1081,6 +1086,7 @@ where response: http_response, context: request.context, subgraph_name: Some(subgraph_name), + id: request.id, }; if let Some(context) = 
co_processor_output.context { @@ -1168,6 +1174,9 @@ where .transpose()?; let context_to_send = response_config.context.then(|| response.context.clone()); let service_name = response_config.service_name.then_some(service_name); + let subgraph_request_id = response_config + .subgraph_request_id + .then_some(response.id.clone()); let payload = Externalizable::subgraph_builder() .stage(PipelineStep::SubgraphResponse) @@ -1177,6 +1186,7 @@ where .and_context(context_to_send) .and_status_code(status_to_send) .and_service_name(service_name) + .and_subgraph_request_id(subgraph_request_id) .build(); tracing::debug!(?payload, "externalized output"); diff --git a/apollo-router/src/plugins/coprocessor/test.rs b/apollo-router/src/plugins/coprocessor/test.rs index 50786f336d..a2b9374fb9 100644 --- a/apollo-router/src/plugins/coprocessor/test.rs +++ b/apollo-router/src/plugins/coprocessor/test.rs @@ -15,6 +15,7 @@ mod tests { use router::body::RouterBody; use serde_json::json; use serde_json_bytes::Value; + use services::subgraph::SubgraphRequestId; use tower::BoxError; use tower::ServiceExt; @@ -279,12 +280,8 @@ mod tests { let subgraph_stage = SubgraphStage { request: SubgraphRequestConf { condition: Default::default(), - headers: false, - context: false, body: true, - uri: false, - method: false, - service_name: false, + ..Default::default() }, response: Default::default(), }; @@ -342,12 +339,9 @@ mod tests { let subgraph_stage = SubgraphStage { request: SubgraphRequestConf { condition: Default::default(), - headers: false, - context: false, body: true, - uri: false, - method: false, - service_name: false, + subgraph_request_id: true, + ..Default::default() }, response: Default::default(), }; @@ -384,16 +378,27 @@ mod tests { req.subgraph_request.into_body().query.unwrap() ); + // this should be the same as the initial request id + assert_eq!(&*req.id, "5678"); + Ok(subgraph::Response::builder() .data(json!({ "test": 1234_u32 })) .errors(Vec::new()) 
.extensions(crate::json_ext::Object::new()) .context(req.context) + .id(req.id) .build()) }); - let mock_http_client = mock_with_callback(move |_: http::Request| { + let mock_http_client = mock_with_callback(move |req: http::Request| { Box::pin(async { + let deserialized_request: Externalizable = + serde_json::from_slice(&hyper::body::to_bytes(req.into_body()).await.unwrap()) + .unwrap(); + assert_eq!( + deserialized_request.subgraph_request_id.as_deref(), + Some("5678") + ); Ok(http::Response::builder() .body(RouterBody::from( r#"{ @@ -438,7 +443,8 @@ mod tests { } }, "serviceName": "service name shouldn't change", - "uri": "http://thisurihaschanged" + "uri": "http://thisurihaschanged", + "subgraphRequestId": "9abc" }"#, )) .unwrap()) @@ -452,18 +458,15 @@ mod tests { "my_subgraph_service_name".to_string(), ); - let request = subgraph::Request::fake_builder().build(); + let mut request = subgraph::Request::fake_builder().build(); + request.id = SubgraphRequestId("5678".to_string()); + + let response = service.oneshot(request).await.unwrap(); + assert_eq!("5678", &*response.id); assert_eq!( serde_json_bytes::json!({ "test": 1234_u32 }), - service - .oneshot(request) - .await - .unwrap() - .response - .into_body() - .data - .unwrap() + response.response.into_body().data.unwrap() ); } @@ -480,12 +483,8 @@ mod tests { SelectorOrValue::Value("value".to_string().into()), ]) .into(), - headers: false, - context: false, body: true, - uri: false, - method: false, - service_name: false, + ..Default::default() }, response: Default::default(), }; @@ -554,12 +553,8 @@ mod tests { let subgraph_stage = SubgraphStage { request: SubgraphRequestConf { condition: Default::default(), - headers: false, - context: false, body: true, - uri: false, - method: false, - service_name: false, + ..Default::default() }, response: Default::default(), }; @@ -624,12 +619,8 @@ mod tests { let subgraph_stage = SubgraphStage { request: SubgraphRequestConf { condition: Default::default(), - headers: 
false, - context: false, body: true, - uri: false, - method: false, - service_name: false, + ..Default::default() }, response: Default::default(), }; @@ -689,11 +680,9 @@ mod tests { request: Default::default(), response: SubgraphResponseConf { condition: Default::default(), - headers: false, - context: false, body: true, - service_name: false, - status_code: false, + subgraph_request_id: true, + ..Default::default() }, }; @@ -703,16 +692,23 @@ mod tests { mock_subgraph_service .expect_call() .returning(|req: subgraph::Request| { + assert_eq!(&*req.id, "5678"); Ok(subgraph::Response::builder() .data(json!({ "test": 1234_u32 })) .errors(Vec::new()) .extensions(crate::json_ext::Object::new()) .context(req.context) + .id(req.id) .build()) }); - let mock_http_client = mock_with_callback(move |_: http::Request| { - Box::pin(async { + let mock_http_client = mock_with_callback(move |r: http::Request| { + Box::pin(async move { + let (_, body) = r.into_parts(); + let body: Value = serde_json::from_slice(&body.to_bytes().await.unwrap()).unwrap(); + let subgraph_id = body.get("subgraphRequestId").unwrap(); + assert_eq!(subgraph_id.as_str(), Some("5678")); + Ok(http::Response::builder() .body(RouterBody::from( r#"{ @@ -756,7 +752,8 @@ mod tests { "accepts-multipart": false, "this-is-a-test-context": 42 } - } + }, + "subgraphRequestId": "9abc" }"#, )) .unwrap()) @@ -770,7 +767,8 @@ mod tests { "my_subgraph_service_name".to_string(), ); - let request = subgraph::Request::fake_builder().build(); + let mut request = subgraph::Request::fake_builder().build(); + request.id = SubgraphRequestId("5678".to_string()); let response = service.oneshot(request).await.unwrap(); @@ -779,6 +777,7 @@ mod tests { response.response.headers().get("cookie").unwrap(), "tasty_cookie=strawberry" ); + assert_eq!(&*response.id, "5678"); assert_eq!( response @@ -807,11 +806,8 @@ mod tests { default: None, }) .into(), - headers: false, - context: false, body: true, - service_name: false, - status_code: 
false, + ..Default::default() }, }; diff --git a/apollo-router/src/plugins/demand_control/cost_calculator/fixtures/basic_fragments_query.graphql b/apollo-router/src/plugins/demand_control/cost_calculator/fixtures/basic_fragments_query.graphql new file mode 100644 index 0000000000..5361148967 --- /dev/null +++ b/apollo-router/src/plugins/demand_control/cost_calculator/fixtures/basic_fragments_query.graphql @@ -0,0 +1,21 @@ +# Fragment narrowing an abstract type to a concrete type +fragment narrowFrag on SecondObjectType { + field1 +} +# Fragment widening a concrete type to an abstract type +fragment widenFrag on MyInterface { + field2 +} + +{ + interfaceInstance1 { + field2 + ...narrowFrag + } + someUnion { + ...narrowFrag + ... on FirstObjectType { + innerList { ...widenFrag } + } + } +} diff --git a/apollo-router/src/plugins/demand_control/cost_calculator/fixtures/basic_supergraph_schema.graphql b/apollo-router/src/plugins/demand_control/cost_calculator/fixtures/basic_supergraph_schema.graphql new file mode 100644 index 0000000000..660459b5b2 --- /dev/null +++ b/apollo-router/src/plugins/demand_control/cost_calculator/fixtures/basic_supergraph_schema.graphql @@ -0,0 +1,102 @@ +schema + @link(url: "https://specs.apollo.dev/link/v1.0") + @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) +{ + query: Query + mutation: Mutation +} + +directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE + +directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION + +directive @join__graph(name: String!, url: String!) on ENUM_VALUE + +directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE + +directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true, isInterfaceObject: Boolean! 
= false) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + +directive @join__unionMember(graph: join__Graph!, member: String!) repeatable on UNION + +directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + +type FirstObjectType + @join__type(graph: PRODUCTS) +{ + field1: Int + innerList: [SecondObjectType] +} + +input InnerInput + @join__type(graph: PRODUCTS) +{ + id: ID +} + +scalar join__FieldSet + +enum join__Graph { + PRODUCTS @join__graph(name: "products", url: "http://localhost:4000") +} + +scalar link__Import + +enum link__Purpose { + """ + `SECURITY` features provide metadata necessary to securely resolve fields. + """ + SECURITY + + """ + `EXECUTION` features provide metadata necessary for operation execution. + """ + EXECUTION +} + +type Mutation + @join__type(graph: PRODUCTS) +{ + doSomething: Int +} + +interface MyInterface + @join__type(graph: PRODUCTS) +{ + field2: String +} + +input OuterInput + @join__type(graph: PRODUCTS) +{ + inner: InnerInput + inner2: InnerInput + listOfInner: [InnerInput!] 
+} + +type Query + @join__type(graph: PRODUCTS) +{ + getScalar(id: ID): String + getScalarByObject(args: OuterInput): String + anotherScalar: Int + object1: FirstObjectType + interfaceInstance1: MyInterface + someUnion: UnionOfObjectTypes + someObjects: [FirstObjectType] + intList: [Int] + getObjectsByObject(args: OuterInput): [SecondObjectType] +} + +type SecondObjectType implements MyInterface + @join__implements(graph: PRODUCTS, interface: "MyInterface") + @join__type(graph: PRODUCTS) +{ + field1: Int + field2: String +} + +union UnionOfObjectTypes + @join__type(graph: PRODUCTS) + @join__unionMember(graph: PRODUCTS, member: "FirstObjectType") + @join__unionMember(graph: PRODUCTS, member: "SecondObjectType") + = FirstObjectType | SecondObjectType diff --git a/apollo-router/src/plugins/demand_control/cost_calculator/static_cost.rs b/apollo-router/src/plugins/demand_control/cost_calculator/static_cost.rs index 752479b256..98f3f275ed 100644 --- a/apollo-router/src/plugins/demand_control/cost_calculator/static_cost.rs +++ b/apollo-router/src/plugins/demand_control/cost_calculator/static_cost.rs @@ -255,7 +255,6 @@ impl StaticCostCalculator { &self, ctx: &ScoringContext, fragment_spread: &FragmentSpread, - parent_type: &NamedType, list_size_directive: Option<&ListSizeDirective>, ) -> Result { let fragment = fragment_spread.fragment_def(ctx.query).ok_or_else(|| { @@ -267,7 +266,7 @@ impl StaticCostCalculator { self.score_selection_set( ctx, &fragment.selection_set, - parent_type, + fragment.type_condition(), list_size_directive, ) } @@ -282,7 +281,10 @@ impl StaticCostCalculator { self.score_selection_set( ctx, &inline_fragment.selection_set, - parent_type, + inline_fragment + .type_condition + .as_ref() + .unwrap_or(parent_type), list_size_directive, ) } @@ -320,15 +322,10 @@ impl StaticCostCalculator { parent_type, list_size_directive.and_then(|dir| dir.size_of(f)), ), - Selection::FragmentSpread(s) => { - self.score_fragment_spread(ctx, s, parent_type, 
list_size_directive) + Selection::FragmentSpread(s) => self.score_fragment_spread(ctx, s, list_size_directive), + Selection::InlineFragment(i) => { + self.score_inline_fragment(ctx, i, parent_type, list_size_directive) } - Selection::InlineFragment(i) => self.score_inline_fragment( - ctx, - i, - i.type_condition.as_ref().unwrap_or(parent_type), - list_size_directive, - ), } } @@ -585,7 +582,7 @@ mod tests { use tower::Service; use super::*; - use crate::introspection::default_cache_storage; + use crate::introspection::IntrospectionCache; use crate::query_planner::BridgeQueryPlanner; use crate::services::layers::query_analysis::ParsedDocument; use crate::services::QueryPlannerContent; @@ -677,7 +674,7 @@ mod tests { config.clone(), None, None, - default_cache_storage().await, + Arc::new(IntrospectionCache::new(&config)), ) .await .unwrap(); @@ -908,6 +905,17 @@ mod tests { assert_eq!(basic_estimated_cost(schema, query, variables), 0.0) } + #[test(tokio::test)] + async fn fragments_cost() { + let schema = include_str!("./fixtures/basic_supergraph_schema.graphql"); + let query = include_str!("./fixtures/basic_fragments_query.graphql"); + let variables = "{}"; + + assert_eq!(basic_estimated_cost(schema, query, variables), 102.0); + assert_eq!(planned_cost_js(schema, query, variables).await, 102.0); + assert_eq!(planned_cost_rust(schema, query, variables), 102.0); + } + #[test(tokio::test)] async fn federated_query_with_name() { let schema = include_str!("./fixtures/federated_ships_schema.graphql"); diff --git a/apollo-router/src/plugins/demand_control/mod.rs b/apollo-router/src/plugins/demand_control/mod.rs index 62255fa832..5e3ba587f8 100644 --- a/apollo-router/src/plugins/demand_control/mod.rs +++ b/apollo-router/src/plugins/demand_control/mod.rs @@ -471,6 +471,7 @@ mod test { use apollo_compiler::ast; use apollo_compiler::validation::Valid; use apollo_compiler::ExecutableDocument; + use apollo_compiler::Schema; use futures::StreamExt; use schemars::JsonSchema; use 
serde::Deserialize; @@ -482,7 +483,6 @@ mod test { use crate::plugins::demand_control::DemandControlContext; use crate::plugins::demand_control::DemandControlError; use crate::plugins::test::PluginTestHarness; - use crate::query_planner::fetch::QueryHash; use crate::services::execution; use crate::services::layers::query_analysis::ParsedDocument; use crate::services::layers::query_analysis::ParsedDocumentInner; @@ -660,14 +660,14 @@ mod test { } fn context() -> Context { - let parsed_document = ParsedDocumentInner { - executable: Arc::new(Valid::assume_valid(ExecutableDocument::new())), - hash: Arc::new(QueryHash::default()), - ast: ast::Document::new(), - }; + let schema = Schema::parse_and_validate("type Query { f: Int }", "").unwrap(); + let ast = ast::Document::parse("{__typename}", "").unwrap(); + let doc = ast.to_executable_validate(&schema).unwrap(); + let parsed_document = + ParsedDocumentInner::new(ast, doc.into(), None, Default::default()).unwrap(); let ctx = Context::new(); ctx.extensions() - .with_lock(|mut lock| lock.insert(ParsedDocument::new(parsed_document))); + .with_lock(|mut lock| lock.insert::(parsed_document)); ctx } diff --git a/apollo-router/src/plugins/headers.rs b/apollo-router/src/plugins/headers.rs index 1e52cd444c..c6e0082cfd 100644 --- a/apollo-router/src/plugins/headers.rs +++ b/apollo-router/src/plugins/headers.rs @@ -454,6 +454,7 @@ mod test { use std::str::FromStr; use std::sync::Arc; + use subgraph::SubgraphRequestId; use tower::BoxError; use super::*; @@ -863,6 +864,7 @@ mod test { query_hash: Default::default(), authorization: Default::default(), executable_document: None, + id: SubgraphRequestId(String::new()), }; service.modify_request(&mut request); let headers = request @@ -935,6 +937,7 @@ mod test { query_hash: Default::default(), authorization: Default::default(), executable_document: None, + id: SubgraphRequestId(String::new()), }; service.modify_request(&mut request); let headers = request @@ -956,6 +959,7 @@ mod test { 
http::Response::default(), Context::new(), req.subgraph_name.unwrap_or_default(), + SubgraphRequestId(String::new()), )) } @@ -998,6 +1002,7 @@ mod test { query_hash: Default::default(), authorization: Default::default(), executable_document: None, + id: SubgraphRequestId(String::new()), } } diff --git a/apollo-router/src/plugins/record_replay/replay.rs b/apollo-router/src/plugins/record_replay/replay.rs index dc79380f13..814e3e750e 100644 --- a/apollo-router/src/plugins/record_replay/replay.rs +++ b/apollo-router/src/plugins/record_replay/replay.rs @@ -219,6 +219,7 @@ impl Plugin for Replay { http::Response::new(fetch.response.chunks[0].clone()), req.context.clone(), subgraph_name.clone(), + req.id.clone(), ); let runtime_variables = req.subgraph_request.body().variables.clone(); diff --git a/apollo-router/src/plugins/rhai/engine.rs b/apollo-router/src/plugins/rhai/engine.rs index 062143711f..fde2fa8e7d 100644 --- a/apollo-router/src/plugins/rhai/engine.rs +++ b/apollo-router/src/plugins/rhai/engine.rs @@ -692,6 +692,13 @@ mod router_plugin { *obj.uri_mut() = uri; Ok(()) } + + #[rhai_fn(get = "subgraph_request_id", pure, return_raw)] + pub(crate) fn get_subgraph_id( + obj: &mut SharedMut, + ) -> Result> { + Ok(obj.with_mut(|request| request.id.to_string())) + } // End of SubgraphRequest specific section #[rhai_fn(get = "headers", pure, return_raw)] @@ -790,6 +797,13 @@ mod router_plugin { Ok(obj.with_mut(|response| response.response.headers().clone())) } + #[rhai_fn(get = "subgraph_request_id", pure, return_raw)] + pub(crate) fn get_subgraph_id_response( + obj: &mut SharedMut, + ) -> Result> { + Ok(obj.with_mut(|response| response.id.to_string())) + } + /*TODO: reenable when https://github.com/apollographql/router/issues/3642 is decided #[rhai_fn(get = "body", pure, return_raw)] pub(crate) fn get_originating_body_router_response( @@ -1147,6 +1161,32 @@ mod router_plugin { Ok(()) } + // Uri.port + #[rhai_fn(get = "port", pure, return_raw)] + pub(crate) fn 
uri_port_get(x: &mut Uri) -> Result> { + to_dynamic(x.port().map(|p| p.as_u16())) + } + + #[rhai_fn(set = "port", return_raw)] + pub(crate) fn uri_port_set(x: &mut Uri, value: i64) -> Result<(), Box> { + // Because there is no simple way to update parts on an existing + // Uri (no parts_mut()), then we need to create a new Uri from our + // existing parts, preserving any port, and update our existing + // Uri. + let mut parts: Parts = x.clone().into_parts(); + match parts.authority { + Some(old_authority) => { + let host = old_authority.host(); + let new_authority = Authority::from_maybe_shared(format!("{host}:{value}")) + .map_err(|e| e.to_string())?; + parts.authority = Some(new_authority); + *x = Uri::from_parts(parts).map_err(|e| e.to_string())?; + Ok(()) + } + None => Err("invalid URI; unable to set port".into()), + } + } + // Response.label #[rhai_fn(get = "label", pure)] pub(crate) fn response_label_get(x: &mut Response) -> Dynamic { @@ -1486,12 +1526,31 @@ macro_rules! register_rhai_router_interface { );*/ $engine.register_get( + "uri", + |obj: &mut SharedMut<$base::FirstRequest>| -> Result> { + Ok(obj.with_mut(|request| request.request.uri().clone())) + } + ).register_get( "uri", |obj: &mut SharedMut<$base::Request>| -> Result> { Ok(obj.with_mut(|request| request.router_request.uri().clone())) } ); + $engine.register_set( + "uri", + |obj: &mut SharedMut<$base::FirstRequest>, uri: Uri| { + if_subgraph! { + $base => { + let _unused = (obj, headers); + Err("cannot mutate originating request on a subgraph".into()) + } else { + obj.with_mut(|request| *request.request.uri_mut() = uri); + Ok(()) + } + } + } + ).register_set( "uri", |obj: &mut SharedMut<$base::Request>, uri: Uri| { if_subgraph! { @@ -1505,6 +1564,13 @@ macro_rules! 
register_rhai_router_interface { } } ); + + $engine.register_get( + "method", + |obj: &mut SharedMut<$base::FirstRequest>| -> Result> { + Ok(obj.with_mut(|request| request.request.method().clone())) + } + ); )* }; } diff --git a/apollo-router/src/plugins/rhai/tests.rs b/apollo-router/src/plugins/rhai/tests.rs index 11c4f3a03a..dd3d4080f6 100644 --- a/apollo-router/src/plugins/rhai/tests.rs +++ b/apollo-router/src/plugins/rhai/tests.rs @@ -6,6 +6,7 @@ use std::sync::Mutex; use std::time::SystemTime; use http::HeaderMap; +use http::HeaderValue; use http::Method; use http::StatusCode; use rhai::Engine; @@ -32,6 +33,7 @@ use crate::plugin::DynPlugin; use crate::plugins::rhai::engine::RhaiExecutionDeferredResponse; use crate::plugins::rhai::engine::RhaiExecutionResponse; use crate::plugins::rhai::engine::RhaiRouterChunkedResponse; +use crate::plugins::rhai::engine::RhaiRouterFirstRequest; use crate::plugins::rhai::engine::RhaiRouterResponse; use crate::plugins::rhai::engine::RhaiSupergraphDeferredResponse; use crate::plugins::rhai::engine::RhaiSupergraphResponse; @@ -347,6 +349,20 @@ map )); } +#[tokio::test] +async fn it_can_process_router_request() { + let mut request = RhaiRouterFirstRequest::default(); + request.request.headers_mut().insert( + "content-type", + HeaderValue::from_str("application/json").unwrap(), + ); + *request.request.method_mut() = http::Method::GET; + + call_rhai_function_with_arg("process_router_request", request) + .await + .expect("test failed"); +} + #[tokio::test] async fn it_can_process_supergraph_request() { let request = SupergraphRequest::canned_builder() @@ -433,6 +449,18 @@ async fn it_can_process_subgraph_response() { .expect("test failed"); } +#[tokio::test] +async fn it_can_parse_request_uri() { + let mut request = SupergraphRequest::canned_builder() + .operation_name("canned") + .build() + .expect("build canned supergraph request"); + *request.supergraph_request.uri_mut() = "https://not-default:8080/path".parse().unwrap(); + 
call_rhai_function_with_arg("test_parse_request_details", request) + .await + .expect("test failed"); +} + #[test] fn it_can_urlencode_string() { let engine = new_rhai_test_engine(); @@ -641,7 +669,7 @@ async fn it_can_process_string_subgraph_forbidden() { if let Err(error) = call_rhai_function("process_subgraph_response_string").await { let processed_error = process_error(error); assert_eq!(processed_error.status, StatusCode::INTERNAL_SERVER_ERROR); - assert_eq!(processed_error.message, Some("rhai execution error: 'Runtime error: I have raised an error (line 223, position 5)'".to_string())); + assert_eq!(processed_error.message, Some("rhai execution error: 'Runtime error: I have raised an error (line 251, position 5)'".to_string())); } else { // Test failed panic!("error processed incorrectly"); @@ -669,7 +697,7 @@ async fn it_cannot_process_om_subgraph_missing_message_and_body() { assert_eq!( processed_error.message, Some( - "rhai execution error: 'Runtime error: #{\"status\": 400} (line 234, position 5)'" + "rhai execution error: 'Runtime error: #{\"status\": 400} (line 262, position 5)'" .to_string() ) ); diff --git a/apollo-router/src/plugins/telemetry/config_new/selectors.rs b/apollo-router/src/plugins/telemetry/config_new/selectors.rs index 8c9c8dde7a..1b49d9548c 100644 --- a/apollo-router/src/plugins/telemetry/config_new/selectors.rs +++ b/apollo-router/src/plugins/telemetry/config_new/selectors.rs @@ -124,6 +124,17 @@ pub(crate) enum RouterSelector { /// The request method enabled or not request_method: bool, }, + /// A value from context. + RequestContext { + /// The request context key. + request_context: String, + #[serde(skip)] + #[allow(dead_code)] + /// Optional redaction pattern. + redact: Option, + /// Optional default value. + default: Option, + }, /// A header from the response ResponseHeader { /// The name of the request header. 
@@ -660,6 +671,16 @@ impl Selector for RouterSelector { RouterSelector::RequestMethod { request_method } if *request_method => { Some(request.router_request.method().to_string().into()) } + RouterSelector::RequestContext { + request_context, + default, + .. + } => request + .context + .get_json_value(request_context) + .as_ref() + .and_then(|v| v.maybe_to_otel_value()) + .or_else(|| default.maybe_to_otel_value()), RouterSelector::RequestHeader { request_header, default, @@ -811,6 +832,7 @@ impl Selector for RouterSelector { matches!( self, RouterSelector::RequestHeader { .. } + | RouterSelector::RequestContext { .. } | RouterSelector::RequestMethod { .. } | RouterSelector::TraceId { .. } | RouterSelector::StudioOperationId { .. } @@ -1853,6 +1875,98 @@ mod test { None ); } + + #[test] + fn router_request_context() { + let selector = RouterSelector::RequestContext { + request_context: "context_key".to_string(), + redact: None, + default: Some("defaulted".into()), + }; + let context = crate::context::Context::new(); + let _ = context.insert("context_key".to_string(), "context_value".to_string()); + assert_eq!( + selector + .on_request( + &crate::services::RouterRequest::fake_builder() + .context(context.clone()) + .build() + .unwrap() + ) + .unwrap(), + "context_value".into() + ); + + assert_eq!( + selector + .on_request( + &crate::services::RouterRequest::fake_builder() + .build() + .unwrap() + ) + .unwrap(), + "defaulted".into() + ); + assert_eq!( + selector.on_response( + &crate::services::RouterResponse::fake_builder() + .context(context) + .build() + .unwrap() + ), + None + ); + } + + #[test] + fn router_response_context() { + let selector = RouterSelector::ResponseContext { + response_context: "context_key".to_string(), + redact: None, + default: Some("defaulted".into()), + }; + let context = crate::context::Context::new(); + let _ = context.insert("context_key".to_string(), "context_value".to_string()); + assert_eq!( + selector + .on_response( + 
&crate::services::RouterResponse::fake_builder() + .context(context.clone()) + .build() + .unwrap() + ) + .unwrap(), + "context_value".into() + ); + + assert_eq!( + selector + .on_error(&BoxError::from(String::from("my error")), &context) + .unwrap(), + "context_value".into() + ); + + assert_eq!( + selector + .on_response( + &crate::services::RouterResponse::fake_builder() + .build() + .unwrap() + ) + .unwrap(), + "defaulted".into() + ); + assert_eq!( + selector.on_request( + &crate::services::RouterRequest::fake_builder() + .context(context) + .build() + .unwrap() + ), + None + ); + } + #[test] fn router_response_header() { let selector = RouterSelector::ResponseHeader { @@ -2179,55 +2293,6 @@ mod test { ); } - #[test] - fn router_response_context() { - let selector = RouterSelector::ResponseContext { - response_context: "context_key".to_string(), - redact: None, - default: Some("defaulted".into()), - }; - let context = crate::context::Context::new(); - let _ = context.insert("context_key".to_string(), "context_value".to_string()); - assert_eq!( - selector - .on_response( - &crate::services::RouterResponse::fake_builder() - .context(context.clone()) - .build() - .unwrap() - ) - .unwrap(), - "context_value".into() - ); - - assert_eq!( - selector - .on_error(&BoxError::from(String::from("my error")), &context) - .unwrap(), - "context_value".into() - ); - - assert_eq!( - selector - .on_response( - &crate::services::RouterResponse::fake_builder() - .build() - .unwrap() - ) - .unwrap(), - "defaulted".into() - ); - assert_eq!( - selector.on_request( - &crate::services::RouterRequest::fake_builder() - .context(context) - .build() - .unwrap() - ), - None - ); - } - #[test] fn supergraph_request_context() { let selector = SupergraphSelector::RequestContext { diff --git a/apollo-router/src/plugins/telemetry/mod.rs b/apollo-router/src/plugins/telemetry/mod.rs index e44762b412..35792cf12d 100644 --- a/apollo-router/src/plugins/telemetry/mod.rs +++ 
b/apollo-router/src/plugins/telemetry/mod.rs @@ -182,6 +182,7 @@ const SUBGRAPH_FTV1: &str = "apollo_telemetry::subgraph_ftv1"; pub(crate) const STUDIO_EXCLUDE: &str = "apollo_telemetry::studio::exclude"; pub(crate) const LOGGING_DISPLAY_HEADERS: &str = "apollo_telemetry::logging::display_headers"; pub(crate) const LOGGING_DISPLAY_BODY: &str = "apollo_telemetry::logging::display_body"; +pub(crate) const SUPERGRAPH_SCHEMA_ID_CONTEXT_KEY: &str = "apollo::supergraph_schema_id"; const GLOBAL_TRACER_NAME: &str = "apollo-router"; const DEFAULT_EXPOSE_TRACE_ID_HEADER: &str = "apollo-trace-id"; static DEFAULT_EXPOSE_TRACE_ID_HEADER_NAME: HeaderName = @@ -201,6 +202,7 @@ pub(crate) const APOLLO_PRIVATE_QUERY_ROOT_FIELDS: Key = #[doc(hidden)] // Only public for integration tests pub(crate) struct Telemetry { pub(crate) config: Arc, + supergraph_schema_id: Arc, custom_endpoints: MultiMap, apollo_metrics_sender: apollo_exporter::Sender, field_level_instrumentation_ratio: f64, @@ -322,6 +324,7 @@ impl PluginPrivate for Telemetry { Ok(Telemetry { custom_endpoints: metrics_builder.custom_endpoints, apollo_metrics_sender: metrics_builder.apollo_metrics_sender, + supergraph_schema_id: init.supergraph_schema_id, field_level_instrumentation_ratio, activation: Mutex::new(TelemetryActivation { tracer_provider: Some(tracer_provider), @@ -349,6 +352,7 @@ impl PluginPrivate for Telemetry { fn router_service(&self, service: router::BoxService) -> router::BoxService { let config = self.config.clone(); + let supergraph_schema_id = self.supergraph_schema_id.clone(); let config_later = self.config.clone(); let config_request = self.config.clone(); let span_mode = config.instrumentation.spans.mode; @@ -401,6 +405,10 @@ impl PluginPrivate for Telemetry { })) .map_future_with_request_data( move |request: &router::Request| { + let _ = request.context.insert( + SUPERGRAPH_SCHEMA_ID_CONTEXT_KEY, + supergraph_schema_id.clone(), + ); if !use_legacy_request_span { let span = Span::current(); @@ -707,30 
+715,21 @@ impl PluginPrivate for Telemetry { fn execution_service(&self, service: execution::BoxService) -> execution::BoxService { ServiceBuilder::new() .instrument(move |req: &ExecutionRequest| { - let operation_kind = req - .query_plan - .query - .operation(req.supergraph_request.body().operation_name.as_deref()) - .map(|op| *op.kind()); + let operation_kind = req.query_plan.query.operation.kind(); match operation_kind { - Some(operation_kind) => match operation_kind { - OperationKind::Subscription => info_span!( - EXECUTION_SPAN_NAME, - "otel.kind" = "INTERNAL", - "graphql.operation.type" = operation_kind.as_apollo_operation_type(), - "apollo_private.operation.subtype" = - OperationSubType::SubscriptionRequest.as_str(), - ), - _ => info_span!( - EXECUTION_SPAN_NAME, - "otel.kind" = "INTERNAL", - "graphql.operation.type" = operation_kind.as_apollo_operation_type(), - ), - }, - None => { - info_span!(EXECUTION_SPAN_NAME, "otel.kind" = "INTERNAL",) - } + OperationKind::Subscription => info_span!( + EXECUTION_SPAN_NAME, + "otel.kind" = "INTERNAL", + "graphql.operation.type" = operation_kind.as_apollo_operation_type(), + "apollo_private.operation.subtype" = + OperationSubType::SubscriptionRequest.as_str(), + ), + _ => info_span!( + EXECUTION_SPAN_NAME, + "otel.kind" = "INTERNAL", + "graphql.operation.type" = operation_kind.as_apollo_operation_type(), + ), } }) .service(service) diff --git a/apollo-router/src/plugins/test.rs b/apollo-router/src/plugins/test.rs index 76f7410412..fb8e3ade57 100644 --- a/apollo-router/src/plugins/test.rs +++ b/apollo-router/src/plugins/test.rs @@ -10,7 +10,7 @@ use tower::BoxError; use tower::ServiceBuilder; use tower_service::Service; -use crate::introspection::default_cache_storage; +use crate::introspection::IntrospectionCache; use crate::plugin::DynPlugin; use crate::plugin::Plugin; use crate::plugin::PluginInit; @@ -96,12 +96,13 @@ impl PluginTestHarness { let sdl = schema.raw_sdl.clone(); let supergraph = 
schema.supergraph_schema().clone(); let rust_planner = PlannerMode::maybe_rust(&schema, &config).unwrap(); + let introspection = Arc::new(IntrospectionCache::new(&config)); let planner = BridgeQueryPlanner::new( schema.into(), Arc::new(config), None, rust_planner, - default_cache_storage().await, + introspection, ) .await .unwrap(); @@ -116,6 +117,7 @@ impl PluginTestHarness { let plugin_init = PluginInit::builder() .config(config_for_plugin.clone()) + .supergraph_schema_id(crate::spec::Schema::schema_id(&supergraph_sdl).into()) .supergraph_sdl(supergraph_sdl) .supergraph_schema(Arc::new(parsed_schema)) .subgraph_schemas(subgraph_schemas) diff --git a/apollo-router/src/plugins/traffic_shaping/deduplication.rs b/apollo-router/src/plugins/traffic_shaping/deduplication.rs index 639d0d12b9..0df2574eb4 100644 --- a/apollo-router/src/plugins/traffic_shaping/deduplication.rs +++ b/apollo-router/src/plugins/traffic_shaping/deduplication.rs @@ -49,6 +49,7 @@ impl Clone for CloneSubgraphResponse { response: http_ext::Response::from(&self.0.response).inner, context: self.0.context.clone(), subgraph_name: self.0.subgraph_name.clone(), + id: self.0.id.clone(), }) } } @@ -105,6 +106,7 @@ where response.0.response, request.context, request.subgraph_name.unwrap_or_default(), + request.id, ) }) .map_err(|e| e.into()) @@ -121,6 +123,7 @@ where let context = request.context.clone(); let authorization_cache_key = request.authorization.clone(); + let id = request.id.clone(); let cache_key = ((&request.subgraph_request).into(), authorization_cache_key); let res = { // when _drop_signal is dropped, either by getting out of the block, returning @@ -162,6 +165,7 @@ where response.0.response, context, response.0.subgraph_name.unwrap_or_default(), + id, ) }); } diff --git a/apollo-router/src/plugins/traffic_shaping/mod.rs b/apollo-router/src/plugins/traffic_shaping/mod.rs index e77322fffa..1125239d10 100644 --- a/apollo-router/src/plugins/traffic_shaping/mod.rs +++ 
b/apollo-router/src/plugins/traffic_shaping/mod.rs @@ -35,6 +35,7 @@ use self::rate::RateLimited; pub(crate) use self::retry::RetryPolicy; use self::timeout::Elapsed; use self::timeout::TimeoutLayer; +use crate::configuration::shared::DnsResolutionStrategy; use crate::error::ConfigurationError; use crate::graphql; use crate::layers::ServiceBuilderExt; @@ -72,6 +73,8 @@ struct Shaping { experimental_retry: Option, /// Enable HTTP2 for subgraphs experimental_http2: Option, + /// DNS resolution strategy for subgraphs + dns_resolution_strategy: Option, } #[derive(PartialEq, Default, Debug, Clone, Deserialize, JsonSchema)] @@ -109,6 +112,11 @@ impl Merge for Shaping { .as_ref() .or(fallback.experimental_http2.as_ref()) .cloned(), + dns_resolution_strategy: self + .dns_resolution_strategy + .as_ref() + .or(fallback.dns_resolution_strategy.as_ref()) + .cloned(), }, } } @@ -444,13 +452,19 @@ impl TrafficShaping { } } - pub(crate) fn enable_subgraph_http2(&self, service_name: &str) -> Http2Config { + pub(crate) fn subgraph_client_config( + &self, + service_name: &str, + ) -> crate::configuration::shared::Client { Self::merge_config( self.config.all.as_ref(), self.config.subgraphs.get(service_name), ) - .and_then(|config| config.shaping.experimental_http2) - .unwrap_or(Http2Config::Enable) + .map(|config| crate::configuration::shared::Client { + experimental_http2: config.shaping.experimental_http2, + dns_resolution_strategy: config.shaping.dns_resolution_strategy, + }) + .unwrap_or_default() } } @@ -749,16 +763,19 @@ mod test { } #[tokio::test] - async fn test_enable_subgraph_http2() { + async fn test_subgraph_client_config() { let config = serde_yaml::from_str::( r#" all: experimental_http2: disable + dns_resolution_strategy: ipv6_only subgraphs: products: experimental_http2: enable + dns_resolution_strategy: ipv6_then_ipv4 reviews: experimental_http2: disable + dns_resolution_strategy: ipv4_only router: timeout: 65s "#, @@ -769,9 +786,27 @@ mod test { .await .unwrap(); - 
assert!(shaping_config.enable_subgraph_http2("products") == Http2Config::Enable); - assert!(shaping_config.enable_subgraph_http2("reviews") == Http2Config::Disable); - assert!(shaping_config.enable_subgraph_http2("this_doesnt_exist") == Http2Config::Disable); + assert_eq!( + shaping_config.subgraph_client_config("products"), + crate::configuration::shared::Client { + experimental_http2: Some(Http2Config::Enable), + dns_resolution_strategy: Some(DnsResolutionStrategy::Ipv6ThenIpv4), + }, + ); + assert_eq!( + shaping_config.subgraph_client_config("reviews"), + crate::configuration::shared::Client { + experimental_http2: Some(Http2Config::Disable), + dns_resolution_strategy: Some(DnsResolutionStrategy::Ipv4Only), + }, + ); + assert_eq!( + shaping_config.subgraph_client_config("this_doesnt_exist"), + crate::configuration::shared::Client { + experimental_http2: Some(Http2Config::Disable), + dns_resolution_strategy: Some(DnsResolutionStrategy::Ipv6Only), + }, + ); } #[tokio::test(flavor = "multi_thread")] diff --git a/apollo-router/src/query_planner/bridge_query_planner.rs b/apollo-router/src/query_planner/bridge_query_planner.rs index 9cab43aab8..f10e424bb3 100644 --- a/apollo-router/src/query_planner/bridge_query_planner.rs +++ b/apollo-router/src/query_planner/bridge_query_planner.rs @@ -3,11 +3,11 @@ use std::collections::HashMap; use std::fmt::Debug; use std::fmt::Write; +use std::ops::ControlFlow; use std::sync::Arc; use std::time::Instant; use apollo_compiler::ast; -use apollo_compiler::execution::InputCoercionError; use apollo_compiler::validation::Valid; use apollo_compiler::Name; use apollo_federation::error::FederationError; @@ -18,7 +18,6 @@ use futures::future::BoxFuture; use opentelemetry_api::metrics::MeterProvider as _; use opentelemetry_api::metrics::ObservableGauge; use opentelemetry_api::KeyValue; -use router_bridge::introspect::IntrospectionError; use router_bridge::planner::PlanOptions; use router_bridge::planner::PlanSuccess; use 
router_bridge::planner::Planner; @@ -30,8 +29,6 @@ use tower::Service; use super::PlanNode; use super::QueryKey; use crate::apollo_studio_interop::generate_usage_reporting; -use crate::cache::storage::CacheStorage; -use crate::configuration::IntrospectionMode as IntrospectionConfig; use crate::configuration::QueryPlannerMode; use crate::error::PlanErrors; use crate::error::QueryPlannerError; @@ -39,8 +36,7 @@ use crate::error::SchemaError; use crate::error::ServiceBuildError; use crate::error::ValidationErrors; use crate::graphql; -use crate::graphql::Response; -use crate::introspection::Introspection; +use crate::introspection::IntrospectionCache; use crate::json_ext::Object; use crate::json_ext::Path; use crate::metrics::meter_provider; @@ -80,11 +76,11 @@ pub(crate) struct BridgeQueryPlanner { planner: PlannerMode, schema: Arc, subgraph_schemas: Arc>>>, - introspection: IntrospectionMode, configuration: Arc, enable_authorization_directives: bool, _federation_instrument: ObservableGauge, signature_normalization_algorithm: ApolloSignatureNormalizationAlgorithm, + introspection: Arc, } #[derive(Clone)] @@ -97,14 +93,6 @@ pub(crate) enum PlannerMode { Rust(Arc), } -#[derive(Clone)] -enum IntrospectionMode { - Js(Arc), - Both(Arc), - Rust, - Disabled, -} - fn federation_version_instrument(federation_version: Option) -> ObservableGauge { meter_provider() .meter("apollo/router") @@ -236,27 +224,6 @@ impl PlannerMode { Ok(Arc::new(planner)) } - async fn js_introspection( - &self, - sdl: &str, - configuration: &Configuration, - old_js_planner: &Option>>, - cache: CacheStorage, - ) -> Result, ServiceBuildError> { - let js_planner = match self { - Self::Js(js) => js.clone(), - Self::Both { js, .. } => js.clone(), - Self::Rust(_) => { - // JS "planner" (actually runtime) was not created for planning - // but is still needed for introspection, so create it now - Self::js_planner(sdl, configuration, old_js_planner).await? 
- } - }; - Ok(Arc::new( - Introspection::with_cache(js_planner, cache).await?, - )) - } - async fn plan( &self, doc: &ParsedDocument, @@ -424,31 +391,13 @@ impl BridgeQueryPlanner { configuration: Arc, old_js_planner: Option>>, rust_planner: Option>, - cache: CacheStorage, + introspection_cache: Arc, ) -> Result { let planner = PlannerMode::new(&schema, &configuration, &old_js_planner, rust_planner).await?; let subgraph_schemas = Arc::new(planner.subgraphs().await?); - let introspection = if configuration.supergraph.introspection { - match configuration.experimental_introspection_mode { - IntrospectionConfig::New => IntrospectionMode::Rust, - IntrospectionConfig::Legacy => IntrospectionMode::Js( - planner - .js_introspection(&schema.raw_sdl, &configuration, &old_js_planner, cache) - .await?, - ), - IntrospectionConfig::Both => IntrospectionMode::Both( - planner - .js_introspection(&schema.raw_sdl, &configuration, &old_js_planner, cache) - .await?, - ), - } - } else { - IntrospectionMode::Disabled - }; - let enable_authorization_directives = AuthorizationPlugin::enable_directives(&configuration, &schema)?; let federation_instrument = federation_version_instrument(schema.federation_version()); @@ -459,11 +408,11 @@ impl BridgeQueryPlanner { planner, schema, subgraph_schemas, - introspection, enable_authorization_directives, configuration, _federation_instrument: federation_instrument, signature_normalization_algorithm, + introspection: introspection_cache, }) } @@ -471,13 +420,7 @@ impl BridgeQueryPlanner { match &self.planner { PlannerMode::Js(js) => Some(js.clone()), PlannerMode::Both { js, .. 
} => Some(js.clone()), - PlannerMode::Rust(_) => match &self.introspection { - IntrospectionMode::Js(js_introspection) - | IntrospectionMode::Both(js_introspection) => { - Some(js_introspection.planner.clone()) - } - IntrospectionMode::Rust | IntrospectionMode::Disabled => None, - }, + PlannerMode::Rust(_) => None, } } @@ -508,19 +451,19 @@ impl BridgeQueryPlanner { operation_name, )?; - let (fragments, operations, defer_stats, schema_aware_hash) = + let (fragments, operation, defer_stats, schema_aware_hash) = Query::extract_query_information(&self.schema, executable, operation_name)?; let subselections = crate::spec::query::subselections::collect_subselections( &self.configuration, - &operations, + &operation, &fragments.map, &defer_stats, )?; Ok(Query { string: query, fragments, - operations, + operation, filtered_query: None, unauthorized: UnauthorizedPaths { paths: vec![], @@ -533,118 +476,6 @@ impl BridgeQueryPlanner { }) } - async fn introspection( - &self, - key: QueryKey, - doc: ParsedDocument, - ) -> Result { - match &self.introspection { - IntrospectionMode::Disabled => return Ok(QueryPlannerContent::IntrospectionDisabled), - IntrospectionMode::Rust => { - let schema = self.schema.clone(); - let response = Box::new( - tokio::task::spawn_blocking(move || { - Self::rust_introspection(&schema, &key, &doc) - }) - .await - .expect("Introspection panicked")?, - ); - return Ok(QueryPlannerContent::Response { response }); - } - IntrospectionMode::Js(_) | IntrospectionMode::Both(_) => {} - } - - if doc.executable.operations.len() > 1 { - // TODO: add an operation_name parameter to router-bridge to fix this? 
- let error = graphql::Error::builder() - .message( - "Schema introspection is currently not supported \ - with multiple operations in the same document", - ) - .extension_code("INTROSPECTION_WITH_MULTIPLE_OPERATIONS") - .build(); - return Ok(QueryPlannerContent::Response { - response: Box::new(graphql::Response::builder().error(error).build()), - }); - } - - let response = match &self.introspection { - IntrospectionMode::Rust | IntrospectionMode::Disabled => unreachable!(), // returned above - IntrospectionMode::Js(js) => js - .execute(key.filtered_query) - .await - .map_err(QueryPlannerError::Introspection)?, - IntrospectionMode::Both(js) => { - let js_result = js - .execute(key.filtered_query.clone()) - .await - .map_err(QueryPlannerError::Introspection); - let schema = self.schema.clone(); - let js_result_clone = js_result.clone(); - tokio::task::spawn_blocking(move || { - let rust_result = match Self::rust_introspection(&schema, &key, &doc) { - Ok(response) => { - if response.errors.is_empty() { - Ok(response) - } else { - Err(QueryPlannerError::Introspection(IntrospectionError { - message: Some( - response - .errors - .into_iter() - .map(|e| e.to_string()) - .collect::>() - .join(", "), - ), - })) - } - } - Err(e) => Err(e), - }; - super::dual_introspection::compare_introspection_responses( - &key.original_query, - js_result_clone, - rust_result, - ); - }) - .await - .expect("Introspection comparison panicked"); - js_result? 
- } - }; - Ok(QueryPlannerContent::Response { - response: Box::new(response), - }) - } - - fn rust_introspection( - schema: &Schema, - key: &QueryKey, - doc: &ParsedDocument, - ) -> Result { - let schema = schema.api_schema(); - let operation = doc.get_operation(key.operation_name.as_deref())?; - let variable_values = Default::default(); - let variable_values = - apollo_compiler::execution::coerce_variable_values(schema, operation, &variable_values) - .map_err(|e| { - let message = match &e { - InputCoercionError::SuspectedValidationBug(e) => &e.message, - InputCoercionError::ValueError { message, .. } => message, - }; - QueryPlannerError::Introspection(IntrospectionError { - message: Some(message.clone()), - }) - })?; - let response = apollo_compiler::execution::execute_introspection_only_query( - schema, - &doc.executable, - operation, - &variable_values, - ); - Ok(response.into()) - } - #[allow(clippy::too_many_arguments)] async fn plan( &self, @@ -798,11 +629,12 @@ impl Service for BridgeQueryPlanner { operation_name.as_deref(), ) .map_err(|e| SpecError::QueryHashing(e.to_string()))?; - doc = Arc::new(ParsedDocumentInner { - executable: Arc::new(executable_document), - ast: modified_query, - hash: Arc::new(QueryHash(hash)), - }); + doc = ParsedDocumentInner::new( + modified_query, + Arc::new(executable_document), + operation_name.as_deref(), + Arc::new(QueryHash(hash)), + )?; context .extensions() .with_lock(|mut lock| lock.insert::(doc.clone())); @@ -880,10 +712,7 @@ impl BridgeQueryPlanner { ) .await?; - if selections - .operation(key.operation_name.as_deref()) - .is_some_and(|op| op.selection_set.is_empty()) - { + if selections.operation.selection_set.is_empty() { // All selections have @skip(true) or @include(false) // Return an empty response now to avoid dealing with an empty query plan later return Ok(QueryPlannerContent::Response { @@ -895,69 +724,16 @@ impl BridgeQueryPlanner { }); } + match self + .introspection + .maybe_execute(&self.schema, &key, 
&doc) + .await { - let operation = doc - .executable - .operations - .get(key.operation_name.as_deref()) - .ok(); - let mut has_root_typename = false; - let mut has_schema_introspection = false; - let mut has_other_root_fields = false; - if let Some(operation) = operation { - for field in operation.root_fields(&doc.executable) { - match field.name.as_str() { - "__typename" => has_root_typename = true, - "__schema" | "__type" if operation.is_query() => { - has_schema_introspection = true - } - _ => has_other_root_fields = true, - } - } - if has_root_typename && !has_schema_introspection && !has_other_root_fields { - // Fast path for __typename alone - if operation - .selection_set - .selections - .iter() - .all(|sel| sel.as_field().is_some_and(|f| f.name == "__typename")) - { - let root_type_name: serde_json_bytes::ByteString = - operation.object_type().as_str().into(); - let data = Value::Object( - operation - .root_fields(&doc.executable) - .filter(|field| field.name == "__typename") - .map(|field| { - ( - field.response_key().as_str().into(), - Value::String(root_type_name.clone()), - ) - }) - .collect(), - ); - return Ok(QueryPlannerContent::Response { - response: Box::new(graphql::Response::builder().data(data).build()), - }); - } else { - // fragments might use @include or @skip - } - } - } else { - // Should be unreachable as QueryAnalysisLayer would have returned an error - } - - if has_schema_introspection { - if has_other_root_fields { - let error = graphql::Error::builder() - .message("Mixed queries with both schema introspection and concrete fields are not supported") - .extension_code("MIXED_INTROSPECTION") - .build(); - return Ok(QueryPlannerContent::Response { - response: Box::new(graphql::Response::builder().error(error).build()), - }); - } - return self.introspection(key, doc).await; + ControlFlow::Continue(()) => (), + ControlFlow::Break(response) => { + return Ok(QueryPlannerContent::CachedIntrospectionResponse { + response: Box::new(response), + 
}) } } @@ -1001,11 +777,12 @@ impl BridgeQueryPlanner { key.operation_name.as_deref(), ) .map_err(|e| SpecError::QueryHashing(e.to_string()))?; - doc = Arc::new(ParsedDocumentInner { - executable: Arc::new(executable_document), - ast: new_doc, - hash: Arc::new(QueryHash(hash)), - }); + doc = ParsedDocumentInner::new( + new_doc, + Arc::new(executable_document), + key.operation_name.as_deref(), + Arc::new(QueryHash(hash)), + )?; selections.unauthorized.paths = unauthorized_paths; } @@ -1126,7 +903,6 @@ mod tests { use tower::ServiceExt; use super::*; - use crate::introspection::default_cache_storage; use crate::metrics::FutureMetricsExt as _; use crate::services::subgraph; use crate::services::supergraph; @@ -1171,15 +947,11 @@ mod tests { let sdl = include_str!("../testdata/minimal_fed1_supergraph.graphql"); let config = Arc::default(); let schema = Schema::parse(sdl, &config).unwrap(); - let _planner = BridgeQueryPlanner::new( - schema.into(), - config, - None, - None, - default_cache_storage().await, - ) - .await - .unwrap(); + let introspection = Arc::new(IntrospectionCache::new(&config)); + let _planner = + BridgeQueryPlanner::new(schema.into(), config, None, None, introspection) + .await + .unwrap(); assert_gauge!( "apollo.router.supergraph.federation", @@ -1194,15 +966,11 @@ mod tests { let sdl = include_str!("../testdata/minimal_supergraph.graphql"); let config = Arc::default(); let schema = Schema::parse(sdl, &config).unwrap(); - let _planner = BridgeQueryPlanner::new( - schema.into(), - config, - None, - None, - default_cache_storage().await, - ) - .await - .unwrap(); + let introspection = Arc::new(IntrospectionCache::new(&config)); + let _planner = + BridgeQueryPlanner::new(schema.into(), config, None, None, introspection) + .await + .unwrap(); assert_gauge!( "apollo.router.supergraph.federation", @@ -1216,7 +984,8 @@ mod tests { #[test(tokio::test)] async fn empty_query_plan_should_be_a_planner_error() { - let schema = 
Arc::new(Schema::parse(EXAMPLE_SCHEMA, &Default::default()).unwrap()); + let config = Default::default(); + let schema = Arc::new(Schema::parse(EXAMPLE_SCHEMA, &config).unwrap()); let query = include_str!("testdata/unknown_introspection_query.graphql"); let planner = BridgeQueryPlanner::new( @@ -1224,7 +993,7 @@ mod tests { Default::default(), None, None, - default_cache_storage().await, + Arc::new(IntrospectionCache::new(&config)), ) .await .unwrap(); @@ -1268,10 +1037,11 @@ mod tests { #[test(tokio::test)] async fn test_plan_error() { - let result = plan(EXAMPLE_SCHEMA, "", "", None, PlanOptions::default()).await; + let query = ""; + let result = plan(EXAMPLE_SCHEMA, query, query, None, PlanOptions::default()).await; assert_eq!( - "couldn't plan query: query validation errors: Syntax Error: Unexpected .", + "spec error: parsing error: syntax error: Unexpected .", result.unwrap_err().to_string() ); } @@ -1287,7 +1057,7 @@ mod tests { ) .await .unwrap(); - if let QueryPlannerContent::Response { response } = result { + if let QueryPlannerContent::CachedIntrospectionResponse { response } = result { assert_eq!( r#"{"data":{"x":"Query"}}"#, serde_json::to_string(&response).unwrap() @@ -1308,7 +1078,7 @@ mod tests { ) .await .unwrap(); - if let QueryPlannerContent::Response { response } = result { + if let QueryPlannerContent::CachedIntrospectionResponse { response } = result { assert_eq!( r#"{"data":{"x":"Query","__typename":"Query"}}"#, serde_json::to_string(&response).unwrap() @@ -1330,7 +1100,7 @@ mod tests { configuration.clone(), None, None, - default_cache_storage().await, + Arc::new(IntrospectionCache::new(&configuration)), ) .await .unwrap(); @@ -1644,7 +1414,7 @@ mod tests { configuration.clone(), None, None, - default_cache_storage().await, + Arc::new(IntrospectionCache::new(&configuration)), ) .await .unwrap(); @@ -1654,8 +1424,7 @@ mod tests { operation_name.as_deref(), &planner.schema(), &configuration, - ) - .unwrap(); + )?; planner .get( diff --git 
a/apollo-router/src/query_planner/bridge_query_planner_pool.rs b/apollo-router/src/query_planner/bridge_query_planner_pool.rs index 3200e59e7e..722a965efd 100644 --- a/apollo-router/src/query_planner/bridge_query_planner_pool.rs +++ b/apollo-router/src/query_planner/bridge_query_planner_pool.rs @@ -21,11 +21,9 @@ use tower::ServiceExt; use super::bridge_query_planner::BridgeQueryPlanner; use super::QueryPlanResult; -use crate::cache::storage::CacheStorage; use crate::error::QueryPlannerError; use crate::error::ServiceBuildError; -use crate::graphql::Response; -use crate::introspection::default_cache_storage; +use crate::introspection::IntrospectionCache; use crate::metrics::meter_provider; use crate::query_planner::PlannerMode; use crate::services::QueryPlannerRequest; @@ -49,7 +47,7 @@ pub(crate) struct BridgeQueryPlannerPool { v8_heap_used_gauge: Arc>>>, v8_heap_total: Arc, v8_heap_total_gauge: Arc>>>, - introspection_cache: CacheStorage, + introspection_cache: Arc, } impl BridgeQueryPlannerPool { @@ -72,7 +70,7 @@ impl BridgeQueryPlannerPool { // All query planners in the pool now share the same introspection cache. // This allows meaningful gauges, and it makes sense that queries should be cached across all planners. 
- let introspection_cache = default_cache_storage().await; + let introspection_cache = Arc::new(IntrospectionCache::new(&configuration)); for _ in 0..size.into() { let schema = schema.clone(); diff --git a/apollo-router/src/query_planner/caching_query_planner.rs b/apollo-router/src/query_planner/caching_query_planner.rs index b0a527584e..1c4e953c05 100644 --- a/apollo-router/src/query_planner/caching_query_planner.rs +++ b/apollo-router/src/query_planner/caching_query_planner.rs @@ -81,8 +81,6 @@ pub(crate) struct CachingQueryPlanner { plugins: Arc, enable_authorization_directives: bool, config_mode: ConfigMode, - introspection: bool, - legacy_introspection_caching: bool, } fn init_query_plan_from_redis( @@ -149,11 +147,6 @@ where plugins: Arc::new(plugins), enable_authorization_directives, config_mode, - introspection: configuration.supergraph.introspection, - legacy_introspection_caching: configuration - .supergraph - .query_planning - .legacy_introspection_caching, }) } @@ -203,7 +196,6 @@ where plan_options, config_mode: _, schema_id: _, - introspection: _, }, _, )| WarmUpCachingQueryKey { @@ -213,7 +205,6 @@ where metadata: metadata.clone(), plan_options: plan_options.clone(), config_mode: self.config_mode.clone(), - introspection: self.introspection, }, ) .take(count) @@ -259,7 +250,6 @@ where metadata: CacheKeyMetadata::default(), plan_options: PlanOptions::default(), config_mode: self.config_mode.clone(), - introspection: self.introspection, }); } } @@ -276,11 +266,10 @@ where metadata, plan_options, config_mode: _, - introspection: _, } in all_cache_keys { let context = Context::new(); - let (doc, _operation_def) = match query_analysis + let doc = match query_analysis .parse_document(&query, operation_name.as_deref()) .await { @@ -296,7 +285,6 @@ where metadata, plan_options, config_mode: self.config_mode.clone(), - introspection: self.introspection, }; if experimental_reuse_query_plans { @@ -324,7 +312,7 @@ where }) .await; if entry.is_first() { - let 
(doc, _operation_def) = match query_analysis + let doc = match query_analysis .parse_document(&query, operation_name.as_deref()) .await { @@ -498,7 +486,6 @@ where metadata, plan_options, config_mode: self.config_mode.clone(), - introspection: self.introspection, }; let context = request.context.clone(); @@ -542,8 +529,12 @@ where }) => { if let Some(content) = content.clone() { let can_cache = match &content { - QueryPlannerContent::Plan { .. } => true, - _ => self.legacy_introspection_caching, + // Already cached in an introspection-specific, small-size, + // in-memory-only cache. + QueryPlannerContent::CachedIntrospectionResponse { .. } => { + false + } + _ => true, }; if can_cache { @@ -645,7 +636,6 @@ pub(crate) struct CachingQueryKey { pub(crate) metadata: CacheKeyMetadata, pub(crate) plan_options: PlanOptions, pub(crate) config_mode: ConfigMode, - pub(crate) introspection: bool, } // Update this key every time the cache key or the query plan format has to change. @@ -666,7 +656,6 @@ impl std::fmt::Display for CachingQueryKey { hasher .update(serde_json::to_vec(&self.config_mode).expect("serialization should not fail")); hasher.update(&*self.schema_id); - hasher.update([self.introspection as u8]); let metadata = hex::encode(hasher.finalize()); write!( @@ -685,7 +674,6 @@ impl Hash for CachingQueryKey { self.metadata.hash(state); self.plan_options.hash(state); self.config_mode.hash(state); - self.introspection.hash(state); } } @@ -697,14 +685,16 @@ pub(crate) struct WarmUpCachingQueryKey { pub(crate) metadata: CacheKeyMetadata, pub(crate) plan_options: PlanOptions, pub(crate) config_mode: ConfigMode, - pub(crate) introspection: bool, } impl ValueType for Result> { fn estimated_size(&self) -> Option { match self { Ok(QueryPlannerContent::Plan { plan }) => Some(plan.estimated_size()), - Ok(QueryPlannerContent::Response { response }) => Some(estimate_size(response)), + Ok(QueryPlannerContent::Response { response }) + | 
Ok(QueryPlannerContent::CachedIntrospectionResponse { response }) => { + Some(estimate_size(response)) + } Ok(QueryPlannerContent::IntrospectionDisabled) => None, Err(e) => Some(estimate_size(e)), } @@ -938,7 +928,7 @@ mod tests { .returning(|| { let mut planner = MockMyQueryPlanner::new(); planner.expect_sync_call().returning(|_| { - let qp_content = QueryPlannerContent::Response { + let qp_content = QueryPlannerContent::CachedIntrospectionResponse { response: Box::new( crate::graphql::Response::builder() .data(Object::new()) @@ -954,16 +944,7 @@ mod tests { planner }); - let configuration = Arc::new(crate::Configuration { - supergraph: crate::configuration::Supergraph { - query_planning: crate::configuration::QueryPlanning { - legacy_introspection_caching: false, - ..Default::default() - }, - ..Default::default() - }, - ..Default::default() - }); + let configuration = Default::default(); let schema = include_str!("testdata/schema.graphql"); let schema = Arc::new(Schema::parse(schema, &configuration).unwrap()); diff --git a/apollo-router/src/query_planner/dual_introspection.rs b/apollo-router/src/query_planner/dual_introspection.rs deleted file mode 100644 index f07ffe2036..0000000000 --- a/apollo-router/src/query_planner/dual_introspection.rs +++ /dev/null @@ -1,197 +0,0 @@ -use std::cmp::Ordering; - -use apollo_compiler::ast; -use serde_json_bytes::Value; - -use crate::error::QueryPlannerError; -use crate::graphql; - -pub(crate) fn compare_introspection_responses( - query: &str, - mut js_result: Result, - mut rust_result: Result, -) { - let is_matched; - match (&mut js_result, &mut rust_result) { - (Err(_), Err(_)) => { - is_matched = true; - } - (Err(err), Ok(_)) => { - is_matched = false; - tracing::warn!("JS introspection error: {err}") - } - (Ok(_), Err(err)) => { - is_matched = false; - tracing::warn!("Rust introspection error: {err}") - } - (Ok(js_response), Ok(rust_response)) => { - if let (Some(js_data), Some(rust_data)) = - (&mut js_response.data, &mut 
rust_response.data) - { - normalize_response(js_data); - normalize_response(rust_data); - } - is_matched = js_response.data == rust_response.data; - if is_matched { - tracing::trace!("Introspection match! 🎉") - } else { - tracing::debug!("Introspection mismatch"); - tracing::trace!("Introspection query:\n{query}"); - tracing::debug!("Introspection diff:\n{}", { - let rust = rust_response - .data - .as_ref() - .map(|d| serde_json::to_string_pretty(&d).unwrap()) - .unwrap_or_default(); - let js = js_response - .data - .as_ref() - .map(|d| serde_json::to_string_pretty(&d).unwrap()) - .unwrap_or_default(); - let diff = similar::TextDiff::from_lines(&js, &rust); - diff.unified_diff() - .context_radius(10) - .header("JS", "Rust") - .to_string() - }) - } - } - } - - u64_counter!( - "apollo.router.operations.introspection.both", - "Comparing JS v.s. Rust introspection", - 1, - "generation.is_matched" = is_matched, - "generation.js_error" = js_result.is_err(), - "generation.rust_error" = rust_result.is_err() - ); -} - -fn normalize_response(value: &mut Value) { - match value { - Value::Array(array) => { - for item in array.iter_mut() { - normalize_response(item) - } - array.sort_by(json_compare) - } - Value::Object(object) => { - for (key, value) in object { - if let Some(new_value) = normalize_default_value(key.as_str(), value) { - *value = new_value - } else { - normalize_response(value) - } - } - } - Value::Null | Value::Bool(_) | Value::Number(_) | Value::String(_) => {} - } -} - -/// When a default value is an input object, graphql-js seems to sort its fields by name -fn normalize_default_value(key: &str, value: &Value) -> Option<Value> { - if key != "defaultValue" { - return None; - } - let default_value = value.as_str()?; - // We don't have a parser entry point for a standalone GraphQL `Value`, - // so mint a document that contains that value.
- let doc = format!("{{ field(arg: {default_value}) }}"); - let doc = ast::Document::parse(doc, "").ok()?; - let parsed_default_value = &doc - .definitions - .first()? - .as_operation_definition()? - .selection_set - .first()? - .as_field()? - .arguments - .first()? - .value; - match parsed_default_value.as_ref() { - ast::Value::List(_) | ast::Value::Object(_) => { - let normalized = normalize_parsed_default_value(parsed_default_value); - Some(normalized.serialize().no_indent().to_string().into()) - } - ast::Value::Null - | ast::Value::Enum(_) - | ast::Value::Variable(_) - | ast::Value::String(_) - | ast::Value::Float(_) - | ast::Value::Int(_) - | ast::Value::Boolean(_) => None, - } -} - -fn normalize_parsed_default_value(value: &ast::Value) -> ast::Value { - match value { - ast::Value::List(items) => ast::Value::List( - items - .iter() - .map(|item| normalize_parsed_default_value(item).into()) - .collect(), - ), - ast::Value::Object(fields) => { - let mut new_fields: Vec<_> = fields - .iter() - .map(|(name, value)| (name.clone(), normalize_parsed_default_value(value).into())) - .collect(); - new_fields.sort_by(|(name_1, _value_1), (name_2, _value_2)| name_1.cmp(name_2)); - ast::Value::Object(new_fields) - } - v => v.clone(), - } -} - -fn json_compare(a: &Value, b: &Value) -> Ordering { - match (a, b) { - (Value::Null, Value::Null) => Ordering::Equal, - (Value::Bool(a), Value::Bool(b)) => a.cmp(b), - (Value::Number(a), Value::Number(b)) => a.as_f64().unwrap().total_cmp(&b.as_f64().unwrap()), - (Value::String(a), Value::String(b)) => a.cmp(b), - (Value::Array(a), Value::Array(b)) => iter_cmp(a, b, json_compare), - (Value::Object(a), Value::Object(b)) => { - iter_cmp(a, b, |(key_a, a), (key_b, b)| { - debug_assert_eq!(key_a, key_b); // Response object keys are in selection set order - json_compare(a, b) - }) - } - _ => json_discriminant(a).cmp(&json_discriminant(b)), - } -} - -// TODO: use `Iterator::cmp_by` when available: -// 
https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.cmp_by -// https://github.com/rust-lang/rust/issues/64295 -fn iter_cmp( - a: impl IntoIterator, - b: impl IntoIterator, - cmp: impl Fn(T, T) -> Ordering, -) -> Ordering { - use itertools::Itertools; - for either_or_both in a.into_iter().zip_longest(b) { - match either_or_both { - itertools::EitherOrBoth::Both(a, b) => { - let ordering = cmp(a, b); - if ordering != Ordering::Equal { - return ordering; - } - } - itertools::EitherOrBoth::Left(_) => return Ordering::Less, - itertools::EitherOrBoth::Right(_) => return Ordering::Greater, - } - } - Ordering::Equal -} - -fn json_discriminant(value: &Value) -> u8 { - match value { - Value::Null => 0, - Value::Bool(_) => 1, - Value::Number(_) => 2, - Value::String(_) => 3, - Value::Array(_) => 4, - Value::Object(_) => 5, - } -} diff --git a/apollo-router/src/query_planner/dual_query_planner.rs b/apollo-router/src/query_planner/dual_query_planner.rs index 91d273fb26..5c0f41846f 100644 --- a/apollo-router/src/query_planner/dual_query_planner.rs +++ b/apollo-router/src/query_planner/dual_query_planner.rs @@ -1,6 +1,8 @@ //! 
Running two query planner implementations and comparing their results use std::borrow::Borrow; +use std::collections::hash_map::HashMap; +use std::fmt::Write; use std::hash::DefaultHasher; use std::hash::Hash; use std::hash::Hasher; @@ -268,7 +270,7 @@ fn fetch_node_matches(this: &FetchNode, other: &FetchNode) -> Result<(), MatchFa check_match_eq!(*operation_kind, other.operation_kind); check_match_eq!(*id, other.id); check_match_eq!(*authorization, other.authorization); - check_match!(same_selection_set_sorted(requires, &other.requires)); + check_match!(same_requires(requires, &other.requires)); check_match!(vec_matches_sorted(variable_usages, &other.variable_usages)); check_match!(same_rewrites(input_rewrites, &other.input_rewrites)); check_match!(same_rewrites(output_rewrites, &other.output_rewrites)); @@ -300,7 +302,12 @@ fn operation_matches( this: &SubgraphOperation, other: &SubgraphOperation, ) -> Result<(), MatchFailure> { - let this_ast = match ast::Document::parse(this.as_serialized(), "this_operation.graphql") { + document_str_matches(this.as_serialized(), other.as_serialized()) +} + +// Compare operation document strings such as query or just selection set. 
+fn document_str_matches(this: &str, other: &str) -> Result<(), MatchFailure> { + let this_ast = match ast::Document::parse(this, "this_operation.graphql") { Ok(document) => document, Err(_) => { return Err(MatchFailure::new( @@ -308,7 +315,7 @@ fn operation_matches( )); } }; - let other_ast = match ast::Document::parse(other.as_serialized(), "other_operation.graphql") { + let other_ast = match ast::Document::parse(other, "other_operation.graphql") { Ok(document) => document, Err(_) => { return Err(MatchFailure::new( @@ -319,6 +326,20 @@ fn operation_matches( same_ast_document(&this_ast, &other_ast) } +fn opt_document_string_matches( + this: &Option, + other: &Option, +) -> Result<(), MatchFailure> { + match (this, other) { + (None, None) => Ok(()), + (Some(this_sel), Some(other_sel)) => document_str_matches(this_sel, other_sel), + _ => Err(MatchFailure::new(format!( + "mismatched at opt_document_string_matches\nleft: {:?}\nright: {:?}", + this, other + ))), + } +} + // The rest is calling the comparison functions above instead of `PartialEq`, // but otherwise behave just like `PartialEq`: @@ -369,6 +390,9 @@ fn opt_plan_node_matches( } } +//================================================================================================== +// Vec comparison functions + fn vec_matches(this: &[T], other: &[T], item_matches: impl Fn(&T, &T) -> bool) -> bool { this.len() == other.len() && std::iter::zip(this, other).all(|(this, other)| item_matches(this, other)) @@ -386,7 +410,6 @@ fn vec_matches_result( item_matches(this, other) .map_err(|err| err.add_description(&format!("under item[{}]", index))) })?; - assert!(vec_matches(this, other, |a, b| item_matches(a, b).is_ok())); // Note: looks redundant Ok(()) } @@ -398,7 +421,20 @@ fn vec_matches_sorted(this: &[T], other: &[T]) -> bool { vec_matches(&this_sorted, &other_sorted, T::eq) } -fn vec_matches_sorted_by( +fn vec_matches_sorted_by( + this: &[T], + other: &[T], + compare: impl Fn(&T, &T) -> std::cmp::Ordering, + 
item_matches: impl Fn(&T, &T) -> bool, +) -> bool { + let mut this_sorted = this.to_owned(); + let mut other_sorted = other.to_owned(); + this_sorted.sort_by(&compare); + other_sorted.sort_by(&compare); + vec_matches(&this_sorted, &other_sorted, item_matches) +} + +fn vec_matches_result_sorted_by( this: &[T], other: &[T], compare: impl Fn(&T, &T) -> std::cmp::Ordering, @@ -414,53 +450,159 @@ fn vec_matches_sorted_by( Ok(()) } +// `this` vector includes `other` vector as a set +fn vec_includes_as_set(this: &[T], other: &[T], item_matches: impl Fn(&T, &T) -> bool) -> bool { + other.iter().all(|other_node| { + this.iter() + .any(|this_node| item_matches(this_node, other_node)) + }) +} + // performs a set comparison, ignoring order fn vec_matches_as_set(this: &[T], other: &[T], item_matches: impl Fn(&T, &T) -> bool) -> bool { // Set-inclusion test in both directions this.len() == other.len() - && this.iter().all(|this_node| { - other - .iter() - .any(|other_node| item_matches(this_node, other_node)) - }) - && other.iter().all(|other_node| { - this.iter() - .any(|this_node| item_matches(this_node, other_node)) - }) + && vec_includes_as_set(this, other, &item_matches) + && vec_includes_as_set(other, this, &item_matches) } -fn vec_matches_result_as_set( +// Forward/reverse mappings from one Vec items (indices) to another. +type VecMapping = (HashMap, HashMap); + +// performs a set comparison, ignoring order +// and returns a mapping from `this` to `other`. 
+fn vec_matches_as_set_with_mapping<T>( this: &[T], other: &[T], item_matches: impl Fn(&T, &T) -> bool, -) -> Result<(), MatchFailure> { +) -> VecMapping { // Set-inclusion test in both directions - check_match_eq!(this.len(), other.len()); - for (index, this_node) in this.iter().enumerate() { - if !other + // - record forward/reverse mapping from this items <-> other items for reporting mismatches + let mut forward_map: HashMap<usize, usize> = HashMap::new(); + let mut reverse_map: HashMap<usize, usize> = HashMap::new(); + for (this_pos, this_node) in this.iter().enumerate() { + if let Some(other_pos) = other .iter() - .any(|other_node| item_matches(this_node, other_node)) + .position(|other_node| item_matches(this_node, other_node)) { - return Err(MatchFailure::new(format!( - "mismatched set: missing item[{}]", - index - ))); + forward_map.insert(this_pos, other_pos); + reverse_map.insert(other_pos, this_pos); } } - for other_node in other.iter() { - if !this + for (other_pos, other_node) in other.iter().enumerate() { + if reverse_map.contains_key(&other_pos) { + continue; + } + if let Some(this_pos) = this .iter() - .any(|this_node| item_matches(this_node, other_node)) + .position(|this_node| item_matches(this_node, other_node)) { + forward_map.insert(this_pos, other_pos); + reverse_map.insert(other_pos, this_pos); + } + } + (forward_map, reverse_map) +} + +// Returns a formatted mismatch message and an optional pair of mismatched positions if the pair + are the only remaining unmatched items.
+fn format_mismatch_as_set( + this_len: usize, + other_len: usize, + forward_map: &HashMap<usize, usize>, + reverse_map: &HashMap<usize, usize>, +) -> Result<(String, Option<(usize, usize)>), std::fmt::Error> { + let mut ret = String::new(); + let buf = &mut ret; + write!(buf, "- mapping from left to right: [")?; + let mut this_missing_pos = None; + for this_pos in 0..this_len { + if this_pos != 0 { + write!(buf, ", ")?; + } + if let Some(other_pos) = forward_map.get(&this_pos) { + write!(buf, "{}", other_pos)?; + } else { + this_missing_pos = Some(this_pos); + write!(buf, "?")?; + } + } + writeln!(buf, "]")?; + + write!(buf, "- left-over on the right: [")?; + let mut other_missing_count = 0; + let mut other_missing_pos = None; + for other_pos in 0..other_len { + if reverse_map.get(&other_pos).is_none() { + if other_missing_count != 0 { + write!(buf, ", ")?; + } + other_missing_count += 1; + other_missing_pos = Some(other_pos); + write!(buf, "{}", other_pos)?; + } + } + write!(buf, "]")?; + let unmatched_pair = if let (Some(this_missing_pos), Some(other_missing_pos)) = + (this_missing_pos, other_missing_pos) + { + if this_len == 1 + forward_map.len() && other_len == 1 + reverse_map.len() { + // Special case: There is only one missing item on each side. They are supposed to + // match each other.
+ Some((this_missing_pos, other_missing_pos)) + } else { + None + } + } else { + None + }; + Ok((ret, unmatched_pair)) +} + +fn vec_matches_result_as_set<T>( + this: &[T], + other: &[T], + item_matches: impl Fn(&T, &T) -> Result<(), MatchFailure>, +) -> Result<VecMapping, MatchFailure> { + // Set-inclusion test in both directions + // - record forward/reverse mapping from this items <-> other items for reporting mismatches + let (forward_map, reverse_map) = + vec_matches_as_set_with_mapping(this, other, |a, b| item_matches(a, b).is_ok()); + if forward_map.len() == this.len() && reverse_map.len() == other.len() { + Ok((forward_map, reverse_map)) + } else { + // report mismatch + let Ok((message, unmatched_pair)) = + format_mismatch_as_set(this.len(), other.len(), &forward_map, &reverse_map) + else { + // Exception: Unable to format mismatch report => fall back to the most generic message return Err(MatchFailure::new( - "mismatched set: extra item found".to_string(), + "mismatch at vec_matches_result_as_set (failed to format mismatched sets)" .to_string(), )); + }; + if let Some(unmatched_pair) = unmatched_pair { + // found a unique pair to report => use that pair's error message + let Err(err) = item_matches(&this[unmatched_pair.0], &other[unmatched_pair.1]) else { + // Exception: Unable to format unique pair mismatch error => fall back to the overall report + return Err(MatchFailure::new(format!( + "mismatched sets (failed to format unique pair mismatch error):\n{}", + message + ))); + }; + Err(err.add_description(&format!( + "under a sole unmatched pair ({} -> {}) in a set comparison", + unmatched_pair.0, unmatched_pair.1 + ))) + } else { + Err(MatchFailure::new(format!("mismatched sets:\n{}", message))) } } - assert!(vec_matches_as_set(this, other, item_matches)); - Ok(()) } +//================================================================================================== +// PlanNode comparison functions + fn option_to_string(name: Option<impl ToString>) -> String { name.map_or_else(|| "".to_string(), |name|
name.to_string()) } @@ -472,7 +614,7 @@ fn plan_node_matches(this: &PlanNode, other: &PlanNode) -> Result<(), MatchFailu .map_err(|err| err.add_description("under Sequence node"))?; } (PlanNode::Parallel { nodes: this }, PlanNode::Parallel { nodes: other }) => { - vec_matches_result_as_set(this, other, |a, b| plan_node_matches(a, b).is_ok()) + vec_matches_result_as_set(this, other, plan_node_matches) .map_err(|err| err.add_description("under Parallel node"))?; } (PlanNode::Fetch(this), PlanNode::Fetch(other)) => { @@ -495,8 +637,8 @@ fn plan_node_matches(this: &PlanNode, other: &PlanNode) -> Result<(), MatchFailu deferred: other_deferred, }, ) => { - check_match!(defer_primary_node_matches(primary, other_primary)); - check_match!(vec_matches(deferred, other_deferred, deferred_node_matches)); + defer_primary_node_matches(primary, other_primary)?; + vec_matches_result(deferred, other_deferred, deferred_node_matches)?; } ( PlanNode::Subscription { primary, rest }, @@ -537,12 +679,15 @@ fn plan_node_matches(this: &PlanNode, other: &PlanNode) -> Result<(), MatchFailu Ok(()) } -fn defer_primary_node_matches(this: &Primary, other: &Primary) -> bool { +fn defer_primary_node_matches(this: &Primary, other: &Primary) -> Result<(), MatchFailure> { let Primary { subselection, node } = this; - *subselection == other.subselection && opt_plan_node_matches(node, &other.node).is_ok() + opt_document_string_matches(subselection, &other.subselection) + .map_err(|err| err.add_description("under defer primary subselection"))?; + opt_plan_node_matches(node, &other.node) + .map_err(|err| err.add_description("under defer primary plan node")) } -fn deferred_node_matches(this: &DeferredNode, other: &DeferredNode) -> bool { +fn deferred_node_matches(this: &DeferredNode, other: &DeferredNode) -> Result<(), MatchFailure> { let DeferredNode { depends, label, @@ -550,11 +695,14 @@ fn deferred_node_matches(this: &DeferredNode, other: &DeferredNode) -> bool { subselection, node, } = this; - *depends 
== other.depends - && *label == other.label - && *query_path == other.query_path - && *subselection == other.subselection - && opt_plan_node_matches(node, &other.node).is_ok() + + check_match_eq!(*depends, other.depends); + check_match_eq!(*label, other.label); + check_match_eq!(*query_path, other.query_path); + opt_document_string_matches(subselection, &other.subselection) + .map_err(|err| err.add_description("under deferred subselection"))?; + opt_plan_node_matches(node, &other.node) + .map_err(|err| err.add_description("under deferred node")) } fn flatten_node_matches(this: &FlattenNode, other: &FlattenNode) -> Result<(), MatchFailure> { @@ -606,17 +754,22 @@ fn hash_selection_key(selection: &Selection) -> u64 { hash_value(&get_selection_key(selection)) } +// Note: This `Selection` struct is a limited version used for the `requires` field. fn same_selection(x: &Selection, y: &Selection) -> bool { - let x_key = get_selection_key(x); - let y_key = get_selection_key(y); - if x_key != y_key { - return false; - } - let x_selections = x.selection_set(); - let y_selections = y.selection_set(); - match (x_selections, y_selections) { - (Some(x), Some(y)) => same_selection_set_sorted(x, y), - (None, None) => true, + match (x, y) { + (Selection::Field(x), Selection::Field(y)) => { + x.name == y.name + && x.alias == y.alias + && match (&x.selections, &y.selections) { + (Some(x), Some(y)) => same_selection_set_sorted(x, y), + (None, None) => true, + _ => false, + } + } + (Selection::InlineFragment(x), Selection::InlineFragment(y)) => { + x.type_condition == y.type_condition + && same_selection_set_sorted(&x.selections, &y.selections) + } _ => false, } } @@ -637,6 +790,10 @@ fn same_selection_set_sorted(x: &[Selection], y: &[Selection]) -> bool { .all(|(x, y)| same_selection(x, y)) } +fn same_requires(x: &[Selection], y: &[Selection]) -> bool { + vec_matches_as_set(x, y, same_selection) +} + fn same_rewrites(x: &Option>, y: &Option>) -> bool { match (x, y) { (None, None) => 
true, @@ -666,7 +823,6 @@ fn same_ast_document(x: &ast::Document, y: &ast::Document) -> Result<(), MatchFa _ => others.push(def), } } - fragments.sort_by_key(|frag| frag.name.clone()); (operations, fragments, others) } @@ -680,32 +836,49 @@ fn same_ast_document(x: &ast::Document, y: &ast::Document) -> Result<(), MatchFa "Different number of operation definitions" ); + check_match_eq!(x_frags.len(), y_frags.len()); + let mut fragment_map: HashMap = HashMap::new(); + // Assumption: x_frags and y_frags are topologically sorted. + // Thus, we can build the fragment name mapping in a single pass and compare + // fragment definitions using the mapping at the same time, since earlier fragments + // will never reference later fragments. + x_frags.iter().try_fold((), |_, x_frag| { + let y_frag = y_frags + .iter() + .find(|y_frag| same_ast_fragment_definition(x_frag, y_frag, &fragment_map).is_ok()); + if let Some(y_frag) = y_frag { + if x_frag.name != y_frag.name { + // record it only if they are not identical + fragment_map.insert(x_frag.name.clone(), y_frag.name.clone()); + } + Ok(()) + } else { + Err(MatchFailure::new(format!( + "mismatch: no matching fragment definition for {}", + x_frag.name + ))) + } + })?; + check_match_eq!(x_ops.len(), y_ops.len()); x_ops .iter() .zip(y_ops.iter()) .try_fold((), |_, (x_op, y_op)| { - same_ast_operation_definition(x_op, y_op) + same_ast_operation_definition(x_op, y_op, &fragment_map) .map_err(|err| err.add_description("under operation definition")) })?; - check_match_eq!(x_frags.len(), y_frags.len()); - x_frags - .iter() - .zip(y_frags.iter()) - .try_fold((), |_, (x_frag, y_frag)| { - same_ast_fragment_definition(x_frag, y_frag) - .map_err(|err| err.add_description("under fragment definition")) - })?; Ok(()) } fn same_ast_operation_definition( x: &ast::OperationDefinition, y: &ast::OperationDefinition, + fragment_map: &HashMap, ) -> Result<(), MatchFailure> { // Note: Operation names are ignored, since parallel fetches may have 
different names. check_match_eq!(x.operation_type, y.operation_type); - vec_matches_sorted_by( + vec_matches_result_sorted_by( &x.variables, &y.variables, |a, b| a.name.cmp(&b.name), @@ -715,11 +888,64 @@ fn same_ast_operation_definition( check_match_eq!(x.directives, y.directives); check_match!(same_ast_selection_set_sorted( &x.selection_set, - &y.selection_set + &y.selection_set, + fragment_map, )); Ok(()) } +// `x` may be coerced to `y`. +// - `x` should be a value from JS QP. +// - `y` should be a value from Rust QP. +// - Assume: x and y are already checked not equal. +// Due to coercion differences, we need to compare AST values with special cases. +fn ast_value_maybe_coerced_to(x: &ast::Value, y: &ast::Value) -> bool { + match (x, y) { + // Special case 1: JS QP may convert an enum value into a string. + // - In this case, compare them as strings. + (ast::Value::String(ref x), ast::Value::Enum(ref y)) => { + if x == y.as_str() { + return true; + } + } + + // Special case 2: Rust QP expands an object value by filling in its + // default field values. + // - If the Rust QP object value subsumes the JS QP object value, consider it a match. + // - Assuming the Rust QP object value has only default field values. + // - Warning: This is an unsound heuristic. + (ast::Value::Object(ref x), ast::Value::Object(ref y)) => { + if vec_includes_as_set(y, x, |(yy_name, yy_val), (xx_name, xx_val)| { + xx_name == yy_name + && (xx_val == yy_val || ast_value_maybe_coerced_to(xx_val, yy_val)) + }) { + return true; + } + } + + // Special case 3: JS QP may convert a string to an int for custom scalars, while Rust doesn't. + // - Note: This conversion seems a bit difficult to implement in `apollo-federation`'s + // `coerce_value` function, since IntValue's constructor is private to the crate. + (ast::Value::Int(ref x), ast::Value::String(ref y)) => { + if x.as_str() == y { + return true; + } + } + + // Recurse into list items.
+ (ast::Value::List(ref x), ast::Value::List(ref y)) => { + if vec_matches(x, y, |xx, yy| { + xx == yy || ast_value_maybe_coerced_to(xx, yy) + }) { + return true; + } + } + + _ => {} // otherwise, fall through + } + false +} + // Use this function, instead of `VariableDefinition`'s `PartialEq` implementation, // due to known differences. fn same_variable_definition( @@ -730,27 +956,8 @@ fn same_variable_definition( check_match_eq!(x.ty, y.ty); if x.default_value != y.default_value { if let (Some(x), Some(y)) = (&x.default_value, &y.default_value) { - match (x.as_ref(), y.as_ref()) { - // Special case 1: JS QP may convert an enum value into string. - // - In this case, compare them as strings. - (ast::Value::String(ref x), ast::Value::Enum(ref y)) => { - if x == y.as_str() { - return Ok(()); - } - } - - // Special case 2: Rust QP expands an empty object value by filling in its - // default field values. - // - If the JS QP value is an empty object, consider any object is a match. - // - Assuming the Rust QP object value has only default field values. - // - Warning: This is an unsound heuristic. - (ast::Value::Object(ref x), ast::Value::Object(_)) => { - if x.is_empty() { - return Ok(()); - } - } - - _ => {} // otherwise, fall through + if ast_value_maybe_coerced_to(x, y) { + return Ok(()); } } @@ -766,25 +973,41 @@ fn same_variable_definition( fn same_ast_fragment_definition( x: &ast::FragmentDefinition, y: &ast::FragmentDefinition, + fragment_map: &HashMap, ) -> Result<(), MatchFailure> { - check_match_eq!(x.name, y.name); + // Note: Fragment names at definitions are ignored. 
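The coercion rules above (enum serialized as a string, default-filled objects, stringified ints, element-wise lists) can be illustrated with a toy value type. This is a sketch under assumptions: `Val` and `maybe_coerced_to` are hypothetical stand-ins for `ast::Value` and `ast_value_maybe_coerced_to`, and the int case compares via `to_string()` rather than the literal's source text as the real code does:

```rust
// Toy value type standing in for ast::Value (hypothetical, for illustration only).
#[derive(PartialEq)]
enum Val {
    Int(i64),
    Str(String),
    Enum(String),
    List(Vec<Val>),
    Object(Vec<(String, Val)>),
}

// One-directional check: may `x` (JS-side value) be coerced to `y` (Rust-side value)?
fn maybe_coerced_to(x: &Val, y: &Val) -> bool {
    match (x, y) {
        // Enum rendered as a string on the left.
        (Val::Str(x), Val::Enum(y)) => x == y,
        // `y` subsumes `x`; extra fields in `y` are assumed to be filled-in defaults.
        (Val::Object(x), Val::Object(y)) => x.iter().all(|(xn, xv)| {
            y.iter()
                .any(|(yn, yv)| xn == yn && (xv == yv || maybe_coerced_to(xv, yv)))
        }),
        // Custom scalar: int on the left, string on the right.
        (Val::Int(x), Val::Str(y)) => x.to_string() == *y,
        // Lists compare item by item.
        (Val::List(x), Val::List(y)) => {
            x.len() == y.len()
                && x.iter().zip(y).all(|(a, b)| a == b || maybe_coerced_to(a, b))
        }
        _ => false,
    }
}

fn main() {
    assert!(maybe_coerced_to(&Val::Str("A".into()), &Val::Enum("A".into())));
    // The check is deliberately one-directional.
    assert!(!maybe_coerced_to(&Val::Enum("A".into()), &Val::Str("A".into())));
    assert!(maybe_coerced_to(&Val::Int(123), &Val::Str("123".into())));
    // `{}` matches `{f: 1}` under the default-filling heuristic.
    assert!(maybe_coerced_to(
        &Val::Object(vec![]),
        &Val::Object(vec![("f".into(), Val::Int(1))]),
    ));
    println!("ok");
}
```

The one-directional shape mirrors the diff's comment that `x` should come from the JS planner and `y` from the Rust planner; callers fall back to plain equality first and only then try coercion.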
check_match_eq!(x.type_condition, y.type_condition); check_match_eq!(x.directives, y.directives); check_match!(same_ast_selection_set_sorted( &x.selection_set, - &y.selection_set + &y.selection_set, + fragment_map, )); Ok(()) } -fn get_ast_selection_key(selection: &ast::Selection) -> SelectionKey { +fn same_ast_argument_value(x: &ast::Value, y: &ast::Value) -> bool { + x == y || ast_value_maybe_coerced_to(x, y) +} + +fn same_ast_argument(x: &ast::Argument, y: &ast::Argument) -> bool { + x.name == y.name && same_ast_argument_value(&x.value, &y.value) +} + +fn get_ast_selection_key( + selection: &ast::Selection, + fragment_map: &HashMap, +) -> SelectionKey { match selection { ast::Selection::Field(field) => SelectionKey::Field { response_name: field.response_name().clone(), directives: field.directives.clone(), }, ast::Selection::FragmentSpread(fragment) => SelectionKey::FragmentSpread { - fragment_name: fragment.fragment_name.clone(), + fragment_name: fragment_map + .get(&fragment.fragment_name) + .unwrap_or(&fragment.fragment_name) + .clone(), directives: fragment.directives.clone(), }, ast::Selection::InlineFragment(fragment) => SelectionKey::InlineFragment { @@ -794,54 +1017,68 @@ fn get_ast_selection_key(selection: &ast::Selection) -> SelectionKey { } } -use std::ops::Not; - -/// Get the sub-selections of a selection. 
-fn get_ast_selection_set(selection: &ast::Selection) -> Option<&Vec> { - match selection { - ast::Selection::Field(field) => field - .selection_set - .is_empty() - .not() - .then(|| &field.selection_set), - ast::Selection::FragmentSpread(_) => None, - ast::Selection::InlineFragment(fragment) => Some(&fragment.selection_set), - } -} - -fn same_ast_selection(x: &ast::Selection, y: &ast::Selection) -> bool { - let x_key = get_ast_selection_key(x); - let y_key = get_ast_selection_key(y); - if x_key != y_key { - return false; - } - let x_selections = get_ast_selection_set(x); - let y_selections = get_ast_selection_set(y); - match (x_selections, y_selections) { - (Some(x), Some(y)) => same_ast_selection_set_sorted(x, y), - (None, None) => true, +fn same_ast_selection( + x: &ast::Selection, + y: &ast::Selection, + fragment_map: &HashMap, +) -> bool { + match (x, y) { + (ast::Selection::Field(x), ast::Selection::Field(y)) => { + x.name == y.name + && x.alias == y.alias + && vec_matches_sorted_by( + &x.arguments, + &y.arguments, + |a, b| a.name.cmp(&b.name), + |a, b| same_ast_argument(a, b), + ) + && x.directives == y.directives + && same_ast_selection_set_sorted(&x.selection_set, &y.selection_set, fragment_map) + } + (ast::Selection::FragmentSpread(x), ast::Selection::FragmentSpread(y)) => { + let mapped_fragment_name = fragment_map + .get(&x.fragment_name) + .unwrap_or(&x.fragment_name); + *mapped_fragment_name == y.fragment_name && x.directives == y.directives + } + (ast::Selection::InlineFragment(x), ast::Selection::InlineFragment(y)) => { + x.type_condition == y.type_condition + && x.directives == y.directives + && same_ast_selection_set_sorted(&x.selection_set, &y.selection_set, fragment_map) + } _ => false, } } -fn hash_ast_selection_key(selection: &ast::Selection) -> u64 { - hash_value(&get_ast_selection_key(selection)) +fn hash_ast_selection_key(selection: &ast::Selection, fragment_map: &HashMap) -> u64 { + hash_value(&get_ast_selection_key(selection, 
fragment_map)) } -fn same_ast_selection_set_sorted(x: &[ast::Selection], y: &[ast::Selection]) -> bool { - fn sorted_by_selection_key(s: &[ast::Selection]) -> Vec<&ast::Selection> { +// Selections are sorted and compared after renaming x's fragment spreads according to the +// fragment_map. +fn same_ast_selection_set_sorted( + x: &[ast::Selection], + y: &[ast::Selection], + fragment_map: &HashMap<Name, Name>, +) -> bool { + fn sorted_by_selection_key<'a>( + s: &'a [ast::Selection], + fragment_map: &HashMap<Name, Name>, + ) -> Vec<&'a ast::Selection> { let mut sorted: Vec<&ast::Selection> = s.iter().collect(); - sorted.sort_by_key(|x| hash_ast_selection_key(x)); + sorted.sort_by_key(|x| hash_ast_selection_key(x, fragment_map)); sorted } if x.len() != y.len() { return false; } - sorted_by_selection_key(x) + let x_sorted = sorted_by_selection_key(x, fragment_map); // Map fragment spreads + let y_sorted = sorted_by_selection_key(y, &Default::default()); // Don't map fragment spreads + x_sorted .into_iter() - .zip(sorted_by_selection_key(y)) - .all(|(x, y)| same_ast_selection(x, y)) + .zip(y_sorted) + .all(|(x, y)| same_ast_selection(x, y, fragment_map)) } #[cfg(test)] @@ -868,7 +1105,7 @@ mod ast_comparison_tests { } #[test] - fn test_query_variable_decl_object_value_coercion() { + fn test_query_variable_decl_object_value_coercion_empty_case() { // Note: Rust QP expands empty object default values by filling in its default field // values. let op_x = r#"query($qv1: T! = {}) { x(arg1: $qv1) }"#; @@ -879,6 +1116,28 @@ assert!(super::same_ast_document(&ast_x, &ast_y).is_ok()); } + + #[test] + fn test_query_variable_decl_object_value_coercion_non_empty_case() { + // Note: Rust QP expands an object default value by filling in its default field values. + let op_x = r#"query($qv1: T! = {field1: true}) { x(arg1: $qv1) }"#; + let op_y = + r#"query($qv1: T!
= { field1: true, field2: "default_value" }) { x(arg1: $qv1) }"#; + let ast_x = ast::Document::parse(op_x, "op_x").unwrap(); + let ast_y = ast::Document::parse(op_y, "op_y").unwrap(); + assert!(super::same_ast_document(&ast_x, &ast_y).is_ok()); + } + + #[test] + fn test_query_variable_decl_list_of_object_value_coercion() { + // Testing a combination of list and object value coercion. + let op_x = r#"query($qv1: [T!]! = [{}]) { x(arg1: $qv1) }"#; + let op_y = + r#"query($qv1: [T!]! = [{field1: true, field2: "default_value"}]) { x(arg1: $qv1) }"#; + let ast_x = ast::Document::parse(op_x, "op_x").unwrap(); + let ast_y = ast::Document::parse(op_y, "op_y").unwrap(); + assert!(super::same_ast_document(&ast_x, &ast_y).is_ok()); + } + #[test] fn test_entities_selection_order() { let op_x = r#" @@ -913,6 +1172,123 @@ let ast_y = ast::Document::parse(op_y, "op_y").unwrap(); assert!(super::same_ast_document(&ast_x, &ast_y).is_ok()); } + + #[test] + fn test_selection_argument_is_compared() { + let op_x = r#"{ x(arg1: "one") }"#; + let op_y = r#"{ x(arg1: "two") }"#; + let ast_x = ast::Document::parse(op_x, "op_x").unwrap(); + let ast_y = ast::Document::parse(op_y, "op_y").unwrap(); + assert!(super::same_ast_document(&ast_x, &ast_y).is_err()); + } + + #[test] + fn test_selection_argument_order() { + let op_x = r#"{ x(arg1: "one", arg2: "two") }"#; + let op_y = r#"{ x(arg2: "two", arg1: "one") }"#; + let ast_x = ast::Document::parse(op_x, "op_x").unwrap(); + let ast_y = ast::Document::parse(op_y, "op_y").unwrap(); + assert!(super::same_ast_document(&ast_x, &ast_y).is_ok()); + } + + #[test] + fn test_string_to_id_coercion_difference() { + // JS QP coerces strings into integers for the ID type, while Rust QP doesn't. + // This tests the special case in which same_ast_document accepts this difference.
+ let op_x = r#"{ x(id: 123) }"#; + let op_y = r#"{ x(id: "123") }"#; + let ast_x = ast::Document::parse(op_x, "op_x").unwrap(); + let ast_y = ast::Document::parse(op_y, "op_y").unwrap(); + assert!(super::same_ast_document(&ast_x, &ast_y).is_ok()); + } + + #[test] + fn test_fragment_definition_different_names() { + let op_x = r#"{ q { ...f1 ...f2 } } fragment f1 on T { x y } fragment f2 on T { w z }"#; + let op_y = r#"{ q { ...g1 ...g2 } } fragment g1 on T { x y } fragment g2 on T { w z }"#; + let ast_x = ast::Document::parse(op_x, "op_x").unwrap(); + let ast_y = ast::Document::parse(op_y, "op_y").unwrap(); + assert!(super::same_ast_document(&ast_x, &ast_y).is_ok()); + } + + #[test] + fn test_fragment_definition_different_names_nested_1() { + // Nested fragments have the same name, only top-level fragments have different names. + let op_x = r#"{ q { ...f2 } } fragment f1 on T { x y } fragment f2 on T { z ...f1 }"#; + let op_y = r#"{ q { ...g2 } } fragment f1 on T { x y } fragment g2 on T { z ...f1 }"#; + let ast_x = ast::Document::parse(op_x, "op_x").unwrap(); + let ast_y = ast::Document::parse(op_y, "op_y").unwrap(); + assert!(super::same_ast_document(&ast_x, &ast_y).is_ok()); + } + + #[test] + fn test_fragment_definition_different_names_nested_2() { + // Nested fragments have different names. + let op_x = r#"{ q { ...f2 } } fragment f1 on T { x y } fragment f2 on T { z ...f1 }"#; + let op_y = r#"{ q { ...g2 } } fragment g1 on T { x y } fragment g2 on T { z ...g1 }"#; + let ast_x = ast::Document::parse(op_x, "op_x").unwrap(); + let ast_y = ast::Document::parse(op_y, "op_y").unwrap(); + assert!(super::same_ast_document(&ast_x, &ast_y).is_ok()); + } + + #[test] + fn test_fragment_definition_different_names_nested_3() { + // Nested fragments have different names. + // Also, fragment definitions are in different order. 
+ let op_x = r#"{ q { ...f2 ...f3 } } fragment f1 on T { x y } fragment f2 on T { z ...f1 } fragment f3 on T { w } "#; + let op_y = r#"{ q { ...g2 ...g3 } } fragment g1 on T { x y } fragment g2 on T { w } fragment g3 on T { z ...g1 }"#; + let ast_x = ast::Document::parse(op_x, "op_x").unwrap(); + let ast_y = ast::Document::parse(op_y, "op_y").unwrap(); + assert!(super::same_ast_document(&ast_x, &ast_y).is_ok()); + } +} + +#[cfg(test)] +mod qp_selection_comparison_tests { + use serde_json::json; + + use super::*; + + #[test] + fn test_requires_comparison_with_same_selection_key() { + let requires_json = json!([ + { + "kind": "InlineFragment", + "typeCondition": "T", + "selections": [ + { + "kind": "Field", + "name": "id", + }, + ] + }, + { + "kind": "InlineFragment", + "typeCondition": "T", + "selections": [ + { + "kind": "Field", + "name": "id", + }, + { + "kind": "Field", + "name": "job", + } + ] + }, + ]); + + // The only difference between requires1 and requires2 is the order of selections. + // But, their items all have the same SelectionKey. + let requires1: Vec = serde_json::from_value(requires_json).unwrap(); + let requires2: Vec = requires1.iter().rev().cloned().collect(); + + // `same_selection_set_sorted` fails to match, since it doesn't account for + // two items with the same SelectionKey but in different order. + assert!(!same_selection_set_sorted(&requires1, &requires2)); + // `same_requires` should succeed. 
+ assert!(same_requires(&requires1, &requires2)); + } } #[cfg(test)] diff --git a/apollo-router/src/query_planner/execution.rs b/apollo-router/src/query_planner/execution.rs index 1b3e8e0ca0..a17feca5d3 100644 --- a/apollo-router/src/query_planner/execution.rs +++ b/apollo-router/src/query_planner/execution.rs @@ -378,6 +378,8 @@ impl PlanNode { let _ = primary_sender.send((value.clone(), errors.clone())); } else { let _ = primary_sender.send((value.clone(), errors.clone())); + // primary response should be an empty object + value.deep_merge(Value::Object(Default::default())); } } .instrument(tracing::info_span!( @@ -398,11 +400,6 @@ impl PlanNode { let v = parameters .query .variable_value( - parameters - .supergraph_request - .body() - .operation_name - .as_deref(), condition.as_str(), ¶meters.supergraph_request.body().variables, ) diff --git a/apollo-router/src/query_planner/mod.rs b/apollo-router/src/query_planner/mod.rs index c66d5298b3..e4a3f49b39 100644 --- a/apollo-router/src/query_planner/mod.rs +++ b/apollo-router/src/query_planner/mod.rs @@ -15,7 +15,6 @@ pub(crate) mod bridge_query_planner; mod bridge_query_planner_pool; mod caching_query_planner; mod convert; -mod dual_introspection; pub(crate) mod dual_query_planner; mod execution; pub(crate) mod fetch; diff --git a/apollo-router/src/query_planner/plan.rs b/apollo-router/src/query_planner/plan.rs index 447adb7ba7..f0e5763358 100644 --- a/apollo-router/src/query_planner/plan.rs +++ b/apollo-router/src/query_planner/plan.rs @@ -77,25 +77,21 @@ impl QueryPlan { } impl QueryPlan { - pub(crate) fn is_deferred(&self, operation: Option<&str>, variables: &Object) -> bool { - self.root.is_deferred(operation, variables, &self.query) + pub(crate) fn is_deferred(&self, variables: &Object) -> bool { + self.root.is_deferred(variables, &self.query) } - pub(crate) fn is_subscription(&self, operation: Option<&str>) -> bool { - match self.query.operation(operation) { - Some(op) => matches!(op.kind(), 
OperationKind::Subscription), - None => false, - } + pub(crate) fn is_subscription(&self) -> bool { + matches!(self.query.operation.kind(), OperationKind::Subscription) } pub(crate) fn query_hashes( &self, batching_config: Batching, - operation: Option<&str>, variables: &Object, ) -> Result>, CacheResolverError> { self.root - .query_hashes(batching_config, operation, variables, &self.query) + .query_hashes(batching_config, variables, &self.query) } pub(crate) fn estimated_size(&self) -> usize { @@ -180,20 +176,11 @@ impl PlanNode { } } - pub(crate) fn is_deferred( - &self, - operation: Option<&str>, - variables: &Object, - query: &Query, - ) -> bool { + pub(crate) fn is_deferred(&self, variables: &Object, query: &Query) -> bool { match self { - Self::Sequence { nodes } => nodes - .iter() - .any(|n| n.is_deferred(operation, variables, query)), - Self::Parallel { nodes } => nodes - .iter() - .any(|n| n.is_deferred(operation, variables, query)), - Self::Flatten(node) => node.node.is_deferred(operation, variables, query), + Self::Sequence { nodes } => nodes.iter().any(|n| n.is_deferred(variables, query)), + Self::Parallel { nodes } => nodes.iter().any(|n| n.is_deferred(variables, query)), + Self::Flatten(node) => node.node.is_deferred(variables, query), Self::Fetch(..) => false, Self::Defer { .. } => true, Self::Subscription { .. 
} => false, @@ -203,19 +190,19 @@ impl PlanNode { condition, } => { if query - .variable_value(operation, condition.as_str(), variables) + .variable_value(condition.as_str(), variables) .map(|v| *v == Value::Bool(true)) .unwrap_or(true) { // right now ConditionNode is only used with defer, but it might be used // in the future to implement @skip and @include execution if let Some(node) = if_clause { - if node.is_deferred(operation, variables, query) { + if node.is_deferred(variables, query) { return true; } } } else if let Some(node) = else_clause { - if node.is_deferred(operation, variables, query) { + if node.is_deferred(variables, query) { return true; } } @@ -240,7 +227,6 @@ impl PlanNode { pub(crate) fn query_hashes( &self, batching_config: Batching, - operation: Option<&str>, variables: &Object, query: &Query, ) -> Result>, CacheResolverError> { @@ -286,7 +272,7 @@ impl PlanNode { condition, } => { if query - .variable_value(operation, condition.as_str(), variables) + .variable_value(condition.as_str(), variables) .map(|v| *v == Value::Bool(true)) .unwrap_or(true) { diff --git a/apollo-router/src/query_planner/selection.rs b/apollo-router/src/query_planner/selection.rs index 810c1c1ac5..629b678347 100644 --- a/apollo-router/src/query_planner/selection.rs +++ b/apollo-router/src/query_planner/selection.rs @@ -23,17 +23,6 @@ pub(crate) enum Selection { InlineFragment(InlineFragment), } -impl Selection { - pub(crate) fn selection_set(&self) -> Option<&[Selection]> { - match self { - Selection::Field(Field { selections, .. }) => selections.as_deref(), - Selection::InlineFragment(InlineFragment { selections, .. 
}) => { - Some(selections.as_slice()) - } - } - } -} - /// The field that is used #[derive(Debug, Clone, PartialEq, Deserialize, Serialize)] #[serde(rename_all = "camelCase")] diff --git a/apollo-router/src/router_factory.rs b/apollo-router/src/router_factory.rs index ca5870988c..2c99f42645 100644 --- a/apollo-router/src/router_factory.rs +++ b/apollo-router/src/router_factory.rs @@ -181,6 +181,7 @@ impl RouterSuperServiceFactory for YamlRouterFactory { PluginInit::builder() .config(plugin_config.clone()) .supergraph_sdl(schema.raw_sdl.clone()) + .supergraph_schema_id(schema.schema_id.clone()) .supergraph_schema(Arc::new(schema.supergraph_schema().clone())) .notify(configuration.notify.clone()) .build(), @@ -447,7 +448,7 @@ pub(crate) async fn create_http_services( name, configuration, &tls_root_store, - shaping.enable_subgraph_http2(name), + shaping.subgraph_client_config(name), )?; let http_service_factory = HttpClientServiceFactory::new(http_service, plugins.clone()); @@ -532,6 +533,7 @@ pub(crate) async fn add_plugin( factory: &PluginFactory, plugin_config: &Value, schema: Arc, + schema_id: Arc, supergraph_schema: Arc>, subgraph_schemas: Arc>>>, notify: &crate::notification::Notify, @@ -543,6 +545,7 @@ pub(crate) async fn add_plugin( PluginInit::builder() .config(plugin_config.clone()) .supergraph_sdl(schema) + .supergraph_schema_id(schema_id) .supergraph_schema(supergraph_schema) .subgraph_schemas(subgraph_schemas) .notify(notify.clone()) @@ -568,6 +571,7 @@ pub(crate) async fn create_plugins( extra_plugins: Option)>>, ) -> Result { let supergraph_schema = Arc::new(schema.supergraph_schema().clone()); + let supergraph_schema_id = schema.schema_id.clone(); let mut apollo_plugins_config = configuration.apollo_plugins.clone().plugins; let user_plugins_config = configuration.plugins.clone().plugins.unwrap_or_default(); let extra = extra_plugins.unwrap_or_default(); @@ -598,6 +602,7 @@ pub(crate) async fn create_plugins( $factory, &$plugin_config, 
schema.as_string().clone(), + supergraph_schema_id.clone(), supergraph_schema.clone(), subgraph_schemas.clone(), &configuration.notify.clone(), diff --git a/apollo-router/src/services/execution/service.rs b/apollo-router/src/services/execution/service.rs index 27081174b1..642e999511 100644 --- a/apollo-router/src/services/execution/service.rs +++ b/apollo-router/src/services/execution/service.rs @@ -123,13 +123,10 @@ impl ExecutionService { let context = req.context; let ctx = context.clone(); let variables = req.supergraph_request.body().variables.clone(); - let operation_name = req.supergraph_request.body().operation_name.clone(); let (sender, receiver) = mpsc::channel(10); - let is_deferred = req - .query_plan - .is_deferred(operation_name.as_deref(), &variables); - let is_subscription = req.query_plan.is_subscription(operation_name.as_deref()); + let is_deferred = req.query_plan.is_deferred(&variables); + let is_subscription = req.query_plan.is_subscription(); let mut claims = None; if is_deferred { claims = context.get(APOLLO_AUTHENTICATION_JWT_CLAIMS).ok().flatten() @@ -240,7 +237,6 @@ impl ExecutionService { ready(execution_span.in_scope(|| { Self::process_graphql_response( &query, - operation_name.as_deref(), &variables, is_deferred, &schema, @@ -259,7 +255,6 @@ impl ExecutionService { #[allow(clippy::too_many_arguments)] fn process_graphql_response( query: &Arc, - operation_name: Option<&str>, variables: &Object, is_deferred: bool, schema: &Arc, @@ -295,7 +290,7 @@ impl ExecutionService { } let has_next = response.has_next.unwrap_or(true); - let variables_set = query.defer_variables_set(operation_name, variables); + let variables_set = query.defer_variables_set(variables); tracing::debug_span!("format_response").in_scope(|| { let mut paths = Vec::new(); @@ -332,7 +327,6 @@ impl ExecutionService { if let Some(filtered_query) = query.filtered_query.as_ref() { paths = filtered_query.format_response( &mut response, - operation_name, variables.clone(), 
schema.api_schema(), variables_set, @@ -343,7 +337,6 @@ impl ExecutionService { query .format_response( &mut response, - operation_name, variables.clone(), schema.api_schema(), variables_set, @@ -360,7 +353,6 @@ impl ExecutionService { if let (ApolloMetricsReferenceMode::Extended, Some(Value::Object(response_body))) = (metrics_ref_mode, &response.data) { extract_enums_from_response( query.clone(), - operation_name, schema.api_schema(), response_body, &mut referenced_enums, @@ -380,12 +372,9 @@ impl ExecutionService { response.errors.retain(|error| match &error.path { None => true, - Some(error_path) => query.contains_error_path( - operation_name, - &response.label, - error_path, - variables_set, - ), + Some(error_path) => { + query.contains_error_path(&response.label, error_path, variables_set) + } }); response.label = rewrite_defer_label(&response); @@ -433,7 +422,6 @@ impl ExecutionService { Self::split_incremental_response( query, - operation_name, has_next, variables_set, response, @@ -445,7 +433,6 @@ impl ExecutionService { fn split_incremental_response( query: &Arc, - operation_name: Option<&str>, has_next: bool, variables_set: BooleanValues, response: Response, @@ -464,12 +451,8 @@ impl ExecutionService { .filter(|error| match &error.path { None => false, Some(error_path) => { - query.contains_error_path( - operation_name, - &response.label, - error_path, - variables_set, - ) && error_path.starts_with(&path) + query.contains_error_path(&response.label, error_path, variables_set) + && error_path.starts_with(&path) } }) .cloned() diff --git a/apollo-router/src/services/external.rs b/apollo-router/src/services/external.rs index bf3ff9fb9c..c356ddd39c 100644 --- a/apollo-router/src/services/external.rs +++ b/apollo-router/src/services/external.rs @@ -20,6 +20,7 @@ use strum_macros::Display; use tower::BoxError; use tower::Service; +use super::subgraph::SubgraphRequestId; use crate::plugins::telemetry::otel::OpenTelemetrySpanExt; use 
crate::plugins::telemetry::reload::prepare_context; use crate::query_planner::QueryPlan; @@ -102,6 +103,8 @@ pub(crate) struct Externalizable { pub(crate) has_next: Option, #[serde(skip_serializing_if = "Option::is_none")] query_plan: Option>, + #[serde(skip_serializing_if = "Option::is_none")] + pub(crate) subgraph_request_id: Option, } #[buildstructor::buildstructor] @@ -145,6 +148,7 @@ where service_name: None, has_next: None, query_plan: None, + subgraph_request_id: None, } } @@ -184,6 +188,7 @@ where service_name: None, has_next, query_plan: None, + subgraph_request_id: None, } } @@ -224,6 +229,7 @@ where service_name: None, has_next, query_plan, + subgraph_request_id: None, } } @@ -242,6 +248,7 @@ where method: Option, service_name: Option, uri: Option, + subgraph_request_id: Option, ) -> Self { assert!(matches!( stage, @@ -263,6 +270,7 @@ where service_name, has_next: None, query_plan: None, + subgraph_request_id, } } diff --git a/apollo-router/src/services/trust_dns_connector.rs b/apollo-router/src/services/hickory_dns_connector.rs similarity index 51% rename from apollo-router/src/services/trust_dns_connector.rs rename to apollo-router/src/services/hickory_dns_connector.rs index 9855c93e7b..987c6ec52f 100644 --- a/apollo-router/src/services/trust_dns_connector.rs +++ b/apollo-router/src/services/hickory_dns_connector.rs @@ -6,13 +6,17 @@ use std::pin::Pin; use std::task::Context; use std::task::Poll; +use hickory_resolver::config::LookupIpStrategy; +use hickory_resolver::system_conf::read_system_conf; +use hickory_resolver::TokioAsyncResolver; use hyper::client::connect::dns::Name; use hyper::client::HttpConnector; use hyper::service::Service; -use trust_dns_resolver::TokioAsyncResolver; -/// Wrapper around trust-dns-resolver's -/// [`TokioAsyncResolver`](https://docs.rs/trust-dns-resolver/0.23.2/trust_dns_resolver/type.TokioAsyncResolver.html) +use crate::configuration::shared::DnsResolutionStrategy; + +/// Wrapper around hickory-resolver's +/// 
[`TokioAsyncResolver`](https://docs.rs/hickory-resolver/0.24.1/hickory_resolver/type.TokioAsyncResolver.html) /// /// The resolver runs a background Task which manages dns requests. When a new resolver is created, /// the background task is also created, it needs to be spawned on top of an executor before using the client, @@ -21,11 +25,14 @@ use trust_dns_resolver::TokioAsyncResolver; pub(crate) struct AsyncHyperResolver(TokioAsyncResolver); impl AsyncHyperResolver { - /// constructs a new resolver from default configuration, uses the corresponding method of - /// [`TokioAsyncResolver`](https://docs.rs/trust-dns-resolver/0.23.2/trust_dns_resolver/type.TokioAsyncResolver.html#method.new) - pub(crate) fn new_from_system_conf() -> Result { - let resolver = TokioAsyncResolver::tokio_from_system_conf()?; - Ok(Self(resolver)) + /// constructs a new resolver from default configuration, using [read_system_conf](https://docs.rs/hickory-resolver/0.24.1/hickory_resolver/system_conf/fn.read_system_conf.html) + fn new_from_system_conf( + dns_resolution_strategy: DnsResolutionStrategy, + ) -> Result { + let (config, mut options) = read_system_conf()?; + options.ip_strategy = dns_resolution_strategy.into(); + + Ok(Self(TokioAsyncResolver::tokio(config, options))) } } @@ -56,8 +63,22 @@ impl Service for AsyncHyperResolver { } } +impl From for LookupIpStrategy { + fn from(value: DnsResolutionStrategy) -> LookupIpStrategy { + match value { + DnsResolutionStrategy::Ipv4Only => LookupIpStrategy::Ipv4Only, + DnsResolutionStrategy::Ipv6Only => LookupIpStrategy::Ipv6Only, + DnsResolutionStrategy::Ipv4AndIpv6 => LookupIpStrategy::Ipv4AndIpv6, + DnsResolutionStrategy::Ipv6ThenIpv4 => LookupIpStrategy::Ipv6thenIpv4, + DnsResolutionStrategy::Ipv4ThenIpv6 => LookupIpStrategy::Ipv4thenIpv6, + } + } +} + /// A helper function to create an http connector and a dns task with the default configuration -pub(crate) fn new_async_http_connector() -> Result, io::Error> { - let resolver = 
AsyncHyperResolver::new_from_system_conf()?; +pub(crate) fn new_async_http_connector( + dns_resolution_strategy: DnsResolutionStrategy, +) -> Result, io::Error> { + let resolver = AsyncHyperResolver::new_from_system_conf(dns_resolution_strategy)?; Ok(HttpConnector::new_with_resolver(resolver)) } diff --git a/apollo-router/src/services/http.rs b/apollo-router/src/services/http.rs index 7f5d782498..105bb26065 100644 --- a/apollo-router/src/services/http.rs +++ b/apollo-router/src/services/http.rs @@ -47,7 +47,7 @@ impl HttpClientServiceFactory { pub(crate) fn from_config( service: impl Into, configuration: &crate::Configuration, - http2: crate::plugins::traffic_shaping::Http2Config, + client_config: crate::configuration::shared::Client, ) -> Self { use indexmap::IndexMap; @@ -55,7 +55,7 @@ impl HttpClientServiceFactory { service, configuration, &rustls::RootCertStore::empty(), - http2, + client_config, ) .unwrap(); diff --git a/apollo-router/src/services/http/service.rs b/apollo-router/src/services/http/service.rs index cc37fa9083..d629412f0c 100644 --- a/apollo-router/src/services/http/service.rs +++ b/apollo-router/src/services/http/service.rs @@ -43,9 +43,9 @@ use crate::plugins::telemetry::reload::prepare_context; use crate::plugins::telemetry::LOGGING_DISPLAY_BODY; use crate::plugins::telemetry::LOGGING_DISPLAY_HEADERS; use crate::plugins::traffic_shaping::Http2Config; +use crate::services::hickory_dns_connector::new_async_http_connector; +use crate::services::hickory_dns_connector::AsyncHyperResolver; use crate::services::router::body::RouterBody; -use crate::services::trust_dns_connector::new_async_http_connector; -use crate::services::trust_dns_connector::AsyncHyperResolver; use crate::Configuration; use crate::Context; @@ -103,7 +103,7 @@ impl HttpClientService { service: impl Into, configuration: &Configuration, tls_root_store: &RootCertStore, - http2: Http2Config, + client_config: crate::configuration::shared::Client, ) -> Result { let name: String = 
service.into(); let tls_cert_store = configuration @@ -131,15 +131,16 @@ impl HttpClientService { let tls_client_config = generate_tls_client_config(tls_cert_store, client_cert_config)?; - HttpClientService::new(name, http2, tls_client_config) + HttpClientService::new(name, tls_client_config, client_config) } pub(crate) fn new( service: impl Into, - http2: Http2Config, tls_config: ClientConfig, + client_config: crate::configuration::shared::Client, ) -> Result { - let mut http_connector = new_async_http_connector()?; + let mut http_connector = + new_async_http_connector(client_config.dns_resolution_strategy.unwrap_or_default())?; http_connector.set_nodelay(true); http_connector.set_keepalive(Some(std::time::Duration::from_secs(60))); http_connector.enforce_http(false); @@ -149,6 +150,7 @@ impl HttpClientService { .https_or_http() .enable_http1(); + let http2 = client_config.experimental_http2.unwrap_or_default(); let connector = if http2 != Http2Config::Disable { builder.enable_http2().wrap_connector(http_connector) } else { diff --git a/apollo-router/src/services/http/tests.rs b/apollo-router/src/services/http/tests.rs index 892dc30e1b..68bf996939 100644 --- a/apollo-router/src/services/http/tests.rs +++ b/apollo-router/src/services/http/tests.rs @@ -116,7 +116,7 @@ async fn tls_self_signed() { "test", &config, &rustls::RootCertStore::empty(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ) .unwrap(); @@ -173,7 +173,7 @@ async fn tls_custom_root() { "test", &config, &rustls::RootCertStore::empty(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ) .unwrap(); @@ -283,7 +283,7 @@ async fn tls_client_auth() { "test", &config, &rustls::RootCertStore::empty(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ) .unwrap(); @@ -343,11 +343,13 @@ async fn test_subgraph_h2c() { tokio::task::spawn(emulate_h2c_server(listener)); let subgraph_service = HttpClientService::new( "test", - 
Http2Config::Http2Only, rustls::ClientConfig::builder() .with_safe_defaults() .with_native_roots() .with_no_client_auth(), + crate::configuration::shared::Client::builder() + .experimental_http2(Http2Config::Http2Only) + .build(), ) .expect("can create a HttpService"); @@ -419,11 +421,13 @@ async fn test_compressed_request_response_body() { tokio::task::spawn(emulate_subgraph_compressed_response(listener)); let subgraph_service = HttpClientService::new( "test", - Http2Config::Http2Only, rustls::ClientConfig::builder() .with_safe_defaults() .with_native_roots() .with_no_client_auth(), + crate::configuration::shared::Client::builder() + .experimental_http2(Http2Config::Http2Only) + .build(), ) .expect("can create a HttpService"); diff --git a/apollo-router/src/services/layers/allow_only_http_post_mutations.rs b/apollo-router/src/services/layers/allow_only_http_post_mutations.rs index 16179f07bb..6c737ec76e 100644 --- a/apollo-router/src/services/layers/allow_only_http_post_mutations.rs +++ b/apollo-router/src/services/layers/allow_only_http_post_mutations.rs @@ -286,11 +286,10 @@ mod forbid_http_get_mutations_tests { let context = Context::new(); context.extensions().with_lock(|mut lock| { - lock.insert::(Arc::new(ParsedDocumentInner { - ast, - executable: Arc::new(executable), - hash: Default::default(), - })) + lock.insert::( + ParsedDocumentInner::new(ast, Arc::new(executable), None, Default::default()) + .unwrap(), + ) }); SupergraphRequest::fake_builder() diff --git a/apollo-router/src/services/layers/query_analysis.rs b/apollo-router/src/services/layers/query_analysis.rs index d4280df414..ce008c0a0f 100644 --- a/apollo-router/src/services/layers/query_analysis.rs +++ b/apollo-router/src/services/layers/query_analysis.rs @@ -79,7 +79,7 @@ impl QueryAnalysisLayer { &self, query: &str, operation_name: Option<&str>, - ) -> Result<(ParsedDocument, Node), SpecError> { + ) -> Result { let query = query.to_string(); let operation_name = operation_name.map(|o| 
o.to_string()); let schema = self.schema.clone(); @@ -91,14 +91,12 @@ impl QueryAnalysisLayer { task::spawn_blocking(move || { span.in_scope(|| { - let doc = Query::parse_document( + Query::parse_document( &query, operation_name.as_deref(), schema.as_ref(), conf.as_ref(), - )?; - let operation = doc.get_operation(operation_name.as_deref())?.clone(); - Ok((doc, operation)) + ) }) }) .await @@ -161,7 +159,7 @@ impl QueryAnalysisLayer { ); Err(errors) } - Ok((doc, operation)) => { + Ok(doc) => { let context = Context::new(); if self.enable_authorization_directives { @@ -174,9 +172,9 @@ impl QueryAnalysisLayer { } context - .insert(OPERATION_NAME, operation.name.clone()) + .insert(OPERATION_NAME, doc.operation.name.clone()) .expect("cannot insert operation name into context; this is a bug"); - let operation_kind = OperationKind::from(operation.operation_type); + let operation_kind = OperationKind::from(doc.operation.operation_type); context .insert(OPERATION_KIND, operation_kind) .expect("cannot insert operation kind in the context; this is a bug"); @@ -257,25 +255,59 @@ pub(crate) struct ParsedDocumentInner { pub(crate) ast: ast::Document, pub(crate) executable: Arc>, pub(crate) hash: Arc, + pub(crate) operation: Node, + /// `__schema` or `__type` + pub(crate) has_schema_introspection: bool, + /// Non-meta fields explicitly defined in the schema + pub(crate) has_explicit_root_fields: bool, } +#[derive(Debug)] +pub(crate) struct RootFieldKinds {} + impl ParsedDocumentInner { - pub(crate) fn get_operation( - &self, + pub(crate) fn new( + ast: ast::Document, + executable: Arc>, operation_name: Option<&str>, - ) -> Result<&Node, SpecError> { - if let Ok(operation) = self.executable.operations.get(operation_name) { - Ok(operation) - } else if let Some(name) = operation_name { - Err(SpecError::UnknownOperation(name.to_owned())) - } else if self.executable.operations.is_empty() { - // Maybe not reachable? 
- // A valid document is non-empty and has no unused fragments - Err(SpecError::NoOperation) - } else { - debug_assert!(self.executable.operations.len() > 1); - Err(SpecError::MultipleOperationWithoutOperationName) + hash: Arc, + ) -> Result, SpecError> { + let operation = get_operation(&executable, operation_name)?; + let mut has_schema_introspection = false; + let mut has_explicit_root_fields = false; + for field in operation.root_fields(&executable) { + match field.name.as_str() { + "__typename" => {} // turns out we have no conditional on `has_root_typename` + "__schema" | "__type" if operation.is_query() => has_schema_introspection = true, + _ => has_explicit_root_fields = true, + } } + Ok(Arc::new(Self { + ast, + executable, + hash, + operation, + has_schema_introspection, + has_explicit_root_fields, + })) + } +} + +pub(crate) fn get_operation( + executable: &ExecutableDocument, + operation_name: Option<&str>, +) -> Result, SpecError> { + if let Ok(operation) = executable.operations.get(operation_name) { + Ok(operation.clone()) + } else if let Some(name) = operation_name { + Err(SpecError::UnknownOperation(name.to_owned())) + } else if executable.operations.is_empty() { + // Maybe not reachable? 
+ // A valid document is non-empty and has no unused fragments + Err(SpecError::NoOperation) + } else { + debug_assert!(executable.operations.len() > 1); + Err(SpecError::MultipleOperationWithoutOperationName) } } diff --git a/apollo-router/src/services/mod.rs b/apollo-router/src/services/mod.rs index aae20ed4ce..b0c2be83bb 100644 --- a/apollo-router/src/services/mod.rs +++ b/apollo-router/src/services/mod.rs @@ -33,6 +33,7 @@ pub mod execution; pub(crate) mod external; pub(crate) mod fetch; pub(crate) mod fetch_service; +pub(crate) mod hickory_dns_connector; pub(crate) mod http; pub(crate) mod layers; pub(crate) mod new_service; @@ -42,7 +43,6 @@ pub mod subgraph; pub(crate) mod subgraph_service; pub mod supergraph; pub mod transport; -pub(crate) mod trust_dns_connector; impl AsRef for http_ext::Request { fn as_ref(&self) -> &Request { diff --git a/apollo-router/src/services/query_planner.rs b/apollo-router/src/services/query_planner.rs index 2fda97b60c..494bb7e2c7 100644 --- a/apollo-router/src/services/query_planner.rs +++ b/apollo-router/src/services/query_planner.rs @@ -80,6 +80,7 @@ pub(crate) struct Response { pub(crate) enum QueryPlannerContent { Plan { plan: Arc }, Response { response: Box }, + CachedIntrospectionResponse { response: Box }, IntrospectionDisabled, } diff --git a/apollo-router/src/services/subgraph.rs b/apollo-router/src/services/subgraph.rs index 23ebc608e3..987eef24a0 100644 --- a/apollo-router/src/services/subgraph.rs +++ b/apollo-router/src/services/subgraph.rs @@ -1,5 +1,6 @@ #![allow(missing_docs)] // FIXME +use std::fmt::Display; use std::pin::Pin; use std::sync::Arc; @@ -7,6 +8,8 @@ use apollo_compiler::validation::Valid; use http::StatusCode; use http::Version; use multimap::MultiMap; +use serde::Deserialize; +use serde::Serialize; use serde_json_bytes::ByteString; use serde_json_bytes::Map as JsonMap; use serde_json_bytes::Value; @@ -35,6 +38,15 @@ pub type BoxService = tower::util::BoxService; pub type BoxCloneService = 
tower::util::BoxCloneService; pub type ServiceResult = Result; pub(crate) type BoxGqlStream = Pin + Send + Sync>>; +/// unique id for a subgraph request and the related response +#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)] +pub struct SubgraphRequestId(pub String); + +impl Display for SubgraphRequestId { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + write!(f, "{}", self.0) + } +} assert_impl_all!(Request: Send); #[non_exhaustive] @@ -62,6 +74,9 @@ pub struct Request { pub(crate) authorization: Arc, pub(crate) executable_document: Option>>, + + /// unique id for this request + pub(crate) id: SubgraphRequestId, } #[buildstructor::buildstructor] @@ -90,6 +105,7 @@ impl Request { query_hash: Default::default(), authorization: Default::default(), executable_document: None, + id: SubgraphRequestId::new(), } } @@ -154,10 +170,36 @@ impl Clone for Request { query_hash: self.query_hash.clone(), authorization: self.authorization.clone(), executable_document: self.executable_document.clone(), + id: self.id.clone(), } } } +impl SubgraphRequestId { + pub fn new() -> Self { + SubgraphRequestId( + uuid::Uuid::new_v4() + .as_hyphenated() + .encode_lower(&mut uuid::Uuid::encode_buffer()) + .to_string(), + ) + } +} + +impl std::ops::Deref for SubgraphRequestId { + type Target = str; + + fn deref(&self) -> &str { + &self.0 + } +} + +impl Default for SubgraphRequestId { + fn default() -> Self { + Self::new() + } +} + assert_impl_all!(Response: Send); #[derive(Debug)] #[non_exhaustive] @@ -167,6 +209,8 @@ pub struct Response { /// Name of the subgraph, it's an Option to not introduce breaking change pub(crate) subgraph_name: Option, pub context: Context, + /// unique id matching the corresponding field in the request + pub(crate) id: SubgraphRequestId, } #[buildstructor::buildstructor] @@ -179,11 +223,13 @@ impl Response { response: http::Response, context: Context, subgraph_name: String, + id: SubgraphRequestId, ) -> Response { Self 
{ response, context, subgraph_name: Some(subgraph_name), + id, } } @@ -202,6 +248,7 @@ impl Response { context: Context, headers: Option>, subgraph_name: Option, + id: Option, ) -> Response { // Build a response let res = graphql::Response::builder() @@ -220,10 +267,16 @@ impl Response { *response.headers_mut() = headers.unwrap_or_default(); + // Warning: the id argument for this builder is an Option to make this a non-breaking change, + // but this means that if a subgraph response is created explicitly without an id, it will + // be generated here and not match the id from the subgraph request + let id = id.unwrap_or_default(); + Self { response, context, subgraph_name, + id, } } @@ -244,6 +297,7 @@ impl Response { context: Option, headers: Option>, subgraph_name: Option, + id: Option, ) -> Response { Response::new( label, @@ -255,6 +309,7 @@ impl Response { context.unwrap_or_default(), headers, subgraph_name, + id, ) } @@ -276,6 +331,7 @@ impl Response { context: Option, headers: MultiMap, subgraph_name: Option, + id: Option, ) -> Result { Ok(Response::new( label, @@ -287,6 +343,7 @@ impl Response { context.unwrap_or_default(), Some(header_map(headers)?), subgraph_name, + id, )) } @@ -299,6 +356,7 @@ impl Response { status_code: Option, context: Context, subgraph_name: Option, + id: Option, ) -> Result { Ok(Response::new( Default::default(), @@ -310,6 +368,7 @@ impl Response { context, Default::default(), subgraph_name, + id, )) } } diff --git a/apollo-router/src/services/subgraph_service.rs b/apollo-router/src/services/subgraph_service.rs index 2b99f3bd24..9dbb9fb773 100644 --- a/apollo-router/src/services/subgraph_service.rs +++ b/apollo-router/src/services/subgraph_service.rs @@ -44,6 +44,7 @@ use super::http::HttpClientServiceFactory; use super::http::HttpRequest; use super::layers::content_negotiation::GRAPHQL_JSON_RESPONSE_HEADER_VALUE; use super::router::body::RouterBody; +use super::subgraph::SubgraphRequestId; use super::Plugins; use
crate::batching::assemble_batch; use crate::batching::BatchQuery; @@ -494,6 +495,7 @@ async fn call_websocket( subgraph_request, subscription_stream, connection_closed_signal, + id: subgraph_request_id, .. } = request; let subscription_stream_tx = @@ -707,6 +709,7 @@ async fn call_websocket( resp.map(|_| graphql::Response::default()), context, service_name, + subgraph_request_id, )) } @@ -807,7 +810,7 @@ fn http_response_to_graphql_response( pub(crate) async fn process_batch( client_factory: HttpClientServiceFactory, service: String, - mut contexts: Vec, + mut contexts: Vec<(Context, SubgraphRequestId)>, mut request: http::Request, listener_count: usize, ) -> Result, FetchError> { @@ -854,6 +857,7 @@ pub(crate) async fn process_batch( let batch_context = contexts .first() .expect("we have at least one context in the batch") + .0 .clone(); let display_body = batch_context.contains_key(LOGGING_DISPLAY_BODY); let client = client_factory.create(&service); @@ -1014,9 +1018,10 @@ pub(crate) async fn process_batch( .map(|mut http_res| { *http_res.headers_mut() = parts.headers.clone(); // Use the original context for the request to create the response - let context = contexts.pop().expect("we have a context for each response"); + let (context, id) = + contexts.pop().expect("we have a context for each response"); let resp = - SubgraphResponse::new_from_response(http_res, context, subgraph_name); + SubgraphResponse::new_from_response(http_res, context, subgraph_name, id); tracing::debug!("we have a resp: {resp:?}"); resp @@ -1047,6 +1052,8 @@ pub(crate) async fn notify_batch_query( Err(e) => { for tx in senders { // Try to notify all waiters. If we can't notify an individual sender, then log an error + // which, unlike failing to notify on success (see below), contains the entire error + // response.
if let Err(log_error) = tx.send(Err(Box::new(e.clone()))).map_err(|error| { FetchError::SubrequestBatchingError { service: service.clone(), @@ -1076,13 +1083,15 @@ pub(crate) async fn notify_batch_query( // graphql_response, so zip_eq shouldn't panic. // Use the tx to send a graphql_response message to each waiter. for (response, sender) in rs.into_iter().zip_eq(senders) { - if let Err(log_error) = - sender - .send(Ok(response)) - .map_err(|error| FetchError::SubrequestBatchingError { - service: service.to_string(), - reason: format!("tx send failed: {error:?}"), - }) + if let Err(log_error) = sender + .send(Ok(response)) + // If we fail to notify the waiter that our request succeeded, do not log + // out the entire response since this may be substantial and/or contain + // PII data. Simply log that the send failed. + .map_err(|_error| FetchError::SubrequestBatchingError { + service: service.to_string(), + reason: "tx send failed".to_string(), + }) { tracing::error!(service, error=%log_error, "failed to notify sender that batch processing succeeded"); } @@ -1094,7 +1103,12 @@ pub(crate) async fn notify_batch_query( } type BatchInfo = ( - (String, http::Request, Vec, usize), + ( + String, + http::Request, + Vec<(Context, SubgraphRequestId)>, + usize, + ), Vec>>, ); @@ -1222,7 +1236,9 @@ pub(crate) async fn call_single_http( }); let SubgraphRequest { - subgraph_request, .. + subgraph_request, + id: subgraph_request_id, + .. 
} = request; let operation_name = subgraph_request @@ -1353,6 +1369,7 @@ pub(crate) async fn call_single_http( .expect("it won't fail everything is coming from an existing response"), context.clone(), service_name.to_owned(), + subgraph_request_id.clone(), ); should_log = condition.lock().evaluate_response(&subgraph_response); } @@ -1397,6 +1414,7 @@ pub(crate) async fn call_single_http( resp, context, service_name.to_owned(), + subgraph_request_id, )) } @@ -1695,7 +1713,6 @@ mod tests { use crate::plugins::subscription::SubgraphPassthroughMode; use crate::plugins::subscription::SubscriptionModeConfig; use crate::plugins::subscription::SUBSCRIPTION_CALLBACK_HMAC_KEY; - use crate::plugins::traffic_shaping::Http2Config; use crate::protocols::websocket::ClientMessage; use crate::protocols::websocket::ServerMessage; use crate::protocols::websocket::WebSocketProtocol; @@ -2372,7 +2389,7 @@ mod tests { HttpClientServiceFactory::from_config( "testbis", &Configuration::default(), - Http2Config::Disable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2416,7 +2433,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2450,7 +2467,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2485,7 +2502,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2523,7 +2540,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create 
a SubgraphService"); @@ -2562,7 +2579,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2605,7 +2622,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2646,7 +2663,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2699,7 +2716,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2743,7 +2760,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2785,7 +2802,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2823,7 +2840,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2861,7 +2878,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2898,7 +2915,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + 
crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2935,7 +2952,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -2981,7 +2998,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -3025,7 +3042,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -3066,7 +3083,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -3107,7 +3124,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); @@ -3148,7 +3165,7 @@ mod tests { HttpClientServiceFactory::from_config( "test", &Configuration::default(), - Http2Config::Enable, + crate::configuration::shared::Client::default(), ), ) .expect("can create a SubgraphService"); diff --git a/apollo-router/src/services/supergraph/service.rs b/apollo-router/src/services/supergraph/service.rs index 8438e6a741..db58202caf 100644 --- a/apollo-router/src/services/supergraph/service.rs +++ b/apollo-router/src/services/supergraph/service.rs @@ -221,7 +221,8 @@ async fn service_call( } match content { - Some(QueryPlannerContent::Response { response }) => Ok( + Some(QueryPlannerContent::Response { response }) + | Some(QueryPlannerContent::CachedIntrospectionResponse { response }) => Ok( 
SupergraphResponse::new_from_graphql_response(*response, context), ), Some(QueryPlannerContent::IntrospectionDisabled) => { @@ -244,9 +245,8 @@ async fn service_call( let _ = lock.insert::>(query_metrics); }); - let operation_name = body.operation_name.clone(); - let is_deferred = plan.is_deferred(operation_name.as_deref(), &variables); - let is_subscription = plan.is_subscription(operation_name.as_deref()); + let is_deferred = plan.is_deferred(&variables); + let is_subscription = plan.is_subscription(); if let Some(batching) = context .extensions() @@ -277,8 +277,7 @@ async fn service_call( .extensions() .with_lock(|lock| lock.get::().cloned()); if let Some(batch_query) = batch_query_opt { - let query_hashes = - plan.query_hashes(batching, operation_name.as_deref(), &variables)?; + let query_hashes = plan.query_hashes(batching, &variables)?; batch_query .set_query_hashes(query_hashes) .await diff --git a/apollo-router/src/services/supergraph/tests.rs b/apollo-router/src/services/supergraph/tests.rs index 7a3d6765a1..b62b83737b 100644 --- a/apollo-router/src/services/supergraph/tests.rs +++ b/apollo-router/src/services/supergraph/tests.rs @@ -493,15 +493,17 @@ async fn errors_from_primary_on_deferred_responses() { let schema = r#" schema @link(url: "https://specs.apollo.dev/link/v1.0") - @link(url: "https://specs.apollo.dev/join/v0.2", for: EXECUTION) + @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) { query: Query } + directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE directive @join__field(graph: join__Graph!, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION directive @join__graph(name: String!, url: String!) on ENUM_VALUE directive @join__implements(graph: join__Graph!, interface: String!) 
repeatable on OBJECT | INTERFACE directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + directive @join__unionMember(graph: join__Graph!, member: String!) repeatable on UNION directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA scalar link__Import @@ -1101,12 +1103,12 @@ async fn subscription_without_header() { async fn root_typename_with_defer_and_empty_first_response() { let subgraphs = MockedSubgraphs([ ("user", MockSubgraph::builder().with_json( - serde_json::json!{{"query":"{currentUser{activeOrganization{__typename id}}}"}}, + serde_json::json!{{"query":"{... on Query{currentUser{activeOrganization{__typename id}}}}"}}, serde_json::json!{{"data": {"currentUser": { "activeOrganization": { "__typename": "Organization", "id": "0" } }}}} ).build()), ("orga", MockSubgraph::builder().with_json( serde_json::json!{{ - "query":"query($representations:[_Any!]!){_entities(representations:$representations){...on Organization{suborga{__typename id}}}}", + "query":"query($representations:[_Any!]!){_entities(representations:$representations){...on Organization{suborga{id name}}}}", "variables": { "representations":[{"__typename": "Organization", "id":"0"}] } @@ -1115,33 +1117,11 @@ async fn root_typename_with_defer_and_empty_first_response() { "data": { "_entities": [{ "suborga": [ { "__typename": "Organization", "id": "1"}, - { "__typename": "Organization", "id": "2"}, + { "__typename": "Organization", "id": "2", "name": "A"}, { "__typename": "Organization", "id": "3"}, ] }] }, - }} - ) - .with_json( - serde_json::json!{{ - "query":"query($representations:[_Any!]!){_entities(representations:$representations){...on Organization{name}}}", - "variables": { - "representations":[ - {"__typename": "Organization", "id":"1"}, - {"__typename": "Organization", "id":"2"}, - {"__typename": 
"Organization", "id":"3"} - - ] - } - }}, - serde_json::json!{{ - "data": { - "_entities": [ - { "__typename": "Organization", "id": "1"}, - { "__typename": "Organization", "id": "2", "name": "A"}, - { "__typename": "Organization", "id": "3"}, - ] - } - }} + }} ).build()) ].into_iter().collect()); @@ -1154,23 +1134,77 @@ async fn root_typename_with_defer_and_empty_first_response() { .await .unwrap(); + let query = r#" + query { + ...OnlyTypename + ... @defer { + currentUser { + activeOrganization { + id + suborga { + id + name + } + } + } + } + } + + fragment OnlyTypename on Query { + __typename + } + "#; let request = supergraph::Request::fake_builder() - .context(defer_context()) - .query( - "query { __typename ... @defer { currentUser { activeOrganization { id suborga { id name } } } } }", - ) - .build() - .unwrap(); + .context(defer_context()) + .query(query) + .build() + .unwrap(); let mut stream = service.oneshot(request).await.unwrap(); let res = stream.next_response().await.unwrap(); - assert_eq!( - res.data.as_ref().unwrap().get("__typename"), - Some(&serde_json_bytes::Value::String("Query".into())) - ); + + insta::assert_json_snapshot!(res, @r###" + { + "data": { + "__typename": "Query" + }, + "hasNext": true + } + "###); // Must have 2 chunks - let _ = stream.next_response().await.unwrap(); + let res = stream.next_response().await.unwrap(); + insta::assert_json_snapshot!(res, @r###" + { + "hasNext": false, + "incremental": [ + { + "data": { + "currentUser": { + "activeOrganization": { + "id": "0", + "suborga": [ + { + "id": "1", + "name": null + }, + { + "id": "2", + "name": "A" + }, + { + "id": "3", + "name": null + } + ] + } + } + }, + "path": [] + } + ] + } + "###); } #[tokio::test] @@ -1246,6 +1280,8 @@ async fn query_reconstruction() { directive @inaccessible on FIELD_DEFINITION | OBJECT | INTERFACE | UNION | ARGUMENT_DEFINITION | SCALAR | ENUM | ENUM_VALUE | INPUT_OBJECT | INPUT_FIELD_DEFINITION + directive @join__enumValue(graph: join__Graph!) 
repeatable on ENUM_VALUE + directive @join__field(graph: join__Graph!, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION directive @join__graph(name: String!, url: String!) on ENUM_VALUE @@ -1254,6 +1290,8 @@ async fn query_reconstruction() { directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + directive @join__unionMember(graph: join__Graph!, member: String!) repeatable on UNION + directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA directive @tag(name: String!) repeatable on FIELD_DEFINITION | OBJECT | INTERFACE | UNION | ARGUMENT_DEFINITION | SCALAR | ENUM | ENUM_VALUE | INPUT_OBJECT | INPUT_FIELD_DEFINITION @@ -1471,10 +1509,12 @@ async fn reconstruct_deferred_query_under_interface() { } directive @inaccessible on FIELD_DEFINITION | OBJECT | INTERFACE | UNION | ARGUMENT_DEFINITION | SCALAR | ENUM | ENUM_VALUE | INPUT_OBJECT | INPUT_FIELD_DEFINITION + directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE directive @join__field(graph: join__Graph!, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION directive @join__graph(name: String!, url: String!) on ENUM_VALUE directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + directive @join__unionMember(graph: join__Graph!, member: String!) 
repeatable on UNION directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA directive @tag(name: String!) repeatable on FIELD_DEFINITION | OBJECT | INTERFACE | UNION | ARGUMENT_DEFINITION | SCALAR | ENUM | ENUM_VALUE | INPUT_OBJECT | INPUT_FIELD_DEFINITION @@ -2309,7 +2349,7 @@ async fn errors_on_nullified_paths() { let schema = r#" schema @link(url: "https://specs.apollo.dev/link/v1.0") - @link(url: "https://specs.apollo.dev/join/v0.1", for: EXECUTION) + @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) { query: Query } @@ -2492,10 +2532,12 @@ async fn no_typename_on_interface() { query: Query } directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION directive @join__graph(name: String!, url: String!) on ENUM_VALUE directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true, isInterfaceObject: Boolean! = false) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + directive @join__unionMember(graph: join__Graph!, member: String!) repeatable on UNION interface Animal @join__type(graph: ANIMAL) { id: String! @@ -2662,6 +2704,7 @@ async fn aliased_typename_on_fragments() { query: Query } directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + directive @join__enumValue(graph: join__Graph!) 
repeatable on ENUM_VALUE directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION directive @join__graph(name: String!, url: String!) on ENUM_VALUE directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE @@ -2676,7 +2719,7 @@ async fn aliased_typename_on_fragments() { } scalar join__FieldSet - interface Animal + interface Animal @join__type(graph: ANIMAL) { id: String! @@ -3003,10 +3046,12 @@ async fn interface_object_typename() { query: Query } + directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION directive @join__graph(name: String!, url: String!) on ENUM_VALUE directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true, isInterfaceObject: Boolean! = false) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + directive @join__unionMember(graph: join__Graph!, member: String!) repeatable on UNION directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA directive @owner( @@ -3189,9 +3234,11 @@ async fn fragment_reuse() { query: Query } directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + directive @join__enumValue(graph: join__Graph!) 
repeatable on ENUM_VALUE directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION directive @join__graph(name: String!, url: String!) on ENUM_VALUE directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true, isInterfaceObject: Boolean! = false) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + directive @join__unionMember(graph: join__Graph!, member: String!) repeatable on UNION directive @join__implements( graph: join__Graph! interface: String!) repeatable on OBJECT | INTERFACE scalar link__Import @@ -3207,7 +3254,7 @@ async fn fragment_reuse() { ORGA @join__graph(name: "orga", url: "http://localhost:4002/graphql") } - type Query + type Query @join__type(graph: ORGA) @join__type(graph: USER) { @@ -3219,7 +3266,7 @@ async fn fragment_reuse() { @join__type(graph: USER, key: "id") { id: ID! - name: String + name: String organizations: [Organization] @join__field(graph: ORGA) } type Organization @@ -3294,6 +3341,7 @@ async fn abstract_types_in_requires() { query: Query } + directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION directive @join__graph(name: String!, url: String!) on ENUM_VALUE directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE @@ -3447,10 +3495,12 @@ const ENUM_SCHEMA: &str = r#"schema query: Query } directive @link(url: String, as: String, for: link__Purpose, import: [link__Import]) repeatable on SCHEMA + directive @join__enumValue(graph: join__Graph!) 
repeatable on ENUM_VALUE directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet, type: String, external: Boolean, override: String, usedOverridden: Boolean) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION directive @join__graph(name: String!, url: String!) on ENUM_VALUE directive @join__implements(graph: join__Graph!, interface: String!) repeatable on OBJECT | INTERFACE directive @join__type(graph: join__Graph!, key: join__FieldSet, extension: Boolean! = false, resolvable: Boolean! = true, isInterfaceObject: Boolean! = false) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR + directive @join__unionMember(graph: join__Graph!, member: String!) repeatable on UNION scalar link__Import diff --git a/apollo-router/src/snapshots/apollo_router__batching__tests__it_matches_subgraph_request_ids_to_responses.snap b/apollo-router/src/snapshots/apollo_router__batching__tests__it_matches_subgraph_request_ids_to_responses.snap new file mode 100644 index 0000000000..086b0189a6 --- /dev/null +++ b/apollo-router/src/snapshots/apollo_router__batching__tests__it_matches_subgraph_request_ids_to_responses.snap @@ -0,0 +1,27 @@ +--- +source: apollo-router/src/batching.rs +expression: response +--- +[ + { + "data": { + "entryA": { + "index": 0 + } + } + }, + { + "data": { + "entryA": { + "index": 1 + } + } + }, + { + "data": { + "entryA": { + "index": 2 + } + } + } +] diff --git a/apollo-router/src/spec/query.rs b/apollo-router/src/spec/query.rs index 21f2914035..3ad855870a 100644 --- a/apollo-router/src/spec/query.rs +++ b/apollo-router/src/spec/query.rs @@ -21,6 +21,7 @@ use self::change::QueryHashVisitor; use self::subselections::BooleanValues; use self::subselections::SubSelectionKey; use self::subselections::SubSelectionValue; +use super::Fragment; use crate::error::FetchError; use crate::graphql::Error; use crate::graphql::Request; @@ -32,6 +33,7 @@ use crate::json_ext::Value; use 
crate::plugins::authorization::UnauthorizedPaths; use crate::query_planner::fetch::OperationKind; use crate::query_planner::fetch::QueryHash; +use crate::services::layers::query_analysis::get_operation; use crate::services::layers::query_analysis::ParsedDocument; use crate::services::layers::query_analysis::ParsedDocumentInner; use crate::spec::schema::ApiSchema; @@ -58,7 +60,7 @@ pub(crate) struct Query { #[derivative(PartialEq = "ignore", Hash = "ignore")] pub(crate) fragments: Fragments, #[derivative(PartialEq = "ignore", Hash = "ignore")] - pub(crate) operations: Vec<Operation>, + pub(crate) operation: Operation, #[derivative(PartialEq = "ignore", Hash = "ignore")] pub(crate) subselections: HashMap<SubSelectionKey, SubSelectionValue>, #[derivative(PartialEq = "ignore", Hash = "ignore")] @@ -99,7 +101,7 @@ impl Query { fragments: Fragments { map: HashMap::new(), }, - operations: Vec::new(), + operation: Operation::empty(), subselections: HashMap::new(), unauthorized: UnauthorizedPaths::default(), filtered_query: None, @@ -121,14 +123,12 @@ impl Query { pub(crate) fn format_response( &self, response: &mut Response, - operation_name: Option<&str>, variables: Object, schema: &ApiSchema, defer_conditions: BooleanValues, ) -> Vec<Path> { let data = std::mem::take(&mut response.data); - let original_operation = self.operation(operation_name); match data { Some(Value::Object(mut input)) => { if self.is_deferred(defer_conditions) { @@ -146,18 +146,6 @@ impl Query { errors: Vec::new(), nullified: Vec::new(), }; - // Detect if root __typename is asked in the original query (the qp doesn't put root __typename in subselections) - // cf https://github.com/apollographql/router/issues/1677 - let operation_kind_if_root_typename = - original_operation.and_then(|op| { - op.selection_set - .iter() - .any(|f| f.is_typename_field()) - .then(|| *op.kind()) - }); - if let Some(operation_kind) = operation_kind_if_root_typename { - output.insert(TYPENAME, operation_kind.default_type_name().into()); - } response.data = Some( match
self.apply_root_selection_set( @@ -186,13 +174,13 @@ impl Query { return vec![]; } } - } else if let Some(operation) = original_operation { - let mut output = Object::with_capacity(operation.selection_set.len()); + } else { + let mut output = Object::with_capacity(self.operation.selection_set.len()); - let all_variables = if operation.variables.is_empty() { + let all_variables = if self.operation.variables.is_empty() { variables } else { - operation + self.operation .variables .iter() .filter_map(|(k, Variable { default_value, .. })| { @@ -204,9 +192,9 @@ impl Query { }; let operation_type_name = schema - .root_operation(operation.kind.into()) + .root_operation(self.operation.kind.into()) .map(|name| name.as_str()) - .unwrap_or(operation.kind.default_type_name()); + .unwrap_or(self.operation.kind.default_type_name()); let mut parameters = FormatParameters { variables: &all_variables, schema, @@ -217,7 +205,7 @@ impl Query { response.data = Some( match self.apply_root_selection_set( operation_type_name, - &operation.selection_set, + &self.operation.selection_set, &mut parameters, &mut input, &mut output, @@ -234,28 +222,10 @@ impl Query { } return parameters.nullified; - } else { - failfast_debug!("can't find operation for {:?}", operation_name); } } Some(Value::Null) => { - // Detect if root __typename is asked in the original query (the qp doesn't put root __typename in subselections) - // cf https://github.com/apollographql/router/issues/1677 - let operation_kind_if_root_typename = original_operation.and_then(|op| { - op.selection_set - .iter() - .any(|f| f.is_typename_field()) - .then(|| *op.kind()) - }); - response.data = match operation_kind_if_root_typename { - Some(operation_kind) => { - let mut output = Object::default(); - output.insert(TYPENAME, operation_kind.default_type_name().into()); - Some(output.into()) - } - None => Some(Value::default()), - }; - + response.data = Some(Value::Null); return vec![]; } _ => { @@ -263,7 +233,7 @@ impl Query { } 
} - response.data = Some(Value::default()); + response.data = Some(Value::Null); vec![] } @@ -304,11 +274,12 @@ impl Query { ) .map_err(|e| SpecError::QueryHashing(e.to_string()))?; - Ok(Arc::new(ParsedDocumentInner { + ParsedDocumentInner::new( ast, - executable: Arc::new(executable_document), - hash: Arc::new(QueryHash(hash)), - })) + Arc::new(executable_document), + operation_name, + Arc::new(QueryHash(hash)), + ) } #[cfg(test)] @@ -321,13 +292,13 @@ impl Query { let query = query.into(); let doc = Self::parse_document(&query, operation_name, schema, configuration)?; - let (fragments, operations, defer_stats, schema_aware_hash) = + let (fragments, operation, defer_stats, schema_aware_hash) = Self::extract_query_information(schema, &doc.executable, operation_name)?; Ok(Query { string: query, fragments, - operations, + operation, subselections: HashMap::new(), unauthorized: UnauthorizedPaths::default(), filtered_query: None, @@ -342,18 +313,15 @@ impl Query { schema: &Schema, document: &ExecutableDocument, operation_name: Option<&str>, - ) -> Result<(Fragments, Vec<Operation>, DeferStats, Vec<u8>), SpecError> { + ) -> Result<(Fragments, Operation, DeferStats, Vec<u8>), SpecError> { let mut defer_stats = DeferStats { has_defer: false, has_unconditional_defer: false, conditional_defer_variable_names: IndexSet::default(), }; let fragments = Fragments::from_hir(document, schema, &mut defer_stats)?; - let operations = document - .operations - .iter() - .map(|operation| Operation::from_hir(operation, schema, &mut defer_stats, &fragments)) - .collect::<Result<Vec<Operation>, SpecError>>()?; + let operation = get_operation(document, operation_name)?; + let operation = Operation::from_hir(&operation, schema, &mut defer_stats, &fragments)?; let mut visitor = QueryHashVisitor::new(schema.supergraph_schema(), &schema.raw_sdl, document); @@ -362,7 +330,7 @@ impl Query { })?; let hash = visitor.finish(); - Ok((fragments, operations, defer_stats, hash)) + Ok((fragments, operation, defer_stats, hash)) }
#[allow(clippy::too_many_arguments)] @@ -571,19 +539,13 @@ impl Query { .and_then(|s| apollo_compiler::ast::NamedType::new(s).ok()) .map(apollo_compiler::ast::Type::Named); - let current_type = if parameters - .schema - .get_interface(field_type.inner_named_type()) - .is_some() - || parameters - .schema - .get_union(field_type.inner_named_type()) - .is_some() - { - typename.as_ref().unwrap_or(field_type) - } else { - field_type - }; + let current_type = + match parameters.schema.types.get(field_type.inner_named_type()) { + Some(ExtendedType::Interface(..) | ExtendedType::Union(..)) => { + typename.as_ref().unwrap_or(field_type) + } + _ => field_type, + }; if self .apply_selection_set( @@ -640,21 +602,18 @@ impl Query { } if name.as_str() == TYPENAME { - let input_value = input - .get(field_name.as_str()) - .cloned() - .filter(|v| v.is_string()) - .unwrap_or_else(|| { - Value::String(ByteString::from( - current_type.inner_named_type().as_str().to_owned(), - )) + let object_type = parameters + .schema + .get_object(current_type.inner_named_type()) + .or_else(|| { + let input_value = input.get(field_name.as_str())?.as_str()?; + parameters.schema.get_object(input_value) }); - if let Some(input_str) = input_value.as_str() { - if parameters.schema.get_object(input_str).is_some() { - output.insert((*field_name).clone(), input_value); - } else { - return Err(InvalidValue); - } + + if let Some(object_type) = object_type { + output.insert((*field_name).clone(), object_type.name.as_str().into()); + } else { + return Err(InvalidValue); } continue; } @@ -751,11 +710,15 @@ impl Query { continue; } - if let Some(fragment) = self.fragments.get(name) { + if let Some(Fragment { + type_condition, + selection_set, + }) = self.fragments.get(name) + { let is_apply = current_type.inner_named_type().as_str() - == fragment.type_condition.as_str() + == type_condition.as_str() || parameters.schema.is_subtype( - &fragment.type_condition, + type_condition, 
current_type.inner_named_type().as_str(), ); @@ -768,7 +731,7 @@ impl Query { } self.apply_selection_set( - &fragment.selection_set, + selection_set, parameters, input, output, @@ -811,7 +774,12 @@ impl Query { let field_name = alias.as_ref().unwrap_or(name); let field_name_str = field_name.as_str(); - if let Some(input_value) = input.get_mut(field_name_str) { + + if name.as_str() == TYPENAME { + if !output.contains_key(field_name_str) { + output.insert(field_name.clone(), Value::String(root_type_name.into())); + } + } else if let Some(input_value) = input.get_mut(field_name_str) { // if there's already a value for that key in the output it means either: // - the value is a scalar and was moved into output using take(), replacing // the input value with Null @@ -837,10 +805,6 @@ impl Query { ); path.pop(); res? - } else if name.as_str() == TYPENAME { - if !output.contains_key(field_name_str) { - output.insert(field_name.clone(), Value::String(root_type_name.into())); - } } else if field_type.is_non_null() { parameters.errors.push(Error { message: format!( @@ -861,25 +825,25 @@ impl Query { include_skip, .. 
} => { - // top level objects will not provide a __typename field - if type_condition.as_str() != root_type_name { - return Err(InvalidValue); - } - if include_skip.should_skip(parameters.variables) { continue; } - self.apply_selection_set( - selection_set, - parameters, - input, - output, - path, - // FIXME: use `ast::Name` everywhere so fallible conversion isnโ€™t needed - #[allow(clippy::unwrap_used)] - &FieldType::new_named(type_condition.try_into().unwrap()).0, - )?; + // check if the fragment matches the input type directly, and if not, check if the + // input type is a subtype of the fragment's type condition (interface, union) + let is_apply = (root_type_name == type_condition.as_str()) + || parameters.schema.is_subtype(type_condition, root_type_name); + + if is_apply { + self.apply_root_selection_set( + root_type_name, + selection_set, + parameters, + input, + output, + path, + )?; + } } Selection::FragmentSpread { name, @@ -892,30 +856,26 @@ impl Query { continue; } - if let Some(fragment) = self.fragments.get(name) { - let is_apply = { - // check if the fragment matches the input type directly, and if not, check if the - // input type is a subtype of the fragment's type condition (interface, union) - root_type_name == fragment.type_condition.as_str() - || parameters - .schema - .is_subtype(&fragment.type_condition, root_type_name) - }; + if let Some(Fragment { + type_condition, + selection_set, + }) = self.fragments.get(name) + { + // check if the fragment matches the input type directly, and if not, check if the + // input type is a subtype of the fragment's type condition (interface, union) + let is_apply = (root_type_name == type_condition.as_str()) + || parameters.schema.is_subtype(type_condition, root_type_name); - if !is_apply { - return Err(InvalidValue); + if is_apply { + self.apply_root_selection_set( + root_type_name, + selection_set, + parameters, + input, + output, + path, + )?; } - - self.apply_selection_set( - &fragment.selection_set, - 
parameters, - input, - output, - path, - // FIXME: use `ast::Name` everywhere so fallible conversion isnโ€™t needed - #[allow(clippy::unwrap_used)] - &FieldType::new_named(root_type_name.try_into().unwrap()).0, - )?; } else { // the fragment should have been already checked with the schema failfast_debug!("missing fragment named: {}", name); @@ -934,19 +894,13 @@ impl Query { request: &Request, schema: &Schema, ) -> Result<(), Response> { - let operation_name = request.operation_name.as_deref(); - let operation_variable_types = - self.operations - .iter() - .fold(HashMap::new(), |mut acc, operation| { - if operation_name.is_none() || operation.name.as_deref() == operation_name { - acc.extend(operation.variables.iter().map(|(k, v)| (k.as_str(), v))) - } - acc - }); - if LevelFilter::current() >= LevelFilter::DEBUG { - let known_variables = operation_variable_types.keys().cloned().collect(); + let known_variables = self + .operation + .variables + .keys() + .map(|k| k.as_str()) + .collect(); let provided_variables = request .variables .keys() @@ -963,7 +917,9 @@ impl Query { } } - let errors = operation_variable_types + let errors = self + .operation + .variables .iter() .filter_map( |( @@ -975,12 +931,12 @@ impl Query { )| { let value = request .variables - .get(*name) + .get(name.as_str()) .or(default_value.as_ref()) .unwrap_or(&Value::Null); ty.validate_input_value(value, schema).err().map(|_| { FetchError::ValidationInvalidTypeVariable { - name: name.to_string(), + name: name.as_str().to_string(), } .to_graphql_error(None) }) @@ -997,47 +953,23 @@ impl Query { pub(crate) fn variable_value<'a>( &'a self, - operation_name: Option<&str>, variable_name: &str, variables: &'a Object, ) -> Option<&'a Value> { variables .get(variable_name) - .or_else(|| self.default_variable_value(operation_name, variable_name)) + .or_else(|| self.default_variable_value(variable_name)) } - pub(crate) fn default_variable_value( - &self, - operation_name: Option<&str>, - variable_name: 
&str, - ) -> Option<&Value> { - self.operation(operation_name).and_then(|op| { - op.variables - .get(variable_name) - .and_then(|Variable { default_value, .. }| default_value.as_ref()) - }) - } - - pub(crate) fn operation(&self, operation_name: Option>) -> Option<&Operation> { - match operation_name { - Some(name) => self - .operations - .iter() - // we should have an error if the only operation is anonymous but the query specifies a name - .find(|op| { - if let Some(op_name) = op.name.as_deref() { - op_name == name.as_ref() - } else { - false - } - }), - None => self.operations.first(), - } + pub(crate) fn default_variable_value(&self, variable_name: &str) -> Option<&Value> { + self.operation + .variables + .get(variable_name) + .and_then(|Variable { default_value, .. }| default_value.as_ref()) } pub(crate) fn contains_error_path( &self, - operation_name: Option<&str>, label: &Option, path: &Path, defer_conditions: BooleanValues, @@ -1047,21 +979,14 @@ impl Query { defer_conditions, }) { Some(subselection) => &subselection.selection_set, - None => match self.operation(operation_name) { - None => return false, - Some(op) => &op.selection_set, - }, + None => &self.operation.selection_set, }; selection_set .iter() .any(|selection| selection.contains_error_path(&path.0, &self.fragments)) } - pub(crate) fn defer_variables_set( - &self, - operation_name: Option<&str>, - variables: &Object, - ) -> BooleanValues { + pub(crate) fn defer_variables_set(&self, variables: &Object) -> BooleanValues { let mut bits = 0_u32; for (i, variable) in self .defer_stats @@ -1071,7 +996,7 @@ impl Query { { let value = variables .get(variable.as_str()) - .or_else(|| self.default_variable_value(operation_name, variable)); + .or_else(|| self.default_variable_value(variable)); if matches!(value, Some(serde_json_bytes::Value::Bool(true))) { bits |= 1 << i; @@ -1110,6 +1035,16 @@ pub(crate) struct Variable { } impl Operation { + fn empty() -> Self { + Self { + name: None, + kind: 
OperationKind::Query, + type_name: "".into(), + selection_set: Vec::new(), + variables: HashMap::new(), + } + } + pub(crate) fn from_hir( operation: &executable::Operation, schema: &Schema, diff --git a/apollo-router/src/spec/query/subselections.rs b/apollo-router/src/spec/query/subselections.rs index d14146010e..2ec203144e 100644 --- a/apollo-router/src/spec/query/subselections.rs +++ b/apollo-router/src/spec/query/subselections.rs @@ -100,7 +100,7 @@ const MAX_DEFER_VARIABLES: usize = 4; pub(crate) fn collect_subselections( configuration: &Configuration, - operations: &[Operation], + operation: &Operation, fragments: &HashMap<String, Fragment>, defer_stats: &DeferStats, ) -> Result<HashMap<SubSelectionKey, SubSelectionValue>, SpecError> { @@ -122,29 +122,27 @@ pub(crate) fn collect_subselections( }; for defer_conditions in variable_combinations(defer_stats) { shared.defer_conditions = defer_conditions; - for operation in operations { - let type_name = operation.type_name.clone(); - let primary = collect_from_selection_set( - &mut shared, - // FIXME: use `ast::Name` everywhere so fallible conversion isn’t needed - #[allow(clippy::unwrap_used)] - &FieldType::new_named((&type_name).try_into().unwrap()), - &operation.selection_set, - ) - .map_err(|err| SpecError::TransformError(err.to_owned()))?; - debug_assert!(shared.path.is_empty()); - if !primary.is_empty() { - shared.subselections.insert( - SubSelectionKey { - defer_label: None, - defer_conditions, - }, - SubSelectionValue { - selection_set: primary, - type_name, - }, - ); - } + let type_name = operation.type_name.clone(); + let primary = collect_from_selection_set( + &mut shared, + // FIXME: use `ast::Name` everywhere so fallible conversion isn’t needed + #[allow(clippy::unwrap_used)] + &FieldType::new_named((&type_name).try_into().unwrap()), + &operation.selection_set, + ) + .map_err(|err| SpecError::TransformError(err.to_owned()))?; + debug_assert!(shared.path.is_empty()); + if !primary.is_empty() { + shared.subselections.insert( + SubSelectionKey { + defer_label:
None, + defer_conditions, + }, + SubSelectionValue { + selection_set: primary, + type_name, + }, + ); } } Ok(shared.subselections) diff --git a/apollo-router/src/spec/query/tests.rs b/apollo-router/src/spec/query/tests.rs index bb13004402..1d97962405 100644 --- a/apollo-router/src/spec/query/tests.rs +++ b/apollo-router/src/spec/query/tests.rs @@ -131,7 +131,6 @@ impl FormatTest { query.format_response( &mut response, - self.operation, self.variables .unwrap_or_else(|| Value::Object(Object::default())) .as_object() @@ -3506,10 +3505,8 @@ fn it_statically_includes() { &Default::default(), ) .expect("could not parse query"); - assert_eq!(query.operations.len(), 1); - let operation = &query.operations[0]; - assert_eq!(operation.selection_set.len(), 1); - match operation.selection_set.first().unwrap() { + assert_eq!(query.operation.selection_set.len(), 1); + match query.operation.selection_set.first().unwrap() { Selection::Field { name, .. } => assert_eq!(name, &ByteString::from("product")), _ => panic!("expected a field"), } @@ -3530,14 +3527,12 @@ fn it_statically_includes() { ) .expect("could not parse query"); - assert_eq!(query.operations.len(), 1); - let operation = &query.operations[0]; - assert_eq!(operation.selection_set.len(), 2); - match operation.selection_set.first().unwrap() { + assert_eq!(query.operation.selection_set.len(), 2); + match query.operation.selection_set.first().unwrap() { Selection::Field { name, .. } => assert_eq!(name, &ByteString::from("review")), _ => panic!("expected a field"), } - match operation.selection_set.get(1).unwrap() { + match query.operation.selection_set.get(1).unwrap() { Selection::Field { name, .. 
} => assert_eq!(name, &ByteString::from("product")), _ => panic!("expected a field"), } @@ -3561,10 +3556,8 @@ fn it_statically_includes() { ) .expect("could not parse query"); - assert_eq!(query.operations.len(), 1); - let operation = &query.operations[0]; - assert_eq!(operation.selection_set.len(), 1); - match operation.selection_set.first().unwrap() { + assert_eq!(query.operation.selection_set.len(), 1); + match query.operation.selection_set.first().unwrap() { Selection::Field { name, selection_set: Some(selection_set), @@ -3597,14 +3590,12 @@ fn it_statically_includes() { ) .expect("could not parse query"); - assert_eq!(query.operations.len(), 1); - let operation = &query.operations[0]; - assert_eq!(operation.selection_set.len(), 2); - match operation.selection_set.first().unwrap() { + assert_eq!(query.operation.selection_set.len(), 2); + match query.operation.selection_set.first().unwrap() { Selection::Field { name, .. } => assert_eq!(name, &ByteString::from("review")), _ => panic!("expected a field"), } - match operation.selection_set.get(1).unwrap() { + match query.operation.selection_set.get(1).unwrap() { Selection::Field { name, selection_set: Some(selection_set), @@ -3655,10 +3646,8 @@ fn it_statically_skips() { &Default::default(), ) .expect("could not parse query"); - assert_eq!(query.operations.len(), 1); - let operation = &query.operations[0]; - assert_eq!(operation.selection_set.len(), 1); - match operation.selection_set.first().unwrap() { + assert_eq!(query.operation.selection_set.len(), 1); + match query.operation.selection_set.first().unwrap() { Selection::Field { name, .. 
} => assert_eq!(name, &ByteString::from("product")), _ => panic!("expected a field"), } @@ -3679,14 +3668,12 @@ fn it_statically_skips() { ) .expect("could not parse query"); - assert_eq!(query.operations.len(), 1); - let operation = &query.operations[0]; - assert_eq!(operation.selection_set.len(), 2); - match operation.selection_set.first().unwrap() { + assert_eq!(query.operation.selection_set.len(), 2); + match query.operation.selection_set.first().unwrap() { Selection::Field { name, .. } => assert_eq!(name, &ByteString::from("review")), _ => panic!("expected a field"), } - match operation.selection_set.get(1).unwrap() { + match query.operation.selection_set.get(1).unwrap() { Selection::Field { name, .. } => assert_eq!(name, &ByteString::from("product")), _ => panic!("expected a field"), } @@ -3710,10 +3697,8 @@ fn it_statically_skips() { ) .expect("could not parse query"); - assert_eq!(query.operations.len(), 1); - let operation = &query.operations[0]; - assert_eq!(operation.selection_set.len(), 1); - match operation.selection_set.first().unwrap() { + assert_eq!(query.operation.selection_set.len(), 1); + match query.operation.selection_set.first().unwrap() { Selection::Field { name, selection_set: Some(selection_set), @@ -3746,14 +3731,12 @@ fn it_statically_skips() { ) .expect("could not parse query"); - assert_eq!(query.operations.len(), 1); - let operation = &query.operations[0]; - assert_eq!(operation.selection_set.len(), 2); - match operation.selection_set.first().unwrap() { + assert_eq!(query.operation.selection_set.len(), 2); + match query.operation.selection_set.first().unwrap() { Selection::Field { name, .. 
} => assert_eq!(name, &ByteString::from("review")), _ => panic!("expected a field"), } - match operation.selection_set.get(1).unwrap() { + match query.operation.selection_set.get(1).unwrap() { Selection::Field { name, selection_set: Some(selection_set), @@ -5132,7 +5115,6 @@ fn fragment_on_interface_on_query() { query.format_response( &mut response, - None, Default::default(), api_schema, BooleanValues { bits: 0 }, @@ -5666,7 +5648,6 @@ fn test_error_path_works_across_inline_fragments() { .unwrap(); assert!(query.contains_error_path( - None, &None, &Path::from("rootType/edges/0/node/subType/edges/0/node/myField"), BooleanValues { bits: 0 } @@ -5713,7 +5694,7 @@ fn test_query_not_named_query() { ) .unwrap(); let query = Query::parse("{ example }", None, &schema, &config).unwrap(); - let selection = &query.operations[0].selection_set[0]; + let selection = &query.operation.selection_set[0]; assert!( matches!( selection, @@ -5784,12 +5765,12 @@ fn filtered_defer_fragment() { .parse_ast(filtered_query, "filtered_query.graphql") .unwrap(); let doc = ast.to_executable(schema.supergraph_schema()).unwrap(); - let (fragments, operations, defer_stats, schema_aware_hash) = + let (fragments, operation, defer_stats, schema_aware_hash) = Query::extract_query_information(&schema, &doc, None).unwrap(); let subselections = crate::spec::query::subselections::collect_subselections( &config, - &operations, + &operation, &fragments.map, &defer_stats, ) @@ -5797,7 +5778,7 @@ fn filtered_defer_fragment() { let mut query = Query { string: query.to_string(), fragments, - operations, + operation, filtered_query: None, subselections, defer_stats, @@ -5810,12 +5791,12 @@ fn filtered_defer_fragment() { .parse_ast(filtered_query, "filtered_query.graphql") .unwrap(); let doc = ast.to_executable(schema.supergraph_schema()).unwrap(); - let (fragments, operations, defer_stats, schema_aware_hash) = + let (fragments, operation, defer_stats, schema_aware_hash) = 
Query::extract_query_information(&schema, &doc, None).unwrap(); let subselections = crate::spec::query::subselections::collect_subselections( &config, - &operations, + &operation, &fragments.map, &defer_stats, ) @@ -5824,7 +5805,7 @@ fn filtered_defer_fragment() { let filtered = Query { string: filtered_query.to_string(), fragments, - operations, + operation, filtered_query: None, subselections, defer_stats, @@ -5845,7 +5826,6 @@ fn filtered_defer_fragment() { query.filtered_query.as_ref().unwrap().format_response( &mut response, - None, Object::new(), schema.api_schema(), BooleanValues { bits: 0 }, @@ -5855,7 +5835,6 @@ fn filtered_defer_fragment() { query.format_response( &mut response, - None, Object::new(), schema.api_schema(), BooleanValues { bits: 0 }, diff --git a/apollo-router/src/spec/selection.rs b/apollo-router/src/spec/selection.rs index f9ac7e42b9..4e093ea4fb 100644 --- a/apollo-router/src/spec/selection.rs +++ b/apollo-router/src/spec/selection.rs @@ -11,7 +11,6 @@ use crate::spec::query::DeferStats; use crate::spec::FieldType; use crate::spec::Schema; use crate::spec::SpecError; -use crate::spec::TYPENAME; #[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)] pub(crate) enum Selection { @@ -190,10 +189,6 @@ impl Selection { }) } - pub(crate) fn is_typename_field(&self) -> bool { - matches!(self, Selection::Field {name, ..} if name.as_str() == TYPENAME) - } - pub(crate) fn contains_error_path(&self, path: &[PathElement], fragments: &Fragments) -> bool { match (path.first(), self) { (None, _) => true, diff --git a/apollo-router/src/test_harness.rs b/apollo-router/src/test_harness.rs index a0b5384489..516048e3d7 100644 --- a/apollo-router/src/test_harness.rs +++ b/apollo-router/src/test_harness.rs @@ -283,6 +283,7 @@ impl<'a> TestHarness<'a> { let empty_response = subgraph::Response::builder() .extensions(crate::json_ext::Object::new()) .context(request.context) + .id(request.id) .build(); std::future::ready(Ok(empty_response)) }) diff 
--git a/apollo-router/src/testdata/minimal_supergraph.graphql b/apollo-router/src/testdata/minimal_supergraph.graphql index ac0b1860af..bf822a6455 100644 --- a/apollo-router/src/testdata/minimal_supergraph.graphql +++ b/apollo-router/src/testdata/minimal_supergraph.graphql @@ -4,6 +4,8 @@ schema query: Query } +directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE + directive @join__field( graph: join__Graph requires: join__FieldSet @@ -24,6 +26,11 @@ directive @join__type( isInterfaceObject: Boolean! = false ) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR +directive @join__unionMember( + graph: join__Graph! + member: String! +) repeatable on UNION + directive @link( url: String as: String diff --git a/apollo-router/src/testdata/orga_supergraph.graphql b/apollo-router/src/testdata/orga_supergraph.graphql index a6ddb679d0..14947aeff5 100644 --- a/apollo-router/src/testdata/orga_supergraph.graphql +++ b/apollo-router/src/testdata/orga_supergraph.graphql @@ -7,6 +7,7 @@ schema } directive @inaccessible on FIELD_DEFINITION | OBJECT | INTERFACE | UNION | ARGUMENT_DEFINITION | SCALAR | ENUM | ENUM_VALUE | INPUT_OBJECT | INPUT_FIELD_DEFINITION +directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE directive @join__field( graph: join__Graph requires: join__FieldSet @@ -28,6 +29,10 @@ directive @join__type( resolvable: Boolean! = true isInterfaceObject: Boolean! = false ) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR +directive @join__unionMember( + graph: join__Graph! + member: String! 
+) repeatable on UNION directive @link( url: String as: String diff --git a/apollo-router/tests/fixtures/batching/subgraph_id.rhai b/apollo-router/tests/fixtures/batching/subgraph_id.rhai new file mode 100644 index 0000000000..e69de29bb2 diff --git a/apollo-router/tests/fixtures/batching/subgraph_id.router.yaml b/apollo-router/tests/fixtures/batching/subgraph_id.router.yaml new file mode 100644 index 0000000000..af5deda700 --- /dev/null +++ b/apollo-router/tests/fixtures/batching/subgraph_id.router.yaml @@ -0,0 +1,15 @@ +# Simple config to enable batching and rhai scripts for testing + +batching: + enabled: true + mode: batch_http_link + subgraph: + all: + enabled: true + +rhai: + scripts: ./tests/fixtures/batching + main: subgraph_id.rhai + +include_subgraph_errors: + all: true diff --git a/apollo-router/tests/fixtures/request_response_test.rhai b/apollo-router/tests/fixtures/request_response_test.rhai index 4dc88d42ad..966a1a42b0 100644 --- a/apollo-router/tests/fixtures/request_response_test.rhai +++ b/apollo-router/tests/fixtures/request_response_test.rhai @@ -3,7 +3,7 @@ // If any of the tests fail, the thrown error will cause the respective rust // unit test to fail. 
-fn process_common_request(check_context_method_and_id, request) { +fn process_common_request(check_context_method_and_id, check_body, request) { if check_context_method_and_id { if request.context.entries != () { throw(`context entries: expected: (), actual: ${request.context.entries}`); @@ -15,17 +15,19 @@ fn process_common_request(check_context_method_and_id, request) { throw(`query: expected: "string", actual: ${type_of(request.id)}`); } } - if request.body.operation_name != () { - throw(`operation name: expected: canned, actual: ${request.body.operation_name}`); - } - if request.body.query != () { - throw(`query: expected: (), actual: ${request.body.query}`); - } - if request.body.variables != #{} { - throw(`query: expected: #{}, actual: ${request.body.variables}`); - } - if request.body.extensions != #{} { - throw(`query: expected: #{}, actual: ${request.body.extensions}`); + if check_body { + if request.body.operation_name != () { + throw(`operation name: expected: canned, actual: ${request.body.operation_name}`); + } + if request.body.query != () { + throw(`query: expected: (), actual: ${request.body.query}`); + } + if request.body.variables != #{} { + throw(`query: expected: #{}, actual: ${request.body.variables}`); + } + if request.body.extensions != #{} { + throw(`query: expected: #{}, actual: ${request.body.extensions}`); + } } if request.uri.host != () { throw(`query: expected: (), actual: ${request.uri.host}`); @@ -33,6 +35,13 @@ fn process_common_request(check_context_method_and_id, request) { if request.uri.path != "/" { throw(`query: expected: "/", actual: ${request.uri.path}`); } + if request.uri.port != {} { + throw(`query: expected: {}, actual: ${request.uri.port}`); + } +} + +fn process_router_request(request){ + process_common_request(true, false, request); } fn process_supergraph_request(request) { @@ -80,16 +89,20 @@ fn process_supergraph_request(request) { } fn process_execution_request(request) { - process_common_request(true, request); + 
process_common_request(true, true, request); if request.query_plan != "" { throw(`query: expected: (), actual: ${request.query_plan}`); } } fn process_subgraph_request(request) { - process_common_request(true, request); + process_common_request(true, true, request); // subgraph doesn't have a context member - process_common_request(false, request.subgraph); + process_common_request(false, true, request.subgraph); + + if request.subgraph_request_id == () { + throw(`subgraph request must have a subgraph request id`); + } } fn test_response_is_primary(response) { @@ -146,6 +159,18 @@ fn process_common_response(response) { } } +fn test_parse_request_details(request){ + if request.uri.host != "not-default" { + throw(`query: expected: not-default, actual: ${request.uri.host}`); + } + if request.uri.path != "/path" { + throw(`query: expected: "/path", actual: ${request.uri.path}`); + } + if request.uri.port != 8080 { + throw(`query: expected: 8080, actual: ${request.uri.port}`); + } +} + fn process_router_response(response) { test_response_is_primary(response); process_common_response(response); @@ -185,6 +210,9 @@ fn process_subgraph_response(response) { process_common_response(response); test_response_body(response); test_response_status_code(response); + if response.subgraph_request_id == () { + throw(`subgraph response must have a subgraph request id`); + } } fn process_subgraph_response_om_forbidden(response) { @@ -234,4 +262,4 @@ fn process_subgraph_response_om_missing_message(response) { throw #{ status: 400, }; -} +} \ No newline at end of file diff --git a/apollo-router/tests/integration/fixtures/query_planner_redis_config_update_introspection.router.yaml b/apollo-router/tests/integration/fixtures/query_planner_redis_config_update_introspection.router.yaml deleted file mode 100644 index cd9a3e2d4d..0000000000 --- a/apollo-router/tests/integration/fixtures/query_planner_redis_config_update_introspection.router.yaml +++ /dev/null @@ -1,11 +0,0 @@ -# This config 
updates the query plan options so that we can see if there is a different redis cache entry generted for query plans -supergraph: - introspection: true - query_planning: - cache: - redis: - required_to_start: true - urls: - - redis://localhost:6379 - ttl: 10s - diff --git a/apollo-router/tests/integration/introspection.rs b/apollo-router/tests/integration/introspection.rs index ac5ae33915..95c8ad9c8c 100644 --- a/apollo-router/tests/integration/introspection.rs +++ b/apollo-router/tests/integration/introspection.rs @@ -6,32 +6,12 @@ use tower::ServiceExt; use crate::integration::IntegrationTest; #[tokio::test] -async fn simple_legacy_mode() { +async fn simple() { let request = Request::fake_builder() .query("{ __schema { queryType { name } } }") .build() .unwrap(); - let response = make_request(request, "legacy").await; - insta::assert_json_snapshot!(response, @r###" - { - "data": { - "__schema": { - "queryType": { - "name": "Query" - } - } - } - } - "###); -} - -#[tokio::test] -async fn simple_new_mode() { - let request = Request::fake_builder() - .query("{ __schema { queryType { name } } }") - .build() - .unwrap(); - let response = make_request(request, "new").await; + let response = make_request(request).await; insta::assert_json_snapshot!(response, @r###" { "data": { @@ -51,7 +31,7 @@ async fn top_level_inline_fragment() { .query("{ ... 
{ __schema { queryType { name } } } }") .build() .unwrap(); - let response = make_request(request, "legacy").await; + let response = make_request(request).await; insta::assert_json_snapshot!(response, @r###" { "data": { @@ -82,15 +62,18 @@ async fn variable() { .variable("d", true) .build() .unwrap(); - let response = make_request(request, "legacy").await; + let response = make_request(request).await; insta::assert_json_snapshot!(response, @r###" { "errors": [ { - "message": "introspection error : Variable \"$d\" of required type \"Boolean!\" was not provided.", - "extensions": { - "code": "INTROSPECTION_ERROR" - } + "message": "missing value for non-null variable 'd'", + "locations": [ + { + "line": 2, + "column": 23 + } + ] } ] } @@ -109,17 +92,16 @@ async fn two_operations() { .operation_name("ThisOp") .build() .unwrap(); - let response = make_request(request, "legacy").await; + let response = make_request(request).await; insta::assert_json_snapshot!(response, @r###" { - "errors": [ - { - "message": "Schema introspection is currently not supported with multiple operations in the same document", - "extensions": { - "code": "INTROSPECTION_WITH_MULTIPLE_OPERATIONS" + "data": { + "__schema": { + "queryType": { + "name": "Query" } } - ] + } } "###); } @@ -135,7 +117,7 @@ async fn operation_name_error() { ) .build() .unwrap(); - let response = make_request(request, "legacy").await; + let response = make_request(request).await; insta::assert_json_snapshot!(response, @r###" { "errors": [ @@ -154,7 +136,7 @@ async fn operation_name_error() { .operation_name("NonExistentOp") .build() .unwrap(); - let response = make_request(request, "legacy").await; + let response = make_request(request).await; insta::assert_json_snapshot!(response, @r###" { "errors": [ @@ -180,12 +162,12 @@ async fn mixed() { ) .build() .unwrap(); - let response = make_request(request, "legacy").await; + let response = make_request(request).await; insta::assert_json_snapshot!(response, @r###" { "errors": 
[ { - "message": "Mixed queries with both schema introspection and concrete fields are not supported", + "message": "Mixed queries with both schema introspection and concrete fields are not supported yet: https://github.com/apollographql/router/issues/2789", "extensions": { "code": "MIXED_INTROSPECTION" } @@ -195,10 +177,9 @@ async fn mixed() { "###); } -async fn make_request(request: Request, mode: &str) -> apollo_router::graphql::Response { +async fn make_request(request: Request) -> apollo_router::graphql::Response { apollo_router::TestHarness::builder() .configuration_json(json!({ - "experimental_introspection_mode": mode, "supergraph": { "introspection": true, }, @@ -228,37 +209,11 @@ async fn make_request(request: Request, mode: &str) -> apollo_router::graphql::R .unwrap() } -#[tokio::test] -async fn both_mode_integration() { - let mut router = IntegrationTest::builder() - .config( - " - # `experimental_introspection_mode` now defaults to `both` - supergraph: - introspection: true - ", - ) - .supergraph("tests/fixtures/schema_to_introspect.graphql") - .log("error,apollo_router=info,apollo_router::query_planner=trace") - .build() - .await; - router.start().await; - router.assert_started().await; - let query = json!({ - "query": include_str!("../fixtures/introspect_full_schema.graphql"), - }); - let (_trace_id, response) = router.execute_query(&query).await; - insta::assert_json_snapshot!(response.json::().await.unwrap()); - router.assert_log_contains("Introspection match! 
🎉").await; - router.graceful_shutdown().await; -} - #[tokio::test] async fn integration() { let mut router = IntegrationTest::builder() .config( " - experimental_introspection_mode: new supergraph: introspection: true ", diff --git a/apollo-router/tests/integration/operation_name.rs b/apollo-router/tests/integration/operation_name.rs index 1359b54398..6d2ef81226 100644 --- a/apollo-router/tests/integration/operation_name.rs +++ b/apollo-router/tests/integration/operation_name.rs @@ -14,9 +14,15 @@ async fn empty_document() { { "errors": [ { - "message": "Syntax Error: Unexpected .", + "message": "parsing error: syntax error: Unexpected .", + "locations": [ + { + "line": 1, + "column": 27 + } + ], "extensions": { - "code": "GRAPHQL_PARSE_FAILED" + "code": "PARSING_ERROR" } } ] diff --git a/apollo-router/tests/integration/redis.rs b/apollo-router/tests/integration/redis.rs index bb8bb5c38e..9e6af18826 100644 --- a/apollo-router/tests/integration/redis.rs +++ b/apollo-router/tests/integration/redis.rs @@ -46,9 +46,12 @@ use crate::integration::IntegrationTest; #[tokio::test(flavor = "multi_thread")] async fn query_planner_cache() -> Result<(), BoxError> { + if !graph_os_enabled() { + return Ok(()); + } // If this test fails and the cache key format changed you'll need to update the key here. // Look at the top of the file for instructions on getting the new cache key. 
- let known_cache_key = "plan:0:v2.9.2:70f115ebba5991355c17f4f56ba25bb093c519c4db49a30f3b10de279a4e3fa4:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:4f9f0183101b2f249a364b98adadfda6e5e2001d1f2465c988428cf1ac0b545f"; + let known_cache_key = "plan:0:v2.9.3:70f115ebba5991355c17f4f56ba25bb093c519c4db49a30f3b10de279a4e3fa4:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:68e167191994b73c1892549ef57d0ec4cd76d518fad4dac5350846fe9af0b3f1"; let config = RedisConfig::from_url("redis://127.0.0.1:6379").unwrap(); let client = RedisClient::new(config, None, None, None); @@ -179,6 +182,10 @@ async fn query_planner_cache() -> Result<(), BoxError> { #[tokio::test(flavor = "multi_thread")] async fn apq() -> Result<(), BoxError> { + if !graph_os_enabled() { + return Ok(()); + } + let config = RedisConfig::from_url("redis://127.0.0.1:6379").unwrap(); let client = RedisClient::new(config, None, None, None); let connection_task = client.connect(); @@ -318,7 +325,11 @@ async fn apq() -> Result<(), BoxError> { } #[tokio::test(flavor = "multi_thread")] -async fn entity_cache() -> Result<(), BoxError> { +async fn entity_cache_basic() -> Result<(), BoxError> { + if !graph_os_enabled() { + return Ok(()); + } + let config = RedisConfig::from_url("redis://127.0.0.1:6379").unwrap(); let client = RedisClient::new(config, None, None, None); let connection_task = client.connect(); @@ -562,6 +573,10 @@ async fn entity_cache() -> Result<(), BoxError> { #[tokio::test(flavor = "multi_thread")] async fn entity_cache_authorization() -> Result<(), BoxError> { + if !graph_os_enabled() { + return Ok(()); + } + let config = RedisConfig::from_url("redis://127.0.0.1:6379").unwrap(); let client = RedisClient::new(config, None, None, None); let connection_task = client.connect(); @@ -886,6 +901,10 @@ async fn entity_cache_authorization() -> Result<(), BoxError> { #[tokio::test(flavor = "multi_thread")] async fn connection_failure_blocks_startup() { + if !graph_os_enabled() { 
+ return; + } + let _ = apollo_router::TestHarness::builder() .with_subgraph_network_requests() .configuration_json(json!({ @@ -944,7 +963,7 @@ async fn connection_failure_blocks_startup() { async fn query_planner_redis_update_query_fragments() { test_redis_query_plan_config_update( include_str!("fixtures/query_planner_redis_config_update_query_fragments.router.yaml"), - "plan:0:v2.9.2:e15b4f5cd51b8cc728e3f5171611073455601e81196cd3cbafc5610d9769a370:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:78f3ccab3def369f4b809a0f8c8f6e90545eb08cd1efeb188ffc663b902c1f2d", + "plan:0:v2.9.3:e15b4f5cd51b8cc728e3f5171611073455601e81196cd3cbafc5610d9769a370:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:d239cf1d493e71f4bcb05e727c38e4cf55b32eb806791fa415bb6f6c8e5352e5", ) .await; } @@ -959,26 +978,6 @@ async fn query_planner_redis_update_planner_mode() { .await; } -#[tokio::test(flavor = "multi_thread")] -async fn query_planner_redis_update_introspection() { - // If this test fails and the cache key format changed you'll need to update - // the key here. Look at the top of the file for instructions on getting - // the new cache key. - // - // You first need to follow the process and update the key in - // `test_redis_query_plan_config_update`, and then update the key in this - // test. - // - // This test requires graphos license, so make sure you have - // "TEST_APOLLO_KEY" and "TEST_APOLLO_GRAPH_REF" env vars set, otherwise the - // test just passes locally. 
- test_redis_query_plan_config_update( - include_str!("fixtures/query_planner_redis_config_update_introspection.router.yaml"), - "plan:0:v2.9.2:e15b4f5cd51b8cc728e3f5171611073455601e81196cd3cbafc5610d9769a370:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:99a70d6c967eea3bc68721e1094f586f5ae53c7e12f83a650abd5758c372d048", - ) - .await; -} - #[tokio::test(flavor = "multi_thread")] async fn query_planner_redis_update_defer() { // If this test fails and the cache key format changed you'll need to update @@ -994,7 +993,7 @@ async fn query_planner_redis_update_defer() { // test just passes locally. test_redis_query_plan_config_update( include_str!("fixtures/query_planner_redis_config_update_defer.router.yaml"), - "plan:0:v2.9.2:e15b4f5cd51b8cc728e3f5171611073455601e81196cd3cbafc5610d9769a370:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:d6a3d7807bb94cfb26be4daeb35e974680b53755658fafd4c921c70cec1b7c39", + "plan:0:v2.9.3:e15b4f5cd51b8cc728e3f5171611073455601e81196cd3cbafc5610d9769a370:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:752b870a0241594f54b7b593f16ab6cf6529eb5c9fe3d24e6bc4a618c24a5b81", ) .await; } @@ -1016,7 +1015,7 @@ async fn query_planner_redis_update_type_conditional_fetching() { include_str!( "fixtures/query_planner_redis_config_update_type_conditional_fetching.router.yaml" ), - "plan:0:v2.9.2:e15b4f5cd51b8cc728e3f5171611073455601e81196cd3cbafc5610d9769a370:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:8991411cc7b66d9f62ab1e661f2ce9ccaf53b0d203a275e43ced9a8b6bba02dd", + "plan:0:v2.9.3:e15b4f5cd51b8cc728e3f5171611073455601e81196cd3cbafc5610d9769a370:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:e2145b320a44bebbd687c714dcfd046c032e56fe394aedcf50d9ab539f4354ea", ) .await; } @@ -1038,7 +1037,7 @@ async fn query_planner_redis_update_reuse_query_fragments() { include_str!( "fixtures/query_planner_redis_config_update_reuse_query_fragments.router.yaml" ), - 
"plan:0:v2.9.2:e15b4f5cd51b8cc728e3f5171611073455601e81196cd3cbafc5610d9769a370:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:c05e89caeb8efc4e8233e8648099b33414716fe901e714416fd0f65a67867f07", + "plan:0:v2.9.3:e15b4f5cd51b8cc728e3f5171611073455601e81196cd3cbafc5610d9769a370:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:8b6c1838a55cbc6327adb5507f103eed1d5b1071e9acb9c67e098c5b9ea2887e", ) .await; } @@ -1063,7 +1062,7 @@ async fn test_redis_query_plan_config_update(updated_config: &str, new_cache_key router.clear_redis_cache().await; // If the tests above are failing, this is the key that needs to be changed first. - let starting_key = "plan:0:v2.9.2:e15b4f5cd51b8cc728e3f5171611073455601e81196cd3cbafc5610d9769a370:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:a52c81e3e2e47c8363fbcd2653e196431c15716acc51fce4f58d9368ac4c2d8d"; + let starting_key = "plan:0:v2.9.3:e15b4f5cd51b8cc728e3f5171611073455601e81196cd3cbafc5610d9769a370:3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112:41ae54204ebb1911412cf23e8f1d458cb08d6fabce16f255f7a497fd2b6fe213"; assert_ne!(starting_key, new_cache_key, "starting_key (cache key for the initial config) and new_cache_key (cache key with the updated config) should not be equal. 
This either means that the cache key is not being generated correctly, or that the test is not actually checking the updated key."); router.execute_default_query().await; diff --git a/apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache-2.snap b/apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache_basic-2.snap similarity index 100% rename from apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache-2.snap rename to apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache_basic-2.snap diff --git a/apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache-3.snap b/apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache_basic-3.snap similarity index 100% rename from apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache-3.snap rename to apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache_basic-3.snap diff --git a/apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache-4.snap b/apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache_basic-4.snap similarity index 100% rename from apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache-4.snap rename to apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache_basic-4.snap diff --git a/apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache-5.snap b/apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache_basic-5.snap similarity index 100% rename from apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache-5.snap rename to 
apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache_basic-5.snap diff --git a/apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache.snap b/apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache_basic.snap similarity index 100% rename from apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache.snap rename to apollo-router/tests/integration/snapshots/integration_tests__integration__redis__entity_cache_basic.snap diff --git a/apollo-router/tests/integration/subgraph_response.rs b/apollo-router/tests/integration/subgraph_response.rs index 2dd8fc68d6..52fc56fa27 100644 --- a/apollo-router/tests/integration/subgraph_response.rs +++ b/apollo-router/tests/integration/subgraph_response.rs @@ -9,6 +9,78 @@ include_subgraph_errors: all: true "#; +#[tokio::test(flavor = "multi_thread")] +async fn test_subgraph_returning_data_null() -> Result<(), BoxError> { + let mut router = IntegrationTest::builder() + .config(CONFIG) + .responder(ResponseTemplate::new(200).set_body_json(json!({ "data": null }))) + .build() + .await; + + router.start().await; + router.assert_started().await; + + let query = "{ __typename topProducts { name } }"; + let (_trace_id, response) = router.execute_query(&json!({ "query": query })).await; + assert_eq!(response.status(), 200); + assert_eq!( + response.json::().await?, + json!({ "data": null }) + ); + Ok(()) +} + +#[tokio::test(flavor = "multi_thread")] +async fn test_subgraph_returning_different_typename_on_query_root() -> Result<(), BoxError> { + let mut router = IntegrationTest::builder() + .config(CONFIG) + .responder(ResponseTemplate::new(200).set_body_json(json!({ + "data": { + "topProducts": null, + "__typename": "SomeQueryRoot", + "aliased": "SomeQueryRoot", + "inside_fragment": "SomeQueryRoot", + "inside_inline_fragment": "SomeQueryRoot" + } + }))) + .build() + .await; + + 
router.start().await; + router.assert_started().await; + + let query = r#" + { + topProducts { name } + __typename + aliased: __typename + ...TypenameFragment + ... { + inside_inline_fragment: __typename + } + } + + fragment TypenameFragment on Query { + inside_fragment: __typename + } + "#; + let (_trace_id, response) = router.execute_query(&json!({ "query": query })).await; + assert_eq!(response.status(), 200); + assert_eq!( + response.json::().await?, + json!({ + "data": { + "topProducts": null, + "__typename": "Query", + "aliased": "Query", + "inside_fragment": "Query", + "inside_inline_fragment": "Query" + } + }) + ); + Ok(()) +} + #[tokio::test(flavor = "multi_thread")] async fn test_valid_error_locations() -> Result<(), BoxError> { let mut router = IntegrationTest::builder() @@ -35,7 +107,7 @@ async fn test_valid_error_locations() -> Result<(), BoxError> { .await; assert_eq!(response.status(), 200); assert_eq!( - serde_json::from_str::(&response.text().await?)?, + response.json::().await?, json!({ "data": { "topProducts": null }, "errors": [{ @@ -76,7 +148,7 @@ async fn test_empty_error_locations() -> Result<(), BoxError> { .await; assert_eq!(response.status(), 200); assert_eq!( - serde_json::from_str::(&response.text().await?)?, + response.json::().await?, json!({ "data": { "topProducts": null }, "errors": [{ @@ -113,7 +185,7 @@ async fn test_invalid_error_locations() -> Result<(), BoxError> { .await; assert_eq!(response.status(), 200); assert_eq!( - serde_json::from_str::(&response.text().await?)?, + response.json::().await?, json!({ "data": null, "errors": [{ @@ -155,7 +227,7 @@ async fn test_invalid_error_locations_with_single_negative_one_location() -> Res .await; assert_eq!(response.status(), 200); assert_eq!( - serde_json::from_str::(&response.text().await?)?, + response.json::().await?, json!({ "data": { "topProducts": null }, "errors": [{ @@ -196,7 +268,7 @@ async fn test_invalid_error_locations_contains_negative_one_location() -> Result .await; 
assert_eq!(response.status(), 200); assert_eq!( - serde_json::from_str::<Value>(&response.text().await?)?, + response.json::<Value>().await?, json!({ "data": { "topProducts": null }, "errors": [{ diff --git a/apollo-router/tests/integration/telemetry/fixtures/json.router.yaml b/apollo-router/tests/integration/telemetry/fixtures/json.router.yaml index fa8fba775e..8fa0f2a74e 100644 --- a/apollo-router/tests/integration/telemetry/fixtures/json.router.yaml +++ b/apollo-router/tests/integration/telemetry/fixtures/json.router.yaml @@ -22,11 +22,13 @@ telemetry: - "log" - request_header: "x-log-request" my.request_event: - message: "my event message" + message: "my request event message" level: info on: request attributes: http.request.body.size: true + schema.id: + request_context: "apollo::supergraph_schema_id" my.response_event: message: "my response event message" level: info diff --git a/apollo-router/tests/integration/telemetry/logging.rs b/apollo-router/tests/integration/telemetry/logging.rs index 64b43fe032..9e41160572 100644 --- a/apollo-router/tests/integration/telemetry/logging.rs +++ b/apollo-router/tests/integration/telemetry/logging.rs @@ -29,6 +29,16 @@ async fn test_json() -> Result<(), BoxError> { router.assert_log_contains("span_id").await; router.execute_query(&query).await; router.assert_log_contains(r#""static_one":"test""#).await; + #[cfg(unix)] + { + router.execute_query(&query).await; + router + .assert_log_contains( + r#""schema.id":"dd8960ccefda82ca58e8ac0bc266459fd49ee8215fd6b3cc72e7bc3d7f3464b9""#, + ) + .await; + } + router.execute_query(&query).await; router .assert_log_contains(r#""on_supergraph_response_event":"on_supergraph_event""#) diff --git a/apollo-router/tests/integration/telemetry/metrics.rs b/apollo-router/tests/integration/telemetry/metrics.rs index aee5de813e..1ae93f2d4f 100644 --- a/apollo-router/tests/integration/telemetry/metrics.rs +++ b/apollo-router/tests/integration/telemetry/metrics.rs @@ -278,7 +278,7 @@ async fn 
test_gauges_on_reload() { .await; router .assert_metrics_contains( - r#"apollo_router_cache_size{kind="query planner",type="memory",otel_scope_name="apollo/router"} 3"#, + r#"apollo_router_cache_size{kind="query planner",type="memory",otel_scope_name="apollo/router"} 1"#, None, ) .await; diff --git a/apollo-router/tests/integration/typename.rs b/apollo-router/tests/integration/typename.rs index 4331333594..782e90adb6 100644 --- a/apollo-router/tests/integration/typename.rs +++ b/apollo-router/tests/integration/typename.rs @@ -4,37 +4,72 @@ use tower::ServiceExt; const SCHEMA: &str = r#" schema - @core(feature: "https://specs.apollo.dev/core/v0.1"), - @core(feature: "https://specs.apollo.dev/join/v0.1") -{ + @link(url: "https://specs.apollo.dev/link/v1.0") + @link(url: "https://specs.apollo.dev/join/v0.3", for: EXECUTION) { query: MyQuery mutation: MyMutation } -directive @core(feature: String!) repeatable on SCHEMA +directive @join__enumValue(graph: join__Graph!) repeatable on ENUM_VALUE -directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet) on FIELD_DEFINITION +directive @join__field( + graph: join__Graph + requires: join__FieldSet + provides: join__FieldSet + type: String + external: Boolean + override: String + usedOverridden: Boolean +) repeatable on FIELD_DEFINITION | INPUT_FIELD_DEFINITION -directive @join__type(graph: join__Graph!, key: join__FieldSet) repeatable on OBJECT | INTERFACE +directive @join__graph(name: String!, url: String!) on ENUM_VALUE -directive @join__owner(graph: join__Graph!) on OBJECT | INTERFACE +directive @join__type( + graph: join__Graph! + key: join__FieldSet + extension: Boolean! = false + resolvable: Boolean! = true + isInterfaceObject: Boolean! = false +) repeatable on OBJECT | INTERFACE | UNION | ENUM | INPUT_OBJECT | SCALAR -directive @join__graph(name: String!, url: String!) on ENUM_VALUE +directive @join__unionMember( + graph: join__Graph! + member: String! 
+) repeatable on UNION + +directive @link( + url: String + as: String + for: link__Purpose + import: [link__Import] +) repeatable on SCHEMA + +directive @join__implements( + graph: join__Graph! + interface: String! +) repeatable on OBJECT | INTERFACE scalar join__FieldSet +scalar link__Import enum join__Graph { - ACCOUNTS @join__graph(name: "accounts" url: "http://localhost:4001") - INVENTORY @join__graph(name: "inventory" url: "http://localhost:4004") - PRODUCTS @join__graph(name: "products" url: "http://localhost:4003") - REVIEWS @join__graph(name: "reviews" url: "http://localhost:4002") + SUBGRAPH_A + @join__graph( + name: "subgraph-a" + url: "http://graphql.subgraph-a.svc.cluster.local:4000" + ) +} + +enum link__Purpose { + SECURITY + EXECUTION } -type MyMutation { +type MyMutation @join__type(graph: SUBGRAPH_A) { createThing: String } -type MyQuery { +type MyQuery @join__type(graph: SUBGRAPH_A) { thing: String } "#; @@ -71,6 +106,74 @@ async fn aliased() { "###); } +// FIXME: the tests below panic because of a bug in the query planner, failing with: +// "value retrieval failed: empty query plan. This behavior is unexpected and we suggest opening an issue to apollographql/router with a reproduction." +// See: https://github.com/apollographql/router/issues/6154 +#[tokio::test] +#[should_panic] +async fn inside_inline_fragment() { + let request = Request::fake_builder() + .query("{ ... 
{ __typename } }") + .build() + .unwrap(); + let response = make_request(request).await; + insta::assert_json_snapshot!(response, @r###" + { + "data": { + "n": "MyQuery" + } + } + "###); +} + +#[tokio::test] +#[should_panic] // See above FIXME +async fn inside_fragment() { + let query = r#" + { ...SomeFragment } + + fragment SomeFragment on MyQuery { + __typename + } + "#; + let request = Request::fake_builder().query(query).build().unwrap(); + let response = make_request(request).await; + insta::assert_json_snapshot!(response, @r###" + { + "data": { + "n": "MyQuery" + } + } + "###); +} + +#[tokio::test] +#[should_panic] // See above FIXME +async fn deeply_nested_inside_fragments() { + let query = r#" + { ...SomeFragment } + + fragment SomeFragment on MyQuery { + ... { + ...AnotherFragment + } + } + + fragment AnotherFragment on MyQuery { + __typename + } + "#; + let request = Request::fake_builder().query(query).build().unwrap(); + let response = make_request(request).await; + insta::assert_json_snapshot!(response, @r###" + { + "data": { + "n": "MyQuery" + } + } + "###); +} + #[tokio::test] async fn mutation() { let request = Request::fake_builder() diff --git a/deny.toml b/deny.toml index 378e5ed28c..915ca9e58c 100644 --- a/deny.toml +++ b/deny.toml @@ -27,7 +27,10 @@ git-fetch-with-cli = true # output a note when they are encountered. 
# rustsec advisory exemptions -ignore = ["RUSTSEC-2023-0071"] +ignore = [ + "RUSTSEC-2023-0071", + "RUSTSEC-2024-0376", # we do not use tonic::transport::Server +] # This section is considered when running `cargo deny check licenses` # More documentation for the licenses section can be found here: diff --git a/docs/shared/coproc-typical-config.mdx b/docs/shared/coproc-typical-config.mdx index a9812d47c3..5986426fd7 100644 --- a/docs/shared/coproc-typical-config.mdx +++ b/docs/shared/coproc-typical-config.mdx @@ -38,10 +38,12 @@ coprocessor: uri: false method: false service_name: false + subgraph_request_id: false response: # By including this key, the `SubgraphService` sends a coprocessor request whenever it receives a subgraph response. headers: true body: false context: false service_name: false status_code: false + subgraph_request_id: false ``` diff --git a/docs/source/configuration/entity-caching.mdx b/docs/source/configuration/entity-caching.mdx index a47967c253..4f898ecdbc 100644 --- a/docs/source/configuration/entity-caching.mdx +++ b/docs/source/configuration/entity-caching.mdx @@ -109,6 +109,15 @@ preview_entity_cache: private_id: "user_id" ``` + + + +In router v1.51 and earlier, Redis and per-subgraph caching configurations are set directly on `preview_entity_cache`, for example `preview_entity_cache.redis`. + +This configuration may change while the feature is in [preview](/resources/product-launch-stages/#product-launch-stages). + + + ### Configure time to live (TTL) Besides configuring a global TTL for all the entries in Redis, the GraphOS Router also honors the [`Cache-Control` header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control) returned with the subgraph response. It generates a `Cache-Control` header for the client response by aggregating the TTL information from all response parts.
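The `Cache-Control` aggregation described above can be sketched as follows. The helper is illustrative, not the router's internal API, and assumes each response part exposes an effective `max-age`: the client response is only cacheable for as long as its shortest-lived part, so the aggregated value is the minimum across all parts.

```rust
use std::time::Duration;

// Illustrative sketch (not the router's internal API): aggregate the
// client-facing `max-age` as the minimum across all subgraph response parts.
fn aggregate_max_age(part_max_ages: &[Duration]) -> Option<Duration> {
    part_max_ages.iter().copied().min()
}

fn main() {
    let parts = [
        Duration::from_secs(300), // one part cacheable for 5 minutes
        Duration::from_secs(10),  // another for only 10 seconds
        Duration::from_secs(60),  // another for 1 minute
    ];
    // The client-facing max-age is bounded by the shortest-lived part.
    assert_eq!(aggregate_max_age(&parts), Some(Duration::from_secs(10)));
}
```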
diff --git a/docs/source/configuration/overview.mdx b/docs/source/configuration/overview.mdx index ff875fc4ce..03ce38c5c9 100644 --- a/docs/source/configuration/overview.mdx +++ b/docs/source/configuration/overview.mdx @@ -589,13 +589,18 @@ Starting with v1.49.0, the router can run a Rust-native query planner. This nati -Starting with v1.56.0, to run the most performant and resource-efficient native query planner and to disable the V8 JavaScript runtime in the router, set the following options in your `router.yaml`: +Starting with v1.57.0, to run the most performant and resource-efficient native query planner and to disable the V8 JavaScript runtime in the router, set the following option in your `router.yaml`: ```yaml title="router.yaml" experimental_query_planner_mode: new -experimental_introspection_mode: new ``` + + +In router v1.56, running the most performant and resource-efficient native query planner also requires setting `experimental_introspection_mode: new`. + + + You can also improve throughput by reducing the size of queries sent to subgraphs with the following option: ```yaml title="router.yaml" @@ -639,20 +644,6 @@ The default value of `experimental_parallelism` is `1`. In practice, you should tune `experimental_parallelism` based on metrics and benchmarks gathered from your router. - - -### Introspection response caching - - - -Introspection responses are generated by the query planner for now, so they are expensive to execute and the router stores them in its query planner cache. Unfortunately, they can fill up the cache, so until we move out introspection execution, there is an option to deactivate response caching. 
- -```yaml title="router.yaml" -supergraph: - query_planning: - legacy_introspection_caching: false -``` - ### Enhanced operation signature normalization diff --git a/docs/source/configuration/telemetry/instrumentation/selectors.mdx b/docs/source/configuration/telemetry/instrumentation/selectors.mdx index d63e9003a3..5a49a7223d 100644 --- a/docs/source/configuration/telemetry/instrumentation/selectors.mdx +++ b/docs/source/configuration/telemetry/instrumentation/selectors.mdx @@ -35,6 +35,7 @@ The router service is the initial entrypoint for all requests. It is HTTP centri | `operation_name` | Yes | `string`\|`hash` | The operation name from the query | | `studio_operation_id` | Yes | `true`\|`false` | The Apollo Studio operation id | | `request_header` | Yes | | The name of the request header | +| `request_context` | Yes | | The name of a request context key | | `response_header` | Yes | | The name of a response header | | `response_status` | Yes | `code`\|`reason` | The response status | | `response_context` | Yes | | The name of a response context key | diff --git a/docs/source/configuration/traffic-shaping.mdx b/docs/source/configuration/traffic-shaping.mdx index c6d027fbca..adfbe3f8ad 100644 --- a/docs/source/configuration/traffic-shaping.mdx +++ b/docs/source/configuration/traffic-shaping.mdx @@ -34,6 +34,7 @@ traffic_shaping: retry_percent: 0.2 # defines the proportion of available retries to the current number of tokens retry_mutations: false # allows retries on mutations. This should only be enabled if mutations are idempotent experimental_http2: enable # Configures HTTP/2 usage. Can be 'enable' (default), 'disable' or 'http2only' + dns_resolution_strategy: ipv4_then_ipv6 # Changes DNS resolution strategy for subgraph. 
``` ### Preset values @@ -42,6 +43,7 @@ The preset values of `traffic_shaping` that's enabled by default: - `timeout: 30s` for all timeouts - `experimental_http2: enable` +- `dns_resolution_strategy: ipv4_then_ipv6` ## Client side traffic shaping @@ -186,6 +188,23 @@ traffic_shaping: +### DNS resolution strategy + +You can also change the DNS resolution strategy applied to the subgraph's URL: +```yaml title="router.yaml" +traffic_shaping: + all: + dns_resolution_strategy: ipv4_then_ipv6 + +``` + +Possible strategies are: +* `ipv4_only` - Only query for `A` (IPv4) records. +* `ipv6_only` - Only query for `AAAA` (IPv6) records. +* `ipv4_and_ipv6` - Query for both `A` (IPv4) and `AAAA` (IPv6) records in parallel. +* `ipv6_then_ipv4` - Query for `AAAA` (IPv6) records first; if that fails, query for `A` (IPv4) records. +* `ipv4_then_ipv6` (default) - Query for `A` (IPv4) records first; if that fails, query for `AAAA` (IPv6) records. + ### Ordering Traffic shaping always executes these steps in the same order, to ensure consistent behaviour. Declaration order in the configuration will not affect the runtime order: diff --git a/docs/source/customizations/coprocessor.mdx b/docs/source/customizations/coprocessor.mdx index 935468ce9c..f52f5bb4a8 100644 --- a/docs/source/customizations/coprocessor.mdx +++ b/docs/source/customizations/coprocessor.mdx @@ -152,6 +152,22 @@ coprocessor: ``` +You can also change the DNS resolution strategy applied to the coprocessor's URL: +```yaml title="router.yaml" +coprocessor: + url: http://coprocessor.example.com:8081 + client: + dns_resolution_strategy: ipv4_then_ipv6 + +``` + +Possible strategies are: +* `ipv4_only` - Only query for `A` (IPv4) records. +* `ipv6_only` - Only query for `AAAA` (IPv6) records. +* `ipv4_and_ipv6` - Query for both `A` (IPv4) and `AAAA` (IPv6) records in parallel. +* `ipv6_then_ipv4` - Query for `AAAA` (IPv6) records first; if that fails, query for `A` (IPv4) records. 
+* `ipv4_then_ipv6`(default) - Query for `A` (IPv4) records first; if that fails, query for `AAAA` (IPv6) records. + ## Coprocessor request format The router communicates with your coprocessor via HTTP POST requests (called **coprocessor requests**). The body of each coprocessor request is a JSON object with properties that describe either the current client request or the current router response. @@ -437,7 +453,7 @@ Properties of the JSON body are divided into two high-level categories: }, "serviceName": "service name shouldn't change", "uri": "http://thisurihaschanged", - "query_plan": { + "queryPlan": { "usage_reporting":{"statsReportKey":"# Me\nquery Me{me{name username}}","referencedFieldsByType":{"User":{"fieldNames":["name","username"],"isInterface":false},"Query":{"fieldNames":["me"],"isInterface":false}}}, "root":{ "kind":"Fetch", @@ -504,6 +520,7 @@ Properties of the JSON body are divided into two high-level categories: "stage": "SubgraphRequest", "control": "continue", "id": "666d677225c1bc6d7c54a52b409dbd4e", + "subgraphRequestId": "b5964998b2394b64a864ef802fb5a4b3", // Data properties "headers": {}, @@ -580,6 +597,7 @@ Properties of the JSON body are divided into two high-level categories: "version": 1, "stage": "SubgraphResponse", "id": "b7810c6f7f95640fd6c6c8781e3953c0", + "subgraphRequestId": "b5964998b2394b64a864ef802fb5a4b3", "control": "continue", // Data properties @@ -758,6 +776,23 @@ A unique ID corresponding to the client request associated with this coprocessor +##### `subgraphRequestId` + +`string` + + + + +A unique ID corresponding to the subgraph request associated with this coprocessor request (only available at the `SubgraphRequest` and `SubgraphResponse` stages). + +**Do not return a _different_ value for this property.** If you do, the router treats the coprocessor request as if it failed. 
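As a rough sketch of the `ipv4_then_ipv6` strategy listed above: prefer `A` records, and fall back to `AAAA` records only when no IPv4 address resolves. The function name is ours and the blocking std resolver is used purely for illustration; the router's actual resolver is asynchronous.

```rust
use std::io;
use std::net::{IpAddr, ToSocketAddrs};

// Simplified sketch of `ipv4_then_ipv6` (illustrative, not router code):
// return IPv4 addresses when any resolve, otherwise fall back to IPv6.
fn resolve_ipv4_then_ipv6(host_port: &str) -> io::Result<Vec<IpAddr>> {
    let all: Vec<IpAddr> = host_port.to_socket_addrs()?.map(|addr| addr.ip()).collect();
    let (v4, v6): (Vec<IpAddr>, Vec<IpAddr>) = all.into_iter().partition(IpAddr::is_ipv4);
    if !v4.is_empty() { Ok(v4) } else { Ok(v6) }
}

fn main() {
    // A literal address resolves without hitting DNS.
    let ips = resolve_ipv4_then_ipv6("127.0.0.1:4000").unwrap();
    assert!(ips.iter().all(|ip| ip.is_ipv4()));
}
```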
+ + + + + + + ##### `stage` `string` diff --git a/docs/source/customizations/overview.mdx b/docs/source/customizations/overview.mdx index 74a318dbc7..18e68103c2 100644 --- a/docs/source/customizations/overview.mdx +++ b/docs/source/customizations/overview.mdx @@ -209,7 +209,7 @@ httpServer --"14. HTTP response" --> client For simplicity's sake, the preceding diagrams show the request and response sides separately and sequentially. In reality, some requests and responses may happen simultaneously and repeatedly. -For example, `SubgraphRequest`s can happen both in parallel _and_ in sequence: one subgraph's response may be necessary for another's `SubgraphRequest`. (The query planner decides which requests can happen in parallel vs. which need to happen in sequence.) +For example, `SubgraphRequest`s can happen both in parallel _and_ in sequence: one subgraph's response may be necessary for another's `SubgraphRequest`. (The query planner decides which requests can happen in parallel vs. which need to happen in sequence.) To match subgraph requests to responses in customizations, the router exposes a `subgraph_request_id` field that holds the same value in paired requests and responses. ##### Requests run in parallel diff --git a/docs/source/customizations/rhai-api.mdx b/docs/source/customizations/rhai-api.mdx index b75824048d..a881d3f0ff 100644 --- a/docs/source/customizations/rhai-api.mdx +++ b/docs/source/customizations/rhai-api.mdx @@ -383,6 +383,7 @@ request.body.variables request.body.extensions request.uri.host request.uri.path +request.uri.port ``` @@ -401,8 +402,11 @@ request.subgraph.body.variables request.subgraph.body.extensions request.subgraph.uri.host request.subgraph.uri.path +request.subgraph.uri.port ``` + +**For `subgraph_service` callbacks only,** the `request` object provides the non-modifiable field `request.subgraph_request_id`, a unique ID corresponding to the subgraph request. 
+ + ### `request.context` The context is a generic key/value store that exists for the entire lifespan of a particular client request. You can use this to share information between multiple callbacks throughout the request's lifespan. @@ -547,6 +551,17 @@ print(`${request.uri.path}`); // log the request path request.uri.path += "/added-context"; // Add an extra element to the query path ``` +### `request.uri.port` + +This is the port component of the request's URI, as an integer. If no port is explicitly defined in the URI, this value is empty. + +Modifying this value for a client request has no effect, because the request has already reached the router. However, modifying `request.subgraph.uri.port` in a `subgraph_service` callback _does_ modify the URI that the router uses to communicate with the corresponding subgraph. + +```rhai +print(`${request.uri.port}`); // log the request port +request.uri.port = 4040; // change the port to 4040 +``` + ### `request.subgraph.*` The `request.subgraph` object is available _only_ for `map_request` callbacks registered in `subgraph_service`. This object has the exact same fields as `request` itself, but these fields apply to the HTTP request that the router will send to the corresponding subgraph. @@ -581,6 +596,8 @@ response.body.extensions All of the above fields are read/write. + +**For `subgraph_service` callbacks only,** the `response` object provides the non-modifiable field `response.subgraph_request_id`, a unique ID corresponding to the subgraph request: the same ID that is available on the request side. 
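A minimal sketch of the per-subgraph-request state-keeping this ID enables (the struct and map are illustrative, not a router API): store state under `request.subgraph_request_id` on the way out, then look it up again when a response arrives with the same `response.subgraph_request_id`.

```rust
use std::collections::HashMap;

// Illustrative per-request state a plugin or coprocessor might keep;
// the struct and map are ours, not part of the router's API.
#[derive(Debug, PartialEq)]
struct SubgraphCallState {
    attempt: u32,
}

fn main() {
    let mut in_flight: HashMap<String, SubgraphCallState> = HashMap::new();

    // Request side: store state keyed by the subgraph request id.
    let id = "b5964998b2394b64a864ef802fb5a4b3".to_string();
    in_flight.insert(id.clone(), SubgraphCallState { attempt: 1 });

    // Response side: the response carries the same id, so the stored
    // state can be retrieved and removed.
    let state = in_flight.remove(&id).expect("ids match across request/response");
    assert_eq!(state, SubgraphCallState { attempt: 1 });
}
```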
+ The following fields are identical in behavior to their `request` counterparts: * [`context`](#requestcontext) diff --git a/docs/source/executing-operations/native-query-planner.mdx b/docs/source/executing-operations/native-query-planner.mdx index 2ee1263c36..c40a3d89ad 100644 --- a/docs/source/executing-operations/native-query-planner.mdx +++ b/docs/source/executing-operations/native-query-planner.mdx @@ -26,20 +26,19 @@ The `experimental_query_planner_mode` option has the following supported modes: - `legacy` - enables only the legacy JavaScript query planner - `both_best_effort` (default) - enables both new and legacy query planners for comparison. The legacy query planner is used for execution. - + ## Optimize native query planner -To run the native query planner with the best performance and resource utilization, configure your router with the following options: +To run the native query planner with the best performance and resource utilization, configure your router with the following option: ```yaml title="router.yaml" experimental_query_planner_mode: new -experimental_introspection_mode: new ``` -Setting `experimental_query_planner_mode: new` and `experimental_introspection_mode: new` not only enables native query planning and schema introspection, it also disables the V8 JavaScript runtime used by the legacy query planner. Disabling V8 frees up CPU and memory and improves native query planning performance. +Setting `experimental_query_planner_mode: new` not only enables native query planning, it also disables the V8 JavaScript runtime used by the legacy query planner. Disabling V8 frees up CPU and memory and improves native query planning performance. 
Additionally, to enable more optimal native query planning and faster throughput by reducing the size of queries sent to subgraphs, you can enable query fragment generation with the following option: diff --git a/docs/source/federation-version-support.mdx b/docs/source/federation-version-support.mdx index defc58f1c4..e7d7a4aa98 100644 --- a/docs/source/federation-version-support.mdx +++ b/docs/source/federation-version-support.mdx @@ -37,7 +37,15 @@ The table below shows which version of federation each router release is compile - v1.56.0 and later (see latest releases) + v1.57.0 and later (see latest releases) + + + 2.9.3 + + + + + v1.56.0 2.9.2 diff --git a/helm/chart/router/templates/deployment.yaml b/helm/chart/router/templates/deployment.yaml index 19557a6b5e..5fc7c4619a 100644 --- a/helm/chart/router/templates/deployment.yaml +++ b/helm/chart/router/templates/deployment.yaml @@ -72,8 +72,10 @@ spec: - /app/configuration.yaml {{- end }} {{- end }} - {{- if or .Values.managedFederation.apiKey .Values.managedFederation.existingSecret .Values.managedFederation.graphRef .Values.extraEnvVars }} env: + - name: APOLLO_ROUTER_OFFICIAL_HELM_CHART + value: "true" + {{- if or .Values.managedFederation.apiKey .Values.managedFederation.existingSecret .Values.managedFederation.graphRef .Values.extraEnvVars }} {{- if or .Values.managedFederation.apiKey .Values.managedFederation.existingSecret }} - name: APOLLO_KEY valueFrom: diff --git a/licenses.html b/licenses.html index 46749816d5..01b9b1d8ef 100644 --- a/licenses.html +++ b/licenses.html @@ -45,7 +45,7 @@

Third Party Licenses

Overview of licenses:

                                 Apache License
@@ -4446,6 +4444,217 @@ 

Used by:

WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. +
+ +
  • +

    Apache License 2.0

    +

    Used by:

    + +
                                     Apache License
    +                           Version 2.0, January 2004
    +                        https://www.apache.org/licenses/
    +
    +   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
    +
    +   1. Definitions.
    +
    +      "License" shall mean the terms and conditions for use, reproduction,
    +      and distribution as defined by Sections 1 through 9 of this document.
    +
    +      "Licensor" shall mean the copyright owner or entity authorized by
    +      the copyright owner that is granting the License.
    +
    +      "Legal Entity" shall mean the union of the acting entity and all
    +      other entities that control, are controlled by, or are under common
    +      control with that entity. For the purposes of this definition,
    +      "control" means (i) the power, direct or indirect, to cause the
    +      direction or management of such entity, whether by contract or
    +      otherwise, or (ii) ownership of fifty percent (50%) or more of the
    +      outstanding shares, or (iii) beneficial ownership of such entity.
    +
    +      "You" (or "Your") shall mean an individual or Legal Entity
    +      exercising permissions granted by this License.
    +
    +      "Source" form shall mean the preferred form for making modifications,
    +      including but not limited to software source code, documentation
    +      source, and configuration files.
    +
    +      "Object" form shall mean any form resulting from mechanical
    +      transformation or translation of a Source form, including but
    +      not limited to compiled object code, generated documentation,
    +      and conversions to other media types.
    +
    +      "Work" shall mean the work of authorship, whether in Source or
    +      Object form, made available under the License, as indicated by a
    +      copyright notice that is included in or attached to the work
    +      (an example is provided in the Appendix below).
    +
    +      "Derivative Works" shall mean any work, whether in Source or Object
    +      form, that is based on (or derived from) the Work and for which the
    +      editorial revisions, annotations, elaborations, or other modifications
    +      represent, as a whole, an original work of authorship. For the purposes
    +      of this License, Derivative Works shall not include works that remain
    +      separable from, or merely link (or bind by name) to the interfaces of,
    +      the Work and Derivative Works thereof.
    +
    +      "Contribution" shall mean any work of authorship, including
    +      the original version of the Work and any modifications or additions
    +      to that Work or Derivative Works thereof, that is intentionally
    +      submitted to Licensor for inclusion in the Work by the copyright owner
    +      or by an individual or Legal Entity authorized to submit on behalf of
    +      the copyright owner. For the purposes of this definition, "submitted"
    +      means any form of electronic, verbal, or written communication sent
    +      to the Licensor or its representatives, including but not limited to
    +      communication on electronic mailing lists, source code control systems,
    +      and issue tracking systems that are managed by, or on behalf of, the
    +      Licensor for the purpose of discussing and improving the Work, but
    +      excluding communication that is conspicuously marked or otherwise
    +      designated in writing by the copyright owner as "Not a Contribution."
    +
    +      "Contributor" shall mean Licensor and any individual or Legal Entity
    +      on behalf of whom a Contribution has been received by Licensor and
    +      subsequently incorporated within the Work.
    +
    +   2. Grant of Copyright License. Subject to the terms and conditions of
    +      this License, each Contributor hereby grants to You a perpetual,
    +      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
    +      copyright license to reproduce, prepare Derivative Works of,
    +      publicly display, publicly perform, sublicense, and distribute the
    +      Work and such Derivative Works in Source or Object form.
    +
    +   3. Grant of Patent License. Subject to the terms and conditions of
    +      this License, each Contributor hereby grants to You a perpetual,
    +      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
    +      (except as stated in this section) patent license to make, have made,
    +      use, offer to sell, sell, import, and otherwise transfer the Work,
    +      where such license applies only to those patent claims licensable
    +      by such Contributor that are necessarily infringed by their
    +      Contribution(s) alone or by combination of their Contribution(s)
    +      with the Work to which such Contribution(s) was submitted. If You
    +      institute patent litigation against any entity (including a
    +      cross-claim or counterclaim in a lawsuit) alleging that the Work
    +      or a Contribution incorporated within the Work constitutes direct
    +      or contributory patent infringement, then any patent licenses
    +      granted to You under this License for that Work shall terminate
    +      as of the date such litigation is filed.
    +
    +   4. Redistribution. You may reproduce and distribute copies of the
    +      Work or Derivative Works thereof in any medium, with or without
    +      modifications, and in Source or Object form, provided that You
    +      meet the following conditions:
    +
    +      (a) You must give any other recipients of the Work or
    +          Derivative Works a copy of this License; and
    +
    +      (b) You must cause any modified files to carry prominent notices
    +          stating that You changed the files; and
    +
    +      (c) You must retain, in the Source form of any Derivative Works
    +          that You distribute, all copyright, patent, trademark, and
    +          attribution notices from the Source form of the Work,
    +          excluding those notices that do not pertain to any part of
    +          the Derivative Works; and
    +
    +      (d) If the Work includes a "NOTICE" text file as part of its
    +          distribution, then any Derivative Works that You distribute must
    +          include a readable copy of the attribution notices contained
    +          within such NOTICE file, excluding those notices that do not
    +          pertain to any part of the Derivative Works, in at least one
    +          of the following places: within a NOTICE text file distributed
    +          as part of the Derivative Works; within the Source form or
    +          documentation, if provided along with the Derivative Works; or,
    +          within a display generated by the Derivative Works, if and
    +          wherever such third-party notices normally appear. The contents
    +          of the NOTICE file are for informational purposes only and
    +          do not modify the License. You may add Your own attribution
    +          notices within Derivative Works that You distribute, alongside
    +          or as an addendum to the NOTICE text from the Work, provided
    +          that such additional attribution notices cannot be construed
    +          as modifying the License.
    +
    +      You may add Your own copyright statement to Your modifications and
    +      may provide additional or different license terms and conditions
    +      for use, reproduction, or distribution of Your modifications, or
    +      for any such Derivative Works as a whole, provided Your use,
    +      reproduction, and distribution of the Work otherwise complies with
    +      the conditions stated in this License.
    +
    +   5. Submission of Contributions. Unless You explicitly state otherwise,
    +      any Contribution intentionally submitted for inclusion in the Work
    +      by You to the Licensor shall be under the terms and conditions of
    +      this License, without any additional terms or conditions.
    +      Notwithstanding the above, nothing herein shall supersede or modify
    +      the terms of any separate license agreement you may have executed
    +      with Licensor regarding such Contributions.
    +
    +   6. Trademarks. This License does not grant permission to use the trade
    +      names, trademarks, service marks, or product names of the Licensor,
    +      except as required for reasonable and customary use in describing the
    +      origin of the Work and reproducing the content of the NOTICE file.
    +
    +   7. Disclaimer of Warranty. Unless required by applicable law or
    +      agreed to in writing, Licensor provides the Work (and each
    +      Contributor provides its Contributions) on an "AS IS" BASIS,
    +      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
    +      implied, including, without limitation, any warranties or conditions
    +      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
    +      PARTICULAR PURPOSE. You are solely responsible for determining the
    +      appropriateness of using or redistributing the Work and assume any
    +      risks associated with Your exercise of permissions under this License.
    +
    +   8. Limitation of Liability. In no event and under no legal theory,
    +      whether in tort (including negligence), contract, or otherwise,
    +      unless required by applicable law (such as deliberate and grossly
    +      negligent acts) or agreed to in writing, shall any Contributor be
    +      liable to You for damages, including any direct, indirect, special,
    +      incidental, or consequential damages of any character arising as a
    +      result of this License or out of the use or inability to use the
    +      Work (including but not limited to damages for loss of goodwill,
    +      work stoppage, computer failure or malfunction, or any and all
    +      other commercial damages or losses), even if such Contributor
    +      has been advised of the possibility of such damages.
    +
    +   9. Accepting Warranty or Additional Liability. While redistributing
    +      the Work or Derivative Works thereof, You may choose to offer,
    +      and charge a fee for, acceptance of support, warranty, indemnity,
    +      or other liability obligations and/or rights consistent with this
    +      License. However, in accepting such obligations, You may act only
    +      on Your own behalf and on Your sole responsibility, not on behalf
    +      of any other Contributor, and only if You agree to indemnify,
    +      defend, and hold each Contributor harmless for any liability
    +      incurred by, or claims asserted against, such Contributor by reason
    +      of your accepting any such warranty or additional liability.
    +
    +   END OF TERMS AND CONDITIONS
    +
    +   APPENDIX: How to apply the Apache License to your work.
    +
    +      To apply the Apache License to your work, attach the following
    +      boilerplate notice, with the fields enclosed by brackets "{}"
    +      replaced with your own identifying information. (Don't include
    +      the brackets!)  The text should be enclosed in the appropriate
    +      comment syntax for the file format. We also recommend that a
    +      file or class name and description of purpose be included on the
    +      same "printed page" as the copyright notice for easier
    +      identification within third-party archives.
    +
    +   Copyright {yyyy} {name of copyright owner}
    +
    +   Licensed under the Apache License, Version 2.0 (the "License");
    +   you may not use this file except in compliance with the License.
    +   You may obtain a copy of the License at
    +
    +       https://www.apache.org/licenses/LICENSE-2.0
    +
    +   Unless required by applicable law or agreed to in writing, software
    +   distributed under the License is distributed on an "AS IS" BASIS,
    +   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    +   See the License for the specific language governing permissions and
    +   limitations under the License.
    +
     
  • @@ -10650,7 +10859,8 @@

    Used by:

    Apache License 2.0

    Used by:

    ../../LICENSE-APACHE
  • @@ -11301,8 +11511,7 @@

    Used by:

    Apache License 2.0

    Used by:

    -apollo-compiler
    -apollo-smith
    +apollo-parser
    async-graphql-axum
    async-graphql-derive
    async-graphql-parser
    • @@ -13513,33 +13722,6 @@

      OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
      -
      -
      -MIT License
      -
      -Used by:
      -
      -Copyright 2017-2019 Florent Fayolle, Valentin Lorentz
      -
      -Permission is hereby granted, free of charge, to any person obtaining a copy of
      -this software and associated documentation files (the "Software"), to deal in
      -the Software without restriction, including without limitation the rights to
      -use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
      -of the Software, and to permit persons to whom the Software is furnished to do
      -so, subject to the following conditions:
      -
      -The above copyright notice and this permission notice shall be included in all
      -copies or substantial portions of the Software.
      -
      -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
      -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
      -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
      -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
      -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
      -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
      -SOFTWARE.
    • @@ -14348,35 +14530,6 @@

      LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
      -
      -
      -MIT License
      -
      -Used by:
      -
      -MIT License
      -
      -Copyright (c) 2020 Nicholas Fleck
      -
      -Permission is hereby granted, free of charge, to any person obtaining a copy
      -of this software and associated documentation files (the "Software"), to deal
      -in the Software without restriction, including without limitation the rights
      -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
      -copies of the Software, and to permit persons to whom the Software is
      -furnished to do so, subject to the following conditions:
      -
      -The above copyright notice and this permission notice shall be included in all
      -copies or substantial portions of the Software.
      -
      -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
      -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
      -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
      -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
      -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
      -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
      -SOFTWARE.
      -
      MIT License