Add processors to autoscaling capacity response #87895

Merged

Conversation

benwtrent
Member

This adds processors to the autoscaling capacity response.

The `processors` value in the current capacity calculation is determined by the `allocated_processors` field in `OsInfo`. The reason is that `allocated_processors` can be overridden by the process starting the node. Consequently, when autoscaling deciders look at the current processors and request more, they should look at `allocated_processors`, not `available_processors`. This is mostly because certain systems may want to account for threads used by external processes and not allow ES to use all the processors available on the OS.
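To illustrate the distinction this relies on, here is a minimal standalone sketch (plain Java, not Elasticsearch code; the override parameter merely stands in for whatever mechanism, such as a `node.processors`-style setting, the launching process uses to cap the node):

```java
import java.util.OptionalDouble;

// Minimal sketch: "available" is what the JVM sees on the host, while "allocated"
// is whatever the node was actually granted by the process that started it.
// Autoscaling should reason about the allocated value when reporting current capacity.
public class ProcessorCapacityExample {

    static double currentProcessorCapacity(OptionalDouble allocatedOverride) {
        double available = Runtime.getRuntime().availableProcessors();
        // Fall back to the host count only when no explicit allocation was made.
        return allocatedOverride.orElse(available);
    }

    public static void main(String[] args) {
        System.out.println("no override: " + currentProcessorCapacity(OptionalDouble.empty()));
        System.out.println("capped at 2: " + currentProcessorCapacity(OptionalDouble.of(2.0)));
    }
}
```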

@elasticmachine added the Team:Distributed (Obsolete) label Jun 21, 2022
@elasticmachine
Collaborator

Pinging @elastic/es-distributed (Team:Distributed)

@elasticsearchmachine
Collaborator

Hi @benwtrent, I've created a changelog YAML for you.

@benwtrent
Member Author

Verified that response values are correct given the current capacities for various tiers. Responses containing current capacity look like:

 "current_capacity": {
        "node": {
          "storage": 128849018880,
          "memory": 4294967296,
          "processors": 2
        },
        "total": {
          "storage": 257698037760,
          "memory": 8589934592,
          "processors": 4
        }
      }

This matches up with true node values based on OsInfo.
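As a quick sanity check (assuming the tier shown here has two nodes, which the doubled totals suggest), the totals are simply the per-node values scaled by node count:

```java
// Hypothetical sanity check of the capacity response above; assumes a two-node tier.
public class CapacityTotalsCheck {
    public static void main(String[] args) {
        long nodeStorage = 128_849_018_880L; // 120 GiB per node
        long nodeMemory = 4_294_967_296L;    // 4 GiB per node
        int nodeProcessors = 2;
        int nodeCount = 2;                   // assumption: two nodes in this tier

        System.out.println(nodeStorage * nodeCount == 257_698_037_760L);  // true
        System.out.println(nodeMemory * nodeCount == 8_589_934_592L);     // true
        System.out.println(nodeProcessors * nodeCount == 4);              // true
    }
}
```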

@benwtrent requested review from henningandersen and droberts195 and removed request for henningandersen June 21, 2022 18:44
Contributor

@droberts195 left a comment


LGTM


- public static final AutoscalingResources ZERO = new AutoscalingResources(new ByteSizeValue(0), new ByteSizeValue(0));
+ public static final AutoscalingResources ZERO = new AutoscalingResources(new ByteSizeValue(0), new ByteSizeValue(0), 0);
Contributor

nit: since you're changing this line anyway you could change new ByteSizeValue(0) to ByteSizeValue.ZERO.
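For reference, the nit applied would look roughly like this (a sketch only; `ByteSizeValue.ZERO` is an existing constant, but the exact final line is the author's call):

```java
public static final AutoscalingResources ZERO = new AutoscalingResources(ByteSizeValue.ZERO, ByteSizeValue.ZERO, 0);
```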

private volatile TimeValue fetchTimeout;

private final Client client;
private final Object mutex = new Object();

@Inject
- public AutoscalingMemoryInfoService(ClusterService clusterService, Client client) {
+ public AutoscalingMemoryAndProcessorInfoService(ClusterService clusterService, Client client) {
Contributor

Not something for this PR, but this class is doing the same thing as https://github.com/elastic/elasticsearch/pull/87366/files#diff-d71d08cc6d198f07a28ee2e1179053197941de85851c6797f30a516af523d158R725-R726 in a different way. So we have two different plugins that both want to know how much memory and how many processors other nodes have and are getting that information in two completely different ways. It might be nice to consolidate this into core server at some point in the future. If each node provided its memory and processors when joining the cluster then it could be available in cluster state - effectively like what ML is doing with node attributes but integrated as a first-class feature.

Contributor

@henningandersen left a comment


I would like to see a REST (yml) test added as well, validating both that we see processors in the current capacity output and that the fixed decider can produce a processors output; see "get_autoscaling_capacity.yml".

Left a number of other detailed comments. The naming is tricky, but it's just a name, of course... Otherwise looking good.

@sethmlarson added the Team:Clients label Jun 22, 2022
@elasticmachine
Collaborator

Pinging @elastic/clients-team (Team:Clients)

Contributor

@henningandersen left a comment


LGTM

* @param memory node total memory
* @param processors allocated processors
*/
public record NodeInfo(long memory, float processors) {
Contributor

NodeInfo is fine, but I'd prefer to add the Autoscaling prefix like on the service. The existing AutoscalingNodeInfo can become AutoscalingNodeInfos or AutoscalingNodesInfo instead.
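In other words, the suggestion would land somewhere like this (hypothetical final names, sketched only to show the intent of the rename):

```java
// The new per-node record carries the Autoscaling prefix, matching the service name...
public record AutoscalingNodeInfo(long memory, float processors) {}

// ...while the existing holder of per-node information moves to a plural name,
// e.g. AutoscalingNodesInfo (or AutoscalingNodeInfos).
```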

Comment on lines 133 to 137
"fixed storage [%s] memory [%s] nodes [%d]%s",
storage,
memory,
nodes,
Optional.ofNullable(processors).map(i -> " processors [" + i + "]").orElse("")
Contributor

Let us keep nodes at the end instead. There is no API compatibility guarantee on this message.

Suggested change
-             "fixed storage [%s] memory [%s] nodes [%d]%s",
-             storage,
-             memory,
-             nodes,
-             Optional.ofNullable(processors).map(i -> " processors [" + i + "]").orElse("")
+             "fixed storage [%s] memory [%s] processors [%s] nodes [%d]",
+             storage,
+             memory,
+             processors,
+             nodes
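With the suggested ordering, the reason renders as a single line, e.g. (illustrative values only, using plain Java formatting the way the surrounding code presumably does):

```java
String reason = String.format(
    java.util.Locale.ROOT,
    "fixed storage [%s] memory [%s] processors [%s] nodes [%d]",
    "1tb", "32gb", 8.0f, 3
);
// -> fixed storage [1tb] memory [32gb] processors [8.0] nodes [3]
```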

Member Author

@benwtrent Jun 27, 2022

@henningandersen this technically breaks our REST compatibility as the reason for old API calls is now different.

Contributor

It is a change to a message field, like a log message output. We make many such changes. The individual fields are available for direct consumption too; there is no need for clients to parse this field.

Another example is allocation explain, where we have refined the messaging several times without calling that a breaking change.

Member Author

The REST tests probably shouldn't do an exact string match on this field anyway... I will mute them for this PR and update.

@benwtrent
Member Author

@elasticmachine update branch

@benwtrent
Member Author

@elasticmachine update branch

@benwtrent merged commit 164decb into elastic:master Jun 28, 2022
@benwtrent deleted the feature/add-processors-to-autoscaling-api branch June 28, 2022 14:46
henningandersen added a commit to henningandersen/elasticsearch that referenced this pull request Jul 4, 2022
Add documentation for new `processors` response in autoscaling capacity API.

Relates elastic#87895
henningandersen added a commit that referenced this pull request Jul 5, 2022
Add documentation for new `processors` response in autoscaling capacity API.

Relates #87895
Labels
cloud-deploy (Publish cloud docker image for Cloud-First-Testing), :Distributed Coordination/Autoscaling, >enhancement, Team:Clients (Meta label for clients team), Team:Distributed (Obsolete) (Meta label for distributed team, obsolete; replaced by Distributed Indexing/Coordination), v8.4.0
7 participants