[Orleans] Silo Metadata and Placement Filtering #44187

110 changes: 110 additions & 0 deletions docs/orleans/grains/grain-placement-filtering.md
---
title: Grain placement filtering
description: Learn about grain placement filtering in .NET Orleans.
ms.date: 01/08/2025
---

# Grain Placement Filtering

## Overview

Placement filtering in Orleans gives developers additional control over where grains are placed within a cluster. It works in conjunction with placement strategies, adding a filtering layer that determines the candidate silos for grain activation.

This filtering takes place before candidate silos are passed to the configured placement strategy, allowing for more flexibility and reuse of filters. For example, the existing `PreferLocalPlacement` strategy is hard-coded to fall back to random placement if the local silo is unable to host the grain type. With filters, a `PreferLocalPlacementFilter` could instead narrow the candidates to either the local silo or all compatible silos, and then any placement strategy (random, resource-optimized, activation-count-based, and so on) could be configured for that grain type. This allows any combination of filters and placement strategy to be configured for a grain type.

---

## How Placement Filtering Works

Placement filtering operates as an additional step in the grain placement process. After all compatible silos for a grain type are identified, any placement filters configured for that type are applied in turn, refining the selection by eliminating silos that don't meet the defined criteria.

### Ordering

Filters that run in different orders can produce different results, so explicit ordering is required whenever two or more filters are defined on a type. Configure this with the `order:` parameter, because the type metadata read at runtime may return a type's attributes in a different order than they appear in source code. Order values must be unique so that an unambiguous ordering can be determined.
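For example, when two filters are applied to the same grain class, each needs a distinct `order:` value. The filter attribute names below are hypothetical, for illustration only:

```csharp
// Hypothetical custom filters; the order values must be unique so that the
// region filter (order: 1) always runs before the rack filter (order: 2).
[RegionPlacementFilter(order: 1)]
[RackPlacementFilter(order: 2)]
public class OrderedGrain : Grain, IOrderedGrain
{
}
```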

---

## Built-in Filters

### Silo Metadata

These filters work with [*Silo Metadata*](../grains/silo-metadata.md) to filter candidate silos.

#### RequiredMatchSiloMetadata

Silo metadata is used to filter candidate silos to only those that match all of the specified metadata keys with the calling silo. If no compatible silos match all of the keys, an empty set of silos is returned and placement ultimately fails for the grain.
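As a sketch, the attribute could be applied like this. The grain and interface names are illustrative, and the parameter shape follows the `PreferredMatchSiloMetadata` example below:

```csharp
// Place this grain only on silos whose "cloud.region" metadata value
// matches the calling silo's value; placement fails if no silo matches.
[RequiredMatchSiloMetadata(["cloud.region"])]
public class RegionLockedGrain : Grain, IRegionLockedGrain
{
}
```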

#### PreferredMatchSiloMetadata

This filter attempts to narrow the candidates to only those silos that match all of the configured metadata keys with the calling silo. However, instead of returning an empty set when there are no matches, as the `RequiredMatchSiloMetadata` filter does, it falls back to partial matches: the first configured metadata key is dropped and a match is attempted against the remaining keys. This continues, dropping leading keys, until a sufficient number of candidates are found. If no compatible silos match *any* of the metadata keys, all of the candidate silos are returned.

The `minCandidates` value configures how many candidates must be found before the filtering process stops. It prevents a single silo from being quickly overloaded when it would be the only match.

For example, suppose a grain is filtered with `[PreferredMatchSiloMetadata(["cloud.availability-zone", "cloud.region"], minCandidates:2)]` and only one silo matches on both `cloud.availability-zone` and `cloud.region`. Without the minimum, every activation would land on that one silo, and it's often undesirable to concentrate activations (or scheduling in general) on a single target. With `minCandidates: 2`, matching on both keys fails because only one silo matches both, so the filter falls back to matching only on `cloud.region`. If two or more silos match that key, those are returned; otherwise, all of the candidates are returned. Note that this setting is a minimum; more candidates can be returned. If you prefer most-specific matching only, setting it to `1` returns just the best match (the single silo in this scenario). That can be preferable in specific use cases with low activation throughput where there is a high penalty for moving from a more specific match to a less specific one. In general use, keep the default value of `2` (it doesn't need to be specified in the attribute).

---

## Implementing Placement Filters

To implement a custom Placement Filter in Orleans, follow these steps:

1. **Implementation**
   - Create a marker attribute derived from `PlacementFilterAttribute`.
   - Create a strategy derived from `PlacementFilterStrategy` to manage any configuration values.
   - Create a director derived from `IPlacementFilterDirector` that contains the filtering logic.
   - Define the filtering logic in the director's `Filter` method, which takes a list of candidate silos and returns a filtered list.

2. **Register the Filter**
   - Call `AddPlacementFilter` to register the strategy and corresponding director.

3. **Apply the Filter**
   - Add the attribute to a grain class to apply the filter.

Here's an example of a simple custom placement filter. It behaves much like `[PreferLocalPlacement]` without any filter, but has the advantage of working with any placement strategy. Whereas `PreferLocalPlacement` falls back to random placement if the local silo is unable to host a grain, this example configures `ActivationCountBasedPlacement`. Any other placement strategy could similarly be used with this filter.

```csharp
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class ExamplePreferLocalPlacementFilterAttribute(int order)
: PlacementFilterAttribute(new ExamplePreferLocalPlacementFilterStrategy(order));
```

```csharp
public class ExamplePreferLocalPlacementFilterStrategy(int order) : PlacementFilterStrategy(order);
```

```csharp
internal class ExamplePreferLocalPlacementFilterDirector(ILocalSiloDetails localSiloDetails)
: IPlacementFilterDirector
{
public IEnumerable<SiloAddress> Filter(PlacementFilterStrategy filterStrategy, PlacementTarget target, IEnumerable<SiloAddress> silos)
{
var siloList = silos.ToList();
var localSilo = siloList.FirstOrDefault(s => s == localSiloDetails.SiloAddress);
if (localSilo is not null)
{
return [localSilo];
}
return siloList;
}
}
```

After implementing this filter, it can be registered and applied to grains.

```csharp
builder.ConfigureServices(services =>
{
services.AddPlacementFilter<ExamplePreferLocalPlacementFilterStrategy, ExamplePreferLocalPlacementFilterDirector>();
});
```

```csharp
[ExamplePreferLocalPlacementFilter(order: 1)]
[ActivationCountBasedPlacement]
public class MyGrain : Grain, IMyGrain
{
    // ...
}
```

---
121 changes: 121 additions & 0 deletions docs/orleans/grains/silo-metadata.md
---
title: Silo metadata
description: Learn about silo metadata in .NET Orleans.
ms.date: 01/08/2025
---

# Silo Metadata

Silo Metadata is a new feature in Orleans that allows developers to assign custom metadata to silos within a cluster. This metadata provides a flexible mechanism for annotating silos with descriptive information or specific capabilities.

This feature is particularly useful in scenarios where different silos have distinct roles, hardware configurations, or other unique characteristics. For example, silos can be tagged based on their region, compute power, or specialized responsibilities within the system.

Silo Metadata lays the groundwork for additional Orleans features, such as Placement Filtering.

## Key Concepts

Silo Metadata introduces a way to attach key-value pairs to silos within an Orleans cluster. This feature allows developers to configure silo-specific characteristics that can be leveraged by Orleans components.

Silo Metadata is represented as an **immutable** dictionary of key-value pairs:

- **Keys**: Strings that identify the metadata (e.g., `"cloud.region"`, `"compute.reservation.type"`).
- **Values**: Strings that describe the corresponding property (e.g., `"us-east1"`, `"spot"`).

## Configuration

Silo Metadata in Orleans can be configured using two methods: via .NET Configuration or directly in code.

### **Configuring Silo Metadata via .NET Configuration**

Silo Metadata can be defined in the application’s Configuration, such as `appsettings.json`, environment variables, or any other Configuration source.

#### Example: `appsettings.json` Configuration

```json
{
"Orleans": {
"Metadata": {
"cloud.region": "us-east1",
"compute.reservation.type": "spot",
"role": "worker"
}
}
}
```

The above configuration defines metadata for a silo, tagging it with:

- `cloud.region`: `"us-east1"`
- `compute.reservation.type`: `"spot"`
- `role`: `"worker"`

To apply this configuration, use the following setup in your silo host builder:

```csharp
var siloBuilder = new SiloHostBuilder()
// Configuration section Orleans:Metadata is used by default
.UseSiloMetadata();
```

Alternatively, an explicit `IConfiguration` or `IConfigurationSection` can be passed in to control where in configuration the metadata is pulled from.
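For example, a custom section can be used instead of the default. The section name below is illustrative:

```csharp
var siloBuilder = new SiloHostBuilder()
    // Read metadata key-value pairs from a custom configuration section
    .UseSiloMetadata(configuration.GetSection("MyApp:SiloMetadata"));
```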

---

### **Configuring Silo Metadata Directly in Code**

For scenarios requiring programmatic metadata configuration, developers can add metadata directly in the silo host builder.

#### Example: Direct Code Configuration

```csharp
var siloBuilder = new SiloHostBuilder()
.UseSiloMetadata(new Dictionary<string, string>
{
{"cloud.region", "us-east1"},
{"compute.reservation.type", "spot"},
{"role", "worker"}
});
```

This example achieves the same result as the JSON configuration but allows metadata values to be computed or loaded dynamically during silo initialization.

---

### **Merging Configurations**

If both .NET Configuration and direct code configuration are used, the direct configuration overrides any conflicting metadata values from the .NET Configuration. This allows developers to set defaults via configuration files and dynamically adjust specific metadata during runtime.
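As a sketch of this behavior, suppose the `Orleans:Metadata` configuration shown earlier sets `role` to `"worker"`, while code supplies a conflicting value. The exact way the two sources are combined here is an assumption based on the description above:

```csharp
var siloBuilder = new SiloHostBuilder()
    .UseSiloMetadata()  // defaults from the Orleans:Metadata configuration
    .UseSiloMetadata(new Dictionary<string, string>
    {
        {"role", "frontend"},                      // overrides "worker"
        {"startup.id", Guid.NewGuid().ToString()}  // computed at runtime
    });
// Effective metadata: cloud.region=us-east1, compute.reservation.type=spot,
// role=frontend, startup.id=<generated GUID>
```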

## Usage

Developers can retrieve metadata through the `ISiloMetadataCache` interface. This interface allows for querying metadata for individual silos across the cluster. Metadata will always be returned from a local cache of metadata that gets updated in the background as cluster membership changes.

### **Accessing Metadata for a Specific Silo**

The `ISiloMetadataCache` interface provides a method to retrieve the metadata for a specific silo by its unique identifier (`SiloAddress`). The `ISiloMetadataCache` implementation is registered by the `UseSiloMetadata` method and can be injected as a dependency.

#### Example: Accessing Metadata for a Silo

```csharp
var siloMetadata = siloMetadataCache.GetSiloMetadata(siloAddress);

if (siloMetadata.Metadata.TryGetValue("role", out var role))
{
Console.WriteLine($"Silo Role for {siloAddress}: {role}");
// Execute role-specific logic
}
```

In this example:

- `GetSiloMetadata(siloAddress)` retrieves the metadata for the specified silo.
- Metadata keys like `"role"` can be used to influence application logic.

---

## Internal Implementation

Internally, the `SiloMetadataCache` monitors changes in cluster membership via `MembershipTableManager` and keeps the local metadata cache in sync with membership changes. Metadata is immutable for a given silo, so it is retrieved once and cached until that silo leaves the cluster. Cached metadata for silos that are `Dead` or have left the membership table is cleared from the local cache.

Each silo hosts a [*GrainService*](../grains/grainservices.md) that serves that silo's metadata. A silo obtains a client to a remote silo's `GrainService` to pull that silo's metadata and populate its local cache.

Calls to `SiloMetadataCache : ISiloMetadataCache` then return a result from this local cache.