New ssv_types #69

Merged (4 commits, Dec 12, 2024)
159 changes: 84 additions & 75 deletions Cargo.lock

Large diffs are not rendered by default.

5 changes: 5 additions & 0 deletions Cargo.toml
@@ -9,6 +9,7 @@ members = [
"anchor/network",
"anchor/processor",
"anchor/qbft",
"anchor/common/ssv_types",
]
resolver = "2"

@@ -23,13 +24,15 @@ http_metrics = { path = "anchor/http_metrics" }
network = { path ="anchor/network"}
version = { path ="anchor/common/version"}
processor = { path = "anchor/processor" }
ssv_types = { path = "anchor/common/ssv_types" }
lighthouse_network = { git = "https://github.com/sigp/lighthouse", branch = "unstable"}
task_executor = { git = "https://github.com/sigp/lighthouse", branch = "unstable", default-features = false, features = [ "tracing", ] }
metrics = { git = "https://github.com/agemanning/lighthouse", branch = "modularize-vc" }
validator_metrics = { git = "https://github.com/agemanning/lighthouse", branch = "modularize-vc" }
sensitive_url = { git = "https://github.com/agemanning/lighthouse", branch = "modularize-vc" }
slot_clock = { git = "https://github.com/agemanning/lighthouse", branch = "modularize-vc" }
unused_port = { git = "https://github.com/sigp/lighthouse", branch = "unstable" }
types = { git = "https://github.com/sigp/lighthouse", branch = "unstable" }
derive_more = { version = "1.0.0", features = ["full"] }
async-channel = "1.9"
axum = "0.7.7"
@@ -53,6 +56,8 @@ tokio = { version = "1.39.2", features = [
] }
tracing = "0.1.40"
tracing-subscriber = { version = "0.3.18", features = ["fmt", "env-filter"] }
base64 = "0.22.1"
openssl = "0.10.68"

[profile.maxperf]
inherits = "release"
11 changes: 11 additions & 0 deletions anchor/common/ssv_types/Cargo.toml
@@ -0,0 +1,11 @@
[package]
name = "ssv_types"
version = "0.1.0"
edition = { workspace = true }
authors = ["Sigma Prime <contact@sigmaprime.io>"]

[dependencies]
types = { workspace = true}
openssl = { workspace = true }
derive_more = { workspace = true }
base64 = { workspace = true }
53 changes: 53 additions & 0 deletions anchor/common/ssv_types/src/cluster.rs
@@ -0,0 +1,53 @@
use crate::OperatorId;
use crate::Share;
use derive_more::{Deref, From};
use types::{Address, Graffiti, PublicKey};

/// Unique identifier for a cluster
#[derive(Clone, Copy, Debug, Default, Eq, PartialEq, Hash, From, Deref)]
pub struct ClusterId(pub u64);
Contributor: I know there is some terminology overlap, but it's probably better we stick with "committee" for now so we're using the same terminology as the other client team.

Member: An argument for "cluster" is that "committee" might be confused with the committees from Ethereum consensus.

Member Author: Yep, Daniel mentioned my idea behind Cluster. I'm good to change it, but in that case we should name it something like SSVCommittee just to be explicit about the difference.


/// A Cluster is a group of Operators that are acting on behalf of a Validator
#[derive(Debug, Clone)]
pub struct Cluster {
Contributor: I think domain needs to be included on the Cluster, unless you had another plan for that implementation.

Member Author: I left domain and some other Validator-specific data off since a lot of it is going to be included via the lighthouse types. If we realize it makes more sense to add it here, we can do that.

/// Unique identifier for a Cluster
pub cluster_id: ClusterId,
/// All of the members of this Cluster
pub cluster_members: Vec<ClusterMember>,
/// The number of faulty operators in the Cluster
pub faulty: u64,
/// If the Cluster is liquidated or active
pub liquidated: bool,
Contributor: I don't see a liquidated flag in the specs, but I can understand why it would be useful. Is there no analogous implementation in the go client?

Member Author: It's on the metadata here in the client.

/// Metadata about the validator this committee represents
pub validator_metadata: ValidatorMetadata,
}

/// A member of a Cluster. This is just an Operator that holds onto a share of the Validator key
#[derive(Debug, Clone)]
pub struct ClusterMember {
/// Unique identifier for the Operator this member represents
pub operator_id: OperatorId,
/// Unique identifier for the Cluster this member is a part of
pub cluster_id: ClusterId,
Contributor: I don't think we need cluster_id here since ClusterMember is already contained within a Cluster; suggest we drop it unless there's a reason.

Member Author (@Zacholme7, Dec 10, 2024): Good point, but I think this depends on how we want to structure the maps for the database. If we want to do something like OperatorId => Vec of ClusterMembers, then this ClusterId is important. If we do ClusterId => Vec of ClusterMembers, then it is not. The design of the database quite tightly influences the design of the types, and it's hard to anticipate the right structure at this point.
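The trade-off described above can be sketched with plain std types. This is a hypothetical illustration, not the actual database design: ids are bare u64s and the member type is simplified. It shows why the embedded cluster_id matters in one layout and is redundant in the other.

```rust
use std::collections::HashMap;

// Hypothetical, simplified stand-in for ClusterMember; plain u64 ids.
#[derive(Debug, Clone)]
struct Member {
    operator_id: u64,
    cluster_id: u64, // only required by the operator-keyed layout below
}

// Layout 1: OperatorId => members. Each entry must carry its cluster_id,
// otherwise we cannot tell which cluster a given membership refers to.
fn index_by_operator(members: &[Member]) -> HashMap<u64, Vec<Member>> {
    let mut map: HashMap<u64, Vec<Member>> = HashMap::new();
    for m in members {
        map.entry(m.operator_id).or_default().push(m.clone());
    }
    map
}

// Layout 2: ClusterId => members. The key already names the cluster, so the
// embedded cluster_id field becomes redundant.
fn index_by_cluster(members: &[Member]) -> HashMap<u64, Vec<Member>> {
    let mut map: HashMap<u64, Vec<Member>> = HashMap::new();
    for m in members {
        map.entry(m.cluster_id).or_default().push(m.clone());
    }
    map
}

fn main() {
    let members = vec![
        Member { operator_id: 1, cluster_id: 10 },
        Member { operator_id: 2, cluster_id: 10 },
        Member { operator_id: 1, cluster_id: 11 },
    ];
    // Operator 1 belongs to two clusters; cluster 10 has two members.
    assert_eq!(index_by_operator(&members)[&1].len(), 2);
    assert_eq!(index_by_cluster(&members)[&10].len(), 2);
}
```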

/// The Share this member is responsible for
pub share: Share,
}

/// Index of the validator in the validator registry.
#[derive(Clone, Copy, Debug, Default, Eq, PartialEq, Hash, From, Deref)]
pub struct ValidatorIndex(pub usize);

/// General Metadata about a Validator
#[derive(Debug, Clone)]
pub struct ValidatorMetadata {
Contributor: Is there a reason we've pulled out the committee ID and domain type from here?

Member: I think this type is supposed to represent the metadata of the validator operated by the cluster, not the cluster itself, so I like leaving it out here.

Member Author: Yep, this is just about the Validator the cluster is acting on behalf of. It can be stored in the database as ClusterId => ValidatorMetadata, so this info still gets included. As for the domain, I left it off because I believe it is being included via the lighthouse types, but I have to double-check what it gets included through.

/// Index of the validator
pub validator_index: ValidatorIndex,
/// Public key of the validator
pub validator_pubkey: PublicKey,
/// Eth1 fee address
pub fee_recipient: Address,
/// Graffiti
pub graffiti: Graffiti,
/// The owner of the validator
pub owner: Address,
}
7 changes: 7 additions & 0 deletions anchor/common/ssv_types/src/lib.rs
@@ -0,0 +1,7 @@
pub use cluster::{Cluster, ClusterId, ClusterMember, ValidatorIndex, ValidatorMetadata};
pub use operator::{Operator, OperatorId};
pub use share::Share;
mod cluster;
mod operator;
mod share;
mod util;
60 changes: 60 additions & 0 deletions anchor/common/ssv_types/src/operator.rs
@@ -0,0 +1,60 @@
use crate::util::parse_rsa;
use derive_more::{Deref, From};
use openssl::pkey::Public;
use openssl::rsa::Rsa;
use std::cmp::Eq;
use std::fmt::Debug;
use std::hash::Hash;
use types::Address;

/// Unique identifier for an Operator.
#[derive(Clone, Copy, Debug, Default, Eq, PartialEq, Hash, From, Deref)]
pub struct OperatorId(pub u64);

/// Client responsible for maintaining the overall health of the network.
#[derive(Debug, Clone)]
pub struct Operator {
Contributor: Is there any benefit to tracking our cluster memberships here, or is that being handled in the DB?

Member: IMO that should be handled in the db.

Member Author: Yep, I recommend keeping Operator self-contained and handling this in the DB.

/// ID to uniquely identify this operator
pub id: OperatorId,
/// Base-64 encoded PEM RSA public key
pub rsa_pubkey: Rsa<Public>,
/// Owner of the operator
pub owner: Address,
Commenter (@diegomrsantos, Jan 9, 2025): Is this necessary? I think I didn't see it in the go code.

Member Author: The public key is used for initial operator identification. When you first join as an operator and do not yet know your id, the public key is used to match against event logs and figure out which id is yours. The owner address is where the SSV you earn for running an operator is sent.
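The bootstrap flow the author describes (scan registration events for your own public key to learn your operator id) can be sketched with std types only. The event struct and its field names are hypothetical stand-ins, not the real contract log format:

```rust
// Hypothetical registration event as it might look after decoding contract logs.
struct OperatorRegisteredEvent {
    operator_id: u64,
    rsa_pubkey_pem: String, // base64-encoded PEM, as stored on-chain
}

// Scan the events for our own public key to discover our operator id.
fn find_own_operator_id(events: &[OperatorRegisteredEvent], own_pubkey_pem: &str) -> Option<u64> {
    events
        .iter()
        .find(|e| e.rsa_pubkey_pem == own_pubkey_pem)
        .map(|e| e.operator_id)
}

fn main() {
    let events = vec![
        OperatorRegisteredEvent { operator_id: 7, rsa_pubkey_pem: "key-a".into() },
        OperatorRegisteredEvent { operator_id: 1141, rsa_pubkey_pem: "key-b".into() },
    ];
    assert_eq!(find_own_operator_id(&events, "key-b"), Some(1141));
    assert_eq!(find_own_operator_id(&events, "key-c"), None);
}
```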

}

impl Operator {
/// Creates a new operator from its OperatorId and PEM-encoded public key string
pub fn new(pem_data: &str, operator_id: OperatorId, owner: Address) -> Result<Self, String> {
let rsa_pubkey = parse_rsa(pem_data)?;
Ok(Self::new_with_pubkey(rsa_pubkey, operator_id, owner))
}

/// Creates a new operator from an existing RSA public key and OperatorId
pub fn new_with_pubkey(rsa_pubkey: Rsa<Public>, id: OperatorId, owner: Address) -> Self {
Self {
id,
rsa_pubkey,
owner,
}
}
}

#[cfg(test)]
mod operator_tests {
use super::*;

#[test]
fn operator_from_pubkey_and_id() {
// Random valid operator public key and id: https://explorer.ssv.network/operators/1141
let pem_data = "LS0tLS1CRUdJTiBSU0EgUFVCTElDIEtFWS0tLS0tCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBbFFmQVIzMEd4bFpacEwrNDByU0IKTEpSYlkwY2laZDBVMXhtTlp1bFB0NzZKQXJ5d2lia0Y4SFlQV2xkM3dERVdWZXZjRzRGVVBSZ0hDM1MrTHNuMwpVVC9TS280eE9nNFlnZ0xqbVVXQysyU3ZGRFhXYVFvdFRXYW5UU0drSEllNGFnTVNEYlUzOWhSMWdOSTJhY2NNCkVCcjU2eXpWcFMvKytkSk5xU002S1FQM3RnTU5ia2IvbEtlY0piTXM0ZWNRMTNkWUQwY3dFNFQxcEdTYUdhcEkKbFNaZ2lYd0cwSGFNTm5GUkt0OFlkZjNHaTFMRlh3Zlo5NHZFRjJMLzg3RCtidjdkSFVpSGRjRnh0Vm0rVjVvawo3VFptcnpVdXB2NWhKZ3lDVE9zc0xHOW1QSGNORnhEVDJ4NUJKZ2FFOVpJYnMrWVZ5a1k3UTE4VEhRS2lWcDFaCmp3SURBUUFCCi0tLS0tRU5EIFJTQSBQVUJMSUMgS0VZLS0tLS0K";
let operator_id = 1141;
let address = Address::random();

let operator = Operator::new(pem_data, operator_id.into(), address);
assert!(operator.is_ok());

if let Ok(op) = operator {
assert_eq!(op.id.0, operator_id);
}
}
}
8 changes: 8 additions & 0 deletions anchor/common/ssv_types/src/share.rs
@@ -0,0 +1,8 @@
use types::PublicKey;

/// One of N shares of a split validator key.
#[derive(Debug, Clone)]
pub struct Share {
Member: Should we add the (encrypted) private key here?

Member Author: I had it in but removed it right before the PR, as I had convinced myself it wasn't needed in the Share for some reason. Thinking on it now, it should probably be here. I'll add it back in, thanks!

/// The public key of this Share
pub share_pubkey: PublicKey,
Contributor: Would we not need operator_id here?

Member: The Share is part of ClusterMember, and the operator id can be found there.

Member Author: Something I considered, but this also depends on the structure of the database. We do have access to the OperatorId from the ClusterMember. My idea was to go from ClusterId => Share, where the Share is owned by the current operator, so storing the id on the Share is not needed in that case.

}
29 changes: 29 additions & 0 deletions anchor/common/ssv_types/src/util.rs
@@ -0,0 +1,29 @@
use base64::prelude::*;
use openssl::pkey::Public;
use openssl::rsa::Rsa;

/// Parses a base64-encoded RSA public key PEM string into its OpenSSL RSA representation
pub fn parse_rsa(pem_data: &str) -> Result<Rsa<Public>, String> {
// First decode the base64 data
let pem_decoded = BASE64_STANDARD
.decode(pem_data)
.map_err(|e| format!("Unable to decode base64 pem data: {}", e))?;

// Convert the decoded data to a string
let mut pem_string = String::from_utf8(pem_decoded)
.map_err(|e| format!("Unable to convert decoded pem data into a string: {}", e))?;

// Fix the header - replace PKCS1 header with PKCS8 header
pem_string = pem_string
.replace(
"-----BEGIN RSA PUBLIC KEY-----",
"-----BEGIN PUBLIC KEY-----",
)
.replace("-----END RSA PUBLIC KEY-----", "-----END PUBLIC KEY-----");

// Parse the PEM string into an RSA public key using PKCS8 format
let rsa_pubkey = Rsa::public_key_from_pem(pem_string.as_bytes())
.map_err(|e| format!("Failed to parse RSA public key: {}", e))?;

Ok(rsa_pubkey)
}
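The header-rewrite step in parse_rsa can be shown in isolation with std only. This is a minimal sketch of just the string manipulation (the base64 decode and OpenSSL parse are omitted), and the truncated PEM body is a placeholder, not a real key:

```rust
// Isolated sketch of the header-rewrite step from parse_rsa: swap the
// "RSA PUBLIC KEY" PEM markers for the plain "PUBLIC KEY" markers that
// Rsa::public_key_from_pem expects.
fn swap_rsa_headers(pem: &str) -> String {
    pem.replace("-----BEGIN RSA PUBLIC KEY-----", "-----BEGIN PUBLIC KEY-----")
        .replace("-----END RSA PUBLIC KEY-----", "-----END PUBLIC KEY-----")
}

fn main() {
    // Placeholder body; a real key would carry full base64 DER between the markers.
    let pkcs1 = "-----BEGIN RSA PUBLIC KEY-----\nMIIBIjAN...\n-----END RSA PUBLIC KEY-----";
    let spki = swap_rsa_headers(pkcs1);
    assert!(spki.starts_with("-----BEGIN PUBLIC KEY-----"));
    assert!(spki.ends_with("-----END PUBLIC KEY-----"));
}
```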