Commit 488831d

add readmes etc

deryacog committed Jan 10, 2025
1 parent 2f4af8c commit 488831d

Showing 15 changed files with 319 additions and 9,351 deletions.
37 changes: 37 additions & 0 deletions README.md
@@ -1,5 +1,42 @@
# Fledger - the fast, fun, easy ledger

## Student Project 24
- The Loopix module implemented for this project can be found in ./flmodules/src/loopix; see the README [here](https://github.com/fledg-re/flmodules/src/loopix/README.md).
- The scripts used to run the experiments, as well as the data processing, can be found in ./mergetb: [here](https://github.com/fledg-re/mergetb/README.md).
- The LaTeX sources for the report can be found and compiled [here](https://github.com/fledg-re/mergetb/blob/main/report/).

### Running the code
The Rust binary can be built with the following command:
```bash
cd cli/fledger && cargo build -r
```

Note that the main binary is configured to run a simulation with multiple nodes and will not work as a single node out of the box. A single node can be run with the following command:
```bash
./target-common/release/fledger --config <directory to save configs> \
--name <name of the node> \
--n-clients <number of loopix clients> \
--duplicates <number of duplicates> \
--path-len <path length> \
--retry <number of retries> \
--start_loopix_time <start time> \
--save_new_metrics_file <save new metrics file> \
--token <token>
```
where `token` is the API key for ipinfo.io.
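
For illustration only, a hypothetical single-node invocation might look like the sketch below; the config directory, node name, and parameter values are placeholders rather than values used in the experiments, and `--save_new_metrics_file` is omitted here:
```bash
./target-common/release/fledger --config ./node01_config \
    --name node01 \
    --n-clients 3 \
    --duplicates 1 \
    --path-len 3 \
    --retry 2 \
    --start_loopix_time 15 \
    --token <ipinfo.io API key>
```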

To run multiple nodes, you can use the script:
```bash
./start_simul.sh
```

### Changes to Fledger
Aside from the Loopix module and cli/src/main.rs, only minor changes have been made to the code.

Notably, there are some changes in the web proxy module and the Docker files.


## Fledger
Fledger's main goal is to create a web3 experience in the browser without the
need for proxies.
Once the code starts in your browser, it will connect to other browsers,
26 changes: 26 additions & 0 deletions flmodules/src/loopix/README.md
@@ -0,0 +1,26 @@
# Fledger Loopix Module

## broker.rs
Broker for Loopix messages; it handles starting the correct threads for each role and emitting messages to other brokers.

## messages.rs
Main file for Loopix message processing; it handles messages according to the role of the node and sends them to the Loopix broker.

## sphinx.rs
Wrapper around Sphinx packets, enabling serialization, etc.

## storage.rs
Storage of all information related to the network topology; each Loopix role has a different storage.

## config.rs
Loopix parameters are stored in this file; the resulting config can be passed directly to the broker to create a Loopix node.
Note that there are many functions whose names start with default_with; these are used to create network topologies easily for experiments.

## mod.rs
Exports all Loopix files and contains declarations of loopix metrics.

## core.rs
Core functionality of Loopix nodes, common to all roles.

## client.rs, mixnode.rs, provider.rs
Role-specific functionality; each node role processes and handles messages differently.
10 changes: 8 additions & 2 deletions flmodules/src/web_proxy/broker.rs
@@ -8,7 +8,7 @@ use flarch::{
use thiserror::Error;
use tokio::sync::{mpsc::channel, watch};

use crate::overlay::messages::{NetworkWrapper, OverlayIn, OverlayMessage, OverlayOut};
use crate::{loopix::{PROXY_MESSAGE_BANDWIDTH, PROXY_MESSAGE_COUNT, RETRY_COUNT}, overlay::messages::{NetworkWrapper, OverlayIn, OverlayMessage, OverlayOut}};
use flarch::{
broker::{Broker, BrokerError, Subsystem, SubsystemHandler},
nodeids::{NodeID, U256},
@@ -102,6 +102,9 @@ impl WebProxy {

pub async fn get_with_retry_and_timeout(&mut self, url: &str, retry: u8, time: Duration) -> Result<Response, WebProxyError> {
for i in 0..(retry + 1) { // +1 because it should retry at least once
if i != 0 {
RETRY_COUNT.inc();
}
log::debug!("Getting {url} with retry {i}");
match self.get_with_timeout(url, time).await {
Ok(resp) => return Ok(resp),
@@ -180,10 +183,13 @@ impl Translate {

fn link_proxy_overlay(msg: WebProxyMessage) -> Option<OverlayMessage> {
if let WebProxyMessage::Output(WebProxyOut::ToNetwork(id, msg_node)) = msg {
let wrapper = NetworkWrapper::wrap_yaml(MODULE_NAME, &msg_node).unwrap();
PROXY_MESSAGE_BANDWIDTH.inc_by(wrapper.msg.len() as f64);
PROXY_MESSAGE_COUNT.inc();
Some(
OverlayIn::NetworkWrapperToNetwork(
id,
NetworkWrapper::wrap_yaml(MODULE_NAME, &msg_node).unwrap(),
wrapper,
)
.into(),
)
2 changes: 2 additions & 0 deletions flmodules/src/web_proxy/messages.rs
@@ -8,6 +8,7 @@ use futures::StreamExt;
use serde::{Deserialize, Serialize};
use tokio::sync::mpsc::Sender;

use crate::loopix::PROXY_REQUEST_RECEIVED;
use crate::nodeconfig::NodeInfo;
use crate::Modules;

@@ -125,6 +126,7 @@ impl WebProxyMessages {
}

fn start_request(&mut self, src: NodeID, nonce: U256, request: String) -> Vec<WebProxyOut> {
PROXY_REQUEST_RECEIVED.inc();
let mut broker = self.broker.clone();
spawn_local(async move {
match reqwest::get(request).await {
72 changes: 72 additions & 0 deletions mergetb/REAME.md
@@ -0,0 +1,72 @@
# MergeTB

This folder contains the scripts used to run experiments and process data.

The ansible folder contains the scripts used to run the simulations with the MergeTB XDC. To set up experiments as well as the XDC, please refer to the Sphere Research Infrastructure documentation.

First, an experiment needs to be reserved and activated with MergeTB. The configuration for the experiment can be found in ansible/experiment.py.

Once the experiment is activated and Ansible is installed on the XDC, the following playbooks can be used to run the simulations:

1. `playbook.yml` runs the simulation without any churn for 300 seconds
2. `playbook_churn.yml` kills a specified number of nodes at the start of the simulation

The playbooks require inventory.ini to be present in the folder. This file can be generated using ansible/generate_inventory.py according to the MergeTB reservation.
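
For example, once inventory.ini exists, a single no-churn run could look roughly like the sketch below; the extra variables are assumptions borrowed from playbook_churn.yml, and the exact set playbook.yml expects may differ:
```bash
cd ansible
# -e passes the variables that the playbook templates into the node command
ansible-playbook -i inventory.ini playbook.yml \
    -e "token=<ipinfo.io token> n_clients=3 retry=0 duplicates=1"
```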

To collect data with different parameters, the following scripts can be used (they run the playbooks with different parameters and collect the data on the XDC):

1. ./ansible/get_simul_data.bash: runs the simulation with different configurations of mean delays and lambda parameters
2. ./ansible/max_retreive_time_pull_grid_search.bash: runs a grid search over the max retrieve and time pull parameters
3. ./ansible/get_churn_data.bash: runs playbook_churn.yml with different values for the retry and duplicate mechanisms
4. ./ansible/find_mu.bash: runs playbook.yml with different values for the mean number of messages sent per second by a client

To process the data from the simulations, the following scripts can be used respectively:
1. ./run_data_processing.bash
2. ./run_grid_search_data_processing.bash
3. ./run_churn_data_processing.bash
4. ./find_mu.bash

Note that the data processing scripts require the data to be saved in the <data_dir>/raw_data folder.

A venv (at the path ../venv) is also required to run the data processing scripts; the requirements can be found in ../requirements.txt.
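
A minimal sketch of creating that venv, run from the folder containing the data processing scripts (assuming python3 with the venv module and pip are available):
```bash
python3 -m venv ../venv
source ../venv/bin/activate
pip install -r ../requirements.txt
```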

## Example

First, generate the inventory.ini for an experiment with path length 3, 3 clients, and the reservation name (on MergeTB) "test":
```bash
cd ansible
python3 generate_inventory.py 3 3 test
```

Upload all the required files to the XDC. Example commands can be found in useful_mrg_commands.bash.

Make sure the nodes have Docker installed (install_docker.yml); after this installation the nodes might need a warm reboot.
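
For example, a run of the Docker installation playbook might look like this (assuming it needs nothing beyond the inventory; any extra variables are not shown):
```bash
ansible-playbook -i inventory.ini install_docker.yml
```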

Run the data collection script with the following command:
```bash
nohup ./get_simul_data.bash <ipinfo.io token> > logs.txt 2>&1 &
```

All data from the simulation will be saved in the ./metrics directory on the XDC.

Once the data is collected, download it into ./mean_delay_lambdas/raw_data.
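
One way to do this, assuming direct SSH access to the XDC (the hostname and remote path below are placeholders):
```bash
mkdir -p mean_delay_lambdas/raw_data
scp -r <xdc-host>:metrics/mean_delay/* mean_delay_lambdas/raw_data/
```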

The data processing can be run with the following command:
```bash
./run_data_processing.bash <path_length> <n_clients> <duration>
```
In our case path_length = 3, n_clients = 3 and duration = 300.
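
With those values, the call is simply:
```bash
./run_data_processing.bash 3 3 300
```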

This will extract data from the metrics files and create two files in ./mean_delay_lambdas:
1. raw_data.json: the same data as in the metrics files, in JSON format
2. average_data.json: the data averaged for each run

The script will also generate a plots directory at ./mean_delay_lambdas/plots where all the related plots will be saved.
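
For a quick sanity check of the output, the top-level keys of the averaged data can be listed (assuming jq is installed; the exact key layout depends on the processing scripts):
```bash
jq 'keys' mean_delay_lambdas/average_data.json
```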

## Other playbooks
Other playbooks in the ansible folder are used to delete the data directory on the nodes, delete the Docker image, stop containers, etc.





12 changes: 6 additions & 6 deletions mergetb/ansible/get_churn_data.bash
@@ -9,13 +9,13 @@ token=$1

initial_path_length=3

lambda_loop=1.65
lambda_drop=1.65
lambda_payload=6.1
lambda_loop=1.35
lambda_drop=1.35
lambda_payload=5
path_length=3
mean_delay=80
lambda_loop_mix=1.65
time_pull=0.8
mean_delay=100
lambda_loop_mix=1.35
time_pull=1
max_retrieve=5
pad_length=150

23 changes: 7 additions & 16 deletions mergetb/ansible/get_simul_data.bash
@@ -10,7 +10,7 @@ token=$1
# Define initial values
lambda_loop=1.15
lambda_drop=1.15
lambda_payload=6
lambda_payload=6.1
path_length=3
mean_delay=100
lambda_loop_mix=1.15
@@ -68,24 +68,15 @@ done

# Try different mean_delay values
mean_delays=(50 60 70 80 90 100 110 120 130 140 150 160 170 180 190 200)
payload_values=(6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6)
chaff_values=(3.5 3.03 2.56 2.09 1.62 1.15 1.06 0.97 0.88 0.8 0.71 0.62 0.53 0.44 0.35 0.26)


# mean_delays=(50 60 70 80 90 100 110 120 130 140 150 160 170 180 190 200)
# chaff_values=(4 3.3 2.79 2.45 2.2 2 1.79 1.62 1.48 1.37 1.27 1.18 1.11 1.05 0.99 0.95)
# payload_values=(8 7.2 6.4 5.6 4.8 4 3.83333333 3.66666667 3.5 3.33333333 3.16666667 3 2.83333333 2.66666667 2.5)

# mean_delays=(50 60)
# chaff_values=(4 3.3)
# payload_values=(8 7.2)

# mu value = 20 16.67 14.29 12.5 11.11 10 9.09 8.33 7.69 7.14 6.67 6.25 5.88 5.56 5.26 5
# required number of messages per second = 48 24
payload_values=(6.1 6.1 6.1 6.1 6.1 6.1 6.1 6.1 6.1 6.1 6.1 6.1 6.1 6.1 6.1 6.1)
chaff_values=(3.65 3.15 2.65 2.15 1.65 1.15 1.06 0.97 0.88 0.8 0.71 0.62 0.53 0.44 0.35 0.26)

mkdir -p metrics/mean_delay

for i in "${!mean_delays[@]}"; do
start_index=0
end_index=16

for ((i=start_index; i < end_index; i++)); do
mean_delay=${mean_delays[$i]}
lambda_drop=${chaff_values[$i]}
lambda_loop=${chaff_values[$i]}
1 change: 0 additions & 1 deletion mergetb/ansible/playbook_churn.yml
@@ -84,7 +84,6 @@
--start_loopix_time 15
--retry {{ retry }}
--n-clients {{ n_clients }}
--save_new_metrics_file
--duplicates {{ duplicates }}
--token {{ token }}
-v
84 changes: 69 additions & 15 deletions mergetb/average_churn_data.py
@@ -3,7 +3,9 @@
import sys
import os
from average_data import metrics_to_extract, get_data, calculate_average_data
import extract_raw_data
import extract_raw_data

from extract_raw_churn_data import metrics_interval

if __name__ == "__main__":
if len(sys.argv) < 3:
@@ -18,31 +20,83 @@

print(f"variables: {data.keys()}")

for variable, runs in data.items():
average_data[variable] = {}
print(f"runs: {runs.keys()}")
for run, runs in runs.items():
average_data[variable][run] = {}
print("Getting data for: ", variable, run)
for variable_name, variable_data in data.items():
# for variable_name, variable_data in ["duplicates"]:
# for variable_name, variable_data in ["duplicates"]:
# variable_name = "retry"
# variable_data = data[variable_name]

average_data[variable_name] = {}
print(f"runs: {variable_data.keys()}")
for run, runs in variable_data.items():
# run = "4"
average_data[variable_name][run] = {}
print("Getting data for: ", variable_name, run)

for time_index, time_data in runs.items():
for time_index, time_data in variable_data[run].items():
# for time_index, time_data in runs.items():
if time_index != "0":
prev_step = runs[str(int(time_index) - 20)]
prev_step = variable_data[run][str(int(time_index) - metrics_interval)]
# print(f"time_index: {time_index}, prev_step: {prev_step}")
# print(f"prev_step data: {prev_step['loopix_incoming_messages'], prev_step['loopix_number_of_proxy_requests']}")
# print(f"time_data data: {time_data['loopix_incoming_messages'], time_data['loopix_number_of_proxy_requests']}")

for metric in extract_raw_data.metrics_to_extract:
if metric in ["loopix_bandwidth_bytes", "loopix_incoming_messages", "loopix_number_of_proxy_requests"]:
data[metric] = [t - p for t, p in zip(time_data[metric], prev_step[metric])]
# print(f"{time_index}--------------------------------")
prev_step_data = np.array(prev_step[metric])
time_data_data = np.array(time_data[metric])
# print(f"prev_step_data: {prev_step_data}")
# print(f"time_data_data: {time_data_data}")
# print(f"data[{metric}]:")
if len(prev_step_data) < len(time_data_data):
data[metric] = time_data_data[:len(prev_step_data)] - prev_step_data
elif len(prev_step_data) > len(time_data_data):
data[metric] = time_data_data - prev_step_data[:len(time_data_data)]
else:
data[metric] = time_data_data - prev_step_data
# print(f"{data[metric]}")

elif metric == "loopix_start_time_seconds":
data[metric] = time_data[metric]
else:
data[metric]["sum"] = [t - p for t, p in zip(time_data[metric]["sum"], prev_step[metric]["sum"])]
data[metric]["count"] = [t - p for t, p in zip(time_data[metric]["count"], prev_step[metric]["count"])]
# print("--------------------------------")
prev_step_sum = np.array(prev_step[metric]["sum"])
time_data_sum = np.array(time_data[metric]["sum"])
# print(f"prev_step_sum: {prev_step_sum}")
# print(f"time_data_sum: {time_data_sum}")

if len(prev_step_sum) < len(time_data_sum):
data[metric]["sum"] = time_data_sum[:len(prev_step_sum)] - prev_step_sum
elif len(prev_step_sum) > len(time_data_sum):
data[metric]["sum"] = time_data_sum - prev_step_sum[:len(time_data_sum)]
else:
data[metric]["sum"] = time_data_sum - prev_step_sum

prev_step_count = np.array(prev_step[metric]["count"])
time_data_count = np.array(time_data[metric]["count"])
# print(f"prev_step_count: {prev_step_count}")
# print(f"time_data_count: {time_data_count}")

if len(prev_step_count) < len(time_data_count):
data[metric]["count"] = time_data_count[:len(prev_step_count)] - prev_step_count
elif len(prev_step_count) > len(time_data_count):
data[metric]["count"] = time_data_count - prev_step_count[:len(time_data_count)]
else:
data[metric]["count"] = time_data_count - prev_step_count

# print(f"data[{metric}]: {data[metric]}")


else:
data = time_data
average_data[variable][run][time_index] = {}
print(f"time_index: {time_index}, data: {data['loopix_incoming_messages'], data['loopix_number_of_proxy_requests']}")

print(data)
average_data[variable][run][time_index] = calculate_average_data(data, duration)

average_data[variable_name][run][time_index] = {}
print(f"data: {data}")
# data = time_data
average_data[variable_name][run][time_index] = calculate_average_data(data, metrics_interval)

with open(os.path.join(directory, 'average_data.json'), 'w') as f:
json.dump(average_data, f, indent=2)