Curate Twitter Bot
Watch and tweet about things happening on Kleros Curate.
- NodeJS version 14
- Clone this repo.
- Duplicate `.env.example`, rename it to `.env.<networkName>`, and fill in the environment variables.
- Run `yarn` to install dependencies.
- Build it: `yarn build`.
This is a script that notifies about everything that happened within a period. To run it once, use `yarn start`.
You can run this script as is, or with cron. There are also `loop:<networkName>` yarn scripts to run this process automatically with pm2. Adding one is straightforward: create a new `.env.<networkName>` and create the loop script for it.
This is an example of how to set up this loop with pm2:

```
pm2 start yarn --interpreter bash --name ctb-xdai -- loop:xdai
```
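In code form, the work a `loop:<networkName>` process performs amounts to roughly the following sketch. This is an assumption for illustration: the function name `runOnce` and the 5-minute period are made up, and the real loop scripts may simply re-invoke `yarn start` instead.

```typescript
// Hypothetical shape of the loop process that pm2 keeps alive:
// run one notification pass, sleep, repeat.
async function runOnce(): Promise<void> {
  // fetch events for the elapsed window and tweet about them
}

async function loop(): Promise<void> {
  for (;;) {
    await runOnce();
    // the period is an assumption; pick whatever cadence suits the bot
    await new Promise((resolve) => setTimeout(resolve, 5 * 60 * 1000));
  }
}

loop();
```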
This bot's purpose is to minimize RPC calls. The previous way of obtaining events was to create many event listeners, one for every GTCR and LightGTCR contract. Instead, this bot relies on subgraphs to obtain event information, using entities that store timestamps in some capacity.
This bot in particular needs a database in order to remember the Twitter threads belonging to specific items.
- Obtain the previous timestamp, and store the timestamp corresponding to the present.
- Send a complex query to the subgraph. For each entity type, `where` clauses filter to only include entities that are relevant from `start` to `end`, the two timestamps that define the window of time we're interested in (see the sketch after this list).
- As each query fetches custom information, all of its results go through a custom method that parses them into a standardized form the rest of the program can consume, called `Events` (they're not actual EVM events).
- These `Events` are sorted according to their timestamp.
- Each `Event` is now handled. It goes through a function that acts as a router and directs each event type to be handled in its own customized way.
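A minimal sketch of that pipeline, assuming a hypothetical entity (`requests`) and field (`submissionTime`) and a simplified `CurateEvent` shape; the actual schema and types in this repo differ:

```typescript
import fetch from "node-fetch";

// Simplified stand-in for the bot's normalized event type.
interface CurateEvent {
  type: string;      // e.g. "RequestSubmitted"
  itemId: string;    // the Curate item the event belongs to
  timestamp: number; // unix seconds, used for sorting
}

async function getEvents(
  subgraphUrl: string,
  start: number,
  end: number
): Promise<CurateEvent[]> {
  // `where` clauses keep only entities inside the [start, end) window.
  const query = `{
    requests(where: { submissionTime_gte: ${start}, submissionTime_lt: ${end} }) {
      id
      submissionTime
    }
  }`;

  const res = await fetch(subgraphUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data } = (await res.json()) as any;

  // Parse the raw entities into the standardized form...
  const events: CurateEvent[] = data.requests.map((r: any) => ({
    type: "RequestSubmitted",
    itemId: r.id,
    timestamp: Number(r.submissionTime),
  }));

  // ...and sort by timestamp before handing them to the router.
  return events.sort((a, b) => a.timestamp - b.timestamp);
}
```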
This bot uses a database to remember the last tweet that was posted on a specific item, in order to reply to it and form threads.
It also needs an RPC connection to obtain some information that is not available through the subgraph, such as some `metaEvidenceURI`s and some amounts. But RPC calls are only made to obtain info per handled event, so it's considerably cheaper (currently, a maximum of 2 RPC calls per tweet).
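As an illustration of such a call, here is a hedged sketch using ethers v5. The getter name `submissionBaseDeposit` is an assumption about the TCR ABI, not something taken from this repo:

```typescript
import { ethers } from "ethers";

// Human-readable ABI fragment; the getter name is an assumption.
const abi = ["function submissionBaseDeposit() view returns (uint256)"];

async function fetchDeposit(rpcUrl: string, tcrAddress: string): Promise<string> {
  const provider = new ethers.providers.JsonRpcProvider(rpcUrl);
  const tcr = new ethers.Contract(tcrAddress, abi, provider);
  const deposit: ethers.BigNumber = await tcr.submissionBaseDeposit();
  return ethers.utils.formatEther(deposit); // human-readable amount for the tweet
}
```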
TLDR: update the subgraph, build queries, parsers, and handlers.
Analyze the current bot and make a list of everything that needs to be handled. Check how it obtains this information.
Go to a subgraph that is indexing the relevant contract. Build a subgraph if it doesn't exist yet. If something cannot be queried, modify the subgraph to expose this information to the queries. This usually entails creating entities that map 1:1 to events, or adding some timestamps to existing entities.
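For instance, an entity that maps 1:1 to an event could be filled in by a mapping handler like the sketch below (AssemblyScript, as subgraph mappings are). The event and entity names are hypothetical, not the actual Curate subgraph schema:

```typescript
// Hypothetical generated imports; the names depend on your schema and ABI.
import { ItemSubmitted } from "../generated/GeneralizedTCR/GeneralizedTCR";
import { ItemSubmittedEvent } from "../generated/schema";

export function handleItemSubmitted(event: ItemSubmitted): void {
  // One entity per event, keyed by tx hash + log index.
  let entity = new ItemSubmittedEvent(
    event.transaction.hash.toHexString() + "-" + event.logIndex.toString()
  );
  entity.item = event.params._itemID.toHexString();
  // Storing the block timestamp is what makes time-window queries possible.
  entity.timestamp = event.block.timestamp;
  entity.save();
}
```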
In this project, these queries are strings inside the `getEvents` function in `get-events.ts`. Note that timestamps matter.
You may want to use an existing library to make these queries; it may have been a mistake to write the query strings manually and fetch with `node-fetch`.
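For illustration, here is roughly what one window query could look like with a library such as graphql-request; the entity and field names are the same hypothetical ones as above:

```typescript
import { request, gql } from "graphql-request";

// Variables keep the query string static; The Graph's BigInt scalars are
// passed as strings.
const windowQuery = gql`
  query Window($start: BigInt!, $end: BigInt!) {
    requests(where: { submissionTime_gte: $start, submissionTime_lt: $end }) {
      id
      submissionTime
    }
  }
`;

export async function fetchWindow(url: string, start: number, end: number) {
  return request(url, windowQuery, { start: String(start), end: String(end) });
}
```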
Each query will net a result that needs to be parsed into a standardized format for the handlers to consume. This is done because it makes the rest of the bot straightforward and less prone to bugs.
For example, in this bot, there's a `CurateEvent` interface that all parsers return.
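As a hedged guess at the shape of such an interface (the repo's actual `CurateEvent` certainly differs), the point is a single normalized type that every parser emits:

```typescript
// Illustrative only; not the repo's actual definition.
interface CurateEvent {
  type: "ItemSubmitted" | "RequestResolved" | "Dispute"; // what the router dispatches on
  itemId: string;     // which Curate item this concerns
  tcrAddress: string; // which registry it lives in
  timestamp: number;  // unix seconds; events are sorted by this
  details?: Record<string, unknown>; // per-type extras for the handler
}
```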
This is completely dependent on the purpose of the bot. This bot's purpose is making tweets, so all the handlers do is tweet and store the tweet id somewhere.
A bot could do other things as well: call intermediate transactions, such as reward withdrawals, advancing a phase, or executing a request. It could send Telegram messages, send emails, etc.
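A minimal sketch of the router-plus-handlers idea, with stub `twitter` and `db` helpers standing in for the real Twitter client and database:

```typescript
interface CurateEvent {
  type: string;
  itemId: string;
  timestamp: number;
}

// Stub helpers so the sketch is self-contained; the real bot talks to the
// Twitter API and a real database instead.
const threadDb = new Map<string, string>();
const db = {
  getLastTweet: async (itemId: string) => threadDb.get(itemId),
  setLastTweet: async (itemId: string, tweetId: string) => {
    threadDb.set(itemId, tweetId);
  },
};
const twitter = {
  tweet: async (text: string, replyTo?: string) => {
    console.log(replyTo ? `(reply to ${replyTo}) ${text}` : text);
    return { id: String(Date.now()) }; // fake tweet id
  },
};

type Handler = (event: CurateEvent) => Promise<void>;

// One handler per event type; each decides what to tweet.
const handlers: Record<string, Handler> = {
  ItemSubmitted: async (e) => {
    const replyTo = await db.getLastTweet(e.itemId); // thread onto the prior tweet, if any
    const tweet = await twitter.tweet(`New item submitted: ${e.itemId}`, replyTo);
    await db.setLastTweet(e.itemId, tweet.id); // remember for the next reply
  },
};

// The "router": dispatches each sorted event to its handler.
async function handle(event: CurateEvent): Promise<void> {
  const handler = handlers[event.type];
  if (handler) await handler(event);
}
```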
- This architecture looks more reliable; the pipeline is more modular.
- It's not vulnerable to RPC providers degrading their service (rate limits, timeouts...).
- It's reliant on the subgraph. If The Graph is bugged, it will affect this bot. If the subgraph is not keeping up, this bot won't keep up either.
- Parsers are error prone, especially if there's a lot of information and there are many parsers. I had to do a lot of manual debugging and I'm still not sure I found all the bugs.
- Adding new features usually entails indexing new information on the subgraph.