
plugin: Implement the htlc_accepted hook #2267

Merged · 15 commits · Jun 4, 2019

Conversation

@cdecker (Member) commented Jan 14, 2019

This used to be part of #2237, but since there were some unaddressed concerns I split the two.

This PR builds on top of #2237, and implements the htlc_accepted hook, allowing a plugin to decide what we should do with an HTLC that we just added to the commitment:

  • continue proceeds as usual with the HTLC, attempting to resolve it if we are the recipient of the payment, or forwarding it if we are just an intermediate hop. This can be used to apply some policy that can't be implemented in lightningd itself, or to delay resolution until we know we can actually deliver whatever we promised in exchange for the payment.
  • resolve claims the HTLC using a provided payment_key. This can be used to short-circuit payments, move payment processing out of lightningd (cross-chain atomic swaps), or implement external invoicing functionality.
  • fail fails the HTLC directly.
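
For illustration, here is a minimal sketch of a plugin registering the hook, using the pylightning Plugin helper (the exact field names, such as msatoshi, and the numeric failure code are assumptions and may still change):

#!/usr/bin/env python3
"""Sketch: a minimal plugin registering the htlc_accepted hook."""
from lightning import Plugin

plugin = Plugin()

@plugin.hook('htlc_accepted')
def on_htlc_accepted(onion, htlc, plugin, **kwargs):
    # Made-up policy: fail anything above 0.001 BTC.
    if int(htlc['msatoshi']) > 100000000:
        return {'result': 'fail', 'failure_code': 0x400F}
    # With a known preimage we could instead short-circuit the payment:
    # return {'result': 'resolve', 'payment_key': preimage_hex}
    return {'result': 'continue'}

plugin.run()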

The open questions mentioned above are about how to handle recovery on restart if we got interrupted after the HTLC was accepted, but before the plugin result was processed, i.e., we asked the plugin but couldn't act on its decision yet. The current plan is to just call the plugin again on startup, to ensure we act on its decision, but the downside is that it'd require the hooks to be idempotent.

@cdecker cdecker self-assigned this Jan 14, 2019
@cdecker cdecker added the plugin label Jan 14, 2019
@cdecker cdecker force-pushed the htlc_accepted_hook branch 4 times, most recently from 3295f3e to a7a3f07 on January 15, 2019 18:15
@bitonic-cjp (Contributor)

Reading mostly the example plug-in code, I wondered: what is the impact if the plug-in can't immediately resolve the transaction? I think this is a very common case, especially if the plug-in forwards the transaction to some other actor.

In the current interface, at least in the Python example, it seems the plug-in call must return the definitive status of the transaction; it can't do so until it knows the definitive status, so it must block. I assume this translates directly into the stdin/stdout protocol.

I guess this interface has at least one undesirable consequence: the plug-in is blocked from accepting other calls until this transaction is completed. I really don't like this consequence, and I don't think it's true for regular Lightning transactions that pass the node so it shouldn't be true for this plug-in assisted type of transactions either.

Another undesirable consequence could be that, as long as the plug-in function blocks, lightningd blocks as well? I haven't looked enough at the lightningd to know whether this is the case. Obviously this is also undesirable, but if the interface is (re)designed such that the plug-in never has to block, the problem is greatly reduced.

If the plugin function doesn't block, this means that, once it knows the transaction status after a while, it has to call back to lightningd to inform lightningd about the transaction status (and preimage in case of success). Another interface will be needed for this.

@bitonic-cjp (Contributor)

BTW thanks for developing this. It's awesome to finally see something like this getting picked up!

@cdecker (Member, Author) commented Jan 16, 2019

I guess this interface has at least one undesirable consequence: the plug-in is blocked from accepting other calls until this transaction is completed. I really don't like this consequence, and I don't think it's true for regular Lightning transactions that pass the node so it shouldn't be true for this plug-in assisted type of transactions either.

That is certainly true for the current implementation of the plugin framework in pylightning; however, it is not intrinsic to the plugin system in general. I am planning to add the possibility of returning asynchronous results from the methods invoked by RPC method and hook calls, so that we can hand off those result objects to other threads, or postpone their completion at will. The result object would then have two methods, set_result and set_failure, that send back the result of the call. So you'd be able to define an async method for rpcmethod or hook calls, stash the given result object while you are working on it, and eventually return the result or failure as you want.

This is how futures are commonly used, but the Python interpretation of futures is tightly bound to multithread or multiprocess executors, which is why I chose not to use that name here.
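
Roughly, the plugin-side usage I have in mind would look something like this (all names are tentative; the async hook variant does not exist yet):

import threading
from lightning import Plugin

plugin = Plugin()

@plugin.async_hook('htlc_accepted')  # tentative async variant of @plugin.hook
def on_htlc_accepted(onion, htlc, plugin, request, **kwargs):
    # Stash `request` and return immediately; the plugin stays responsive.
    def resolve_later():
        # ... wait for some external event, then complete the call:
        request.set_result({'result': 'continue'})
        # or request.set_failure(...) to report an error
    threading.Thread(target=resolve_later, daemon=True).start()

plugin.run()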

Another undesirable consequence could be that, as long as the plug-in function blocks, lightningd blocks as well? I haven't looked enough at the lightningd to know whether this is the case. Obviously this is also undesirable, but if the interface is (re)designed such that the plug-in never has to block, the problem is greatly reduced.

This is not the case: as I mentioned, blocking in the Python library can and will be solved in the future (as soon as I find time to do so, in fact). lightningd will not block, and even the channel we have called the hook from will remain fully functional, since we call the hook after the commit and are just waiting on how to resolve that HTLC. Specifically, you can continue to add and remove other HTLCs as they come in.

If the plugin function doesn't block, this means that, once it knows the transaction status after a while, it has to call back to lightningd to inform lightningd about the transaction status (and preimage in case of success). Another interface will be needed for this.

That's exactly what I wanted to avoid with the async variant of the hooks, since adding new RPC methods also exposes them on the normal RPC interface, as well as requiring additional boilerplate to demux incoming calls to the correct HTLCs. With the async hooks we hand all the context from the plugin_hook_call_ to the callback, and the demuxing is already done on JSON-RPC result dispatch 😉

@cdecker (Member, Author) commented Jan 16, 2019

Thanks for the excellent feedback btw, I noticed that I never wrote down these things, so this helps me document my plans for the future a bit better.

@cdecker cdecker force-pushed the htlc_accepted_hook branch 2 times, most recently from 67b80fe to eb44b20 on January 23, 2019 17:21
@cdecker cdecker force-pushed the htlc_accepted_hook branch from eb44b20 to 0841d2f on January 27, 2019 18:48
@cdecker cdecker force-pushed the htlc_accepted_hook branch from 0841d2f to 1324450 on February 4, 2019 17:31
@bitonic-cjp (Contributor)

What would be the recommended way to build large applications around this? I can see a division between an RPC-calling part (possibly a GUI or another front-end) and a plugin part. My question is how much functionality should be in the plug-in part.

What intuitively concerns me most is the potential impact of plugin crashes. I try to prevent crashes by making good software, but a bit of a defensive approach shouldn't hurt, so I'd also like to design a recovery path for the (hopefully unlikely) case a crash happened. Let's design systems where small problems don't escalate.

Since a plugin is started by lightningd, either lightningd should have a policy on what to do in case of a plugin crash, or a higher-level application should handle this. In the second case, the higher-level application should not just check lightningd's health, but also its plugins. In case the policy is to restart on crash, it should probably restart lightningd entirely to get a fresh plugin process.

Detecting a plugin crash and restarting it is more involved than doing the same for a stand-alone process that is a user of lightningd. This could be an argument for moving as much functionality as possible out of the plugin and into the RPC-calling part (to minimize complexity in the plugin). In the extreme case, the plugin is just a pass-through which, for instance, forwards messages between the stdio interface and some socket.
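
For illustration, such a pass-through could be as thin as this sketch (the backend address is hypothetical; all real logic, including the plugin manifest handshake, would live in the backend):

#!/usr/bin/env python3
"""Sketch: pass-through plugin shuttling bytes between stdio and a socket."""
import socket
import sys
import threading

sock = socket.create_connection(('localhost', 9000))  # hypothetical backend

def from_backend():
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        sys.stdout.buffer.write(chunk)
        sys.stdout.buffer.flush()

threading.Thread(target=from_backend, daemon=True).start()
while True:
    chunk = sys.stdin.buffer.read1(4096)
    if not chunk:
        break
    sock.sendall(chunk)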

Is this the recommended way to go?

@bitonic-cjp (Contributor) commented Feb 19, 2019

I'm slowly getting more familiar with this hook. Based on htlc_accepted_hook_serialize, I understand the parameter format is something like this:

'onion':
	{
	'hop_data':
		{
		'realm': hex,
		'short_channel_id': str,
		'amt_to_forward': int,
		'outgoing_cltv_value': int,
		},
	'next_onion': hex,
	'shared_secret': hex,
	},
'htlc':
	{
	'msatoshi': int,
	'cltv_expiry': int,
	'payment_hash': hex,
	}

Some comments:

  • I always think of the realm number as an integer. Why is it a hex string here?

  • short_channel_id, amt_to_forward and outgoing_cltv_value only have meaning for realm 0.
    Quoting BOLT 4:

    currently, only realm 0 is defined, for which the per_hop format follows:

    I'd like to have a way to receive the raw per_hop data instead of these three values.

Edit:

  • I don't know if this is the case, but I'd like cltv_expiry to be relative to the current block height, so I don't have to query the current block height to know how much time I probably have to get the preimage.

@cdecker (Member, Author) commented Feb 19, 2019

  • I always think of the realm number as an integer. Why is it a hex string here?

With the upcoming multi-frame proposal we'll be storing the number of additional frames used for the payload in the realm byte. In hex, the first digit gives the number of additional frames, and the last digit is the actual realm that determines the payload format.
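
For example (sketch), a realm byte of 0x21 would mean two additional frames with an actual realm of 1:

realm_hex = '21'                           # hypothetical multi-frame realm byte
additional_frames = int(realm_hex[0], 16)  # 2 extra frames used by the payload
realm = int(realm_hex[1], 16)              # actual realm is 1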

  • I'd like to have a way to receive the raw per_hop data instead of these three values.

Agreed, I'll add the raw payload to the hook call.

  • I don't know if this is the case, but I'd like cltv_expiry to be relative to the current block height, so I don't have to query the current block height to know how much time I probably have to get the preimage.

We can certainly add a cltv_expiry_delta to the payload, giving the delta between the HTLC's cltv_expiry and the current block height.
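
That would spare plugins this lookup (a sketch of what they currently have to compute themselves):

# Without the new field a plugin must query the block height itself:
current_height = plugin.rpc.getinfo()['blockheight']
blocks_remaining = htlc['cltv_expiry'] - current_height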

@cdecker cdecker force-pushed the htlc_accepted_hook branch from 1324450 to fb1ae99 on March 29, 2019 12:10
@cdecker cdecker force-pushed the htlc_accepted_hook branch 2 times, most recently from e8a468b to ef1d19b on April 9, 2019 11:23
@cdecker cdecker force-pushed the htlc_accepted_hook branch 2 times, most recently from f07c8b4 to 071898c on April 25, 2019 18:33
@cdecker (Member, Author) commented Apr 25, 2019

After mulling over the restart issue (what to do with HTLCs that were accepted, but where we restarted while the plugin was still deliberating about what to do with them), which was brought to my attention by @rustyrussell and @niftynei, I think we have some options here.

I wrote a test in which a plugin just holds on to an HTLC and then restarts the node. This results in the node having called the htlc_accepted hook, but not yet having reacted to it. Upon restart, the HTLC is automatically failed and removed from the channel, since it didn't match an invoice and we didn't forward it either:

/* Now fail any which were stuck. */
for (hin = htlc_in_map_first(&unprocessed, &ini); hin;
     hin = htlc_in_map_next(&unprocessed, &ini)) {
	log_unusual(hin->key.channel->log,
		    "Failing old unprocessed HTLC #%"PRIu64,
		    hin->key.id);
	fail_in_htlc(hin, WIRE_TEMPORARY_NODE_FAILURE, NULL, NULL);
}

This means that the plugin might have been called, triggering some side-effect, and then the HTLC gets failed anyway. If we are updating some sort of external invoice-tracking system, for example, this is highly undesirable.

I have the following possible proposals:

  • Just document that the hook should only be used to make decisions about how to proceed with the HTLC, and not have side-effects. Any side-effect could be offloaded to an invoice_paid or htlc_forwarded hook which would be replayed upon restart. I think this was also proposed by @rustyrussell at some point. This is basically a 2PC protocol that defaults to failure.
  • Add an additional state to the HTLC state-machine that just means the HTLC is handled by a plugin and therefore should not be automatically failed, and the call to the hook should be replayed. This means that if we were waiting for some external event before deciding what should happen with the HTLC, we can reconnect with that logic and resume from there. This pretty much matches the internal forwarding case, where we keep accepted HTLCs as long as they have a downstream counterpart; only that in this case the downstream is external, not internal.

In either case we would need to also document that the hooks that may trigger external data changes must be idempotent.

The former is very simple but I don't currently see how it could safely be used to forward a payment on another chain (e.g., we receive an incoming HTLC, call the plugin, which triggers a forward on another daemon, but then we restart, and lose the incoming HTLC despite having already forwarded the payment on the other system). For the latter I have a partial implementation that just replays all HTLCs (calls peer_accepted_htlc on restart), but I wanted to ask your opinion before finishing that proposal up, since it changes the behavior of the node a bit (resume HTLCs after restart instead of failing them).

WDYT @rustyrussell @niftynei @bitonic-cjp ?

@cdecker (Member, Author) commented Apr 25, 2019

And like always the solution comes minutes after I hit submit 😉

For the case of forwarding on another chain we can introduce a new outcome to the hook: hold, marking the HTLC as to be kept and replayed on restart. Meanwhile the plugin will use something like a resolve-htlc JSON-RPC call to resolve HTLCs that were marked as hold.

This would involve adding a new state-field to the struct htlc_in and the channel_htlcs table:

enum htlc_mark {
  ACCEPTED,
  HELD,
  FORWARDED,
};

This is orthogonal to the enum htlc_state and only has meaning if the hstate is RCVD_ADD_ACK_REVOCATION, i.e., the HTLC is fully committed.

If the HTLC is marked as ACCEPTED we replay the htlc_accepted hook, from there it may proceed to HELD if the hook returns {"result": "hold"} until it gets resolved with resolve-htlc. For the other cases it either proceeds to FORWARDED (if the hook told us to continue and/or we needed to forward the HTLC) or the hstate can switch to the next normal state on fail or resolve.

Notice that FORWARDED is basically just a materialization of what we are already doing, i.e., keeping the HTLC if there is a matching outgoing HTLC, but I thought it's easier not to clobber the HELD mark with something that is not plugin related.

The major advantage is that we can defer the actual resolution to an asynchronous JSON-RPC call later, instead of forcing the plugin to tell us exactly how to proceed before we can do anything else. That's important for cross-chain routing (@bitonic-cjp's main focus), since otherwise we'd be completely stuck until the downstream payment is resolved.
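
From the plugin's side the flow would then look roughly like this (both the hold result and the resolve-htlc call are proposals at this point, and start_downstream_payment is a hypothetical helper):

pending = {}  # payment_hash -> incoming HTLC we are holding on to

@plugin.hook('htlc_accepted')
def on_htlc_accepted(onion, htlc, plugin, **kwargs):
    pending[htlc['payment_hash']] = htlc
    start_downstream_payment(htlc)  # e.g. kick off the payment on the other chain
    return {'result': 'hold'}       # proposed: mark the HTLC as HELD

def on_downstream_settled(payment_hash, payment_key):
    # Proposed JSON-RPC call to resolve an HTLC that was marked HELD:
    plugin.rpc.call('resolve-htlc', {'payment_hash': payment_hash,
                                     'payment_key': payment_key})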

Would that work, or is this torturing the system too much?

@niftynei (Contributor)

So I have two thoughts. First, it feels like we should have 2 separate checks: one for HTLCs that are 'destination us' and one for HTLCs that are 'destination next hop'. On the face of it, this may appear to leak information as to whether or not we're the final node, but I'd argue that whether it does or not in practice will depend on the behavior of the calling hook. As is, I think we probably 'leak' our 'destinationness' with how quickly we resolve HTLCs that are for us anyway, so practically this isn't much of a change.

My worry about keeping them as the same hook and also adding a HELD state is that some local systemic failure may create unnecessary delays for other network participants. One downside to the HELD and callback process is that the resolution isn't certain. For this reason and the plain simplicity of it, I prefer the first proposal, i.e.

Just document that the hook should only be used to make decisions about how to proceed with the HTLC, and not have side-effects. Any side-effect could be offloaded to an invoice_paid or htlc_forwarded hook which would be replayed upon restart. I think this was also proposed by @rustyrussell at some point. This is basically a 2PC protocol that defaults to failure.

However, if we separate out the hooks, one for 'destination us' and one for 'destination next hop', the HELD state only affects payments that we might receive, which feels totally kosher.

Apologies if I've misconstrued what you're proposing, I'm not entirely convinced I grok the original purpose of this hook.

@bitonic-cjp (Contributor)

So it seems we now have three proposals:

  • The htlc_accepted hook must not have side-effects. I understand the simplicity of this approach, but as you expected, this makes the plugin hook completely useless for my use case, so I consider it unacceptable.
  • The htlc_accepted hook must be idempotent, and it will be called again for unfinished transactions on restart. This is acceptable to me. Since this was the oldest proposal, my code currently takes it into account (in the form of a TODO comment "check if we already have this one").
  • The htlc_accepted hook can have a 'hold' outcome; the plugin can use an RPC command to resolve on-hold transactions. To me, this is the preferred approach: it allows for 'forwarding' applications like mine, without requiring idempotence of the htlc_accepted hook.

In case of the third approach, you might consider adding an optional plug-in interface, allowing lightningd to ask a plugin whether it is processing a certain transaction. Lightningd would typically call this on restart for on-hold transactions and for transactions for which htlc_accepted hadn't returned anything yet at the time of a crash. If the plugin offers this interface, and the plugin answers "no, I'm not processing that transaction", then lightningd can safely cancel the incoming transaction long before its time-out. This is an optimization, and can be added as a future improvement.
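
For illustration, such an interface could be as simple as this (hook name and result format are entirely made up):

@plugin.hook('htlc_processing_query')  # hypothetical hook, called on restart
def htlc_processing_query(payment_hash, plugin, **kwargs):
    # A negative answer lets lightningd fail the incoming HTLC long
    # before its time-out.
    return {'processing': payment_hash in pending}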

@cdecker (Member, Author) commented Apr 29, 2019

So I have two thoughts. First, it feels like we should have 2 separate checks: one for HTLCs that are 'destination us' and one for HTLCs that are 'destination next hop'. On the face of it, this may appear to leak information as to whether or not we're the final node, but I'd argue that whether it does or not in practice will depend on the behavior of the calling hook. As is, I think we probably 'leak' our 'destinationness' with how quickly we resolve HTLCs that are for us anyway, so practically this isn't much of a change.

I think we have two ways of addressing this: either we provide two separate hooks (one for payments that are destined for us, and one for HTLCs that were originally going to be forwarded), or we determine the original action (forward or terminate) before calling the hook and just add that to the hook payload. I tend to prefer the second option, since local accept and forward aren't all that different, and I prefer smaller interfaces, but I do agree that indicating what we would have done is better than forcing a plugin to figure it out on its own. For cross-chain routing and most of the interesting cases, we want to change from one case to another anyway: cross-chain routes take what would have been a failing forward and make it into a normal forward; early termination, like we discussed for some high-availability setups, would take a forward and turn it into a local resolution.

However, if we separate out the hooks, one for 'destination us' and one for 'destination next hop', the HELD state only affects payments that we might receive, which feels totally kosher.

But like @bitonic-cjp mentions, HELD is needed for some forwarding cases as well.

Apologies if I've misconstrued what you're proposing, I'm not entirely convinced I grok the original purpose of this hook.

No problem at all; it's a very flexible hook that allows a lot of cool functionality, however it is also very abstract, which makes the discussion all the more "interesting" 😄

In case of the third approach, you might consider adding an optional plug-in interface, allowing lightningd to ask a plugin whether it is processing a certain transaction. Lightningd would typically call this on restart for on-hold transactions and for transactions for which htlc_accepted hadn't returned anything yet at the time of a crash. If the plugin offers this interface, and the plugin answers "no, I'm not processing that transaction", then lightningd can safely cancel the incoming transaction long before its time-out. This is an optimization, and can be added as a future improvement.

Good point, we might want to check with the plugin on restart whether it is still aware that we are waiting on it to make a decision. If we squint at it though, introducing the HELD status is more of an optimization than its own solution: if the plugin crashes after making a decision but before returning it to lightningd, it'll still get a replay of the hook on restart, so the plugin still needs to be idempotent. What it does allow is deferring the resolution until a later time, and making things asynchronous again.

I think we're pretty good with requiring the hook to be idempotent, replaying incoming HTLC hooks on restart (if we have a plugin that has registered the hook, failing them automatically otherwise to keep existing semantics). In a second PR we can then introduce the HOLD action that plugins may return, which marks the HTLC as HELD but still always replays the hook call on restart (in lieu of periodically checking in with the plugin).

Replaying on restart allows the plugin to be stateless, while still learning about any HTLCs for which we expect it to return a result.
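
Concretely, a hook implementation with side-effects would have to guard against such replays, roughly like this (in-memory set for brevity; a real plugin would persist it across restarts, and notify_invoice_system is a hypothetical side-effect):

handled = set()

@plugin.hook('htlc_accepted')
def on_htlc_accepted(onion, htlc, plugin, **kwargs):
    h = htlc['payment_hash']
    if h not in handled:  # perform the side-effect at most once per HTLC
        handled.add(h)
        notify_invoice_system(h)
    return {'result': 'continue'}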

@cdecker cdecker force-pushed the htlc_accepted_hook branch from 17b8681 to 9d3b9da on May 22, 2019 11:08
@@ -735,6 +735,7 @@ static void htlc_accepted_hook_serialize(struct htlc_accepted_hook_payload *p,
 	json_object_start(s, "htlc");
 	json_add_amount_msat_only(s, "amount", hin->msat);
 	json_add_u64(s, "cltv_expiry", hin->cltv_expiry);
+	json_add_u64(s, "cltv_expiry_delta", hin->cltv_expiry - p->ld->topology->tip->height);
Contributor:

Isn't cltv_expiry_delta the difference between the cltv_expiry of the incoming HTLC and the cltv_expiry of the outgoing one?

Contributor:

Yeah, maybe call this "cltv_expiry_relative" and make it a signed int just in case?

@@ -0,0 +1,27 @@
+#!/usr/bin/env python3
+"""Plugin that holds on to HTLCs indefinitely.
Contributor:

Nitpick: it looks to me like the plugin holds on to the HTLC for only 10 seconds, not indefinitely.

@rustyrussell (Contributor) left a comment:

OK, I went through this with a fine-tooth comb, and there are some annoying changes, but all minor. Very nice work!

You also need to add to doc/PLUGINS.md and CHANGELOG.md.

const struct htlc_in *hin = p->hin;
json_object_start(s, "onion");

json_object_start(s, "hop_data");
Contributor:

I think we need a hop_data_raw field, which is the literal hex, which is always present. Then this object should be called "realm0" and only present if realm is 0. That makes the API more future-proof.

Contributor:

Ah, I see this in the later patch. We should use spec naming for our APIs, so this should be called "per_hop" and probably "per_hop0" for realm 0.

Member Author:

Changed name to per_hop_v0 and added an if statement around this object that will cause a compilation error once we merge PR #2689 due to it referencing the hop_data->realm.

json_add_u64(s, "outgoing_cltv_value", rs->hop_data.outgoing_cltv);
json_object_end(s);

json_add_hex_talarr(s, "next_onion", p->next_onion);
Contributor:

What is this value if we're the final hop? Does it make sense?

Member Author:

It's an onion that is mostly made up of 0s. It can be sensible if we stuff some extra data into hops (not that I want to encourage that, but...). It is definitely a set value, but not sure how much can be learnt from it.

For reference this is the final onion after running through the test vector:

Generated onion: 0002eec7245d6b7d2ccb30380bfbe2a3648cd7a942653f5aa340edcea1f283686619e5f14350c2a76fc232b5e46d421e9615471ab9e0bc887beff8c95fdb878f7b3a71da571226458c510bbadd1276f045c21c520a07d35da256ef75b4367962437b0dd10f7d61ab590531cf08000178a333a347f8b4072e216400406bdf3bf038659793a86cae5f52d32f3438527b47a1cfc54285a8afec3a4c9f3323db0c946f5d4cb2ce721caad69320c3a469a202f3e468c67eaf7a7cda226d0fd32f7b48084dca885d15222e60826d5d971f64172d98e0760154400958f00e86697aa1aa9d41bee8119a1ec866abe044a9ad635778ba61fc0776dc832b39451bd5d35072d2269cf9b040d6ba38b54ec35f81d7fc67678c3be47274f3c4cc472aff005c3469eb3bc140769ed4c7f0218ff8c6c7dd7221d189c65b3b9aaa71a01484b122846c7c7b57e02e679ea8469b70e14fe4f70fee4d87b910cf144be6fe48eef24da475c0b0bcc6565ae82cd3f4e3b24c76eaa5616c6111343306ab35c1fe5ca4a77c0e314ed7dba39d6f1e0de791719c241a939cc493bea2bae1c1e932679ea94d29084278513c77b899cc98059d06a27d171b0dbdf6bee13ddc4fc17a0c4d2827d488436b57baa167544138ca2e64a11b43ac8a06cd0c2fba2d4d900ed2d9205305e2d7383cc98dacb078133de5f6fb6bed2ef26ba92cea28aafc3b9948dd9ae5559e8bd6920b8cea462aa445ca6a95e0e7ba52961b181c79e73bd581821df2b10173727a810c92b83b5ba4a0403eb710d2ca10689a35bec6c3a708e9e92f7d78ff3c5d9989574b00c6736f84c199256e76e19e78f0c98a9d580b4a658c84fc8f2096c2fbea8f5f8c59d0fdacb3be2802ef802abbecb3aba4acaac69a0e965abd8981e9896b1f6ef9d60f7a164b371af869fd0e48073742825e9434fc54da837e120266d53302954843538ea7c6c3dbfb4ff3b2fdbe244437f2a153ccf7bdb4c92aa08102d4f3cff2ae5ef86fab4653595e6a5837fa2f3e29f27a9cde5966843fb847a4a61f1e76c281fe8bb2b0a181d096100db5a1a5ce7a910238251a43ca556712eaadea167fb4d7d75825e440f3ecd782036d7574df8bceacb397abefc5f5254d2722215c53ff54af8299aaaad642c6d72a14d27882d9bbd539e1cc7a527526ba89b8c037ad09120e98ab042d3e8652b31ae0e478516bfaf88efca9f3676ffe99d2819dcaeb7610a626695f53117665d267d3f7abebd6bbd6733f645c72c389f03855bdf1e4b8075b516569b118233a0f0971d24b83113c0b096f5216a207ca99a7cddc81c130923fe3d91e7508c9ac5f2e914ff5dccab9e558566fa14efb34ac98d878580814b94b73acbfde9072f30b881f7f0fff42d4045d1ace6322d86a97d164aa84d93a60498065cc7c20e636f5862dc81531a88c60305a2e59a985be327a6902e4bed986dbf4a0b50c217af0ea7fdf9ab37f9ea1a1aaa72f54cf40154ea9b269f1a7c09f9f43245109431a175d50e2db0132337baa0ef97eed0fcf20489da36b79a1172faccc2f7ded7c60e00694282d93359c4682135642bc81f433574aa8ef0c97b4ade7ca372c5ffc23c7eddd839bab4e0f14d6df15c9dbeab176bec8b5701cf054eb3072f6dadc98f88819042bf10c407516ee58bce33fbe3b3d86a54255e577db4598e30a135361528c101683a5fcde7e8ba53f3456254be8f45fe3a56120ae96ea3773631fcb3873aa3abd91bcff00bd38bd43697a2e789e00da6077482e7b1b1a677b5afae4c54e6cbdf7377b694eb7d7a5b913476a5be923322d3de06060fd5e819635232a2cf4f0731da13b8546d1d6d4f8d75b9fce6c2341a71b0ea6f780df54bfdb0dd5cd9855179f602f917265f21f9190c70217774a6fbaaa7d63ad64199f4664813b955cff954949076dcf
Processing at hop 0
  Realm: 0
  Payload: 0000000000000000000000000000000000000000000000000000000000000000
  Next onion: 00028f9438bfbf7feac2e108d677e3a82da596be706cc1cf342b75c7b7e22bf4e6e203445185327b8cbf5c5bfa27f825f3a9af4f431f6e7a16ad786704887cbd85bd93ea121ae2ef7bd3e013160bcd3f3221de72fb18b2817404cb6ed0f761eacda9a22e464b80d7772900ad5040e65374840ee298625c75768d6199fa32154615bb0bd8d9876f25d08d2569c04c58f1b425b6719fe213327d45f13b277a6334bd175ae300b29fe5456be97ea49b4b0b4501a96b8432e111c936ae7d94c9de13ecda4ed4b79d2d3e3936e59384ae852d338141b196ea5fe26ad429bf1372a003730981f0bb85a7e9c07d9fba335719c04ec58607918874c513a94c6426bd0181a3233ba12283b03be24cb8e2a530a8cb92b9bed43889267aab86d69d3d72f734e6768324125c75b25988ac2aab93da5939e2059fc0d5479d7c404b6bd1e5a9e995e47cb6966bb063377aedd689052f0a72c6576548f7264989ccd9838068b56cf7e3620f88627276674be6376007208afb027234b9e2a0267a51141ffa8ae1f3da72a8604417e63cffc3d5c4263c7e96c7ef2baa91dcf629efb87a088550b6165686c75ff9f60f24d79b5e560ceb38e987f0371cd002c1ebd09cfe3bfa1191ea1e4b23886d20a64a4526584d7beb2db9c51c3895d84cc9ecb15bff52cc64320c1f79f7d3659e2ee54a4aeed1cebb412eacd782f83ae2288c623e1a1256370262fd1afbb1a61d5e323d5bd679690105b597f027141d035a0f6ec4ef468d9a3ab39b9d54f1817878b745567f181b6678b9900299c11f21c92618da90cb10b2ecbef3d07a0b78cd35ddcc11d02e2e859f9296687268979557b99a9eaffd0875bdce1c03b1f507537916d0aba5aa437c3611ddf9e6975fc82545b67344a4f02125b08a27873079e4490519ec839a105638728d6b2dffa8024b6c4eb08dd793fbd7e53aac0e327401e1de38f826d7ac66beea3925232b0c7d086095a474b1c4f5dbb5651f5de1d1699b90bb500202673d867d00654244ece624c820d7a84db2c5e9515ace71158e45d52790a47b3b7c33ffda69b04dcf107a61a5bb0e6ab6bf06d6b44552d823f2b5969ccbe2f3986cb7180bf7acad66684117c391ff428a2023d7d29063766bb9fbb2c276c3067fab08c7fcf18fa3741b2bd409a58a4fc67d9cffde1faeb39dd86b21fa9b7930c2c6b8d28e4aeea263b3bec2bbc95b939c6ad208689204b83ea8eec2e90a42c0fde1ca8de5aaacd8a614328f83480b4a5bb5e12546ba0e56aef9e88d813dcc7c83f905c91bc5853b7d734a160f3e0efc10237aae97767abea0181c48861f2fabe52a38bcd7859d25ac5dc80eb457615d5064d77a81516b97f90c2feab7861dd9728d51a05247ef5defa05d77486812bd909195ef11c772b374d6cbe18a218af26aa19c72de077307567fa90b18eb1bc77a357ab09cc637b93d5d55793ea075285d60b04ca48bcdd45a32f55932358687db09440ff4ba255fa72e4dacee47e9029bfc16af51e121efdeb189b31dd5be103ef9dd2b5eac641768a81fce1056e6416cb8fc53b8fe946f8e37ab1f94520c3fb0d01dc15207cdec370b0fe76e5234343db30b2f8e0afdc50f1d52d761fbcd3dfcca053f01cfd3c43763546f9d5fc66e1adba7c9d66af0f61d27622f12372c4a75af5861151dcd2da571c7e90c7bde4dae9aea645f85c0781e4472e2b68d63d6789e3dc7ccb543ec47d68d1fa700c0081908a8d2e4687b6e826f4254bc169921b7e02643f3faa0264c7ff77a52205a9388d0a98c0394dc4dee81989115ade30d309374fea8435815418038534d12e4ffe88b91406a71d89d5a083e3b8224d86b2be11be32169afb04b9ea997854be9085472c342ef5fca19bf54799b122c79c8aee73ea2cdbc22eca15bbcc9409a4cdd73d2b3fcd4fe26a492d376
  Next HMAC: 9b122c79c8aee73ea2cdbc22eca15bbcc9409a4cdd73d2b3fcd4fe26a492d376
Processing at hop 1
  Realm: 0
  Payload: 0101010101010101000000000000000100000001000000000000000000000000
  Next onion: 0003bfd8225241ea71cd0843db7709f4c222f62ff2d4516fd38b39914ab6b83e0da0e22ca641baa077154733abc584fae578ea0ed4c1871d0554235171e45e1e2a1868760d4c61ed34c17bc188784a4684939fd54d7b4aa9d83c21aec9ad843035b63836277ec6404d85da476af5cfbce7a392d4b8c931b67629c04ff70c08e51fa8d73eaef8c0bdad85ad9872ba87afe0846b75a05e004a744b02687a8c3110a88471b7faea10de1025016f6fcb6f685abdaf55c7908fd922f531d27e206307f7c9086355a351298fd6ed1f05d8724a6b159f52ed4cb988cde07f1d0eb0384803511b480ed31a1ecb7320bbd98da168ae64de73347d59df8ed0588aba661dc2d0bba2bc74ac0d2c479f466ee73def4a32cd806b0966b8f535e742c6a907af0f3dad003016db95194d381109c53499efcf4d868677c3a6dc4a184eccc2198aec260673a801814e60405670f557e618bb3b5df1e51cc90d995d0832b307c102b3fd1d083f52794ccb5a185885280dbd9e36ce64ddf320b6a7e340180e4f525c682bcd1f9cfa3ae5bbd8cdbd83e3d291e98606a7890bd8abbcf57aea1492fc1d6c876cad86446fc85b2db47ff2c6ea384fdda8bc53e81152687a702318c91f657253403612393ec7f864ddf5e807497cb88816f736f08a593d8d1e2c46c13675de2b2aa8c383b8610a5c772bbcf831e426f750c6358e623d9938890aa1d70297c4e4d238b14967f48da074ca469212336a77a2fe0fc80dc2ecd59a2ca389440cb2c1f9835786ad7e1f5bf3ceb27c8abe10178bffcbd888a9b31b9b9a5e308cefe00bdc25c82597dc22ff814b4dcb717be6ec6d87cf25dbb9fde6726461d504d4b9707d708238415ca3dc6e86d3738b902ff4a449597fcac06757fd97e0fe52c1addf7b34731366e2a5694883c60f4c9676ea6167a9fbf8b7d64932676e11fd38f7338e8954236ae31dbd7e2732d74a16b4d0809229ce44e696ceb383c41375ea0e40686f4e0c3967fa22a9e8a2ebb637816c8d1875edf86b1a4998b8525708a027a57852e28e7e84ab3d0db6b73b7e6775604d317139bc0146fedf45a17bc8b3559d4921cb87c51a21f374e040da55e013f4748b22e58f3a53f8ba733bf823eec2f2e8250d3584bd0719bdc36c1f411bfc3d6baf0db966ff2f60dc348da8ed5d011634677ff04c705b6a6942eaffbea4dc0ff5451083f5f7117df044b868df8ae2291ee68998f4c159e885ebcd1073f751899557546e5d8ddbee87a5747e5fa3f5610adf7dece55a339276e7efc3d9a152f222dd3f0452ee4ec56af7a5c1f3b9130ccf1c689e8e1401e8b885ca67349b69526ff523f217c3b36c643a9c46c46b0dc2d0f1d80c83435f752740ee7e423a59badfd5981706ec62d38ce07295ef4fbe23a4ab6cf85a2289e09cc5ad0bae981d3f42e9c6c85eeaff1f257e8ab99c2e93754e88ccd23fe3ad9c78be182b3944b79877aa1eced48b8fcf1f51c5cd01c4b2b111c97f0338e0efccc61e728e784c51d50371f453d926b02fae5c46e118a2f23a28d6f0830ec04b736ed84cf09ea1b0e228d13d7c8794ae6f363d100538a9baa9fbe533a909717dcce4c012d7f258aaee4b41c2e3f1bfe44a652ba8da51dc67164c43112805d474372b9076f3f3f0e7af94604f4fcddd2136dd7dc80ce3ff845924462cf52f5818aebf3b64f38f98edf8cb9c460477a2f94b891573929c4b51deafe6db81bc30680f7226a68567588f195ce96a791e28204b9b5844c28a61736ac20722fe156175210c7b9b6e1804c89c0a7ee136597b5b3de5d54be23671bc9477805ba10d03afb0715782845d2ab45df012f6644207cc5fa4739aa3eaf6bf84e790128aa08aede33bf30c6be2b264b33fac566209548e58057ab0a0e6c2d8ad8e855d89f9224279a5652895ea14f60bffb81590eb
  Next HMAC: 548e58057ab0a0e6c2d8ad8e855d89f9224279a5652895ea14f60bffb81590eb
Processing at hop 2
  Realm: 0
  Payload: 0202020202020202000000000000000200000002000000000000000000000000
  Next onion: 00031dde6926381289671300239ea8e57ffaf9bebd05b9a5b95beaf07af05cd4359535dd8ba6f4e53e2f0372992046827fc994c3312b57591844fc713c0cca433626a0b62453d756641c1372dfde1a586ef971823104027062465283f8ff9b89ea3c3daf3e17e529ce3c0c58316f255a42848df14bce0785a4653c84ac8f58bf89671cd0d4b692e9e42584a52509e03f8769bb4ef6a0f6f4002c81011da7f5f9a8da9b19bd25cb25aeb3873c5fa95396b56900847794a97935e542637ef5f42f088c6e24e8e95de7be71d110872c84a60a53bbd789f3a58c1072c5fbb84192dd5f780deb14bdbbf458be297eb3244499a7921db98912625c88350cb29e71ca8d788bfb9bd11146e48659b0d86afe13b47685ec49d36e57292d819442ba516d365be31e7980c0e7b14f40807393b1391d9b83c30191d4bfe49c143da2848d2d7bfd54c32bdfdaefcd4cb84ea895e2305745d2ae257b0414563224af66f3b25d3d451bdd8331670e9dcac78b0690c4ae849d0bd8e956dbc56b5a8114008e9dc54f1252544a45861521efd6376b6c0fb34c1c433b677c08d6bf5356e81772bfeed5a894b590859cd409dedd1cd9881c3b8caed81d4d8d834dbd3e870a39e69d6a1ce2dc15df8f8c34d64d292b0b99bf17bb92b263176b65634dc8f368cf7b558fef5b259964f42ec94eeac74d0abd829baa9b04f89ea3606074a4aace03e7f2217f2dec855f4028392f64f0baddd178d56d84cab16c8484633f7c5e8e7f8651ce8a7f5ce1ca3a966562a0a2a7247d105a3114ab2adbd10cfd70b2765113c42e4d1cbcb8721a719f46b25d8579a670be085d1afd2d2e541c4a27bdfffe49674798cf690f28bf4f58ec1495561be071d1c94156f15c95f3426a7409680ef8cf335a0d4786bbb0bf108589c3c8baf10f04dc7f8beda2f70117f28c771a83890d4e1c47903d642fef8a93a216576f377db3ad5be3abe7eaa924e1e7de9ac88e07177e57cad41ef5a561f6bdb741961eddee56f3d9aef9fc4e52343f2efda3a2d40b8d2b4afa735cd5655d3014d21b41eba3a8cf34a8c29cbf94dfb9a4312ee0851c6e32896b33d52d991560e3ae97c769071453a4969374aecf7423f07ee17ce71111dc925294b0130249ad00d49e26dbfddcaf214dc8356cee3a0d7486fa6144c6c826be0892ceea30e52d7b64a13b08eda3e991f37e210ccd216108c847adb58df3b2a53553689f5119e41b4f4f221474c21e197d8a13e9cf7cf23875a0dbc6ddaeca9eb6d95add2a0054528d9dc03d59bc108157291b4caa0faa53f2511bec3088a164a0768d38d7fe3281ea8459abbe3a5b5f43c697e5cbdc995832220a3c6fcf1e82bd35f4b94b13cc9312efc3ca4e3c8199e862c551fa7916e3cb0ad17ba24e456fcd2159f9e81628c508da4c2de4825100fee1c3226d1d431a740d2d8ab0c04953b073d9a20919e0f2d03e97b3f34a1644cbeeab7d0c78d898e8c523fbccc75ffc329fafe45288e21c82817e23589d0c6db453be7d3f8ecfa0ae1d24ae80c63511193c718195255a576c1b43c8d941ba22f6d9f743a7dd558e08ac2262fd67056d60e837edd3e21ce23ef27d48c9180d845ce8c5d6d4c488fcf55af99aef02f3bdf6a07833660628a672b878f8b15427189e49bf2d9e0e7e63e3c5632d9ef1e412e9bbd732b616a353097e90494a098df6a21729f1d3658f91b1bde4aaaae58530ab0e402fc10eb0910c07cace1afd0aacb579690e6dcbc184025e4160cf4de3e47106339046724d098b5b7b92f5a2bb33c11f86d4953f372fdd9ebc260b0ee2e391420c4b11145bd439954834d9a79e78abc57e03d3ee20d239d8a13014976e3f057ab3c38ca79ee81ff8849d94dca37b0920cc3e720daed5f832ef34ea8d0d2cc0699134287a2739c77152d9edc8fe5ccce7ec838f
  Next HMAC: 0daed5f832ef34ea8d0d2cc0699134287a2739c77152d9edc8fe5ccce7ec838f
Processing at hop 3
  Realm: 0
  Payload: 0303030303030303000000000000000300000003000000000000000000000000
  Next onion: 0003a214ebd875aab6ddfd77f22c5e7311d7f77f17a169e599f157bbcdae8bf071f4f6a9e85e5c6a0458c98282b782ffe8efb2f3f0e2cae0283a71aa4d873686d7efeac03dcb113fd91113df4bb5d5e8c13cde4bd12eced21d367381fdc31efe09695d6ecc095083c1571d10ed076ee914745479dfedcab84efd3889e6375f544562ea4a6522035738a9ba1a9ab4204339569d3d0c217324f1d5f3a107e930ade50b913777d1140096e2b26ce47486b66a6740f026ae91429115e17270c5a542b4c82ce438725060248726956e7639da64a55cec00b61719383271e0ab97dafe44ad194e7338ea841bf25a505d0041eb53dc0d0e6c3f7986f6647ede9327467212a3ca5724ece8503bfaba3800372d747abe2c174cbc2da01dd967fc0bae56d2bc7184e1d5d823661b8ff06eb466d483d6fa69a4f8b592ebc3006b7c047055b12407925ea74c2bb09ca21f4bc9d39dc4a7237ef6facbb15f5ba56d8e2ab94ad24267bf85cea55b0605cd6260b4b0f17994a03e39bebfcad0834849b188e83bf5c954dc112d33b0fada26f25d76d53ec9b95ed344059a279719d5edc953a4de2b1cc4f47d5da090c3b0f4c7eef14830f9a2230bffb722f11fd2ea82c43d58261e5a868361262abbd81b9f1e6145562ed6765aac732e7d91bfff7716509d72314d46904bcc8053cfd1e3cd7824b4b6499bf8c77762275fa4f651d163a8701f02a5ca67bb7d37787a66d269d4079aa92a489493020b11d87791a3e3ef66780bd0007c8fa2782bac7a839b57e8ace7618a75fe03604097960d2236616d579207fb943c85bef4f804e7f13d7d78ff3aab0f7f440a3005a363f32c31567b8d64669a7ae0299b06b80e4d224d2d70b80077ff966cfb8c11cf763954f398c592ca619acc2e0e86dad54b9cfa47bc10520f3c9ba2e0d2e27e0ba8ef8ece3e71c8a1ff75fe0c8369a3555e42db30565b2e12a8e862d0f873bd781ebf8255372e540bf6fbf4c1dae200823144f8da8df82ccd41b1b678eb91a289b88c6ee5aff71d98b64e8b17e2027fcb6826c418dbdb51f06bec866d69d9554931412808bd90021be2e0dad1d1ececfdd41fcdf6f67b88ef09eb9edcc4c226e07bdfe86b8e50e3bf68b19c0c9e0bf9aaaaddb15bc05a5666733789fa6efde866a49d0329185d852dc84a7e16c41f6c2daadc04665197152d9c0db63d0bd926c06bf3646b810186ae908a2f27a33d010b3262b0c0805c3adf135b41f15a033bb82a8db2c38160dbce672290878d7ad8d08fdd483ef9d95b3a1b2ea2b6ff0adde762e9df7b78fe5f3644df54c6d98709d70924ce6ec94650cb207b4f90d35569acb55811ad3f01b28bb2b16cfde454531b9462024abf419fb3bef80db9bb9fa5bd62a7e94f61a667e74140c8da4b27ad7b934c0824756fbc56f8ac7529fc4c5224158fd4915eba25a3f2a72a7f718a5cda6395bd8b2bc484a950aba483c8cadec376f9bee62c93f886d83371b41c361a36c15679ff933422e09c5eaf98d3c6b008cf6414ed6e4c42c291eb505e9f22f5fe7d0ecdd15a833f4d016ac974d33adc6ea3293e20859e87ebfb937ba406abd025d14af692b12e9c9c2adbe307a679779259676211c071e614fdb386d1ff02db223a5b2fae03df68d321c7b29f7c7240edd3fa1b7cb6903f89dc01abf41b2eb0b49b6b8d73bb0774b58204c0d0e96d3cce45ad75406be0bc009e327b3e712a4bd178609c00b41da2daf8a4b0e1319f07a492ab4efb056f0f599f75e6dc7e0d10ce1cf59088ab6e873de377343880f7a24f0e36731a0b72092f8d5bc8cd346762e93b2bf203d00264e4bc136fc142de8f7b69154deb05854ea88e2d7506222c95ba1aab065c8a851391377d3406a35a9af3ac62cc962876e734e089e79eda497077fb411fac5f36afd43329040ecd1e16c6d9
  Next HMAC: 62cc962876e734e089e79eda497077fb411fac5f36afd43329040ecd1e16c6d9
Processing at hop 4
  Realm: 0
  Payload: 0404040404040404000000000000000400000004000000000000000000000000
  Next onion: 0003633e93b6ae3db02f2eee2c130b0439e4ccd0c028f9cc7a93d093f93855dadd0e0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000007b4f90a76454eaf3e2931a1d16e7140d0eac366e29dee82b2d6818fe8f4061f817484d03f5e899d1923291cad58666506ea3e7bbebbe32bfdc18dd991f7fe97988e7487ae789d67c6809247f12132580dd9e1c34517b4f0bceded7feda8ebd44a6acfe1df5f00f97fa4099d3a7408d48da98ec747287dea6206ee7e0aa3b5d978b7fce2424e3d56e6f57515726a7add3679a64b74bc833ff25fe8e443f3395ba09687e9ffe9621364ba5457b8dd308353a8eb647ff054f62993ddb3ceb99310fb712fe57d265dfe63f69ba861b6bc3f86b70b4854400cc1ec59c37d536bbe46df9ed02adc625e0d29d4531941f7bbc00e942aa63c80bb66c3a5ce75adafb64759e8a18f3c1fa453c5eebad1ae5af17b14d30d99463491d7d441853483a7ed84868e2504873c4b831d3a228ecf452d12510739f46f7e11e0601458a53422a5d727308f94a7c0000000000000000000000000000000000000000000000000000000000000000
  Next HMAC: 0000000000000000000000000000000000000000000000000000000000000000

I mainly left it in there for symmetry with the intermediate hops.


json_object_start(s, "htlc");
json_add_amount_msat_only(s, "amount", hin->msat);
json_add_u64(s, "cltv_expiry", hin->cltv_expiry);
Contributor:

This is a u32; doesn't matter here, but might be confusing.

@@ -663,7 +684,7 @@ htlc_accepted_hook_callback(struct htlc_accepted_hook_payload *request,
 	if (rs->nextcase == ONION_FORWARD) {
 		struct gossip_resolve *gr = tal(ld, struct gossip_resolve);
 
-		gr->next_onion = serialize_onionpacket(gr, rs->next);
+		gr->next_onion = tal_steal(gr, request->next_onion); serialize_onionpacket(gr, rs->next);
Contributor:

I think you meant to delete serialize_onionpacket(gr, rs->next); here?

* payment details" which matches our use-case
* pretty closely. */
response->failure_code =
WIRE_INCORRECT_OR_UNKNOWN_PAYMENT_DETAILS;
Contributor:

  1. It's OK to provide a default, but we should get upset if it's invalid, not silently replace it.
  2. An intermediary node MUST NOT return this; we should probably just use TEMPORARY_NODE_FAILURE which is always a valid response.

* this is the default behavior for this hook anyway */
if (!resulttok) {
fatal("Plugin return value does not contain 'result' key %s",
json_tok_full(buffer, toks));
Contributor:

You want json_tok_strdup() here I think: json_tok_full just returns the pointer to the start of the token, which may not even be NUL terminated AFAIK.

@@ -637,7 +650,68 @@ static struct htlc_accepted_hook_response *
htlc_accepted_hook_deserialize(const tal_t *ctx, const char *buffer,
const jsmntok_t *toks)
{
Contributor:

This API is a bit awkward. We don't really use htlc_accepted_continue; instead we use a NULL response from deserialize() to imply it.

And the struct is really a discriminated union already; I think in this case we should simply open-code those parameters into htlc_accepted_hook_deserialize, which is still awkward but at least explicit:

static enum htlc_accepted_result htlc_accepted_hook_deserialize(const char *buffer, const jsmntok_t *toks,
                                                                /* If accepted */ 
                                                                struct preimage *payment_preimage, 
                                                                /* If rejected */
                                                                enum onion_type *failure_code,
                                                                u8 **channel_update)

Member Author:

It used to work nicely before we decided to just smash the deserialization into the handler. This was the easy way to merge the logic of the two 😁

break;
case htlc_accepted_resolve:
fulfill_htlc(hin, &response->payment_key);
break;
Contributor:

You realize this means we can write a "try_stealing" plugin now that simply fulfills known preimages? :)

Contributor:

And this is a problem? :)

Member Author:

Well, that's my evil master plan thwarted 😜


@cdecker (Member, Author) commented May 30, 2019

Rebased and addressed feedback as fixups. Once I get the all-clear I'll squash and merge ^^

Ping @rustyrussell @ZmnSCPxj

@cdecker cdecker force-pushed the htlc_accepted_hook branch from d638021 to e2dd381 on May 30, 2019 10:47
-			    hin->key.id);
-		fail_in_htlc(hin, WIRE_TEMPORARY_NODE_FAILURE, NULL, NULL);
+		if (!peer_accepted_htlc(hin->key.channel, hin->key.id, true, &failcode)) {
+			fail_in_htlc(hin, WIRE_TEMPORARY_NODE_FAILURE, NULL, NULL);
@trueptolemy (Collaborator) commented May 31, 2019:

Hi cdecker, when parse_onionpacket() fails it may set a failcode and return op as NULL, in which case peer_accepted_htlc() would return false:

op = parse_onionpacket(tmpctx, hin->onion_routing_packet, sizeof(hin->onion_routing_packet), failcode);

Should this be fail_in_htlc(hin, failcode ? failcode : WIRE_TEMPORARY_NODE_FAILURE, NULL, NULL); instead?

I also have a related question: is there a need to store the forward status as LOCAL_FAILED?
It seems I missed this case...

Contributor:

Indeed failcode is not always set; sometimes it will try to fail the channel itself. I don't know what happens in those cases for this code path, though I suspect it Can't Happen?

As to LOCAL_FAILED, I think if we crash at this point we'll just replay anyway, so it's not vital. But it would be a good idea, I think.

Member Author:

Indeed failcode is not always set; sometimes it will try to fail the channel itself. I don't know what happens in those cases for this code path, though I suspect it Can't Happen?

In those cases the HTLC will be marked as failed in the DB in the same transaction it was added in, hence it won't be replayed. So yes, this should never happen ™️

As to LOCAL_FAILED, I think if we crash at this point we'll just replay anyway, so it's not vital. But it would be a good idea, I think.

Indeed, that is the reason the documentation explicitly states that the plugin needs to implement the hook method in an idempotent manner.

CHANGELOG.md Outdated
@@ -20,6 +20,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Config: Added a default plugin directory : `lightning_dir/plugins`. Each plugin directory it contains will be added to lightningd on startup.
 - DB: Store the signatures of channel announcement sent from remote peer into DB, and init channel with signatures from DB directly when reenable the channel.
 (Issue [#2409](https://github.com/ElementsProject/lightning/issues/2409))
+- plugin: A new hook was added to interact with incoming HTLCs. HTLCs can be resolved, rejected or continued in the plugin, allowing enforcement of custom policies, short-circuiting of payments, and alternative transports (PR [#2267](https://github.com/ElementsProject/lightning/pull/226)])
Contributor:

Just edit the line: '- JSON API: new plugin hooks invoice_payment for intercepting invoices before they're paid, and openchannel for intercepting channel opens.' and add yours.

BTW: I generally dislike referring to PRs since they're hosted on GH and we should consider them at risk. They're OK for bugfixes, but important info needs to be in CHANGELOG.md or git history.

Contributor:

For that matter the link seems to point to #226 instead of #2267

Member Author:

I usually use the changelog to find changes when debugging things with users, that's the main reason I have the references in there, but I'll just start tracking these externally.

@cdecker cdecker force-pushed the htlc_accepted_hook branch from 35cea46 to 1c76034 on June 3, 2019 12:23
cdecker added 15 commits June 3, 2019 14:28
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is a rather simple hook that allows a plugin to take control over
HTLCs that were accepted, but weren't resolved as part of an invoice
or forwarded to the next hop yet.

The goal is to allow plugins to terminate a route early, perform
intermediate checks before the payment is accepted (check inventory or
service delivery before accepting in order to avoid a refund for
example) or handle an onion differently if it has a different
realm (cross-chain atomic swaps).

This doesn't implement serializing the payload or deserializing it,
instead just passes the full context along. The details for
serializing and deserializing will be implemented in a future commit.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
This uses the `htlc_accepted` hook to delay payment acceptance.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Two tests: one for failures and one for in-path resolution.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Suggested-by: Corné Plooy <@bitonic-cjp>
Since we might soon be changing the payload it is a good idea to not just
expose the v0 payload, but also the raw payload for the plugin to
interpret. This might also include payloads that `lightningd` itself cannot
understand, but the plugin might.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
Suggested-by: Corné Plooy <@bitonic-cjp>
Since the hook needs to pass information about the current blockheight to the
plugin we need to first initialize the topology.
It disables the error when attempting to do a state transition from
`RCVD_ADD_ACK_REVOCATION` to `RCVD_ADD_ACK_REVOCATION` which was done before
getting to this point.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
Since we have more or less given up on the separation between response
callback and deserialization we can also just have the individual parts
returned.

Signed-off-by: Christian Decker <decker.christian@gmail.com>
Suggested-by: Rusty Russell <@rustyrussell>
@cdecker cdecker force-pushed the htlc_accepted_hook branch from 1c76034 to 0e8e8fe on June 3, 2019 12:29
@rustyrussell (Contributor):

Ack 0e8e8fe
