Merge probabilistic scores from external source #3562
base: main
Conversation
Force-pushed from 71a9fc0 to f726a8e.
Codecov Report
Attention: Patch coverage is

@@ Coverage Diff @@
##             main    #3562      +/-   ##
==========================================
+ Coverage   88.52%   88.53%   +0.01%
==========================================
  Files         149      149
  Lines      115030   115279     +249
  Branches   115030   115279     +249
==========================================
+ Hits       101833   102067     +234
- Misses      10706    10720      +14
- Partials     2491     2492       +1
c24cc83
to
85f3fee
Compare
fn time_passed(&mut self, duration_since_epoch: Duration, decay_params: ProbabilisticScoringDecayParameters) {
    self.0.retain(|_scid, liquidity| {
        liquidity.min_liquidity_offset_msat =
Maybe move this into the ChannelLiquidity (singular) struct
If we do so, would we gain much by introducing ChannelLiquidities at all? Maybe we just use HashMap<u64, ChannelLiquidity> in the API?
I think being able to offer state-level functionality does make things a bit cleaner. Maybe I should also add a merge method on this level.
The original reason for this struct though is to be able to use ser/deser logic without a scorer.
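To make the trade-off concrete, here is a minimal sketch of what a state-level container with a merge method could look like. The field set and the merge rule (external data fills gaps, local observations win on conflict) are illustrative assumptions, not the PR's actual implementation:

```rust
use std::collections::HashMap;

// Simplified stand-in for the real ChannelLiquidity (illustrative only).
#[derive(Clone, Copy, Debug, PartialEq)]
struct ChannelLiquidity {
    min_liquidity_offset_msat: u64,
    max_liquidity_offset_msat: u64,
}

// Newtype over the per-SCID map, so state-level operations (merge, decay,
// serialization) can live on one type instead of on the scorer itself.
struct ChannelLiquidities(HashMap<u64, ChannelLiquidity>);

impl ChannelLiquidities {
    fn new() -> Self {
        Self(HashMap::new())
    }

    // Hypothetical merge rule: external data fills in channels we have no
    // local observations for; on conflict, locally observed liquidity wins.
    fn merge(&mut self, external: ChannelLiquidities) {
        for (scid, liquidity) in external.0 {
            self.0.entry(scid).or_insert(liquidity);
        }
    }
}

// Returns (channel count, min offset for SCID 1) after a merge, for the demo.
fn merge_demo() -> (usize, u64) {
    let mut local = ChannelLiquidities::new();
    local.0.insert(1, ChannelLiquidity { min_liquidity_offset_msat: 100, max_liquidity_offset_msat: 200 });

    let mut external = ChannelLiquidities::new();
    external.0.insert(1, ChannelLiquidity { min_liquidity_offset_msat: 7, max_liquidity_offset_msat: 8 });
    external.0.insert(2, ChannelLiquidity { min_liquidity_offset_msat: 50, max_liquidity_offset_msat: 60 });

    local.merge(external);
    (local.0.len(), local.0[&1].min_liquidity_offset_msat)
}

fn main() {
    // SCID 2 comes from the external source; local data for SCID 1 is kept.
    assert_eq!(merge_demo(), (2, 100));
}
```

With a bare HashMap in the API, each of these operations would instead be a free function or a method on the scorer, which is workable but scatters the state logic.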
lightning/src/routing/scoring.rs (Outdated)
-channel_liquidities: HashMap<u64, ChannelLiquidity>,
+channel_liquidities: ChannelLiquidities,
}
/// ChannelLiquidities contains live and historical liquidity bounds for each channel.
Objections to moving this into its own file?
Yea, splitting scoring.rs into two modules would be nice. We generally don't put individual structs in their own module just for the sake of it, but when files get too big, splitting them down the middle (if there's a clean way to do it) is always nice. There are a few files that are in desperate need of it.
For my workflow, sticking to one struct per file would work well. I do find myself navigating quite a bit in these large files, using editor features (find, find symbol, find references) to make it easier, but it isn't perfect. I'd rather use the folder/file hierarchy and, for example, pin files as tabs. But it is personal of course.
Some type of split would be welcome either way. For this PR I could start with a liquidity_information module (open to naming suggestions) and place the new ChannelLiquidities in it. Then, in a separate PR, move more liquidity code (ChannelLiquidity, HistoricalLiquidityTracker, HistoricalBucketRangeTracker and the tests) in there. Thoughts?
> Thoughts?
I like the idea of breaking up our humongous modules and splitting more types out, be it in this PR or a follow up.
IMO, we could consider a folder structure such as:
- src/routing/scoring/mod.rs (moved from src/routing/scoring.rs for backwards compat of the path)
- src/routing/scoring/liquidity_tracking.rs (or just liquidity.rs?)
FWIW, another easy step towards cleaning up/smaller files would be to move the entire bucketed_history sub-module out of scoring.rs and into a dedicated bucketed_history.rs file. Although, if we do this, we could consider moving the *Liquidity* types there, too.
Skipping the moves for now as this PR is getting close to ready.
Force-pushed from 85f3fee to 05bca3b.
impl<G: Deref<Target = NetworkGraph<L>>, L: Deref> Writeable for ProbabilisticScorer<G, L> where L::Target: Logger {
    #[inline]
    fn write<W: Writer>(&self, w: &mut W) -> Result<(), io::Error> {
        write_tlv_fields!(w, {
I think we want to maintain reading/writing the TLV fields here, rather than moving them into ChannelLiquidities, given that we're more likely to add additional fields requiring persisting on the more general ProbabilisticScorer.
Is it likely that there will be more persistent state unrelated to channels? If so, wouldn't that state also be placed in ChannelLiquidities, extending it with additional fields beyond the hash map? At that point, the struct might need a more general name, but I imagine those new fields would still be part of what you'd want to export/import.
For now, the purpose of creating the struct is to allow deserialization of state from disk or network without having to construct a full probabilistic scorer, which includes non-persistent and irrelevant fields.
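That decoupling goal can be illustrated with a standalone round-trip. The sketch below uses a toy length-prefixed byte format, not LDK's actual TLV encoding, and the (SCID, min offset, max offset) layout is an assumption purely for illustration:

```rust
use std::collections::HashMap;

// Toy wire format (NOT LDK's TLV encoding): u32 entry count, then per entry
// a u64 SCID followed by two u64 liquidity offsets, all big-endian.
fn serialize_liquidities(map: &HashMap<u64, (u64, u64)>) -> Vec<u8> {
    let mut out = (map.len() as u32).to_be_bytes().to_vec();
    let mut scids: Vec<_> = map.keys().copied().collect();
    scids.sort_unstable(); // deterministic output for testing
    for scid in scids {
        let (min, max) = map[&scid];
        out.extend_from_slice(&scid.to_be_bytes());
        out.extend_from_slice(&min.to_be_bytes());
        out.extend_from_slice(&max.to_be_bytes());
    }
    out
}

// The point of a standalone container type: a server or client can decode
// this state without constructing a full ProbabilisticScorer.
fn deserialize_liquidities(bytes: &[u8]) -> Option<HashMap<u64, (u64, u64)>> {
    let count = u32::from_be_bytes(bytes.get(..4)?.try_into().ok()?) as usize;
    let mut rest = &bytes[4..];
    let mut map = HashMap::new();
    for _ in 0..count {
        if rest.len() < 24 {
            return None; // truncated input
        }
        let scid = u64::from_be_bytes(rest[..8].try_into().ok()?);
        let min = u64::from_be_bytes(rest[8..16].try_into().ok()?);
        let max = u64::from_be_bytes(rest[16..24].try_into().ok()?);
        map.insert(scid, (min, max));
        rest = &rest[24..];
    }
    Some(map)
}

fn main() {
    let mut map = HashMap::new();
    map.insert(42u64, (1_000u64, 2_000u64));
    let bytes = serialize_liquidities(&map);
    assert_eq!(deserialize_liquidities(&bytes), Some(map));
}
```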
Force-pushed from 05bca3b to d6caa86.
Force-pushed from fdad047 to ced0adc.
Sorry for the delay here.
lightning/src/routing/scoring.rs (Outdated)
liquidity.liquidity_history.decay_buckets(elapsed_time.as_secs_f64() / half_life);
liquidity.offset_history_last_updated = duration_since_epoch;
liquidity.offset_history_last_updated += decay_params.historical_no_updates_half_life;
Not sure I get why we're calling decay_buckets in a loop. It already does buckets *= (1/2)^half_lives, so we shouldn't need to call it repeatedly.
Well... actually, looking at it, it is wrong: it's doing buckets *= 1024 / 2048^half_lives instead of buckets *= (1024 / 2048)^half_lives. But we should fix the math instead of calling it in a loop :)
I am not sure either 😅 At some point I concluded that this was a discrete operation, probably set on the wrong foot by that if elapsed_time > decay_params.historical_no_updates_half_life statement.
I do wonder though why the buckets aren't decayed always like the live bounds and have this 1 half life waiting time? In the end, half life is just a way to express a rate, and it seems a bit strange to also use that in the way it is used in the if expression.
Good catch on the formula. Your suggestion is correct, but 1024/2048 is just 0.5 and doesn't work with integer math. Added a commit to fix it, and a unit test.
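The corrected integer math can be sketched as follows. The 10-bit fixed-point factor here is illustrative and not the actual bucket representation in scoring.rs; the buggy variant mirrors the shape of the bug described above:

```rust
// Corrected decay sketch: apply (1/2)^half_lives as a 10-bit fixed-point
// factor, so integer bucket values halve once per half-life. half_lives
// may be fractional.
fn decay_bucket(value: u16, half_lives: f64) -> u16 {
    let factor = (1024.0 * 0.5f64.powf(half_lives)) as u64; // (1/2)^h, scaled by 1024
    ((value as u64 * factor) >> 10) as u16
}

// The buggy form computed value * 1024 / 2048^half_lives: correct for a
// single half-life, but each additional half-life divides by 2048 instead
// of 2, collapsing buckets to zero.
fn buggy_decay(value: u16, half_lives: u32) -> u16 {
    ((value as u64 * 1024) / 2048u64.pow(half_lives)) as u16
}

fn main() {
    assert_eq!(decay_bucket(1000, 1.0), 500);
    assert_eq!(decay_bucket(1000, 2.0), 250);
    assert_eq!(buggy_decay(1000, 1), 500); // one half-life happened to work
    assert_eq!(buggy_decay(1000, 2), 0);   // wrong: should be 250
}
```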
> I do wonder though why the buckets aren't decayed always like the live bounds and have this 1 half life waiting time? In the end, half life is just a way to express a rate, and it seems a bit strange to also use that in the way it is used in the if expression.
This goes back a bit to the conclusion we took that half-lives for scoring data are Always Wrong (because they're both too fast and too slow). Thus, we really want the historical buckets to only decay as we get new data in them (which they always do when we get new data in them). However, it seems like it'd be obviously bad if we're scoring one channel with data that's a month old while treating it the same as another channel with data that's a minute old, so we added a decay, but made it pretty aggressively slow and only run it if we haven't gotten data in a while (since we'd prefer to only decay based on new data).
It's not so easy to feel confident that this is indeed the right thing to do, but that's the gut part of all of this, I suppose.
I've added this explanation as a comment to the code.
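The "only decay after a quiet period" policy described above can be sketched like this. Names and the fixed-point representation are illustrative assumptions, not the actual scoring.rs code:

```rust
use std::time::Duration;

// Historical buckets are left alone while fresh data keeps arriving; only
// once a full half-life has passed without updates is the elapsed time
// converted to a number of half-lives and applied in one shot.
fn decay_if_stale(buckets: &mut [u16], elapsed: Duration, half_life: Duration) {
    if elapsed <= half_life {
        return; // recent data: prefer decaying via new datapoints instead
    }
    let half_lives = elapsed.as_secs_f64() / half_life.as_secs_f64();
    let factor = (1024.0 * 0.5f64.powf(half_lives)) as u64; // (1/2)^h, scaled by 1024
    for bucket in buckets.iter_mut() {
        *bucket = ((*bucket as u64 * factor) >> 10) as u16;
    }
}

fn main() {
    let hour = Duration::from_secs(3600);

    let mut fresh = [1000u16, 400];
    decay_if_stale(&mut fresh, hour / 2, hour);
    assert_eq!(fresh, [1000, 400]); // untouched within one half-life

    let mut stale = [1000u16, 400];
    decay_if_stale(&mut stale, 2 * hour, hour);
    assert_eq!(stale, [250, 100]); // two half-lives applied in one shot
}
```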
Force-pushed from b459831 to 055177c.
Force-pushed from 055177c to 6acce79.
Basically LGTM, a few nits and one actual comment.
Force-pushed from 6acce79 to 0609cc1.
LGTM. Can you rewrite the commit messages to add a bit more context, and avoid any lines in the commit message running longer than ~70 chars before a line break?
Force-pushed from 0609cc1 to a4993a4.
Basically LGTM, one question
Also, nit: please avoid rebasing if not necessary for conflict resolution, as it's making it harder for reviewers to follow what changed during a push.
impl<G: Deref<Target = NetworkGraph<L>>, L: Deref> Writeable for CombinedScorer<G, L> where L::Target: Logger {
    fn write<W: crate::util::ser::Writer>(&self, writer: &mut W) -> Result<(), crate::io::Error> {
        self.local_only_scorer.write(writer)
I think we discussed also storing a cache of the merged scorer to avoid regressing, e.g., after restart. Should we still add that?
I am not sure if that is really necessary. Restarts may be infrequent, and also in ldk-node, logic could be added to always sync during the startup sequence before any payments can be made?
Well, if we're thinking 'self-custodial mobile node' restarts might be in fact very frequent, and we'd want to make sure that the scoring data is available as soon as the node is ready to pay. We definitely also don't want to block startup of the node on the scoring update. We could of course consider having LDK Node store the last received update and merge that on startup, but might as well persist the cache here, if we're already doing all the other handling of the two scorers?
Yes that's reasonable. I can add it to this PR. How would that work - add another level of tlv-nesting here with two types for the two scorers?
Probably the simplest approach, yea.
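In spirit, that could look like the following sketch: a toy length-prefixed layout standing in for LDK's real write_tlv_fields!/read_tlv_fields! machinery, with one sub-record type per scorer. All names and the byte layout are illustrative:

```rust
// Toy nested record (NOT LDK's TLV macros): each scorer blob is written as
// (u8 type, u32 length, bytes), with type 0 for the local-only scorer and
// type 1 for the cached merged scorer.
fn write_nested(local_scorer: &[u8], merged_cache: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    for (ty, blob) in [(0u8, local_scorer), (1u8, merged_cache)] {
        out.push(ty);
        out.extend_from_slice(&(blob.len() as u32).to_be_bytes());
        out.extend_from_slice(blob);
    }
    out
}

fn read_nested(mut bytes: &[u8]) -> Option<(Vec<u8>, Vec<u8>)> {
    let (mut local, mut merged) = (Vec::new(), Vec::new());
    while !bytes.is_empty() {
        if bytes.len() < 5 {
            return None; // truncated header
        }
        let ty = bytes[0];
        let len = u32::from_be_bytes(bytes[1..5].try_into().ok()?) as usize;
        if bytes.len() < 5 + len {
            return None; // truncated body
        }
        let blob = bytes[5..5 + len].to_vec();
        bytes = &bytes[5 + len..];
        match ty {
            0 => local = blob,
            1 => merged = blob,
            _ => return None, // unknown sub-record
        }
    }
    Some((local, merged))
}

fn main() {
    let bytes = write_nested(b"local-scorer", b"merged-cache");
    let (local, merged) = read_nested(&bytes).unwrap();
    assert_eq!(local, b"local-scorer");
    assert_eq!(merged, b"merged-cache");
}
```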
But does this work? Previously a single scorer was persisted underneath a key in the store. Now to that same key, another tlv level is added, breaking backward compatibility. Interested to hear how this is typically handled in LDK.
Ah, we also talked about a simple setter that would allow us to override (not merge) the stored scores. Could we still add that as a separate commit in this PR?
Added.
Force-pushed from 3808c85 to 2c1bdfd.
}
/// Container for live and historical liquidity bounds for each channel.
#[derive(Clone)]
pub struct ChannelLiquidities(HashMap<u64, ChannelLiquidity>);
I realize we don't actually expose this anywhere, so it's not possible with the current API for an LSP to expose one of these.
How this works in ldk-node (or is intended to work) is that the serialized version of this data is fetched from the database and exposed. This struct comes into play when merging the scores again: bytes are retrieved via a URL and then deserialized into this ChannelLiquidities struct.
Right, but we have to also expose the ability to fetch this struct from a ProbabilisticScorer on the LSP end so that it can write the data to a server to be fetched from the URL :)
Why is that? I am assuming the LSP does something like https://github.com/lightningdevkit/ldk-node/pull/458/files
Wrap the liquidities hash map into a struct so that decay and serialization functionality can be attached. This allows external data to be serialized into this struct and decayed to make it comparable and mergeable.
Add a new scorer that is able to combine local score with scores coming in from an external source. This allows light nodes with a limited view on the network to improve payment success rates.
The formula for applying half lives was incorrect. Test coverage added.
This commit expands on the previously introduced merge method by offering a way to simply replace the local scores by the liquidity information that is obtained from an external source.
Force-pushed from 2c1bdfd to 6bd71c2.
Fixes #2709
Usage in LDK node: lightningdevkit/ldk-node#449
Only "fix historical liquidity bucket decay" should be backported