Is your feature request related to a problem? Please describe.
We're having some trouble with relations getting "stuck" outbound from Bridgy Fed. In some cases the establishing rkey is lost on our end, which prevents us from removing the relation. This likely also affects other idempotent relations like likes and follows, and possibly list memberships.
Individual counting of self-blocking records was also recently exploited in Clearsky to promote seemingly-illegal services.
(That's hearsay coming from me. I saw it recounted in a community Discord.)
Describe the solution you'd like
Ideally, when receiving a new record for a unique relation that already exists, AppViews should suppress all visible effects the new record would otherwise have, but must update their internal representation of the relation to store the new record's identity. Then, if the latest known establishing record is deleted, AppViews must remove the relation in its entirety.
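As a concrete illustration, here's a minimal TypeScript sketch of this "latest-record-tracked" behaviour on the AppView side. All names here (`RelationKey`, `relationStore`, the handlers) are hypothetical; this is a sketch of the scheme, not any real AppView's internals.

```typescript
// Hypothetical sketch of latest-record-tracked unique relations in an AppView.

type RelationKey = string; // e.g. `${actorDid}|${collection}|${subject}`

interface TrackedRelation {
  latestRkey: string; // rkey of the most recently seen establishing record
}

const relationStore = new Map<RelationKey, TrackedRelation>();

function onRecordCreated(key: RelationKey, rkey: string): void {
  const existing = relationStore.get(key);
  if (existing) {
    // Relation already established: suppress visible effects (no new
    // notification, no count increment), but remember the newest record.
    existing.latestRkey = rkey;
  } else {
    relationStore.set(key, { latestRkey: rkey });
    // ...apply the visible effects of establishing the relation here...
  }
}

function onRecordDeleted(key: RelationKey, rkey: string): void {
  const existing = relationStore.get(key);
  // Only deleting the latest known establishing record tears the relation
  // down; deletes of older, superseded records are ignored.
  if (existing && existing.latestRkey === rkey) {
    relationStore.delete(key);
    // ...revert the visible effects of the relation here...
  }
}
```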
Applications should try to avoid creating multiple records for the same relation in parallel and should not edit unique-relation records to change their subject (the targeted DID or record). When undoing a unique relation, applications should try to find and delete all records establishing it (see the sketch below).
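A minimal sketch of that "find and delete all establishing records" step, with `listRecords`/`deleteRecord` as stand-ins for a real repo API (e.g. the `com.atproto.repo.*` calls); their shapes here are assumptions:

```typescript
// Hypothetical client-side sketch: undo a unique relation by deleting every
// record that establishes it, not just the one whose rkey we remember.

interface RepoRecord {
  rkey: string;
  value: { subject: string };
}

// Stand-ins for real repo CRUD calls; signatures are assumptions.
declare function listRecords(repo: string, collection: string): Promise<RepoRecord[]>;
declare function deleteRecord(repo: string, collection: string, rkey: string): Promise<void>;

async function undoRelation(
  repo: string,
  collection: string, // e.g. "app.bsky.graph.block"
  subject: string,    // targeted DID or record URI
): Promise<void> {
  const records = await listRecords(repo, collection);
  // Delete *all* matching records: duplicates may exist if the relation was
  // ever established more than once.
  for (const record of records) {
    if (record.value.subject === subject) {
      await deleteRecord(repo, collection, record.rkey);
    }
  }
}
```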
Describe alternatives you've considered
An alternative would be to somewhat frequently re-crawl all known repositories. That's likely less efficient.
Another alternative would be to let users request a full re-crawl of their repository or of specific collections, though by default this could easily be abused to drive up AppViews' upkeep costs.
Additional context
My preferred solution can introduce inconsistencies between scraped and firehose-observed state iff multiple records are created for the same relation or a unique-relation record is edited to change its subject. Given clients that try to avoid this, I think that's less of an issue than end-user-unrecoverable states, which can come about through partial data loss (e.g. a revert to backup) or missed firehose events.
To recover from such a desync¹ in a latest-record-tracked scheme, the end user only has to redo and then re-undo the relation (i.e. block and unblock to remove a "stuck" block, like and unlike to remove a "stuck" like, follow and unfollow to remove a "stuck" follow), which seems to be what most users try on their own first once they become aware of the situation.
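For completeness, the same redo-then-re-undo recovery as a sketch, reusing `undoRelation` from the earlier sketch; `createRecord` is again a hypothetical stand-in for the repo's record-creation call:

```typescript
// Hypothetical sketch of clearing a "stuck" relation under
// latest-record-tracked: redo it, then re-undo it.

// Stand-in for a real record-creation call; signature is an assumption.
declare function createRecord(
  repo: string,
  collection: string,
  value: { subject: string; createdAt: string },
): Promise<{ rkey: string }>;

// Defined in the earlier sketch; declared here for self-containment.
declare function undoRelation(repo: string, collection: string, subject: string): Promise<void>;

async function clearStuckRelation(
  repo: string,
  collection: string,
  subject: string,
): Promise<void> {
  // Redo: the AppView now tracks this fresh record as the latest one...
  await createRecord(repo, collection, { subject, createdAt: new Date().toISOString() });
  // ...re-undo: deleting it (and any stragglers) removes the relation entirely.
  await undoRelation(repo, collection, subject);
}
```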
I'm not entirely sure this belongs in the protocol per se, since these unique relations are application-level.
I do think it would be a good idea to have protocol-level guidance regarding the behaviour of unique-relation records in general, though. Apps should then specify which of their defined relations are unique.
Footnotes
1. The opposite direction, creating a missing relation, can already be done by end users by undoing and then re-establishing it. This doesn't change under latest-record-tracked. ↩