Is there any reason to not use xsync.Map? #102
I have an application that makes use of a lot of different `sync.Map`s. It mostly fits the recommended use case for `sync.Map` ("append-only maps" with a lot more reads than writes), but the benchmarks for `xsync.Map` do look better across the board, so I was thinking of maybe just swapping them all out. I see your blog post mentions "When it comes to reads which are the strongest point of sync.Map, both data structures are on par.", though in the repo README you say "Due to this design, in all considered scenarios Map outperforms sync.Map."

Just wondering if you can foresee any reason not to do this. This blog post mentions potential memory concerns, but I think that probably won't be an issue for me.
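For context, such a swap is mostly mechanical. A minimal sketch, assuming xsync's original string-keyed API (the import path and constructor differ in later, generic versions of the module):

```go
package main

import (
	"fmt"
	"sync"

	"github.com/puzpuzpuz/xsync" // path may be versioned in later releases, e.g. .../xsync/v2
)

func main() {
	// Standard library: the zero value is ready to use.
	var sm sync.Map
	sm.Store("answer", 42)
	if v, ok := sm.Load("answer"); ok {
		fmt.Println(v) // 42
	}

	// xsync: must be created via the constructor; in the v1 API keys
	// are strings and values are interface{}, so Load/Store call sites
	// usually port over unchanged for string-keyed maps.
	xm := xsync.NewMap()
	xm.Store("answer", 42)
	if v, ok := xm.Load("answer"); ok {
		fmt.Println(v) // 42
	}
}
```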
Comments

Hi,

I have a use case where I found no reason to switch from the standard library's `sync.Map`.
@puzpuzpuz I guess I'll need to do some testing.

@mgnsk Interesting. Not sure how much of a difference it might make, but in my case I was thinking of using `xsync.Map`.
@mgnsk the only parallel benchmark you have has a contention point that kills the scalability of operations in both `sync.Map` and `xsync.Map`. Also, you always evict the same key, which looks very artificial. Finally, you're using a mutex in your implementation.
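To make that critique concrete, here is a sketch of the benchmark shape being suggested: keys are spread out instead of every goroutine hitting the same one, and the hot path holds no shared lock. The function name, key count, and 90/10 read/write split are illustrative, not taken from this thread:

```go
package bench

import (
	"math/rand"
	"strconv"
	"testing"
	"time"

	"github.com/puzpuzpuz/xsync" // path may be versioned in later releases
)

func BenchmarkLoadStoreParallel(b *testing.B) {
	m := xsync.NewMap()
	// Pre-populate so that reads mostly hit.
	for i := 0; i < 1024; i++ {
		m.Store(strconv.Itoa(i), i)
	}
	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		// Per-goroutine RNG: the global rand functions take a mutex,
		// which would itself become a shared contention point.
		r := rand.New(rand.NewSource(time.Now().UnixNano()))
		for pb.Next() {
			k := strconv.Itoa(r.Intn(1024))
			if r.Intn(10) == 0 {
				m.Store(k, 0) // ~10% writes
			} else {
				m.Load(k) // ~90% reads
			}
		}
	})
}
```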
@65947689564 first of all, I advise you to write benchmarks that are close enough to your use case rather than trust generic benchmarks, including the ones in this repo.

In general, `sync.Map` is optimized for append-only, read-mostly scenarios. Internally it keeps two maps: a read-only one that can be accessed without locking, and a mutex-protected dirty one that absorbs writes; after enough read misses, the dirty map is promoted to become the new read-only map. See https://github.com/golang/go/blob/13529cc5f443ef4e242da3716daa0032aa8d34f2/src/sync/map.go for more details.

As for `xsync.Map`, it uses a different design: reads are lock-free and writes only contend on individual buckets, so it does not depend on the map being effectively read-only in order to scale.

As a rule of thumb, if your map never or almost never mutates, both `sync.Map` and `xsync.Map` should perform roughly the same on reads; the larger the fraction of writes, the better `xsync.Map` should look in comparison.
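To illustrate the two-map design described above, here is a toy version of the idea. This is not the real `sync.Map` code, which also handles deletes, entry expunging, and in-place updates of existing keys:

```go
package twomap

import (
	"sync"
	"sync/atomic"
)

// TwoMap is a toy illustration of the sync.Map design: a lock-free
// read-only snapshot plus a mutex-protected dirty map that is promoted
// after enough read misses. Unlike the real sync.Map, it never updates
// existing keys in place, so an updated value only becomes visible on
// the fast path after the next promotion.
type TwoMap struct {
	read   atomic.Value // holds an immutable map[string]int snapshot
	mu     sync.Mutex
	dirty  map[string]int
	misses int
}

func New() *TwoMap {
	m := &TwoMap{dirty: map[string]int{}}
	m.read.Store(map[string]int{})
	return m
}

func (m *TwoMap) Load(key string) (int, bool) {
	// Fast path: no locks, just an atomic load of the snapshot.
	if v, ok := m.read.Load().(map[string]int)[key]; ok {
		return v, true
	}
	// Slow path: the key may only exist in the dirty map.
	m.mu.Lock()
	defer m.mu.Unlock()
	v, ok := m.dirty[key]
	m.misses++
	if m.misses >= len(m.dirty) {
		// Promote: publish the dirty map as the new snapshot, then keep
		// writing to a fresh copy so the snapshot stays immutable.
		m.read.Store(m.dirty)
		dst := make(map[string]int, len(m.dirty))
		for k, val := range m.dirty {
			dst[k] = val
		}
		m.dirty = dst
		m.misses = 0
	}
	return v, ok
}

// Store always takes the mutex; this is why sync.Map favors workloads
// where new keys are rare compared to reads.
func (m *TwoMap) Store(key string, v int) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.dirty[key] = v
}
```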