Is there any reason to not use xsync.Map? #102

Open
pelicans45 opened this issue Jun 25, 2023 · 6 comments
Labels: performance, question


pelicans45 commented Jun 25, 2023

I have an application that makes use of a lot of different sync.Maps. It mostly fits the recommended use case for sync.Map ("append-only maps" with a lot more reads than writes), but the benchmarks for xsync.Map do look better across the board, so I was thinking of maybe just swapping them all out. I see your blog post mentions "When it comes to reads which are the strongest point of sync.Map, both data structures are on par.", though in the repo README you say "Due to this design, in all considered scenarios Map outperforms sync.Map."

Just wondering if you can foresee any reason to not do this. This blog post mentions potential memory concerns, but I think that probably won't be an issue for me.

puzpuzpuz (Owner) commented

Hi,

xsync.Map doesn't have drawbacks compared with sync.Map, or at least I'm not aware of any. But in certain niche use cases, a plain map protected with a mutex may be a better choice. See #94 (comment) for more details.
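
For illustration, the pattern in question is just a plain map behind an RWMutex. A minimal sketch (the type and method names here are arbitrary, not from any library):

```go
package cache

import "sync"

// StringCache is a plain map guarded by an RWMutex. Under low or
// predictable contention this can beat both sync.Map and xsync.Map,
// since there is no atomic machinery or extra indirection on the hot path.
type StringCache struct {
	mu sync.RWMutex
	m  map[string]string
}

func NewStringCache() *StringCache {
	return &StringCache{m: make(map[string]string)}
}

// Load takes a shared read lock, so concurrent readers don't block each other.
func (c *StringCache) Load(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[key]
	return v, ok
}

// Store takes the exclusive lock, blocking readers and other writers.
func (c *StringCache) Store(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = value
}
```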

puzpuzpuz added the question and performance labels Jun 25, 2023
mgnsk commented Jun 26, 2023

I have a use case where I found no reason to switch from the standard library's sync.Map: mgnsk/evcache#35

pelicans45 (Author) commented

@puzpuzpuz I guess I'll need to do some testing

@mgnsk Interesting. Not sure how much of a difference it might make, but in my case I was thinking of using xsync.Map rather than xsync.MapOf.

puzpuzpuz (Owner) commented

@mgnsk the only parallel benchmark you have has a contention point that kills the scalability of operations in both sync.Map and xsync.Map: https://github.com/mgnsk/evcache/blob/5e3f071a8d4be4eabddb0fb227d24121163e8741/cache_test.go#L302

Also, you always evict the same key, which looks very artificial. Finally, you're using a mutex in the Evict and PushBack methods (I didn't check the others), so that's probably why a different concurrent map doesn't make any difference: the overhead of the synchronization and the other data structures is much more noticeable.

puzpuzpuz (Owner) commented

@65947689564 First of all, I advise you to write benchmarks that are close to your use case rather than trusting benchmarks in xsync or any other repository.
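
As a starting point, a parallel benchmark along these lines can be adapted to a concrete workload. This is only a sketch: the 99/1 read/write ratio, the 1,000-key working set, and the value types are placeholders to tune toward your real access pattern:

```go
package cache

import (
	"strconv"
	"sync"
	"testing"
)

// BenchmarkSyncMap models a roughly 99% read / 1% write workload
// over a fixed working set. Swap sync.Map for your candidate map
// implementation to compare them under the same conditions.
func BenchmarkSyncMap(b *testing.B) {
	var m sync.Map
	for i := 0; i < 1000; i++ {
		m.Store(strconv.Itoa(i), i)
	}
	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			i++
			key := strconv.Itoa(i % 1000)
			if i%100 == 0 {
				m.Store(key, i) // occasional write
			} else {
				m.Load(key) // mostly reads
			}
		}
	})
}
```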

In general, sync.Map has very similar scalability characteristics to xsync.Map when it comes to read operations: neither uses mutexes when reading data. But write scalability is very different: sync.Map falls back to a single mutex in many mutation cases, which also blocks reads.

See https://github.com/golang/go/blob/13529cc5f443ef4e242da3716daa0032aa8d34f2/src/sync/map.go for more details. As for xsync.Map, this post explains how it works (the current design is slightly different).

As a rule of thumb, if your map never or almost never mutates, sync.Map and xsync.Map should be on par. On the other hand, if it truly never mutates, you should go for a plain map, which involves no atomic operations and has a plain memory layout. As for xsync.Map vs. xsync.MapOf, MapOf is the better choice since it doesn't go through intermediate interface{} values.
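
For illustration, a sketch of the difference, assuming the v3 generics API (xsync.NewMapOf[K, V]); check the API of the version you actually depend on:

```go
package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v3"
)

func main() {
	// xsync.Map stores interface{} values, so every Store boxes the
	// value and every Load needs a type assertion on the way out.
	m := xsync.NewMap()
	m.Store("answer", 42)
	if v, ok := m.Load("answer"); ok {
		fmt.Println(v.(int))
	}

	// xsync.MapOf is generic: typed keys and values, with no
	// interface{} indirection on the hot path.
	mo := xsync.NewMapOf[string, int]()
	mo.Store("answer", 42)
	if v, ok := mo.Load("answer"); ok {
		fmt.Println(v)
	}
}
```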

rabbbit commented Mar 29, 2024

See #102 for an example of sync.Map performing better than xsync.MapOf (in a fairly specific use case).
