Concurrency: Add section on considering trade-offs #198

Merged (2 commits) on May 1, 2024
16 changes: 16 additions & 0 deletions projects/concurrency/README.md
After you have successfully written your cache, see if you can speed it up. Here
* How many locks do you have? If you just have one for the whole cache, that means that any operation on the cache may be blocked by another. Could you have more than one lock to isolate changes so you're less likely to be blocked by an unrelated operation? **Remember the importance of considering the critical section when thinking about this.**
* Are there some guarantees we can loosen to require less locking (or locking at less busy times)? As an example, when do you remove old entries? Could you remove them in batches, or in a background goroutine, to avoid doing extra work on the write path? Could copying data under a read lock and then manipulating a copy allow you to do less work under a write lock? What trade-offs and risks do different approaches introduce?
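
One way to sketch the "more than one lock" idea above: hash each key to one of several shards, each protected by its own `sync.RWMutex`, so operations on different shards never block each other and the critical section covers only one shard. This is a hypothetical illustration, not one of the course's sample solutions — the type names, string key/value types, shard count, and hash choice are all assumptions for the example.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const numShards = 16 // illustrative; tune to expected concurrency

// shard holds one lock-protected slice of the key space.
type shard struct {
	mu      sync.RWMutex
	entries map[string]string
}

// ShardedCache hashes each key to one shard, so writes to different
// shards never contend for the same lock.
type ShardedCache struct {
	shards [numShards]*shard
}

func NewShardedCache() *ShardedCache {
	c := &ShardedCache{}
	for i := range c.shards {
		c.shards[i] = &shard{entries: make(map[string]string)}
	}
	return c
}

func (c *ShardedCache) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return c.shards[h.Sum32()%numShards]
}

func (c *ShardedCache) Put(key, value string) {
	s := c.shardFor(key)
	s.mu.Lock() // critical section covers only this shard
	defer s.mu.Unlock()
	s.entries[key] = value
}

func (c *ShardedCache) Get(key string) (string, bool) {
	s := c.shardFor(key)
	s.mu.RLock() // concurrent readers of the same shard don't block each other
	defer s.mu.RUnlock()
	v, ok := s.entries[key]
	return v, ok
}

func main() {
	c := NewShardedCache()
	c.Put("a", "1")
	v, ok := c.Get("a")
	fmt.Println(v, ok) // prints "1 true"
}
```

Note the trade-off this sketch makes: contention drops, but any operation that needs a whole-cache view (total size, global eviction) now has to coordinate across all shards.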

## Reading code and considering trade-offs

It's important to be able to read code, and to consider trade-offs both when reading and when writing it.

When you've implemented the above projects (recommended: also the extensions), have a read of [the sample solutions](https://github.com/CodeYourFuture/immersive-go-course/pull/181). There are four different implementations; with your own, that's at least five. Each is valid and works, but each makes different trade-offs and is better suited to different constraints or use-cases.

Try to answer at least the following questions:
* What are the differences between each implementation? For example:
* How much work needs to be done to evict an entry? (What's the big-O of evicting an entry?)
* How much contention is there for locks when putting in the cache?
* How much contention is there for locks when reading from the cache?
* For each implementation: what use-cases is it well-suited to? What use-cases would it be particularly _bad_ at? When would one of the other solutions be better? Some general things to think about when considering trade-offs:
  * For each implementation, what are its average and peak memory consumption, its average-case and worst-case latency, and how consistent is its latency?
* Think about usage patterns: which is better if callers are mostly reading? Mostly writing? _Always_ reading? Alternating between reads and writes? Other usage patterns?
* What guarantees can we offer with some implementations but not others? e.g. If we need to never use more than 10 MB of RAM, which implementations can guarantee that?
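
As a concrete point of comparison when answering these questions, here is a hypothetical LRU cache (not one of the linked sample solutions) built on the standard library's `container/list`. Eviction is O(1), and it can guarantee a hard cap on the number of entries — the kind of bound the last question asks about — but it pays for that by taking a single exclusive lock on *every* operation, including reads, because a `Get` mutates the recency order.

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// entry pairs a key with its value so eviction can also delete from the map.
type entry struct {
	key, value string
}

// LRUCache caps the number of entries (a hard size guarantee) at the
// cost of exclusive locking on both reads and writes.
type LRUCache struct {
	mu       sync.Mutex
	capacity int
	order    *list.List               // front = most recently used
	items    map[string]*list.Element // key -> element in order
}

func NewLRUCache(capacity int) *LRUCache {
	return &LRUCache{
		capacity: capacity,
		order:    list.New(),
		items:    make(map[string]*list.Element),
	}
}

func (c *LRUCache) Put(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).value = value
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.capacity {
		// Evict the least recently used entry: O(1).
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	c.items[key] = c.order.PushFront(&entry{key, value})
}

func (c *LRUCache) Get(key string) (string, bool) {
	c.mu.Lock() // exclusive even for reads: Get mutates recency order
	defer c.mu.Unlock()
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).value, true
}

func main() {
	c := NewLRUCache(2)
	c.Put("a", "1")
	c.Put("b", "2")
	c.Put("c", "3") // capacity reached, so "a" is evicted
	_, ok := c.Get("a")
	fmt.Println(ok) // prints "false"
}
```

Run the questions above against this sketch: it offers a strict entry-count guarantee and O(1) eviction, but read-heavy workloads contend badly on the single mutex — exactly the sort of trade-off where one of the other implementations would win.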

## Extensions

### Computing cache