Added example for limiting workflow concurrency in .NET fan-in/out example #4132
Merged
Commits (8)
41fa825
Added workflow concurrency example for .NET
WhitWaldo 732b92b
Removed extension method that was doubling up calls and shortened to …
WhitWaldo 5220e55
Removed unused extension method
WhitWaldo 499819a
Neglected to include the Task.WhenAll line persisting the remaining r…
WhitWaldo d45dbc5
Fixed parallelism limit
WhitWaldo 7ac6ded
Adding proposed concluding thoughts
WhitWaldo 30c9231
Approved proposed language
WhitWaldo 1e6e609
Merge branch 'v1.13' into limit-workflow-concurrency-net
hhunter-ms
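The merged example itself isn't reproduced in this capture. As context for the commit messages above (the `Task.WhenAll` fan-in and the parallelism limit), a common way to cap fan-out concurrency in .NET is to gate the work behind a `SemaphoreSlim` and fan the results back in with `Task.WhenAll`. This is a sketch of that general pattern, not the PR's actual docs example, which may differ:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class ThrottledFanOut
{
    // Runs one task per item, but allows at most maxParallelism of them
    // to be in flight at once; Task.WhenAll fans the results back in.
    public static async Task<int[]> ProcessAsync(IEnumerable<int> items, int maxParallelism)
    {
        using var semaphore = new SemaphoreSlim(maxParallelism);
        var tasks = items.Select(async item =>
        {
            await semaphore.WaitAsync();
            try
            {
                await Task.Delay(10); // stand-in for real async work
                return item * 2;
            }
            finally
            {
                semaphore.Release();
            }
        });
        return await Task.WhenAll(tasks);
    }
}

public static class Program
{
    public static async Task Main()
    {
        var results = await ThrottledFanOut.ProcessAsync(Enumerable.Range(1, 20), maxParallelism: 5);
        Console.WriteLine(string.Join(", ", results));
    }
}
```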
performance nit:
I was actually quite curious about this as I've seen this proposed elsewhere, so I wrote up a small benchmark to test it out. `WithListSize` is your proposal and `WithoutListSize` is the original. I tested with a set of 10, 100, 1000 and 10000 numbers in the list.

My benchmarking code:
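The benchmark source did not survive the page capture. A sketch of what such a BenchmarkDotNet comparison could look like follows; the method names `WithListSize` and `WithoutListSize` come from the comment, while the bodies and the use of `AddRange` are assumptions based on the follow-up discussion:

```csharp
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser]
public class ListSizeBenchmark
{
    [Params(10, 100, 1000, 10000)]
    public int N;

    private int[] _source;

    [GlobalSetup]
    public void Setup() => _source = new int[N];

    [Benchmark]
    public List<int> WithListSize()
    {
        var list = new List<int>(N); // capacity pre-sized to N
        list.AddRange(_source);
        return list;
    }

    [Benchmark]
    public List<int> WithoutListSize()
    {
        var list = new List<int>(); // default capacity
        list.AddRange(_source);
        return list;
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<ListSizeBenchmark>();
}
```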
And the summary:
BenchmarkDotNet v0.13.12, Windows 10 (10.0.19045.4291/22H2/2022Update)
AMD Ryzen Threadripper 1950X, 1 CPU, 32 logical and 16 physical cores
.NET SDK 8.0.300-preview.24203.14
[Host] : .NET 8.0.4 (8.0.424.16909), X64 RyuJIT AVX2
DefaultJob : .NET 8.0.4 (8.0.424.16909), X64 RyuJIT AVX2
In conclusion, the allocations are the same either way, but it's ever so slightly faster to stick with `new List<int>()` over `new List<int>(N)`.
Could it be because you're using `AddRange` in your benchmark instead of `Add`? Using a for-loop in the benchmark which makes individual `Add` calls might be more representative.
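The suggestion makes sense because `List<T>.AddRange` from an array can size the backing store in a single step, whereas per-item `Add` forces a default-capacity list to grow by doubling its backing array. A small standalone sketch (hypothetical, not the rewritten benchmark) that makes the difference visible via `Capacity`:

```csharp
using System;
using System.Collections.Generic;

public static class AddLoopDemo
{
    public static void Main()
    {
        const int n = 1000;

        // Per-item Add: the default-capacity list doubles its backing array
        // as it grows (4, 8, 16, ...), so capacity overshoots n and each
        // resize allocates a fresh array.
        var grown = new List<int>();
        for (var i = 0; i < n; i++) grown.Add(i);

        // Pre-sized list: one backing array of exactly n, no resizes.
        var presized = new List<int>(n);
        for (var i = 0; i < n; i++) presized.Add(i);

        Console.WriteLine($"grown capacity: {grown.Capacity}, presized capacity: {presized.Capacity}");
        // grown ends at 1024 (next doubling >= 1000); presized stays at 1000.
    }
}
```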
That's fair - I rewrote the benchmark to be nearly identical to the code here:
I've got to say, I've run this test a dozen times with different iterations and, from a timing perspective, the results are mixed. From a memory allocation perspective, with the exception of small list counts (sub-40 items), the allocated memory is less with an empty list than when setting the size during initialization. In the latest iteration, below, the empty list was ever so slightly slower than the initialized-size approach, but the speed has flip-flopped back and forth each time I've run it (perhaps it's the use of the async tasks that's fiddling with the speed, who knows). Either way, with allocations being the only consistent difference between the two, I don't know that the performance benefit is there to specify the size at initialization:
Round 1
BenchmarkDotNet v0.13.12, Windows 10 (10.0.19045.4291/22H2/2022Update)
AMD Ryzen Threadripper 1950X, 1 CPU, 32 logical and 16 physical cores
.NET SDK 8.0.300-preview.24203.14
[Host] : .NET 8.0.4 (8.0.424.16909), X64 RyuJIT AVX2 [AttachedDebugger]
DefaultJob : .NET 8.0.4 (8.0.424.16909), X64 RyuJIT AVX2
Round 2