ASoC: SOF: ipc4-topology: Keep secondary core on until pipeline deletion #4692
Conversation
A Data Processing (DP) domain widget can be scheduled to run on a secondary core while its pipeline runs on the primary core. In this case, the secondary core could get powered off before the pipeline is deleted, resulting in a crash when the firmware tries to free the DP module during pipeline deletion. Prevent this by setting the pipe_widget's core ID to match that of the widget so that the core is kept powered on until the pipeline has been deleted. This works only under the assumption that a pipeline contains a single DP module running on a secondary core. Signed-off-by: Ranjani Sridharan <[email protected]>
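The mechanism described above can be sketched as follows. This is a simplified, hypothetical model, not the actual SOF driver code: the struct names (`widget`, `pipe_widget`) and the helper `sync_pipe_widget_core` are illustrative stand-ins for the real ipc4-topology structures.

```c
#include <assert.h>

#define SOF_DSP_PRIMARY_CORE 0

/* Hypothetical, simplified widget model (not the real SOF structs) */
struct widget {
	int core;	/* core the widget is scheduled on */
	int is_dp;	/* 1 if Data Processing domain, 0 if LL */
};

struct pipe_widget {
	int core;	/* core the FW uses to run pipeline management code */
};

/*
 * If a DP widget is scheduled on a secondary core, propagate that core ID
 * to the pipe_widget. The core is then kept powered on until the pipeline
 * has been deleted, so the firmware can still free the DP module.
 */
static void sync_pipe_widget_core(struct pipe_widget *pw,
				  const struct widget *w)
{
	if (w->is_dp && w->core != SOF_DSP_PRIMARY_CORE)
		pw->core = w->core;
}
```

Note that this only tracks one core per pipeline, which is exactly the single-DP-module assumption the commit message states.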
/*
 * A Data Processing (DP) domain widget can be scheduled to run on a secondary core while
 * it's pipeline runs the primary core. In this case, the secondary core could get powered
the pipeline has nothing to do with cores. Never has, never will.
I think you meant "a data processing domain widget can be scheduled to run on a secondary core while other widgets of the same pipeline run on the primary core" ?
It would be useful to also explain why LL is not impacted, or what's so special about DP?
What's special is this:
A pipeline has a core it is bound to, and all LL modules that belong to the pipeline are bound to the same core as the pipeline itself.
One of the DP principles was that a single module from the pipeline can be offloaded to a secondary core and executed there, i.e.
LL1 (core0) ---> LL2 (core0) ---> DP1 (core1) ---> LL3 (core0) --> DP2 (core2) --> LL4 (core0)
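The chain above uses two distinct secondary cores (core1 for DP1, core2 for DP2), which is the configuration a single pipe_widget core ID cannot cover. A minimal sketch (hypothetical names, not driver code) of detecting how many secondary cores a pipeline's widgets actually use:

```c
#include <assert.h>

#define PRIMARY_CORE 0
#define MAX_CORES 4

/*
 * cores[] holds the core ID of each widget in the pipeline, e.g. the chain
 * LL1(0) -> LL2(0) -> DP1(1) -> LL3(0) -> DP2(2) -> LL4(0) is {0,0,1,0,2,0}.
 * Returns the number of distinct non-primary cores used.
 */
static int count_secondary_cores(const int *cores, int n)
{
	int seen[MAX_CORES] = { 0 };
	int i, count = 0;

	for (i = 0; i < n; i++) {
		if (cores[i] != PRIMARY_CORE && !seen[cores[i]]) {
			seen[cores[i]] = 1;
			count++;
		}
	}
	return count;
}
```

For the example chain this returns 2, so pinning the pipe_widget to a single DP widget's core would leave the other secondary core unprotected.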
 * it's pipeline runs the primary core. In this case, the secondary core could get powered
 * off before the pipeline is deleted resulting in a crash when the firmware tries to free
 * the DP module during pipeline deletion. Prevent this by setting the pipe_widget's core
 * ID to match that of the widget so that the core would be kept powered on until the
the pipe_widget->core is a concept that should not exist. The pipeline does not "run", only module instances are scheduled, and the scheduling does not happen at the pipeline level.
The pipeline core ID was added to make FW management of the pipeline easier; it is just information for the FW about which core it should run the pipeline management code on.
EDIT: and all LLs of this pipeline should run on the same core as the pipeline itself (at least for now)
EDIT: and all LLs of this pipeline should run on the same core as the pipeline itself (at least for now)
We already know that changes are coming for LL on multiple cores so we don't want to have to revisit the core management multiple times.... Let's do the things right and future-proof, shall we?
 * the DP module during pipeline deletion. Prevent this by setting the pipe_widget's core
 * ID to match that of the widget so that the core would be kept powered on until the
 * pipeline has been deleted. This would work only in the case where a pipeline only
 * contains a single DP module that is scheduled on a secondary core.
This is not great....
At the very least we would need to have something that detects that there are more than 2 non-primary cores used by a pipeline. But clearly with the AEC/NS sequence it's quite likely that this configuration will happen.
But I would really not invest any time in this, it's a stop-gap which will be defeated by 3rd parties.
I agree, a pipeline with a couple of DPs spread across all cores will probably be needed very soon.
replaced with #4705