[Deepin Kernel SIG] [Intel] Intel pstate backport from v6.11 #540

Merged

Changes shown from 1 commit. All 22 commits in this pull request:
e4dcdfc  cpufreq: intel_pstate: Revise global turbo disable check (spandruvada, Sep 7, 2023)
7497277  cpufreq: intel_pstate: Prioritize firmware-provided balance performan… (spandruvada, Nov 20, 2023)
a82506e  cpufreq: intel_pstate: Add Emerald Rapids support in no-HWP mode (ZhenguoYao1, Dec 13, 2023)
afabdef  cpufreq: intel_pstate: remove cpudata::prev_cummulative_iowait (Feb 13, 2024)
65b4db9  cpufreq: intel_pstate: Drop redundant locking from intel_pstate_drive… (rafaeljw, Mar 21, 2024)
4369b85  cpufreq: intel_pstate: Wait for canceled delayed work to complete (rafaeljw, Mar 21, 2024)
6a59524  cpufreq: intel_pstate: Get rid of unnecessary READ_ONCE() annotations (rafaeljw, Mar 28, 2024)
c0a0faf  cpufreq: intel_pstate: Use __ro_after_init for three variables (rafaeljw, Mar 21, 2024)
bbe7866  cpufreq: intel_pstate: Fold intel_pstate_max_within_limits() into caller (rafaeljw, Mar 25, 2024)
855eac6  cpufreq: intel_pstate: Do not update global.turbo_disabled after init… (rafaeljw, Mar 25, 2024)
c18eb71  cpufreq: intel_pstate: Rearrange show_no_turbo() and store_no_turbo() (rafaeljw, Mar 25, 2024)
b33662e  cpufreq: intel_pstate: Read global.no_turbo under READ_ONCE() (rafaeljw, Mar 25, 2024)
4cbadce  cpufreq: intel_pstate: Replace three global.turbo_disabled checks (rafaeljw, Mar 25, 2024)
77e85a7  cpufreq: intel_pstate: Update the maximum CPU frequency consistently (rafaeljw, Mar 28, 2024)
e2442cb  cpufreq: intel_pstate: hide unused intel_pstate_cpu_oob_ids[] (arndb, Apr 3, 2024)
7146aaa  cpufreq: intel_pstate: fix struct cpudata::epp_cached kernel-doc (May 5, 2024)
1c754db  cpufreq: intel_pstate: Fix unchecked HWP MSR access (spandruvada, May 31, 2024)
0d1b676  cpufreq: intel_pstate: Check turbo_is_disabled() in store_no_turbo() (rafaeljw, Jun 11, 2024)
48c1247  x86/cpufeatures: Add HWP highest perf change feature flag (spandruvada, Jun 24, 2024)
e20aa05  cpufreq: intel_pstate: Replace boot_cpu_has() (spandruvada, Jun 24, 2024)
e33b906  cpufreq: intel_pstate: Support highest performance change interrupt (spandruvada, Jun 24, 2024)
f696d8c  cpufreq: intel_pstate: Update Balance performance EPP for Emerald Rapids (phckopper, Aug 1, 2024)
cpufreq: intel_pstate: Get rid of unnecessary READ_ONCE() annotations
commit 0f2828e upstream.

Drop two redundant checks involving READ_ONCE() from notify_hwp_interrupt()
and make it check hwp_active without READ_ONCE(), which is not necessary
because that variable is only set once during the early initialization of
the driver.

In order to make that clear, annotate hwp_active with __ro_after_init.

Intel-SIG: commit 0f2828e cpufreq: intel_pstate: Get rid of unnecessary READ_ONCE() annotations.
Backport intel_pstate driver update for 6.6 from 6.11

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
[ Yingbao Jia: amend commit log ]
Signed-off-by: Yingbao Jia <yingbao.jia@intel.com>
rafaeljw authored and Avenger-285714 committed Dec 27, 2024
commit 6a5952479fc1ca08f5abaa5d9af6edc1c86e1f6c
drivers/cpufreq/intel_pstate.c: 27 changes (5 additions, 22 deletions)

--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -292,7 +292,7 @@ struct pstate_funcs {
 
 static struct pstate_funcs pstate_funcs __read_mostly;
 
-static int hwp_active __read_mostly;
+static bool hwp_active __ro_after_init;
 static int hwp_mode_bdw __read_mostly;
 static bool per_cpu_limits __read_mostly;
 static bool hwp_boost __read_mostly;
@@ -1635,11 +1635,10 @@ static cpumask_t hwp_intr_enable_mask;
 void notify_hwp_interrupt(void)
 {
         unsigned int this_cpu = smp_processor_id();
-        struct cpudata *cpudata;
         unsigned long flags;
         u64 value;
 
-        if (!READ_ONCE(hwp_active) || !boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
+        if (!hwp_active || !boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
                 return;
 
         rdmsrl_safe(MSR_HWP_STATUS, &value);
@@ -1651,24 +1650,8 @@ void notify_hwp_interrupt(void)
         if (!cpumask_test_cpu(this_cpu, &hwp_intr_enable_mask))
                 goto ack_intr;
 
-        /*
-         * Currently we never free all_cpu_data. And we can't reach here
-         * without this allocated. But for safety for future changes, added
-         * check.
-         */
-        if (unlikely(!READ_ONCE(all_cpu_data)))
-                goto ack_intr;
-
-        /*
-         * The free is done during cleanup, when cpufreq registry is failed.
-         * We wouldn't be here if it fails on init or switch status. But for
-         * future changes, added check.
-         */
-        cpudata = READ_ONCE(all_cpu_data[this_cpu]);
-        if (unlikely(!cpudata))
-                goto ack_intr;
-
-        schedule_delayed_work(&cpudata->hwp_notify_work, msecs_to_jiffies(10));
+        schedule_delayed_work(&all_cpu_data[this_cpu]->hwp_notify_work,
+                              msecs_to_jiffies(10));
 
         raw_spin_unlock_irqrestore(&hwp_notify_lock, flags);
 
@@ -3466,7 +3449,7 @@ static int __init intel_pstate_init(void)
          * deal with it.
          */
         if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) || hwp_forced) {
-                WRITE_ONCE(hwp_active, 1);
+                hwp_active = true;
                 hwp_mode_bdw = id->driver_data;
                 intel_pstate.attr = hwp_cpufreq_attrs;
                 intel_cpufreq.attr = hwp_cpufreq_attrs;
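
For background, here is a minimal, self-contained sketch of the pattern the change above relies on. It is a hypothetical example module, not code from the patch: a flag that is written exactly once on the init path can be annotated __ro_after_init and then read directly, with no READ_ONCE(), because its value can never change afterwards.

/*
 * Minimal sketch (hypothetical module, not from this patch): a flag that is
 * written exactly once during init and annotated __ro_after_init, so later
 * readers do not need READ_ONCE() because the value can no longer change.
 */
#include <linux/cache.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>

static bool example_active __ro_after_init;     /* hypothetical flag */

static void example_event_handler(void)
{
        /* Plain read: once init is done, this memory is effectively read-only. */
        if (!example_active)
                return;

        pr_info("example: feature is active\n");
}

static int __init example_init(void)
{
        /* The single write, performed before anyone else reads the flag. */
        example_active = true;

        example_event_handler();
        return 0;
}

static void __exit example_exit(void)
{
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Sketch of the __ro_after_init single-write pattern");

In the driver itself, hwp_active plays this role: intel_pstate_init() performs the single write and notify_hwp_interrupt() only ever reads the flag.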