commit 0a4a7855302d56a1d75cec3aa9a6914a3af9c6af
Author: Greg Kroah-Hartman
Date:   Tue Aug 8 20:03:51 2023 +0200

    Linux 6.1.44

    Tested-by: Salvatore Bonaccorso
    Signed-off-by: Greg Kroah-Hartman

commit dd5f2ef16e3c6b83f14b5e620f51f42bc05a5d47
Author: Greg Kroah-Hartman
Date:   Tue Aug 8 19:20:48 2023 +0200

    x86: fix backwards merge of GDS/SRSO bit

    Stable-tree-only change.

    Due to the way the GDS and SRSO patches flowed into the stable tree,
    it was a 50/50 chance which bit values GDS and SRSO would end up with
    after the merge. Of course, I lost that bet, and chose the opposite
    of what Linus chose in commit 64094e7e3118 ("Merge tag
    'gds-for-linus-2023-08-01' of
    git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip")

    Fix this up by switching the values to match what is now in Linus's
    tree, as that is the correct value to mirror.

    Signed-off-by: Greg Kroah-Hartman

commit fa5b932b77c815d0e416612859d5899424bb4212
Author: Ross Lagerwall
Date:   Thu Aug 3 08:41:22 2023 +0200

    xen/netback: Fix buffer overrun triggered by unusual packet

    commit 534fc31d09b706a16d83533e16b5dc855caf7576 upstream.

    It is possible that a guest can send a packet that contains a head +
    18 slots and yet has a len <= XEN_NETBACK_TX_COPY_LEN. This causes
    nr_slots to underflow in xenvif_get_requests(), which makes the
    subsequent loop's termination condition wrong and leads to a buffer
    overrun of queue->tx_map_ops.

    Rework the code to account for the extra frag_overflow slots.

    This is CVE-2023-34319 / XSA-432.

    Fixes: ad7f402ae4f4 ("xen/netback: Ensure protocol headers don't fall in the non-linear area")
    Signed-off-by: Ross Lagerwall
    Reviewed-by: Paul Durrant
    Reviewed-by: Wei Liu
    Signed-off-by: Juergen Gross
    Signed-off-by: Greg Kroah-Hartman
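The bug class fixed above is a classic unsigned underflow: subtracting more slots than remain makes the counter wrap to a huge value, so a loop bounded by it runs far past the end of the destination array. A minimal standalone sketch of the pattern (hypothetical names, not the actual xenvif_get_requests() code):

    /* Sketch of the bug class: unsigned wrap-around in a loop bound. */
    #include <stdio.h>

    static void copy_slots(unsigned int nr_slots, unsigned int consumed)
    {
            /* If consumed > nr_slots, this wraps around to near UINT_MAX... */
            nr_slots -= consumed;
            /* ...and a loop like "while (nr_slots--)" overruns its buffer. */
            printf("remaining loop iterations: %u\n", nr_slots);
    }

    int main(void)
    {
            copy_slots(2, 3);       /* prints 4294967295 with 32-bit unsigned */
            return 0;
    }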
commit 4f25355540ad4d40dd3445f66159a321dad29cc8
Author: Borislav Petkov (AMD)
Date:   Mon Aug 7 10:46:04 2023 +0200

    x86/srso: Tie SBPB bit setting to microcode patch detection

    commit 5a15d8348881e9371afdf9f5357a135489496955 upstream.

    The SBPB bit in MSR_IA32_PRED_CMD is supported only after a microcode
    patch has been applied, so set X86_FEATURE_SBPB only then. Otherwise,
    guests would attempt to set that bit and #GP on the MSR write.

    While at it, make SMT detection more robust: some guests - depending
    on how and which CPUID leaves they report - end up with
    cpu_smt_control set to CPU_SMT_NOT_SUPPORTED, but SRSO_NO should be
    set for any guest incarnation where one simply cannot do SMT, for
    whatever reason.

    Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
    Reported-by: Konrad Rzeszutek Wilk
    Reported-by: Salvatore Bonaccorso
    Signed-off-by: Borislav Petkov (AMD)
    Signed-off-by: Greg Kroah-Hartman

commit 77cf32d0dbfbf575fe66561e069228c532dc1da9
Author: Borislav Petkov (AMD)
Date:   Fri Jul 28 23:03:22 2023 +0200

    x86/srso: Add a forgotten NOENDBR annotation

    Upstream commit: 3bbbe97ad83db8d9df06daf027b0840188de625d

    Fix:

      vmlinux.o: warning: objtool: .export_symbol+0x29e40: data relocation to !ENDBR: srso_untrain_ret_alias+0x0

    Reported-by: Linus Torvalds
    Signed-off-by: Borislav Petkov (AMD)
    Signed-off-by: Greg Kroah-Hartman

commit c7f2cd04554259c2474c4f9fa134528bc2826b22
Author: Josh Poimboeuf
Date:   Fri Jul 28 17:28:43 2023 -0500

    x86/srso: Fix return thunks in generated code

    Upstream commit: 238ec850b95a02dcdff3edc86781aa913549282f

    Set X86_FEATURE_RETHUNK when enabling the SRSO mitigation so that
    generated code (e.g., ftrace, static call, eBPF) generates
    "jmp __x86_return_thunk" instead of RET.

    [ bp: Add a comment. ]

    Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
    Signed-off-by: Josh Poimboeuf
    Signed-off-by: Borislav Petkov (AMD)
    Signed-off-by: Greg Kroah-Hartman

commit c9ae63d773ca182c4ef63fbdd22cdf090d9c1cd7
Author: Borislav Petkov (AMD)
Date:   Fri Jul 7 13:53:41 2023 +0200

    x86/srso: Add IBPB on VMEXIT

    Upstream commit: d893832d0e1ef41c72cdae444268c1d64a2be8ad

    Add the option to flush IBPB only on VMEXIT, in order to protect
    against malicious guests in setups where the software running on the
    hypervisor is otherwise trusted.

    Signed-off-by: Borislav Petkov (AMD)
    Signed-off-by: Greg Kroah-Hartman

commit 79c8091888ef61aac79ef72122d1e6cd0b620669
Author: Borislav Petkov (AMD)
Date:   Thu Jul 6 15:04:35 2023 +0200

    x86/srso: Add IBPB

    Upstream commit: 233d6f68b98d480a7c42ebe78c38f79d44741ca9

    Add the option to mitigate using IBPB on kernel entry. Pull in the
    Retbleed alternative so that the IBPB call from there can be used.
    Also, if the Retbleed mitigation is done using IBPB, the same
    mitigation can and must be used here.

    Signed-off-by: Borislav Petkov (AMD)
    Signed-off-by: Greg Kroah-Hartman

commit 98f62883e7519011bf63f85381d637f65d7f180e
Author: Borislav Petkov (AMD)
Date:   Thu Jun 29 17:43:40 2023 +0200

    x86/srso: Add SRSO_NO support

    Upstream commit: 1b5277c0ea0b247393a9c426769fde18cff5e2f6

    Add support for the CPUID flag which denotes that the CPU is not
    affected by SRSO.

    Signed-off-by: Borislav Petkov (AMD)
    Signed-off-by: Greg Kroah-Hartman

commit 9139f4b6dd4fe1003ba79ab317d1a9f48849b369
Author: Borislav Petkov (AMD)
Date:   Tue Jul 18 11:13:40 2023 +0200

    x86/srso: Add IBPB_BRTYPE support

    Upstream commit: 79113e4060aba744787a81edb9014f2865193854

    Add support for the synthetic CPUID flag which denotes that, "if this
    bit is 1, it indicates that MSR 49h (PRED_CMD) bit 0 (IBPB) flushes
    all branch type predictions from the CPU branch predictor."

    This flag is there so that this capability in guests can be detected
    easily (otherwise one would have to track microcode revisions, which
    is impossible for guests).

    It is also needed only for Zen3 and Zen4. The other two (Zen1 and
    Zen2) always flush branch type predictions by default.

    Signed-off-by: Borislav Petkov (AMD)
    Signed-off-by: Greg Kroah-Hartman
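Such a synthetic capability is deliberately cheap for a guest to probe: it is just a CPUID bit. A minimal user-space sketch of the probe (the leaf is 0x80000021 EAX per the commit above; the exact bit position below is an assumption for illustration - check arch/x86/include/asm/cpufeatures.h for the authoritative encoding):

    /* Sketch: probing CPUID leaf 0x80000021 EAX (x86-64, GCC/Clang). */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (__get_cpuid_max(0x80000000, 0) < 0x80000021)
                    return 1;       /* leaf not available */
            __cpuid(0x80000021, eax, ebx, ecx, edx);
            /* Bit 28 as IBPB_BRTYPE is an assumption; verify before use. */
            printf("IBPB_BRTYPE: %s\n", (eax & (1u << 28)) ? "yes" : "no");
            return 0;
    }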
commit ac41e90d8daa8815d8bee774a1975435fbfe1ae7
Author: Borislav Petkov (AMD)
Date:   Wed Jun 28 11:02:39 2023 +0200

    x86/srso: Add a Speculative RAS Overflow mitigation

    Upstream commit: fb3bd914b3ec28f5fb697ac55c4846ac2d542855

    Add a mitigation for the speculative return address stack overflow
    vulnerability found on AMD processors.

    The mitigation works by ensuring all RET instructions speculate to a
    controlled location, similar to how speculation is controlled in the
    retpoline sequence. To accomplish this, the __x86_return_thunk forces
    the CPU to mispredict every function return using a 'safe return'
    sequence.

    To ensure the safety of this mitigation, the kernel must ensure that
    the safe return sequence is itself free from attacker interference.
    In Zen3 and Zen4, this is accomplished by creating a BTB alias
    between the untraining function srso_untrain_ret_alias() and the safe
    return function srso_safe_ret_alias(), which results in evicting a
    potentially poisoned BTB entry and using that safe one for all
    function returns.

    In older Zen1 and Zen2, this is accomplished using a reinterpretation
    technique similar to the Retbleed one: srso_untrain_ret() and
    srso_safe_ret().

    Signed-off-by: Borislav Petkov (AMD)
    Signed-off-by: Greg Kroah-Hartman

commit dec3b91f2c4b2c9b24d933e2c3f17493e30149ac
Author: Kim Phillips
Date:   Tue Jan 10 16:46:37 2023 -0600

    x86/cpu, kvm: Add support for CPUID_80000021_EAX

    commit 8415a74852d7c24795007ee9862d25feb519007c upstream.

    Add support for CPUID leaf 80000021, EAX. The majority of the
    features will be used in the kernel and thus a separate leaf is
    appropriate.

    Include KVM's reverse_cpuid entry because features are used by VM
    guests, too.

    [ bp: Massage commit message. ]

    Signed-off-by: Kim Phillips
    Signed-off-by: Borislav Petkov (AMD)
    Acked-by: Sean Christopherson
    Link: https://lore.kernel.org/r/20230124163319.2277355-2-kim.phillips@amd.com
    [bwh: Backported to 6.1: adjust context]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit dfede4cb8ef732039b7a479d260bd89d3b474f14
Author: Borislav Petkov (AMD)
Date:   Sat Jul 8 10:21:35 2023 +0200

    x86/bugs: Increase the x86 bugs vector size to two u32s

    Upstream commit: 0e52740ffd10c6c316837c6c128f460f1aaba1ea

    There was never a doubt in my mind that they would not fit into a
    single u32 eventually.

    Signed-off-by: Borislav Petkov (AMD)
    Signed-off-by: Greg Kroah-Hartman

commit dacb0bac2edb649ce01c25da9f8898769516d716
Author: Dave Hansen
Date:   Tue Aug 1 07:31:07 2023 -0700

    Documentation/x86: Fix backwards on/off logic about YMM support

    commit 1b0fc0345f2852ffe54fb9ae0e12e2ee69ad6a20 upstream.

    These options clearly turn *off* XSAVE YMM support. Correct the typo.

    Reported-by: Ben Hutchings
    Fixes: 553a5c03e90a ("x86/speculation: Add force option to GDS mitigation")
    Signed-off-by: Dave Hansen
    Signed-off-by: Greg Kroah-Hartman

commit 051f5dcf144aa7659c4f4be04c66c3eda9b1bad3
Author: Peter Zijlstra
Date:   Tue Oct 25 21:38:25 2022 +0200

    x86/mm: Initialize text poking earlier

    commit 5b93a83649c7cba3a15eb7e8959b250841acb1b1 upstream.

    Move poking_init() up a bunch; specifically move it right after
    mm_init() which is right before ftrace_init().

    This will allow simplifying ftrace text poking which currently has a
    bunch of exceptions for early boot.

    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20221025201057.881703081@infradead.org
    Signed-off-by: Greg Kroah-Hartman

commit e0fd83a193c530fdeced8b2e2ec83039ffdb884b
Author: Peter Zijlstra
Date:   Tue Oct 25 21:38:18 2022 +0200

    mm: Move mm_cachep initialization to mm_init()

    commit af80602799681c78f14fbe20b6185a56020dedee upstream.

    In order to allow using mm_alloc() much earlier, move initializing
    mm_cachep into mm_init().

    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20221025201057.751153381@infradead.org
    Signed-off-by: Greg Kroah-Hartman

commit 9ae15aaff39c831e2f9d8b029e85a2d70c7c8a68
Author: Peter Zijlstra
Date:   Tue Oct 25 21:38:21 2022 +0200

    x86/mm: Use mm_alloc() in poking_init()

    commit 3f4c8211d982099be693be9aa7d6fc4607dff290 upstream.

    Instead of duplicating init_mm, allocate a fresh mm. The advantage is
    that mm_alloc() has much simpler dependencies. Additionally it makes
    more conceptual sense: init_mm has no (and must not have) user state
    to duplicate.

    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20221025201057.816175235@infradead.org
    Signed-off-by: Greg Kroah-Hartman
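With mm_alloc() available early, the poking setup reduces to allocating a plain, empty address space. A minimal sketch of the resulting pattern (the real poking_init() also sets up the poking page mappings, which are omitted here):

    /* Sketch: allocate a fresh, empty mm instead of duplicating init_mm. */
    struct mm_struct *poking_mm;

    void __init poking_init(void)
    {
            poking_mm = mm_alloc();
            BUG_ON(!poking_mm);
            /* ... map the text poking page(s) into poking_mm ... */
    }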
commit d972c8c08f96518ff02efd87c4fef594a833f6ea
Author: Juergen Gross
Date:   Mon Jan 9 16:09:22 2023 +0100

    x86/mm: fix poking_init() for Xen PV guests

    commit 26ce6ec364f18d2915923bc05784084e54a5c4cc upstream.

    Commit 3f4c8211d982 ("x86/mm: Use mm_alloc() in poking_init()") broke
    the kernel when running as a Xen PV guest. It seems as if the new
    address space is never activated before being used, resulting in Xen
    refusing to accept the new CR3 value (the PGD isn't pinned).

    Fix that by adding the now missing call of paravirt_arch_dup_mmap()
    to poking_init(). That call was previously done by
    dup_mm()->dup_mmap() and it is a NOP in all cases except Xen PV,
    where it just pins the PGD.

    Fixes: 3f4c8211d982 ("x86/mm: Use mm_alloc() in poking_init()")
    Signed-off-by: Juergen Gross
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20230109150922.10578-1-jgross@suse.com
    Signed-off-by: Greg Kroah-Hartman

commit 7f3982de36c6620c2faae6fd960fa4021d71e16a
Author: Juergen Gross
Date:   Mon Jul 3 15:00:32 2023 +0200

    x86/xen: Fix secondary processors' FPU initialization

    commit fe3e0a13e597c1c8617814bf9b42ab732db5c26e upstream.

    Moving the call of fpu__init_cpu() from cpu_init() to
    start_secondary() broke Xen PV guests, as those don't call
    start_secondary() for APs.

    Call fpu__init_cpu() in Xen's cpu_bringup(), which is the Xen PV
    replacement of start_secondary().

    Fixes: b81fac906a8f ("x86/fpu: Move FPU initialization into arch_cpu_finalize_init()")
    Signed-off-by: Juergen Gross
    Signed-off-by: Borislav Petkov (AMD)
    Reviewed-by: Boris Ostrovsky
    Acked-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230703130032.22916-1-jgross@suse.com
    Signed-off-by: Greg Kroah-Hartman

commit baa7b7501e41344f95da0bd3042dd04110d58edb
Author: Thomas Gleixner
Date:   Fri Jun 16 22:15:31 2023 +0200

    x86/mem_encrypt: Unbreak the AMD_MEM_ENCRYPT=n build

    commit 0a9567ac5e6a40cdd9c8cd15b19a62a15250f450 upstream.

    Moving mem_encrypt_init() broke the AMD_MEM_ENCRYPT=n build because
    the declaration of that function was under #ifdef
    CONFIG_AMD_MEM_ENCRYPT and the obvious placement for the inline stub
    was the #else path.

    This is a leftover of commit 20f07a044a76 ("x86/sev: Move common
    memory encryption code to mem_encrypt.c") which made
    mem_encrypt_init() depend on X86_MEM_ENCRYPT without moving the
    prototype. That did not fail back then because there was no inline
    stub; the core init code had a weak fallback function instead.

    Move both the declaration and the stub out of the
    CONFIG_AMD_MEM_ENCRYPT section and guard it with
    CONFIG_X86_MEM_ENCRYPT.

    Fixes: 439e17576eb4 ("init, x86: Move mem_encrypt_init() into arch_cpu_finalize_init()")
    Reported-by: kernel test robot
    Signed-off-by: Thomas Gleixner
    Closes: https://lore.kernel.org/oe-kbuild-all/202306170247.eQtCJPE8-lkp@intel.com/
    Signed-off-by: Greg Kroah-Hartman
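The fix is the standard declaration/stub split, just hung off the broader config option. A sketch of the resulting header layout (simplified from what the commit describes):

    /* Sketch: declaration and inline stub guarded by the wider option. */
    #ifdef CONFIG_X86_MEM_ENCRYPT
    void __init mem_encrypt_init(void);
    #else
    static inline void mem_encrypt_init(void) { }
    #endif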
commit b6fd07c41b4c64faff368728cef13439ee62860d
Author: Daniel Sneddon
Date:   Tue Aug 1 16:36:26 2023 +0200

    KVM: Add GDS_NO support to KVM

    commit 81ac7e5d741742d650b4ed6186c4826c1a0631a7 upstream.

    Gather Data Sampling (GDS) is a transient execution attack using
    gather instructions from the AVX2 and AVX512 extensions. This attack
    allows malicious code to infer data that was previously stored in
    vector registers. Systems that are not vulnerable to GDS will set the
    GDS_NO bit of the IA32_ARCH_CAPABILITIES MSR. This is useful for VM
    guests that may think they are on vulnerable systems that are, in
    fact, not affected. Guests that are running on affected hosts where
    the mitigation is enabled are protected as if they were running on an
    unaffected system.

    On all hosts that are not affected or that are mitigated, set the
    GDS_NO bit.

    Signed-off-by: Daniel Sneddon
    Signed-off-by: Dave Hansen
    Acked-by: Josh Poimboeuf
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit c04579e95492dff342cb4976dd2f5728c0f87eee
Author: Daniel Sneddon
Date:   Tue Aug 1 16:36:26 2023 +0200

    x86/speculation: Add Kconfig option for GDS

    commit 53cf5797f114ba2bd86d23a862302119848eff19 upstream.

    Gather Data Sampling (GDS) is mitigated in microcode. However, on
    systems that haven't received the updated microcode, disabling AVX
    can act as a mitigation. Add a Kconfig option that uses the microcode
    mitigation if available and disables AVX otherwise. Setting this
    option has no effect on systems not affected by GDS. This is the
    equivalent of setting gather_data_sampling=force.

    Signed-off-by: Daniel Sneddon
    Signed-off-by: Dave Hansen
    Acked-by: Josh Poimboeuf
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 92fc27c79bc7f3e2bfd2b88e197762566daf02a1
Author: Daniel Sneddon
Date:   Tue Aug 1 16:36:26 2023 +0200

    x86/speculation: Add force option to GDS mitigation

    commit 553a5c03e90a6087e88f8ff878335ef0621536fb upstream.

    The Gather Data Sampling (GDS) vulnerability allows malicious
    software to infer stale data previously stored in vector registers.
    This may include sensitive data such as cryptographic keys. GDS is
    mitigated in microcode, and systems with up-to-date microcode are
    protected by default. However, any affected system that is running
    with older microcode will still be vulnerable to GDS attacks.

    Since the gather instructions used by the attacker are part of the
    AVX2 and AVX512 extensions, disabling these extensions prevents
    gather instructions from being executed, thereby protecting the
    system from GDS. Disabling AVX2 alone would be sufficient, but there
    is no granularity for this: clearing XCR0[2] disables AVX as a whole,
    with no option to disable just AVX2.

    Add a kernel parameter gather_data_sampling=force that will enable
    the microcode mitigation if available, otherwise it will disable AVX
    on affected systems. This option will be ignored if mitigations=off
    is given on the command line.

    This is a *big* hammer. It is known to break buggy userspace that
    uses incomplete, buggy AVX enumeration. Unfortunately, such userspace
    does exist in the wild:

      https://www.mail-archive.com/bug-coreutils@gnu.org/msg33046.html

    [ dhansen: add some more ominous warnings about disabling AVX ]

    Signed-off-by: Daniel Sneddon
    Signed-off-by: Dave Hansen
    Acked-by: Josh Poimboeuf
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman
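The fallback path is the notable part: without updated microcode, forcing the mitigation clears the AVX feature bit before userspace starts. A sketch of that fallback (simplified; the exact condition and helper names are assumptions, not the verbatim bugs.c code):

    /* Sketch: force mode without microcode support disables AVX outright. */
    if (gds_mitigation == GDS_MITIGATION_FORCE && !gds_ucode_mitigated()) {
            setup_clear_cpu_cap(X86_FEATURE_AVX);
            pr_warn("Microcode update needed! Disabling AVX as mitigation.\n");
    }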
commit c66ebe070d9641c9339e42e1c2d707a5052e9904
Author: Daniel Sneddon
Date:   Tue Aug 1 16:36:25 2023 +0200

    x86/speculation: Add Gather Data Sampling mitigation

    commit 8974eb588283b7d44a7c91fa09fcbaf380339f3a upstream.

    Gather Data Sampling (GDS) is a hardware vulnerability which allows
    unprivileged speculative access to data which was previously stored
    in vector registers.

    Intel processors that support AVX2 and AVX512 have gather
    instructions that fetch non-contiguous data elements from memory. On
    vulnerable hardware, when a gather instruction is transiently
    executed and encounters a fault, stale data from architectural or
    internal vector registers may get transiently stored to the
    destination vector register, allowing an attacker to infer the stale
    data using typical side channel techniques like cache timing attacks.

    This mitigation is different from many earlier ones for two reasons.
    First, it is enabled by default and a bit must be set to *DISABLE*
    it. This is the opposite of normal mitigation polarity. This means
    GDS can be mitigated simply by updating microcode and leaving the new
    control bit alone.

    Second, GDS has a "lock" bit. This lock bit is there because the
    mitigation affects the hardware security features KeyLocker and SGX.
    It needs to be enabled and *STAY* enabled for these features to be
    mitigated against GDS.

    The mitigation is enabled in the microcode by default. Disable it by
    setting gather_data_sampling=off or by disabling all mitigations with
    mitigations=off. The mitigation status can be checked by reading:

      /sys/devices/system/cpu/vulnerabilities/gather_data_sampling

    Signed-off-by: Daniel Sneddon
    Signed-off-by: Dave Hansen
    Acked-by: Josh Poimboeuf
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman
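The disable-polarity control and its lock bit live in a microcode-provided MSR. A sketch of how honoring both could look on the kernel side (MSR and bit names follow msr-index.h, but treat the exact layout as an assumption; simplified from the real update logic):

    /* Sketch: opt-out control with a firmware lock bit (simplified). */
    u64 mcu_ctrl;

    rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
    if (mcu_ctrl & GDS_MITG_LOCKED) {
            /* Locked (e.g. for SGX/KeyLocker); the bit cannot be changed. */
    } else if (gds_mitigation == GDS_MITIGATION_OFF) {
            mcu_ctrl |= GDS_MITG_DIS;       /* set the bit to DISABLE */
            wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
    }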
commit f25ad76d92176f41a543a812972e9937ce4f7d08
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    x86/fpu: Move FPU initialization into arch_cpu_finalize_init()

    commit b81fac906a8f9e682e513ddd95697ec7a20878d4 upstream.

    Initializing the FPU during the early boot process is a pointless
    exercise. Early boot is convoluted and fragile enough.

    Nothing requires that the FPU is set up early. It has to be
    initialized before fork_init() because the task_struct size depends
    on the FPU register buffer size.

    Move the initialization to arch_cpu_finalize_init() which is the
    perfect place to do so.

    No functional change.

    This allows removing quite a bit of the custom early command line
    parsing, but that's subject to the next installment.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.902376621@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit e26932942b2c505d5e8a9f263cbe66de4fab1b24
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    x86/fpu: Mark init functions __init

    commit 1703db2b90c91b2eb2d699519fc505fe431dde0e upstream.

    No point in keeping them around.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.841685728@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 9e8d9d399094dd911059ff337dd8a104f052e1ca
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    x86/fpu: Remove cpuinfo argument from init functions

    commit 1f34bb2a24643e0087652d81078e4f616562738d upstream.

    Nothing in the call chain requires it.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.783704297@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit c956807d8462e94a1450dc0737728c25917b1d67
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    x86/init: Initialize signal frame size late

    commit 54d9a91a3d6713d1332e93be13b4eaf0fa54349d upstream.

    No point in doing this during really early boot. Move it to an early
    initcall so that it is set up before possible user mode helpers are
    started during device initialization.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.727330699@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman
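Deferring work like this to an early initcall is a small structural change. A sketch of the pattern the commit describes (function body elided; early initcalls run after core setup but before device initialization starts user mode helpers):

    /* Sketch: compute the signal frame size from an early initcall. */
    static int __init init_sigframe_size(void)
    {
            /* ... compute and cache the maximum signal frame size ... */
            return 0;
    }
    early_initcall(init_sigframe_size);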
commit b0837880fa65fa4a6dc407b42e9b33e18f7b44e3
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    init, x86: Move mem_encrypt_init() into arch_cpu_finalize_init()

    commit 439e17576eb47f26b78c5bbc72e344d4206d2327 upstream.

    Invoke the X86ism mem_encrypt_init() from X86
    arch_cpu_finalize_init() and remove the weak fallback from the core
    code.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.670360645@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 8183a89caf67a1f56f1da1d6081e26a0ae7a5fdf
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    init: Invoke arch_cpu_finalize_init() earlier

    commit 9df9d2f0471b4c4702670380b8d8a45b40b23a7d upstream.

    X86 is reworking the boot process so that initializations which are
    not required during early boot can be moved into the late boot
    process and out of the fragile and restricted initial boot phase.

    arch_cpu_finalize_init() is the obvious place to do such
    initializations, but arch_cpu_finalize_init() is invoked too late in
    start_kernel(), e.g. for initializing the FPU completely. fork_init()
    requires that the FPU is initialized as the size of task_struct on
    X86 depends on the size of the required FPU register buffer.

    Fortunately none of the init calls between calibrate_delay() and
    arch_cpu_finalize_init() is relevant for the functionality of
    arch_cpu_finalize_init().

    Invoke it right after calibrate_delay() where everything which is
    relevant for arch_cpu_finalize_init() has been set up already.

    No functional change intended.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Rick Edgecombe
    Link: https://lore.kernel.org/r/20230613224545.612182854@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit a3342c60dcc58007cc14b2cf1ebc7e2b563423a8
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    init: Remove check_bugs() leftovers

    commit 61235b24b9cb37c13fcad5b9596d59a1afdcec30 upstream.

    Everything is converted over to arch_cpu_finalize_init(). Remove the
    check_bugs() leftovers including the empty stubs in asm-generic,
    alpha, parisc, powerpc and xtensa.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Richard Henderson
    Link: https://lore.kernel.org/r/20230613224545.553215951@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 8beabde0ed8d31e45a3d9484f0591a18c0c94cc7
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    um/cpu: Switch to arch_cpu_finalize_init()

    commit 9349b5cd0908f8afe95529fc7a8cbb1417df9b0c upstream.

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Richard Weinberger
    Link: https://lore.kernel.org/r/20230613224545.493148694@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit ce97072e10cc844fac8176681b2cb17bf3eaaa7b
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    sparc/cpu: Switch to arch_cpu_finalize_init()

    commit 44ade508e3bfac45ae97864587de29eb1a881ec0 upstream.

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Sam Ravnborg
    Link: https://lore.kernel.org/r/20230613224545.431995857@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 84f585542ec69226311be5a4500a4b3cbad6fb5b
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    sh/cpu: Switch to arch_cpu_finalize_init()

    commit 01eb454e9bfe593f320ecbc9aaec60bf87cd453d upstream.

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.371697797@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 6a90583dbd9b794071b8b54d8c36f40a459d1051
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    mips/cpu: Switch to arch_cpu_finalize_init()

    commit 7f066a22fe353a827a402ee2835e81f045b1574d upstream.

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.312438573@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 489ae02c89936c7e40f04191e8c160ac53649526
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    m68k/cpu: Switch to arch_cpu_finalize_init()

    commit 9ceecc2589b9d7cef6b321339ed8de484eac4b20 upstream.

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Geert Uytterhoeven
    Link: https://lore.kernel.org/r/20230613224545.254342916@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 08e86d42e2c916e362d124e3bc6c824eb1862498
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    loongarch/cpu: Switch to arch_cpu_finalize_init()

    commit 9841c423164787feb8f1442f922b7d80a70c82f1 upstream.

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.195288218@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 403e4cc67e4cf9226c57a7cb27c7f4365d2143b7
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    ia64/cpu: Switch to arch_cpu_finalize_init()

    commit 6c38e3005621800263f117fb00d6787a76e16de7 upstream.

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.137045745@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit e2e06240ae4780977387906e2e11774283ca7997
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:25 2023 +0200

    ARM: cpu: Switch to arch_cpu_finalize_init()

    commit ee31bb0524a2e7c99b03f50249a411cc1eaa411f upstream.

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.078124882@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 7918a3555a2502a4d86b831da089f3b985d1bca9
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:24 2023 +0200

    x86/cpu: Switch to arch_cpu_finalize_init()

    commit 7c7077a72674402654f3291354720cd73cdf649e upstream.

    check_bugs() is a dumping ground for finalizing the CPU bringup. Only
    parts of it have to do with actual CPU bugs.

    Split it apart into arch_cpu_finalize_init() and
    cpu_select_mitigations(). Fix up the bogus 32bit comments while at
    it.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Borislav Petkov (AMD)
    Link: https://lore.kernel.org/r/20230613224545.019583869@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman
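The x86 split leaves a finalize hook whose bug-related work is concentrated in one call. A rough sketch of the resulting shape (heavily abbreviated; the real function also handles alternatives patching, FPU setup and more):

    /* Sketch: the post-split x86 finalize hook (heavily abbreviated). */
    void __init arch_cpu_finalize_init(void)
    {
            identify_boot_cpu();
            cpu_select_mitigations();       /* the actual CPU bug handling */
            /* ... alternatives, FPU init, mem_encrypt_init(), ... */
    }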
commit d5501f2ff80d30d615d59531825d3a5f0bb0d35d
Author: Thomas Gleixner
Date:   Tue Aug 1 16:36:24 2023 +0200

    init: Provide arch_cpu_finalize_init()

    commit 7725acaa4f0c04fbefb0e0d342635b967bb7d414 upstream.

    check_bugs() has become a dumping ground for all sorts of activities
    to finalize the CPU initialization before running the rest of the
    init code. Most are empty, a few do actual bug checks, some do
    alternative patching and some cobble a CPU advertisement string
    together...

    Aside from that, the current implementation requires duplicated
    function declarations and mostly empty header files for them.

    Provide a new function arch_cpu_finalize_init(). Provide a generic
    declaration if CONFIG_ARCH_HAS_CPU_FINALIZE_INIT is selected and a
    stub inline otherwise.

    This requires a temporary #ifdef in start_kernel() which will be
    removed along with check_bugs() once the architectures are converted
    over.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224544.957805717@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman
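The generic plumbing is the usual opt-in arch hook pattern. A sketch of the declaration side as the commit describes it (simplified; the real one lives in include/linux/cpu.h):

    /* Sketch: opt-in arch hook with a generic inline stub. */
    #ifdef CONFIG_ARCH_HAS_CPU_FINALIZE_INIT
    void arch_cpu_finalize_init(void);
    #else
    static inline void arch_cpu_finalize_init(void) { }
    #endif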