Merge 6.6.89 into android15-6.6-lts

Changes in 6.6.89
	module: sign with sha512 instead of sha1 by default
	x86/extable: Remove unused fixup type EX_TYPE_COPY
	x86/mce: use is_copy_from_user() to determine copy-from-user context
	tracing: Add __string_len() example
	tracing: Add __print_dynamic_array() helper
	tracing: Verify event formats that have "%*p.."
	media: subdev: Fix use of sd->enabled_streams in call_s_stream()
	media: subdev: Improve v4l2_subdev_enable/disable_streams_fallback
	media: subdev: Add v4l2_subdev_is_streaming()
	media: vimc: skip .s_stream() for stopped entities
	soc: qcom: ice: introduce devm_of_qcom_ice_get
	mmc: sdhci-msm: fix dev reference leaked through of_qcom_ice_get
	auxdisplay: hd44780: Convert to platform remove callback returning void
	auxdisplay: hd44780: Fix an API misuse in hd44780.c
	net: dsa: mv88e6xxx: fix internal PHYs for 6320 family
	net: dsa: mv88e6xxx: fix VTU methods for 6320 family
	ASoC: qcom: q6apm-dai: drop unused 'q6apm_dai_rtd' fields
	ASoC: q6apm-dai: schedule all available frames to avoid dsp under-runs
	ASoC: qcom: lpass: Make asoc_qcom_lpass_cpu_platform_remove() return void
	ASoC: qcom: Fix trivial code style issues
	ASoC: q6apm-dai: make use of q6apm_get_hw_pointer
	iio: adc: ad7768-1: Move setting of val a bit later to avoid unnecessary return value check
	iio: adc: ad7768-1: Fix conversion result sign
	arm64: tegra: Remove the Orin NX/Nano suspend key
	clk: renesas: rzg2l: Use u32 for flag and mux_flags
	clk: renesas: rzg2l: Add struct clk_hw_data
	clk: renesas: rzg2l: Remove CPG_SDHI_DSEL from generic header
	clk: renesas: rzg2l: Refactor SD mux driver
	clk: renesas: r9a07g04[34]: Use SEL_SDHI1_STS status configuration for SD1 mux
	clk: renesas: r9a07g04[34]: Fix typo for sel_shdi variable
	clk: renesas: r9a07g043: Fix HP clock source for RZ/Five
	of: resolver: Simplify of_resolve_phandles() using __free()
	of: resolver: Fix device node refcount leakage in of_resolve_phandles()
	PCI: Fix reference leak in pci_register_host_bridge()
	scsi: ufs: qcom: fix dev reference leaked through of_qcom_ice_get
	sched/topology: Consolidate and clean up access to a CPU's max compute capacity
	sched/cpufreq: Rework schedutil governor performance estimation
	cpufreq/sched: Explicitly synchronize limits_changed flag handling
	ceph: Fix incorrect flush end position calculation
	dma/contiguous: avoid warning about unused size_bytes
	cpufreq: apple-soc: Fix null-ptr-deref in apple_soc_cpufreq_get_rate()
	cpufreq: scmi: Fix null-ptr-deref in scmi_cpufreq_get_rate()
	cpufreq: scpi: Fix null-ptr-deref in scpi_cpufreq_get_rate()
	scsi: ufs: mcq: Add NULL check in ufshcd_mcq_abort()
	cpufreq: cppc: Fix invalid return value in .get() callback
	btrfs: avoid page_lockend underflow in btrfs_punch_hole_lock_range()
	scsi: core: Clear flags for scsi_cmnd that did not complete
	net: lwtunnel: disable BHs when required
	net: phy: leds: fix memory leak
	tipc: fix NULL pointer dereference in tipc_mon_reinit_self()
	net: ethernet: mtk_eth_soc: net: revise NETSYSv3 hardware configuration
	fix a couple of races in MNT_TREE_BENEATH handling by do_move_mount()
	net_sched: hfsc: Fix a UAF vulnerability in class handling
	net_sched: hfsc: Fix a potential UAF in hfsc_dequeue() too
	net: dsa: mt7530: sync driver-specific behavior of MT7531 variants
	pds_core: handle unsupported PDS_CORE_CMD_FW_CONTROL result
	pds_core: Remove unnecessary check in pds_client_adminq_cmd()
	pds_core: make wait_context part of q_info
	iommu/amd: Return an error if vCPU affinity is set for non-vCPU IRTE
	splice: remove duplicate noinline from pipe_clear_nowait
	perf/x86: Fix non-sampling (counting) events on certain x86 platforms
	LoongArch: Select ARCH_USE_MEMTEST
	LoongArch: Make regs_irqs_disabled() more clear
	LoongArch: Make do_xyz() exception handlers more robust
	virtio_console: fix missing byte order handling for cols and rows
	crypto: atmel-sha204a - Set hwrng quality to lowest possible
	xen-netfront: handle NULL returned by xdp_convert_buff_to_frame()
	net: selftests: initialize TCP header and skb payload with zero
	net: phy: microchip: force IRQ polling mode for lan88xx
	drm/amd/display: Fix gpu reset in multidisplay config
	drm/amd/display: Force full update in gpu reset
	irqchip/gic-v2m: Prevent use after free of gicv2m_get_fwnode()
	LoongArch: Return NULL from huge_pte_offset() for invalid PMD
	LoongArch: Remove a bogus reference to ZONE_DMA
	io_uring: fix 'sync' handling of io_fallback_tw()
	KVM: SVM: Allocate IR data using atomic allocation
	cxl/core/regs.c: Skip Memory Space Enable check for RCD and RCH Ports
	mcb: fix a double free bug in chameleon_parse_gdd()
	ata: libata-scsi: Improve CDL control
	ata: libata-scsi: Fix ata_mselect_control_ata_feature() return type
	ata: libata-scsi: Fix ata_msense_control_ata_feature()
	USB: storage: quirk for ADATA Portable HDD CH94
	scsi: Improve CDL control
	mei: me: add panther lake H DID
	KVM: x86: Explicitly treat routing entry type changes as changes
	KVM: x86: Reset IRTE to host control if *new* route isn't postable
	char: misc: register chrdev region with all possible minors
	misc: microchip: pci1xxxx: Fix Kernel panic during IRQ handler registration
	misc: microchip: pci1xxxx: Fix incorrect IRQ status handling during ack
	serial: msm: Configure correct working mode before starting earlycon
	serial: sifive: lock port in startup()/shutdown() callbacks
	USB: serial: ftdi_sio: add support for Abacus Electrics Optical Probe
	USB: serial: option: add Sierra Wireless EM9291
	USB: serial: simple: add OWON HDS200 series oscilloscope support
	usb: xhci: Fix invalid pointer dereference in Etron workaround
	usb: cdns3: Fix deadlock when using NCM gadget
	usb: chipidea: ci_hdrc_imx: fix usbmisc handling
	usb: chipidea: ci_hdrc_imx: fix call balance of regulator routines
	usb: chipidea: ci_hdrc_imx: implement usb_phy_init() error handling
	USB: OHCI: Add quirk for LS7A OHCI controller (rev 0x02)
	usb: dwc3: gadget: check that event count does not exceed event buffer length
	usb: dwc3: xilinx: Prevent spike in reset signal
	usb: quirks: add DELAY_INIT quirk for Silicon Motion Flash Drive
	usb: quirks: Add delay init quirk for SanDisk 3.2Gen1 Flash Drive
	USB: VLI disk crashes if LPM is used
	USB: wdm: handle IO errors in wdm_wwan_port_start
	USB: wdm: close race between wdm_open and wdm_wwan_port_stop
	USB: wdm: wdm_wwan_port_tx_complete mutex in atomic context
	USB: wdm: add annotation
	pinctrl: renesas: rza2: Fix potential NULL pointer dereference
	MIPS: cm: Detect CM quirks from device tree
	crypto: ccp - Add support for PCI device 0x1134
	crypto: null - Use spin lock instead of mutex
	bpf: Fix deadlock between rcu_tasks_trace and event_mutex.
	clk: check for disabled clock-provider in of_clk_get_hw_from_clkspec()
	parisc: PDT: Fix missing prototype warning
	s390/sclp: Add check for get_zeroed_page()
	s390/tty: Fix a potential memory leak bug
	bpf: bpftool: Setting error code in do_loader()
	bpf: Only fails the busy counter check in bpf_cgrp_storage_get if it creates storage
	bpf: Reject attaching fexit/fmod_ret to __noreturn functions
	mailbox: pcc: Fix the possible race in updation of chan_in_use flag
	mailbox: pcc: Always clear the platform ack interrupt first
	usb: host: max3421-hcd: Add missing spi_device_id table
	fs/ntfs3: Fix WARNING in ntfs_extend_initialized_size
	usb: dwc3: gadget: Refactor loop to avoid NULL endpoints
	usb: dwc3: gadget: Avoid using reserved endpoints on Intel Merrifield
	sound/virtio: Fix cancel_sync warnings on uninitialized work_structs
	dmaengine: dmatest: Fix dmatest waiting less when interrupted
	usb: xhci: Avoid Stop Endpoint retry loop if the endpoint seems Running
	usb: gadget: aspeed: Add NULL pointer check in ast_vhub_init_dev()
	usb: host: xhci-plat: mvebu: use ->quirks instead of ->init_quirk() func
	thunderbolt: Scan retimers after device router has been enumerated
	objtool: Silence more KCOV warnings
	objtool, panic: Disable SMAP in __stack_chk_fail()
	objtool, ASoC: codecs: wcd934x: Remove potential undefined behavior in wcd934x_slim_irq_handler()
	objtool, regulator: rk808: Remove potential undefined behavior in rk806_set_mode_dcdc()
	objtool, lkdtm: Obfuscate the do_nothing() pointer
	qibfs: fix _another_ leak
	ntb: reduce stack usage in idt_scan_mws
	ntb_hw_amd: Add NTB PCI ID for new gen CPU
	9p/net: fix improper handling of bogus negative read/write replies
	rtc: pcf85063: do a SW reset if POR failed
	io_uring: always do atomic put from iowq
	sched/isolation: Make CONFIG_CPU_ISOLATION depend on CONFIG_SMP
	KVM: s390: Don't use %pK through tracepoints
	KVM: s390: Don't use %pK through debug printing
	udmabuf: fix a buf size overflow issue during udmabuf creation
	selftests: ublk: fix test_stripe_04
	perf/core: Fix WARN_ON(!ctx) in __free_event() for partial init
	xen: Change xen-acpi-processor dom0 dependency
	nvme: requeue namespace scan on missed AENs
	ACPI: EC: Set ec_no_wakeup for Lenovo Go S
	ACPI PPTT: Fix coding mistakes in a couple of sizeof() calls
	nvme: re-read ANA log page after ns scan completes
	nvme: multipath: fix return value of nvme_available_path
	objtool: Stop UNRET validation on UD2
	gpiolib: of: Move Atmel HSMCI quirk up out of the regulator comment
	selftests/mincore: Allow read-ahead pages to reach the end of the file
	x86/bugs: Use SBPB in write_ibpb() if applicable
	x86/bugs: Don't fill RSB on VMEXIT with eIBRS+retpoline
	x86/bugs: Don't fill RSB on context switch with eIBRS
	nvmet-fc: take tgtport reference only once
	nvmet-fc: put ref when assoc->del_work is already scheduled
	cifs: Fix encoding of SMB1 Session Setup Kerberos Request in non-UNICODE mode
	timekeeping: Add a lockdep override in tick_freeze()
	cifs: Fix querying of WSL CHR and BLK reparse points over SMB1
	ext4: make block validity check resistent to sb bh corruption
	scsi: hisi_sas: Fix I/O errors caused by hardware port ID changes
	scsi: ufs: exynos: Ensure pre_link() executes before exynos_ufs_phy_init()
	scsi: pm80xx: Set phy_attached to zero when device is gone
	x86/i8253: Call clockevent_i8253_disable() with interrupts disabled
	iomap: skip unnecessary ifs_block_is_uptodate check
	riscv: Provide all alternative macros all the time
	loop: aio inherit the ioprio of original request
	spi: tegra210-quad: use WARN_ON_ONCE instead of WARN_ON for timeouts
	spi: tegra210-quad: add rate limiting and simplify timeout error message
	ubsan: Fix panic from test_ubsan_out_of_bounds
	x86/cpu: Add CPU model number for Bartlett Lake CPUs with Raptor Cove cores
	md/raid1: Add check for missing source disk in process_checks()
	spi: spi-imx: Add check for spi_imx_setupxfer()
	x86/pvh: Call C code via the kernel virtual mapping
	Revert "drivers: core: synchronize really_probe() and dev_uevent()"
	driver core: introduce device_set_driver() helper
	driver core: fix potential NULL pointer dereference in dev_uevent()
	vmxnet3: Fix malformed packet sizing in vmxnet3_process_xdp
	comedi: jr3_pci: Fix synchronous deletion of timer
	ext4: goto right label 'out_mmap_sem' in ext4_setattr()
	net: dsa: mv88e6xxx: fix atu_move_port_mask for 6341 family
	net: dsa: mv88e6xxx: enable PVT for 6321 switch
	net: dsa: mv88e6xxx: enable .port_set_policy() for 6320 family
	net: dsa: mv88e6xxx: enable STU methods for 6320 family
	MIPS: cm: Fix warning if MIPS_CM is disabled
	nvme: fixup scan failure for non-ANA multipath controllers
	objtool: Ignore end-of-section jumps for KCOV/GCOV
	objtool: Silence more KCOV warnings, part 2
	Linux 6.6.89

Change-Id: I06853c36c30b263a0de4cf72c390b6f92e4c1d85
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
 Makefile | 2
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 6
-SUBLEVEL = 88
+SUBLEVEL = 89
 EXTRAVERSION =
 NAME = Pinguïn Aangedreven
 
@@ -59,6 +59,7 @@ config LOONGARCH
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
+	select ARCH_USE_MEMTEST
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
@@ -33,9 +33,9 @@ struct pt_regs {
 	unsigned long __last[];
 } __aligned(8);
 
-static inline int regs_irqs_disabled(struct pt_regs *regs)
+static __always_inline bool regs_irqs_disabled(struct pt_regs *regs)
 {
-	return arch_irqs_disabled_flags(regs->csr_prmd);
+	return !(regs->csr_prmd & CSR_PRMD_PIE);
 }
 
 static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
@@ -527,9 +527,10 @@ asmlinkage void noinstr do_ale(struct pt_regs *regs)
 	die_if_kernel("Kernel ale access", regs);
 	force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr);
 #else
+	bool pie = regs_irqs_disabled(regs);
 	unsigned int *pc;
 
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_enable();
 
 	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, regs->csr_badvaddr);
@@ -556,7 +557,7 @@ sigbus:
 	die_if_kernel("Kernel ale access", regs);
 	force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr);
 out:
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_disable();
 #endif
 	irqentry_exit(regs, state);
@@ -588,12 +589,13 @@ static void bug_handler(struct pt_regs *regs)
 asmlinkage void noinstr do_bce(struct pt_regs *regs)
 {
 	bool user = user_mode(regs);
+	bool pie = regs_irqs_disabled(regs);
 	unsigned long era = exception_era(regs);
 	u64 badv = 0, lower = 0, upper = ULONG_MAX;
 	union loongarch_instruction insn;
 	irqentry_state_t state = irqentry_enter(regs);
 
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_enable();
 
 	current->thread.trap_nr = read_csr_excode();
@@ -659,7 +661,7 @@ asmlinkage void noinstr do_bce(struct pt_regs *regs)
 		force_sig_bnderr((void __user *)badv, (void __user *)lower, (void __user *)upper);
 
 out:
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_disable();
 
 	irqentry_exit(regs, state);
@@ -677,11 +679,12 @@ bad_era:
 asmlinkage void noinstr do_bp(struct pt_regs *regs)
 {
 	bool user = user_mode(regs);
+	bool pie = regs_irqs_disabled(regs);
 	unsigned int opcode, bcode;
 	unsigned long era = exception_era(regs);
 	irqentry_state_t state = irqentry_enter(regs);
 
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_enable();
 
 	if (__get_inst(&opcode, (u32 *)era, user))
@@ -747,7 +750,7 @@ asmlinkage void noinstr do_bp(struct pt_regs *regs)
 	}
 
 out:
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_disable();
 
 	irqentry_exit(regs, state);
@@ -982,6 +985,7 @@ static void init_restore_lbt(void)
 
 asmlinkage void noinstr do_lbt(struct pt_regs *regs)
 {
+	bool pie = regs_irqs_disabled(regs);
 	irqentry_state_t state = irqentry_enter(regs);
 
 	/*
@@ -991,7 +995,7 @@ asmlinkage void noinstr do_lbt(struct pt_regs *regs)
 	 * (including the user using 'MOVGR2GCSR' to turn on TM, which
 	 * will not trigger the BTE), we need to check PRMD first.
 	 */
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_enable();
 
 	if (!cpu_has_lbt) {
@@ -1005,7 +1009,7 @@ asmlinkage void noinstr do_lbt(struct pt_regs *regs)
 	preempt_enable();
 
 out:
-	if (regs->csr_prmd & CSR_PRMD_PIE)
+	if (!pie)
 		local_irq_disable();
 
 	irqentry_exit(regs, state);
@@ -47,7 +47,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
 			pmd = pmd_offset(pud, addr);
 		}
 	}
-	return (pte_t *) pmd;
+	return pmd_none(pmdp_get(pmd)) ? NULL : (pte_t *) pmd;
 }
 
 int pmd_huge(pmd_t pmd)
@@ -64,9 +64,6 @@ void __init paging_init(void)
 {
 	unsigned long max_zone_pfns[MAX_NR_ZONES];
 
-#ifdef CONFIG_ZONE_DMA
-	max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
-#endif
 #ifdef CONFIG_ZONE_DMA32
 	max_zone_pfns[ZONE_DMA32] = MAX_DMA32_PFN;
 #endif
@@ -47,6 +47,16 @@ extern phys_addr_t __mips_cm_phys_base(void);
  */
 extern int mips_cm_is64;
 
+/*
+ * mips_cm_is_l2_hci_broken - determine if HCI is broken
+ *
+ * Some CM reports show that Hardware Cache Initialization is
+ * complete, but in reality it's not the case. They also incorrectly
+ * indicate that Hardware Cache Initialization is supported. This
+ * flag allows warning about this broken feature.
+ */
+extern bool mips_cm_is_l2_hci_broken;
+
 /**
  * mips_cm_error_report - Report CM cache errors
  */
@@ -85,6 +95,18 @@ static inline bool mips_cm_present(void)
 #endif
 }
 
+/**
+ * mips_cm_update_property - update property from the device tree
+ *
+ * Retrieve the properties from the device tree if a CM node exist and
+ * update the internal variable based on this.
+ */
+#ifdef CONFIG_MIPS_CM
+extern void mips_cm_update_property(void);
+#else
+static inline void mips_cm_update_property(void) {}
+#endif
+
 /**
  * mips_cm_has_l2sync - determine whether an L2-only sync region is present
  *
@@ -5,6 +5,7 @@
  */
 
 #include <linux/errno.h>
+#include <linux/of.h>
 #include <linux/percpu.h>
 #include <linux/spinlock.h>
 
@@ -14,6 +15,7 @@
 void __iomem *mips_gcr_base;
 void __iomem *mips_cm_l2sync_base;
 int mips_cm_is64;
+bool mips_cm_is_l2_hci_broken;
 
 static char *cm2_tr[8] = {
 	"mem",	"gcr",	"gic",	"mmio",
@@ -243,6 +245,18 @@ static void mips_cm_probe_l2sync(void)
 	mips_cm_l2sync_base = ioremap(addr, MIPS_CM_L2SYNC_SIZE);
 }
 
+void mips_cm_update_property(void)
+{
+	struct device_node *cm_node;
+
+	cm_node = of_find_compatible_node(of_root, NULL, "mobileye,eyeq6-cm");
+	if (!cm_node)
+		return;
+	pr_info("HCI (Hardware Cache Init for the L2 cache) in GCR_L2_RAM_CONFIG from the CM3 is broken");
+	mips_cm_is_l2_hci_broken = true;
+	of_node_put(cm_node);
+}
+
 int mips_cm_probe(void)
 {
 	phys_addr_t addr;
@@ -63,6 +63,7 @@ static unsigned long pdt_entry[MAX_PDT_ENTRIES] __page_aligned_bss;
 #define PDT_ADDR_PERM_ERR	(pdt_type != PDT_PDC ? 2UL : 0UL)
 #define PDT_ADDR_SINGLE_ERR	1UL
 
+#ifdef CONFIG_PROC_FS
 /* report PDT entries via /proc/meminfo */
 void arch_report_meminfo(struct seq_file *m)
 {
@@ -74,6 +75,7 @@ void arch_report_meminfo(struct seq_file *m)
 	seq_printf(m, "PDT_cur_entries: %7lu\n",
 			pdt_status.pdt_entries);
 }
+#endif
 
 static int get_info_pat_new(void)
 {
@@ -115,24 +115,19 @@
 \old_c
 .endm
 
-#define _ALTERNATIVE_CFG(old_c, ...)	\
-	ALTERNATIVE_CFG old_c
-
-#define _ALTERNATIVE_CFG_2(old_c, ...)	\
-	ALTERNATIVE_CFG old_c
+#define __ALTERNATIVE_CFG(old_c, ...)		ALTERNATIVE_CFG old_c
+#define __ALTERNATIVE_CFG_2(old_c, ...)		ALTERNATIVE_CFG old_c
 
 #else /* !__ASSEMBLY__ */
 
-#define __ALTERNATIVE_CFG(old_c)	\
-	old_c "\n"
-
-#define _ALTERNATIVE_CFG(old_c, ...)	\
-	__ALTERNATIVE_CFG(old_c)
-
-#define _ALTERNATIVE_CFG_2(old_c, ...)	\
-	__ALTERNATIVE_CFG(old_c)
+#define __ALTERNATIVE_CFG(old_c, ...)		old_c "\n"
+#define __ALTERNATIVE_CFG_2(old_c, ...)		old_c "\n"
 
 #endif /* __ASSEMBLY__ */
 
+#define _ALTERNATIVE_CFG(old_c, ...)		__ALTERNATIVE_CFG(old_c)
+#define _ALTERNATIVE_CFG_2(old_c, ...)		__ALTERNATIVE_CFG_2(old_c)
+
 #endif /* CONFIG_RISCV_ALTERNATIVE */
 
 /*
@@ -94,7 +94,7 @@ static int handle_validity(struct kvm_vcpu *vcpu)
 
 	vcpu->stat.exit_validity++;
 	trace_kvm_s390_intercept_validity(vcpu, viwhy);
-	KVM_EVENT(3, "validity intercept 0x%x for pid %u (kvm 0x%pK)", viwhy,
+	KVM_EVENT(3, "validity intercept 0x%x for pid %u (kvm 0x%p)", viwhy,
 		  current->pid, vcpu->kvm);
 
 	/* do not warn on invalid runtime instrumentation mode */
@@ -3161,7 +3161,7 @@ void kvm_s390_gisa_clear(struct kvm *kvm)
 	if (!gi->origin)
 		return;
 	gisa_clear_ipm(gi->origin);
-	VM_EVENT(kvm, 3, "gisa 0x%pK cleared", gi->origin);
+	VM_EVENT(kvm, 3, "gisa 0x%p cleared", gi->origin);
 }
 
 void kvm_s390_gisa_init(struct kvm *kvm)
@@ -3178,7 +3178,7 @@ void kvm_s390_gisa_init(struct kvm *kvm)
 	gi->timer.function = gisa_vcpu_kicker;
 	memset(gi->origin, 0, sizeof(struct kvm_s390_gisa));
 	gi->origin->next_alert = (u32)virt_to_phys(gi->origin);
-	VM_EVENT(kvm, 3, "gisa 0x%pK initialized", gi->origin);
+	VM_EVENT(kvm, 3, "gisa 0x%p initialized", gi->origin);
 }
 
 void kvm_s390_gisa_enable(struct kvm *kvm)
@@ -3219,7 +3219,7 @@ void kvm_s390_gisa_destroy(struct kvm *kvm)
 		process_gib_alert_list();
 	hrtimer_cancel(&gi->timer);
 	gi->origin = NULL;
-	VM_EVENT(kvm, 3, "gisa 0x%pK destroyed", gisa);
+	VM_EVENT(kvm, 3, "gisa 0x%p destroyed", gisa);
 }
 
 void kvm_s390_gisa_disable(struct kvm *kvm)
@@ -3468,7 +3468,7 @@ int __init kvm_s390_gib_init(u8 nisc)
 		}
 	}
 
-	KVM_EVENT(3, "gib 0x%pK (nisc=%d) initialized", gib, gib->nisc);
+	KVM_EVENT(3, "gib 0x%p (nisc=%d) initialized", gib, gib->nisc);
 	goto out;
 
 out_unreg_gal:
@@ -990,7 +990,7 @@ static int kvm_s390_set_mem_control(struct kvm *kvm, struct kvm_device_attr *attr)
 		}
 		mutex_unlock(&kvm->lock);
 		VM_EVENT(kvm, 3, "SET: max guest address: %lu", new_limit);
-		VM_EVENT(kvm, 3, "New guest asce: 0x%pK",
+		VM_EVENT(kvm, 3, "New guest asce: 0x%p",
 			 (void *) kvm->arch.gmap->asce);
 		break;
 	}
@@ -3418,7 +3418,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm_s390_gisa_init(kvm);
 	INIT_LIST_HEAD(&kvm->arch.pv.need_cleanup);
 	kvm->arch.pv.set_aside = NULL;
-	KVM_EVENT(3, "vm 0x%pK created by pid %u", kvm, current->pid);
+	KVM_EVENT(3, "vm 0x%p created by pid %u", kvm, current->pid);
 
 	return 0;
 out_err:
@@ -3481,7 +3481,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_s390_destroy_adapters(kvm);
 	kvm_s390_clear_float_irqs(kvm);
 	kvm_s390_vsie_destroy(kvm);
-	KVM_EVENT(3, "vm 0x%pK destroyed", kvm);
+	KVM_EVENT(3, "vm 0x%p destroyed", kvm);
 }
 
 /* Section: vcpu related */
@@ -3602,7 +3602,7 @@ static int sca_switch_to_extended(struct kvm *kvm)
 
 	free_page((unsigned long)old_sca);
 
-	VM_EVENT(kvm, 2, "Switched to ESCA (0x%pK -> 0x%pK)",
+	VM_EVENT(kvm, 2, "Switched to ESCA (0x%p -> 0x%p)",
 		 old_sca, kvm->arch.sca);
 	return 0;
 }
@@ -3974,7 +3974,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 		goto out_free_sie_block;
 	}
 
-	VM_EVENT(vcpu->kvm, 3, "create cpu %d at 0x%pK, sie block at 0x%pK",
+	VM_EVENT(vcpu->kvm, 3, "create cpu %d at 0x%p, sie block at 0x%p",
 		 vcpu->vcpu_id, vcpu, vcpu->arch.sie_block);
 	trace_kvm_s390_create_vcpu(vcpu->vcpu_id, vcpu, vcpu->arch.sie_block);
@@ -56,7 +56,7 @@ TRACE_EVENT(kvm_s390_create_vcpu,
 		    __entry->sie_block = sie_block;
 		    ),
 
-	    TP_printk("create cpu %d at 0x%pK, sie block at 0x%pK",
+	    TP_printk("create cpu %d at 0x%p, sie block at 0x%p",
 		      __entry->id, __entry->vcpu, __entry->sie_block)
 	);
 
@@ -255,7 +255,7 @@ TRACE_EVENT(kvm_s390_enable_css,
 		    __entry->kvm = kvm;
 		    ),
 
-	    TP_printk("enabling channel I/O support (kvm @ %pK)\n",
+	    TP_printk("enabling channel I/O support (kvm @ %p)\n",
 		      __entry->kvm)
 	);
 
@@ -16,7 +16,7 @@
 
 SYM_FUNC_START(entry_ibpb)
 	movl	$MSR_IA32_PRED_CMD, %ecx
-	movl	$PRED_CMD_IBPB, %eax
+	movl	_ASM_RIP(x86_pred_cmd), %eax
 	xorl	%edx, %edx
 	wrmsr
 
@@ -621,7 +621,7 @@ int x86_pmu_hw_config(struct perf_event *event)
 	if (event->attr.type == event->pmu->type)
 		event->hw.config |= event->attr.config & X86_RAW_EVENT_MASK;
 
-	if (!event->attr.freq && x86_pmu.limit_period) {
+	if (is_sampling_event(event) && !event->attr.freq && x86_pmu.limit_period) {
 		s64 left = event->attr.sample_period;
 		x86_pmu.limit_period(event, &left);
 		if (left > event->attr.sample_period)
@@ -159,6 +159,8 @@
 #define INTEL_FAM6_GRANITERAPIDS_D	0xAE
 #define INTEL_GRANITERAPIDS_D	IFM(6, 0xAE)
 
+#define INTEL_BARTLETTLAKE	IFM(6, 0xD7) /* Raptor Cove */
+
 /* "Hybrid" Processors (P-Core/E-Core) */
 
 #define INTEL_FAM6_LAKEFIELD		0x8A /* Sunny Cove / Tremont */
@@ -1574,7 +1574,7 @@ static void __init spec_ctrl_disable_kernel_rrsba(void)
 	rrsba_disabled = true;
 }
 
-static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode)
+static void __init spectre_v2_select_rsb_mitigation(enum spectre_v2_mitigation mode)
 {
 	/*
 	 * Similar to context switches, there are two types of RSB attacks
@@ -1598,27 +1598,30 @@ static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode)
 	 */
 	switch (mode) {
 	case SPECTRE_V2_NONE:
-		return;
+		break;
 
-	case SPECTRE_V2_EIBRS_LFENCE:
 	case SPECTRE_V2_EIBRS:
-		if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
-			setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
-			pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n");
-		}
-		return;
+	case SPECTRE_V2_EIBRS_LFENCE:
+	case SPECTRE_V2_EIBRS_RETPOLINE:
+		if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
+			pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n");
+			setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
+		}
+		break;
 
-	case SPECTRE_V2_EIBRS_RETPOLINE:
 	case SPECTRE_V2_RETPOLINE:
 	case SPECTRE_V2_LFENCE:
 	case SPECTRE_V2_IBRS:
+		pr_info("Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT\n");
+		setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 		setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
-		pr_info("Spectre v2 / SpectreRSB : Filling RSB on VMEXIT\n");
-		return;
-	}
+		break;
 
-	pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation at VM exit");
-	dump_stack();
+	default:
+		pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation\n");
+		dump_stack();
+		break;
+	}
 }
@@ -1844,10 +1847,7 @@ static void __init spectre_v2_select_mitigation(void)
 	 *
 	 * FIXME: Is this pointless for retbleed-affected AMD?
 	 */
-	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
-	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
-
-	spectre_v2_determine_rsb_fill_type_at_vmexit(mode);
+	spectre_v2_select_rsb_mitigation(mode);
 
 	/*
 	 * Retpoline protects the kernel, but doesn't protect firmware. IBRS
@@ -46,7 +46,8 @@ bool __init pit_timer_init(void)
 		 * VMMs otherwise steal CPU time just to pointlessly waggle
 		 * the (masked) IRQ.
 		 */
-		clockevent_i8253_disable();
+		scoped_guard(irq)
+			clockevent_i8253_disable();
 		return false;
 	}
 	clockevent_i8253_init(true);
@@ -820,7 +820,7 @@ static int svm_ir_list_add(struct vcpu_svm *svm, struct amd_iommu_pi_data *pi)
 	 * Allocating new amd_iommu_pi_data, which will get
 	 * add to the per-vcpu ir_list.
 	 */
-	ir = kzalloc(sizeof(struct amd_svm_iommu_ir), GFP_KERNEL_ACCOUNT);
+	ir = kzalloc(sizeof(struct amd_svm_iommu_ir), GFP_ATOMIC | __GFP_ACCOUNT);
 	if (!ir) {
 		ret = -ENOMEM;
 		goto out;
@@ -896,6 +896,7 @@ int avic_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
 {
 	struct kvm_kernel_irq_routing_entry *e;
 	struct kvm_irq_routing_table *irq_rt;
+	bool enable_remapped_mode = true;
 	int idx, ret = 0;
 
 	if (!kvm_arch_has_assigned_device(kvm) ||
@@ -933,6 +934,8 @@ int avic_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
 		    kvm_vcpu_apicv_active(&svm->vcpu)) {
 			struct amd_iommu_pi_data pi;
 
+			enable_remapped_mode = false;
+
 			/* Try to enable guest_mode in IRTE */
 			pi.base = __sme_set(page_to_phys(svm->avic_backing_page) &
 					    AVIC_HPA_MASK);
@@ -951,33 +954,6 @@ int avic_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
 			 */
 			if (!ret && pi.is_guest_mode)
 				svm_ir_list_add(svm, &pi);
-		} else {
-			/* Use legacy mode in IRTE */
-			struct amd_iommu_pi_data pi;
-
-			/**
-			 * Here, pi is used to:
-			 * - Tell IOMMU to use legacy mode for this interrupt.
-			 * - Retrieve ga_tag of prior interrupt remapping data.
-			 */
-			pi.prev_ga_tag = 0;
-			pi.is_guest_mode = false;
-			ret = irq_set_vcpu_affinity(host_irq, &pi);
-
-			/**
-			 * Check if the posted interrupt was previously
-			 * setup with the guest_mode by checking if the ga_tag
-			 * was cached. If so, we need to clean up the per-vcpu
-			 * ir_list.
-			 */
-			if (!ret && pi.prev_ga_tag) {
-				int id = AVIC_GATAG_TO_VCPUID(pi.prev_ga_tag);
-				struct kvm_vcpu *vcpu;
-
-				vcpu = kvm_get_vcpu_by_id(kvm, id);
-				if (vcpu)
-					svm_ir_list_del(to_svm(vcpu), &pi);
-			}
-		}
+		}
 
 		if (!ret && svm) {
@@ -993,6 +969,34 @@ int avic_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
 	}
 
 	ret = 0;
+	if (enable_remapped_mode) {
+		/* Use legacy mode in IRTE */
+		struct amd_iommu_pi_data pi;
+
+		/**
+		 * Here, pi is used to:
+		 * - Tell IOMMU to use legacy mode for this interrupt.
+		 * - Retrieve ga_tag of prior interrupt remapping data.
+		 */
+		pi.prev_ga_tag = 0;
+		pi.is_guest_mode = false;
+		ret = irq_set_vcpu_affinity(host_irq, &pi);
+
+		/**
+		 * Check if the posted interrupt was previously
+		 * setup with the guest_mode by checking if the ga_tag
+		 * was cached. If so, we need to clean up the per-vcpu
+		 * ir_list.
+		 */
+		if (!ret && pi.prev_ga_tag) {
+			int id = AVIC_GATAG_TO_VCPUID(pi.prev_ga_tag);
+			struct kvm_vcpu *vcpu;
+
+			vcpu = kvm_get_vcpu_by_id(kvm, id);
+			if (vcpu)
+				svm_ir_list_del(to_svm(vcpu), &pi);
+		}
+	}
 out:
 	srcu_read_unlock(&kvm->irq_srcu, idx);
 	return ret;
@@ -274,6 +274,7 @@ int vmx_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
 {
 	struct kvm_kernel_irq_routing_entry *e;
 	struct kvm_irq_routing_table *irq_rt;
+	bool enable_remapped_mode = true;
 	struct kvm_lapic_irq irq;
 	struct kvm_vcpu *vcpu;
 	struct vcpu_data vcpu_info;
@@ -312,21 +313,8 @@ int vmx_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
 
 		kvm_set_msi_irq(kvm, e, &irq);
 		if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu) ||
-		    !kvm_irq_is_postable(&irq)) {
-			/*
-			 * Make sure the IRTE is in remapped mode if
-			 * we don't handle it in posted mode.
-			 */
-			ret = irq_set_vcpu_affinity(host_irq, NULL);
-			if (ret < 0) {
-				printk(KERN_INFO
-				   "failed to back to remapped mode, irq: %u\n",
-				   host_irq);
-				goto out;
-			}
-
+		    !kvm_irq_is_postable(&irq))
 			continue;
-		}
 
 		vcpu_info.pi_desc_addr = __pa(vcpu_to_pi_desc(vcpu));
 		vcpu_info.vector = irq.vector;
@@ -334,11 +322,12 @@ int vmx_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
 		trace_kvm_pi_irte_update(host_irq, vcpu->vcpu_id, e->gsi,
 				vcpu_info.vector, vcpu_info.pi_desc_addr, set);
 
-		if (set)
-			ret = irq_set_vcpu_affinity(host_irq, &vcpu_info);
-		else
-			ret = irq_set_vcpu_affinity(host_irq, NULL);
+		if (!set)
+			continue;
 
+		enable_remapped_mode = false;
+
+		ret = irq_set_vcpu_affinity(host_irq, &vcpu_info);
 		if (ret < 0) {
 			printk(KERN_INFO "%s: failed to update PI IRTE\n",
 					__func__);
@@ -346,6 +335,9 @@ int vmx_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
 		}
 	}
 
+	if (enable_remapped_mode)
+		ret = irq_set_vcpu_affinity(host_irq, NULL);
+
 	ret = 0;
 out:
 	srcu_read_unlock(&kvm->irq_srcu, idx);
@@ -13297,7 +13297,8 @@ int kvm_arch_update_irqfd_routing(struct kvm *kvm, unsigned int host_irq,
bool kvm_arch_irqfd_route_changed(struct kvm_kernel_irq_routing_entry *old,
struct kvm_kernel_irq_routing_entry *new)
{
if (new->type != KVM_IRQ_ROUTING_MSI)
if (old->type != KVM_IRQ_ROUTING_MSI ||
new->type != KVM_IRQ_ROUTING_MSI)
return true;

return !!memcmp(&old->msi, &new->msi, sizeof(new->msi));

@@ -392,9 +392,9 @@ static void cond_mitigation(struct task_struct *next)
prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_spec);

/*
* Avoid user/user BTB poisoning by flushing the branch predictor
* when switching between processes. This stops one process from
* doing Spectre-v2 attacks on another.
* Avoid user->user BTB/RSB poisoning by flushing them when switching
* between processes. This stops one process from doing Spectre-v2
* attacks on another.
*
* Both, the conditional and the always IBPB mode use the mm
* pointer to avoid the IBPB when switching between tasks of the
@@ -100,7 +100,12 @@ SYM_CODE_START_LOCAL(pvh_start_xen)
xor %edx, %edx
wrmsr

call xen_prepare_pvh
/* Call xen_prepare_pvh() via the kernel virtual mapping */
leaq xen_prepare_pvh(%rip), %rax
subq phys_base(%rip), %rax
addq $__START_KERNEL_map, %rax
ANNOTATE_RETPOLINE_SAFE
call *%rax

/* startup_64 expects boot_params in %rsi. */
mov $_pa(pvh_bootparams), %rsi
@@ -17,10 +17,10 @@
#include <crypto/internal/skcipher.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/spinlock.h>
#include <linux/string.h>

static DEFINE_MUTEX(crypto_default_null_skcipher_lock);
static DEFINE_SPINLOCK(crypto_default_null_skcipher_lock);
static struct crypto_sync_skcipher *crypto_default_null_skcipher;
static int crypto_default_null_skcipher_refcnt;

@@ -152,23 +152,32 @@ MODULE_ALIAS_CRYPTO("cipher_null");

struct crypto_sync_skcipher *crypto_get_default_null_skcipher(void)
{
struct crypto_sync_skcipher *ntfm = NULL;
struct crypto_sync_skcipher *tfm;

mutex_lock(&crypto_default_null_skcipher_lock);
spin_lock_bh(&crypto_default_null_skcipher_lock);
tfm = crypto_default_null_skcipher;

if (!tfm) {
tfm = crypto_alloc_sync_skcipher("ecb(cipher_null)", 0, 0);
if (IS_ERR(tfm))
goto unlock;
spin_unlock_bh(&crypto_default_null_skcipher_lock);

crypto_default_null_skcipher = tfm;
ntfm = crypto_alloc_sync_skcipher("ecb(cipher_null)", 0, 0);
if (IS_ERR(ntfm))
return ntfm;

spin_lock_bh(&crypto_default_null_skcipher_lock);
tfm = crypto_default_null_skcipher;
if (!tfm) {
tfm = ntfm;
ntfm = NULL;
crypto_default_null_skcipher = tfm;
}
}

crypto_default_null_skcipher_refcnt++;
spin_unlock_bh(&crypto_default_null_skcipher_lock);

unlock:
mutex_unlock(&crypto_default_null_skcipher_lock);
crypto_free_sync_skcipher(ntfm);

return tfm;
}
@@ -176,12 +185,16 @@ EXPORT_SYMBOL_GPL(crypto_get_default_null_skcipher);

void crypto_put_default_null_skcipher(void)
{
mutex_lock(&crypto_default_null_skcipher_lock);
struct crypto_sync_skcipher *tfm = NULL;

spin_lock_bh(&crypto_default_null_skcipher_lock);
if (!--crypto_default_null_skcipher_refcnt) {
crypto_free_sync_skcipher(crypto_default_null_skcipher);
tfm = crypto_default_null_skcipher;
crypto_default_null_skcipher = NULL;
}
mutex_unlock(&crypto_default_null_skcipher_lock);
spin_unlock_bh(&crypto_default_null_skcipher_lock);

crypto_free_sync_skcipher(tfm);
}
EXPORT_SYMBOL_GPL(crypto_put_default_null_skcipher);
@@ -2301,6 +2301,34 @@ static const struct dmi_system_id acpi_ec_no_wakeup[] = {
DMI_MATCH(DMI_PRODUCT_FAMILY, "103C_5336AN HP ZHAN 66 Pro"),
},
},
/*
* Lenovo Legion Go S; touchscreen blocks HW sleep when woken up from EC
* https://gitlab.freedesktop.org/drm/amd/-/issues/3929
*/
{
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
DMI_MATCH(DMI_PRODUCT_NAME, "83L3"),
}
},
{
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
DMI_MATCH(DMI_PRODUCT_NAME, "83N6"),
}
},
{
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
DMI_MATCH(DMI_PRODUCT_NAME, "83Q2"),
}
},
{
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
DMI_MATCH(DMI_PRODUCT_NAME, "83Q3"),
}
},
{ },
};
@@ -229,7 +229,7 @@ static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr,
node_entry = ACPI_PTR_DIFF(node, table_hdr);
entry = ACPI_ADD_PTR(struct acpi_subtable_header, table_hdr,
sizeof(struct acpi_table_pptt));
proc_sz = sizeof(struct acpi_pptt_processor *);
proc_sz = sizeof(struct acpi_pptt_processor);

while ((unsigned long)entry + proc_sz < table_end) {
cpu_node = (struct acpi_pptt_processor *)entry;
@@ -270,7 +270,7 @@ static struct acpi_pptt_processor *acpi_find_processor_node(struct acpi_table_he
table_end = (unsigned long)table_hdr + table_hdr->length;
entry = ACPI_ADD_PTR(struct acpi_subtable_header, table_hdr,
sizeof(struct acpi_table_pptt));
proc_sz = sizeof(struct acpi_pptt_processor *);
proc_sz = sizeof(struct acpi_pptt_processor);

/* find the processor structure associated with this cpuid */
while ((unsigned long)entry + proc_sz < table_end) {

@@ -2354,8 +2354,8 @@ static unsigned int ata_msense_control_ata_feature(struct ata_device *dev,
*/
put_unaligned_be16(ATA_FEATURE_SUB_MPAGE_LEN - 4, &buf[2]);

if (dev->flags & ATA_DFLAG_CDL)
buf[4] = 0x02; /* Support T2A and T2B pages */
if (dev->flags & ATA_DFLAG_CDL_ENABLED)
buf[4] = 0x02; /* T2A and T2B pages enabled */
else
buf[4] = 0;

@@ -3764,12 +3764,11 @@ static int ata_mselect_control_spg0(struct ata_queued_cmd *qc,
}

/*
* Translate MODE SELECT control mode page, sub-pages f2h (ATA feature mode
* Translate MODE SELECT control mode page, sub-page f2h (ATA feature mode
* page) into a SET FEATURES command.
*/
static unsigned int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc,
const u8 *buf, int len,
u16 *fp)
static int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc,
const u8 *buf, int len, u16 *fp)
{
struct ata_device *dev = qc->dev;
struct ata_taskfile *tf = &qc->tf;
@@ -3787,17 +3786,27 @@ static unsigned int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc,
/* Check cdl_ctrl */
switch (buf[0] & 0x03) {
case 0:
/* Disable CDL */
/* Disable CDL if it is enabled */
if (!(dev->flags & ATA_DFLAG_CDL_ENABLED))
return 0;
ata_dev_dbg(dev, "Disabling CDL\n");
cdl_action = 0;
dev->flags &= ~ATA_DFLAG_CDL_ENABLED;
break;
case 0x02:
/* Enable CDL T2A/T2B: NCQ priority must be disabled */
/*
* Enable CDL if not already enabled. Since this is mutually
* exclusive with NCQ priority, allow this only if NCQ priority
* is disabled.
*/
if (dev->flags & ATA_DFLAG_CDL_ENABLED)
return 0;
if (dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED) {
ata_dev_err(dev,
"NCQ priority must be disabled to enable CDL\n");
return -EINVAL;
}
ata_dev_dbg(dev, "Enabling CDL\n");
cdl_action = 1;
dev->flags |= ATA_DFLAG_CDL_ENABLED;
break;
@@ -73,6 +73,7 @@ static inline void subsys_put(struct subsys_private *sp)
kset_put(&sp->subsys);
}

struct subsys_private *bus_to_subsys(const struct bus_type *bus);
struct subsys_private *class_to_subsys(const struct class *class);

struct driver_private {
@@ -179,6 +180,22 @@ int driver_add_groups(struct device_driver *drv, const struct attribute_group **
void driver_remove_groups(struct device_driver *drv, const struct attribute_group **groups);
void device_driver_detach(struct device *dev);

static inline void device_set_driver(struct device *dev, const struct device_driver *drv)
{
/*
* Majority (all?) read accesses to dev->driver happens either
* while holding device lock or in bus/driver code that is only
* invoked when the device is bound to a driver and there is no
* concern of the pointer being changed while it is being read.
* However when reading device's uevent file we read driver pointer
* without taking device lock (so we do not block there for
* arbitrary amount of time). We use WRITE_ONCE() here to prevent
* tearing so that READ_ONCE() can safely be used in uevent code.
*/
// FIXME - this cast should not be needed "soon"
WRITE_ONCE(dev->driver, (struct device_driver *)drv);
}

int devres_release_all(struct device *dev);
void device_block_probing(void);
void device_unblock_probing(void);
@@ -57,7 +57,7 @@ static int __must_check bus_rescan_devices_helper(struct device *dev,
* NULL. A call to subsys_put() must be done when finished with the pointer in
* order for it to be properly freed.
*/
static struct subsys_private *bus_to_subsys(const struct bus_type *bus)
struct subsys_private *bus_to_subsys(const struct bus_type *bus)
{
struct subsys_private *sp = NULL;
struct kobject *kobj;

@@ -2571,6 +2571,35 @@ static const char *dev_uevent_name(const struct kobject *kobj)
return NULL;
}

/*
* Try filling "DRIVER=<name>" uevent variable for a device. Because this
* function may race with binding and unbinding the device from a driver,
* we need to be careful. Binding is generally safe, at worst we miss the
* fact that the device is already bound to a driver (but the driver
* information that is delivered through uevents is best-effort, it may
* become obsolete as soon as it is generated anyways). Unbinding is more
* risky as driver pointer is transitioning to NULL, so READ_ONCE() should
* be used to make sure we are dealing with the same pointer, and to
* ensure that driver structure is not going to disappear from under us
* we take bus' drivers klist lock. The assumption that only registered
* driver can be bound to a device, and to unregister a driver bus code
* will take the same lock.
*/
static void dev_driver_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct subsys_private *sp = bus_to_subsys(dev->bus);

if (sp) {
scoped_guard(spinlock, &sp->klist_drivers.k_lock) {
struct device_driver *drv = READ_ONCE(dev->driver);
if (drv)
add_uevent_var(env, "DRIVER=%s", drv->name);
}

subsys_put(sp);
}
}

static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
{
const struct device *dev = kobj_to_dev(kobj);
@@ -2602,8 +2631,8 @@ static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
if (dev->type && dev->type->name)
add_uevent_var(env, "DEVTYPE=%s", dev->type->name);

if (dev->driver)
add_uevent_var(env, "DRIVER=%s", dev->driver->name);
/* Add "DRIVER=%s" variable if the device is bound to a driver */
dev_driver_uevent(dev, env);

/* Add common DT information about the device */
of_device_uevent(dev, env);
@@ -2673,11 +2702,8 @@ static ssize_t uevent_show(struct device *dev, struct device_attribute *attr,
if (!env)
return -ENOMEM;

/* Synchronize with really_probe() */
device_lock(dev);
/* let the kset specific function add its keys */
retval = kset->uevent_ops->uevent(&dev->kobj, env);
device_unlock(dev);
if (retval)
goto out;

@@ -3692,7 +3718,7 @@ done:
device_pm_remove(dev);
dpm_sysfs_remove(dev);
DPMError:
dev->driver = NULL;
device_set_driver(dev, NULL);
bus_remove_device(dev);
BusError:
device_remove_attrs(dev);
@@ -550,7 +550,7 @@ static void device_unbind_cleanup(struct device *dev)
arch_teardown_dma_ops(dev);
kfree(dev->dma_range_map);
dev->dma_range_map = NULL;
dev->driver = NULL;
device_set_driver(dev, NULL);
dev_set_drvdata(dev, NULL);
if (dev->pm_domain && dev->pm_domain->dismiss)
dev->pm_domain->dismiss(dev);
@@ -629,7 +629,7 @@ static int really_probe(struct device *dev, struct device_driver *drv)
}

re_probe:
dev->driver = drv;
device_set_driver(dev, drv);

/* If using pinctrl, bind pins now before probing */
ret = pinctrl_bind_pins(dev);
@@ -1037,7 +1037,7 @@ static int __device_attach(struct device *dev, bool allow_async)
if (ret == 0)
ret = 1;
else {
dev->driver = NULL;
device_set_driver(dev, NULL);
ret = 0;
}
} else {

@@ -441,7 +441,7 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
cmd->iocb.ki_filp = file;
cmd->iocb.ki_complete = lo_rw_aio_complete;
cmd->iocb.ki_flags = IOCB_DIRECT;
cmd->iocb.ki_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
cmd->iocb.ki_ioprio = req_get_ioprio(rq);

if (rw == ITER_SOURCE)
ret = call_write_iter(file, &cmd->iocb, &iter);
@@ -315,7 +315,7 @@ static int __init misc_init(void)
goto fail_remove;

err = -EIO;
if (register_chrdev(MISC_MAJOR, "misc", &misc_fops))
if (__register_chrdev(MISC_MAJOR, 0, MINORMASK + 1, "misc", &misc_fops))
goto fail_printk;
return 0;

@@ -1612,8 +1612,8 @@ static void handle_control_message(struct virtio_device *vdev,
break;
case VIRTIO_CONSOLE_RESIZE: {
struct {
__u16 rows;
__u16 cols;
__virtio16 rows;
__virtio16 cols;
} size;

if (!is_console_port(port))
@@ -1621,7 +1621,8 @@ static void handle_control_message(struct virtio_device *vdev,

memcpy(&size, buf->buf + buf->offset + sizeof(*cpkt),
sizeof(size));
set_console_size(port, size.rows, size.cols);
set_console_size(port, virtio16_to_cpu(vdev, size.rows),
virtio16_to_cpu(vdev, size.cols));

port->cons.hvc->irq_requested = 1;
resize_console(port);
@@ -5281,6 +5281,10 @@ of_clk_get_hw_from_clkspec(struct of_phandle_args *clkspec)
if (!clkspec)
return ERR_PTR(-EINVAL);

/* Check if node in clkspec is in disabled/fail state */
if (!of_device_is_available(clkspec->np))
return ERR_PTR(-ENOENT);

mutex_lock(&of_clk_mutex);
list_for_each_entry(provider, &of_clk_providers, link) {
if (provider->node == clkspec->np) {

@@ -758,7 +758,7 @@ static void jr3_pci_detach(struct comedi_device *dev)
struct jr3_pci_dev_private *devpriv = dev->private;

if (devpriv)
del_timer_sync(&devpriv->timer);
timer_shutdown_sync(&devpriv->timer);

comedi_pci_detach(dev);
}
@@ -103,11 +103,17 @@ static const struct of_device_id apple_soc_cpufreq_of_match[] = {

static unsigned int apple_soc_cpufreq_get_rate(unsigned int cpu)
{
struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
struct apple_cpu_priv *priv = policy->driver_data;
struct cpufreq_policy *policy;
struct apple_cpu_priv *priv;
struct cpufreq_frequency_table *p;
unsigned int pstate;

policy = cpufreq_cpu_get_raw(cpu);
if (unlikely(!policy))
return 0;

priv = policy->driver_data;

if (priv->info->cur_pstate_mask) {
u64 reg = readq_relaxed(priv->reg_base + APPLE_DVFS_STATUS);

@@ -773,7 +773,7 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
int ret;

if (!policy)
return -ENODEV;
return 0;

cpu_data = policy->driver_data;

@@ -33,11 +33,17 @@ static const struct scmi_perf_proto_ops *perf_ops;

static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
{
struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
struct scmi_data *priv = policy->driver_data;
struct cpufreq_policy *policy;
struct scmi_data *priv;
unsigned long rate;
int ret;

policy = cpufreq_cpu_get_raw(cpu);
if (unlikely(!policy))
return 0;

priv = policy->driver_data;

ret = perf_ops->freq_get(ph, priv->domain_id, &rate, false);
if (ret)
return 0;
@@ -29,9 +29,16 @@ static struct scpi_ops *scpi_ops;

static unsigned int scpi_cpufreq_get_rate(unsigned int cpu)
{
struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
struct scpi_data *priv = policy->driver_data;
unsigned long rate = clk_get_rate(priv->clk);
struct cpufreq_policy *policy;
struct scpi_data *priv;
unsigned long rate;

policy = cpufreq_cpu_get_raw(cpu);
if (unlikely(!policy))
return 0;

priv = policy->driver_data;
rate = clk_get_rate(priv->clk);

return rate / 1000;
}

@@ -107,6 +107,12 @@ static int atmel_sha204a_probe(struct i2c_client *client)
i2c_priv->hwrng.name = dev_name(&client->dev);
i2c_priv->hwrng.read = atmel_sha204a_rng_read;

/*
* According to review by Bill Cox [1], this HWRNG has very low entropy.
* [1] https://www.metzdowd.com/pipermail/cryptography/2014-December/023858.html
*/
i2c_priv->hwrng.quality = 1;

ret = devm_hwrng_register(&client->dev, &i2c_priv->hwrng);
if (ret)
dev_warn(&client->dev, "failed to register RNG (%d)\n", ret);
@@ -577,6 +577,7 @@ static const struct pci_device_id sp_pci_table[] = {
{ PCI_VDEVICE(AMD, 0x14CA), (kernel_ulong_t)&dev_vdata[5] },
{ PCI_VDEVICE(AMD, 0x15C7), (kernel_ulong_t)&dev_vdata[6] },
{ PCI_VDEVICE(AMD, 0x1649), (kernel_ulong_t)&dev_vdata[6] },
{ PCI_VDEVICE(AMD, 0x1134), (kernel_ulong_t)&dev_vdata[7] },
{ PCI_VDEVICE(AMD, 0x17E0), (kernel_ulong_t)&dev_vdata[7] },
{ PCI_VDEVICE(AMD, 0x156E), (kernel_ulong_t)&dev_vdata[8] },
/* Last entry must be zero */

@@ -478,7 +478,6 @@ resource_size_t __rcrb_to_component(struct device *dev, struct cxl_rcrb_info *ri
resource_size_t rcrb = ri->base;
void __iomem *addr;
u32 bar0, bar1;
u16 cmd;
u32 id;

if (which == CXL_RCRB_UPSTREAM)
@@ -500,7 +499,6 @@ resource_size_t __rcrb_to_component(struct device *dev, struct cxl_rcrb_info *ri
}

id = readl(addr + PCI_VENDOR_ID);
cmd = readw(addr + PCI_COMMAND);
bar0 = readl(addr + PCI_BASE_ADDRESS_0);
bar1 = readl(addr + PCI_BASE_ADDRESS_1);
iounmap(addr);
@@ -515,8 +513,6 @@ resource_size_t __rcrb_to_component(struct device *dev, struct cxl_rcrb_info *ri
dev_err(dev, "Failed to access Downstream Port RCRB\n");
return CXL_RESOURCE_NONE;
}
if (!(cmd & PCI_COMMAND_MEMORY))
return CXL_RESOURCE_NONE;
/* The RCRB is a Memory Window, and the MEM_TYPE_1M bit is obsolete */
if (bar0 & (PCI_BASE_ADDRESS_MEM_TYPE_1M | PCI_BASE_ADDRESS_SPACE_IO))
return CXL_RESOURCE_NONE;
@@ -214,7 +214,7 @@ static long udmabuf_create(struct miscdevice *device,
if (!ubuf)
return -ENOMEM;

pglimit = (size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
pglimit = ((u64)size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
for (i = 0; i < head->count; i++) {
if (!IS_ALIGNED(list[i].offset, PAGE_SIZE))
goto err;

@@ -827,9 +827,9 @@ static int dmatest_func(void *data)
} else {
dma_async_issue_pending(chan);

wait_event_freezable_timeout(thread->done_wait,
done->done,
msecs_to_jiffies(params->timeout));
wait_event_timeout(thread->done_wait,
done->done,
msecs_to_jiffies(params->timeout));

status = dma_async_is_tx_complete(chan, cookie, NULL,
NULL);
@@ -247,6 +247,9 @@ static void of_gpio_set_polarity_by_property(const struct device_node *np,
{ "fsl,imx8qm-fec", "phy-reset-gpios", "phy-reset-active-high" },
{ "fsl,s32v234-fec", "phy-reset-gpios", "phy-reset-active-high" },
#endif
#if IS_ENABLED(CONFIG_MMC_ATMELMCI)
{ "atmel,hsmci", "cd-gpios", "cd-inverted" },
#endif
#if IS_ENABLED(CONFIG_PCI_IMX6)
{ "fsl,imx6q-pcie", "reset-gpio", "reset-gpio-active-high" },
{ "fsl,imx6sx-pcie", "reset-gpio", "reset-gpio-active-high" },
@@ -272,9 +275,6 @@ static void of_gpio_set_polarity_by_property(const struct device_node *np,
#if IS_ENABLED(CONFIG_REGULATOR_GPIO)
{ "regulator-gpio", "enable-gpio", "enable-active-high" },
{ "regulator-gpio", "enable-gpios", "enable-active-high" },
#endif
#if IS_ENABLED(CONFIG_MMC_ATMELMCI)
{ "atmel,hsmci", "cd-gpios", "cd-inverted" },
#endif
};
unsigned int i;
@@ -2787,16 +2787,16 @@ static void dm_gpureset_commit_state(struct dc_state *dc_state,
for (k = 0; k < dc_state->stream_count; k++) {
bundle->stream_update.stream = dc_state->streams[k];

for (m = 0; m < dc_state->stream_status->plane_count; m++) {
for (m = 0; m < dc_state->stream_status[k].plane_count; m++) {
bundle->surface_updates[m].surface =
dc_state->stream_status->plane_states[m];
dc_state->stream_status[k].plane_states[m];
bundle->surface_updates[m].surface->force_full_update =
true;
}

update_planes_and_stream_adapter(dm->dc,
UPDATE_TYPE_FULL,
dc_state->stream_status->plane_count,
dc_state->stream_status[k].plane_count,
dc_state->streams[k],
&bundle->stream_update,
bundle->surface_updates);
@@ -9588,6 +9588,9 @@ static bool should_reset_plane(struct drm_atomic_state *state,
if (adev->ip_versions[DCE_HWIP][0] < IP_VERSION(3, 2, 0) && state->allow_modeset)
return true;

if (amdgpu_in_reset(adev) && state->allow_modeset)
return true;

/* Exit early if we know that we're adding or removing the plane. */
if (old_plane_state->crtc != new_plane_state->crtc)
return true;

@@ -55,6 +55,7 @@ static int qibfs_mknod(struct inode *dir, struct dentry *dentry,
struct inode *inode = new_inode(dir->i_sb);

if (!inode) {
dput(dentry);
error = -EPERM;
goto bail;
}
@@ -3619,7 +3619,7 @@ static int amd_ir_set_vcpu_affinity(struct irq_data *data, void *vcpu_info)
* we should not modify the IRTE
*/
if (!dev_data || !dev_data->use_vapic)
return 0;
return -EINVAL;

ir_data->cfg = irqd_cfg(data);
pi_data->ir_data = ir_data;

@@ -454,7 +454,7 @@ static int __init gicv2m_of_init(struct fwnode_handle *parent_handle,
#ifdef CONFIG_ACPI
static int acpi_num_msi;

static __init struct fwnode_handle *gicv2m_get_fwnode(struct device *dev)
static struct fwnode_handle *gicv2m_get_fwnode(struct device *dev)
{
struct v2m_data *data;
@@ -313,6 +313,10 @@ static irqreturn_t pcc_mbox_irq(int irq, void *p)
int ret;

pchan = chan->con_priv;

if (pcc_chan_reg_read_modify_write(&pchan->plat_irq_ack))
return IRQ_NONE;

if (pchan->type == ACPI_PCCT_TYPE_EXT_PCC_MASTER_SUBSPACE &&
!pchan->chan_in_use)
return IRQ_NONE;
@@ -330,13 +334,16 @@ static irqreturn_t pcc_mbox_irq(int irq, void *p)
return IRQ_NONE;
}

if (pcc_chan_reg_read_modify_write(&pchan->plat_irq_ack))
return IRQ_NONE;

/*
* Clear this flag after updating interrupt ack register and just
* before mbox_chan_received_data() which might call pcc_send_data()
* where the flag is set again to start new transfer. This is
* required to avoid any possible race in updatation of this flag.
*/
pchan->chan_in_use = false;
mbox_chan_received_data(chan, NULL);

check_and_ack(pchan, chan);
pchan->chan_in_use = false;

return IRQ_HANDLED;
}

@@ -101,7 +101,7 @@ static int chameleon_parse_gdd(struct mcb_bus *bus,

ret = mcb_device_register(bus, mdev);
if (ret < 0)
goto err;
return ret;

return 0;
@@ -2061,14 +2061,9 @@ static int fix_sync_read_error(struct r1bio *r1_bio)
if (!rdev_set_badblocks(rdev, sect, s, 0))
abort = 1;
}
if (abort) {
conf->recovery_disabled =
mddev->recovery_disabled;
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
md_done_sync(mddev, r1_bio->sectors, 0);
put_buf(r1_bio);
if (abort)
return 0;
}

/* Try next page */
sectors -= s;
sect += s;
@@ -2207,10 +2202,21 @@ static void sync_request_write(struct mddev *mddev, struct r1bio *r1_bio)
int disks = conf->raid_disks * 2;
struct bio *wbio;

if (!test_bit(R1BIO_Uptodate, &r1_bio->state))
/* ouch - failed to read all of that. */
if (!fix_sync_read_error(r1_bio))
if (!test_bit(R1BIO_Uptodate, &r1_bio->state)) {
/*
* ouch - failed to read all of that.
* No need to fix read error for check/repair
* because all member disks are read.
*/
if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) ||
!fix_sync_read_error(r1_bio)) {
conf->recovery_disabled = mddev->recovery_disabled;
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
md_done_sync(mddev, r1_bio->sectors, 0);
put_buf(r1_bio);
return;
}
}

if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
process_checks(r1_bio);
@@ -28,6 +28,13 @@ static const unsigned long rodata = 0xAA55AA55;
/* This is marked __ro_after_init, so it should ultimately be .rodata. */
static unsigned long ro_after_init __ro_after_init = 0x55AA5500;

/*
* This is a pointer to do_nothing() which is initialized at runtime rather
* than build time to avoid objtool IBT validation warnings caused by an
* inlined unrolled memcpy() in execute_location().
*/
static void __ro_after_init *do_nothing_ptr;

/*
* This just returns to the caller. It is designed to be copied into
* non-executable memory regions.
@@ -65,13 +72,12 @@ static noinline __nocfi void execute_location(void *dst, bool write)
{
void (*func)(void);
func_desc_t fdesc;
void *do_nothing_text = dereference_function_descriptor(do_nothing);

pr_info("attempting ok execution at %px\n", do_nothing_text);
pr_info("attempting ok execution at %px\n", do_nothing_ptr);
do_nothing();

if (write == CODE_WRITE) {
memcpy(dst, do_nothing_text, EXEC_SIZE);
memcpy(dst, do_nothing_ptr, EXEC_SIZE);
flush_icache_range((unsigned long)dst,
(unsigned long)dst + EXEC_SIZE);
}
@@ -267,6 +273,8 @@ static void lkdtm_ACCESS_NULL(void)

void __init lkdtm_perms_init(void)
{
do_nothing_ptr = dereference_function_descriptor(do_nothing);

/* Make sure we can write to __ro_after_init values during __init */
ro_after_init |= 0xAA;
}
@@ -37,6 +37,7 @@
struct pci1xxxx_gpio {
struct auxiliary_device *aux_dev;
void __iomem *reg_base;
raw_spinlock_t wa_lock;
struct gpio_chip gpio;
spinlock_t lock;
int irq_base;
@@ -164,7 +165,7 @@ static void pci1xxxx_gpio_irq_ack(struct irq_data *data)
unsigned long flags;

spin_lock_irqsave(&priv->lock, flags);
pci1xxx_assign_bit(priv->reg_base, INTR_STAT_OFFSET(gpio), (gpio % 32), true);
writel(BIT(gpio % 32), priv->reg_base + INTR_STAT_OFFSET(gpio));
spin_unlock_irqrestore(&priv->lock, flags);
}

@@ -254,6 +255,7 @@ static irqreturn_t pci1xxxx_gpio_irq_handler(int irq, void *dev_id)
struct pci1xxxx_gpio *priv = dev_id;
struct gpio_chip *gc = &priv->gpio;
unsigned long int_status = 0;
unsigned long wa_flags;
unsigned long flags;
u8 pincount;
int bit;
@@ -277,7 +279,9 @@ static irqreturn_t pci1xxxx_gpio_irq_handler(int irq, void *dev_id)
writel(BIT(bit), priv->reg_base + INTR_STATUS_OFFSET(gpiobank));
spin_unlock_irqrestore(&priv->lock, flags);
irq = irq_find_mapping(gc->irq.domain, (bit + (gpiobank * 32)));
handle_nested_irq(irq);
raw_spin_lock_irqsave(&priv->wa_lock, wa_flags);
generic_handle_irq(irq);
raw_spin_unlock_irqrestore(&priv->wa_lock, wa_flags);
}
}
spin_lock_irqsave(&priv->lock, flags);
@@ -117,6 +117,7 @@

#define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */

#define MEI_DEV_ID_PTL_H 0xE370 /* Panther Lake H */
#define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */

/*

@@ -124,6 +124,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {

{MEI_PCI_DEVICE(MEI_DEV_ID_LNL_M, MEI_ME_PCH15_CFG)},

{MEI_PCI_DEVICE(MEI_DEV_ID_PTL_H, MEI_ME_PCH15_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_PTL_P, MEI_ME_PCH15_CFG)},

/* required last entry */
@@ -2596,6 +2596,9 @@ mt7531_setup_common(struct dsa_switch *ds)
struct mt7530_priv *priv = ds->priv;
int ret, i;

ds->assisted_learning_on_cpu_port = true;
ds->mtu_enforcement_ingress = true;

mt753x_trap_frames(priv);

/* Enable and reset MIB counters */

@@ -2735,9 +2738,6 @@ mt7531_setup(struct dsa_switch *ds)

mt7531_setup_common(ds);

ds->assisted_learning_on_cpu_port = true;
ds->mtu_enforcement_ingress = true;

return 0;
}
@@ -5047,6 +5047,7 @@ static const struct mv88e6xxx_ops mv88e6320_ops = {
.port_set_rgmii_delay = mv88e6320_port_set_rgmii_delay,
.port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
.port_tag_remap = mv88e6095_port_tag_remap,
.port_set_policy = mv88e6352_port_set_policy,
.port_set_frame_mode = mv88e6351_port_set_frame_mode,
.port_set_ucast_flood = mv88e6352_port_set_ucast_flood,
.port_set_mcast_flood = mv88e6352_port_set_mcast_flood,

@@ -5073,6 +5074,8 @@ static const struct mv88e6xxx_ops mv88e6320_ops = {
.reset = mv88e6352_g1_reset,
.vtu_getnext = mv88e6352_g1_vtu_getnext,
.vtu_loadpurge = mv88e6352_g1_vtu_loadpurge,
.stu_getnext = mv88e6352_g1_stu_getnext,
.stu_loadpurge = mv88e6352_g1_stu_loadpurge,
.gpio_ops = &mv88e6352_gpio_ops,
.avb_ops = &mv88e6352_avb_ops,
.ptp_ops = &mv88e6352_ptp_ops,

@@ -5097,6 +5100,7 @@ static const struct mv88e6xxx_ops mv88e6321_ops = {
.port_set_rgmii_delay = mv88e6320_port_set_rgmii_delay,
.port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
.port_tag_remap = mv88e6095_port_tag_remap,
.port_set_policy = mv88e6352_port_set_policy,
.port_set_frame_mode = mv88e6351_port_set_frame_mode,
.port_set_ucast_flood = mv88e6352_port_set_ucast_flood,
.port_set_mcast_flood = mv88e6352_port_set_mcast_flood,

@@ -5122,6 +5126,8 @@ static const struct mv88e6xxx_ops mv88e6321_ops = {
.reset = mv88e6352_g1_reset,
.vtu_getnext = mv88e6352_g1_vtu_getnext,
.vtu_loadpurge = mv88e6352_g1_vtu_loadpurge,
.stu_getnext = mv88e6352_g1_stu_getnext,
.stu_loadpurge = mv88e6352_g1_stu_loadpurge,
.gpio_ops = &mv88e6352_gpio_ops,
.avb_ops = &mv88e6352_avb_ops,
.ptp_ops = &mv88e6352_ptp_ops,

@@ -5713,7 +5719,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.global1_addr = 0x1b,
.global2_addr = 0x1c,
.age_time_coeff = 3750,
.atu_move_port_mask = 0x1f,
.atu_move_port_mask = 0xf,
.g1_irqs = 9,
.g2_irqs = 10,
.pvt = true,

@@ -6118,6 +6124,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.internal_phys_offset = 3,
.num_gpio = 15,
.max_vid = 4095,
.max_sid = 63,
.port_base_addr = 0x10,
.phy_base_addr = 0x0,
.global1_addr = 0x1b,

@@ -6144,6 +6151,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.internal_phys_offset = 3,
.num_gpio = 15,
.max_vid = 4095,
.max_sid = 63,
.port_base_addr = 0x10,
.phy_base_addr = 0x0,
.global1_addr = 0x1b,

@@ -6152,6 +6160,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.g1_irqs = 8,
.g2_irqs = 10,
.atu_move_port_mask = 0xf,
.pvt = true,
.multi_chip = true,
.edsa_support = MV88E6XXX_EDSA_SUPPORTED,
.ptp_support = true,

@@ -6174,7 +6183,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
.global1_addr = 0x1b,
.global2_addr = 0x1c,
.age_time_coeff = 3750,
.atu_move_port_mask = 0x1f,
.atu_move_port_mask = 0xf,
.g1_irqs = 9,
.g2_irqs = 10,
.pvt = true,
@@ -5,11 +5,6 @@

#include "core.h"

struct pdsc_wait_context {
struct pdsc_qcq *qcq;
struct completion wait_completion;
};

static int pdsc_process_notifyq(struct pdsc_qcq *qcq)
{
union pds_core_notifyq_comp *comp;

@@ -110,10 +105,10 @@ void pdsc_process_adminq(struct pdsc_qcq *qcq)
q_info = &q->info[q->tail_idx];
q->tail_idx = (q->tail_idx + 1) & (q->num_descs - 1);

/* Copy out the completion data */
memcpy(q_info->dest, comp, sizeof(*comp));

complete_all(&q_info->wc->wait_completion);
if (!completion_done(&q_info->completion)) {
memcpy(q_info->dest, comp, sizeof(*comp));
complete(&q_info->completion);
}

if (cq->tail_idx == cq->num_descs - 1)
cq->done_color = !cq->done_color;

@@ -166,8 +161,7 @@ irqreturn_t pdsc_adminq_isr(int irq, void *data)
static int __pdsc_adminq_post(struct pdsc *pdsc,
struct pdsc_qcq *qcq,
union pds_core_adminq_cmd *cmd,
union pds_core_adminq_comp *comp,
struct pdsc_wait_context *wc)
union pds_core_adminq_comp *comp)
{
struct pdsc_queue *q = &qcq->q;
struct pdsc_q_info *q_info;

@@ -209,9 +203,9 @@ static int __pdsc_adminq_post(struct pdsc *pdsc,
/* Post the request */
index = q->head_idx;
q_info = &q->info[index];
q_info->wc = wc;
q_info->dest = comp;
memcpy(q_info->desc, cmd, sizeof(*cmd));
reinit_completion(&q_info->completion);

dev_dbg(pdsc->dev, "head_idx %d tail_idx %d\n",
q->head_idx, q->tail_idx);

@@ -235,16 +229,13 @@ int pdsc_adminq_post(struct pdsc *pdsc,
union pds_core_adminq_comp *comp,
bool fast_poll)
{
struct pdsc_wait_context wc = {
.wait_completion =
COMPLETION_INITIALIZER_ONSTACK(wc.wait_completion),
};
unsigned long poll_interval = 1;
unsigned long poll_jiffies;
unsigned long time_limit;
unsigned long time_start;
unsigned long time_done;
unsigned long remaining;
struct completion *wc;
int err = 0;
int index;

@@ -254,20 +245,19 @@ int pdsc_adminq_post(struct pdsc *pdsc,
return -ENXIO;
}

wc.qcq = &pdsc->adminqcq;
index = __pdsc_adminq_post(pdsc, &pdsc->adminqcq, cmd, comp, &wc);
index = __pdsc_adminq_post(pdsc, &pdsc->adminqcq, cmd, comp);
if (index < 0) {
err = index;
goto err_out;
}

wc = &pdsc->adminqcq.q.info[index].completion;
time_start = jiffies;
time_limit = time_start + HZ * pdsc->devcmd_timeout;
do {
/* Timeslice the actual wait to catch IO errors etc early */
poll_jiffies = msecs_to_jiffies(poll_interval);
remaining = wait_for_completion_timeout(&wc.wait_completion,
poll_jiffies);
remaining = wait_for_completion_timeout(wc, poll_jiffies);
if (remaining)
break;

@@ -296,9 +286,11 @@ int pdsc_adminq_post(struct pdsc *pdsc,
dev_dbg(pdsc->dev, "%s: elapsed %d msecs\n",
__func__, jiffies_to_msecs(time_done - time_start));

/* Check the results */
if (time_after_eq(time_done, time_limit))
/* Check the results and clear an un-completed timeout */
if (time_after_eq(time_done, time_limit) && !completion_done(wc)) {
err = -ETIMEDOUT;
complete(wc);
}

dev_dbg(pdsc->dev, "read admin queue completion idx %d:\n", index);
dynamic_hex_dump("comp ", DUMP_PREFIX_OFFSET, 16, 1,
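The pds_core rework above drops the on-stack wait context in favor of a completion object embedded in each admin-queue slot, re-armed before every post and checked before delivering a (possibly late) response. A minimal single-threaded userspace sketch of that slot lifecycle — all names are illustrative, and a plain flag stands in for `struct completion`:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical miniature of the per-slot completion pattern: each queue
 * slot owns a reusable "done" flag plus the caller's result buffer. */
struct slot {
	bool done;	/* stands in for struct completion */
	int result;	/* stands in for the caller's comp buffer */
};

/* Re-arm the slot before posting a new command (reinit_completion()). */
static void slot_post(struct slot *s)
{
	s->done = false;
	s->result = 0;
}

/* Response path: deliver only if nobody completed the slot yet (the
 * timeout path may already have marked it done). Returns true when the
 * result was actually delivered. */
static bool slot_complete(struct slot *s, int value)
{
	if (s->done)
		return false;	/* stale response for a timed-out command */
	s->result = value;
	s->done = true;
	return true;
}

/* Timeout path: mark an un-completed slot done so a late response
 * cannot scribble over the next user of the slot. */
static void slot_timeout(struct slot *s)
{
	if (!s->done)
		s->done = true;
}
```

The point of the kernel change is the same as in this toy: the completion lives as long as the queue slot, so a response that arrives after the waiter gave up has somewhere safe to land.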
@@ -107,9 +107,6 @@ int pds_client_adminq_cmd(struct pds_auxiliary_dev *padev,
dev_dbg(pf->dev, "%s: %s opcode %d\n",
__func__, dev_name(&padev->aux_dev.dev), req->opcode);

if (pf->state)
return -ENXIO;

/* Wrap the client's request */
cmd.client_request.opcode = PDS_AQ_CMD_CLIENT_CMD;
cmd.client_request.client_id = cpu_to_le16(padev->client_id);

@@ -169,8 +169,10 @@ static void pdsc_q_map(struct pdsc_queue *q, void *base, dma_addr_t base_pa)
q->base = base;
q->base_pa = base_pa;

for (i = 0, cur = q->info; i < q->num_descs; i++, cur++)
for (i = 0, cur = q->info; i < q->num_descs; i++, cur++) {
cur->desc = base + (i * q->desc_size);
init_completion(&cur->completion);
}
}

static void pdsc_cq_map(struct pdsc_cq *cq, void *base, dma_addr_t base_pa)

@@ -96,7 +96,7 @@ struct pdsc_q_info {
unsigned int bytes;
unsigned int nbufs;
struct pdsc_buf_info bufs[PDS_CORE_MAX_FRAGS];
struct pdsc_wait_context *wc;
struct completion completion;
void *dest;
};

@@ -101,7 +101,7 @@ int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
.fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,
.fw_control.oper = PDS_CORE_FW_GET_LIST,
};
struct pds_core_fw_list_info fw_list;
struct pds_core_fw_list_info fw_list = {};
struct pdsc *pdsc = devlink_priv(dl);
union pds_core_dev_comp comp;
char buf[32];

@@ -114,8 +114,6 @@ int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
if (!err)
memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list));
mutex_unlock(&pdsc->devcmd_lock);
if (err && err != -EIO)
return err;

listlen = min(fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names));
for (i = 0; i < listlen; i++) {
@@ -3949,11 +3949,27 @@ static int mtk_hw_init(struct mtk_eth *eth, bool reset)
mtk_w32(eth, 0x21021000, MTK_FE_INT_GRP);

if (mtk_is_netsys_v3_or_greater(eth)) {
/* PSE should not drop port1, port8 and port9 packets */
mtk_w32(eth, 0x00000302, PSE_DROP_CFG);
/* PSE dummy page mechanism */
mtk_w32(eth, PSE_DUMMY_WORK_GDM(1) | PSE_DUMMY_WORK_GDM(2) |
PSE_DUMMY_WORK_GDM(3) | DUMMY_PAGE_THR, PSE_DUMY_REQ);

/* PSE free buffer drop threshold */
mtk_w32(eth, 0x00600009, PSE_IQ_REV(8));

/* PSE should not drop port8, port9 and port13 packets from
* WDMA Tx
*/
mtk_w32(eth, 0x00002300, PSE_DROP_CFG);

/* PSE should drop packets to port8, port9 and port13 on WDMA Rx
* ring full
*/
mtk_w32(eth, 0x00002300, PSE_PPE_DROP(0));
mtk_w32(eth, 0x00002300, PSE_PPE_DROP(1));
mtk_w32(eth, 0x00002300, PSE_PPE_DROP(2));

/* GDM and CDM Threshold */
mtk_w32(eth, 0x00000707, MTK_CDMW0_THRES);
mtk_w32(eth, 0x08000707, MTK_CDMW0_THRES);
mtk_w32(eth, 0x00000077, MTK_CDMW1_THRES);

/* Disable GDM1 RX CRC stripping */

@@ -3970,7 +3986,7 @@ static int mtk_hw_init(struct mtk_eth *eth, bool reset)
mtk_w32(eth, 0x00000300, PSE_DROP_CFG);

/* PSE should drop packets to port 8/9 on WDMA Rx ring full */
mtk_w32(eth, 0x00000300, PSE_PPE0_DROP);
mtk_w32(eth, 0x00000300, PSE_PPE_DROP(0));

/* PSE Free Queue Flow Control */
mtk_w32(eth, 0x01fa01f4, PSE_FQFC_CFG2);

@@ -149,7 +149,15 @@
#define PSE_FQFC_CFG1 0x100
#define PSE_FQFC_CFG2 0x104
#define PSE_DROP_CFG 0x108
#define PSE_PPE0_DROP 0x110
#define PSE_PPE_DROP(x) (0x110 + ((x) * 0x4))

/* PSE Last FreeQ Page Request Control */
#define PSE_DUMY_REQ 0x10C
/* PSE_DUMY_REQ is not a typo but actually called like that also in
* MediaTek's datasheet
*/
#define PSE_DUMMY_WORK_GDM(x) BIT(16 + (x))
#define DUMMY_PAGE_THR 0x1

/* PSE Input Queue Reservation Register*/
#define PSE_IQ_REV(x) (0x140 + (((x) - 1) << 2))
@@ -31,47 +31,6 @@ static int lan88xx_write_page(struct phy_device *phydev, int page)
return __phy_write(phydev, LAN88XX_EXT_PAGE_ACCESS, page);
}

static int lan88xx_phy_config_intr(struct phy_device *phydev)
{
int rc;

if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
/* unmask all source and clear them before enable */
rc = phy_write(phydev, LAN88XX_INT_MASK, 0x7FFF);
rc = phy_read(phydev, LAN88XX_INT_STS);
rc = phy_write(phydev, LAN88XX_INT_MASK,
LAN88XX_INT_MASK_MDINTPIN_EN_ |
LAN88XX_INT_MASK_LINK_CHANGE_);
} else {
rc = phy_write(phydev, LAN88XX_INT_MASK, 0);
if (rc)
return rc;

/* Ack interrupts after they have been disabled */
rc = phy_read(phydev, LAN88XX_INT_STS);
}

return rc < 0 ? rc : 0;
}

static irqreturn_t lan88xx_handle_interrupt(struct phy_device *phydev)
{
int irq_status;

irq_status = phy_read(phydev, LAN88XX_INT_STS);
if (irq_status < 0) {
phy_error(phydev);
return IRQ_NONE;
}

if (!(irq_status & LAN88XX_INT_STS_LINK_CHANGE_))
return IRQ_NONE;

phy_trigger_machine(phydev);

return IRQ_HANDLED;
}

static int lan88xx_suspend(struct phy_device *phydev)
{
struct lan88xx_priv *priv = phydev->priv;

@@ -392,8 +351,9 @@ static struct phy_driver microchip_phy_driver[] = {
.config_aneg = lan88xx_config_aneg,
.link_change_notify = lan88xx_link_change_notify,

.config_intr = lan88xx_phy_config_intr,
.handle_interrupt = lan88xx_handle_interrupt,
/* Interrupt handling is broken, do not define related
* functions to force polling.
*/

.suspend = lan88xx_suspend,
.resume = genphy_resume,
@@ -91,9 +91,8 @@ int phy_led_triggers_register(struct phy_device *phy)
if (!phy->phy_num_led_triggers)
return 0;

phy->led_link_trigger = devm_kzalloc(&phy->mdio.dev,
sizeof(*phy->led_link_trigger),
GFP_KERNEL);
phy->led_link_trigger = kzalloc(sizeof(*phy->led_link_trigger),
GFP_KERNEL);
if (!phy->led_link_trigger) {
err = -ENOMEM;
goto out_clear;

@@ -103,10 +102,9 @@ int phy_led_triggers_register(struct phy_device *phy)
if (err)
goto out_free_link;

phy->phy_led_triggers = devm_kcalloc(&phy->mdio.dev,
phy->phy_num_led_triggers,
sizeof(struct phy_led_trigger),
GFP_KERNEL);
phy->phy_led_triggers = kcalloc(phy->phy_num_led_triggers,
sizeof(struct phy_led_trigger),
GFP_KERNEL);
if (!phy->phy_led_triggers) {
err = -ENOMEM;
goto out_unreg_link;

@@ -127,11 +125,11 @@ int phy_led_triggers_register(struct phy_device *phy)
out_unreg:
while (i--)
phy_led_trigger_unregister(&phy->phy_led_triggers[i]);
devm_kfree(&phy->mdio.dev, phy->phy_led_triggers);
kfree(phy->phy_led_triggers);
out_unreg_link:
phy_led_trigger_unregister(phy->led_link_trigger);
out_free_link:
devm_kfree(&phy->mdio.dev, phy->led_link_trigger);
kfree(phy->led_link_trigger);
phy->led_link_trigger = NULL;
out_clear:
phy->phy_num_led_triggers = 0;

@@ -145,8 +143,13 @@ void phy_led_triggers_unregister(struct phy_device *phy)

for (i = 0; i < phy->phy_num_led_triggers; i++)
phy_led_trigger_unregister(&phy->phy_led_triggers[i]);
kfree(phy->phy_led_triggers);
phy->phy_led_triggers = NULL;

if (phy->led_link_trigger)
if (phy->led_link_trigger) {
phy_led_trigger_unregister(phy->led_link_trigger);
kfree(phy->led_link_trigger);
phy->led_link_trigger = NULL;
}
}
EXPORT_SYMBOL_GPL(phy_led_triggers_unregister);
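The phy_led_triggers hunks above move from devm_* allocations to plain kzalloc()/kcalloc(), which means the unregister path must now free the memory itself and NULL the pointers so a repeated teardown stays harmless. A small userspace model of that pairing — all names are illustrative, with calloc()/free() standing in for the kernel allocators:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical miniature of the register/unregister pairing. */
struct fake_phy {
	int *led_link_trigger;
	int *phy_led_triggers;
};

static int fake_triggers_register(struct fake_phy *phy)
{
	phy->led_link_trigger = calloc(1, sizeof(int));
	if (!phy->led_link_trigger)
		return -1;
	phy->phy_led_triggers = calloc(4, sizeof(int));
	if (!phy->phy_led_triggers) {
		/* unwind the earlier allocation on failure */
		free(phy->led_link_trigger);
		phy->led_link_trigger = NULL;
		return -1;
	}
	return 0;
}

static void fake_triggers_unregister(struct fake_phy *phy)
{
	/* free and NULL so a second unregister is a no-op */
	free(phy->phy_led_triggers);
	phy->phy_led_triggers = NULL;
	if (phy->led_link_trigger) {
		free(phy->led_link_trigger);
		phy->led_link_trigger = NULL;
	}
}
```

NULLing after free is what lets the kernel patch add the `if (phy->led_link_trigger)` guard safely: a re-entered cleanup path sees NULL and skips the free.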
@@ -397,7 +397,7 @@ vmxnet3_process_xdp(struct vmxnet3_adapter *adapter,

xdp_init_buff(&xdp, PAGE_SIZE, &rq->xdp_rxq);
xdp_prepare_buff(&xdp, page_address(page), rq->page_pool->p.offset,
rbi->len, false);
rcd->len, false);
xdp_buff_clear_frags_flag(&xdp);

xdp_prog = rcu_dereference(rq->adapter->xdp_bpf_prog);
@@ -985,20 +985,27 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
act = bpf_prog_run_xdp(prog, xdp);
switch (act) {
case XDP_TX:
get_page(pdata);
xdpf = xdp_convert_buff_to_frame(xdp);
err = xennet_xdp_xmit(queue->info->netdev, 1, &xdpf, 0);
if (unlikely(!err))
xdp_return_frame_rx_napi(xdpf);
else if (unlikely(err < 0))
if (unlikely(!xdpf)) {
trace_xdp_exception(queue->info->netdev, prog, act);
break;
}
get_page(pdata);
err = xennet_xdp_xmit(queue->info->netdev, 1, &xdpf, 0);
if (unlikely(err <= 0)) {
if (err < 0)
trace_xdp_exception(queue->info->netdev, prog, act);
xdp_return_frame_rx_napi(xdpf);
}
break;
case XDP_REDIRECT:
get_page(pdata);
err = xdp_do_redirect(queue->info->netdev, xdp, prog);
*need_xdp_flush = true;
if (unlikely(err))
if (unlikely(err)) {
trace_xdp_exception(queue->info->netdev, prog, act);
xdp_return_buff(xdp);
}
break;
case XDP_PASS:
case XDP_DROP:
@@ -1318,6 +1318,7 @@ static const struct pci_device_id amd_ntb_pci_tbl[] = {
{ PCI_VDEVICE(AMD, 0x148b), (kernel_ulong_t)&dev_data[1] },
{ PCI_VDEVICE(AMD, 0x14c0), (kernel_ulong_t)&dev_data[1] },
{ PCI_VDEVICE(AMD, 0x14c3), (kernel_ulong_t)&dev_data[1] },
{ PCI_VDEVICE(AMD, 0x155a), (kernel_ulong_t)&dev_data[1] },
{ PCI_VDEVICE(HYGON, 0x145b), (kernel_ulong_t)&dev_data[0] },
{ 0, }
};

@@ -1041,7 +1041,7 @@ static inline char *idt_get_mw_name(enum idt_mw_type mw_type)
static struct idt_mw_cfg *idt_scan_mws(struct idt_ntb_dev *ndev, int port,
unsigned char *mw_cnt)
{
struct idt_mw_cfg mws[IDT_MAX_NR_MWS], *ret_mws;
struct idt_mw_cfg *mws;
const struct idt_ntb_bar *bars;
enum idt_mw_type mw_type;
unsigned char widx, bidx, en_cnt;

@@ -1049,6 +1049,11 @@ static struct idt_mw_cfg *idt_scan_mws(struct idt_ntb_dev *ndev, int port,
int aprt_size;
u32 data;

mws = devm_kcalloc(&ndev->ntb.pdev->dev, IDT_MAX_NR_MWS,
sizeof(*mws), GFP_KERNEL);
if (!mws)
return ERR_PTR(-ENOMEM);

/* Retrieve the array of the BARs registers */
bars = portdata_tbl[port].bars;

@@ -1103,16 +1108,7 @@ static struct idt_mw_cfg *idt_scan_mws(struct idt_ntb_dev *ndev, int port,
}
}

/* Allocate memory for memory window descriptors */
ret_mws = devm_kcalloc(&ndev->ntb.pdev->dev, *mw_cnt, sizeof(*ret_mws),
GFP_KERNEL);
if (!ret_mws)
return ERR_PTR(-ENOMEM);

/* Copy the info of detected memory windows */
memcpy(ret_mws, mws, (*mw_cnt)*sizeof(*ret_mws));

return ret_mws;
return mws;
}

/*
@@ -3972,6 +3972,15 @@ static void nvme_scan_work(struct work_struct *work)
nvme_scan_ns_sequential(ctrl);
}
mutex_unlock(&ctrl->scan_lock);

/* Requeue if we have missed AENs */
if (test_bit(NVME_AER_NOTICE_NS_CHANGED, &ctrl->events))
nvme_queue_scan(ctrl);
#ifdef CONFIG_NVME_MULTIPATH
else if (ctrl->ana_log_buf)
/* Re-read the ANA log page to not miss updates */
queue_work(nvme_wq, &ctrl->ana_work);
#endif
}

/*

@@ -426,7 +426,7 @@ static bool nvme_available_path(struct nvme_ns_head *head)
struct nvme_ns *ns;

if (!test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags))
return NULL;
return false;

list_for_each_entry_srcu(ns, &head->list, siblings,
srcu_read_lock_held(&head->srcu)) {
@@ -1030,33 +1030,24 @@ nvmet_fc_alloc_hostport(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
struct nvmet_fc_hostport *newhost, *match = NULL;
unsigned long flags;

/*
* Caller holds a reference on tgtport.
*/

/* if LLDD not implemented, leave as NULL */
if (!hosthandle)
return NULL;

/*
* take reference for what will be the newly allocated hostport if
* we end up using a new allocation
*/
if (!nvmet_fc_tgtport_get(tgtport))
return ERR_PTR(-EINVAL);

spin_lock_irqsave(&tgtport->lock, flags);
match = nvmet_fc_match_hostport(tgtport, hosthandle);
spin_unlock_irqrestore(&tgtport->lock, flags);

if (match) {
/* no new allocation - release reference */
nvmet_fc_tgtport_put(tgtport);
if (match)
return match;
}

newhost = kzalloc(sizeof(*newhost), GFP_KERNEL);
if (!newhost) {
/* no new allocation - release reference */
nvmet_fc_tgtport_put(tgtport);
if (!newhost)
return ERR_PTR(-ENOMEM);
}

spin_lock_irqsave(&tgtport->lock, flags);
match = nvmet_fc_match_hostport(tgtport, hosthandle);

@@ -1065,6 +1056,7 @@ nvmet_fc_alloc_hostport(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
kfree(newhost);
newhost = match;
} else {
nvmet_fc_tgtport_get(tgtport);
newhost->tgtport = tgtport;
newhost->hosthandle = hosthandle;
INIT_LIST_HEAD(&newhost->host_list);

@@ -1099,7 +1091,8 @@ static void
nvmet_fc_schedule_delete_assoc(struct nvmet_fc_tgt_assoc *assoc)
{
nvmet_fc_tgtport_get(assoc->tgtport);
queue_work(nvmet_wq, &assoc->del_work);
if (!queue_work(nvmet_wq, &assoc->del_work))
nvmet_fc_tgtport_put(assoc->tgtport);
}

static struct nvmet_fc_tgt_assoc *
@@ -243,6 +243,9 @@ static int rza2_gpio_register(struct rza2_pinctrl_priv *priv)
int ret;

chip.label = devm_kasprintf(priv->dev, GFP_KERNEL, "%pOFn", np);
if (!chip.label)
return -ENOMEM;

chip.parent = priv->dev;
chip.ngpio = priv->npins;

@@ -267,8 +267,8 @@ static const unsigned int rk817_buck1_4_ramp_table[] = {

static int rk806_set_mode_dcdc(struct regulator_dev *rdev, unsigned int mode)
{
int rid = rdev_get_id(rdev);
int ctr_bit, reg;
unsigned int rid = rdev_get_id(rdev);
unsigned int ctr_bit, reg;

reg = RK806_POWER_FPWM_EN0 + rid / 8;
ctr_bit = rid % 8;
@@ -35,6 +35,7 @@
#define PCF85063_REG_CTRL1_CAP_SEL BIT(0)
#define PCF85063_REG_CTRL1_STOP BIT(5)
#define PCF85063_REG_CTRL1_EXT_TEST BIT(7)
#define PCF85063_REG_CTRL1_SWR 0x58

#define PCF85063_REG_CTRL2 0x01
#define PCF85063_CTRL2_AF BIT(6)

@@ -589,7 +590,7 @@ static int pcf85063_probe(struct i2c_client *client)

i2c_set_clientdata(client, pcf85063);

err = regmap_read(pcf85063->regmap, PCF85063_REG_CTRL1, &tmp);
err = regmap_read(pcf85063->regmap, PCF85063_REG_SC, &tmp);
if (err) {
dev_err(&client->dev, "RTC chip is not present\n");
return err;

@@ -599,6 +600,22 @@ static int pcf85063_probe(struct i2c_client *client)
if (IS_ERR(pcf85063->rtc))
return PTR_ERR(pcf85063->rtc);

/*
* If a Power loss is detected, SW reset the device.
* From PCF85063A datasheet:
* There is a low probability that some devices will have corruption
* of the registers after the automatic power-on reset...
*/
if (tmp & PCF85063_REG_SC_OS) {
dev_warn(&client->dev,
"POR issue detected, sending a SW reset\n");
err = regmap_write(pcf85063->regmap, PCF85063_REG_CTRL1,
PCF85063_REG_CTRL1_SWR);
if (err < 0)
dev_warn(&client->dev,
"SW reset failed, trying to continue\n");
}

err = pcf85063_load_capacitance(pcf85063, client->dev.of_node,
config->force_cap_7000 ? 7000 : 0);
if (err < 0)
@@ -263,6 +263,19 @@ static struct console sclp_console =
.index = 0 /* ttyS0 */
};

/*
* Release allocated pages.
*/
static void __init __sclp_console_free_pages(void)
{
struct list_head *page, *p;

list_for_each_safe(page, p, &sclp_con_pages) {
list_del(page);
free_page((unsigned long)page);
}
}

/*
* called by console_init() in drivers/char/tty_io.c at boot-time.
*/

@@ -282,6 +295,10 @@ sclp_console_init(void)
/* Allocate pages for output buffering */
for (i = 0; i < sclp_console_pages; i++) {
page = (void *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
if (!page) {
__sclp_console_free_pages();
return -ENOMEM;
}
list_add_tail(page, &sclp_con_pages);
}
sclp_conbuf = NULL;

@@ -490,6 +490,17 @@ static const struct tty_operations sclp_ops = {
.flush_buffer = sclp_tty_flush_buffer,
};

/* Release allocated pages. */
static void __init __sclp_tty_free_pages(void)
{
struct list_head *page, *p;

list_for_each_safe(page, p, &sclp_tty_pages) {
list_del(page);
free_page((unsigned long)page);
}
}

static int __init
sclp_tty_init(void)
{

@@ -516,6 +527,7 @@ sclp_tty_init(void)
for (i = 0; i < MAX_KMEM_PAGES; i++) {
page = (void *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
if (page == NULL) {
__sclp_tty_free_pages();
tty_driver_kref_put(driver);
return -ENOMEM;
}
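Both sclp hunks above add the same rollback pattern: when one page allocation in the init loop fails, every page allocated so far must be released before returning -ENOMEM. A userspace sketch of the pattern — helper names are illustrative, with a plain pointer array and calloc() standing in for the kernel list and get_zeroed_page():

```c
#include <assert.h>
#include <stdlib.h>

#define NPAGES 4

/* Release the first n entries and NULL them out. */
static void free_pages_array(void **pages, int n)
{
	for (int i = 0; i < n; i++) {
		free(pages[i]);
		pages[i] = NULL;
	}
}

/* Allocate NPAGES buffers; fail_at simulates an allocation failure
 * part-way through (fail_at < 0 means "never fail"). On failure, roll
 * back everything allocated so far and return -1 (for -ENOMEM). */
static int alloc_pages_array(void **pages, int fail_at)
{
	for (int i = 0; i < NPAGES; i++) {
		pages[i] = (i == fail_at) ? NULL : calloc(1, 64);
		if (!pages[i]) {
			free_pages_array(pages, i); /* roll back */
			return -1;
		}
	}
	return 0;
}
```

The key detail, mirrored from the patch, is that the rollback helper only walks what was actually allocated, so the error path can be called from any iteration of the loop.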
@@ -911,8 +911,28 @@ static void hisi_sas_phyup_work_common(struct work_struct *work,
container_of(work, typeof(*phy), works[event]);
struct hisi_hba *hisi_hba = phy->hisi_hba;
struct asd_sas_phy *sas_phy = &phy->sas_phy;
struct asd_sas_port *sas_port = sas_phy->port;
struct hisi_sas_port *port = phy->port;
struct device *dev = hisi_hba->dev;
struct domain_device *port_dev;
int phy_no = sas_phy->id;

if (!test_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags) &&
sas_port && port && (port->id != phy->port_id)) {
dev_info(dev, "phy%d's hw port id changed from %d to %llu\n",
phy_no, port->id, phy->port_id);
port_dev = sas_port->port_dev;
if (port_dev && !dev_is_expander(port_dev->dev_type)) {
/*
* Set the device state to gone to block
* sending IO to the device.
*/
set_bit(SAS_DEV_GONE, &port_dev->state);
hisi_sas_notify_phy_event(phy, HISI_PHYE_LINK_RESET);
return;
}
}

phy->wait_phyup_cnt = 0;
if (phy->identify.target_port_protocols == SAS_PROTOCOL_SSP)
hisi_hba->hw->sl_notify_ssp(hisi_hba, phy_no);

@@ -719,6 +719,7 @@ static void pm8001_dev_gone_notify(struct domain_device *dev)
spin_lock_irqsave(&pm8001_ha->lock, flags);
}
PM8001_CHIP_DISP->dereg_dev_req(pm8001_ha, device_id);
pm8001_ha->phy[pm8001_dev->attached_phy].phy_attached = 0;
pm8001_free_dev(pm8001_dev);
} else {
pm8001_dbg(pm8001_ha, DISC, "Found dev has gone.\n");
@@ -693,26 +693,23 @@ void scsi_cdl_check(struct scsi_device *sdev)
*/
int scsi_cdl_enable(struct scsi_device *sdev, bool enable)
{
struct scsi_mode_data data;
struct scsi_sense_hdr sshdr;
struct scsi_vpd *vpd;
bool is_ata = false;
char buf[64];
bool is_ata;
int ret;

if (!sdev->cdl_supported)
return -EOPNOTSUPP;

rcu_read_lock();
vpd = rcu_dereference(sdev->vpd_pg89);
if (vpd)
is_ata = true;
is_ata = rcu_dereference(sdev->vpd_pg89);
rcu_read_unlock();

/*
* For ATA devices, CDL needs to be enabled with a SET FEATURES command.
*/
if (is_ata) {
struct scsi_mode_data data;
struct scsi_sense_hdr sshdr;
char *buf_data;
int len;

@@ -721,16 +718,30 @@ int scsi_cdl_enable(struct scsi_device *sdev, bool enable)
if (ret)
return -EINVAL;

/* Enable CDL using the ATA feature page */
/* Enable or disable CDL using the ATA feature page */
len = min_t(size_t, sizeof(buf),
data.length - data.header_length -
data.block_descriptor_length);
buf_data = buf + data.header_length +
data.block_descriptor_length;
if (enable)
buf_data[4] = 0x02;
else
buf_data[4] = 0;

/*
* If we want to enable CDL and CDL is already enabled on the
* device, do nothing. This avoids needlessly resetting the CDL
* statistics on the device as that is implied by the CDL enable
* action. Similar to this, there is no need to do anything if
* we want to disable CDL and CDL is already disabled.
*/
if (enable) {
if ((buf_data[4] & 0x03) == 0x02)
goto out;
buf_data[4] &= ~0x03;
buf_data[4] |= 0x02;
} else {
if ((buf_data[4] & 0x03) == 0x00)
goto out;
buf_data[4] &= ~0x03;
}

ret = scsi_mode_select(sdev, 1, 0, buf_data, len, 5 * HZ, 3,
&data, &sshdr);

@@ -742,6 +753,7 @@ int scsi_cdl_enable(struct scsi_device *sdev, bool enable)
}
}

out:
sdev->cdl_enable = enable;

return 0;
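The scsi_cdl_enable() hunks above turn the enable/disable write into an idempotent update of a two-bit field in the ATA feature mode page: the MODE SELECT is skipped when the field already holds the desired value (0x02 = enabled, 0x00 = disabled), and otherwise only the low two bits are rewritten. A standalone sketch of that bit logic — cdl_field_update() is an illustrative name; it returns true when a write would actually be needed:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Update the two-bit CDL enable field in *byte. Returns false when the
 * field already holds the desired value (so the caller can skip the
 * MODE SELECT and preserve the device's CDL statistics). */
static bool cdl_field_update(uint8_t *byte, bool enable)
{
	if (enable) {
		if ((*byte & 0x03) == 0x02)
			return false;	/* already enabled */
		*byte &= ~0x03;
		*byte |= 0x02;
	} else {
		if ((*byte & 0x03) == 0x00)
			return false;	/* already disabled */
		*byte &= ~0x03;
	}
	return true;
}
```

Masking with `~0x03` before OR-ing keeps any unrelated high bits of the mode-page byte intact, which is the same reason the kernel patch stopped assigning the whole byte.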
@@ -1154,8 +1154,12 @@ EXPORT_SYMBOL_GPL(scsi_alloc_request);
*/
static void scsi_cleanup_rq(struct request *rq)
{
struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);

cmd->flags = 0;

if (rq->rq_flags & RQF_DONTPREP) {
scsi_mq_uninit_cmd(blk_mq_rq_to_pdu(rq));
scsi_mq_uninit_cmd(cmd);
rq->rq_flags &= ~RQF_DONTPREP;
}
}
@@ -1614,10 +1614,13 @@ static int spi_imx_transfer_one(struct spi_controller *controller,
struct spi_device *spi,
struct spi_transfer *transfer)
{
int ret;
struct spi_imx_data *spi_imx = spi_controller_get_devdata(spi->controller);
unsigned long hz_per_byte, byte_limit;

spi_imx_setupxfer(spi, transfer);
ret = spi_imx_setupxfer(spi, transfer);
if (ret < 0)
return ret;
transfer->effective_speed_hz = spi_imx->spi_bus_clk;

/* flush rxfifo before transfer */

@@ -1117,9 +1117,9 @@ static int tegra_qspi_combined_seq_xfer(struct tegra_qspi *tqspi,
(&tqspi->xfer_completion,
QSPI_DMA_TIMEOUT);

if (WARN_ON(ret == 0)) {
dev_err(tqspi->dev, "QSPI Transfer failed with timeout: %d\n",
ret);
if (WARN_ON_ONCE(ret == 0)) {
dev_err_ratelimited(tqspi->dev,
"QSPI Transfer failed with timeout\n");
if (tqspi->is_curr_dma_xfer &&
(tqspi->cur_direction & DATA_DIR_TX))
dmaengine_terminate_all
@@ -1370,11 +1370,15 @@ static void tb_scan_port(struct tb_port *port)
goto out_rpm_put;
}

tb_retimer_scan(port, true);

sw = tb_switch_alloc(port->sw->tb, &port->sw->dev,
tb_downstream_route(port));
if (IS_ERR(sw)) {
/*
* Make the downstream retimers available even if there
* is no router connected.
*/
tb_retimer_scan(port, true);

/*
* If there is an error accessing the connected switch
* it may be connected to another domain. Also we allow

@@ -1424,6 +1428,14 @@ static void tb_scan_port(struct tb_port *port)
upstream_port = tb_upstream_port(sw);
tb_configure_link(port, upstream_port, sw);

/*
* Scan for downstream retimers. We only scan them after the
* router has been enumerated to avoid issues with certain
* Pluggable devices that expect the host to enumerate them
* within certain timeout.
*/
tb_retimer_scan(port, true);

/*
* CL0s and CL1 are enabled and supported together.
* Silently ignore CLx enabling in case CLx is not supported.
@@ -1741,6 +1741,12 @@ msm_serial_early_console_setup_dm(struct earlycon_device *device,
 	if (!device->port.membase)
 		return -ENODEV;
 
+	/* Disable DM / single-character modes */
+	msm_write(&device->port, 0, UARTDM_DMEN);
+	msm_write(&device->port, MSM_UART_CR_CMD_RESET_RX, MSM_UART_CR);
+	msm_write(&device->port, MSM_UART_CR_CMD_RESET_TX, MSM_UART_CR);
+	msm_write(&device->port, MSM_UART_CR_TX_ENABLE, MSM_UART_CR);
+
 	device->con->write = msm_serial_early_write_dm;
 	return 0;
 }
@@ -562,8 +562,11 @@ static void sifive_serial_break_ctl(struct uart_port *port, int break_state)
 static int sifive_serial_startup(struct uart_port *port)
 {
 	struct sifive_serial_port *ssp = port_to_sifive_serial_port(port);
+	unsigned long flags;
 
+	uart_port_lock_irqsave(&ssp->port, &flags);
 	__ssp_enable_rxwm(ssp);
+	uart_port_unlock_irqrestore(&ssp->port, flags);
 
 	return 0;
 }
@@ -571,9 +574,12 @@ static int sifive_serial_startup(struct uart_port *port)
 static void sifive_serial_shutdown(struct uart_port *port)
 {
 	struct sifive_serial_port *ssp = port_to_sifive_serial_port(port);
+	unsigned long flags;
 
+	uart_port_lock_irqsave(&ssp->port, &flags);
 	__ssp_disable_rxwm(ssp);
 	__ssp_disable_txwm(ssp);
+	uart_port_unlock_irqrestore(&ssp->port, flags);
 }
 
 /**
@@ -632,13 +632,6 @@ int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
 	unsigned long flags;
 	int err;
 
-	if (!ufshcd_cmd_inflight(lrbp->cmd)) {
-		dev_err(hba->dev,
-			"%s: skip abort. cmd at tag %d already completed.\n",
-			__func__, tag);
-		return FAILED;
-	}
-
 	/* Skip task abort in case previous aborts failed and report failure */
 	if (lrbp->req_abort_skip) {
 		dev_err(hba->dev, "%s: skip abort. tag %d failed earlier\n",
@@ -647,6 +640,11 @@ int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
 	}
 
 	hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd));
+	if (!hwq) {
+		dev_err(hba->dev, "%s: skip abort. cmd at tag %d already completed.\n",
+			__func__, tag);
+		return FAILED;
+	}
 
 	if (ufshcd_mcq_sqe_search(hba, hwq, tag)) {
 		/*
@@ -990,9 +990,14 @@ static int exynos_ufs_pre_link(struct ufs_hba *hba)
 	exynos_ufs_config_intr(ufs, DFES_DEF_L4_ERRS, UNIPRO_L4);
 	exynos_ufs_set_unipro_pclk_div(ufs);
 
+	exynos_ufs_setup_clocks(hba, true, PRE_CHANGE);
+
 	/* unipro */
 	exynos_ufs_config_unipro(ufs);
 
+	if (ufs->drv_data->pre_link)
+		ufs->drv_data->pre_link(ufs);
+
 	/* m-phy */
 	exynos_ufs_phy_init(ufs);
 	if (!(ufs->opts & EXYNOS_UFS_OPT_SKIP_CONFIG_PHY_ATTR)) {
@@ -1000,11 +1005,6 @@ static int exynos_ufs_pre_link(struct ufs_hba *hba)
 		exynos_ufs_config_phy_cap_attr(ufs);
 	}
 
-	exynos_ufs_setup_clocks(hba, true, PRE_CHANGE);
-
-	if (ufs->drv_data->pre_link)
-		ufs->drv_data->pre_link(ufs);
-
 	return 0;
 }
 
@@ -1962,6 +1962,7 @@ static irqreturn_t cdns3_device_thread_irq_handler(int irq, void *data)
 	unsigned int bit;
 	unsigned long reg;
 
+	local_bh_disable();
 	spin_lock_irqsave(&priv_dev->lock, flags);
 
 	reg = readl(&priv_dev->regs->usb_ists);
@@ -2003,6 +2004,7 @@ static irqreturn_t cdns3_device_thread_irq_handler(int irq, void *data)
 irqend:
 	writel(~0, &priv_dev->regs->ep_ien);
 	spin_unlock_irqrestore(&priv_dev->lock, flags);
+	local_bh_enable();
 
 	return ret;
 }
@@ -328,6 +328,13 @@ static int ci_hdrc_imx_notify_event(struct ci_hdrc *ci, unsigned int event)
 	return ret;
 }
 
+static void ci_hdrc_imx_disable_regulator(void *arg)
+{
+	struct ci_hdrc_imx_data *data = arg;
+
+	regulator_disable(data->hsic_pad_regulator);
+}
+
 static int ci_hdrc_imx_probe(struct platform_device *pdev)
 {
 	struct ci_hdrc_imx_data *data;
@@ -386,6 +393,13 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
 					"Failed to enable HSIC pad regulator\n");
 				goto err_put;
 			}
+			ret = devm_add_action_or_reset(dev,
+					ci_hdrc_imx_disable_regulator, data);
+			if (ret) {
+				dev_err(dev,
+					"Failed to add regulator devm action\n");
+				goto err_put;
+			}
 		}
 	}
 
@@ -424,11 +438,11 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
 
 	ret = imx_get_clks(dev);
 	if (ret)
-		goto disable_hsic_regulator;
+		goto qos_remove_request;
 
 	ret = imx_prepare_enable_clks(dev);
 	if (ret)
-		goto disable_hsic_regulator;
+		goto qos_remove_request;
 
 	data->phy = devm_usb_get_phy_by_phandle(dev, "fsl,usbphy", 0);
 	if (IS_ERR(data->phy)) {
@@ -458,7 +472,11 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
 	    of_usb_get_phy_mode(np) == USBPHY_INTERFACE_MODE_ULPI) {
 		pdata.flags |= CI_HDRC_OVERRIDE_PHY_CONTROL;
 		data->override_phy_control = true;
-		usb_phy_init(pdata.usb_phy);
+		ret = usb_phy_init(pdata.usb_phy);
+		if (ret) {
+			dev_err(dev, "Failed to init phy\n");
+			goto err_clk;
+		}
 	}
 
 	if (pdata.flags & CI_HDRC_SUPPORTS_RUNTIME_PM)
@@ -467,7 +485,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
 	ret = imx_usbmisc_init(data->usbmisc_data);
 	if (ret) {
 		dev_err(dev, "usbmisc init failed, ret=%d\n", ret);
-		goto err_clk;
+		goto phy_shutdown;
 	}
 
 	data->ci_pdev = ci_hdrc_add_device(dev,
@@ -476,7 +494,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
 	if (IS_ERR(data->ci_pdev)) {
 		ret = PTR_ERR(data->ci_pdev);
 		dev_err_probe(dev, ret, "ci_hdrc_add_device failed\n");
-		goto err_clk;
+		goto phy_shutdown;
 	}
 
 	if (data->usbmisc_data) {
@@ -510,17 +528,18 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
 
 disable_device:
 	ci_hdrc_remove_device(data->ci_pdev);
+phy_shutdown:
+	if (data->override_phy_control)
+		usb_phy_shutdown(data->phy);
 err_clk:
 	imx_disable_unprepare_clks(dev);
-disable_hsic_regulator:
-	if (data->hsic_pad_regulator)
-		/* don't overwrite original ret (cf. EPROBE_DEFER) */
-		regulator_disable(data->hsic_pad_regulator);
 qos_remove_request:
 	if (pdata.flags & CI_HDRC_PMQOS)
 		cpu_latency_qos_remove_request(&data->pm_qos_req);
 	data->ci_pdev = NULL;
 err_put:
-	put_device(data->usbmisc_data->dev);
+	if (data->usbmisc_data)
+		put_device(data->usbmisc_data->dev);
 	return ret;
 }
 
@@ -541,10 +560,9 @@ static void ci_hdrc_imx_remove(struct platform_device *pdev)
 		imx_disable_unprepare_clks(&pdev->dev);
 		if (data->plat_data->flags & CI_HDRC_PMQOS)
 			cpu_latency_qos_remove_request(&data->pm_qos_req);
-		if (data->hsic_pad_regulator)
-			regulator_disable(data->hsic_pad_regulator);
 	}
-	put_device(data->usbmisc_data->dev);
+	if (data->usbmisc_data)
+		put_device(data->usbmisc_data->dev);
 }
 
 static void ci_hdrc_imx_shutdown(struct platform_device *pdev)
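The ci_hdrc_imx hunks above rework the probe error path so that each failure jumps to a label that unwinds only what was actually acquired, in reverse acquisition order. A runnable userspace sketch of that goto-unwind pattern (all names hypothetical, standing in for the qos/clock/phy steps):

```c
#include <assert.h>
#include <string.h>

/* Records acquire/release order so the unwind sequence can be checked. */
static char trace[64];

static void acquire(const char *tag) { strcat(trace, tag); }
static void release(const char *tag) { strcat(trace, "~"); strcat(trace, tag); }

/*
 * Probe-style function: each failure point jumps to the label that
 * releases everything acquired so far, in reverse order. fail_at
 * selects which step fails (0 = success).
 */
static int fake_probe(int fail_at)
{
	trace[0] = '\0';

	acquire("q");			/* qos request */
	if (fail_at == 1) goto qos_remove_request;

	acquire("c");			/* clocks */
	if (fail_at == 2) goto err_clk;

	acquire("p");			/* phy */
	if (fail_at == 3) goto phy_shutdown;

	return 0;

phy_shutdown:
	release("p");
err_clk:
	release("c");
qos_remove_request:
	release("q");
	return -1;
}
```

Note how the labels are ordered so a jump to an earlier label falls through the later release steps, which is exactly why retargeting a `goto` (as the patch does for the clock failures) changes which resources get released.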
@@ -726,7 +726,7 @@ static int wdm_open(struct inode *inode, struct file *file)
 		rv = -EBUSY;
 		goto out;
 	}
-
+	smp_rmb(); /* ordered against wdm_wwan_port_stop() */
 	rv = usb_autopm_get_interface(desc->intf);
 	if (rv < 0) {
 		dev_err(&desc->intf->dev, "Error autopm - %d\n", rv);
@@ -829,6 +829,7 @@ static struct usb_class_driver wdm_class = {
 static int wdm_wwan_port_start(struct wwan_port *port)
 {
 	struct wdm_device *desc = wwan_port_get_drvdata(port);
+	int rv;
 
 	/* The interface is both exposed via the WWAN framework and as a
 	 * legacy usbmisc chardev. If chardev is already open, just fail
@@ -848,7 +849,15 @@ static int wdm_wwan_port_start(struct wwan_port *port)
 	wwan_port_txon(port);
 
 	/* Start getting events */
-	return usb_submit_urb(desc->validity, GFP_KERNEL);
+	rv = usb_submit_urb(desc->validity, GFP_KERNEL);
+	if (rv < 0) {
+		wwan_port_txoff(port);
+		desc->manage_power(desc->intf, 0);
+		/* this must be last lest we race with chardev open */
+		clear_bit(WDM_WWAN_IN_USE, &desc->flags);
+	}
+
+	return rv;
 }
 
 static void wdm_wwan_port_stop(struct wwan_port *port)
@@ -859,8 +868,10 @@ static void wdm_wwan_port_stop(struct wwan_port *port)
 	poison_urbs(desc);
 	desc->manage_power(desc->intf, 0);
 	clear_bit(WDM_READ, &desc->flags);
-	clear_bit(WDM_WWAN_IN_USE, &desc->flags);
 	unpoison_urbs(desc);
+	smp_wmb(); /* ordered against wdm_open() */
+	/* this must be last lest we open a poisoned device */
+	clear_bit(WDM_WWAN_IN_USE, &desc->flags);
 }
 
 static void wdm_wwan_port_tx_complete(struct urb *urb)
@@ -868,7 +879,7 @@ static void wdm_wwan_port_tx_complete(struct urb *urb)
 	struct sk_buff *skb = urb->context;
 	struct wdm_device *desc = skb_shinfo(skb)->destructor_arg;
 
-	usb_autopm_put_interface(desc->intf);
+	usb_autopm_put_interface_async(desc->intf);
 	wwan_port_txon(desc->wwanp);
 	kfree_skb(skb);
 }
@@ -898,7 +909,7 @@ static int wdm_wwan_port_tx(struct wwan_port *port, struct sk_buff *skb)
 	req->bRequestType = (USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE);
 	req->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND;
 	req->wValue = 0;
-	req->wIndex = desc->inum;
+	req->wIndex = desc->inum; /* already converted */
 	req->wLength = cpu_to_le16(skb->len);
 
 	skb_shinfo(skb)->destructor_arg = desc;
@@ -369,6 +369,9 @@ static const struct usb_device_id usb_quirk_list[] = {
 	{ USB_DEVICE(0x0781, 0x5583), .driver_info = USB_QUIRK_NO_LPM },
 	{ USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM },
 
+	/* SanDisk Corp. SanDisk 3.2Gen1 */
+	{ USB_DEVICE(0x0781, 0x55a3), .driver_info = USB_QUIRK_DELAY_INIT },
+
 	/* Realforce 87U Keyboard */
 	{ USB_DEVICE(0x0853, 0x011b), .driver_info = USB_QUIRK_NO_LPM },
 
@@ -383,6 +386,9 @@ static const struct usb_device_id usb_quirk_list[] = {
 	{ USB_DEVICE(0x0904, 0x6103), .driver_info =
 			USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL },
 
+	/* Silicon Motion Flash Drive */
+	{ USB_DEVICE(0x090c, 0x1000), .driver_info = USB_QUIRK_DELAY_INIT },
+
 	/* Sound Devices USBPre2 */
 	{ USB_DEVICE(0x0926, 0x0202), .driver_info =
 			USB_QUIRK_ENDPOINT_IGNORE },
@@ -536,6 +542,9 @@ static const struct usb_device_id usb_quirk_list[] = {
 	{ USB_DEVICE(0x2040, 0x7200), .driver_info =
 			USB_QUIRK_CONFIG_INTF_STRINGS },
 
+	/* VLI disk */
+	{ USB_DEVICE(0x2109, 0x0711), .driver_info = USB_QUIRK_NO_LPM },
+
 	/* Raydium Touchscreen */
 	{ USB_DEVICE(0x2386, 0x3114), .driver_info = USB_QUIRK_NO_LPM },
 
@@ -148,11 +148,21 @@ static const struct property_entry dwc3_pci_intel_byt_properties[] = {
 	{}
 };
 
+/*
+ * Intel Merrifield SoC uses these endpoints for tracing and they cannot
+ * be re-allocated if being used because the side band flow control signals
+ * are hard wired to certain endpoints:
+ * - 1 High BW Bulk IN (IN#1) (RTIT)
+ * - 1 1KB BW Bulk IN (IN#8) + 1 1KB BW Bulk OUT (Run Control) (OUT#8)
+ */
+static const u8 dwc3_pci_mrfld_reserved_endpoints[] = { 3, 16, 17 };
+
 static const struct property_entry dwc3_pci_mrfld_properties[] = {
 	PROPERTY_ENTRY_STRING("dr_mode", "otg"),
 	PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"),
 	PROPERTY_ENTRY_BOOL("snps,dis_u3_susphy_quirk"),
 	PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"),
+	PROPERTY_ENTRY_U8_ARRAY("snps,reserved-endpoints", dwc3_pci_mrfld_reserved_endpoints),
 	PROPERTY_ENTRY_BOOL("snps,usb2-gadget-lpm-disable"),
 	PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"),
 	{}
@@ -207,15 +207,13 @@ static int dwc3_xlnx_init_zynqmp(struct dwc3_xlnx *priv_data)
 
 skip_usb3_phy:
 	/* ulpi reset via gpio-modepin or gpio-framework driver */
-	reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
+	reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
 	if (IS_ERR(reset_gpio)) {
 		return dev_err_probe(dev, PTR_ERR(reset_gpio),
 				     "Failed to request reset GPIO\n");
 	}
 
 	if (reset_gpio) {
 		/* Toggle ulpi to reset the phy. */
-		gpiod_set_value_cansleep(reset_gpio, 1);
-		usleep_range(5000, 10000);
 		gpiod_set_value_cansleep(reset_gpio, 0);
 		usleep_range(5000, 10000);
@@ -548,6 +548,7 @@ static int dwc3_gadget_set_xfer_resource(struct dwc3_ep *dep)
 int dwc3_gadget_start_config(struct dwc3 *dwc, unsigned int resource_index)
 {
 	struct dwc3_gadget_ep_cmd_params params;
+	struct dwc3_ep *dep;
 	u32 cmd;
 	int i;
 	int ret;
@@ -564,8 +565,13 @@ int dwc3_gadget_start_config(struct dwc3 *dwc, unsigned int resource_index)
 		return ret;
 
 	/* Reset resource allocation flags */
-	for (i = resource_index; i < dwc->num_eps && dwc->eps[i]; i++)
-		dwc->eps[i]->flags &= ~DWC3_EP_RESOURCE_ALLOCATED;
+	for (i = resource_index; i < dwc->num_eps; i++) {
+		dep = dwc->eps[i];
+		if (!dep)
+			continue;
+
+		dep->flags &= ~DWC3_EP_RESOURCE_ALLOCATED;
+	}
 
 	return 0;
 }
@@ -752,9 +758,11 @@ void dwc3_gadget_clear_tx_fifos(struct dwc3 *dwc)
 
 	dwc->last_fifo_depth = fifo_depth;
 	/* Clear existing TXFIFO for all IN eps except ep0 */
-	for (num = 3; num < min_t(int, dwc->num_eps, DWC3_ENDPOINTS_NUM);
-	     num += 2) {
+	for (num = 3; num < min_t(int, dwc->num_eps, DWC3_ENDPOINTS_NUM); num += 2) {
 		dep = dwc->eps[num];
+		if (!dep)
+			continue;
+
 		/* Don't change TXFRAMNUM on usb31 version */
 		size = DWC3_IP_IS(DWC3) ? 0 :
 			dwc3_readl(dwc->regs, DWC3_GTXFIFOSIZ(num >> 1)) &
@@ -3672,6 +3680,8 @@ out:
 
 	for (i = 0; i < DWC3_ENDPOINTS_NUM; i++) {
 		dep = dwc->eps[i];
+		if (!dep)
+			continue;
 
 		if (!(dep->flags & DWC3_EP_ENABLED))
 			continue;
@@ -3860,6 +3870,10 @@ static void dwc3_endpoint_interrupt(struct dwc3 *dwc,
 	u8 epnum = event->endpoint_number;
 
 	dep = dwc->eps[epnum];
+	if (!dep) {
+		dev_warn(dwc->dev, "spurious event, endpoint %u is not allocated\n", epnum);
+		return;
+	}
 
 	if (!(dep->flags & DWC3_EP_ENABLED)) {
 		if ((epnum > 1) && !(dep->flags & DWC3_EP_TRANSFER_STARTED))
@@ -4572,6 +4586,12 @@ static irqreturn_t dwc3_check_event_buf(struct dwc3_event_buffer *evt)
 	if (!count)
 		return IRQ_NONE;
 
+	if (count > evt->length) {
+		dev_err_ratelimited(dwc->dev, "invalid count(%u) > evt->length(%u)\n",
+				    count, evt->length);
+		return IRQ_NONE;
+	}
+
 	evt->count = count;
 	evt->flags |= DWC3_EVENT_PENDING;
 
@@ -548,6 +548,9 @@ int ast_vhub_init_dev(struct ast_vhub *vhub, unsigned int idx)
 	d->vhub = vhub;
 	d->index = idx;
 	d->name = devm_kasprintf(parent, GFP_KERNEL, "port%d", idx+1);
+	if (!d->name)
+		return -ENOMEM;
+
 	d->regs = vhub->regs + 0x100 + 0x10 * idx;
 
 	ast_vhub_init_ep0(vhub, &d->ep0, d);