Merge 6.6.100 into android15-6.6-lts
GKI (arm64) relevant 38 out of 113 changes, affecting 44 files, +300/-115 (311c434f5d)
USB: serial: ftdi_sio: add support for NDI EMGUIDE GEMINI [2 files, +5/-0] 58bdd51601
usb: gadget: configfs: Fix OOB read on empty string write [1 file, +4/-0] db44a558b3
Input: xpad - set correct controller type for Acer NGR200 [1 file, +1/-1] 82b29ee8ba
spi: Add check for 8-bit transfer with 8 IO mode support [1 file, +10/-4] 469a39a33a
dm-bufio: fix sched in atomic context [1 file, +5/-1] fcda39a9c5
HID: core: ensure the allocated report buffer can contain the reserved report ID [1 file, +4/-1] a1c0b87b76
HID: core: ensure __hid_request reserves the report ID as the first byte [1 file, +9/-2] 0e5017d84d
HID: core: do not bypass hid_hw_raw_request [1 file, +1/-2] 6ba89b382b
tracing/probes: Avoid using params uninitialized in parse_btf_arg() [1 file, +1/-1] 6bc94f20a4
tracing: Add down_write(trace_event_sem) when adding trace event [1 file, +5/-0] d48845afa0
io_uring/poll: fix POLLERR handling [2 files, +8/-6] 67ea5f37b2
af_packet: fix the SO_SNDTIMEO constraint not effective on tpacked_snd() [1 file, +2/-2] 2bae35acbb
af_packet: fix soft lockup issue caused by tpacket_snd() [1 file, +11/-12] 600f55da8d
pmdomain: governor: Consider CPU latency tolerance from pm_domain_cpu_gov [1 file, +16/-2] e7be679124
bpf: Reject %p% format string in bprintf-like helpers [1 file, +8/-3] 21033b49cf
block: fix kobject leak in blk_unregister_queue [1 file, +1/-0] 5d95fbbfaa
nvme: fix inconsistent RCU list manipulation in nvme_ns_add_to_ctrl_list() [1 file, +1/-1] ec158d05ea
net: phy: Don't register LEDs for genphy [1 file, +4/-2] a2f02a87fe
nvme: fix misaccounting of nvme-mpath inflight I/O [1 file, +4/-0] 167006f730
wifi: cfg80211: remove scan request n_channels counted_by [1 file, +1/-1] c4f16f6b07
Bluetooth: Fix null-ptr-deref in l2cap_sock_resume_cb() [1 file, +3/-0] 32e624912e
Bluetooth: hci_sync: fix connectable extended advertising when using static random address [1 file, +2/-2] f3323b18e3
Bluetooth: SMP: If an unallowed command is received consider it a failure [2 files, +19/-1] 4ceefc9c31
Bluetooth: SMP: Fix using HCI_ERROR_REMOTE_USER_TERM on timeout [1 file, +1/-1] dcbc346f50
ipv6: mcast: Delay put pmc->idev in mld_del_delrec() [1 file, +1/-1] 76179961c4
netfilter: nf_conntrack: fix crash due to removal of uninitialised entry [2 files, +33/-8] bd3051a816
Bluetooth: L2CAP: Fix attempting to adjust outgoing MTU [1 file, +21/-5] bb515c4130
net: vlan: fix VLAN 0 refcount imbalance of toggling filtering during runtime [2 files, +34/-9] 7ff2d83ecf
net/sched: Return NULL when htb_lookup_leaf encounters an empty rbtree [1 file, +3/-1] f371ad6471
Revert "cgroup_freezer: cgroup_freezing: Check if not frozen" [1 file, +1/-7] 4cb17b11c8
ipv6: make addrconf_wq single threaded [1 file, +2/-1] dc6a664089
clone_private_mnt(): make sure that caller has CAP_SYS_ADMIN in the right userns [1 file, +5/-0] d5024dc5e6
arm64: Filter out SME hwcaps when FEAT_SME isn't implemented [1 file, +21/-14] 15fea75a78
usb: hub: fix detection of high tier USB3 devices behind suspended hubs [1 file, +32/-1] 71f5c98d29
usb: hub: Fix flushing and scheduling of delayed work that tunes runtime pm [1 file, +4/-2] 668c7b47a5
usb: hub: Fix flushing of delayed work used for post resume purposes [2 files, +9/-13] 824fa25c85
usb: hub: Don't try to recover devices lost during warm reset. [1 file, +6/-2] 6cfbff5f8d
usb: dwc3: qcom: Don't leave BCR asserted [1 file, +2/-6]

Changes in 6.6.100
    phy: tegra: xusb: Fix unbalanced regulator disable in UTMI PHY mode
    phy: tegra: xusb: Decouple CYA_TRK_CODE_UPDATE_ON_IDLE from trk_hw_mode
    phy: tegra: xusb: Disable periodic tracking on Tegra234
    USB: serial: option: add Telit Cinterion FE910C04 (ECM) composition
    USB: serial: option: add Foxconn T99W640
    USB: serial: ftdi_sio: add support for NDI EMGUIDE GEMINI
    usb: musb: fix gadget state on disconnect
    usb: gadget: configfs: Fix OOB read on empty string write
    i2c: stm32: fix the device used for the DMA map
    thunderbolt: Fix wake on connect at runtime
    thunderbolt: Fix bit masking in tb_dp_port_set_hops()
    nvmem: imx-ocotp: fix MAC address byte length
    Input: xpad - set correct controller type for Acer NGR200
    pch_uart: Fix dma_sync_sg_for_device() nents value
    spi: Add check for 8-bit transfer with 8 IO mode support
    dm-bufio: fix sched in atomic context
    HID: core: ensure the allocated report buffer can contain the reserved report ID
    HID: core: ensure __hid_request reserves the report ID as the first byte
    HID: core: do not bypass hid_hw_raw_request
    tracing/probes: Avoid using params uninitialized in parse_btf_arg()
    tracing: Add down_write(trace_event_sem) when adding trace event
    tracing/osnoise: Fix crash in timerlat_dump_stack()
    drm/amdgpu/gfx8: reset compute ring wptr on the GPU on resume
    ALSA: hda/realtek: Add quirk for ASUS ROG Strix G712LWS
    io_uring/poll: fix POLLERR handling
    phonet/pep: Move call to pn_skb_get_dst_sockaddr() earlier in pep_sock_accept()
    net/mlx5: Update the list of the PCI supported devices
    arm64: dts: imx8mp-venice-gw74xx: fix TPM SPI frequency
    arm64: dts: freescale: imx8mm-verdin: Keep LDO5 always on
    arm64: dts: rockchip: use cs-gpios for spi1 on ringneck
    af_packet: fix the SO_SNDTIMEO constraint not effective on tpacked_snd()
    af_packet: fix soft lockup issue caused by tpacket_snd()
    dmaengine: nbpfaxi: Fix memory corruption in probe()
    isofs: Verify inode mode when loading from disk
    memstick: core: Zero initialize id_reg in h_memstick_read_dev_id()
    mmc: bcm2835: Fix dma_unmap_sg() nents value
    mmc: sdhci-pci: Quirk for broken command queuing on Intel GLK-based Positivo models
    mmc: sdhci_am654: Workaround for Errata i2312
    net: libwx: remove duplicate page_pool_put_full_page()
    net: libwx: fix the using of Rx buffer DMA
    net: libwx: properly reset Rx ring descriptor
    pmdomain: governor: Consider CPU latency tolerance from pm_domain_cpu_gov
    s390/bpf: Fix bpf_arch_text_poke() with new_addr == NULL again
    smb: client: fix use-after-free in crypt_message when using async crypto
    soc: aspeed: lpc-snoop: Cleanup resources in stack-order
    soc: aspeed: lpc-snoop: Don't disable channels that aren't enabled
    iio: accel: fxls8962af: Fix use after free in fxls8962af_fifo_flush
    iio: adc: max1363: Fix MAX1363_4X_CHANS/MAX1363_8X_CHANS[]
    iio: adc: max1363: Reorder mode_list[] entries
    iio: adc: stm32-adc: Fix race in installing chained IRQ handler
    comedi: pcl812: Fix bit shift out of bounds
    comedi: aio_iiro_16: Fix bit shift out of bounds
    comedi: das16m1: Fix bit shift out of bounds
    comedi: das6402: Fix bit shift out of bounds
    comedi: Fail COMEDI_INSNLIST ioctl if n_insns is too large
    comedi: Fix some signed shift left operations
    comedi: Fix use of uninitialized data in insn_rw_emulate_bits()
    comedi: Fix initialization of data for instructions that write to subdevice
    soundwire: amd: fix for handling slave alerts after link is down
    soundwire: amd: fix for clearing command status register
    bpf: Reject %p% format string in bprintf-like helpers
    cachefiles: Fix the incorrect return value in __cachefiles_write()
    net: emaclite: Fix missing pointer increment in aligned_read()
    block: fix kobject leak in blk_unregister_queue
    net/sched: sch_qfq: Fix race condition on qfq_aggregate
    rpl: Fix use-after-free in rpl_do_srh_inline().
    smb: client: fix use-after-free in cifs_oplock_break
    nvme: fix inconsistent RCU list manipulation in nvme_ns_add_to_ctrl_list()
    net: phy: Don't register LEDs for genphy
    nvme: fix misaccounting of nvme-mpath inflight I/O
    wifi: cfg80211: remove scan request n_channels counted_by
    selftests: net: increase inter-packet timeout in udpgro.sh
    hwmon: (corsair-cpro) Validate the size of the received input buffer
    ice: add NULL check in eswitch lag check
    usb: net: sierra: check for no status endpoint
    Bluetooth: Fix null-ptr-deref in l2cap_sock_resume_cb()
    Bluetooth: hci_sync: fix connectable extended advertising when using static random address
    Bluetooth: SMP: If an unallowed command is received consider it a failure
    Bluetooth: SMP: Fix using HCI_ERROR_REMOTE_USER_TERM on timeout
    Bluetooth: btusb: QCA: Fix downloading wrong NVM for WCN6855 GF variant without board ID
    net/mlx5: Correctly set gso_size when LRO is used
    ipv6: mcast: Delay put pmc->idev in mld_del_delrec()
    netfilter: nf_conntrack: fix crash due to removal of uninitialised entry
    Bluetooth: L2CAP: Fix attempting to adjust outgoing MTU
    hv_netvsc: Set VF priv_flags to IFF_NO_ADDRCONF before open to prevent IPv6 addrconf
    tls: always refresh the queue when reading sock
    net: vlan: fix VLAN 0 refcount imbalance of toggling filtering during runtime
    net: bridge: Do not offload IGMP/MLD messages
    net/sched: Return NULL when htb_lookup_leaf encounters an empty rbtree
    rxrpc: Fix recv-recv race of completed call
    rxrpc: Fix transmission of an abort in response to an abort
    Revert "cgroup_freezer: cgroup_freezing: Check if not frozen"
    sched: Change nr_uninterruptible type to unsigned long
    ipv6: make addrconf_wq single threaded
    clone_private_mnt(): make sure that caller has CAP_SYS_ADMIN in the right userns
    arm64: Filter out SME hwcaps when FEAT_SME isn't implemented
    usb: hub: fix detection of high tier USB3 devices behind suspended hubs
    usb: hub: Fix flushing and scheduling of delayed work that tunes runtime pm
    usb: hub: Fix flushing of delayed work used for post resume purposes
    usb: hub: Don't try to recover devices lost during warm reset.
    usb: dwc3: qcom: Don't leave BCR asserted
    i2c: omap: Add support for setting mux
    i2c: omap: Fix an error handling path in omap_i2c_probe()
    i2c: omap: Handle omap_i2c_init() errors in omap_i2c_probe()
    regulator: pwm-regulator: Calculate the output voltage for disabled PWMs
    regulator: pwm-regulator: Manage boot-on with disabled PWM channels
    ASoC: fsl_sai: Force a software reset when starting in consumer mode
    Revert "selftests/bpf: adjust dummy_st_ops_success to detect additional error"
    Revert "selftests/bpf: dummy_st_ops should reject 0 for non-nullable params"
    i2c: omap: fix deprecated of_property_read_bool() use
    nvmem: layouts: u-boot-env: remove crc32 endianness conversion
    KVM: x86/xen: Fix cleanup logic in emulation of Xen schedop poll hypercalls
    Linux 6.6.100

Change-Id: I4f60d35bcf527d82d2c82fea3307f42c85ec3a45
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 6
-SUBLEVEL = 99
+SUBLEVEL = 100
 EXTRAVERSION =
 NAME = Pinguïn Aangedreven
 
@@ -470,6 +470,7 @@
 	};
 
 	reg_nvcc_sd: LDO5 {
+		regulator-always-on;
 		regulator-max-microvolt = <3300000>;
 		regulator-min-microvolt = <1800000>;
 		regulator-name = "On-module +V3.3_1.8_SD (LDO5)";
@@ -185,7 +185,7 @@
 		#address-cells = <0x1>;
 		#size-cells = <0x1>;
 		reg = <0x0>;
-		spi-max-frequency = <36000000>;
+		spi-max-frequency = <25000000>;
 	};
 };
 
@@ -344,6 +344,18 @@
 				<0 RK_PA7 RK_FUNC_GPIO &pcfg_pull_up>;
 		};
 	};
+
+	spi1 {
+		spi1_csn0_gpio_pin: spi1-csn0-gpio-pin {
+			rockchip,pins =
+				<3 RK_PB1 RK_FUNC_GPIO &pcfg_pull_up_4ma>;
+		};
+
+		spi1_csn1_gpio_pin: spi1-csn1-gpio-pin {
+			rockchip,pins =
+				<3 RK_PB2 RK_FUNC_GPIO &pcfg_pull_up_4ma>;
+		};
+	};
 };
 
 &saradc {
@@ -355,6 +367,17 @@
 	vqmmc-supply = <&vccio_sd>;
 };
 
+&spi1 {
+	/*
+	 * Hardware CS has a very slow rise time of about 6us,
+	 * causing transmission errors.
+	 * With cs-gpios we have a rise time of about 20ns.
+	 */
+	cs-gpios = <&gpio3 RK_PB1 GPIO_ACTIVE_LOW>, <&gpio3 RK_PB2 GPIO_ACTIVE_LOW>;
+	pinctrl-names = "default";
+	pinctrl-0 = <&spi1_clk &spi1_csn0_gpio_pin &spi1_csn1_gpio_pin &spi1_miso &spi1_mosi>;
+};
+
 &tsadc {
 	status = "okay";
 };
 
@@ -2805,6 +2805,13 @@ static bool has_sve_feature(const struct arm64_cpu_capabilities *cap, int scope)
 }
 #endif
 
+#ifdef CONFIG_ARM64_SME
+static bool has_sme_feature(const struct arm64_cpu_capabilities *cap, int scope)
+{
+	return system_supports_sme() && has_user_cpuid_feature(cap, scope);
+}
+#endif
+
 static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 	HWCAP_CAP(ID_AA64ISAR0_EL1, AES, PMULL, CAP_HWCAP, KERNEL_HWCAP_PMULL),
 	HWCAP_CAP(ID_AA64ISAR0_EL1, AES, AES, CAP_HWCAP, KERNEL_HWCAP_AES),
@@ -2876,20 +2883,20 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 	HWCAP_CAP(ID_AA64ISAR2_EL1, MOPS, IMP, CAP_HWCAP, KERNEL_HWCAP_MOPS),
 	HWCAP_CAP(ID_AA64ISAR2_EL1, BC, IMP, CAP_HWCAP, KERNEL_HWCAP_HBC),
 #ifdef CONFIG_ARM64_SME
-	HWCAP_CAP(ID_AA64PFR1_EL1, SME, IMP, CAP_HWCAP, KERNEL_HWCAP_SME),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, FA64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_FA64),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2p1, CAP_HWCAP, KERNEL_HWCAP_SME2P1),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2, CAP_HWCAP, KERNEL_HWCAP_SME2),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, I16I64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, F64F64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F64F64),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, I16I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I32),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16B16),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, F16F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F16),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, I8I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I8I32),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, F16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F32),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, B16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16F32),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, BI32I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_BI32I32),
-	HWCAP_CAP(ID_AA64SMFR0_EL1, F32F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F32F32),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64PFR1_EL1, SME, IMP, CAP_HWCAP, KERNEL_HWCAP_SME),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, FA64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_FA64),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2p1, CAP_HWCAP, KERNEL_HWCAP_SME2P1),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, SMEver, SME2, CAP_HWCAP, KERNEL_HWCAP_SME2),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I16I64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F64F64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F64F64),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I16I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I32),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16B16),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F16F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F16),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, I8I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I8I32),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F32),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, B16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16F32),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, BI32I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_BI32I32),
+	HWCAP_CAP_MATCH_ID(has_sme_feature, ID_AA64SMFR0_EL1, F32F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F32F32),
 #endif /* CONFIG_ARM64_SME */
 	{},
 };
@@ -539,7 +539,15 @@ static void bpf_jit_plt(struct bpf_plt *plt, void *ret, void *target)
 {
 	memcpy(plt, &bpf_plt, sizeof(*plt));
 	plt->ret = ret;
-	plt->target = target;
+	/*
+	 * (target == NULL) implies that the branch to this PLT entry was
+	 * patched and became a no-op. However, some CPU could have jumped
+	 * to this PLT entry before patching and may be still executing it.
+	 *
+	 * Since the intention in this case is to make the PLT entry a no-op,
+	 * make the target point to the return label instead of NULL.
+	 */
+	plt->target = target ?: ret;
 }
 
 /*
@@ -1260,7 +1260,7 @@ static bool kvm_xen_schedop_poll(struct kvm_vcpu *vcpu, bool longmode,
 	if (kvm_read_guest_virt(vcpu, (gva_t)sched_poll.ports, ports,
 				sched_poll.nr_ports * sizeof(*ports), &e)) {
 		*r = -EFAULT;
-		return true;
+		goto out;
 	}
 
 	for (i = 0; i < sched_poll.nr_ports; i++) {
@@ -909,4 +909,5 @@ void blk_unregister_queue(struct gendisk *disk)
 	mutex_unlock(&q->sysfs_dir_lock);
 
 	blk_debugfs_remove(disk);
+	kobject_put(&disk->queue_kobj);
 }
@@ -8,6 +8,7 @@
 #include <linux/pm_domain.h>
 #include <linux/pm_qos.h>
 #include <linux/hrtimer.h>
+#include <linux/cpu.h>
 #include <linux/cpuidle.h>
 #include <linux/cpumask.h>
 #include <linux/ktime.h>
@@ -345,6 +346,8 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
 	struct cpuidle_device *dev;
 	ktime_t domain_wakeup, next_hrtimer;
 	ktime_t now = ktime_get();
+	struct device *cpu_dev;
+	s64 cpu_constraint, global_constraint;
 	s64 idle_duration_ns;
 	int cpu, i;
 
@@ -355,6 +358,7 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
 	if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN))
 		return true;
 
+	global_constraint = cpu_latency_qos_limit();
 	/*
 	 * Find the next wakeup for any of the online CPUs within the PM domain
 	 * and its subdomains. Note, we only need the genpd->cpus, as it already
@@ -368,8 +372,16 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
 			if (ktime_before(next_hrtimer, domain_wakeup))
 				domain_wakeup = next_hrtimer;
 		}
+
+		cpu_dev = get_cpu_device(cpu);
+		if (cpu_dev) {
+			cpu_constraint = dev_pm_qos_raw_resume_latency(cpu_dev);
+			if (cpu_constraint < global_constraint)
+				global_constraint = cpu_constraint;
+		}
 	}
 
+	global_constraint *= NSEC_PER_USEC;
 	/* The minimum idle duration is from now - until the next wakeup. */
 	idle_duration_ns = ktime_to_ns(ktime_sub(domain_wakeup, now));
 	if (idle_duration_ns <= 0)
@@ -385,8 +397,10 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
 	 */
 	i = genpd->state_idx;
 	do {
-		if (idle_duration_ns >= (genpd->states[i].residency_ns +
-		    genpd->states[i].power_off_latency_ns)) {
+		if ((idle_duration_ns >= (genpd->states[i].residency_ns +
+		    genpd->states[i].power_off_latency_ns)) &&
+		    (global_constraint >= (genpd->states[i].power_on_latency_ns +
+		    genpd->states[i].power_off_latency_ns))) {
 			genpd->state_idx = i;
 			return true;
 		}
@@ -3736,6 +3736,32 @@ static const struct qca_device_info qca_devices_table[] = {
 	{ 0x00190200, 40, 4, 16 },	/* WCN785x 2.0 */
 };
 
+static u16 qca_extract_board_id(const struct qca_version *ver)
+{
+	u16 flag = le16_to_cpu(ver->flag);
+	u16 board_id = 0;
+
+	if (((flag >> 8) & 0xff) == QCA_FLAG_MULTI_NVM) {
+		/* The board_id should be split into two bytes
+		 * The 1st byte is chip ID, and the 2nd byte is platform ID
+		 * For example, board ID 0x010A, 0x01 is platform ID. 0x0A is chip ID
+		 * we have several platforms, and platform IDs are continuously added
+		 * Platform ID:
+		 * 0x00 is for Mobile
+		 * 0x01 is for X86
+		 * 0x02 is for Automotive
+		 * 0x03 is for Consumer electronic
+		 */
+		board_id = (ver->chip_id << 8) + ver->platform_id;
+	}
+
+	/* Take 0xffff as invalid board ID */
+	if (board_id == 0xffff)
+		board_id = 0;
+
+	return board_id;
+}
+
 static int btusb_qca_send_vendor_req(struct usb_device *udev, u8 request,
 				     void *data, u16 size)
 {
@@ -3892,44 +3918,28 @@ static void btusb_generate_qca_nvm_name(char *fwname, size_t max_size,
 					const struct qca_version *ver)
 {
 	u32 rom_version = le32_to_cpu(ver->rom_version);
-	u16 flag = le16_to_cpu(ver->flag);
+	const char *variant;
+	int len;
+	u16 board_id;
 
-	if (((flag >> 8) & 0xff) == QCA_FLAG_MULTI_NVM) {
-		/* The board_id should be split into two bytes
-		 * The 1st byte is chip ID, and the 2nd byte is platform ID
-		 * For example, board ID 0x010A, 0x01 is platform ID. 0x0A is chip ID
-		 * we have several platforms, and platform IDs are continuously added
-		 * Platform ID:
-		 * 0x00 is for Mobile
-		 * 0x01 is for X86
-		 * 0x02 is for Automotive
-		 * 0x03 is for Consumer electronic
-		 */
-		u16 board_id = (ver->chip_id << 8) + ver->platform_id;
-		const char *variant;
+	board_id = qca_extract_board_id(ver);
 
-		switch (le32_to_cpu(ver->ram_version)) {
-		case WCN6855_2_0_RAM_VERSION_GF:
-		case WCN6855_2_1_RAM_VERSION_GF:
-			variant = "_gf";
-			break;
-		default:
-			variant = "";
-			break;
-		}
-
-		if (board_id == 0) {
-			snprintf(fwname, max_size, "qca/nvm_usb_%08x%s.bin",
-				rom_version, variant);
-		} else {
-			snprintf(fwname, max_size, "qca/nvm_usb_%08x%s_%04x.bin",
-				rom_version, variant, board_id);
-		}
-	} else {
-		snprintf(fwname, max_size, "qca/nvm_usb_%08x.bin",
-			rom_version);
+	switch (le32_to_cpu(ver->ram_version)) {
+	case WCN6855_2_0_RAM_VERSION_GF:
+	case WCN6855_2_1_RAM_VERSION_GF:
+		variant = "_gf";
+		break;
+	default:
+		variant = NULL;
+		break;
 	}
+
+	len = snprintf(fwname, max_size, "qca/nvm_usb_%08x", rom_version);
+	if (variant)
+		len += snprintf(fwname + len, max_size - len, "%s", variant);
+	if (board_id)
+		len += snprintf(fwname + len, max_size - len, "_%04x", board_id);
+	len += snprintf(fwname + len, max_size - len, ".bin");
 }
 
 static int btusb_setup_qca_load_nvm(struct hci_dev *hdev,
@@ -1556,21 +1556,27 @@ static int do_insnlist_ioctl(struct comedi_device *dev,
 	}
 
 	for (i = 0; i < n_insns; ++i) {
+		unsigned int n = insns[i].n;
+
 		if (insns[i].insn & INSN_MASK_WRITE) {
 			if (copy_from_user(data, insns[i].data,
-					   insns[i].n * sizeof(unsigned int))) {
+					   n * sizeof(unsigned int))) {
 				dev_dbg(dev->class_dev,
 					"copy_from_user failed\n");
 				ret = -EFAULT;
 				goto error;
 			}
+			if (n < MIN_SAMPLES) {
+				memset(&data[n], 0, (MIN_SAMPLES - n) *
+						    sizeof(unsigned int));
+			}
 		}
 		ret = parse_insn(dev, insns + i, data, file);
 		if (ret < 0)
 			goto error;
 		if (insns[i].insn & INSN_MASK_READ) {
 			if (copy_to_user(insns[i].data, data,
-					 insns[i].n * sizeof(unsigned int))) {
+					 n * sizeof(unsigned int))) {
 				dev_dbg(dev->class_dev,
 					"copy_to_user failed\n");
 				ret = -EFAULT;
@@ -1589,6 +1595,16 @@ error:
 	return i;
 }
 
+#define MAX_INSNS MAX_SAMPLES
+static int check_insnlist_len(struct comedi_device *dev, unsigned int n_insns)
+{
+	if (n_insns > MAX_INSNS) {
+		dev_dbg(dev->class_dev, "insnlist length too large\n");
+		return -EINVAL;
+	}
+	return 0;
+}
+
 /*
  * COMEDI_INSN ioctl
  * synchronous instruction
@@ -1633,6 +1649,10 @@ static int do_insn_ioctl(struct comedi_device *dev,
 			ret = -EFAULT;
 			goto error;
 		}
+		if (insn->n < MIN_SAMPLES) {
+			memset(&data[insn->n], 0,
+			       (MIN_SAMPLES - insn->n) * sizeof(unsigned int));
+		}
 	}
 	ret = parse_insn(dev, insn, data, file);
 	if (ret < 0)
@@ -2239,6 +2259,9 @@ static long comedi_unlocked_ioctl(struct file *file, unsigned int cmd,
 			rc = -EFAULT;
 			break;
 		}
+		rc = check_insnlist_len(dev, insnlist.n_insns);
+		if (rc)
+			break;
 		insns = kcalloc(insnlist.n_insns, sizeof(*insns), GFP_KERNEL);
 		if (!insns) {
 			rc = -ENOMEM;
@@ -3090,6 +3113,9 @@ static int compat_insnlist(struct file *file, unsigned long arg)
 	if (copy_from_user(&insnlist32, compat_ptr(arg), sizeof(insnlist32)))
 		return -EFAULT;
 
+	rc = check_insnlist_len(dev, insnlist32.n_insns);
+	if (rc)
+		return rc;
 	insns = kcalloc(insnlist32.n_insns, sizeof(*insns), GFP_KERNEL);
 	if (!insns)
 		return -ENOMEM;
@@ -338,10 +338,10 @@ int comedi_dio_insn_config(struct comedi_device *dev,
 			   unsigned int *data,
 			   unsigned int mask)
 {
-	unsigned int chan_mask = 1 << CR_CHAN(insn->chanspec);
+	unsigned int chan = CR_CHAN(insn->chanspec);
 
-	if (!mask)
-		mask = chan_mask;
+	if (!mask && chan < 32)
+		mask = 1U << chan;
 
 	switch (data[0]) {
 	case INSN_CONFIG_DIO_INPUT:
@@ -381,7 +381,7 @@ EXPORT_SYMBOL_GPL(comedi_dio_insn_config);
 unsigned int comedi_dio_update_state(struct comedi_subdevice *s,
 				     unsigned int *data)
 {
-	unsigned int chanmask = (s->n_chan < 32) ? ((1 << s->n_chan) - 1)
+	unsigned int chanmask = (s->n_chan < 32) ? ((1U << s->n_chan) - 1)
						 : 0xffffffff;
 	unsigned int mask = data[0] & chanmask;
 	unsigned int bits = data[1];
@@ -614,6 +614,9 @@ static int insn_rw_emulate_bits(struct comedi_device *dev,
 	unsigned int _data[2];
 	int ret;
 
+	if (insn->n == 0)
+		return 0;
+
 	memset(_data, 0, sizeof(_data));
 	memset(&_insn, 0, sizeof(_insn));
 	_insn.insn = INSN_BITS;
@@ -624,8 +627,8 @@ static int insn_rw_emulate_bits(struct comedi_device *dev,
 	if (insn->insn == INSN_WRITE) {
 		if (!(s->subdev_flags & SDF_WRITABLE))
 			return -EINVAL;
-		_data[0] = 1 << (chan - base_chan);		    /* mask */
-		_data[1] = data[0] ? (1 << (chan - base_chan)) : 0; /* bits */
+		_data[0] = 1U << (chan - base_chan);		     /* mask */
+		_data[1] = data[0] ? (1U << (chan - base_chan)) : 0; /* bits */
 	}
 
 	ret = s->insn_bits(dev, s, &_insn, _data);
@@ -708,7 +711,7 @@ static int __comedi_device_postconfig(struct comedi_device *dev)
 
 		if (s->type == COMEDI_SUBD_DO) {
 			if (s->n_chan < 32)
-				s->io_bits = (1 << s->n_chan) - 1;
+				s->io_bits = (1U << s->n_chan) - 1;
 			else
 				s->io_bits = 0xffffffff;
 		}
@@ -177,7 +177,8 @@ static int aio_iiro_16_attach(struct comedi_device *dev,
 	 * Digital input change of state interrupts are optionally supported
 	 * using IRQ 2-7, 10-12, 14, or 15.
 	 */
-	if ((1 << it->options[1]) & 0xdcfc) {
+	if (it->options[1] > 0 && it->options[1] < 16 &&
+	    (1 << it->options[1]) & 0xdcfc) {
 		ret = request_irq(it->options[1], aio_iiro_16_cos, 0,
 				  dev->board_name, dev);
 		if (ret == 0)
@@ -522,7 +522,8 @@ static int das16m1_attach(struct comedi_device *dev,
 	devpriv->extra_iobase = dev->iobase + DAS16M1_8255_IOBASE;
 
 	/* only irqs 2, 3, 4, 5, 6, 7, 10, 11, 12, 14, and 15 are valid */
-	if ((1 << it->options[1]) & 0xdcfc) {
+	if (it->options[1] >= 2 && it->options[1] <= 15 &&
+	    (1 << it->options[1]) & 0xdcfc) {
 		ret = request_irq(it->options[1], das16m1_interrupt, 0,
 				  dev->board_name, dev);
 		if (ret == 0)
@@ -567,7 +567,8 @@ static int das6402_attach(struct comedi_device *dev,
 	das6402_reset(dev);
 
 	/* IRQs 2,3,5,6,7, 10,11,15 are valid for "enhanced" mode */
-	if ((1 << it->options[1]) & 0x8cec) {
+	if (it->options[1] > 0 && it->options[1] < 16 &&
+	    (1 << it->options[1]) & 0x8cec) {
 		ret = request_irq(it->options[1], das6402_interrupt, 0,
 				  dev->board_name, dev);
 		if (ret == 0) {
@@ -1149,7 +1149,8 @@ static int pcl812_attach(struct comedi_device *dev, struct comedi_devconfig *it)
 	if (!dev->pacer)
 		return -ENOMEM;
 
-	if ((1 << it->options[1]) & board->irq_bits) {
+	if (it->options[1] > 0 && it->options[1] < 16 &&
+	    (1 << it->options[1]) & board->irq_bits) {
 		ret = request_irq(it->options[1], pcl812_interrupt, 0,
 				  dev->board_name, dev);
 		if (ret == 0)
@@ -1351,7 +1351,7 @@ static int nbpf_probe(struct platform_device *pdev)
 	if (irqs == 1) {
 		eirq = irqbuf[0];
 
-		for (i = 0; i <= num_channels; i++)
+		for (i = 0; i < num_channels; i++)
 			nbpf->chan[i].irq = irqbuf[0];
 	} else {
 		eirq = platform_get_irq_byname(pdev, "error");
@@ -1361,16 +1361,15 @@ static int nbpf_probe(struct platform_device *pdev)
 		if (irqs == num_channels + 1) {
 			struct nbpf_channel *chan;
 
-			for (i = 0, chan = nbpf->chan; i <= num_channels;
+			for (i = 0, chan = nbpf->chan; i < num_channels;
 			     i++, chan++) {
 				/* Skip the error IRQ */
 				if (irqbuf[i] == eirq)
 					i++;
+				if (i >= ARRAY_SIZE(irqbuf))
+					return -EINVAL;
 				chan->irq = irqbuf[i];
 			}
-
-			if (chan != nbpf->chan + num_channels)
-				return -EINVAL;
 		} else {
 			/* 2 IRQs and more than one channel */
 			if (irqbuf[0] == eirq)
@@ -1378,7 +1377,7 @@ static int nbpf_probe(struct platform_device *pdev)
 			else
 				irq = irqbuf[0];
 
-			for (i = 0; i <= num_channels; i++)
+			for (i = 0; i < num_channels; i++)
 				nbpf->chan[i].irq = irq;
 		}
 	}
@@ -4656,6 +4656,7 @@ static int gfx_v8_0_kcq_init_queue(struct amdgpu_ring *ring)
 		memcpy(mqd, adev->gfx.mec.mqd_backup[mqd_idx], sizeof(struct vi_mqd_allocation));
 		/* reset ring buffer */
 		ring->wptr = 0;
+		atomic64_set((atomic64_t *)ring->wptr_cpu_addr, 0);
 		amdgpu_ring_clear_ring(ring);
 	}
 	return 0;
@@ -1873,9 +1873,12 @@ u8 *hid_alloc_report_buf(struct hid_report *report, gfp_t flags)
 	/*
 	 * 7 extra bytes are necessary to achieve proper functionality
 	 * of implement() working on 8 byte chunks
+	 * 1 extra byte for the report ID if it is null (not used) so
+	 * we can reserve that extra byte in the first position of the buffer
+	 * when sending it to .raw_request()
 	 */
 
-	u32 len = hid_report_len(report) + 7;
+	u32 len = hid_report_len(report) + 7 + (report->id == 0);
 
 	return kzalloc(len, flags);
 }
@@ -1938,7 +1941,7 @@ static struct hid_report *hid_get_report(struct hid_report_enum *report_enum,
 int __hid_request(struct hid_device *hid, struct hid_report *report,
 		  enum hid_class_request reqtype)
 {
-	char *buf;
+	char *buf, *data_buf;
 	int ret;
 	u32 len;
 
@@ -1946,13 +1949,19 @@ int __hid_request(struct hid_device *hid, struct hid_report *report,
 	if (!buf)
 		return -ENOMEM;
 
+	data_buf = buf;
 	len = hid_report_len(report);
 
-	if (reqtype == HID_REQ_SET_REPORT)
-		hid_output_report(report, buf);
+	if (report->id == 0) {
+		/* reserve the first byte for the report ID */
+		data_buf++;
+		len++;
+	}
 
-	ret = hid->ll_driver->raw_request(hid, report->id, buf, len,
-					  report->type, reqtype);
+	if (reqtype == HID_REQ_SET_REPORT)
+		hid_output_report(report, data_buf);
+
+	ret = hid_hw_raw_request(hid, report->id, buf, len, report->type, reqtype);
 	if (ret < 0) {
 		dbg_hid("unable to complete request: %d\n", ret);
 		goto out;
@@ -84,6 +84,7 @@ struct ccp_device {
 	struct mutex mutex; /* whenever buffer is used, lock before send_usb_cmd */
 	u8 *cmd_buffer;
 	u8 *buffer;
+	int buffer_recv_size; /* number of received bytes in buffer */
 	int target[6];
 	DECLARE_BITMAP(temp_cnct, NUM_TEMP_SENSORS);
 	DECLARE_BITMAP(fan_cnct, NUM_FANS);
@@ -139,6 +140,9 @@ static int send_usb_cmd(struct ccp_device *ccp, u8 command, u8 byte1, u8 byte2,
 	if (!t)
 		return -ETIMEDOUT;
 
+	if (ccp->buffer_recv_size != IN_BUFFER_SIZE)
+		return -EPROTO;
+
 	return ccp_get_errno(ccp);
 }
 
@@ -150,6 +154,7 @@ static int ccp_raw_event(struct hid_device *hdev, struct hid_report *report, u8
 	spin_lock(&ccp->wait_input_report_lock);
 	if (!completion_done(&ccp->wait_input_report)) {
 		memcpy(ccp->buffer, data, min(IN_BUFFER_SIZE, size));
+		ccp->buffer_recv_size = size;
 		complete_all(&ccp->wait_input_report);
 	}
 	spin_unlock(&ccp->wait_input_report_lock);
@@ -899,6 +899,7 @@ config I2C_OMAP
 	tristate "OMAP I2C adapter"
 	depends on ARCH_OMAP || ARCH_K3 || COMPILE_TEST
 	default MACH_OMAP_OSK
+	select MULTIPLEXER
 	help
 	  If you say yes to this option, support will be included for the
 	  I2C interface on the Texas Instruments OMAP1/2 family of processors.

@@ -24,6 +24,7 @@
 #include <linux/platform_device.h>
 #include <linux/clk.h>
 #include <linux/io.h>
+#include <linux/mux/consumer.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
 #include <linux/slab.h>
@@ -211,6 +212,7 @@ struct omap_i2c_dev {
 	u16 syscstate;
 	u16 westate;
 	u16 errata;
+	struct mux_state *mux_state;
 };

 static const u8 reg_map_ip_v1[] = {
@@ -1455,8 +1457,27 @@ omap_i2c_probe(struct platform_device *pdev)
 			(1000 * omap->speed / 8);
 	}

+	if (of_property_present(node, "mux-states")) {
+		struct mux_state *mux_state;
+
+		mux_state = devm_mux_state_get(&pdev->dev, NULL);
+		if (IS_ERR(mux_state)) {
+			r = PTR_ERR(mux_state);
+			dev_dbg(&pdev->dev, "failed to get I2C mux: %d\n", r);
+			goto err_put_pm;
+		}
+		omap->mux_state = mux_state;
+		r = mux_state_select(omap->mux_state);
+		if (r) {
+			dev_err(&pdev->dev, "failed to select I2C mux: %d\n", r);
+			goto err_put_pm;
+		}
+	}
+
 	/* reset ASAP, clearing any IRQs */
-	omap_i2c_init(omap);
+	r = omap_i2c_init(omap);
+	if (r)
+		goto err_mux_state_deselect;

 	if (omap->rev < OMAP_I2C_OMAP1_REV_2)
 		r = devm_request_irq(&pdev->dev, omap->irq, omap_i2c_omap1_isr,
@@ -1499,6 +1520,10 @@ omap_i2c_probe(struct platform_device *pdev)

 err_unuse_clocks:
 	omap_i2c_write_reg(omap, OMAP_I2C_CON_REG, 0);
+err_mux_state_deselect:
+	if (omap->mux_state)
+		mux_state_deselect(omap->mux_state);
 err_put_pm:
 	pm_runtime_dont_use_autosuspend(omap->dev);
 	pm_runtime_put_sync(omap->dev);
 err_disable_pm:
@@ -1514,6 +1539,9 @@ static void omap_i2c_remove(struct platform_device *pdev)

 	i2c_del_adapter(&omap->adapter);

+	if (omap->mux_state)
+		mux_state_deselect(omap->mux_state);
+
 	ret = pm_runtime_get_sync(&pdev->dev);
 	if (ret < 0)
 		dev_err(omap->dev, "Failed to resume hardware, skip disable\n");
@@ -102,7 +102,6 @@ int stm32_i2c_prep_dma_xfer(struct device *dev, struct stm32_i2c_dma *dma,
 			    void *dma_async_param)
 {
 	struct dma_async_tx_descriptor *txdesc;
-	struct device *chan_dev;
 	int ret;

 	if (rd_wr) {
@@ -116,11 +115,10 @@ int stm32_i2c_prep_dma_xfer(struct device *dev, struct stm32_i2c_dma *dma,
 	}

 	dma->dma_len = len;
-	chan_dev = dma->chan_using->device->dev;

-	dma->dma_buf = dma_map_single(chan_dev, buf, dma->dma_len,
+	dma->dma_buf = dma_map_single(dev, buf, dma->dma_len,
 				      dma->dma_data_dir);
-	if (dma_mapping_error(chan_dev, dma->dma_buf)) {
+	if (dma_mapping_error(dev, dma->dma_buf)) {
 		dev_err(dev, "DMA mapping failed\n");
 		return -EINVAL;
 	}
@@ -150,7 +148,7 @@ int stm32_i2c_prep_dma_xfer(struct device *dev, struct stm32_i2c_dma *dma,
 	return 0;

 err:
-	dma_unmap_single(chan_dev, dma->dma_buf, dma->dma_len,
+	dma_unmap_single(dev, dma->dma_buf, dma->dma_len,
 			 dma->dma_data_dir);
 	return ret;
 }

@@ -728,10 +728,10 @@ static void stm32f7_i2c_dma_callback(void *arg)
 {
 	struct stm32f7_i2c_dev *i2c_dev = (struct stm32f7_i2c_dev *)arg;
 	struct stm32_i2c_dma *dma = i2c_dev->dma;
-	struct device *dev = dma->chan_using->device->dev;

 	stm32f7_i2c_disable_dma_req(i2c_dev);
-	dma_unmap_single(dev, dma->dma_buf, dma->dma_len, dma->dma_data_dir);
+	dma_unmap_single(i2c_dev->dev, dma->dma_buf, dma->dma_len,
+			 dma->dma_data_dir);
 	complete(&dma->dma_complete);
 }
@@ -865,6 +865,8 @@ static int fxls8962af_buffer_predisable(struct iio_dev *indio_dev)
 	if (ret)
 		return ret;

+	synchronize_irq(data->irq);
+
 	ret = __fxls8962af_fifo_set_mode(data, false);

 	if (data->enable_event)
@@ -510,10 +510,10 @@ static const struct iio_event_spec max1363_events[] = {
 	MAX1363_CHAN_U(1, _s1, 1, bits, ev_spec, num_ev_spec), \
 	MAX1363_CHAN_U(2, _s2, 2, bits, ev_spec, num_ev_spec), \
 	MAX1363_CHAN_U(3, _s3, 3, bits, ev_spec, num_ev_spec), \
-	MAX1363_CHAN_B(0, 1, d0m1, 4, bits, ev_spec, num_ev_spec), \
-	MAX1363_CHAN_B(2, 3, d2m3, 5, bits, ev_spec, num_ev_spec), \
-	MAX1363_CHAN_B(1, 0, d1m0, 6, bits, ev_spec, num_ev_spec), \
-	MAX1363_CHAN_B(3, 2, d3m2, 7, bits, ev_spec, num_ev_spec), \
+	MAX1363_CHAN_B(0, 1, d0m1, 12, bits, ev_spec, num_ev_spec), \
+	MAX1363_CHAN_B(2, 3, d2m3, 13, bits, ev_spec, num_ev_spec), \
+	MAX1363_CHAN_B(1, 0, d1m0, 18, bits, ev_spec, num_ev_spec), \
+	MAX1363_CHAN_B(3, 2, d3m2, 19, bits, ev_spec, num_ev_spec), \
 	IIO_CHAN_SOFT_TIMESTAMP(8) \
 	}

@@ -531,23 +531,23 @@ static const struct iio_chan_spec max1363_channels[] =
 /* Applies to max1236, max1237 */
 static const enum max1363_modes max1236_mode_list[] = {
 	_s0, _s1, _s2, _s3,
-	s0to1, s0to2, s0to3,
+	s0to1, s0to2, s2to3, s0to3,
 	d0m1, d2m3, d1m0, d3m2,
 	d0m1to2m3, d1m0to3m2,
-	s2to3,
 };

 /* Applies to max1238, max1239 */
 static const enum max1363_modes max1238_mode_list[] = {
 	_s0, _s1, _s2, _s3, _s4, _s5, _s6, _s7, _s8, _s9, _s10, _s11,
 	s0to1, s0to2, s0to3, s0to4, s0to5, s0to6,
+	s6to7, s6to8, s6to9, s6to10, s6to11,
 	s0to7, s0to8, s0to9, s0to10, s0to11,
 	d0m1, d2m3, d4m5, d6m7, d8m9, d10m11,
 	d1m0, d3m2, d5m4, d7m6, d9m8, d11m10,
-	d0m1to2m3, d0m1to4m5, d0m1to6m7, d0m1to8m9, d0m1to10m11,
-	d1m0to3m2, d1m0to5m4, d1m0to7m6, d1m0to9m8, d1m0to11m10,
-	s6to7, s6to8, s6to9, s6to10, s6to11,
-	d6m7to8m9, d6m7to10m11, d7m6to9m8, d7m6to11m10,
+	d0m1to2m3, d0m1to4m5, d0m1to6m7, d6m7to8m9,
+	d0m1to8m9, d6m7to10m11, d0m1to10m11, d1m0to3m2,
+	d1m0to5m4, d1m0to7m6, d7m6to9m8, d1m0to9m8,
+	d7m6to11m10, d1m0to11m10,
 };

 #define MAX1363_12X_CHANS(bits) { \
@@ -583,16 +583,15 @@ static const struct iio_chan_spec max1238_channels[] = MAX1363_12X_CHANS(12);

 static const enum max1363_modes max11607_mode_list[] = {
 	_s0, _s1, _s2, _s3,
-	s0to1, s0to2, s0to3,
-	s2to3,
+	s0to1, s0to2, s2to3,
+	s0to3,
 	d0m1, d2m3, d1m0, d3m2,
 	d0m1to2m3, d1m0to3m2,
 };

 static const enum max1363_modes max11608_mode_list[] = {
 	_s0, _s1, _s2, _s3, _s4, _s5, _s6, _s7,
-	s0to1, s0to2, s0to3, s0to4, s0to5, s0to6, s0to7,
-	s6to7,
+	s0to1, s0to2, s0to3, s0to4, s0to5, s0to6, s6to7, s0to7,
 	d0m1, d2m3, d4m5, d6m7,
 	d1m0, d3m2, d5m4, d7m6,
 	d0m1to2m3, d0m1to4m5, d0m1to6m7,
@@ -608,14 +607,14 @@ static const enum max1363_modes max11608_mode_list[] = {
 	MAX1363_CHAN_U(5, _s5, 5, bits, NULL, 0), \
 	MAX1363_CHAN_U(6, _s6, 6, bits, NULL, 0), \
 	MAX1363_CHAN_U(7, _s7, 7, bits, NULL, 0), \
-	MAX1363_CHAN_B(0, 1, d0m1, 8, bits, NULL, 0), \
-	MAX1363_CHAN_B(2, 3, d2m3, 9, bits, NULL, 0), \
-	MAX1363_CHAN_B(4, 5, d4m5, 10, bits, NULL, 0), \
-	MAX1363_CHAN_B(6, 7, d6m7, 11, bits, NULL, 0), \
-	MAX1363_CHAN_B(1, 0, d1m0, 12, bits, NULL, 0), \
-	MAX1363_CHAN_B(3, 2, d3m2, 13, bits, NULL, 0), \
-	MAX1363_CHAN_B(5, 4, d5m4, 14, bits, NULL, 0), \
-	MAX1363_CHAN_B(7, 6, d7m6, 15, bits, NULL, 0), \
+	MAX1363_CHAN_B(0, 1, d0m1, 12, bits, NULL, 0), \
+	MAX1363_CHAN_B(2, 3, d2m3, 13, bits, NULL, 0), \
+	MAX1363_CHAN_B(4, 5, d4m5, 14, bits, NULL, 0), \
+	MAX1363_CHAN_B(6, 7, d6m7, 15, bits, NULL, 0), \
+	MAX1363_CHAN_B(1, 0, d1m0, 18, bits, NULL, 0), \
+	MAX1363_CHAN_B(3, 2, d3m2, 19, bits, NULL, 0), \
+	MAX1363_CHAN_B(5, 4, d5m4, 20, bits, NULL, 0), \
+	MAX1363_CHAN_B(7, 6, d7m6, 21, bits, NULL, 0), \
 	IIO_CHAN_SOFT_TIMESTAMP(16) \
 	}
 static const struct iio_chan_spec max11602_channels[] = MAX1363_8X_CHANS(8);
@@ -428,10 +428,9 @@ static int stm32_adc_irq_probe(struct platform_device *pdev,
 		return -ENOMEM;
 	}

-	for (i = 0; i < priv->cfg->num_irqs; i++) {
-		irq_set_chained_handler(priv->irq[i], stm32_adc_irq_handler);
-		irq_set_handler_data(priv->irq[i], priv);
-	}
+	for (i = 0; i < priv->cfg->num_irqs; i++)
+		irq_set_chained_handler_and_data(priv->irq[i],
+						 stm32_adc_irq_handler, priv);

 	return 0;
 }
@@ -169,12 +169,12 @@ static const struct xpad_device {
 	{ 0x046d, 0xca88, "Logitech Compact Controller for Xbox", 0, XTYPE_XBOX },
 	{ 0x046d, 0xca8a, "Logitech Precision Vibration Feedback Wheel", 0, XTYPE_XBOX },
 	{ 0x046d, 0xcaa3, "Logitech DriveFx Racing Wheel", 0, XTYPE_XBOX360 },
+	{ 0x0502, 0x1305, "Acer NGR200", 0, XTYPE_XBOX360 },
 	{ 0x056e, 0x2004, "Elecom JC-U3613M", 0, XTYPE_XBOX360 },
 	{ 0x05fd, 0x1007, "Mad Catz Controller (unverified)", 0, XTYPE_XBOX },
 	{ 0x05fd, 0x107a, "InterAct 'PowerPad Pro' X-Box pad (Germany)", 0, XTYPE_XBOX },
 	{ 0x05fe, 0x3030, "Chic Controller", 0, XTYPE_XBOX },
 	{ 0x05fe, 0x3031, "Chic Controller", 0, XTYPE_XBOX },
-	{ 0x0502, 0x1305, "Acer NGR200", 0, XTYPE_XBOX },
 	{ 0x062a, 0x0020, "Logic3 Xbox GamePad", 0, XTYPE_XBOX },
 	{ 0x062a, 0x0033, "Competition Pro Steering Wheel", 0, XTYPE_XBOX },
 	{ 0x06a3, 0x0200, "Saitek Racing Wheel", 0, XTYPE_XBOX },
@@ -2746,7 +2746,11 @@ static unsigned long __evict_many(struct dm_bufio_client *c,
 		__make_buffer_clean(b);
 		__free_buffer_wake(b);

-		cond_resched();
+		if (need_resched()) {
+			dm_bufio_unlock(c);
+			cond_resched();
+			dm_bufio_lock(c);
+		}
 	}

 	return count;
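The dm-bufio hunk above stops calling `cond_resched()` while the client lock is held; the lock is dropped around the reschedule point and re-taken. A toy Python model of that pattern, with `threading.Lock` standing in for `dm_bufio_lock()` (names and the always-true `need_resched` default are illustrative only):

```python
import threading

lock = threading.Lock()

def evict_many(buffers, need_resched=lambda: True):
    """Model of the fix: reschedule only outside the lock, instead of
    sleeping in atomic context with the lock held."""
    evicted = 0
    with lock:
        for _ in buffers:
            evicted += 1
            if need_resched():
                lock.release()   # dm_bufio_unlock(c)
                # cond_resched() would run here, lock not held
                lock.acquire()   # dm_bufio_lock(c)
    return evicted

print(evict_many(range(3)))  # 3
```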
@@ -323,7 +323,7 @@ EXPORT_SYMBOL(memstick_init_req);
 static int h_memstick_read_dev_id(struct memstick_dev *card,
 				  struct memstick_request **mrq)
 {
-	struct ms_id_register id_reg;
+	struct ms_id_register id_reg = {};

 	if (!(*mrq)) {
 		memstick_init_req(&card->current_mrq, MS_TPC_READ_REG, &id_reg,
@@ -502,7 +502,8 @@ void bcm2835_prepare_dma(struct bcm2835_host *host, struct mmc_data *data)
 			       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);

 	if (!desc) {
-		dma_unmap_sg(dma_chan->device->dev, data->sg, sg_len, dir_data);
+		dma_unmap_sg(dma_chan->device->dev, data->sg, data->sg_len,
+			     dir_data);
 		return;
 	}
@@ -911,7 +911,8 @@ static bool glk_broken_cqhci(struct sdhci_pci_slot *slot)
 {
 	return slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_EMMC &&
 	       (dmi_match(DMI_BIOS_VENDOR, "LENOVO") ||
-		dmi_match(DMI_SYS_VENDOR, "IRBIS"));
+		dmi_match(DMI_SYS_VENDOR, "IRBIS") ||
+		dmi_match(DMI_SYS_VENDOR, "Positivo Tecnologia SA"));
 }

 static bool jsl_broken_hs400es(struct sdhci_pci_slot *slot)
@@ -559,7 +559,8 @@ static struct sdhci_ops sdhci_am654_ops = {
 static const struct sdhci_pltfm_data sdhci_am654_pdata = {
 	.ops = &sdhci_am654_ops,
 	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
-	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
+		   SDHCI_QUIRK2_DISABLE_HW_TIMEOUT,
 };

 static const struct sdhci_am654_driver_data sdhci_am654_sr1_drvdata = {
@@ -589,7 +590,8 @@ static struct sdhci_ops sdhci_j721e_8bit_ops = {
 static const struct sdhci_pltfm_data sdhci_j721e_8bit_pdata = {
 	.ops = &sdhci_j721e_8bit_ops,
 	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
-	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
+		   SDHCI_QUIRK2_DISABLE_HW_TIMEOUT,
 };

 static const struct sdhci_am654_driver_data sdhci_j721e_8bit_drvdata = {
@@ -613,7 +615,8 @@ static struct sdhci_ops sdhci_j721e_4bit_ops = {
 static const struct sdhci_pltfm_data sdhci_j721e_4bit_pdata = {
 	.ops = &sdhci_j721e_4bit_ops,
 	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
-	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
+		   SDHCI_QUIRK2_DISABLE_HW_TIMEOUT,
 };

 static const struct sdhci_am654_driver_data sdhci_j721e_4bit_drvdata = {
@@ -2129,7 +2129,8 @@ bool ice_lag_is_switchdev_running(struct ice_pf *pf)
 	struct ice_lag *lag = pf->lag;
 	struct net_device *tmp_nd;

-	if (!ice_is_feature_supported(pf, ICE_F_SRIOV_LAG) || !lag)
+	if (!ice_is_feature_supported(pf, ICE_F_SRIOV_LAG) ||
+	    !lag || !lag->upper_netdev)
 		return false;

 	rcu_read_lock();
@@ -1159,8 +1159,9 @@ static void mlx5e_lro_update_tcp_hdr(struct mlx5_cqe64 *cqe, struct tcphdr *tcp)
 	}
 }

-static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe,
-				 u32 cqe_bcnt)
+static unsigned int mlx5e_lro_update_hdr(struct sk_buff *skb,
+					 struct mlx5_cqe64 *cqe,
+					 u32 cqe_bcnt)
 {
 	struct ethhdr *eth = (struct ethhdr *)(skb->data);
 	struct tcphdr *tcp;
@@ -1211,6 +1212,8 @@ static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe,
 		tcp->check = csum_ipv6_magic(&ipv6->saddr, &ipv6->daddr, payload_len,
 					     IPPROTO_TCP, check);
 	}
+
+	return (unsigned int)((unsigned char *)tcp + tcp->doff * 4 - skb->data);
 }

 static void *mlx5e_shampo_get_packet_hd(struct mlx5e_rq *rq, u16 header_index)
@@ -1567,8 +1570,9 @@ static inline void mlx5e_build_rx_skb(struct mlx5_cqe64 *cqe,
 	mlx5e_macsec_offload_handle_rx_skb(netdev, skb, cqe);

 	if (lro_num_seg > 1) {
-		mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt);
-		skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt, lro_num_seg);
+		unsigned int hdrlen = mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt);
+
+		skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt - hdrlen, lro_num_seg);
 		/* Subtract one since we already counted this as one
 		 * "regular" packet in mlx5e_complete_rx_cqe()
 		 */
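The mlx5e hunks above make the LRO header-update routine return the L2–L4 header length so the per-segment size excludes it. The arithmetic can be sketched in Python (`lro_gso_size` is a hypothetical name; `DIV_ROUND_UP` is modeled as negative floor division, and the 66-byte header in the example assumes Ethernet + IPv4 + 20-byte TCP plus options):

```python
def lro_gso_size(cqe_bcnt: int, hdrlen: int, lro_num_seg: int) -> int:
    """Model of the fix: gso_size must be derived from the aggregated
    payload (cqe_bcnt - hdrlen), not the raw byte count which still
    includes the packet headers; DIV_ROUND_UP rounds up."""
    return -(-(cqe_bcnt - hdrlen) // lro_num_seg)

# 64KB aggregate, 66-byte headers, 45 coalesced segments:
print(lro_gso_size(65535, 66, 45))  # 1455
```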
@@ -2206,6 +2206,7 @@ static const struct pci_device_id mlx5_core_pci_table[] = {
 	{ PCI_VDEVICE(MELLANOX, 0x1021) },			/* ConnectX-7 */
 	{ PCI_VDEVICE(MELLANOX, 0x1023) },			/* ConnectX-8 */
 	{ PCI_VDEVICE(MELLANOX, 0x1025) },			/* ConnectX-9 */
+	{ PCI_VDEVICE(MELLANOX, 0x1027) },			/* ConnectX-10 */
 	{ PCI_VDEVICE(MELLANOX, 0xa2d2) },			/* BlueField integrated ConnectX-5 network controller */
 	{ PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF},	/* BlueField integrated ConnectX-5 network controller VF */
 	{ PCI_VDEVICE(MELLANOX, 0xa2d6) },			/* BlueField-2 integrated ConnectX-6 Dx network controller */
@@ -1348,7 +1348,6 @@ static void wx_configure_rx_ring(struct wx *wx,
 				 struct wx_ring *ring)
 {
 	u16 reg_idx = ring->reg_idx;
-	union wx_rx_desc *rx_desc;
 	u64 rdba = ring->dma;
 	u32 rxdctl;

@@ -1378,9 +1377,9 @@ static void wx_configure_rx_ring(struct wx *wx,
 	memset(ring->rx_buffer_info, 0,
 	       sizeof(struct wx_rx_buffer) * ring->count);

-	/* initialize Rx descriptor 0 */
-	rx_desc = WX_RX_DESC(ring, 0);
-	rx_desc->wb.upper.length = 0;
-
 	/* reset ntu and ntc to place SW in sync with hardware */
 	ring->next_to_clean = 0;
 	ring->next_to_use = 0;

 	/* enable receive descriptor ring */
 	wr32m(wx, WX_PX_RR_CFG(reg_idx),

@@ -171,10 +171,6 @@ static void wx_dma_sync_frag(struct wx_ring *rx_ring,
 				      skb_frag_off(frag),
 				      skb_frag_size(frag),
 				      DMA_FROM_DEVICE);
-
-	/* If the page was released, just unmap it. */
-	if (unlikely(WX_CB(skb)->page_released))
-		page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
 }

 static struct wx_rx_buffer *wx_get_rx_buffer(struct wx_ring *rx_ring,
@@ -224,10 +220,6 @@ static void wx_put_rx_buffer(struct wx_ring *rx_ring,
 			     struct sk_buff *skb,
 			     int rx_buffer_pgcnt)
 {
-	if (!IS_ERR(skb) && WX_CB(skb)->dma == rx_buffer->dma)
-		/* the page has been released from the ring */
-		WX_CB(skb)->page_released = true;
-
 	/* clear contents of rx_buffer */
 	rx_buffer->page = NULL;
 	rx_buffer->skb = NULL;
@@ -315,7 +307,7 @@ static bool wx_alloc_mapped_page(struct wx_ring *rx_ring,
 		return false;
 	dma = page_pool_get_dma_addr(page);

-	bi->page_dma = dma;
+	bi->dma = dma;
 	bi->page = page;
 	bi->page_offset = 0;

@@ -352,7 +344,7 @@ void wx_alloc_rx_buffers(struct wx_ring *rx_ring, u16 cleaned_count)
 						 DMA_FROM_DEVICE);

 		rx_desc->read.pkt_addr =
-				cpu_to_le64(bi->page_dma + bi->page_offset);
+				cpu_to_le64(bi->dma + bi->page_offset);

 		rx_desc++;
 		bi++;
@@ -365,6 +357,8 @@ void wx_alloc_rx_buffers(struct wx_ring *rx_ring, u16 cleaned_count)

 		/* clear the status bits for the next_to_use descriptor */
 		rx_desc->wb.upper.status_error = 0;
+		/* clear the length for the next_to_use descriptor */
+		rx_desc->wb.upper.length = 0;

 		cleaned_count--;
 	} while (cleaned_count);
@@ -2158,9 +2152,6 @@ static void wx_clean_rx_ring(struct wx_ring *rx_ring)
 		if (rx_buffer->skb) {
 			struct sk_buff *skb = rx_buffer->skb;

-			if (WX_CB(skb)->page_released)
-				page_pool_put_full_page(rx_ring->page_pool, rx_buffer->page, false);
-
 			dev_kfree_skb(skb);
 		}

@@ -2184,6 +2175,9 @@ static void wx_clean_rx_ring(struct wx_ring *rx_ring)
 		}
 	}

+	/* Zero out the descriptor ring */
+	memset(rx_ring->desc, 0, rx_ring->size);
+
 	rx_ring->next_to_alloc = 0;
 	rx_ring->next_to_clean = 0;
 	rx_ring->next_to_use = 0;

@@ -668,7 +668,6 @@ enum wx_reset_type {
 struct wx_cb {
 	dma_addr_t dma;
 	u16 append_cnt;      /* number of skb's appended */
-	bool page_released;
 	bool dma_released;
 };

@@ -756,7 +755,6 @@ struct wx_tx_buffer {
 struct wx_rx_buffer {
 	struct sk_buff *skb;
 	dma_addr_t dma;
-	dma_addr_t page_dma;
 	struct page *page;
 	unsigned int page_offset;
 };
@@ -285,7 +285,7 @@ static void xemaclite_aligned_read(u32 *src_ptr, u8 *dest_ptr,

 		/* Read the remaining data */
 		for (; length > 0; length--)
-			*to_u8_ptr = *from_u8_ptr;
+			*to_u8_ptr++ = *from_u8_ptr++;
 	}
 }
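The xemaclite one-liner above fixes a loop that read and wrote the same byte on every iteration because neither pointer was advanced. A small Python model of the corrected tail-copy loop (`aligned_read_tail` is a hypothetical name; index increments stand in for the C post-increments):

```python
def aligned_read_tail(src: bytes, length: int) -> bytes:
    """Model of the fixed loop: both source and destination positions
    advance each iteration, so all `length` trailing bytes are copied
    instead of the first byte being copied repeatedly."""
    dst = bytearray(length)
    to = frm = 0
    while length > 0:
        dst[to] = src[frm]
        to += 1     # *to_u8_ptr++ ...
        frm += 1    # ... = *from_u8_ptr++
        length -= 1
    return bytes(dst)

print(aligned_read_tail(b"abc", 3))  # b'abc'
```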
@@ -2313,8 +2313,11 @@ static int netvsc_prepare_bonding(struct net_device *vf_netdev)
 	if (!ndev)
 		return NOTIFY_DONE;

-	/* set slave flag before open to prevent IPv6 addrconf */
+	/* Set slave flag and no addrconf flag before open
+	 * to prevent IPv6 addrconf.
+	 */
 	vf_netdev->flags |= IFF_SLAVE;
+	vf_netdev->priv_flags |= IFF_NO_ADDRCONF;
 	return NOTIFY_DONE;
 }
@@ -3377,7 +3377,8 @@ static int phy_probe(struct device *dev)
 	/* Get the LEDs from the device tree, and instantiate standard
 	 * LEDs for them.
 	 */
-	if (IS_ENABLED(CONFIG_PHYLIB_LEDS))
+	if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev) &&
+	    !phy_driver_is_genphy_10g(phydev))
 		err = of_phy_leds(phydev);

 out:
@@ -3394,7 +3395,8 @@ static int phy_remove(struct device *dev)

 	cancel_delayed_work_sync(&phydev->state_queue);

-	if (IS_ENABLED(CONFIG_PHYLIB_LEDS))
+	if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev) &&
+	    !phy_driver_is_genphy_10g(phydev))
 		phy_leds_unregister(phydev);

 	phydev->state = PHY_DOWN;
@@ -689,6 +689,10 @@ static int sierra_net_bind(struct usbnet *dev, struct usb_interface *intf)
 			status);
 		return -ENODEV;
 	}
+	if (!dev->status) {
+		dev_err(&dev->udev->dev, "No status endpoint found");
+		return -ENODEV;
+	}
 	/* Initialize sierra private data */
 	priv = kzalloc(sizeof *priv, GFP_KERNEL);
 	if (!priv)
@@ -689,6 +689,10 @@ blk_status_t nvme_fail_nonready_command(struct nvme_ctrl *ctrl,
 	    !test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags) &&
 	    !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
 		return BLK_STS_RESOURCE;

+	if (!(rq->rq_flags & RQF_DONTPREP))
+		nvme_clear_nvme_request(rq);
+
 	return nvme_host_path_error(rq);
 }
 EXPORT_SYMBOL_GPL(nvme_fail_nonready_command);
@@ -3596,7 +3600,7 @@ static void nvme_ns_add_to_ctrl_list(struct nvme_ns *ns)
 			return;
 		}
 	}
-	list_add(&ns->list, &ns->ctrl->namespaces);
+	list_add_rcu(&ns->list, &ns->ctrl->namespaces);
 }

 static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
@@ -23,6 +23,7 @@
 #include <linux/platform_device.h>
 #include <linux/slab.h>
 #include <linux/delay.h>
+#include <linux/if_ether.h>	/* ETH_ALEN */

 #define IMX_OCOTP_OFFSET_B0W0		0x400 /* Offset from base address of the
 					       * OTP Bank0 Word0
@@ -227,9 +228,11 @@ static int imx_ocotp_cell_pp(void *context, const char *id, int index,
 	int i;

 	/* Deal with some post processing of nvmem cell data */
-	if (id && !strcmp(id, "mac-address"))
+	if (id && !strcmp(id, "mac-address")) {
+		bytes = min(bytes, ETH_ALEN);
 		for (i = 0; i < bytes / 2; i++)
 			swap(buf[i], buf[bytes - i - 1]);
+	}

 	return 0;
 }
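The imx-ocotp hunk clamps the byte-reversal for "mac-address" cells to `ETH_ALEN` so an oversized cell cannot swap bytes beyond a 6-byte MAC. A Python model of the post-processing (`ocotp_mac_post_process` is a hypothetical name):

```python
ETH_ALEN = 6  # length of an Ethernet MAC address

def ocotp_mac_post_process(buf: bytes, nbytes: int) -> bytes:
    """Model of the fix: reverse at most ETH_ALEN bytes in place,
    leaving any trailing cell data untouched."""
    out = bytearray(buf)
    n = min(nbytes, ETH_ALEN)
    for i in range(n // 2):
        out[i], out[n - i - 1] = out[n - i - 1], out[i]
    return bytes(out)

# An 8-byte cell: only the first 6 bytes are reversed.
print(ocotp_mac_post_process(bytes(range(8)), 8).hex())  # 0504030201000607
```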
@@ -132,7 +132,7 @@ static int u_boot_env_parse(struct u_boot_env *priv)
 	size_t crc32_data_offset;
 	size_t crc32_data_len;
 	size_t crc32_offset;
-	__le32 *crc32_addr;
+	uint32_t *crc32_addr;
 	size_t data_offset;
 	size_t data_len;
 	size_t dev_size;
@@ -183,8 +183,8 @@ static int u_boot_env_parse(struct u_boot_env *priv)
 		goto err_kfree;
 	}

-	crc32_addr = (__le32 *)(buf + crc32_offset);
-	crc32 = le32_to_cpu(*crc32_addr);
+	crc32_addr = (uint32_t *)(buf + crc32_offset);
+	crc32 = *crc32_addr;
 	crc32_data_len = dev_size - crc32_data_offset;
 	data_len = dev_size - data_offset;
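The u-boot-env hunk reads the stored CRC back as a plain native-endian `uint32_t` rather than forcing a little-endian conversion, on the assumption that U-Boot writes the CRC in the CPU's own byte order. A Python model of the read side (`read_env_crc` is a hypothetical helper; `"=I"` is native byte order, standard size):

```python
import struct
import zlib

def read_env_crc(raw: bytes, crc32_offset: int) -> int:
    """Model of the change above: dereference the stored CRC as a
    native-endian u32 (uint32_t *crc32_addr) with no le32_to_cpu()."""
    return struct.unpack_from("=I", raw, crc32_offset)[0]

# Build a toy env blob: native-endian CRC over the data, then the data.
data = b"bootargs=console=ttyS0\x00"
raw = struct.pack("=I", zlib.crc32(data)) + data
print(read_env_crc(raw, 0) == zlib.crc32(data))  # True
```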
@@ -648,14 +648,15 @@ static void tegra186_utmi_bias_pad_power_on(struct tegra_xusb_padctl *padctl)
 		udelay(100);
 	}

-	if (padctl->soc->trk_hw_mode) {
-		value = padctl_readl(padctl, XUSB_PADCTL_USB2_BIAS_PAD_CTL2);
-		value |= USB2_TRK_HW_MODE;
-		padctl_writel(padctl, value, XUSB_PADCTL_USB2_BIAS_PAD_CTL2);
-	} else {
-		clk_disable_unprepare(priv->usb2_trk_clk);
-	}
+	value = padctl_readl(padctl, XUSB_PADCTL_USB2_BIAS_PAD_CTL2);
+	if (padctl->soc->trk_update_on_idle)
+		value &= ~CYA_TRK_CODE_UPDATE_ON_IDLE;
+	if (padctl->soc->trk_hw_mode)
+		value |= USB2_TRK_HW_MODE;
+	padctl_writel(padctl, value, XUSB_PADCTL_USB2_BIAS_PAD_CTL2);
+
+	if (!padctl->soc->trk_hw_mode)
+		clk_disable_unprepare(priv->usb2_trk_clk);
 }

 static void tegra186_utmi_bias_pad_power_off(struct tegra_xusb_padctl *padctl)
@@ -782,13 +783,15 @@ static int tegra186_xusb_padctl_vbus_override(struct tegra_xusb_padctl *padctl,
 }

 static int tegra186_xusb_padctl_id_override(struct tegra_xusb_padctl *padctl,
-					    bool status)
+					    struct tegra_xusb_usb2_port *port, bool status)
 {
-	u32 value;
+	u32 value, id_override;
+	int err = 0;

 	dev_dbg(padctl->dev, "%s id override\n", status ? "set" : "clear");

 	value = padctl_readl(padctl, USB2_VBUS_ID);
+	id_override = value & ID_OVERRIDE(~0);

 	if (status) {
 		if (value & VBUS_OVERRIDE) {
@@ -799,14 +802,34 @@ static int tegra186_xusb_padctl_id_override(struct tegra_xusb_padctl *padctl,
 			value = padctl_readl(padctl, USB2_VBUS_ID);
 		}

-		value &= ~ID_OVERRIDE(~0);
-		value |= ID_OVERRIDE_GROUNDED;
-	} else {
-		value &= ~ID_OVERRIDE(~0);
-		value |= ID_OVERRIDE_FLOATING;
-	}
-
-	padctl_writel(padctl, value, USB2_VBUS_ID);
+		if (id_override != ID_OVERRIDE_GROUNDED) {
+			value &= ~ID_OVERRIDE(~0);
+			value |= ID_OVERRIDE_GROUNDED;
+			padctl_writel(padctl, value, USB2_VBUS_ID);
+
+			err = regulator_enable(port->supply);
+			if (err) {
+				dev_err(padctl->dev, "Failed to enable regulator: %d\n", err);
+				return err;
+			}
+		}
+	} else {
+		if (id_override == ID_OVERRIDE_GROUNDED) {
+			/*
+			 * The regulator is disabled only when the role transitions
+			 * from USB_ROLE_HOST to USB_ROLE_NONE.
+			 */
+			err = regulator_disable(port->supply);
+			if (err) {
+				dev_err(padctl->dev, "Failed to disable regulator: %d\n", err);
+				return err;
+			}
+
+			value &= ~ID_OVERRIDE(~0);
+			value |= ID_OVERRIDE_FLOATING;
+			padctl_writel(padctl, value, USB2_VBUS_ID);
+		}
+	}

 	return 0;
 }
@@ -826,27 +849,20 @@ static int tegra186_utmi_phy_set_mode(struct phy *phy, enum phy_mode mode,

 	if (mode == PHY_MODE_USB_OTG) {
 		if (submode == USB_ROLE_HOST) {
-			tegra186_xusb_padctl_id_override(padctl, true);
-
-			err = regulator_enable(port->supply);
+			err = tegra186_xusb_padctl_id_override(padctl, port, true);
 			if (err)
 				goto out;
 		} else if (submode == USB_ROLE_DEVICE) {
 			tegra186_xusb_padctl_vbus_override(padctl, true);
 		} else if (submode == USB_ROLE_NONE) {
-			/*
-			 * When port is peripheral only or role transitions to
-			 * USB_ROLE_NONE from USB_ROLE_DEVICE, regulator is not
-			 * enabled.
-			 */
-			if (regulator_is_enabled(port->supply))
-				regulator_disable(port->supply);
-
-			tegra186_xusb_padctl_id_override(padctl, false);
+			err = tegra186_xusb_padctl_id_override(padctl, port, false);
+			if (err)
+				goto out;
 			tegra186_xusb_padctl_vbus_override(padctl, false);
 		}
 	}

 out:
 	mutex_unlock(&padctl->lock);

 	return err;
 }
@@ -1710,7 +1726,8 @@ const struct tegra_xusb_padctl_soc tegra234_xusb_padctl_soc = {
 	.num_supplies = ARRAY_SIZE(tegra194_xusb_padctl_supply_names),
 	.supports_gen2 = true,
 	.poll_trk_completed = true,
-	.trk_hw_mode = true,
+	.trk_hw_mode = false,
+	.trk_update_on_idle = true,
 	.supports_lp_cfg_en = true,
 };
 EXPORT_SYMBOL_GPL(tegra234_xusb_padctl_soc);
@@ -434,6 +434,7 @@ struct tegra_xusb_padctl_soc {
 	bool need_fake_usb3_port;
 	bool poll_trk_completed;
 	bool trk_hw_mode;
+	bool trk_update_on_idle;
 	bool supports_lp_cfg_en;
 };
@@ -157,6 +157,13 @@ static int pwm_regulator_get_voltage(struct regulator_dev *rdev)
 	pwm_get_state(drvdata->pwm, &pstate);

+	if (!pstate.enabled) {
+		if (pstate.polarity == PWM_POLARITY_INVERSED)
+			pstate.duty_cycle = pstate.period;
+		else
+			pstate.duty_cycle = 0;
+	}
+
 	voltage = pwm_get_relative_duty_cycle(&pstate, duty_unit);
 	if (voltage < min(max_uV_duty, min_uV_duty) ||
 	    voltage > max(max_uV_duty, min_uV_duty))
@@ -316,6 +323,32 @@ static int pwm_regulator_init_continuous(struct platform_device *pdev,
 	return 0;
 }

+static int pwm_regulator_init_boot_on(struct platform_device *pdev,
+				      struct pwm_regulator_data *drvdata,
+				      const struct regulator_init_data *init_data)
+{
+	struct pwm_state pstate;
+
+	if (!init_data->constraints.boot_on || drvdata->enb_gpio)
+		return 0;
+
+	pwm_get_state(drvdata->pwm, &pstate);
+	if (pstate.enabled)
+		return 0;
+
+	/*
+	 * Update the duty cycle so the output does not change
+	 * when the regulator core enables the regulator (and
+	 * thus the PWM channel).
+	 */
+	if (pstate.polarity == PWM_POLARITY_INVERSED)
+		pstate.duty_cycle = pstate.period;
+	else
+		pstate.duty_cycle = 0;
+
+	return pwm_apply_might_sleep(drvdata->pwm, &pstate);
+}
+
 static int pwm_regulator_probe(struct platform_device *pdev)
 {
 	const struct regulator_init_data *init_data;
@@ -375,6 +408,13 @@ static int pwm_regulator_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;

+	ret = pwm_regulator_init_boot_on(pdev, drvdata, init_data);
+	if (ret) {
+		dev_err(&pdev->dev, "Failed to apply boot_on settings: %d\n",
+			ret);
+		return ret;
+	}
+
 	regulator = devm_regulator_register(&pdev->dev,
 					    &drvdata->desc, &config);
 	if (IS_ERR(regulator)) {
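The pwm-regulator hunks treat a disabled PWM as driving its inactive level: full period for inversed polarity, zero otherwise. The effective duty-cycle logic can be modeled in Python (`reported_duty` is a hypothetical name for the state fix-up done before `pwm_get_relative_duty_cycle()`):

```python
def reported_duty(enabled: bool, polarity_inversed: bool,
                  duty_cycle: int, period: int) -> int:
    """Model of the get_voltage() fix: when the PWM channel is
    disabled, ignore the stale duty value left in hardware and report
    the duty cycle of the pin's inactive level instead."""
    if not enabled:
        return period if polarity_inversed else 0
    return duty_cycle

print(reported_duty(False, True, 30, 100))   # 100: inversed, disabled
print(reported_duty(False, False, 30, 100))  # 0: normal, disabled
print(reported_duty(True, False, 30, 100))   # 30: enabled, as-is
```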
@@ -58,6 +58,7 @@ struct aspeed_lpc_snoop_model_data {
 };

 struct aspeed_lpc_snoop_channel {
+	bool enabled;
 	struct kfifo fifo;
 	wait_queue_head_t wq;
 	struct miscdevice miscdev;
@@ -190,6 +191,9 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
 	const struct aspeed_lpc_snoop_model_data *model_data =
 		of_device_get_match_data(dev);

+	if (WARN_ON(lpc_snoop->chan[channel].enabled))
+		return -EBUSY;
+
 	init_waitqueue_head(&lpc_snoop->chan[channel].wq);
 	/* Create FIFO datastructure */
 	rc = kfifo_alloc(&lpc_snoop->chan[channel].fifo,
@@ -236,6 +240,8 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
 	regmap_update_bits(lpc_snoop->regmap, HICRB,
 			hicrb_en, hicrb_en);

+	lpc_snoop->chan[channel].enabled = true;
+
 	return 0;

 err_misc_deregister:
@@ -248,6 +254,9 @@ err_free_fifo:
 static void aspeed_lpc_disable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
 				     int channel)
 {
+	if (!lpc_snoop->chan[channel].enabled)
+		return;
+
 	switch (channel) {
 	case 0:
 		regmap_update_bits(lpc_snoop->regmap, HICR5,
@@ -263,8 +272,10 @@ static void aspeed_lpc_disable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
 		return;
 	}

-	kfifo_free(&lpc_snoop->chan[channel].fifo);
+	lpc_snoop->chan[channel].enabled = false;
+	/* Consider improving safety wrt concurrent reader(s) */
 	misc_deregister(&lpc_snoop->chan[channel].miscdev);
+	kfifo_free(&lpc_snoop->chan[channel].fifo);
 }

 static int aspeed_lpc_snoop_probe(struct platform_device *pdev)
@@ -205,7 +205,7 @@ static u64 amd_sdw_send_cmd_get_resp(struct amd_sdw_manager *amd_manager, u32 lo
 
 	if (sts & AMD_SDW_IMM_RES_VALID) {
 		dev_err(amd_manager->dev, "SDW%x manager is in bad state\n", amd_manager->instance);
-		writel(0x00, amd_manager->mmio + ACP_SW_IMM_CMD_STS);
+		writel(AMD_SDW_IMM_RES_VALID, amd_manager->mmio + ACP_SW_IMM_CMD_STS);
 	}
 	writel(upper_data, amd_manager->mmio + ACP_SW_IMM_CMD_UPPER_WORD);
 	writel(lower_data, amd_manager->mmio + ACP_SW_IMM_CMD_LOWER_QWORD);
@@ -1135,9 +1135,11 @@ static int __maybe_unused amd_suspend(struct device *dev)
 	}
 
 	if (amd_manager->power_mode_mask & AMD_SDW_CLK_STOP_MODE) {
+		cancel_work_sync(&amd_manager->amd_sdw_work);
 		amd_sdw_wake_enable(amd_manager, false);
 		return amd_sdw_clock_stop(amd_manager);
 	} else if (amd_manager->power_mode_mask & AMD_SDW_POWER_OFF_MODE) {
+		cancel_work_sync(&amd_manager->amd_sdw_work);
 		amd_sdw_wake_enable(amd_manager, false);
 		/*
 		 * As per hardware programming sequence on AMD platforms,
@@ -4010,10 +4010,13 @@ static int __spi_validate(struct spi_device *spi, struct spi_message *message)
 			    xfer->tx_nbits != SPI_NBITS_QUAD)
 				return -EINVAL;
 			if ((xfer->tx_nbits == SPI_NBITS_DUAL) &&
-			    !(spi->mode & (SPI_TX_DUAL | SPI_TX_QUAD)))
+			    !(spi->mode & (SPI_TX_DUAL | SPI_TX_QUAD | SPI_TX_OCTAL)))
 				return -EINVAL;
 			if ((xfer->tx_nbits == SPI_NBITS_QUAD) &&
-			    !(spi->mode & SPI_TX_QUAD))
+			    !(spi->mode & (SPI_TX_QUAD | SPI_TX_OCTAL)))
 				return -EINVAL;
+			if ((xfer->tx_nbits == SPI_NBITS_OCTAL) &&
+			    !(spi->mode & SPI_TX_OCTAL))
+				return -EINVAL;
 		}
 		/* Check transfer rx_nbits */
@@ -4025,10 +4028,13 @@ static int __spi_validate(struct spi_device *spi, struct spi_message *message)
 			    xfer->rx_nbits != SPI_NBITS_QUAD)
 				return -EINVAL;
 			if ((xfer->rx_nbits == SPI_NBITS_DUAL) &&
-			    !(spi->mode & (SPI_RX_DUAL | SPI_RX_QUAD)))
+			    !(spi->mode & (SPI_RX_DUAL | SPI_RX_QUAD | SPI_RX_OCTAL)))
 				return -EINVAL;
 			if ((xfer->rx_nbits == SPI_NBITS_QUAD) &&
-			    !(spi->mode & SPI_RX_QUAD))
+			    !(spi->mode & (SPI_RX_QUAD | SPI_RX_OCTAL)))
 				return -EINVAL;
+			if ((xfer->rx_nbits == SPI_NBITS_OCTAL) &&
+			    !(spi->mode & SPI_RX_OCTAL))
+				return -EINVAL;
 		}
 
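The widened checks above follow one rule: a transfer at a given width is acceptable when the controller advertises that width or any wider one. A minimal userspace sketch of that fall-through logic (illustrative flag values and helper name, not the kernel's `__spi_validate()`):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative mode flags; the real SPI_TX_* values live in
 * include/uapi/linux/spi/spi.h and differ from these. */
#define SPI_TX_DUAL  0x1u
#define SPI_TX_QUAD  0x2u
#define SPI_TX_OCTAL 0x4u

/* Mirror of the patched tx_nbits validation: dual transfers are
 * satisfied by dual/quad/octal controllers, quad by quad/octal,
 * octal only by octal. */
static bool tx_nbits_ok(unsigned int mode, int nbits)
{
	switch (nbits) {
	case 1:
		return true;
	case 2:
		return mode & (SPI_TX_DUAL | SPI_TX_QUAD | SPI_TX_OCTAL);
	case 4:
		return mode & (SPI_TX_QUAD | SPI_TX_OCTAL);
	case 8:
		return mode & SPI_TX_OCTAL;
	default:
		return false;
	}
}
```

Before the patch an octal-only controller failed the dual/quad tests even though it can drive those transfers, and 8-bit transfers were never validated at all.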
@@ -1465,7 +1465,7 @@ int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
 		return ret;
 
 	data[0] &= ~ADP_DP_CS_0_VIDEO_HOPID_MASK;
-	data[1] &= ~ADP_DP_CS_1_AUX_RX_HOPID_MASK;
+	data[1] &= ~ADP_DP_CS_1_AUX_TX_HOPID_MASK;
 	data[1] &= ~ADP_DP_CS_1_AUX_RX_HOPID_MASK;
 
 	data[0] |= (video << ADP_DP_CS_0_VIDEO_HOPID_SHIFT) &
@@ -3439,7 +3439,7 @@ void tb_sw_set_unplugged(struct tb_switch *sw)
 	}
 }
 
-static int tb_switch_set_wake(struct tb_switch *sw, unsigned int flags)
+static int tb_switch_set_wake(struct tb_switch *sw, unsigned int flags, bool runtime)
 {
 	if (flags)
 		tb_sw_dbg(sw, "enabling wakeup: %#x\n", flags);
@@ -3447,7 +3447,7 @@ static int tb_switch_set_wake(struct tb_switch *sw, unsigned int flags)
 		tb_sw_dbg(sw, "disabling wakeup\n");
 
 	if (tb_switch_is_usb4(sw))
-		return usb4_switch_set_wake(sw, flags);
+		return usb4_switch_set_wake(sw, flags, runtime);
 	return tb_lc_set_wake(sw, flags);
 }
 
@@ -3523,7 +3523,7 @@ int tb_switch_resume(struct tb_switch *sw, bool runtime)
 	tb_switch_check_wakes(sw);
 
 	/* Disable wakes */
-	tb_switch_set_wake(sw, 0);
+	tb_switch_set_wake(sw, 0, true);
 
 	err = tb_switch_tmu_init(sw);
 	if (err)
@@ -3604,7 +3604,7 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime)
 		flags |= TB_WAKE_ON_USB4 | TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE;
 	}
 
-	tb_switch_set_wake(sw, flags);
+	tb_switch_set_wake(sw, flags, runtime);
 
 	if (tb_switch_is_usb4(sw))
 		usb4_switch_set_sleep(sw);
 
@@ -1266,7 +1266,7 @@ int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid);
 int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
 			  size_t size);
 bool usb4_switch_lane_bonding_possible(struct tb_switch *sw);
-int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags);
+int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags, bool runtime);
 int usb4_switch_set_sleep(struct tb_switch *sw);
 int usb4_switch_nvm_sector_size(struct tb_switch *sw);
 int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
 
@@ -405,12 +405,12 @@ bool usb4_switch_lane_bonding_possible(struct tb_switch *sw)
  * usb4_switch_set_wake() - Enabled/disable wake
  * @sw: USB4 router
  * @flags: Wakeup flags (%0 to disable)
+ * @runtime: Wake is being programmed during system runtime
  *
  * Enables/disables router to wake up from sleep.
  */
-int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags)
+int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags, bool runtime)
 {
 	struct usb4_port *usb4;
 	struct tb_port *port;
 	u64 route = tb_route(sw);
 	u32 val;
@@ -440,13 +440,11 @@ int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags)
 			val |= PORT_CS_19_WOU4;
 		} else {
 			bool configured = val & PORT_CS_19_PC;
-			usb4 = port->usb4;
+			bool wakeup = runtime || device_may_wakeup(&port->usb4->dev);
 
-			if (((flags & TB_WAKE_ON_CONNECT) &&
-			     device_may_wakeup(&usb4->dev)) && !configured)
+			if ((flags & TB_WAKE_ON_CONNECT) && wakeup && !configured)
 				val |= PORT_CS_19_WOC;
-			if (((flags & TB_WAKE_ON_DISCONNECT) &&
-			     device_may_wakeup(&usb4->dev)) && configured)
+			if ((flags & TB_WAKE_ON_DISCONNECT) && wakeup && configured)
 				val |= PORT_CS_19_WOD;
 			if ((flags & TB_WAKE_ON_USB4) && configured)
 				val |= PORT_CS_19_WOU4;
@@ -967,7 +967,7 @@ static unsigned int dma_handle_tx(struct eg20t_port *priv)
 			 __func__);
 		return 0;
 	}
-	dma_sync_sg_for_device(port->dev, priv->sg_tx_p, nent, DMA_TO_DEVICE);
+	dma_sync_sg_for_device(port->dev, priv->sg_tx_p, num, DMA_TO_DEVICE);
 	priv->desc_tx = desc;
 	desc->callback = pch_dma_tx_complete;
 	desc->callback_param = priv;
@@ -67,6 +67,12 @@
  */
 #define USB_SHORT_SET_ADDRESS_REQ_TIMEOUT	500  /* ms */
 
+/*
+ * Give SS hubs 200ms time after wake to train downstream links before
+ * assuming no port activity and allowing hub to runtime suspend back.
+ */
+#define USB_SS_PORT_U0_WAKE_TIME	200  /* ms */
+
 /* Protect struct usb_device->state and ->children members
  * Note: Both are also protected by ->dev.sem, except that ->state can
  * change to USB_STATE_NOTATTACHED even when the semaphore isn't held. */
@@ -1066,6 +1072,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
 			goto init2;
 		goto init3;
 	}
+
 	hub_get(hub);
 
 	/* The superspeed hub except for root hub has to use Hub Depth
@@ -1314,6 +1321,17 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
 		device_unlock(&hdev->dev);
 	}
 
+	if (type == HUB_RESUME && hub_is_superspeed(hub->hdev)) {
+		/* give usb3 downstream links training time after hub resume */
+		usb_autopm_get_interface_no_resume(
+			to_usb_interface(hub->intfdev));
+
+		queue_delayed_work(system_power_efficient_wq,
+				   &hub->post_resume_work,
+				   msecs_to_jiffies(USB_SS_PORT_U0_WAKE_TIME));
+		return;
+	}
+
 	hub_put(hub);
 }
 
@@ -1332,6 +1350,14 @@ static void hub_init_func3(struct work_struct *ws)
 	hub_activate(hub, HUB_INIT3);
 }
 
+static void hub_post_resume(struct work_struct *ws)
+{
+	struct usb_hub *hub = container_of(ws, struct usb_hub, post_resume_work.work);
+
+	usb_autopm_put_interface_async(to_usb_interface(hub->intfdev));
+	hub_put(hub);
+}
+
 enum hub_quiescing_type {
 	HUB_DISCONNECT, HUB_PRE_RESET, HUB_SUSPEND
 };
@@ -1357,6 +1383,7 @@ static void hub_quiesce(struct usb_hub *hub, enum hub_quiescing_type type)
 
 	/* Stop hub_wq and related activity */
 	del_timer_sync(&hub->irq_urb_retry);
+	flush_delayed_work(&hub->post_resume_work);
 	usb_kill_urb(hub->urb);
 	if (hub->has_indicators)
 		cancel_delayed_work_sync(&hub->leds);
@@ -1915,6 +1942,7 @@ static int hub_probe(struct usb_interface *intf, const struct usb_device_id *id)
 	hub->hdev = hdev;
 	INIT_DELAYED_WORK(&hub->leds, led_work);
 	INIT_DELAYED_WORK(&hub->init_work, NULL);
+	INIT_DELAYED_WORK(&hub->post_resume_work, hub_post_resume);
 	INIT_WORK(&hub->events, hub_event);
 	INIT_LIST_HEAD(&hub->onboard_hub_devs);
 	spin_lock_init(&hub->irq_urb_lock);
@@ -5692,6 +5720,7 @@ static void port_event(struct usb_hub *hub, int port1)
 	struct usb_device *hdev = hub->hdev;
 	u16 portstatus, portchange;
 	int i = 0;
+	int err;
 
 	connect_change = test_bit(port1, hub->change_bits);
 	clear_bit(port1, hub->event_bits);
@@ -5788,8 +5817,11 @@ static void port_event(struct usb_hub *hub, int port1)
 	} else if (!udev || !(portstatus & USB_PORT_STAT_CONNECTION)
 			|| udev->state == USB_STATE_NOTATTACHED) {
 		dev_dbg(&port_dev->dev, "do warm reset, port only\n");
-		if (hub_port_reset(hub, port1, NULL,
-				HUB_BH_RESET_TIME, true) < 0)
+		err = hub_port_reset(hub, port1, NULL,
+				     HUB_BH_RESET_TIME, true);
+		if (!udev && err == -ENOTCONN)
+			connect_change = 0;
+		else if (err < 0)
 			hub_port_disable(hub, port1, 1);
 	} else {
 		dev_dbg(&port_dev->dev, "do warm reset, full device\n");
 
@@ -69,6 +69,7 @@ struct usb_hub {
 	u8			indicator[USB_MAXCHILDREN];
 	struct delayed_work	leds;
 	struct delayed_work	init_work;
+	struct delayed_work	post_resume_work;
 	struct work_struct	events;
 	spinlock_t		irq_urb_lock;
 	struct timer_list	irq_urb_retry;
@@ -854,13 +854,13 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
 	ret = reset_control_deassert(qcom->resets);
 	if (ret) {
 		dev_err(&pdev->dev, "failed to deassert resets, err=%d\n", ret);
-		goto reset_assert;
+		return ret;
 	}
 
 	ret = dwc3_qcom_clk_init(qcom, of_clk_get_parent_count(np));
 	if (ret) {
 		dev_err_probe(dev, ret, "failed to get clocks\n");
-		goto reset_assert;
+		return ret;
 	}
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
@@ -964,8 +962,6 @@ clk_disable:
 		clk_disable_unprepare(qcom->clks[i]);
 		clk_put(qcom->clks[i]);
 	}
-reset_assert:
-	reset_control_assert(qcom->resets);
 
 	return ret;
 }
@@ -995,8 +993,6 @@ static void dwc3_qcom_remove(struct platform_device *pdev)
 	qcom->num_clocks = 0;
 
 	dwc3_qcom_interconnect_exit(qcom);
-	reset_control_assert(qcom->resets);
 
 	pm_runtime_allow(dev);
 	pm_runtime_disable(dev);
 }
@@ -1067,6 +1067,8 @@ static ssize_t webusb_landingPage_store(struct config_item *item, const char *pa
 	unsigned int bytes_to_strip = 0;
 	int l = len;
 
+	if (!len)
+		return len;
 	if (page[l - 1] == '\n') {
 		--l;
 		++bytes_to_strip;
@@ -1190,6 +1192,8 @@ static ssize_t os_desc_qw_sign_store(struct config_item *item, const char *page,
 	struct gadget_info *gi = os_desc_item_to_gadget_info(item);
 	int res, l;
 
+	if (!len)
+		return len;
 	l = min((int)len, OS_STRING_QW_SIGN_LEN >> 1);
 	if (page[l - 1] == '\n')
 		--l;
@@ -1925,6 +1925,7 @@ static int musb_gadget_stop(struct usb_gadget *g)
 	 * gadget driver here and have everything work;
 	 * that currently misbehaves.
 	 */
+	usb_gadget_set_state(g, USB_STATE_NOTATTACHED);
 
 	/* Force check of devctl register for PM runtime */
 	pm_runtime_mark_last_busy(musb->controller);
@@ -2031,6 +2032,7 @@ void musb_g_disconnect(struct musb *musb)
 	case OTG_STATE_B_PERIPHERAL:
 	case OTG_STATE_B_IDLE:
 		musb_set_state(musb, OTG_STATE_B_IDLE);
+		usb_gadget_set_state(&musb->g, USB_STATE_NOTATTACHED);
 		break;
 	case OTG_STATE_B_SRP_INIT:
 		break;
@@ -803,6 +803,8 @@ static const struct usb_device_id id_table_combined[] = {
 		.driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
 	{ USB_DEVICE(FTDI_VID, FTDI_NDI_AURORA_SCU_PID),
 		.driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
+	{ USB_DEVICE(FTDI_NDI_VID, FTDI_NDI_EMGUIDE_GEMINI_PID),
+		.driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
 	{ USB_DEVICE(TELLDUS_VID, TELLDUS_TELLSTICK_PID) },
 	{ USB_DEVICE(NOVITUS_VID, NOVITUS_BONO_E_PID) },
 	{ USB_DEVICE(FTDI_VID, RTSYSTEMS_USB_VX8_PID) },
@@ -204,6 +204,9 @@
 #define FTDI_NDI_FUTURE_3_PID		0xDA73	/* NDI future device #3 */
 #define FTDI_NDI_AURORA_SCU_PID		0xDA74	/* NDI Aurora SCU */
 
+#define FTDI_NDI_VID			0x23F2
+#define FTDI_NDI_EMGUIDE_GEMINI_PID	0x0003	/* NDI Emguide Gemini */
+
 /*
  * ChamSys Limited (www.chamsys.co.uk) USB wing/interface product IDs
  */
@@ -1415,6 +1415,9 @@ static const struct usb_device_id option_ids[] = {
 	  .driver_info = NCTRL(5) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x40) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x60) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10c7, 0xff, 0xff, 0x30),	/* Telit FE910C04 (ECM) */
+	  .driver_info = NCTRL(4) },
+	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10c7, 0xff, 0xff, 0x40) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x30),	/* Telit FN990B (MBIM) */
 	  .driver_info = NCTRL(6) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x40) },
@@ -2343,6 +2346,8 @@ static const struct usb_device_id option_ids[] = {
 	  .driver_info = RSVD(3) },
 	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe145, 0xff),	/* Foxconn T99W651 RNDIS */
 	  .driver_info = RSVD(5) | RSVD(6) },
+	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe167, 0xff),	/* Foxconn T99W640 MBIM */
+	  .driver_info = RSVD(3) },
 	{ USB_DEVICE(0x1508, 0x1001),				/* Fibocom NL668 (IOT version) */
 	  .driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
 	{ USB_DEVICE(0x1782, 0x4d10) },				/* Fibocom L610 (AT mode) */
@@ -346,8 +346,6 @@ int __cachefiles_write(struct cachefiles_object *object,
 	default:
 		ki->was_async = false;
 		cachefiles_write_complete(&ki->iocb, ret);
-		if (ret > 0)
-			ret = 0;
 		break;
 	}
 
@@ -84,10 +84,8 @@ static ssize_t cachefiles_ondemand_fd_write_iter(struct kiocb *kiocb,
 
 	trace_cachefiles_ondemand_fd_write(object, file_inode(file), pos, len);
 	ret = __cachefiles_write(object, file, pos, iter, NULL, NULL);
-	if (!ret) {
-		ret = len;
+	if (ret > 0)
 		kiocb->ki_pos += ret;
-	}
 
 out:
 	fput(file);
@@ -1486,9 +1486,16 @@ static int isofs_read_inode(struct inode *inode, int relocated)
 		inode->i_op = &page_symlink_inode_operations;
 		inode_nohighmem(inode);
 		inode->i_data.a_ops = &isofs_symlink_aops;
-	} else
+	} else if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
+		   S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
 		/* XXX - parse_rock_ridge_inode() had already set i_rdev. */
 		init_special_inode(inode, inode->i_mode, inode->i_rdev);
+	} else {
+		printk(KERN_DEBUG "ISOFS: Invalid file type 0%04o for inode %lu.\n",
+		       inode->i_mode, inode->i_ino);
+		ret = -EIO;
+		goto fail;
+	}
 
 	ret = 0;
 out:
@@ -2113,6 +2113,11 @@ struct vfsmount *clone_private_mount(const struct path *path)
 	if (!check_mnt(old_mnt))
 		goto invalid;
 
+	if (!ns_capable(old_mnt->mnt_ns->user_ns, CAP_SYS_ADMIN)) {
+		up_read(&namespace_sem);
+		return ERR_PTR(-EPERM);
+	}
+
 	if (has_locked_children(old_mnt, path->dentry))
 		goto invalid;
 
@@ -5042,7 +5042,8 @@ void cifs_oplock_break(struct work_struct *work)
 	struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
 						  oplock_break);
 	struct inode *inode = d_inode(cfile->dentry);
-	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+	struct super_block *sb = inode->i_sb;
+	struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
 	struct cifsInodeInfo *cinode = CIFS_I(inode);
 	struct cifs_tcon *tcon;
 	struct TCP_Server_Info *server;
@@ -5052,6 +5053,12 @@ void cifs_oplock_break(struct work_struct *work)
 	__u64 persistent_fid, volatile_fid;
 	__u16 net_fid;
 
+	/*
+	 * Hold a reference to the superblock to prevent it and its inodes from
+	 * being freed while we are accessing cinode. Otherwise, _cifsFileInfo_put()
+	 * may release the last reference to the sb and trigger inode eviction.
+	 */
+	cifs_sb_active(sb);
 	wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS,
 			TASK_UNINTERRUPTIBLE);
 
@@ -5124,6 +5131,7 @@ oplock_break_ack:
 	cifs_put_tlink(tlink);
 out:
 	cifs_done_oplock_break(cinode);
+	cifs_sb_deactive(sb);
 }
 
 /*
@@ -4271,6 +4271,7 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
 	u8 key[SMB3_ENC_DEC_KEY_SIZE];
 	struct aead_request *req;
 	u8 *iv;
+	DECLARE_CRYPTO_WAIT(wait);
 	unsigned int crypt_len = le32_to_cpu(tr_hdr->OriginalMessageSize);
 	void *creq;
 	size_t sensitive_size;
@@ -4321,7 +4322,11 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
 	aead_request_set_crypt(req, sg, sg, crypt_len, iv);
 	aead_request_set_ad(req, assoc_data_len);
 
-	rc = enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req);
+	aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+				  crypto_req_done, &wait);
+
+	rc = crypto_wait_req(enc ? crypto_aead_encrypt(req)
+			     : crypto_aead_decrypt(req), &wait);
 
 	if (!rc && enc)
 		memcpy(&tr_hdr->Signature, sign, SMB2_SIGNATURE_SIZE);
@@ -2587,7 +2587,7 @@ struct cfg80211_scan_request {
 	ANDROID_KABI_RESERVE(1);
 
 	/* keep last */
-	struct ieee80211_channel *channels[] __counted_by(n_channels);
+	struct ieee80211_channel *channels[];
 };
 
 static inline void get_random_mask_addr(u8 *buf, const u8 *addr, const u8 *mask)
@@ -308,8 +308,19 @@ static inline bool nf_ct_is_expired(const struct nf_conn *ct)
 /* use after obtaining a reference count */
 static inline bool nf_ct_should_gc(const struct nf_conn *ct)
 {
-	return nf_ct_is_expired(ct) && nf_ct_is_confirmed(ct) &&
-	       !nf_ct_is_dying(ct);
+	if (!nf_ct_is_confirmed(ct))
+		return false;
+
+	/* load ct->timeout after is_confirmed() test.
+	 * Pairs with __nf_conntrack_confirm() which:
+	 * 1. Increases ct->timeout value
+	 * 2. Inserts ct into rcu hlist
+	 * 3. Sets the confirmed bit
+	 * 4. Unlocks the hlist lock
+	 */
+	smp_acquire__after_ctrl_dep();
+
+	return nf_ct_is_expired(ct) && !nf_ct_is_dying(ct);
 }
 
 #define NF_CT_DAY	(86400 * HZ)
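The ordering the comment above describes is the classic publish/subscribe pattern: the writer stores the payload (`timeout`) before the flag (confirmed bit), and the reader tests the flag first and only then, with acquire semantics, reads the payload. A userspace analogue using C11 atomics (illustrative sketch, not kernel code; `smp_acquire__after_ctrl_dep()` is modelled here by an acquire load of the flag):

```c
#include <stdatomic.h>
#include <stdbool.h>

static _Atomic int confirmed;
static _Atomic unsigned long timeout;

/* Writer: payload first, flag last with release ordering, mirroring
 * __nf_conntrack_confirm() setting ct->timeout before the confirmed bit. */
static void publish(unsigned long t)
{
	atomic_store_explicit(&timeout, t, memory_order_relaxed);
	atomic_store_explicit(&confirmed, 1, memory_order_release);
}

/* Reader: test the flag, then read the payload.  The acquire load
 * guarantees the timeout value seen is at least as new as the one
 * published before the flag was set. */
static bool should_gc(unsigned long now)
{
	if (!atomic_load_explicit(&confirmed, memory_order_acquire))
		return false;

	return atomic_load_explicit(&timeout, memory_order_relaxed) < now;
}
```

Without the acquire barrier after the control dependency, the reader could observe the confirmed bit set but still read a stale pre-confirm timeout, which is exactly the race the patch closes.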
@@ -278,12 +278,15 @@
 	EM(rxrpc_call_put_userid,		"PUT user-id ") \
 	EM(rxrpc_call_see_accept,		"SEE accept  ") \
 	EM(rxrpc_call_see_activate_client,	"SEE act-clnt") \
+	EM(rxrpc_call_see_already_released,	"SEE alrdy-rl") \
 	EM(rxrpc_call_see_connect_failed,	"SEE con-fail") \
 	EM(rxrpc_call_see_connected,		"SEE connect ") \
 	EM(rxrpc_call_see_conn_abort,		"SEE conn-abt") \
+	EM(rxrpc_call_see_discard,		"SEE discard ") \
 	EM(rxrpc_call_see_disconnected,		"SEE disconn ") \
 	EM(rxrpc_call_see_distribute_error,	"SEE dist-err") \
 	EM(rxrpc_call_see_input,		"SEE input   ") \
 	EM(rxrpc_call_see_recvmsg,		"SEE recvmsg ") \
 	EM(rxrpc_call_see_release,		"SEE release ") \
+	EM(rxrpc_call_see_userid_exists,	"SEE u-exists") \
 	EM(rxrpc_call_see_waiting_call,		"SEE q-conn  ") \
@@ -1537,9 +1537,11 @@ int io_connect(struct io_kiocb *req, unsigned int issue_flags)
 		io = &__io;
 	}
 
-	if (unlikely(req->flags & REQ_F_FAIL)) {
-		ret = -ECONNRESET;
-		goto out;
+	if (connect->in_progress) {
+		struct poll_table_struct pt = { ._key = EPOLLERR };
+
+		if (vfs_poll(req->file, &pt) & EPOLLERR)
+			goto get_sock_err;
 	}
 
 	file_flags = force_nonblock ? O_NONBLOCK : 0;
@@ -1571,8 +1573,10 @@ int io_connect(struct io_kiocb *req, unsigned int issue_flags)
 		 * which means the previous result is good. For both of these,
 		 * grab the sock_error() and use that for the completion.
 		 */
-		if (ret == -EBADFD || ret == -EISCONN)
+		if (ret == -EBADFD || ret == -EISCONN) {
+get_sock_err:
 			ret = sock_error(sock_from_file(req->file)->sk);
+		}
 	}
 	if (ret == -ERESTARTSYS)
 		ret = -EINTR;
 
@@ -308,8 +308,6 @@ static int io_poll_check_events(struct io_kiocb *req, struct io_tw_state *ts)
 			return IOU_POLL_REISSUE;
 		}
 	}
-	if (unlikely(req->cqe.res & EPOLLERR))
-		req_set_fail(req);
 	if (req->apoll_events & EPOLLONESHOT)
 		return IOU_POLL_DONE;
 
@@ -883,6 +883,13 @@ int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args,
 		if (fmt[i] == 'p') {
 			sizeof_cur_arg = sizeof(long);
 
+			if (fmt[i + 1] == 0 || isspace(fmt[i + 1]) ||
+			    ispunct(fmt[i + 1])) {
+				if (tmp_buf)
+					cur_arg = raw_args[num_spec];
+				goto nocopy_fmt;
+			}
+
 			if ((fmt[i + 1] == 'k' || fmt[i + 1] == 'u') &&
 			    fmt[i + 2] == 's') {
 				fmt_ptype = fmt[i + 1];
@@ -890,11 +897,9 @@ int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args,
 				goto fmt_str;
 			}
 
-			if (fmt[i + 1] == 0 || isspace(fmt[i + 1]) ||
-			    ispunct(fmt[i + 1]) || fmt[i + 1] == 'K' ||
+			if (fmt[i + 1] == 'K' ||
 			    fmt[i + 1] == 'x' || fmt[i + 1] == 's' ||
 			    fmt[i + 1] == 'S') {
 				/* just kernel pointers */
 				if (tmp_buf)
 					cur_arg = raw_args[num_spec];
 				i++;
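The hunk above hoists the bare-`%p` terminator test so it runs before the extended `%pK`/`%px`-style handling, which changes how a `%p` immediately followed by another `%` is consumed. A much-simplified illustration of the parsing hazard (toy scanner, not the kernel's `bpf_bprintf_prepare()`; the rejection of `"%p%"` here stands in for the helper returning `-EINVAL`):

```c
#include <ctype.h>
#include <stdbool.h>

/* Toy validator: a bare "%p" is a complete specifier only when followed
 * by end-of-string, whitespace, or punctuation other than '%'.  A '%'
 * right after "%p" is rejected, since treating it as a terminator would
 * let the parser run past the spec and misinterpret what follows. */
static bool fmt_ok(const char *fmt)
{
	for (int i = 0; fmt[i]; i++) {
		if (fmt[i] != '%')
			continue;
		if (fmt[i + 1] == 'p') {
			char nxt = fmt[i + 2];

			if (nxt == '%')
				return false;	/* reject "%p%" */
			if (nxt == 0 || isspace((unsigned char)nxt) ||
			    ispunct((unsigned char)nxt)) {
				i++;		/* bare %p, accepted */
				continue;
			}
			/* extended %p variants not modelled here */
		}
	}
	return true;
}
```

The real helper additionally tracks argument slots and copies data; only the specifier-boundary decision is modelled here.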
@@ -66,15 +66,9 @@ static struct freezer *parent_freezer(struct freezer *freezer)
 bool cgroup_freezing(struct task_struct *task)
 {
 	bool ret;
-	unsigned int state;
 
 	rcu_read_lock();
-	/* Check if the cgroup is still FREEZING, but not FROZEN. The extra
-	 * !FROZEN check is required, because the FREEZING bit is not cleared
-	 * when the state FROZEN is reached.
-	 */
-	state = task_freezer(task)->state;
-	ret = (state & CGROUP_FREEZING) && !(state & CGROUP_FROZEN);
+	ret = task_freezer(task)->state & CGROUP_FREEZING;
 	rcu_read_unlock();
 
 	return ret;
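The behavioral difference of the two tests above is easy to see in isolation: because FREEZING is not cleared when FROZEN is reached, the old predicate flipped to false the moment freezing completed, while the reverted predicate keeps reporting the task as freezing. A sketch with illustrative flag values (the kernel's `CGROUP_FREEZING` is itself a composite mask):

```c
#include <stdbool.h>

/* Illustrative bit values, not the kernel's. */
#define CGROUP_FREEZING 0x2u
#define CGROUP_FROZEN   0x4u

/* Old test: "still FREEZING, but not yet FROZEN". */
static bool freezing_old(unsigned int state)
{
	return (state & CGROUP_FREEZING) && !(state & CGROUP_FROZEN);
}

/* New test: FREEZING alone, true for the whole freeze lifetime,
 * including after the FROZEN bit is set. */
static bool freezing_new(unsigned int state)
{
	return state & CGROUP_FREEZING;
}
```

For a fully frozen cgroup (both bits set) the old helper answers false and the new one true, which is the distinction the patch relies on.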
@@ -80,7 +80,7 @@ long calc_load_fold_active(struct rq *this_rq, long adjust)
 	long nr_active, delta = 0;
 
 	nr_active = this_rq->nr_running - adjust;
-	nr_active += (int)this_rq->nr_uninterruptible;
+	nr_active += (long)this_rq->nr_uninterruptible;
 
 	if (nr_active != this_rq->calc_load_active) {
 		delta = nr_active - this_rq->calc_load_active;
 
@@ -1038,7 +1038,7 @@ struct rq {
 	 * one CPU and if it got migrated afterwards it may decrease
 	 * it on another CPU. Always updated under the runqueue lock:
 	 */
-	unsigned int		nr_uninterruptible;
+	unsigned long		nr_uninterruptible;
 
 	struct task_struct __rcu	*curr;
 	struct task_struct	*idle;
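The struct comment explains why the widening matters: a single runqueue's `nr_uninterruptible` is only meaningful summed across all CPUs, so one CPU's counter can legitimately drift far from zero, and truncating it to 32 bits before the fold corrupts the load average on 64-bit kernels. A sketch of the truncation (illustrative helper names; assumes an LP64 target where `long` is 64-bit):

```c
/* Illustrative fold step: add one runqueue's uninterruptible count to
 * the running count, the way calc_load_fold_active() does. */
static long fold_old(long nr_running, unsigned long nr_unint)
{
	return nr_running + (int)nr_unint;	/* truncates to 32 bits */
}

static long fold_new(long nr_running, unsigned long nr_unint)
{
	return nr_running + (long)nr_unint;	/* keeps the full width */
}
```

With a per-CPU counter that has drifted past 2^32 (possible only because the sum over CPUs, not each counter, is bounded), the old cast silently drops the high bits.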
@@ -2821,7 +2821,10 @@ __register_event(struct trace_event_call *call, struct module *mod)
 	if (ret < 0)
 		return ret;
 
+	down_write(&trace_event_sem);
 	list_add(&call->list, &ftrace_events);
+	up_write(&trace_event_sem);
 
 	if (call->flags & TRACE_EVENT_FL_DYNAMIC)
 		atomic_set(&call->refcnt, 0);
 	else
@@ -3399,6 +3402,8 @@ __trace_add_event_dirs(struct trace_array *tr)
 	struct trace_event_call *call;
 	int ret;
 
+	lockdep_assert_held(&trace_event_sem);
+
 	list_for_each_entry(call, &ftrace_events, list) {
 		ret = __trace_add_new_event(call, tr);
 		if (ret < 0)
@@ -665,8 +665,8 @@ __timerlat_dump_stack(struct trace_buffer *buffer, struct trace_stack *fstack, u
 
 	entry = ring_buffer_event_data(event);
 
-	memcpy(&entry->caller, fstack->calls, size);
 	entry->size = fstack->nr_entries;
+	memcpy(&entry->caller, fstack->calls, size);
 
 	if (!call_filter_check_discard(call, entry, buffer, event))
 		trace_buffer_unlock_commit_nostack(buffer, event);
@@ -656,7 +656,7 @@ static int parse_btf_arg(char *varname,
 		ret = query_btf_context(ctx);
 		if (ret < 0 || ctx->nr_params == 0) {
 			trace_probe_log_err(ctx->offset, NO_BTF_ENTRY);
-			return PTR_ERR(params);
+			return -ENOENT;
 		}
 	}
 	params = ctx->params;
@@ -358,6 +358,35 @@ static int __vlan_device_event(struct net_device *dev, unsigned long event)
 	return err;
 }
 
+static void vlan_vid0_add(struct net_device *dev)
+{
+	struct vlan_info *vlan_info;
+	int err;
+
+	if (!(dev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
+		return;
+
+	pr_info("adding VLAN 0 to HW filter on device %s\n", dev->name);
+
+	err = vlan_vid_add(dev, htons(ETH_P_8021Q), 0);
+	if (err)
+		return;
+
+	vlan_info = rtnl_dereference(dev->vlan_info);
+	vlan_info->auto_vid0 = true;
+}
+
+static void vlan_vid0_del(struct net_device *dev)
+{
+	struct vlan_info *vlan_info = rtnl_dereference(dev->vlan_info);
+
+	if (!vlan_info || !vlan_info->auto_vid0)
+		return;
+
+	vlan_info->auto_vid0 = false;
+	vlan_vid_del(dev, htons(ETH_P_8021Q), 0);
+}
+
 static int vlan_device_event(struct notifier_block *unused, unsigned long event,
 			     void *ptr)
 {
@@ -379,15 +408,10 @@ static int vlan_device_event(struct notifier_block *unused, unsigned long event,
 		return notifier_from_errno(err);
 	}
 
-	if ((event == NETDEV_UP) &&
-	    (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) {
-		pr_info("adding VLAN 0 to HW filter on device %s\n",
-			dev->name);
-		vlan_vid_add(dev, htons(ETH_P_8021Q), 0);
-	}
-	if (event == NETDEV_DOWN &&
-	    (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
-		vlan_vid_del(dev, htons(ETH_P_8021Q), 0);
+	if (event == NETDEV_UP)
+		vlan_vid0_add(dev);
+	else if (event == NETDEV_DOWN)
+		vlan_vid0_del(dev);
 
 	vlan_info = rtnl_dereference(dev->vlan_info);
 	if (!vlan_info)
 
@@ -33,6 +33,7 @@ struct vlan_info {
 	struct vlan_group	grp;
 	struct list_head	vid_list;
 	unsigned int		nr_vids;
+	bool			auto_vid0;
 	struct rcu_head		rcu;
 };
@@ -6796,8 +6796,8 @@ int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
 		return 0;
 	}
 
-	/* No privacy so use a public address. */
-	*own_addr_type = ADDR_LE_DEV_PUBLIC;
+	/* No privacy, use the current address */
+	hci_copy_identity_address(hdev, rand_addr, own_addr_type);
 
 	return 0;
 }
@@ -3485,12 +3485,28 @@ done:
 	/* Configure output options and let the other side know
 	 * which ones we don't like. */
 
-	/* If MTU is not provided in configure request, use the most recently
-	 * explicitly or implicitly accepted value for the other direction,
-	 * or the default value.
+	/* If MTU is not provided in configure request, try adjusting it
+	 * to the current output MTU if it has been set
+	 *
+	 * Bluetooth Core 6.1, Vol 3, Part A, Section 4.5
+	 *
+	 * Each configuration parameter value (if any is present) in an
+	 * L2CAP_CONFIGURATION_RSP packet reflects an ‘adjustment’ to a
+	 * configuration parameter value that has been sent (or, in case
+	 * of default values, implied) in the corresponding
+	 * L2CAP_CONFIGURATION_REQ packet.
 	 */
-	if (mtu == 0)
-		mtu = chan->imtu ? chan->imtu : L2CAP_DEFAULT_MTU;
+	if (!mtu) {
+		/* Only adjust for ERTM channels as for older modes the
+		 * remote stack may not be able to detect that the
+		 * adjustment causing it to silently drop packets.
+		 */
+		if (chan->mode == L2CAP_MODE_ERTM &&
+		    chan->omtu && chan->omtu != L2CAP_DEFAULT_MTU)
+			mtu = chan->omtu;
+		else
+			mtu = L2CAP_DEFAULT_MTU;
+	}
 
 	if (mtu < L2CAP_DEFAULT_MIN_MTU)
 		result = L2CAP_CONF_UNACCEPT;
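The MTU selection introduced above can be sketched as a pure function: when the peer's configure request omits the MTU, only ERTM channels fall back to a previously negotiated output MTU; other modes stick with the spec default, since an older remote stack may silently drop packets it did not expect to be adjusted. Standalone sketch (mode constant values match the L2CAP defines, `pick_mtu` is an illustrative name):

```c
/* L2CAP defaults as in the Bluetooth Core spec / kernel headers. */
#define L2CAP_DEFAULT_MTU	672
#define L2CAP_MODE_BASIC	0x00
#define L2CAP_MODE_ERTM		0x03

/* Mirror of the RSP-side MTU choice: req_mtu == 0 means the peer's
 * configure request carried no MTU option; omtu is the channel's
 * current output MTU, 0 if never negotiated. */
static unsigned int pick_mtu(unsigned int req_mtu, int mode,
			     unsigned int omtu)
{
	if (req_mtu)
		return req_mtu;

	if (mode == L2CAP_MODE_ERTM && omtu && omtu != L2CAP_DEFAULT_MTU)
		return omtu;

	return L2CAP_DEFAULT_MTU;
}
```

The result is still subject to the `L2CAP_DEFAULT_MIN_MTU` check that follows in the real code.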
@@ -1687,6 +1687,9 @@ static void l2cap_sock_resume_cb(struct l2cap_chan *chan)
 {
 	struct sock *sk = chan->data;
 
+	if (!sk)
+		return;
+
 	if (test_and_clear_bit(FLAG_PENDING_SECURITY, &chan->flags)) {
 		sk->sk_state = BT_CONNECTED;
 		chan->state = BT_CONNECTED;
@@ -1380,7 +1380,7 @@ static void smp_timeout(struct work_struct *work)
 
 	bt_dev_dbg(conn->hcon->hdev, "conn %p", conn);
 
-	hci_disconnect(conn->hcon, HCI_ERROR_REMOTE_USER_TERM);
+	hci_disconnect(conn->hcon, HCI_ERROR_AUTH_FAILURE);
 }
 
 static struct smp_chan *smp_chan_create(struct l2cap_conn *conn)
@@ -2978,8 +2978,25 @@ static int smp_sig_channel(struct l2cap_chan *chan, struct sk_buff *skb)
 	if (code > SMP_CMD_MAX)
 		goto drop;
 
-	if (smp && !test_and_clear_bit(code, &smp->allow_cmd))
+	if (smp && !test_and_clear_bit(code, &smp->allow_cmd)) {
+		/* If there is a context and the command is not allowed consider
+		 * it a failure so the session is cleanup properly.
+		 */
+		switch (code) {
+		case SMP_CMD_IDENT_INFO:
+		case SMP_CMD_IDENT_ADDR_INFO:
+		case SMP_CMD_SIGN_INFO:
+			/* 3.6.1. Key distribution and generation
+			 *
+			 * A device may reject a distributed key by sending the
+			 * Pairing Failed command with the reason set to
+			 * "Key Rejected".
+			 */
+			smp_failure(conn, SMP_KEY_REJECTED);
+			break;
+		}
 		goto drop;
+	}
 
 	/* If we don't have a context the only allowed commands are
 	 * pairing request and security request.
@@ -138,6 +138,7 @@ struct smp_cmd_keypress_notify {
 #define SMP_NUMERIC_COMP_FAILED		0x0c
 #define SMP_BREDR_PAIRING_IN_PROGRESS	0x0d
 #define SMP_CROSS_TRANSP_NOT_ALLOWED	0x0e
+#define SMP_KEY_REJECTED		0x0f
 
 #define SMP_MIN_ENC_KEY_SIZE		7
 #define SMP_MAX_ENC_KEY_SIZE		16
@@ -17,6 +17,9 @@ static bool nbp_switchdev_can_offload_tx_fwd(const struct net_bridge_port *p,
 	if (!static_branch_unlikely(&br_switchdev_tx_fwd_offload))
 		return false;
 
+	if (br_multicast_igmp_type(skb))
+		return false;
+
 	return (p->flags & BR_TX_FWD_OFFLOAD) &&
 	       (p->hwdom != BR_INPUT_SKB_CB(skb)->src_hwdom);
 }
@@ -7412,7 +7412,8 @@ int __init addrconf_init(void)
 	if (err < 0)
 		goto out_addrlabel;
 
-	addrconf_wq = create_workqueue("ipv6_addrconf");
+	/* All works using addrconf_wq need to lock rtnl. */
+	addrconf_wq = create_singlethread_workqueue("ipv6_addrconf");
 	if (!addrconf_wq) {
 		err = -ENOMEM;
 		goto out_nowq;
@@ -803,8 +803,8 @@ static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im)
 		} else {
 			im->mca_crcount = idev->mc_qrv;
 		}
-		in6_dev_put(pmc->idev);
 		ip6_mc_clear_src(pmc);
+		in6_dev_put(pmc->idev);
 		kfree_rcu(pmc, rcu);
 	}
 }
@@ -129,13 +129,13 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
 			     struct dst_entry *cache_dst)
 {
 	struct ipv6_rpl_sr_hdr *isrh, *csrh;
-	const struct ipv6hdr *oldhdr;
+	struct ipv6hdr oldhdr;
 	struct ipv6hdr *hdr;
 	unsigned char *buf;
 	size_t hdrlen;
 	int err;
 
-	oldhdr = ipv6_hdr(skb);
+	memcpy(&oldhdr, ipv6_hdr(skb), sizeof(oldhdr));
 
 	buf = kcalloc(struct_size(srh, segments.addr, srh->segments_left), 2, GFP_ATOMIC);
 	if (!buf)
@@ -147,7 +147,7 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
 	memcpy(isrh, srh, sizeof(*isrh));
 	memcpy(isrh->rpl_segaddr, &srh->rpl_segaddr[1],
 	       (srh->segments_left - 1) * 16);
-	isrh->rpl_segaddr[srh->segments_left - 1] = oldhdr->daddr;
+	isrh->rpl_segaddr[srh->segments_left - 1] = oldhdr.daddr;
 
 	ipv6_rpl_srh_compress(csrh, isrh, &srh->rpl_segaddr[0],
 			      isrh->segments_left - 1);
@@ -169,7 +169,7 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
 	skb_mac_header_rebuild(skb);
 
 	hdr = ipv6_hdr(skb);
-	memmove(hdr, oldhdr, sizeof(*hdr));
+	memmove(hdr, &oldhdr, sizeof(*hdr));
 	isrh = (void *)hdr + sizeof(*hdr);
 	memcpy(isrh, csrh, hdrlen);
@@ -1075,6 +1075,12 @@ static int nf_ct_resolve_clash_harder(struct sk_buff *skb, u32 repl_idx)
 
 	hlist_nulls_add_head_rcu(&loser_ct->tuplehash[IP_CT_DIR_REPLY].hnnode,
 				 &nf_conntrack_hash[repl_idx]);
+	/* confirmed bit must be set after hlist add, not before:
+	 * loser_ct can still be visible to other cpu due to
+	 * SLAB_TYPESAFE_BY_RCU.
+	 */
+	smp_mb__before_atomic();
+	set_bit(IPS_CONFIRMED_BIT, &loser_ct->status);
 
 	NF_CT_STAT_INC(net, clash_resolve);
 	return NF_ACCEPT;
@@ -1211,8 +1217,6 @@ __nf_conntrack_confirm(struct sk_buff *skb)
 	 * user context, else we insert an already 'dead' hash, blocking
 	 * further use of that particular connection -JM.
 	 */
-	ct->status |= IPS_CONFIRMED;
-
 	if (unlikely(nf_ct_is_dying(ct))) {
 		NF_CT_STAT_INC(net, insert_failed);
 		goto dying;
@@ -1244,7 +1248,7 @@ chaintoolong:
 		}
 	}
 
-	/* Timer relative to confirmation time, not original
+	/* Timeout is relative to confirmation time, not original
 	   setting time, otherwise we'd get timer wrap in
 	   weird delay cases. */
 	ct->timeout += nfct_time_stamp;
@@ -1252,11 +1256,21 @@ chaintoolong:
 	__nf_conntrack_insert_prepare(ct);
 
 	/* Since the lookup is lockless, hash insertion must be done after
-	 * starting the timer and setting the CONFIRMED bit. The RCU barriers
-	 * guarantee that no other CPU can find the conntrack before the above
-	 * stores are visible.
+	 * setting ct->timeout. The RCU barriers guarantee that no other CPU
+	 * can find the conntrack before the above stores are visible.
 	 */
 	__nf_conntrack_hash_insert(ct, hash, reply_hash);
+
+	/* IPS_CONFIRMED unset means 'ct not (yet) in hash', conntrack lookups
+	 * skip entries that lack this bit. This happens when a CPU is looking
+	 * at a stale entry that is being recycled due to SLAB_TYPESAFE_BY_RCU
+	 * or when another CPU encounters this entry right after the insertion
+	 * but before the set-confirm-bit below. This bit must not be set until
+	 * after __nf_conntrack_hash_insert().
+	 */
+	smp_mb__before_atomic();
+	set_bit(IPS_CONFIRMED_BIT, &ct->status);
+
 	nf_conntrack_double_unlock(hash, reply_hash);
 	local_bh_enable();
@@ -2791,7 +2791,7 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
 	int len_sum = 0;
 	int status = TP_STATUS_AVAILABLE;
 	int hlen, tlen, copylen = 0;
-	long timeo = 0;
+	long timeo;
 
 	mutex_lock(&po->pg_vec_lock);
 
@@ -2845,22 +2845,28 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
 	if ((size_max > dev->mtu + reserve + VLAN_HLEN) && !vnet_hdr_sz)
 		size_max = dev->mtu + reserve + VLAN_HLEN;
 
+	timeo = sock_sndtimeo(&po->sk, msg->msg_flags & MSG_DONTWAIT);
 	reinit_completion(&po->skb_completion);
 
 	do {
 		ph = packet_current_frame(po, &po->tx_ring,
 					  TP_STATUS_SEND_REQUEST);
 		if (unlikely(ph == NULL)) {
-			if (need_wait && skb) {
-				timeo = sock_sndtimeo(&po->sk, msg->msg_flags & MSG_DONTWAIT);
+			/* Note: packet_read_pending() might be slow if we
+			 * have to call it as it's per_cpu variable, but in
+			 * fast-path we don't have to call it, only when ph
+			 * is NULL, we need to check the pending_refcnt.
+			 */
+			if (need_wait && packet_read_pending(&po->tx_ring)) {
 				timeo = wait_for_completion_interruptible_timeout(&po->skb_completion, timeo);
 				if (timeo <= 0) {
 					err = !timeo ? -ETIMEDOUT : -ERESTARTSYS;
 					goto out_put;
 				}
-			}
-			/* check for additional frames */
-			continue;
+				/* check for additional frames */
+				continue;
+			} else
+				break;
 		}
 
 		skb = NULL;
@@ -2949,14 +2955,7 @@ tpacket_error:
 		}
 		packet_increment_head(&po->tx_ring);
 		len_sum += tp_len;
-	} while (likely((ph != NULL) ||
-		/* Note: packet_read_pending() might be slow if we have
-		 * to call it as it's per_cpu variable, but in fast-path
-		 * we already short-circuit the loop with the first
-		 * condition, and luckily don't have to go that path
-		 * anyway.
-		 */
-		(need_wait && packet_read_pending(&po->tx_ring))));
+	} while (1);
 
 	err = len_sum;
 	goto out_put;
@@ -826,6 +826,7 @@ static struct sock *pep_sock_accept(struct sock *sk, int flags, int *errp,
 	}
 
 	/* Check for duplicate pipe handle */
+	pn_skb_get_dst_sockaddr(skb, &dst);
 	newsk = pep_find_pipe(&pn->hlist, &dst, pipe_handle);
 	if (unlikely(newsk)) {
 		__sock_put(newsk);
@@ -850,7 +851,6 @@ static struct sock *pep_sock_accept(struct sock *sk, int flags, int *errp,
 	newsk->sk_destruct = pipe_destruct;
 
 	newpn = pep_sk(newsk);
-	pn_skb_get_dst_sockaddr(skb, &dst);
 	pn_skb_get_src_sockaddr(skb, &src);
 	newpn->pn_sk.sobject = pn_sockaddr_get_object(&dst);
 	newpn->pn_sk.dobject = pn_sockaddr_get_object(&src);
@@ -219,6 +219,7 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
 	tail = b->call_backlog_tail;
 	while (CIRC_CNT(head, tail, size) > 0) {
 		struct rxrpc_call *call = b->call_backlog[tail];
+		rxrpc_see_call(call, rxrpc_call_see_discard);
 		rcu_assign_pointer(call->socket, rx);
 		if (rx->discard_new_call) {
 			_debug("discard %lx", call->user_call_ID);
@@ -589,6 +589,9 @@ void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
 	__be32 code;
 	int ret, ioc;
 
+	if (sp->hdr.type == RXRPC_PACKET_TYPE_ABORT)
+		return; /* Never abort an abort. */
+
 	rxrpc_see_skb(skb, rxrpc_skb_see_reject);
 
 	iov[0].iov_base = &whdr;
@@ -351,6 +351,16 @@ try_again:
 		goto try_again;
 	}
 
+	rxrpc_see_call(call, rxrpc_call_see_recvmsg);
+	if (test_bit(RXRPC_CALL_RELEASED, &call->flags)) {
+		rxrpc_see_call(call, rxrpc_call_see_already_released);
+		list_del_init(&call->recvmsg_link);
+		spin_unlock_irq(&rx->recvmsg_lock);
+		release_sock(&rx->sk);
+		trace_rxrpc_recvmsg(call->debug_id, rxrpc_recvmsg_unqueue, 0);
+		rxrpc_put_call(call, rxrpc_call_put_recvmsg);
+		goto try_again;
+	}
 	if (!(flags & MSG_PEEK))
 		list_del_init(&call->recvmsg_link);
 	else
@@ -374,8 +384,13 @@ try_again:
 
 	release_sock(&rx->sk);
 
-	if (test_bit(RXRPC_CALL_RELEASED, &call->flags))
-		BUG();
+	if (test_bit(RXRPC_CALL_RELEASED, &call->flags)) {
+		rxrpc_see_call(call, rxrpc_call_see_already_released);
+		mutex_unlock(&call->user_mutex);
+		if (!(flags & MSG_PEEK))
+			rxrpc_put_call(call, rxrpc_call_put_recvmsg);
+		goto try_again;
+	}
 
 	if (test_bit(RXRPC_CALL_HAS_USERID, &call->flags)) {
 		if (flags & MSG_CMSG_COMPAT) {
@@ -821,7 +821,9 @@ static struct htb_class *htb_lookup_leaf(struct htb_prio *hprio, const int prio)
 		u32 *pid;
 	} stk[TC_HTB_MAXDEPTH], *sp = stk;
 
-	BUG_ON(!hprio->row.rb_node);
+	if (unlikely(!hprio->row.rb_node))
+		return NULL;
+
 	sp->root = hprio->row.rb_node;
 	sp->pptr = &hprio->ptr;
 	sp->pid = &hprio->last_ptr_id;
Some files were not shown because too many files have changed in this diff.