Merge branch 'android15-6.6' into android15-6.6-lts

Catch the -lts branch up with the recent changes made in android15-6.6:

* cfc94dc0cc Merge tag 'android15-6.6.87_r00' into android15-6.6
* 89643073e4 ANDROID: add USERFAULTFD back to microdroid
* 66459e7963 ANDROID: Repurpose a reserved slot in ipv6_devconf for backports
* 33e1555f77 ANDROID: GKI: Update symbol list for mtk
* 3ea1be5b92 ANDROID: GKI: update symbol list file for xiaomi
* 9743210ec0 BACKPORT: mm/page_alloc: keep track of free highatomic
* b9e2be445a UPSTREAM: mm: page_alloc: fix highatomic typing in multi-block buddies
* bc1e3097e3 BACKPORT: mm: page_alloc: batch vmstat updates in expand()
* 3dc7946030 BACKPORT: mm: page_alloc: consolidate free page accounting
* f15ddfd378 BACKPORT: mm: page_isolation: prepare for hygienic freelists
* a0fe7bbc01 UPSTREAM: mm: page_alloc: set migratetype inside move_freepages()
* 7bd0ba0831 BACKPORT: mm: page_alloc: close migratetype race between freeing and stealing
* 4e814d99e0 UPSTREAM: mm: page_alloc: fix freelist movement during block conversion
* 24d6337da4 UPSTREAM: mm: page_alloc: fix move_freepages_block() range error
* fd83b273bd UPSTREAM: mm: page_alloc: move free pages when converting block during isolation
* 4792c30baf UPSTREAM: mm: page_alloc: fix up block types when merging compatible blocks
* 6a56d21968 BACKPORT: mm: page_alloc: remove pcppage migratetype caching
* 3879594720 ANDROID: Set ALL_KMI_SYMBOLS to CONFIG_UNUSED_KSYMS_WHITELIST.
* 28584d680d ANDROID: Move 6.6 AutoFDO profile to android/gki/aarch64/afdo
* b495f54c6c UPSTREAM: exfat: call bh_read in get_block only when necessary
* 3d9e0194bc UPSTREAM: exfat: fix potential wrong error return from get_block
* 0e60a8dee0 BACKPORT: FROMGIT: dm-bufio: don't schedule in atomic context
* 8a34ab3c95 Revert "mm: resolve faulty mmap_region() error path behaviour"
* a3b47b4bc7 UPSTREAM: net_sched: Prevent creation of classes with TC_H_ROOT
* 041551be52 ANDROID: turn more configs off for microdroid kernel
* 9acf2a0575 ANDROID: turn more configs for microdroid kernel
* fea3e8f8f9 ANDROID: turn CONFIG_EXPERT for microdroid kernels
* eb4b900b64 ANDROID: re-run savedefconfig for microdroid kernels
* 1f04c5fc8c FROMGIT: f2fs: zone: fix to avoid inconsistence in between SIT and SSA
* 22fde3e080 FROMGIT: f2fs: prevent the current section from being selected as a victim during GC
* 4741ef5c96 FROMGIT: f2fs: clean up unnecessary indentation
* eed9b14910 FROMGIT: cgroup/cpuset-v1: Add missing support for cpuset_v2_mode
* ced9dc167c UPSTREAM: PCI/ASPM: Fix L1SS saving
* 57d2e22d5b ANDROID: Update the ABI symbol list
* 3fb238239b ANDROID: fs: Add vendor hooks for ep_create_wakeup_source & timerfd_create
* 5b82c86f62 ANDROID: Add CtsJobSchedulerTestCases to the presubmit group.
* cc59263d5d ANDROID: fuse-bpf: fix wrong logic in read backing
* 8dedd2a376 ANDROID: GKI: Update oplus symbol list
* 26c67c4430 ANDROID: madvise: add vendor hook to bypass madvise
* 08ca8785d3 ANDROID: virtio_balloon: sysfs-configurable option bail_on_out_of_puff
* ffe47cdefe ANDROID: abi_gki_aarch64_honor: whitelist symbols added percpu_ref_is_zero
* 6d50a5ff80 UPSTREAM: bpf: support SKF_NET_OFF and SKF_LL_OFF on skb frags
* 4f551093f5 UPSTREAM: regset: use kvzalloc() for regset_get_alloc()
* 65855f070c ANDROID: fips140: strip debug symbols from fips140.ko
* 76b10136b8 ANDROID: 16K: Add VMA padding size to smaps output
* 04053bc99f ANDROID: 16K: Don't copy data vma for maps/smaps output
* 9be2a46a50 ANDROID: export of_find_gpio.
* 422790d830 ANDROID: GKI: Update oplus symbol list
* 43bf81d175 ANDROID: mm: add vendor hooks for file folio reclaim.
* 138801b055 Revert "ANDROID: f2fs: add kernel message"
* ab3ef5326a Revert "ANDROID: f2fs: introduce sanity_check sysfs entry"
* af63db2f4d UPSTREAM: usbnet:fix NPE during rx_complete
* f77653a633 BACKPORT: FROMGIT: scsi: ufs: core: Fix a race condition related to device commands
* 1a45ce72b0 ANDROID: KVM: iommu: Restrict access KVM_IOMMU_DOMAIN_IDMAP_ID
* fbc642bc8e ANDROID: Add IFTTT analyzer markers for GKI modules
* 18cdf16404 FROMGIT: usb: dwc3: gadget: Prevent irq storm when TH re-executes
* cd2fcd0218 FROMGIT: KVM: arm64: Use acquire/release to communicate FF-A version negotiation
* c56c645543 ANDROID: KVM: arm64: Fix missing poison for huge-mapping
* 272829b273 ANDROID: define python library in BUILD for kunit parser
* b41825e8f4 ANDROID: KVM: arm64: Add pkvm module guest_stage2_pa
* 340ef6cb40 ANDROID: KVM: arm64: Add smc64 trap handling for protected guests
* 139cbbb536 ANDROID: GKI: Add symbols to symbol list for oppo
* 54584a5115 ANDROID: kthread: Export kthread_blkcg
* 05baf38aff ANDROID: sbitmap: Fix sbitmap_spinlock()
* 42a23db419 ANDROID: usb: xhci-plat: vendor hooks for suspend and resume
* 78c3f3bc8c FROMGIT: kasan: Add strscpy() test to trigger tag fault on arm64
* b5223b5d7f FROMGIT: string: Add load_unaligned_zeropad() code path to sized_strscpy()
* cfee73c146 FROMGIT: exfat: fix random stack corruption after get_block
* 0847f904bd UPSTREAM: exfat: short-circuit zero-byte writes in exfat_file_write_iter
* 74e81110e9 UPSTREAM: exfat: fix file being changed by unaligned direct write
* b4b52b9a4d BACKPORT: exfat: do not fallback to buffered write
* 41018b0b3d UPSTREAM: exfat: drop ->i_size_ondisk
* 503f6b7170 UPSTREAM: mm/damon/core: use nr_accesses_bp as a source of damos_before_apply tracepoint
* 752281cabe UPSTREAM: mm/damon/sysfs-schemes: use nr_accesses_bp as the source of tried_regions/<N>/nr_accesses
* 1940b7c64a UPSTREAM: mm/damon/core: make DAMOS uses nr_accesses_bp instead of nr_accesses
* b4537fa4f0 UPSTREAM: mm/damon/core: mark damon_moving_sum() as a static function
* 251e8ac61a UPSTREAM: mm/damon/core: skip updating nr_accesses_bp for each aggregation interval
* 5a8bcac5f6 UPSTREAM: mm/damon/core: use pseudo-moving sum for nr_accesses_bp
* a32703d72c BACKPORT: mm/damon/core: introduce nr_accesses_bp
* 02ec85759f UPSTREAM: mm/damon/core-test: add a unit test for damon_moving_sum()
* dda6109dea UPSTREAM: mm/damon/core: implement a pseudo-moving sum function
* 99418349f2 UPSTREAM: mm/damon/vaddr: call damon_update_region_access_rate() always
* 6c3ca340bf BACKPORT: mm/damon/core: define and use a dedicated function for region access rate update
* cfc20af5ea UPSTREAM: mm/damon/core: add a tracepoint for damos apply target regions
* 02eeb07257 BACKPORT: scsi: ufs: core: Set the Command Priority (CP) flag for RT requests
* 1a7fea5f88 BACKPORT: FROMGIT: dm-verity: support block number limits for different ioprio classes
* 1a7680db91 ANDROID: 16K: Emulate cachestat counters
* 622c3fd0e1 ANDROID: Add pcie-designware.h to aarch64 allowlist
* b62ea68f41 ANDROID: GKI: Update oplus symbol list
* 52e82fb490 ANDROID: vendor_hook: add hooks for I/O priority
* eafcd29b88 ANDROID: GKI: db845c: Add _totalram_pages to symbol list
* 0a83367cd1 ANDROID: GKI: virtual_device: Add _totalram_pages to symbol list
* bbf1176bfb ANDROID: dma-buf/heaps: system_heap: avoid too much allocation
* 662edcf4ff BACKPORT: tcp: fix forever orphan socket caused by tcp_abort
* a1b4999579 BACKPORT: tcp: fix races in tcp_abort()

Change-Id: I09d5b1e5dae482ba0155d1d2e09d1f2169d0e1c5
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Committed by Greg Kroah-Hartman on 2025-04-26 08:49:35 +00:00.
103 changed files with 2398 additions and 1107 deletions

@@ -185,7 +185,7 @@ define_common_kernels(target_configs = {
         "protected_modules_list": ":gki_aarch64_protected_modules",
         "module_implicit_outs": get_gki_modules_list("arm64") + get_kunit_modules_list("arm64"),
         "make_goals": _GKI_AARCH64_MAKE_GOALS,
-        "clang_autofdo_profile": "//toolchain/pgo-profiles/kernel:aarch64/android15-6.6/kernel.afdo",
+        "clang_autofdo_profile": ":android/gki/aarch64/afdo/kernel.afdo",
         "defconfig_fragments": ["arch/arm64/configs/autofdo_gki.fragment"],
         "ddk_headers_archive": ":kernel_aarch64_ddk_headers_archive",
         "extra_dist": [
@@ -752,6 +752,7 @@ kernel_build(
     build_config = "build.config.gki.aarch64.fips140",
     kmi_symbol_list = "android/abi_gki_aarch64_fips140",
     module_outs = ["crypto/fips140.ko"],
+    strip_modules = True,
 )
 
 kernel_abi(
@@ -904,6 +905,16 @@ pkg_install(
     visibility = ["//visibility:private"],
 )
 
+py_library(
+    name = "kunit_parser",
+    srcs = [
+        "tools/testing/kunit/kunit_parser.py",
+        "tools/testing/kunit/kunit_printer.py",
+    ],
+    imports = ["tools/testing/kunit"],
+    visibility = ["//visibility:public"],
+)
+
 # DDK Headers
 # All headers. These are the public targets for DDK modules to use.
 alias(
@@ -985,6 +996,7 @@ ddk_headers(
     hdrs = [
         "drivers/dma-buf/heaps/deferred-free-helper.h",
         "drivers/extcon/extcon.h",
+        "drivers/pci/controller/dwc/pcie-designware.h",
         "drivers/thermal/thermal_core.h",
         "drivers/thermal/thermal_netlink.h",
         "drivers/usb/dwc3/core.h",
@@ -1002,6 +1014,7 @@ ddk_headers(
     "arch/arm64/include/uapi",
     "drivers/dma-buf",
     "drivers/extcon",
+    "drivers/pci/controller/dwc",
    "drivers/thermal",
     "drivers/usb",
     "sound/usb",

@@ -142,8 +142,15 @@ root_hash_sig_key_desc <key_description>
     already in the secondary trusted keyring.
 
 try_verify_in_tasklet
-    If verity hashes are in cache, verify data blocks in kernel tasklet instead
-    of workqueue. This option can reduce IO latency.
+    If verity hashes are in cache and the IO size does not exceed the limit,
+    verify data blocks in bottom half instead of workqueue. This option can
+    reduce IO latency. The size limits can be configured via
+    /sys/module/dm_verity/parameters/use_bh_bytes. The four parameters
+    correspond to limits for IOPRIO_CLASS_NONE,IOPRIO_CLASS_RT,
+    IOPRIO_CLASS_BE and IOPRIO_CLASS_IDLE in turn.
+    For example:
+    <none>,<rt>,<be>,<idle>
+    4096,4096,4096,4096
 
 Theory of operation
 ===================
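Editor's illustration (not part of the commit): given the <none>,<rt>,<be>,<idle> format documented above, userspace could raise the bottom-half verification limit for RT reads only. The 64 KiB value and the C wrapper are arbitrary; only the sysfs path and the comma-separated value format come from the documentation hunk.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Sketch: allow bottom-half verification for RT reads up to 64 KiB and
 * keep the default 4 KiB for the other I/O priority classes.
 */
int main(void)
{
        const char *path = "/sys/module/dm_verity/parameters/use_bh_bytes";
        const char *val = "4096,65536,4096,4096";   /* <none>,<rt>,<be>,<idle> */
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (write(fd, val, strlen(val)) < 0)
                perror("write");
        close(fd);
        return 0;
}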

File diff suppressed because it is too large.

@@ -113,6 +113,14 @@ type 'struct cgroup_root' changed
 type 'struct xhci_sideband' changed
   was fully defined, is now only declared
 
+type 'struct pkvm_module_ops' changed
+  member 'u64 android_kabi_reserved4' was removed
+  member 'union { int(* register_guest_smc_handler)(bool(*)(struct arm_smccc_1_2_regs*, struct arm_smccc_res*, pkvm_handle_t), pkvm_handle_t); struct { u64 android_kabi_reserved4; }; union { }; }' was added
+
+type 'struct pkvm_module_ops' changed
+  member 'u64 android_kabi_reserved5' was removed
+  member 'union { int(* guest_stage2_pa)(pkvm_handle_t, u64, phys_addr_t*); struct { u64 android_kabi_reserved5; }; union { }; }' was added
+
 type 'struct io_ring_ctx' changed
   member 'struct hlist_head io_buf_list' was removed
   28 members ('struct wait_queue_head poll_wq' .. 'struct page** sqe_pages') changed

@@ -53,7 +53,6 @@
   clk_put
   clk_round_rate
   clk_set_rate
-  clk_sync_state
   clk_unprepare
   complete
   complete_all
@@ -224,6 +223,7 @@
   driver_register
   driver_unregister
   drm_add_edid_modes
+  drm_atomic_get_new_connector_for_encoder
   drm_atomic_get_private_obj_state
   drm_atomic_helper_bridge_destroy_state
   drm_atomic_helper_bridge_duplicate_state
@@ -244,7 +244,6 @@
   drm_connector_update_edid_property
   drm_crtc_add_crc_entry
   ___drm_dbg
-  __drm_debug
   __drm_dev_dbg
   drm_dev_printk
   drm_do_get_edid
@@ -258,9 +257,6 @@
   drm_mode_vrefresh
   drm_of_find_panel_or_bridge
   drm_printf
-  drm_rect_rotate
-  drm_rect_rotate_inv
-  drmm_kmalloc
   enable_irq
   eth_type_trans
   _find_first_bit
@@ -275,6 +271,7 @@
   fortify_panic
   free_io_pgtable_ops
   free_irq
+  fwnode_handle_put
   fwnode_property_present
   fwnode_property_read_u32_array
   gcd
@@ -456,8 +453,8 @@
   misc_register
   mod_delayed_work_on
   mod_timer
-  __module_get
   module_layout
+  module_put
   __msecs_to_jiffies
   msleep
   __mutex_init
@@ -582,7 +579,6 @@
   prepare_to_wait_event
   print_hex_dump
   _printk
-  __pskb_copy_fclone
   pskb_expand_head
   __pskb_pull_tail
   put_device
@@ -697,7 +693,6 @@
   simple_read_from_buffer
   single_open
   single_release
-  skb_clone
   skb_copy
   skb_copy_bits
   skb_dequeue
@@ -813,6 +808,7 @@
   usb_disabled
   __usecs_to_jiffies
   usleep_range_state
+  utf8_data_table
   v4l2_ctrl_handler_free
   v4l2_ctrl_handler_init_class
   v4l2_ctrl_new_std
@@ -861,8 +857,6 @@
   vunmap
   vzalloc
   wait_for_completion
-  wait_for_completion_interruptible
-  wait_for_completion_interruptible_timeout
   wait_for_completion_timeout
   __wake_up
   wake_up_process
@@ -1056,6 +1050,7 @@
   __drm_crtc_commit_free
   drm_crtc_commit_wait
   drm_crtc_wait_one_vblank
+  __drm_debug
   drm_display_mode_from_cea_vic
   drm_edid_dup
   drm_edid_duplicate
@@ -1102,6 +1097,7 @@
   __tracepoint_mmap_lock_released
   __tracepoint_mmap_lock_start_locking
   up_read
+  wait_for_completion_interruptible
 
 # required by gpi.ko
   krealloc
@@ -1196,7 +1192,6 @@
   of_cpu_node_to_id
 
 # required by lontium-lt9611.ko
-  drm_atomic_get_new_connector_for_encoder
   drm_hdmi_vendor_infoframe_from_display_mode
 
 # required by lontium-lt9611uxc.ko
@@ -1265,6 +1260,7 @@
   round_jiffies
   round_jiffies_relative
   sg_init_one
+  skb_clone
   skb_clone_sk
   skb_complete_wifi_ack
   skb_copy_expand
@@ -1472,6 +1468,7 @@
   drm_ioctl
   drm_kms_helper_poll_fini
   drm_kms_helper_poll_init
+  drmm_kmalloc
   drm_mm_init
   drm_mm_insert_node_in_range
   drmm_mode_config_init
@@ -1521,6 +1518,8 @@
   __drm_puts_coredump
   __drm_puts_seq_file
   drm_read
+  drm_rect_rotate
+  drm_rect_rotate_inv
   drm_release
   drm_rotation_simplify
   drm_self_refresh_helper_init
@@ -1541,7 +1540,6 @@
   get_unused_fd_flags
   gpiod_get_value
   hdmi_audio_infoframe_pack
-  icc_put
   idr_preload
   invalidate_mapping_pages
   iommu_map_sg
@@ -1709,7 +1707,6 @@
   gpiochip_unlock_as_irq
   handle_fasteoi_ack_irq
   handle_fasteoi_irq
-  module_put
   pinctrl_force_default
   pinctrl_force_sleep
   pm_power_off
@@ -1789,7 +1786,6 @@
   device_get_next_child_node
   devm_iio_device_alloc
   __devm_iio_device_register
-  fwnode_handle_put
   fwnode_property_read_string
   strchrnul
 
@@ -1888,8 +1884,10 @@
   get_user_ifreq
   kernel_bind
   lock_sock_nested
+  __module_get
   proto_register
   proto_unregister
+  __pskb_copy_fclone
   put_user_ifreq
   radix_tree_insert
   radix_tree_iter_delete
@@ -1967,6 +1965,7 @@
   driver_set_override
   platform_device_add
   platform_device_alloc
+  wait_for_completion_interruptible_timeout
 
 # required by slimbus.ko
   device_find_child
@@ -2002,8 +2001,8 @@
   snd_soc_dapm_widget_name_cmp
 
 # required by snd-soc-qcom-common.ko
-  snd_soc_dummy_dlc
   snd_soc_dai_link_set_capabilities
+  snd_soc_dummy_dlc
   snd_soc_of_get_dai_link_codecs
   snd_soc_of_get_dlc
   snd_soc_of_parse_audio_routing
@@ -2093,6 +2092,7 @@
   dma_sync_sg_for_device
   __free_pages
   __sg_page_iter_next
+  _totalram_pages
 
 # required by ufs-qcom.ko
   insert_resource

@@ -93,6 +93,7 @@
   wait_for_completion_io
   bio_crypt_set_ctx
   zero_fill_bio_iter
+  percpu_ref_is_zero
   __trace_bputs
   __traceiter_android_vh_proactive_compact_wmark_high
   __tracepoint_android_vh_proactive_compact_wmark_high

@@ -163,6 +163,7 @@
   cancel_delayed_work_sync
   cancel_work
   cancel_work_sync
+  can_get_echo_skb
   capable
   cdc_parse_cdc_header
   cdev_add

@@ -81,6 +81,7 @@
   iterate_dir
   jiffies_64_to_clock_t
   kick_process
+  kthread_blkcg
   ktime_get_coarse_real_ts64
   ktime_get_raw_ts64
   ktime_get_real_ts64
@@ -193,6 +194,7 @@
   tcp_hashinfo
   tcp_reno_undo_cwnd
   touch_atime
+  __traceiter_android_rvh_do_madvise_bypass
   __traceiter_android_rvh_post_init_entity_util_avg
   __traceiter_android_rvh_rtmutex_force_update
   __traceiter_android_rvh_set_cpus_allowed_comm
@@ -336,6 +338,7 @@
   __traceiter_block_rq_issue
   __traceiter_block_rq_merge
   __traceiter_block_rq_requeue
+  __traceiter_android_vh_check_set_ioprio
   __traceiter_mm_vmscan_kswapd_wake
   __traceiter_net_dev_queue
   __traceiter_net_dev_xmit
@@ -348,6 +351,15 @@
   __traceiter_sched_stat_wait
   __traceiter_sched_waking
   __traceiter_task_rename
+  __traceiter_android_vh_lru_gen_add_folio_skip
+  __traceiter_android_vh_lru_gen_del_folio_skip
+  __traceiter_android_vh_evict_folios_bypass
+  __traceiter_android_vh_keep_reclaimed_folio
+  __traceiter_android_vh_clear_reclaimed_folio
+  __traceiter_android_vh_filemap_pages
+  __traceiter_android_rvh_kswapd_shrink_node
+  __traceiter_android_rvh_perform_reclaim
+  __tracepoint_android_rvh_do_madvise_bypass
   __tracepoint_android_rvh_post_init_entity_util_avg
   __tracepoint_android_rvh_rtmutex_force_update
   __tracepoint_android_rvh_set_cpus_allowed_comm
@@ -491,6 +503,7 @@
   __tracepoint_block_rq_issue
   __tracepoint_block_rq_merge
   __tracepoint_block_rq_requeue
+  __tracepoint_android_vh_check_set_ioprio
   __tracepoint_mm_vmscan_kswapd_wake
   __tracepoint_net_dev_queue
   __tracepoint_net_dev_xmit
@@ -503,6 +516,14 @@
   __tracepoint_sched_stat_wait
   __tracepoint_sched_waking
   __tracepoint_task_rename
+  __tracepoint_android_vh_lru_gen_add_folio_skip
+  __tracepoint_android_vh_lru_gen_del_folio_skip
+  __tracepoint_android_vh_evict_folios_bypass
+  __tracepoint_android_vh_keep_reclaimed_folio
+  __tracepoint_android_vh_clear_reclaimed_folio
+  __tracepoint_android_vh_filemap_pages
+  __tracepoint_android_rvh_kswapd_shrink_node
+  __tracepoint_android_rvh_perform_reclaim
   folio_total_mapcount
   page_mapping
   __trace_puts

@@ -2670,6 +2670,7 @@
   __traceiter_android_vh_enable_thermal_genl_check
   __traceiter_android_vh_filemap_get_folio
   __traceiter_android_vh_free_pages_prepare_init
+  __traceiter_android_vh_ep_create_wakeup_source
   __traceiter_android_vh_ipi_stop
   __traceiter_android_vh_mm_compaction_begin
   __traceiter_android_vh_mm_compaction_end
@@ -2685,6 +2686,7 @@
   __traceiter_android_vh_si_meminfo_adjust
   __traceiter_android_vh_sysrq_crash
   __traceiter_android_vh_tune_swappiness
+  __traceiter_android_vh_timerfd_create
   __traceiter_android_vh_typec_store_partner_src_caps
   __traceiter_android_vh_typec_tcpm_log
   __traceiter_android_vh_typec_tcpm_modify_src_caps
@@ -2795,6 +2797,7 @@
   __tracepoint_android_vh_enable_thermal_genl_check
   __tracepoint_android_vh_filemap_get_folio
   __tracepoint_android_vh_free_pages_prepare_init
+  __tracepoint_android_vh_ep_create_wakeup_source
   __tracepoint_android_vh_ipi_stop
   __tracepoint_android_vh_mm_compaction_begin
   __tracepoint_android_vh_mm_compaction_end
@@ -2810,6 +2813,7 @@
   __tracepoint_android_vh_si_meminfo_adjust
   __tracepoint_android_vh_sysrq_crash
   __tracepoint_android_vh_tune_swappiness
+  __tracepoint_android_vh_timerfd_create
   __tracepoint_android_vh_typec_store_partner_src_caps
   __tracepoint_android_vh_typec_tcpm_log
   __tracepoint_android_vh_typec_tcpm_modify_src_caps

@@ -224,6 +224,7 @@
   kmemdup
   kstrndup
   kstrtobool_from_user
+  kstrtoint
   kthread_create_on_node
   kthread_park
   kthread_should_stop
@@ -297,17 +298,13 @@
   page_pool_alloc_frag
   page_pool_destroy
   page_pool_put_defragged_page
-  param_array_ops
   param_ops_bool
   param_ops_charp
   param_ops_int
   param_ops_uint
   passthru_features_check
   pci_bus_type
-  pci_iomap_range
-  pci_release_region
-  pci_release_selected_regions
-  pci_request_region
-  pci_request_selected_regions
   __per_cpu_offset
   perf_trace_buf_alloc
   perf_trace_run_bpf_submit
@@ -377,7 +374,6 @@
   __serio_register_driver
   __serio_register_port
   serio_unregister_driver
-  set_page_private
   sg_alloc_table
   sg_free_table
   sg_init_one
@@ -405,11 +401,22 @@
   skb_queue_tail
   skb_to_sgvec
   skb_trim
+  snd_card_register
+  snd_ctl_add
+  snd_ctl_new1
+  snd_ctl_notify
+  snd_pcm_format_physical_width
+  snd_pcm_hw_constraint_integer
+  snd_pcm_new
+  snd_pcm_period_elapsed
+  snd_pcm_set_managed_buffer_all
+  snd_pcm_set_ops
   snprintf
   sprintf
   sscanf
   __stack_chk_fail
   strcasecmp
+  strchr
   strcmp
   strcpy
   strlen
@@ -453,7 +460,6 @@
   usb_create_shared_hcd
   usb_deregister
   usb_disabled
-  usb_find_common_endpoints
   usb_free_urb
   usb_get_dev
   usb_hcd_check_unlink_urb
@@ -473,6 +479,7 @@
   usb_unanchor_urb
   __usecs_to_jiffies
   usleep_range_state
+  utf8_data_table
   v4l2_device_register
   v4l2_device_unregister
   v4l2_event_pending
@@ -700,6 +707,8 @@
   pci_enable_device
   pci_read_config_byte
   __pci_register_driver
+  pci_release_region
+  pci_request_region
   pci_unregister_driver
 
 # required by goldfish_battery.ko
@@ -747,6 +756,7 @@
   unregister_candev
   usb_control_msg_recv
   usb_control_msg_send
+  usb_find_common_endpoints
 
 # required by hci_vhci.ko
   _copy_from_iter
@@ -1044,6 +1054,7 @@
   dma_sync_sg_for_cpu
   __sg_page_iter_next
   __sg_page_iter_start
+  _totalram_pages
   vmap
   vunmap
@@ -1056,7 +1067,6 @@
 # required by v4l2loopback.ko
   kstrtoull
   mutex_lock_killable
-  param_array_ops
   v4l2_ctrl_handler_free
   v4l2_ctrl_handler_init_class
   v4l2_ctrl_handler_setup
@@ -1085,12 +1095,10 @@
 # required by vhci-hcd.ko
   kernel_sendmsg
   kernel_sock_shutdown
-  kstrtoint
   kstrtoll
   kthread_stop_put
   platform_bus
   sockfd_lookup
-  strchr
   sysfs_create_group
   sysfs_remove_group
   usb_speed_string
@@ -1310,10 +1318,6 @@
   xdp_rxq_info_unreg
   xdp_warn
 
-# required by virtio_pci_legacy_dev.ko
-  pci_iomap
-  pci_iounmap
-
 # required by virtio_pmem.ko
   nvdimm_bus_register
   nvdimm_bus_unregister
@@ -1322,20 +1326,10 @@
 # required by virtio_snd.ko
   snd_card_free
   snd_card_new
-  snd_card_register
-  snd_ctl_add
-  snd_ctl_new1
-  snd_ctl_notify
   snd_jack_new
   snd_jack_report
   snd_pcm_add_chmap_ctls
-  snd_pcm_format_physical_width
-  snd_pcm_hw_constraint_integer
   snd_pcm_lib_ioctl
-  snd_pcm_new
-  snd_pcm_period_elapsed
-  snd_pcm_set_managed_buffer_all
-  snd_pcm_set_ops
   wait_for_completion_interruptible_timeout
 
 # required by virtual-cpufreq.ko
@@ -1480,10 +1474,15 @@
   pci_find_ext_capability
   pci_find_next_capability
   pci_free_irq_vectors
+  pci_iomap
+  pci_iomap_range
+  pci_iounmap
   pci_irq_get_affinity
   pci_irq_vector
   pci_read_config_dword
   pci_read_config_word
+  pci_release_selected_regions
+  pci_request_selected_regions
   pci_set_master
   pci_vfs_assigned
   pipe_lock
@@ -1497,6 +1496,7 @@
   set_capacity_and_notify
   set_disk_ro
   __SetPageMovable
+  set_page_private
   sg_alloc_table_chained
   sg_free_table_chained
   si_mem_available

@@ -1415,6 +1415,7 @@
   of_find_node_by_name
   of_find_node_opts_by_path
   of_find_property
+  of_find_gpio
   of_fwnode_ops
   of_get_child_by_name
   of_get_compatible_child

@@ -0,0 +1,47 @@
+# AutoFDO profiles for Android common kernels
+
+This directory contains AutoFDO profiles for Android common kernels. These profiles can be used to
+optimize kernel builds for specific architectures and kernel versions.
+
+## kernel.afdo
+
+kernel.afdo is an AArch64 kernel profile collected on kernel version 6.6.82 (
+SHA b62ea68f41a901d5f07f48bd6f1d3a117d801411, build server ID 13287877) using Pixel 6.
+
+### Performance improvements
+
+| Benchmark             | Improvement |
+| --------------------- | ----------- |
+| Boot time             | 2.2%        |
+| Cold App launch time  | 2.7%        |
+| Binder-rpc            | 4.4%        |
+| Binder-addints        | 14.1%       |
+| Hwbinder              | 17.0%       |
+| Bionic (syscall_mmap) | 1.6%        |
+
+Benchmark results were tested on Pixel 6.
+
+To test a kernel prebuilt with the AutoFDO profile, navigate to [Android build server](
+https://ci.android.com/builds/branches/aosp_kernel-common-android15-6.6/grid) and download
+the kernel prebuilts under the `kernel_aarch64_autofdo` target.
+
+## Steps to reproduce the profile
+
+A kernel profile is generated by running app crawling and app launching for top 100 apps from Google
+Play Store. While running, we collect ETM data for the kernel, which records executed instruction
+stream. Finally, we merge and convert ETM data to one AutoFDO profile.
+
+1. Build a kernel image and flash it on an Android device
+   * The source code and test device used to generate each profile are described above.
+   * We use a Pixel device. But using other real devices should get a similar profile.
+2. Run app crawling and app launching for top 100 apps
+   * Add a gmail account on the test device. Because app crawler can use the account to automatically
+     login some of the apps.
+   * We run [App Crawler](https://developer.android.com/studio/test/other-testing-tools/app-crawler)
+     for one app for 3 minutes, and run it twice.
+   * We run app launching for one app for 3 seconds, and run it 15 times. After each running, the
+     app is killed and cache is cleared. So we get profile for cold app startups.
+3. Record ETM data while running app crawling and app launching.
+   * We use cmdline `simpleperf record -e cs-etm:k -a` to [record ETM data for the kernel](https://android.googlesource.com/platform/system/extras/+/master/simpleperf/doc/collect_etm_data_for_autofdo.md).
Binary file not shown.

@@ -5,14 +5,9 @@ CONFIG_PREEMPT=y
 CONFIG_IRQ_TIME_ACCOUNTING=y
 CONFIG_PSI=y
 CONFIG_RCU_EXPERT=y
-CONFIG_RCU_BOOST=y
-CONFIG_RCU_NOCB_CPU=y
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
-# CONFIG_UTS_NS is not set
-# CONFIG_TIME_NS is not set
-# CONFIG_PID_NS is not set
-# CONFIG_NET_NS is not set
+CONFIG_LOG_BUF_SHIFT=14
 # CONFIG_RD_GZIP is not set
 # CONFIG_RD_BZIP2 is not set
 # CONFIG_RD_LZMA is not set
@@ -20,11 +15,13 @@ CONFIG_IKCONFIG_PROC=y
 # CONFIG_RD_LZO is not set
 # CONFIG_RD_ZSTD is not set
 CONFIG_BOOT_CONFIG=y
+CONFIG_EXPERT=y
+# CONFIG_IO_URING is not set
 CONFIG_PROFILING=y
-CONFIG_KEXEC_FILE=y
 CONFIG_SCHED_MC=y
 CONFIG_NR_CPUS=32
 CONFIG_PARAVIRT_TIME_ACCOUNTING=y
+CONFIG_KEXEC_FILE=y
 CONFIG_ARM64_SW_TTBR0_PAN=y
 CONFIG_RANDOMIZE_BASE=y
 # CONFIG_RANDOMIZE_MODULE_REGION_FULL is not set
@@ -40,10 +37,11 @@ CONFIG_VIRTUALIZATION=y
 CONFIG_JUMP_LABEL=y
 CONFIG_SHADOW_CALL_STACK=y
 CONFIG_CFI_CLANG=y
-CONFIG_BLK_DEV_ZONED=y
+# CONFIG_BLOCK_LEGACY_AUTOLOAD is not set
 CONFIG_PARTITION_ADVANCED=y
 # CONFIG_MSDOS_PARTITION is not set
-CONFIG_IOSCHED_BFQ=y
+# CONFIG_MQ_IOSCHED_DEADLINE is not set
+# CONFIG_MQ_IOSCHED_KYBER is not set
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_BINFMT_MISC=y
 # CONFIG_SLAB_MERGE_DEFAULT is not set
@@ -51,8 +49,6 @@ CONFIG_SLAB_FREELIST_RANDOM=y
 CONFIG_SLAB_FREELIST_HARDENED=y
 CONFIG_SHUFFLE_PAGE_ALLOCATOR=y
 # CONFIG_COMPAT_BRK is not set
-CONFIG_MEMORY_HOTPLUG=y
-CONFIG_MEMORY_HOTREMOVE=y
 CONFIG_DEFAULT_MMAP_MIN_ADDR=32768
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
@@ -69,9 +65,8 @@ CONFIG_PCI=y
 CONFIG_PCIEPORTBUS=y
 CONFIG_PCIEAER=y
 CONFIG_PCI_IOV=y
+# CONFIG_VGA_ARB is not set
 CONFIG_PCI_HOST_GENERIC=y
-CONFIG_PCIE_DW_PLAT_EP=y
-CONFIG_PCIE_KIRIN=y
 CONFIG_PCI_ENDPOINT=y
 CONFIG_FW_LOADER_USER_HELPER=y
 # CONFIG_FW_CACHE is not set
@@ -102,7 +97,6 @@ CONFIG_SERIAL_8250_RUNTIME_UARTS=0
 CONFIG_SERIAL_OF_PLATFORM=y
 CONFIG_NULL_TTY=y
 CONFIG_VIRTIO_CONSOLE=y
-CONFIG_HW_RANDOM=y
 CONFIG_HW_RANDOM_CCTRNG=y
 # CONFIG_DEVMEM is not set
 # CONFIG_DEVPORT is not set
@@ -119,7 +113,6 @@ CONFIG_RTC_DRV_PL030=y
 CONFIG_RTC_DRV_PL031=y
 CONFIG_DMABUF_HEAPS=y
 CONFIG_DMABUF_SYSFS_STATS=y
-CONFIG_UIO=y
 CONFIG_VIRTIO_PCI=y
 CONFIG_VIRTIO_BALLOON=y
 CONFIG_STAGING=y
@@ -142,6 +135,7 @@ CONFIG_STATIC_USERMODEHELPER=y
 CONFIG_STATIC_USERMODEHELPER_PATH=""
 CONFIG_SECURITY_SELINUX=y
 CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
+CONFIG_BUG_ON_DATA_CORRUPTION=y
 CONFIG_CRYPTO_HCTR2=y
 CONFIG_CRYPTO_LZO=y
 CONFIG_CRYPTO_SHA2_ARM64_CE=y
@@ -152,16 +146,13 @@ CONFIG_DMA_RESTRICTED_POOL=y
 CONFIG_PRINTK_TIME=y
 CONFIG_PRINTK_CALLER=y
 CONFIG_DYNAMIC_DEBUG_CORE=y
-CONFIG_DEBUG_KERNEL=y
 CONFIG_DEBUG_INFO_DWARF5=y
 CONFIG_DEBUG_INFO_REDUCED=y
-CONFIG_DEBUG_INFO_COMPRESSED=y
 CONFIG_HEADERS_INSTALL=y
 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_UBSAN=y
 CONFIG_UBSAN_TRAP=y
-CONFIG_UBSAN_LOCAL_BOUNDS=y
 # CONFIG_UBSAN_SHIFT is not set
 # CONFIG_UBSAN_BOOL is not set
 # CONFIG_UBSAN_ENUM is not set
@@ -174,8 +165,6 @@ CONFIG_PANIC_TIMEOUT=-1
 CONFIG_SOFTLOCKUP_DETECTOR=y
 CONFIG_WQ_WATCHDOG=y
 CONFIG_SCHEDSTATS=y
-# CONFIG_DEBUG_PREEMPT is not set
-CONFIG_BUG_ON_DATA_CORRUPTION=y
 CONFIG_HIST_TRIGGERS=y
 CONFIG_PID_IN_CONTEXTIDR=y
 # CONFIG_RUNTIME_TESTING_MENU is not set

@@ -658,4 +658,7 @@ int __pkvm_topup_hyp_alloc(unsigned long nr_pages);
 int pkvm_call_hyp_nvhe_ppage(struct kvm_pinned_page *ppage,
                              int (*call_hyp_nvhe)(u64, u64, u8, void*),
                              void *args, bool unmap);
+
+int pkvm_guest_stage2_pa(pkvm_handle_t handle, u64 ipa, phys_addr_t *phys);
+
 #endif /* __ARM64_KVM_PKVM_H__ */

@@ -97,6 +97,10 @@ enum pkvm_psci_notification {
  * @register_host_smc_handler:  @cb is called whenever the host issues an SMC
  *                              pKVM couldn't handle. If @cb returns false, the
  *                              SMC will be forwarded to EL3.
+ * @register_guest_smc_handler: @cb is called whenever a guest identified by the
+ *                              pkvm_handle issues an SMC which pKVM doesn't
+ *                              handle. If @cb returns false, the control is
+ *                              given back to the host kernel to handle the exit.
  * @register_default_trap_handler:
  *                              @cb is called whenever EL2 traps EL1 and pKVM
  *                              has not handled it. If @cb returns false, the
@@ -161,6 +165,14 @@ enum pkvm_psci_notification {
  * @iommu_donate_pages_atomic:  Allocate memory from IOMMU identity pool.
  * @iommu_reclaim_pages_atomic: Reclaim memory from iommu_donate_pages_atomic()
  * @hyp_smp_processor_id:       Current CPU id
+ * @guest_stage2_pa:            Look up and return the PA (@phys) mapped into
+ *                              the specified VM (@handle) at the specified
+ *                              intermediate physical address (@ipa). If there
+ *                              is no mapping, or if it is a block mapping,
+ *                              then -EINVAL will be returned. Note that no
+ *                              lock or pin is held on the returned PA; the
+ *                              only guarantee is that @handle:@ipa -> @phys
+ *                              at some point during the call to this function.
  */
 struct pkvm_module_ops {
         int (*create_private_mapping)(phys_addr_t phys, size_t size,
@@ -227,8 +239,13 @@ struct pkvm_module_ops {
         ANDROID_KABI_USE(1, void (*iommu_flush_unmap_cache)(struct kvm_iommu_paddr_cache *cache));
         ANDROID_KABI_USE(2, int (*host_stage2_enable_lazy_pte)(u64 addr, u64 nr_pages));
         ANDROID_KABI_USE(3, int (*host_stage2_disable_lazy_pte)(u64 addr, u64 nr_pages));
-        ANDROID_KABI_RESERVE(4);
-        ANDROID_KABI_RESERVE(5);
+        ANDROID_KABI_USE(4, int (*register_guest_smc_handler)(bool (*cb)(
+                                struct arm_smccc_1_2_regs *,
+                                struct arm_smccc_res *res,
+                                pkvm_handle_t handle),
+                            pkvm_handle_t handle));
+        ANDROID_KABI_USE(5, int (*guest_stage2_pa)(pkvm_handle_t handle,
+                                                   u64 ipa, phys_addr_t *phys));
         ANDROID_KABI_RESERVE(6);
         ANDROID_KABI_RESERVE(7);
         ANDROID_KABI_RESERVE(8);
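Editor's note: a minimal sketch of how an EL2 module might consume the new register_guest_smc_handler slot. The header name, module entry-point shape, vendor SMC function ID, and attach flow are assumptions for illustration only; the callback signature and the false-return semantics (exit handed back to the host) come from the kerneldoc and KABI hunks above.

/* Hypothetical EL2 module sketch; not code from this commit. */
#include <asm/kvm_pkvm_module.h>        /* assumed module-API header */

#define MY_VENDOR_SMC_ID        0xc6001234      /* made-up function ID */

static const struct pkvm_module_ops *mod_ops;

static bool my_guest_smc(struct arm_smccc_1_2_regs *regs,
                         struct arm_smccc_res *res, pkvm_handle_t handle)
{
        if (regs->a0 != MY_VENDOR_SMC_ID)
                return false;   /* unhandled: control returns to the host */

        res->a0 = 0;            /* result the calling guest will observe */
        return true;            /* handled entirely at EL2 */
}

int my_module_init(const struct pkvm_module_ops *ops)
{
        mod_ops = ops;
        return 0;
}

/* Called once the target VM's pkvm_handle is known (mechanism not shown). */
int my_module_attach_vm(pkvm_handle_t handle)
{
        /* A second registration for the same VM fails with -EBUSY. */
        return mod_ops->register_guest_smc_handler(my_guest_smc, handle);
}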

@@ -100,4 +100,7 @@ static __always_inline void __load_host_stage2(void)
         else
                 write_sysreg(0, vttbr_el2);
 }
+
+int guest_stage2_pa(struct pkvm_hyp_vm *vm, u64 ipa, phys_addr_t *phys);
+
 #endif /* __KVM_NVHE_MEM_PROTECT__ */

@@ -19,7 +19,7 @@ void *hyp_fixmap_map(phys_addr_t phys);
 void hyp_fixmap_unmap(void);
 void *hyp_fixblock_map(phys_addr_t phys);
 void hyp_fixblock_unmap(void);
-void hyp_poison_page(phys_addr_t phys);
+void hyp_poison_page(phys_addr_t phys, size_t page_size);
 
 int hyp_create_idmap(u32 hyp_va_bits);
 int hyp_map_vectors(void);

@@ -1,9 +1,15 @@
 #include <asm/kvm_pgtable.h>
+#include <linux/kvm_host.h>
+#include <linux/arm-smccc.h>
 
 #define HCALL_HANDLED 0
 #define HCALL_UNHANDLED -1
 
 int __pkvm_register_host_smc_handler(bool (*cb)(struct user_pt_regs *));
+int __pkvm_register_guest_smc_handler(bool (*cb)(struct arm_smccc_1_2_regs *,
+                                                 struct arm_smccc_res *res,
+                                                 pkvm_handle_t handle),
+                                      pkvm_handle_t handle);
 int __pkvm_register_default_trap_handler(bool (*cb)(struct user_pt_regs *));
 int __pkvm_register_illegal_abt_notifier(void (*cb)(struct user_pt_regs *));
 int __pkvm_register_hyp_panic_notifier(void (*cb)(struct user_pt_regs *));

@@ -74,6 +74,9 @@ struct pkvm_hyp_vm {
          */
         bool is_dying;
 
+        bool (*smc_handler)(struct arm_smccc_1_2_regs *regs,
+                            struct arm_smccc_res *res, pkvm_handle_t handle);
+
         /* Array of the hyp vCPU structures for this VM. */
         struct pkvm_hyp_vcpu *vcpus[];
 };
@@ -140,6 +143,8 @@ void pkvm_reset_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
 bool kvm_handle_pvm_hvc64(struct kvm_vcpu *vcpu, u64 *exit_code);
 bool kvm_hyp_handle_hvc64(struct kvm_vcpu *vcpu, u64 *exit_code);
+bool kvm_handle_pvm_smc64(struct kvm_vcpu *vcpu, u64 *exit_code);
 
 struct pkvm_hyp_vcpu *pkvm_mpidr_to_hyp_vcpu(struct pkvm_hyp_vm *vm, u64 mpidr);
 
 static inline bool pkvm_hyp_vm_has_pvmfw(struct pkvm_hyp_vm *vm)

@@ -730,10 +730,10 @@ static void do_ffa_version(struct arm_smccc_res *res,
                 hyp_ffa_version = ffa_req_version;
         }
 
-        if (hyp_ffa_post_init())
+        if (hyp_ffa_post_init()) {
                 res->a0 = FFA_RET_NOT_SUPPORTED;
-        else {
-                has_version_negotiated = true;
+        } else {
+                smp_store_release(&has_version_negotiated, true);
                 res->a0 = hyp_ffa_version;
         }
 unlock:
@@ -815,7 +815,8 @@ bool kvm_host_ffa_handler(struct kvm_cpu_context *ctxt, u32 func_id)
         if (!is_ffa_call(func_id))
                 return false;
 
-        if (!has_version_negotiated && func_id != FFA_VERSION) {
+        if (func_id != FFA_VERSION &&
+            !smp_load_acquire(&has_version_negotiated)) {
                 ffa_to_smccc_error(&res, FFA_RET_INVALID_PARAMETERS);
                 goto unhandled;
         }

@@ -494,6 +494,9 @@ int kvm_iommu_map_pages(pkvm_handle_t domain_id, unsigned long iova,
             iova + size < iova || paddr + size < paddr)
                 return -E2BIG;
 
+        if (domain_id == KVM_IOMMU_DOMAIN_IDMAP_ID)
+                return -EINVAL;
+
         domain = handle_to_domain(domain_id);
         if (!domain || domain_get(domain))
                 return -ENOENT;
@@ -595,6 +598,9 @@ size_t kvm_iommu_unmap_pages(pkvm_handle_t domain_id,
             iova + size < iova)
                 return 0;
 
+        if (domain_id == KVM_IOMMU_DOMAIN_IDMAP_ID)
+                return 0;
+
         domain = handle_to_domain(domain_id);
         if (!domain || domain_get(domain))
                 return 0;
@@ -626,6 +632,9 @@ phys_addr_t kvm_iommu_iova_to_phys(pkvm_handle_t domain_id, unsigned long iova)
         if (!kvm_iommu_ops || !kvm_iommu_ops->iova_to_phys)
                 return 0;
 
+        if (domain_id == KVM_IOMMU_DOMAIN_IDMAP_ID)
+                return iova;
+
         domain = handle_to_domain( domain_id);
         if (!domain || domain_get(domain))

@@ -395,7 +395,7 @@ static int relinquish_walker(const struct kvm_pgtable_visit_ctx *ctx,
         phys += ctx->addr - addr;
 
         if (state == PKVM_PAGE_OWNED) {
-                hyp_poison_page(phys);
+                hyp_poison_page(phys, PAGE_SIZE);
                 psci_mem_protect_dec(1);
         }
 
@@ -2797,20 +2797,29 @@ int __pkvm_host_donate_guest(struct pkvm_hyp_vcpu *vcpu, u64 pfn, u64 gfn,
         return ret;
 }
 
-void hyp_poison_page(phys_addr_t phys)
+void hyp_poison_page(phys_addr_t phys, size_t size)
 {
-        void *addr = hyp_fixmap_map(phys);
-
-        memset(addr, 0, PAGE_SIZE);
-        /*
-         * Prefer kvm_flush_dcache_to_poc() over __clean_dcache_guest_page()
-         * here as the latter may elide the CMO under the assumption that FWB
-         * will be enabled on CPUs that support it. This is incorrect for the
-         * host stage-2 and would otherwise lead to a malicious host potentially
-         * being able to read the contents of newly reclaimed guest pages.
-         */
-        kvm_flush_dcache_to_poc(addr, PAGE_SIZE);
-        hyp_fixmap_unmap();
+        WARN_ON(!PAGE_ALIGNED(size));
+
+        while (size) {
+                size_t __size = size == PMD_SIZE ? size : PAGE_SIZE;
+                void *addr = __fixmap_guest_page(__hyp_va(phys), &__size);
+
+                memset(addr, 0, __size);
+
+                /*
+                 * Prefer kvm_flush_dcache_to_poc() over __clean_dcache_guest_page()
+                 * here as the latter may elide the CMO under the assumption that FWB
+                 * will be enabled on CPUs that support it. This is incorrect for the
+                 * host stage-2 and would otherwise lead to a malicious host potentially
+                 * being able to read the contents of newly reclaimed guest pages.
+                 */
+                kvm_flush_dcache_to_poc(addr, __size);
+                __fixunmap_guest_page(__size);
+
+                size -= __size;
+                phys += __size;
+        }
 }
 
 void destroy_hyp_vm_pgt(struct pkvm_hyp_vm *vm)
@@ -2845,7 +2854,7 @@ int __pkvm_host_reclaim_page(struct pkvm_hyp_vm *vm, u64 pfn, u64 ipa, u8 order)
         switch((int)guest_get_page_state(pte, ipa)) {
         case PKVM_PAGE_OWNED:
                 WARN_ON(__host_check_page_state_range(phys, page_size, PKVM_NOPAGE));
-                hyp_poison_page(phys);
+                hyp_poison_page(phys, page_size);
                 psci_mem_protect_dec(1 << order);
                 break;
         case PKVM_PAGE_SHARED_BORROWED:
@@ -3009,6 +3018,26 @@ int host_stage2_get_leaf(phys_addr_t phys, kvm_pte_t *ptep, u32 *level)
         return ret;
 }
 
+int guest_stage2_pa(struct pkvm_hyp_vm *vm, u64 ipa, phys_addr_t *phys)
+{
+        kvm_pte_t pte;
+        u32 level;
+        int ret;
+
+        guest_lock_component(vm);
+        ret = kvm_pgtable_get_leaf(&vm->pgt, ipa, &pte, &level);
+        guest_unlock_component(vm);
+        if (ret)
+                return ret;
+
+        if (!kvm_pte_valid(pte) || level != KVM_PGTABLE_MAX_LEVELS - 1)
+                return -EINVAL;
+
+        *phys = kvm_pte_to_phys(pte);
+
+        return 0;
+}
+
 #ifdef CONFIG_NVHE_EL2_DEBUG
 static void *snap_zalloc_page(void *mc)
 {

@@ -124,6 +124,7 @@ const struct pkvm_module_ops module_ops = {
         .host_stage2_mod_prot = module_change_host_page_prot,
         .host_stage2_get_leaf = host_stage2_get_leaf,
         .register_host_smc_handler = __pkvm_register_host_smc_handler,
+        .register_guest_smc_handler = __pkvm_register_guest_smc_handler,
         .register_default_trap_handler = __pkvm_register_default_trap_handler,
         .register_illegal_abt_notifier = __pkvm_register_illegal_abt_notifier,
         .register_psci_notifier = __pkvm_register_psci_notifier,
@@ -165,6 +166,7 @@ const struct pkvm_module_ops module_ops = {
         .iommu_flush_unmap_cache = kvm_iommu_flush_unmap_cache,
         .host_stage2_enable_lazy_pte = host_stage2_enable_lazy_pte,
         .host_stage2_disable_lazy_pte = host_stage2_disable_lazy_pte,
+        .guest_stage2_pa = pkvm_guest_stage2_pa,
 };
 
 int __pkvm_init_module(void *module_init)

@@ -7,6 +7,8 @@
 #include <linux/kvm_host.h>
 #include <linux/mm.h>
 
+#include <hyp/adjust_pc.h>
+
 #include <kvm/arm_hypercalls.h>
 #include <kvm/arm_psci.h>
@@ -1098,7 +1100,7 @@ void pkvm_poison_pvmfw_pages(void)
         phys_addr_t addr = pvmfw_base;
 
         while (npages--) {
-                hyp_poison_page(addr);
+                hyp_poison_page(addr, PAGE_SIZE);
                 addr += PAGE_SIZE;
         }
 }
@@ -1682,6 +1684,64 @@ static bool pkvm_forward_trng(struct kvm_vcpu *vcpu)
         return true;
 }
 
+static bool is_standard_secure_service_call(u64 func_id)
+{
+        return (func_id >= PSCI_0_2_FN_BASE && func_id <= ARM_CCA_FUNC_END) ||
+               (func_id >= PSCI_0_2_FN64_BASE && func_id <= ARM_CCA_64BIT_FUNC_END);
+}
+
+bool kvm_handle_pvm_smc64(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+        bool handled = false;
+        struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
+        struct pkvm_hyp_vm *vm;
+        struct pkvm_hyp_vcpu *hyp_vcpu;
+        struct arm_smccc_1_2_regs regs;
+        struct arm_smccc_res res;
+        DECLARE_REG(u64, func_id, ctxt, 0);
+
+        hyp_vcpu = container_of(vcpu, struct pkvm_hyp_vcpu, vcpu);
+        vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu);
+
+        if (is_standard_secure_service_call(func_id))
+                return false;
+
+        /* Paired with cmpxchg_release in the guest registration handler */
+        if (smp_load_acquire(&vm->smc_handler)) {
+                memcpy(&regs, &ctxt->regs, sizeof(regs));
+                handled = vm->smc_handler(&regs, &res, vm->kvm.arch.pkvm.handle);
+                /* Pass the return back to the calling guest */
+                memcpy(&ctxt->regs.regs[0], &regs, sizeof(res));
+        }
+
+        /* SMC was trapped, move ELR past the current PC. */
+        if (handled)
+                __kvm_skip_instr(vcpu);
+
+        return handled;
+}
+
+int __pkvm_register_guest_smc_handler(bool (*cb)(struct arm_smccc_1_2_regs *,
+                                                 struct arm_smccc_res *res,
+                                                 pkvm_handle_t handle),
+                                      pkvm_handle_t handle)
+{
+        int ret = -EINVAL;
+        struct pkvm_hyp_vm *vm;
+
+        if (!cb)
+                return ret;
+
+        hyp_read_lock(&vm_table_lock);
+        vm = get_vm_by_handle(handle);
+        if (vm)
+                ret = cmpxchg_release(&vm->smc_handler, NULL, cb) ? -EBUSY : 0;
+        hyp_read_unlock(&vm_table_lock);
+
+        return ret;
+}
+
 /*
  * Handler for protected VM HVC calls.
  *
@@ -1775,6 +1835,28 @@ bool kvm_hyp_handle_hvc64(struct kvm_vcpu *vcpu, u64 *exit_code)
         return false;
 }
 
+int pkvm_guest_stage2_pa(pkvm_handle_t handle, u64 ipa, phys_addr_t *phys)
+{
+        struct pkvm_hyp_vm *hyp_vm;
+        int err;
+
+        hyp_read_lock(&vm_table_lock);
+        hyp_vm = get_vm_by_handle(handle);
+        if (!hyp_vm) {
+                err = -ENOENT;
+                goto err_unlock;
+        } else if (hyp_vm->is_dying) {
+                err = -EBUSY;
+                goto err_unlock;
+        }
+
+        err = guest_stage2_pa(hyp_vm, ipa, phys);
+
+err_unlock:
+        hyp_read_unlock(&vm_table_lock);
+        return err;
+}
+
 #ifdef CONFIG_NVHE_EL2_DEBUG
 static inline phys_addr_t get_next_memcache_page(phys_addr_t head)
 {
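Editor's note: continuing the hypothetical module sketch from earlier, a guest SMC handler could combine the two new facilities, e.g. translating a guest IPA passed in x1 before acting on it. The SMC argument layout is invented for illustration; the error semantics come from the kerneldoc and the guest_stage2_pa hunks above.

/* Hypothetical handler body; mod_ops as in the earlier sketch. */
static bool my_guest_smc(struct arm_smccc_1_2_regs *regs,
                         struct arm_smccc_res *res, pkvm_handle_t handle)
{
        phys_addr_t pa;

        /* -EINVAL for unmapped or block-mapped IPAs, per the kerneldoc. */
        if (mod_ops->guest_stage2_pa(handle, regs->a1, &pa))
                return false;   /* let the host kernel handle the exit */

        /*
         * Nothing pins the translation: it is only guaranteed that
         * handle:ipa -> pa held at some point during the call above.
         */
        res->a0 = 0;
        res->a1 = pa;
        return true;
}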

@@ -303,6 +303,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
 static const exit_handler_fn pvm_exit_handlers[] = {
         [0 ... ESR_ELx_EC_MAX]          = NULL,
         [ESR_ELx_EC_HVC64]              = kvm_handle_pvm_hvc64,
+        [ESR_ELx_EC_SMC64]              = kvm_handle_pvm_smc64,
         [ESR_ELx_EC_SYS64]              = kvm_handle_pvm_sys64,
         [ESR_ELx_EC_SVE]                = kvm_hyp_handle_fpsimd,
         [ESR_ELx_EC_SME]                = kvm_handle_pvm_restricted,

@@ -9,8 +9,7 @@ CONFIG_TASK_XACCT=y
 CONFIG_TASK_IO_ACCOUNTING=y
 CONFIG_PSI=y
 CONFIG_RCU_EXPERT=y
-CONFIG_RCU_BOOST=y
-CONFIG_RCU_NOCB_CPU=y
+CONFIG_LOG_BUF_SHIFT=14
 CONFIG_UCLAMP_TASK=y
 CONFIG_UCLAMP_BUCKETS_COUNT=20
 CONFIG_CGROUPS=y
@@ -21,16 +20,14 @@ CONFIG_UCLAMP_TASK_GROUP=y
 CONFIG_CGROUP_FREEZER=y
 CONFIG_CPUSETS=y
 CONFIG_CGROUP_CPUACCT=y
-# CONFIG_UTS_NS is not set
-# CONFIG_TIME_NS is not set
-# CONFIG_PID_NS is not set
-# CONFIG_NET_NS is not set
 # CONFIG_RD_BZIP2 is not set
 # CONFIG_RD_LZMA is not set
 # CONFIG_RD_XZ is not set
 # CONFIG_RD_LZO is not set
 CONFIG_BOOT_CONFIG=y
+CONFIG_EXPERT=y
 CONFIG_PROFILING=y
+CONFIG_KEXEC_FILE=y
 CONFIG_SMP=y
 CONFIG_X86_X2APIC=y
 CONFIG_HYPERVISOR_GUEST=y
@@ -39,7 +36,6 @@ CONFIG_PARAVIRT_TIME_ACCOUNTING=y
 CONFIG_NR_CPUS=32
 # CONFIG_X86_MCE is not set
 CONFIG_EFI=y
-CONFIG_KEXEC_FILE=y
 CONFIG_CMDLINE_BOOL=y
 CONFIG_CMDLINE="stack_depot_disable=on cgroup_disable=pressure ioremap_guard panic=-1 bootconfig acpi=noirq"
 CONFIG_PM_WAKELOCKS=y
@@ -50,12 +46,12 @@ CONFIG_CPU_FREQ_TIMES=y
 CONFIG_CPU_FREQ_GOV_POWERSAVE=y
 CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
 CONFIG_JUMP_LABEL=y
-CONFIG_BLK_DEV_ZONED=y
+# CONFIG_BLOCK_LEGACY_AUTOLOAD is not set
 CONFIG_BLK_CGROUP_IOCOST=y
 CONFIG_PARTITION_ADVANCED=y
 # CONFIG_MSDOS_PARTITION is not set
-CONFIG_IOSCHED_BFQ=y
-CONFIG_BFQ_GROUP_IOSCHED=y
+# CONFIG_MQ_IOSCHED_DEADLINE is not set
+# CONFIG_MQ_IOSCHED_KYBER is not set
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_BINFMT_MISC=y
 # CONFIG_SLAB_MERGE_DEFAULT is not set
@@ -63,8 +59,6 @@ CONFIG_SLAB_FREELIST_RANDOM=y
 CONFIG_SLAB_FREELIST_HARDENED=y
 CONFIG_SHUFFLE_PAGE_ALLOCATOR=y
 # CONFIG_COMPAT_BRK is not set
-CONFIG_MEMORY_HOTPLUG=y
-CONFIG_MEMORY_HOTREMOVE=y
 CONFIG_DEFAULT_MMAP_MIN_ADDR=32768
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
@@ -85,7 +79,7 @@ CONFIG_PCIEPORTBUS=y
 CONFIG_PCIEAER=y
 CONFIG_PCI_MSI=y
 CONFIG_PCI_IOV=y
-CONFIG_PCIE_DW_PLAT_EP=y
+# CONFIG_VGA_ARB is not set
 CONFIG_PCI_ENDPOINT=y
 CONFIG_FW_LOADER_USER_HELPER=y
 # CONFIG_FW_CACHE is not set
@@ -113,7 +107,6 @@ CONFIG_SERIAL_8250_RUNTIME_UARTS=0
 CONFIG_SERIAL_OF_PLATFORM=y
 CONFIG_NULL_TTY=y
 CONFIG_VIRTIO_CONSOLE=y
-CONFIG_HW_RANDOM=y
 CONFIG_HW_RANDOM_VIRTIO=y
 # CONFIG_DEVMEM is not set
 # CONFIG_DEVPORT is not set
@@ -138,7 +131,6 @@ CONFIG_EDAC=y
 CONFIG_RTC_CLASS=y
 CONFIG_DMABUF_HEAPS=y
 CONFIG_DMABUF_SYSFS_STATS=y
-CONFIG_UIO=y
 CONFIG_VIRTIO_PCI=y
 CONFIG_VIRTIO_BALLOON=y
 CONFIG_STAGING=y
@@ -212,6 +204,7 @@ CONFIG_STATIC_USERMODEHELPER=y
 CONFIG_STATIC_USERMODEHELPER_PATH=""
 CONFIG_SECURITY_SELINUX=y
 CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
+CONFIG_BUG_ON_DATA_CORRUPTION=y
 CONFIG_CRYPTO_HCTR2=y
 CONFIG_CRYPTO_LZO=y
 CONFIG_CRYPTO_AES_NI_INTEL=y
@@ -220,16 +213,13 @@ CONFIG_CRYPTO_SHA256_SSSE3=y
 CONFIG_CRYPTO_SHA512_SSSE3=y
 CONFIG_PRINTK_TIME=y
 CONFIG_DYNAMIC_DEBUG_CORE=y
-CONFIG_DEBUG_KERNEL=y
 CONFIG_DEBUG_INFO_DWARF5=y
 CONFIG_DEBUG_INFO_REDUCED=y
-CONFIG_DEBUG_INFO_COMPRESSED=y
 CONFIG_HEADERS_INSTALL=y
 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_UBSAN=y
 CONFIG_UBSAN_TRAP=y
-CONFIG_UBSAN_LOCAL_BOUNDS=y
 # CONFIG_UBSAN_SHIFT is not set
 # CONFIG_UBSAN_BOOL is not set
 # CONFIG_UBSAN_ENUM is not set
@@ -243,6 +233,5 @@ CONFIG_PANIC_TIMEOUT=-1
 CONFIG_SOFTLOCKUP_DETECTOR=y
 CONFIG_WQ_WATCHDOG=y
 CONFIG_SCHEDSTATS=y
-CONFIG_BUG_ON_DATA_CORRUPTION=y
 CONFIG_HIST_TRIGGERS=y
 CONFIG_UNWINDER_FRAME_POINTER=y


@@ -31,6 +31,7 @@
 #include <linux/part_stat.h>
 #include <trace/events/block.h>
+#include <trace/hooks/blk.h>
 #include <trace/hooks/blk.h>
@@ -3031,6 +3032,8 @@ void blk_mq_submit_bio(struct bio *bio)
 	unsigned int nr_segs = 1;
 	blk_status_t ret;

+	trace_android_vh_check_set_ioprio(bio);
+
 	bio = blk_queue_bounce(bio, q);
 	if (plug) {


@@ -307,6 +307,8 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_report_bug);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_watchdog_timer_softlockup);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_try_to_freeze_todo);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_try_to_freeze_todo_unfrozen);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ep_create_wakeup_source);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_timerfd_create);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_die_kernel_fault);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_do_sp_pc_abort);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_do_el1_undef);
@@ -543,6 +545,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_show_smap_swap_shared);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_armv8pmu_counter_overflowed);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_perf_rotate_context);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_process_madvise_bypass);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_do_madvise_bypass);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_pages_prepare_bypass);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_pages_ok_bypass);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_unref_page_list_bypass);
@@ -657,3 +660,12 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_update_unmapped_area_info);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_reuse_whole_anon_folio);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_swap_slot_cache);
 EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_calculate_totalreserve_pages);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_check_set_ioprio);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_filemap_pages);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_lru_gen_add_folio_skip);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_lru_gen_del_folio_skip);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_perform_reclaim);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_kswapd_shrink_node);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_keep_reclaimed_folio);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_clear_reclaimed_folio);
+EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_evict_folios_bypass);
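
These exports make the vendor hooks attachable from GKI vendor modules. As a rough illustration (not part of this merge), a vendor module could attach to the timerfd hook exported above; the probe signature assumes the hook is declared with TP_PROTO(char *name, size_t size), which is what the timerfd_create call site further down suggests:

	#include <linux/module.h>
	#include <linux/string.h>
	#include <trace/hooks/fs.h>

	/* Probe args: a void *data cookie first, then the hook's TP_PROTO args. */
	static void vh_timerfd_create_probe(void *data, char *name, size_t size)
	{
		/* Rename the anon inode seen in /proc/<pid>/fd symlinks. */
		strscpy(name, "[timerfd:vendor]", size);
	}

	static int __init vh_example_init(void)
	{
		/* Registration only links because the tracepoint symbol is exported. */
		return register_trace_android_vh_timerfd_create(vh_timerfd_create_probe, NULL);
	}
	module_init(vh_example_init);

	MODULE_LICENSE("GPL");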


@@ -45,7 +45,7 @@ struct dma_heap_attachment {
 	bool uncached;
 };

-#define LOW_ORDER_GFP (GFP_HIGHUSER | __GFP_ZERO)
+#define LOW_ORDER_GFP (GFP_HIGHUSER | __GFP_ZERO | __GFP_RETRY_MAYFAIL)
 #define HIGH_ORDER_GFP (((GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN \
 		| __GFP_NORETRY) & ~__GFP_RECLAIM) \
 		| __GFP_COMP)
@@ -371,6 +371,9 @@ static struct dma_buf *system_heap_do_allocate(struct dma_heap *heap,
 	struct page *page, *tmp_page;
 	int i, ret = -ENOMEM;

+	if (len / PAGE_SIZE > totalram_pages())
+		return ERR_PTR(-ENOMEM);
+
 	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
 	if (!buffer)
 		return ERR_PTR(-ENOMEM);


@@ -679,6 +679,7 @@ struct gpio_desc *of_find_gpio(struct device_node *np, const char *con_id,
 	return desc;
 }
+EXPORT_SYMBOL_GPL(of_find_gpio);

 /**
  * of_parse_own_gpio() - Get a GPIO hog descriptor, names and flags for GPIO API
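
With the symbol exported, modular code can resolve DT GPIOs through the same lookup gpiolib uses internally. A minimal hedged sketch of a caller, assuming the prototype shown in the hunk above is made visible to the module (the "reset" con_id is illustrative, not from this merge):

	#include <linux/gpio/consumer.h>
	#include <linux/of.h>

	/* Prototype as in the hunk above; normally lives in gpiolib's headers. */
	struct gpio_desc *of_find_gpio(struct device_node *np, const char *con_id,
				       unsigned int idx, unsigned long *flags);

	static int example_get_reset_gpio(struct device_node *np,
					  struct gpio_desc **out)
	{
		unsigned long flags = 0;
		struct gpio_desc *desc = of_find_gpio(np, "reset", 0, &flags);

		if (IS_ERR(desc))
			return PTR_ERR(desc);
		*out = desc;
		return 0;
	}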


@@ -69,6 +69,8 @@
 #define LIST_DIRTY	1
 #define LIST_SIZE	2

+#define SCAN_RESCHED_CYCLE	16
+
 /*--------------------------------------------------------------*/

 /*
@@ -2418,7 +2420,12 @@ static void __scan(struct dm_bufio_client *c)
 			atomic_long_dec(&c->need_shrink);
 			freed++;
-			cond_resched();
+
+			if (unlikely(freed % SCAN_RESCHED_CYCLE == 0)) {
+				dm_bufio_unlock(c);
+				cond_resched();
+				dm_bufio_lock(c);
+			}
 		}
 	}
 }


@@ -29,6 +29,7 @@
 #define DM_VERITY_ENV_VAR_NAME			"DM_VERITY_ERR_BLOCK_NR"

 #define DM_VERITY_DEFAULT_PREFETCH_SIZE	262144
+#define DM_VERITY_USE_BH_DEFAULT_BYTES	8192

 #define DM_VERITY_MAX_CORRUPTED_ERRS	100
@@ -46,6 +47,15 @@ static unsigned int dm_verity_prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE
 module_param_named(prefetch_cluster, dm_verity_prefetch_cluster, uint, 0644);

+static unsigned int dm_verity_use_bh_bytes[4] = {
+	DM_VERITY_USE_BH_DEFAULT_BYTES,	// IOPRIO_CLASS_NONE
+	DM_VERITY_USE_BH_DEFAULT_BYTES,	// IOPRIO_CLASS_RT
+	DM_VERITY_USE_BH_DEFAULT_BYTES,	// IOPRIO_CLASS_BE
+	0				// IOPRIO_CLASS_IDLE
+};
+
+module_param_array_named(use_bh_bytes, dm_verity_use_bh_bytes, uint, NULL, 0644);
+
 static DEFINE_STATIC_KEY_FALSE(use_tasklet_enabled);

 /* Is at least one dm-verity instance using ahash_tfm instead of shash_tfm? */
@@ -696,9 +706,17 @@ static void verity_work(struct work_struct *w)
 	verity_finish_io(io, errno_to_blk_status(verity_verify_io(io)));
 }

+static inline bool verity_use_bh(unsigned int bytes, unsigned short ioprio)
+{
+	return ioprio <= IOPRIO_CLASS_IDLE &&
+		bytes <= READ_ONCE(dm_verity_use_bh_bytes[ioprio]);
+}
+
 static void verity_end_io(struct bio *bio)
 {
 	struct dm_verity_io *io = bio->bi_private;
+	unsigned short ioprio = IOPRIO_PRIO_CLASS(bio->bi_ioprio);
+	unsigned int bytes = io->n_blocks << io->v->data_dev_block_bits;

 	if (bio->bi_status &&
 	    (!verity_fec_is_enabled(io->v) ||
@@ -708,6 +726,19 @@ static void verity_end_io(struct bio *bio)
 		return;
 	}

+	if (static_branch_unlikely(&use_tasklet_enabled) && io->v->use_tasklet &&
+	    verity_use_bh(bytes, ioprio)) {
+		if (!(in_hardirq() || irqs_disabled())) {
+			int err;
+
+			io->in_tasklet = true;
+			err = verity_verify_io(io);
+			if (err != -EAGAIN && err != -ENOMEM) {
+				verity_finish_io(io, errno_to_blk_status(err));
+				return;
+			}
+		}
+	}
+
 	INIT_WORK(&io->work, verity_work);
 	queue_work(io->v->verify_wq, &io->work);
 }
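
The four use_bh_bytes thresholds are indexed by I/O priority class, so sufficiently small reads can be verified directly in the completion context instead of bouncing through the workqueue. A hedged userspace sketch for retuning them at runtime; the /sys/module path follows the usual module_param_array convention and is an assumption, not something this diff states:

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/sys/module/dm_verity/parameters/use_bh_bytes", "w");

		if (!f)
			return 1;
		/* Order is NONE,RT,BE,IDLE: here only RT reads up to 16 KiB qualify. */
		fprintf(f, "0,16384,0,0\n");
		return fclose(f) ? 1 : 0;
	}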


@@ -115,9 +115,6 @@ void pci_save_aspm_l1ss_state(struct pci_dev *pdev)
 	pci_read_config_dword(pdev, pdev->l1ss + PCI_L1SS_CTL2, cap++);
 	pci_read_config_dword(pdev, pdev->l1ss + PCI_L1SS_CTL1, cap++);

-	if (parent->state_saved)
-		return;
-
 	/*
 	 * Save parent's L1 substate configuration so we have it for
 	 * pci_restore_aspm_l1ss_state(pdev) to restore.


@@ -12,6 +12,7 @@
  */
 struct ufs_hba_priv {
 	struct ufs_hba hba;
+	struct completion dev_cmd_compl;
 	u8 rtt_cap;
 	int nortt;
 };


@@ -2759,6 +2759,8 @@ static int ufshcd_compose_devman_upiu(struct ufs_hba *hba,
  */
 static int ufshcd_comp_scsi_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 {
+	struct request *rq = scsi_cmd_to_rq(lrbp->cmd);
+	unsigned int ioprio_class = IOPRIO_PRIO_CLASS(req_get_ioprio(rq));
 	u8 upiu_flags;
 	int ret = 0;
@@ -2769,6 +2771,8 @@ static int ufshcd_comp_scsi_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
 	if (likely(lrbp->cmd)) {
 		ufshcd_prepare_req_desc_hdr(lrbp, &upiu_flags, lrbp->cmd->sc_data_direction, 0);
+		if (ioprio_class == IOPRIO_CLASS_RT)
+			upiu_flags |= UPIU_CMD_FLAGS_CP;
 		ufshcd_prepare_utp_scsi_cmd_upiu(lrbp, upiu_flags);
 		if (hba->android_quirks & UFSHCD_ANDROID_QUIRK_SET_IID_TO_ONE)
 			lrbp->ucd_req_ptr->header.iid = 1;
@@ -3090,16 +3094,10 @@ static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba,
 	int err;

retry:
-	time_left = wait_for_completion_timeout(hba->dev_cmd.complete,
+	time_left = wait_for_completion_timeout(&to_hba_priv(hba)->dev_cmd_compl,
 						time_left);

 	if (likely(time_left)) {
-		/*
-		 * The completion handler called complete() and the caller of
-		 * this function still owns the @lrbp tag so the code below does
-		 * not trigger any race conditions.
-		 */
-		hba->dev_cmd.complete = NULL;
 		err = ufshcd_get_tr_ocs(lrbp, NULL);
 		if (!err)
 			err = ufshcd_dev_cmd_completion(hba, lrbp);
@@ -3113,7 +3111,6 @@ retry:
 		/* successfully cleared the command, retry if needed */
 		if (ufshcd_clear_cmd(hba, lrbp->task_tag) == 0)
 			err = -EAGAIN;
-		hba->dev_cmd.complete = NULL;
 		return err;
 	}
@@ -3129,11 +3126,9 @@ retry:
 		spin_lock_irqsave(&hba->outstanding_lock, flags);
 		pending = test_bit(lrbp->task_tag,
 				&hba->outstanding_reqs);
-		if (pending) {
-			hba->dev_cmd.complete = NULL;
+		if (pending)
 			__clear_bit(lrbp->task_tag,
 				&hba->outstanding_reqs);
-		}
 		spin_unlock_irqrestore(&hba->outstanding_lock, flags);

 		if (!pending) {
@@ -3151,8 +3146,6 @@ retry:
 		spin_lock_irqsave(&hba->outstanding_lock, flags);
 		pending = test_bit(lrbp->task_tag,
 				&hba->outstanding_reqs);
-		if (pending)
-			hba->dev_cmd.complete = NULL;
 		spin_unlock_irqrestore(&hba->outstanding_lock, flags);

 		if (!pending) {
@@ -3183,7 +3176,6 @@ retry:
 static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
 		enum dev_cmd_type cmd_type, int timeout)
 {
-	DECLARE_COMPLETION_ONSTACK(wait);
 	const u32 tag = hba->reserved_slot;
 	struct ufshcd_lrb *lrbp;
 	int err;
@@ -3199,10 +3191,7 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
 	if (unlikely(err))
 		goto out;

-	hba->dev_cmd.complete = &wait;
-
 	ufshcd_add_query_upiu_trace(hba, UFS_QUERY_SEND, lrbp->ucd_req_ptr);
-
 	ufshcd_send_command(hba, tag, hba->dev_cmd_queue);
 	err = ufshcd_wait_for_dev_cmd(hba, lrbp, timeout);
 	ufshcd_add_query_upiu_trace(hba, err ? UFS_QUERY_ERR : UFS_QUERY_COMP,
@@ -5512,14 +5501,12 @@ void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag,
 		scsi_done(cmd);
 	} else if (lrbp->command_type == UTP_CMD_TYPE_DEV_MANAGE ||
 		   lrbp->command_type == UTP_CMD_TYPE_UFS_STORAGE) {
-		if (hba->dev_cmd.complete) {
-			trace_android_vh_ufs_compl_command(hba, lrbp);
-			if (cqe) {
-				ocs = le32_to_cpu(cqe->status) & MASK_OCS;
-				lrbp->utr_descriptor_ptr->header.ocs = ocs;
-			}
-			complete(hba->dev_cmd.complete);
+		trace_android_vh_ufs_compl_command(hba, lrbp);
+		if (cqe) {
+			ocs = le32_to_cpu(cqe->status) & MASK_OCS;
+			lrbp->utr_descriptor_ptr->header.ocs = ocs;
 		}
+		complete(&to_hba_priv(hba)->dev_cmd_compl);
 	}
 }
@@ -7178,7 +7165,6 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
 		enum dev_cmd_type cmd_type,
 		enum query_opcode desc_op)
 {
-	DECLARE_COMPLETION_ONSTACK(wait);
 	const u32 tag = hba->reserved_slot;
 	struct ufshcd_lrb *lrbp;
 	int err = 0;
@@ -7220,10 +7206,7 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
 	memset(lrbp->ucd_rsp_ptr, 0, sizeof(struct utp_upiu_rsp));

-	hba->dev_cmd.complete = &wait;
-
 	ufshcd_add_query_upiu_trace(hba, UFS_QUERY_SEND, lrbp->ucd_req_ptr);
-
 	ufshcd_send_command(hba, tag, hba->dev_cmd_queue);
 	/*
 	 * ignore the returning value here - ufshcd_check_query_response is
@@ -7348,7 +7331,6 @@ int ufshcd_advanced_rpmb_req_handler(struct ufs_hba *hba, struct utp_upiu_req *r
 		struct ufs_ehs *rsp_ehs, int sg_cnt, struct scatterlist *sg_list,
 		enum dma_data_direction dir)
 {
-	DECLARE_COMPLETION_ONSTACK(wait);
 	const u32 tag = hba->reserved_slot;
 	struct ufshcd_lrb *lrbp;
 	int err = 0;
@@ -7397,8 +7379,6 @@ int ufshcd_advanced_rpmb_req_handler(struct ufs_hba *hba, struct utp_upiu_req *r
 	memset(lrbp->ucd_rsp_ptr, 0, sizeof(struct utp_upiu_rsp));

-	hba->dev_cmd.complete = &wait;
-
 	ufshcd_send_command(hba, tag, hba->dev_cmd_queue);
 	err = ufshcd_wait_for_dev_cmd(hba, lrbp, ADVANCED_RPMB_REQ_TIMEOUT);
@@ -10457,6 +10437,8 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 	hba->irq = irq;
 	hba->vps = &ufs_hba_vps;

+	init_completion(&to_hba_priv(hba)->dev_cmd_compl);
+
 	err = ufshcd_hba_init(hba);
 	if (err)
 		goto out_error;
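
The UPIU_CMD_FLAGS_CP hunk maps the block layer's IOPRIO_CLASS_RT onto the UFS command-priority bit, so RT-class requests are flagged to the device. A hedged userspace sketch of tagging a process's I/O as RT so its reads take that path (the constants mirror include/uapi/linux/ioprio.h; ioprio_set has no glibc wrapper, hence syscall()):

	#include <unistd.h>
	#include <sys/syscall.h>

	#define IOPRIO_CLASS_SHIFT	13
	#define IOPRIO_CLASS_RT		1
	#define IOPRIO_WHO_PROCESS	1

	int main(void)
	{
		int ioprio = (IOPRIO_CLASS_RT << IOPRIO_CLASS_SHIFT) | 0;

		/* Subsequent block I/O from this process carries the RT class. */
		return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, ioprio);
	}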


@@ -21,6 +21,7 @@
 #include <linux/usb/of.h>
 #include <linux/reset.h>

+#include <trace/hooks/usb.h>
 #include <trace/hooks/xhci.h>

 #include "xhci.h"
@@ -447,7 +448,13 @@ static int xhci_plat_suspend(struct device *dev)
 {
 	struct usb_hcd	*hcd = dev_get_drvdata(dev);
 	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
-	int ret;
+	struct usb_device *udev;
+	int ret, bypass = 0;
+
+	udev = hcd->self.root_hub;
+	trace_android_rvh_usb_dev_suspend(udev, PMSG_SUSPEND, &bypass);
+	if (bypass)
+		return 0;

 	if (pm_runtime_suspended(dev))
 		pm_runtime_resume(dev);
@@ -475,7 +482,13 @@ static int xhci_plat_resume_common(struct device *dev, struct pm_message pmsg)
 {
 	struct usb_hcd	*hcd = dev_get_drvdata(dev);
 	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
-	int ret;
+	struct usb_device *udev;
+	int ret, bypass = 0;
+
+	udev = hcd->self.root_hub;
+	trace_android_vh_usb_dev_resume(udev, PMSG_RESUME, &bypass);
+	if (bypass)
+		return 0;

 	if (!device_may_wakeup(dev) && (xhci->quirks & XHCI_SUSPEND_RESUME_CLKS)) {
 		ret = clk_prepare_enable(xhci->clk);


@@ -18,6 +18,7 @@
 #include <linux/wait.h>
 #include <linux/mm.h>
 #include <linux/page_reporting.h>
+#include <linux/kstrtox.h>

 /*
  * Balloon device works in 4K page units.  So each page is pointed to by
@@ -119,6 +120,8 @@ struct virtio_balloon {
 	/* Free page reporting device */
 	struct virtqueue *reporting_vq;
 	struct page_reporting_dev_info pr_dev_info;
+
+	bool bail_on_out_of_puff;
 };

 static const struct virtio_device_id id_table[] = {
@@ -205,7 +208,8 @@ static void set_page_pfns(struct virtio_balloon *vb,
 					  page_to_balloon_pfn(page) + i);
 }

-static unsigned int fill_balloon(struct virtio_balloon *vb, size_t num)
+static unsigned int fill_balloon(struct virtio_balloon *vb, size_t num,
+				 bool *out_of_puff)
 {
 	unsigned int num_allocated_pages;
 	unsigned int num_pfns;
@@ -225,6 +229,7 @@ static unsigned int fill_balloon(struct virtio_balloon *vb, size_t num)
 					     VIRTIO_BALLOON_PAGES_PER_PAGE);
 			/* Sleep for at least 1/5 of a second before retry. */
 			msleep(200);
+			*out_of_puff = true;
 			break;
 		}
@@ -477,6 +482,7 @@ static void update_balloon_size_func(struct work_struct *work)
 {
 	struct virtio_balloon *vb;
 	s64 diff;
+	bool out_of_puff = false;

 	vb = container_of(work, struct virtio_balloon,
 			  update_balloon_size_work);
@@ -486,12 +492,12 @@ static void update_balloon_size_func(struct work_struct *work)
 		return;

 	if (diff > 0)
-		diff -= fill_balloon(vb, diff);
+		diff -= fill_balloon(vb, diff, &out_of_puff);
 	else
 		diff += leak_balloon(vb, -diff);
 	update_balloon_size(vb);

-	if (diff)
+	if (diff && !(vb->bail_on_out_of_puff && out_of_puff))
 		queue_work(system_freezable_wq, work);
 }
@@ -871,6 +877,38 @@ static int virtio_balloon_register_shrinker(struct virtio_balloon *vb)
 	return register_shrinker(&vb->shrinker, "virtio-balloon");
 }

+static ssize_t bail_on_out_of_puff_show(struct device *d, struct device_attribute *attr,
+					char *buf)
+{
+	struct virtio_device *vdev =
+		container_of(d, struct virtio_device, dev);
+	struct virtio_balloon *vb = vdev->priv;
+
+	return sprintf(buf, "%c\n", vb->bail_on_out_of_puff ? '1' : '0');
+}
+
+static ssize_t bail_on_out_of_puff_store(struct device *d, struct device_attribute *attr,
+					 const char *buf, size_t count)
+{
+	struct virtio_device *vdev =
+		container_of(d, struct virtio_device, dev);
+	struct virtio_balloon *vb = vdev->priv;
+
+	return kstrtobool(buf, &vb->bail_on_out_of_puff) ?: count;
+}
+
+static DEVICE_ATTR_RW(bail_on_out_of_puff);
+
+static struct attribute *virtio_balloon_sysfs_entries[] = {
+	&dev_attr_bail_on_out_of_puff.attr,
+	NULL
+};
+
+static const struct attribute_group virtio_balloon_attribute_group = {
+	.name = NULL,	/* put in device directory */
+	.attrs = virtio_balloon_sysfs_entries,
+};
+
 static int virtballoon_probe(struct virtio_device *vdev)
 {
 	struct virtio_balloon *vb;
@@ -901,6 +939,11 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	if (err)
 		goto out_free_vb;

+	err = sysfs_create_group(&vdev->dev.kobj,
+				 &virtio_balloon_attribute_group);
+	if (err)
+		goto out_del_vqs;
+
 #ifdef CONFIG_BALLOON_COMPACTION
 	vb->vb_dev_info.migratepage = virtballoon_migratepage;
 #endif
@@ -911,13 +954,13 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		 */
 		if (virtqueue_get_vring_size(vb->free_page_vq) < 2) {
 			err = -ENOSPC;
-			goto out_del_vqs;
+			goto out_remove_sysfs;
 		}
 		vb->balloon_wq = alloc_workqueue("balloon-wq",
 					WQ_FREEZABLE | WQ_CPU_INTENSIVE, 0);
 		if (!vb->balloon_wq) {
 			err = -ENOMEM;
-			goto out_del_vqs;
+			goto out_remove_sysfs;
 		}
 		INIT_WORK(&vb->report_free_page_work, report_free_page_func);
 		vb->cmd_id_received_cache = VIRTIO_BALLOON_CMD_ID_STOP;
@@ -1011,6 +1054,8 @@ out_unregister_shrinker:
 out_del_balloon_wq:
 	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
 		destroy_workqueue(vb->balloon_wq);
+out_remove_sysfs:
+	sysfs_remove_group(&vdev->dev.kobj, &virtio_balloon_attribute_group);
 out_del_vqs:
 	vdev->config->del_vqs(vdev);
 out_free_vb:
@@ -1057,6 +1102,8 @@ static void virtballoon_remove(struct virtio_device *vdev)
 		destroy_workqueue(vb->balloon_wq);
 	}

+	sysfs_remove_group(&vdev->dev.kobj, &virtio_balloon_attribute_group);
+
 	remove_common(vb);
 	kfree(vb);
 }
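
bail_on_out_of_puff defaults to 0, preserving the old retry-forever behaviour; writing 1 makes update_balloon_size_func() stop requeueing itself once fill_balloon() fails to allocate. A hedged sketch of flipping the knob from userspace (the virtio0 index depends on device enumeration and is an assumption):

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/sys/bus/virtio/devices/virtio0/bail_on_out_of_puff", "w");

		if (!f)
			return 1;
		fputs("1\n", f);	/* stop retrying when out of puff */
		return fclose(f) ? 1 : 0;
	}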


@@ -229,6 +229,26 @@
                 "include-filter": "kselftest_x86_test_mremap_vdso"
             }
         ]
+    },
+    {
+        "name": "CtsJobSchedulerTestCases",
+        "options": [
+            {
+                "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testCellularConstraintExecutedAndStopped"
+            },
+            {
+                "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testConnectivityConstraintExecutes_transitionNetworks"
+            },
+            {
+                "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testConnectivityConstraintExecutes_withMobile"
+            },
+            {
+                "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testEJMeteredConstraintFails_withMobile_DataSaverOn"
+            },
+            {
+                "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testMeteredConstraintFails_withMobile_DataSaverOn"
+            }
+        ]
     }
 ],
 "presubmit-large": [


@@ -1991,7 +1991,7 @@ static void free_note_info(struct elf_note_info *info)
 		threads = t->next;
 		WARN_ON(t->notes[0].data && t->notes[0].data != &t->prstatus);
 		for (i = 1; i < info->thread_notes; ++i)
-			kfree(t->notes[i].data);
+			kvfree(t->notes[i].data);
 		kfree(t);
 	}
 	kfree(info->psinfo.data);
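
The kvfree() switch matters because regset_get_alloc() now uses kvzalloc(), so a thread-note buffer may be vmalloc-backed rather than slab-backed. A minimal sketch of the invariant the fix restores:

	#include <linux/mm.h>
	#include <linux/slab.h>

	static void *note_buf_alloc(size_t n)
	{
		return kvzalloc(n, GFP_KERNEL);	/* kmalloc- or vmalloc-backed */
	}

	static void note_buf_free(void *p)
	{
		kvfree(p);	/* handles both backings; plain kfree() would not */
	}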


@@ -39,6 +39,8 @@
 #include <linux/rculist.h>
 #include <net/busy_poll.h>

+#include <trace/hooks/fs.h>
+
 /*
  * LOCKING:
  * There are three level of locking required by epoll :
@@ -1444,15 +1446,20 @@ static int ep_create_wakeup_source(struct epitem *epi)
 {
 	struct name_snapshot n;
 	struct wakeup_source *ws;
+	char ws_name[64];

+	strscpy(ws_name, "eventpoll", sizeof(ws_name));
+	trace_android_vh_ep_create_wakeup_source(ws_name, sizeof(ws_name));
 	if (!epi->ep->ws) {
-		epi->ep->ws = wakeup_source_register(NULL, "eventpoll");
+		epi->ep->ws = wakeup_source_register(NULL, ws_name);
 		if (!epi->ep->ws)
 			return -ENOMEM;
 	}

 	take_dentry_name_snapshot(&n, epi->ffd.file->f_path.dentry);
-	ws = wakeup_source_register(NULL, n.name.name);
+	strscpy(ws_name, n.name.name, sizeof(ws_name));
+	trace_android_vh_ep_create_wakeup_source(ws_name, sizeof(ws_name));
+	ws = wakeup_source_register(NULL, ws_name);
 	release_dentry_name_snapshot(&n);

 	if (!ws)


@@ -316,13 +316,6 @@ struct exfat_inode_info {
 	/* for avoiding the race between alloc and free */
 	unsigned int cache_valid_id;

-	/*
-	 * NOTE: i_size_ondisk is 64bits, so must hold ->inode_lock to access.
-	 * physically allocated size.
-	 */
-	loff_t i_size_ondisk;
-	/* block-aligned i_size (used in cont_write_begin) */
-	loff_t i_size_aligned;
 	/* on-disk position of directory entry or 0 */
 	loff_t i_pos;
 	loff_t valid_size;
@@ -429,6 +422,11 @@ static inline bool is_valid_cluster(struct exfat_sb_info *sbi,
 	return clus >= EXFAT_FIRST_CLUSTER && clus < sbi->num_clusters;
 }

+static inline loff_t exfat_ondisk_size(const struct inode *inode)
+{
+	return ((loff_t)inode->i_blocks) << 9;
+}
+
 /* super.c */
 int exfat_set_volume_dirty(struct super_block *sb);
 int exfat_clear_volume_dirty(struct super_block *sb);
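
Worked example of the new helper: inode->i_blocks counts 512-byte sectors, hence the shift by 9. A file occupying one 4 KiB cluster has i_blocks = 8, so exfat_ondisk_size() returns 8 << 9 = 4096 bytes; deriving the allocated size this way is what lets the cached i_size_ondisk field above be dropped.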


@@ -29,7 +29,7 @@ static int exfat_cont_expand(struct inode *inode, loff_t size)
 	if (ret)
 		return ret;

-	num_clusters = EXFAT_B_TO_CLU_ROUND_UP(ei->i_size_ondisk, sbi);
+	num_clusters = EXFAT_B_TO_CLU(exfat_ondisk_size(inode), sbi);
 	new_num_clusters = EXFAT_B_TO_CLU_ROUND_UP(size, sbi);

 	if (new_num_clusters == num_clusters)
@@ -74,8 +74,6 @@ out:
 	/* Expanded range not zeroed, do not update valid_size */
 	i_size_write(inode, size);

-	ei->i_size_aligned = round_up(size, sb->s_blocksize);
-	ei->i_size_ondisk = ei->i_size_aligned;
 	inode->i_blocks = round_up(size, sbi->cluster_size) >> 9;

 	mark_inode_dirty(inode);
@@ -157,7 +155,7 @@ int __exfat_truncate(struct inode *inode)
 		exfat_set_volume_dirty(sb);

 	num_clusters_new = EXFAT_B_TO_CLU_ROUND_UP(i_size_read(inode), sbi);
-	num_clusters_phys = EXFAT_B_TO_CLU_ROUND_UP(ei->i_size_ondisk, sbi);
+	num_clusters_phys = EXFAT_B_TO_CLU(exfat_ondisk_size(inode), sbi);

 	exfat_chain_set(&clu, ei->start_clu, num_clusters_phys, ei->flags);
@@ -243,8 +241,6 @@ void exfat_truncate(struct inode *inode)
 	struct super_block *sb = inode->i_sb;
 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
 	struct exfat_inode_info *ei = EXFAT_I(inode);
-	unsigned int blocksize = i_blocksize(inode);
-	loff_t aligned_size;
 	int err;

 	mutex_lock(&sbi->s_lock);
@@ -262,17 +258,6 @@ void exfat_truncate(struct inode *inode)
 	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >> 9;
write_size:
-	aligned_size = i_size_read(inode);
-	if (aligned_size & (blocksize - 1)) {
-		aligned_size |= (blocksize - 1);
-		aligned_size++;
-	}
-
-	if (ei->i_size_ondisk > i_size_read(inode))
-		ei->i_size_ondisk = aligned_size;
-
-	if (ei->i_size_aligned > i_size_read(inode))
-		ei->i_size_aligned = aligned_size;
 	mutex_unlock(&sbi->s_lock);
 }
@@ -592,9 +577,19 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter)
 	valid_size = ei->valid_size;

 	ret = generic_write_checks(iocb, iter);
-	if (ret < 0)
+	if (ret <= 0)
 		goto unlock;

+	if (iocb->ki_flags & IOCB_DIRECT) {
+		unsigned long align = pos | iov_iter_alignment(iter);
+
+		if (!IS_ALIGNED(align, i_blocksize(inode)) &&
+		    !IS_ALIGNED(align, bdev_logical_block_size(inode->i_sb->s_bdev))) {
+			ret = -EINVAL;
+			goto unlock;
+		}
+	}
+
 	if (pos > valid_size) {
 		ret = exfat_file_zeroed_range(file, valid_size, pos);
 		if (ret < 0 && ret != -ENOSPC) {


@@ -133,11 +133,9 @@ static int exfat_map_cluster(struct inode *inode, unsigned int clu_offset,
 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
 	struct exfat_inode_info *ei = EXFAT_I(inode);
 	unsigned int local_clu_offset = clu_offset;
-	unsigned int num_to_be_allocated = 0, num_clusters = 0;
+	unsigned int num_to_be_allocated = 0, num_clusters;

-	if (ei->i_size_ondisk > 0)
-		num_clusters =
-			EXFAT_B_TO_CLU_ROUND_UP(ei->i_size_ondisk, sbi);
+	num_clusters = EXFAT_B_TO_CLU(exfat_ondisk_size(inode), sbi);

 	if (clu_offset >= num_clusters)
 		num_to_be_allocated = clu_offset - num_clusters + 1;
@@ -263,21 +261,6 @@ static int exfat_map_cluster(struct inode *inode, unsigned int clu_offset,
 	return 0;
 }

-static int exfat_map_new_buffer(struct exfat_inode_info *ei,
-		struct buffer_head *bh, loff_t pos)
-{
-	if (buffer_delay(bh) && pos > ei->i_size_aligned)
-		return -EIO;
-
-	set_buffer_new(bh);
-
-	/*
-	 * Adjust i_size_aligned if i_size_ondisk is bigger than it.
-	 */
-	if (ei->i_size_ondisk > ei->i_size_aligned)
-		ei->i_size_aligned = ei->i_size_ondisk;
-	return 0;
-}
-
 static int exfat_get_block(struct inode *inode, sector_t iblock,
 		struct buffer_head *bh_result, int create)
 {
@@ -291,10 +274,11 @@ static int exfat_get_block(struct inode *inode, sector_t iblock,
 	sector_t last_block;
 	sector_t phys = 0;
 	sector_t valid_blks;
-	loff_t pos;
+	loff_t i_size;

 	mutex_lock(&sbi->s_lock);
-	last_block = EXFAT_B_TO_BLK_ROUND_UP(i_size_read(inode), sb);
+	i_size = i_size_read(inode);
+	last_block = EXFAT_B_TO_BLK_ROUND_UP(i_size, sb);
 	if (iblock >= last_block && !create)
 		goto done;
@@ -319,93 +303,103 @@ static int exfat_get_block(struct inode *inode, sector_t iblock,
 	mapped_blocks = sbi->sect_per_clus - sec_offset;
 	max_blocks = min(mapped_blocks, max_blocks);

-	pos = EXFAT_BLK_TO_B((iblock + 1), sb);
-	if ((create && iblock >= last_block) || buffer_delay(bh_result)) {
-		if (ei->i_size_ondisk < pos)
-			ei->i_size_ondisk = pos;
-	}
-
 	map_bh(bh_result, sb, phys);
 	if (buffer_delay(bh_result))
 		clear_buffer_delay(bh_result);

-	if (create) {
-		valid_blks = EXFAT_B_TO_BLK_ROUND_UP(ei->valid_size, sb);
-
-		if (iblock + max_blocks < valid_blks) {
-			/* The range has been written, map it */
-			goto done;
-		} else if (iblock < valid_blks) {
-			/*
-			 * The range has been partially written,
-			 * map the written part.
-			 */
-			max_blocks = valid_blks - iblock;
-			goto done;
-		}
-
-		/* The area has not been written, map and mark as new. */
-		err = exfat_map_new_buffer(ei, bh_result, pos);
-		if (err) {
-			exfat_fs_error(sb,
-					"requested for bmap out of range(pos : (%llu) > i_size_aligned(%llu)\n",
-					pos, ei->i_size_aligned);
-			goto unlock_ret;
-		}
-
-		ei->valid_size = EXFAT_BLK_TO_B(iblock + max_blocks, sb);
-		mark_inode_dirty(inode);
-	} else {
-		valid_blks = EXFAT_B_TO_BLK(ei->valid_size, sb);
-
-		if (iblock + max_blocks < valid_blks) {
-			/* The range has been written, map it */
-			goto done;
-		} else if (iblock < valid_blks) {
-			/*
-			 * The area has been partially written,
-			 * map the written part.
-			 */
-			max_blocks = valid_blks - iblock;
-			goto done;
-		} else if (iblock == valid_blks &&
-			   (ei->valid_size & (sb->s_blocksize - 1))) {
-			/*
-			 * The block has been partially written,
-			 * zero the unwritten part and map the block.
-			 */
-			loff_t size, off;
-
-			max_blocks = 1;
-
-			/*
-			 * For direct read, the unwritten part will be zeroed in
-			 * exfat_direct_IO()
-			 */
-			if (!bh_result->b_folio)
-				goto done;
-
-			pos -= sb->s_blocksize;
-			size = ei->valid_size - pos;
-			off = pos & (PAGE_SIZE - 1);
-
-			folio_set_bh(bh_result, bh_result->b_folio, off);
-			err = bh_read(bh_result, 0);
-			if (err < 0)
-				goto unlock_ret;
-
-			folio_zero_segment(bh_result->b_folio, off + size,
-					off + sb->s_blocksize);
-		} else {
-			/*
-			 * The range has not been written, clear the mapped flag
-			 * to only zero the cache and do not read from disk.
-			 */
-			clear_buffer_mapped(bh_result);
-		}
-	}
+	/*
+	 * In most cases, we just need to set bh_result to mapped, unmapped
+	 * or new status as follows:
+	 *  1. i_size == valid_size
+	 *  2. write case (create == 1)
+	 *  3. direct_read (!bh_result->b_folio)
+	 *     -> the unwritten part will be zeroed in exfat_direct_IO()
+	 *
+	 * Otherwise, in the case of buffered read, it is necessary to take
+	 * care the last nested block if valid_size is not equal to i_size.
+	 */
+	if (i_size == ei->valid_size || create || !bh_result->b_folio)
+		valid_blks = EXFAT_B_TO_BLK_ROUND_UP(ei->valid_size, sb);
+	else
+		valid_blks = EXFAT_B_TO_BLK(ei->valid_size, sb);
+
+	/* The range has been fully written, map it */
+	if (iblock + max_blocks < valid_blks)
+		goto done;
+
+	/* The range has been partially written, map the written part */
+	if (iblock < valid_blks) {
+		max_blocks = valid_blks - iblock;
+		goto done;
+	}
+
+	/* The area has not been written, map and mark as new for create case */
+	if (create) {
+		set_buffer_new(bh_result);
+		ei->valid_size = EXFAT_BLK_TO_B(iblock + max_blocks, sb);
+		mark_inode_dirty(inode);
+		goto done;
+	}
+
+	/*
+	 * The area has just one block partially written.
+	 * In that case, we should read and fill the unwritten part of
+	 * a block with zero.
+	 */
+	if (bh_result->b_folio && iblock == valid_blks &&
+	    (ei->valid_size & (sb->s_blocksize - 1))) {
+		loff_t size, pos;
+		void *addr;
+
+		max_blocks = 1;
+
+		/*
+		 * No buffer_head is allocated.
+		 * (1) bmap: It's enough to set blocknr without I/O.
+		 * (2) read: The unwritten part should be filled with zero.
+		 *           If a folio does not have any buffers,
+		 *           let's returns -EAGAIN to fallback to
+		 *           block_read_full_folio() for per-bh IO.
+		 */
+		if (!folio_buffers(bh_result->b_folio)) {
+			err = -EAGAIN;
+			goto done;
+		}
+
+		pos = EXFAT_BLK_TO_B(iblock, sb);
+		size = ei->valid_size - pos;
+		addr = folio_address(bh_result->b_folio) +
+		       offset_in_folio(bh_result->b_folio, pos);
+
+		/* Check if bh->b_data points to proper addr in folio */
+		if (bh_result->b_data != addr) {
+			exfat_fs_error_ratelimit(sb,
+					"b_data(%p) != folio_addr(%p)",
+					bh_result->b_data, addr);
+			err = -EINVAL;
+			goto done;
+		}
+
+		/* Read a block */
+		err = bh_read(bh_result, 0);
+		if (err < 0)
+			goto done;
+
+		/* Zero unwritten part of a block */
+		memset(bh_result->b_data + size, 0, bh_result->b_size - size);
+
+		err = 0;
+		goto done;
+	}
+
+	/*
+	 * The area has not been written, clear mapped for read/bmap cases.
+	 * If so, it will be filled with zero without reading from disk.
+	 */
+	clear_buffer_mapped(bh_result);

 done:
 	bh_result->b_size = EXFAT_BLK_TO_B(max_blocks, sb);
+	if (err < 0)
+		clear_buffer_mapped(bh_result);
 unlock_ret:
 	mutex_unlock(&sbi->s_lock);
 	return err;
@@ -480,14 +474,6 @@ static int exfat_write_end(struct file *file, struct address_space *mapping,
 	int err;

 	err = generic_write_end(file, mapping, pos, len, copied, pagep, fsdata);
-
-	if (ei->i_size_aligned < i_size_read(inode)) {
-		exfat_fs_error(inode->i_sb,
-				"invalid size(size(%llu) > aligned(%llu)\n",
-				i_size_read(inode), ei->i_size_aligned);
-		return -EIO;
-	}
-
 	if (err < len)
 		exfat_write_failed(mapping, pos+len);
@@ -515,20 +501,6 @@ static ssize_t exfat_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 	int rw = iov_iter_rw(iter);
 	ssize_t ret;

-	if (rw == WRITE) {
-		/*
-		 * FIXME: blockdev_direct_IO() doesn't use ->write_begin(),
-		 * so we need to update the ->i_size_aligned to block boundary.
-		 *
-		 * But we must fill the remaining area or hole by nul for
-		 * updating ->i_size_aligned
-		 *
-		 * Return 0, and fallback to normal buffered write.
-		 */
-		if (EXFAT_I(inode)->i_size_aligned < size)
-			return 0;
-	}
-
 	/*
 	 * Need to use the DIO_LOCKING for avoiding the race
 	 * condition of exfat_get_block() and ->truncate().
@@ -542,8 +514,18 @@ static ssize_t exfat_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 	} else
 		size = pos + ret;

-	/* zero the unwritten part in the partially written block */
-	if (rw == READ && pos < ei->valid_size && ei->valid_size < size) {
+	if (rw == WRITE) {
+		/*
+		 * If the block had been partially written before this write,
+		 * ->valid_size will not be updated in exfat_get_block(),
+		 * update it here.
+		 */
+		if (ei->valid_size < size) {
+			ei->valid_size = size;
+			mark_inode_dirty(inode);
+		}
+	} else if (pos < ei->valid_size && ei->valid_size < size) {
+		/* zero the unwritten part in the partially written block */
 		iov_iter_revert(iter, size - ei->valid_size);
 		iov_iter_zero(size - ei->valid_size, iter);
 	}
@@ -678,15 +660,6 @@ static int exfat_fill_inode(struct inode *inode, struct exfat_dir_entry *info)
 	i_size_write(inode, size);

-	/* ondisk and aligned size should be aligned with block size */
-	if (size & (inode->i_sb->s_blocksize - 1)) {
-		size |= (inode->i_sb->s_blocksize - 1);
-		size++;
-	}
-
-	ei->i_size_aligned = size;
-	ei->i_size_ondisk = size;
-
 	exfat_save_attr(inode, info->attr);

 	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >> 9;


@@ -373,8 +373,6 @@ static int exfat_find_empty_entry(struct inode *inode,
 		/* directory inode should be updated in here */
 		i_size_write(inode, size);

-		ei->i_size_ondisk += sbi->cluster_size;
-		ei->i_size_aligned += sbi->cluster_size;
 		ei->valid_size += sbi->cluster_size;
 		ei->flags = p_dir->flags;
 		inode->i_blocks += sbi->cluster_size >> 9;


@@ -409,8 +409,6 @@ static int exfat_read_root(struct inode *inode)
 	inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >> 9;
 	ei->i_pos = ((loff_t)sbi->root_dir << 32) | 0xffffffff;
-	ei->i_size_aligned = i_size_read(inode);
-	ei->i_size_ondisk = i_size_read(inode);

 	exfat_save_attr(inode, EXFAT_ATTR_SUBDIR);
 	ei->i_crtime = simple_inode_init_ts(inode);


@@ -1846,7 +1846,6 @@ struct f2fs_sb_info {
 	spinlock_t iostat_lat_lock;
 	struct iostat_lat_info *iostat_io_lat;
 #endif
-	unsigned int sanity_check;
 };

 /* Definitions to access f2fs_sb_info */
@@ -3671,11 +3670,8 @@ int f2fs_check_nid_range(struct f2fs_sb_info *sbi, nid_t nid);
 bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type);
 bool f2fs_in_warm_node_list(struct f2fs_sb_info *sbi, struct page *page);
 void f2fs_init_fsync_node_info(struct f2fs_sb_info *sbi);
-struct page *f2fs_get_prev_nat_page(struct f2fs_sb_info *sbi, nid_t nid);
 void f2fs_del_fsync_node_entry(struct f2fs_sb_info *sbi, struct page *page);
 void f2fs_reset_fsync_node_info(struct f2fs_sb_info *sbi);
-bool f2fs_get_nat_entry(struct f2fs_sb_info *sbi, struct node_info *cne,
-		struct node_info *jne, nid_t nid);
 int f2fs_need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid);
 bool f2fs_is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid);
 bool f2fs_need_inode_block_update(struct f2fs_sb_info *sbi, nid_t ino);


@@ -2069,6 +2069,9 @@ int f2fs_gc_range(struct f2fs_sb_info *sbi,
 			.iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
 		};

+		if (IS_CURSEC(sbi, GET_SEC_FROM_SEG(sbi, segno)))
+			continue;
+
 		do_garbage_collect(sbi, segno, &gc_list, FG_GC, true, false);
 		put_gc_inode(&gc_list);


@@ -749,99 +749,6 @@ void f2fs_update_inode(struct inode *inode, struct page *node_page)
 #endif
 }

-static void f2fs_sanity_check_nat(struct f2fs_sb_info *sbi, pgoff_t nid)
-{
-	struct page *page;
-	struct node_info cni = { 0 }, jni = { 0 };
-	struct f2fs_nat_block *nat_blk;
-	struct f2fs_nat_entry ne;
-	nid_t start_nid;
-	struct f2fs_io_info fio = {
-		.sbi = sbi,
-		.type = NODE,
-		.op = REQ_OP_READ,
-		.op_flags = 0,
-		.encrypted_page = NULL,
-	};
-	int err;
-	int ret;
-
-	if (likely(!sbi->sanity_check))
-		return;
-
-	if (!is_sbi_flag_set(sbi, SBI_CP_DISABLED))
-		return;
-
-	/* nat entry */
-	ret = f2fs_get_nat_entry(sbi, &cni, &jni, nid);
-	if (ret) {
-		if (ret & NAT_JOURNAL_ENTRY)
-			f2fs_err(sbi, "nat entry in journal: [%u,%u,%u,%u,%u]",
-				jni.nid, jni.ino, jni.blk_addr, jni.version, jni.flag);
-		if (ret & NAT_CACHED_ENTRY)
-			f2fs_err(sbi, "nat entry in cache: [%u,%u,%u,%u,%u]",
-				cni.nid, cni.ino, cni.blk_addr, cni.version, cni.flag);
-	} else {
-		f2fs_err(sbi, "nat entry is not in cache&journal");
-	}
-
-	/* previous node block */
-	page = f2fs_get_prev_nat_page(sbi, nid);
-	if (IS_ERR(page))
-		return;
-
-	nat_blk = (struct f2fs_nat_block *)page_address(page);
-	start_nid = START_NID(nid);
-	ne = nat_blk->entries[nid - start_nid];
-	node_info_from_raw_nat(&cni, &ne);
-	ClearPageUptodate(page);
-	f2fs_put_page(page, 1);
-
-	f2fs_err(sbi, "previous node info: [%u,%u,%u,%u,%u]",
-		cni.nid, cni.ino, cni.blk_addr, cni.version, cni.flag);
-
-	if (cni.blk_addr == NULL_ADDR || cni.blk_addr == NEW_ADDR)
-		return;
-
-	page = f2fs_grab_cache_page(NODE_MAPPING(sbi), nid, false);
-	if (!page)
-		return;
-
-	fio.page = page;
-	fio.new_blkaddr = fio.old_blkaddr = cni.blk_addr;
-	err = f2fs_submit_page_bio(&fio);
-	if (err) {
-		f2fs_err(sbi, "f2fs_submit_page_bio fail err:%d", err);
-		goto out;
-	}
-
-	lock_page(page);
-	if (unlikely(page->mapping != NODE_MAPPING(sbi))) {
-		f2fs_err(sbi, "mapping dismatch");
-		goto out;
-	}
-
-	if (unlikely(!PageUptodate(page))) {
-		f2fs_err(sbi, "page is not uptodate");
-		goto out;
-	}
-
-	if (!f2fs_inode_chksum_verify(sbi, page)) {
-		f2fs_err(sbi, "f2fs_inode_chksum_verify fail");
-		goto out;
-	}
-
-	f2fs_err(sbi, "previous node block, nid:%lu, "
-		"node_footer[nid:%u,ino:%u,ofs:%u,cpver:%llu,blkaddr:%u]",
-		nid, nid_of_node(page), ino_of_node(page),
-		ofs_of_node(page), cpver_of_node(page),
-		next_blkaddr_of_node(page));
-out:
-	ClearPageUptodate(page);
-	f2fs_put_page(page, 1);
-}
-
 void f2fs_update_inode_page(struct inode *inode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
@@ -859,9 +766,6 @@ retry:
 		if (err == -ENOMEM || ++count <= DEFAULT_RETRY_IO_COUNT)
 			goto retry;
 		f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_UPDATE_INODE);
-		f2fs_err(sbi, "fail to get node page, ino:%lu, err: %d", inode->i_ino, err);
-		if (err == -EFSCORRUPTED)
-			f2fs_sanity_check_nat(sbi, inode->i_ino);
 		return;
 	}
 	f2fs_update_inode(inode, node_page);


@@ -164,12 +164,6 @@ static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
 	return dst_page;
 }

-struct page *f2fs_get_prev_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
-{
-	pgoff_t dst_off = next_nat_addr(sbi, current_nat_addr(sbi, nid));
-
-	return f2fs_get_meta_page(sbi, dst_off);
-}
-
 static struct nat_entry *__alloc_nat_entry(struct f2fs_sb_info *sbi,
 						nid_t nid, bool no_fail)
 {
@@ -383,39 +377,6 @@ void f2fs_reset_fsync_node_info(struct f2fs_sb_info *sbi)
 	spin_unlock_irqrestore(&sbi->fsync_node_lock, flags);
 }

-bool f2fs_get_nat_entry(struct f2fs_sb_info *sbi, struct node_info *cni,
-		struct node_info *jni, nid_t nid)
-{
-	struct f2fs_nm_info *nm_i = NM_I(sbi);
-	struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_HOT_DATA);
-	struct f2fs_journal *journal = curseg->journal;
-	struct nat_entry *e;
-	int ret = 0;
-	int i;
-
-	f2fs_down_read(&nm_i->nat_tree_lock);
-
-	/* lookup nat entry in journal */
-	i = f2fs_lookup_journal_in_cursum(journal, NAT_JOURNAL, nid, 0);
-	if (i >= 0) {
-		struct f2fs_nat_entry ne;
-
-		ne = nat_in_journal(journal, i);
-		node_info_from_raw_nat(jni, &ne);
-		ret |= NAT_JOURNAL_ENTRY;
-	}
-
-	/* lookup nat entry in cache */
-	e = __lookup_nat_cache(nm_i, nid);
-	if (e) {
-		*cni = e->ni;
-		ret |= NAT_CACHED_ENTRY;
-	}
-
-	f2fs_up_read(&nm_i->nat_tree_lock);
-	return ret;
-}
-
 int f2fs_need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);


@@ -429,7 +429,6 @@ static inline void __set_free(struct f2fs_sb_info *sbi, unsigned int segno)
 	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
 	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
 	unsigned int next;
-	unsigned int usable_segs = f2fs_usable_segs_in_sec(sbi);

 	spin_lock(&free_i->segmap_lock);
 	clear_bit(segno, free_i->free_segmap);
@@ -437,7 +436,7 @@ static inline void __set_free(struct f2fs_sb_info *sbi, unsigned int segno)
 	next = find_next_bit(free_i->free_segmap,
 			start_segno + SEGS_PER_SEC(sbi), start_segno);
-	if (next >= start_segno + usable_segs) {
+	if (next >= start_segno + f2fs_usable_segs_in_sec(sbi)) {
 		clear_bit(secno, free_i->free_secmap);
 		free_i->free_sections++;
 	}
@@ -463,22 +462,36 @@ static inline void __set_test_and_free(struct f2fs_sb_info *sbi,
 	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
 	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
 	unsigned int next;
-	unsigned int usable_segs = f2fs_usable_segs_in_sec(sbi);
+	bool ret;

 	spin_lock(&free_i->segmap_lock);
-	if (test_and_clear_bit(segno, free_i->free_segmap)) {
-		free_i->free_segments++;
-
-		if (!inmem && IS_CURSEC(sbi, secno))
-			goto skip_free;
-		next = find_next_bit(free_i->free_segmap,
-				start_segno + SEGS_PER_SEC(sbi), start_segno);
-		if (next >= start_segno + usable_segs) {
-			if (test_and_clear_bit(secno, free_i->free_secmap))
-				free_i->free_sections++;
-		}
-	}
-skip_free:
+	ret = test_and_clear_bit(segno, free_i->free_segmap);
+	if (!ret)
+		goto unlock_out;
+
+	free_i->free_segments++;
+
+	if (!inmem && IS_CURSEC(sbi, secno))
+		goto unlock_out;
+
+	/* check large section */
+	next = find_next_bit(free_i->free_segmap,
+			     start_segno + SEGS_PER_SEC(sbi), start_segno);
+	if (next < start_segno + f2fs_usable_segs_in_sec(sbi))
+		goto unlock_out;
+
+	ret = test_and_clear_bit(secno, free_i->free_secmap);
+	if (!ret)
+		goto unlock_out;
+
+	free_i->free_sections++;
+
+	if (GET_SEC_FROM_SEG(sbi, sbi->next_victim_seg[BG_GC]) == secno)
+		sbi->next_victim_seg[BG_GC] = NULL_SEGNO;
+	if (GET_SEC_FROM_SEG(sbi, sbi->next_victim_seg[FG_GC]) == secno)
+		sbi->next_victim_seg[FG_GC] = NULL_SEGNO;
+
+unlock_out:
 	spin_unlock(&free_i->segmap_lock);
 }


@@ -1122,8 +1122,6 @@ F2FS_SBI_GENERAL_RW_ATTR(max_read_extent_count);
 F2FS_SBI_GENERAL_RO_ATTR(unusable_blocks_per_sec);
 F2FS_SBI_GENERAL_RW_ATTR(blkzone_alloc_policy);
 #endif
-/* enable sanity check to dump more metadata info */
-F2FS_SBI_GENERAL_RW_ATTR(sanity_check);
 F2FS_SBI_GENERAL_RW_ATTR(carve_out);

 /* STAT_INFO ATTR */
@@ -1312,7 +1310,6 @@ static struct attribute *f2fs_attrs[] = {
 	ATTR_LIST(warm_data_age_threshold),
 	ATTR_LIST(last_age_weight),
 	ATTR_LIST(max_read_extent_count),
-	ATTR_LIST(sanity_check),
 	ATTR_LIST(carve_out),
 	NULL,
 };


@@ -799,10 +799,6 @@ int fuse_file_read_iter_initialize(
 		.size = to->count,
 	};

-	fri->frio = (struct fuse_read_iter_out) {
-		.ret = fri->fri.size,
-	};
-
 	/* TODO we can't assume 'to' is a kvec */
 	/* TODO we also can't assume the vector has only one component */
 	*fa = (struct fuse_bpf_args) {
@@ -837,11 +833,6 @@ int fuse_file_read_iter_backing(struct fuse_bpf_args *fa,
 	if (!iov_iter_count(to))
 		return 0;

-	if ((iocb->ki_flags & IOCB_DIRECT) &&
-	    (!ff->backing_file->f_mapping->a_ops ||
-	     !ff->backing_file->f_mapping->a_ops->direct_IO))
-		return -EINVAL;
-
 	/* TODO This just plain ignores any change to fuse_read_in */
 	if (is_sync_kiocb(iocb)) {
 		ret = vfs_iter_read(ff->backing_file, to, &iocb->ki_pos,
@@ -864,14 +855,13 @@ int fuse_file_read_iter_backing(struct fuse_bpf_args *fa,
 		fuse_bpf_aio_cleanup_handler(aio_req);
 	}

-	frio->ret = ret;
-
 	/* TODO Need to point value at the buffer for post-modification */
 out:
 	fuse_file_accessed(file, ff->backing_file);
-	return ret;
+	frio->ret = ret;
+	return ret < 0 ? ret : 0;
 }

 void *fuse_file_read_iter_finalize(struct fuse_bpf_args *fa,


@@ -281,7 +281,7 @@ show_map_vma(struct seq_file *m, struct vm_area_struct *vma)
 	}

 	start = vma->vm_start;
-	end = vma->vm_end;
+	end = VMA_PAD_START(vma);
 	__fold_filemap_fixup_entry(&((struct proc_maps_private *)m->private)->iter, &end);
@@ -345,13 +345,12 @@ done:
 static int show_map(struct seq_file *m, void *v)
 {
-	struct vm_area_struct *pad_vma = get_pad_vma(v);
-	struct vm_area_struct *vma = get_data_vma(v);
+	struct vm_area_struct *vma = v;

 	if (vma_pages(vma))
 		show_map_vma(m, vma);

-	show_map_pad_vma(vma, pad_vma, m, show_map_vma, false);
+	show_map_pad_vma(vma, m, show_map_vma, false);

 	return 0;
 }
@@ -726,18 +725,24 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
 		[ilog2(VM_SEALED)] = "sl",
 #endif
 	};
+	unsigned long pad_pages = vma_pad_pages(vma);
 	size_t i;

 	seq_puts(m, "VmFlags: ");
 	for (i = 0; i < BITS_PER_LONG; i++) {
 		if (!mnemonics[i][0])
 			continue;
+		if ((1UL << i) & VM_PAD_MASK)
+			continue;
 		if (vma->vm_flags & (1UL << i)) {
 			seq_putc(m, mnemonics[i][0]);
 			seq_putc(m, mnemonics[i][1]);
 			seq_putc(m, ' ');
 		}
 	}
+	if (pad_pages)
+		seq_printf(m, "pad=%lukB", pad_pages << (PAGE_SHIFT - 10));
 	seq_putc(m, '\n');
 }
@@ -794,9 +799,10 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 		struct mem_size_stats *mss, unsigned long start)
 {
 	const struct mm_walk_ops *ops = &smaps_walk_ops;
+	unsigned long end = VMA_PAD_START(vma);

 	/* Invalid start */
-	if (start >= vma->vm_end)
+	if (start >= end)
 		return;

 	if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) {
@@ -813,7 +819,15 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 		unsigned long shmem_swapped = shmem_swap_usage(vma);

 		if (!start && (!shmem_swapped || (vma->vm_flags & VM_SHARED) ||
-					!(vma->vm_flags & VM_WRITE))) {
+					!(vma->vm_flags & VM_WRITE)) &&
+				/*
+				 * Only if we don't have padding can we use the fast path
+				 * shmem_inode_info->swapped for shmem_swapped.
+				 *
+				 * Else we'll walk the page table to calculate
+				 * shmem_swapped, (excluding the padding region).
+				 */
+				end == vma->vm_end) {
 			mss->swap += shmem_swapped;
 		} else {
 			ops = &smaps_shmem_walk_ops;
@@ -822,9 +836,9 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 	/* mmap_lock is held in m_start */
 	if (!start)
-		walk_page_vma(vma, ops, mss);
+		walk_page_range(vma->vm_mm, vma->vm_start, end, ops, mss);
 	else
-		walk_page_range(vma->vm_mm, start, vma->vm_end, ops, mss);
+		walk_page_range(vma->vm_mm, start, end, ops, mss);
 }

 #define SEQ_PUT_DEC(str, val) \
@@ -875,8 +889,7 @@ static void __show_smap(struct seq_file *m, const struct mem_size_stats *mss,
 static int show_smap(struct seq_file *m, void *v)
 {
-	struct vm_area_struct *pad_vma = get_pad_vma(v);
-	struct vm_area_struct *vma = get_data_vma(v);
+	struct vm_area_struct *vma = v;
 	struct mem_size_stats mss;

 	memset(&mss, 0, sizeof(mss));
@@ -888,7 +901,7 @@ static int show_smap(struct seq_file *m, void *v)
 	show_map_vma(m, vma);

-	SEQ_PUT_DEC("Size: ", vma->vm_end - vma->vm_start);
+	SEQ_PUT_DEC("Size: ", VMA_PAD_START(vma) - vma->vm_start);
 	SEQ_PUT_DEC(" kB\nKernelPageSize: ", vma_kernel_pagesize(vma));
 	SEQ_PUT_DEC(" kB\nMMUPageSize: ", vma_mmu_pagesize(vma));
 	seq_puts(m, " kB\n");
@@ -904,7 +917,7 @@ static int show_smap(struct seq_file *m, void *v)
 	show_smap_vma_flags(m, vma);

 show_pad:
-	show_map_pad_vma(vma, pad_vma, m, show_smap, true);
+	show_map_pad_vma(vma, m, show_smap, true);

 	return 0;
 }
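
Taken together, the task_mmu.c hunks above clip every per-VMA statistic at VMA_PAD_START(vma) and report the padding separately via show_map_pad_vma() and the new pad= annotation. Purely for illustration (the flag letters and the 64kB figure are made-up values, not from this patch), a padded VMA's VmFlags line would now end like:

 VmFlags: rd mr mw me pad=64kB

where 64kB corresponds to pad_pages == 16 with 4K pages, per the pad_pages << (PAGE_SHIFT - 10) conversion above.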


@@ -28,6 +28,8 @@
 #include <linux/rcupdate.h>
 #include <linux/time_namespace.h>
+#include <trace/hooks/fs.h>

 struct timerfd_ctx {
 	union {
 		struct hrtimer tmr;
@@ -407,6 +409,7 @@ SYSCALL_DEFINE2(timerfd_create, int, clockid, int, flags)
 {
 	int ufd;
 	struct timerfd_ctx *ctx;
+	char file_name_buf[32];

 	/* Check the TFD_* constants for consistency.  */
 	BUILD_BUG_ON(TFD_CLOEXEC != O_CLOEXEC);
@@ -443,7 +446,9 @@ SYSCALL_DEFINE2(timerfd_create, int, clockid, int, flags)
 	ctx->moffs = ktime_mono_to_real(0);

-	ufd = anon_inode_getfd("[timerfd]", &timerfd_fops, ctx,
+	strscpy(file_name_buf, "[timerfd]", sizeof(file_name_buf));
+	trace_android_vh_timerfd_create(file_name_buf, sizeof(file_name_buf));
+	ufd = anon_inode_getfd(file_name_buf, &timerfd_fops, ctx,
 			O_RDWR | (flags & TFD_SHARED_FCNTL_FLAGS));
 	if (ufd < 0)
 		kfree(ctx);
@@ -451,7 +456,7 @@ SYSCALL_DEFINE2(timerfd_create, int, clockid, int, flags)
 	return ufd;
 }

 static int do_timerfd_settime(int ufd, int flags,
 		const struct itimerspec64 *new,
 		struct itimerspec64 *old)
 {
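
The hook fires before anon_inode_getfd(), so a vendor module can rewrite the name buffer that becomes the inode's name. A minimal consumer sketch, assuming the register_trace_android_vh_* helper that DECLARE_HOOK generates (module, probe, and replacement name are hypothetical, not part of this patch):

 #include <linux/module.h>
 #include <linux/string.h>
 #include <trace/hooks/fs.h>

 /* Probe signature: void (*)(void *data, <TP_PROTO args>) */
 static void vendor_timerfd_create(void *data, char *name, int len)
 {
 	/* Rename the anon inode so vendor tooling can tell timerfds apart. */
 	strscpy(name, "[timerfd:vendor]", len);
 }

 static int __init vendor_fs_hooks_init(void)
 {
 	return register_trace_android_vh_timerfd_create(vendor_timerfd_create, NULL);
 }
 module_init(vendor_fs_hooks_init);
 MODULE_LICENSE("GPL");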


@@ -233,6 +233,14 @@
         },
         {
             "name": "vts_kernel_net_tests"
+        },
+        {
+            "name": "CtsJobSchedulerTestCases",
+            "options": [
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testConnectivityConstraintExecutes_withMobile"
+                }
+            ]
         }
     ],
     "presubmit-large": [


@@ -241,6 +241,26 @@
         },
         {
             "name": "vts_kernel_net_tests"
+        },
+        {
+            "name": "CtsJobSchedulerTestCases",
+            "options": [
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testCellularConstraintExecutedAndStopped"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testConnectivityConstraintExecutes_transitionNetworks"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testConnectivityConstraintExecutes_withMobile"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testEJMeteredConstraintFails_withMobile_DataSaverOn"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testMeteredConstraintFails_withMobile_DataSaverOn"
+                }
+            ]
         }
     ],
     "presubmit-large": [


@@ -257,6 +257,9 @@
 			   ARM_SMCCC_OWNER_STANDARD, \
 			   0x53)

+#define ARM_CCA_FUNC_END	0x840001CF
+#define ARM_CCA_64BIT_FUNC_END	0xC40001CF
+
 /*
  * Return codes defined in ARM DEN 0070A
  * ARM DEN 0070A is now merged/consolidated into ARM DEN 0028 C


@@ -40,9 +40,24 @@ struct damon_addr_range {
  * @ar:			The address range of the region.
  * @sampling_addr:	Address of the sample for the next access check.
  * @nr_accesses:	Access frequency of this region.
+ * @nr_accesses_bp:	@nr_accesses in basis point (0.01%) that updated for
+ *			each sampling interval.
  * @list:		List head for siblings.
  * @age:		Age of this region.
  *
+ * @nr_accesses is reset to zero for every &damon_attrs->aggr_interval and be
+ * increased for every &damon_attrs->sample_interval if an access to the region
+ * during the last sampling interval is found.  The update of this field should
+ * not be done with direct access but with the helper function,
+ * damon_update_region_access_rate().
+ *
+ * @nr_accesses_bp is another representation of @nr_accesses in basis point
+ * (1 in 10,000) that updated for every &damon_attrs->sample_interval in a
+ * manner similar to moving sum.  By the algorithm, this value becomes
+ * @nr_accesses * 10000 for every &struct damon_attrs->aggr_interval.  This can
+ * be used when the aggregation interval is too huge and therefore cannot wait
+ * for it before getting the access monitoring results.
+ *
  * @age is initially zero, increased for each aggregation interval, and reset
  * to zero again if the access frequency is significantly changed.  If two
  * regions are merged into a new region, both @nr_accesses and @age of the new
@@ -52,6 +67,7 @@ struct damon_region {
 	struct damon_addr_range ar;
 	unsigned long sampling_addr;
 	unsigned int nr_accesses;
+	unsigned int nr_accesses_bp;
 	struct list_head list;

 	unsigned int age;
@@ -631,6 +647,8 @@ void damon_add_region(struct damon_region *r, struct damon_target *t);
 void damon_destroy_region(struct damon_region *r, struct damon_target *t);
 int damon_set_regions(struct damon_target *t, struct damon_addr_range *ranges,
 		unsigned int nr_ranges);
+void damon_update_region_access_rate(struct damon_region *r, bool accessed,
+		struct damon_attrs *attrs);

 struct damos_filter *damos_new_filter(enum damos_filter_type type,
 		bool matching);


@@ -422,9 +422,6 @@ struct f2fs_sit_block {
 	struct f2fs_sit_entry entries[SIT_ENTRY_PER_BLOCK];
 } __packed;

-#define NAT_CACHED_ENTRY	1
-#define NAT_JOURNAL_ENTRY	2
-
 /*
  * For segment summary
  *


@@ -90,7 +90,7 @@ struct ipv6_devconf {
 	ANDROID_KABI_RESERVE(1);
 	ANDROID_KABI_RESERVE(2);
 	ANDROID_KABI_RESERVE(3);
-	ANDROID_KABI_RESERVE(4);
+	ANDROID_KABI_BACKPORT_OK(4);
 };

 struct ipv6_params {


@@ -3757,24 +3757,22 @@ static inline bool page_is_guard(struct page *page)
 	return PageGuard(page);
 }

-bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
-		      int migratetype);
+bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order);
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-				  unsigned int order, int migratetype)
+				  unsigned int order)
 {
 	if (!debug_guardpage_enabled())
 		return false;
-	return __set_page_guard(zone, page, order, migratetype);
+	return __set_page_guard(zone, page, order);
 }

-void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order,
-			int migratetype);
+void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order);
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-				    unsigned int order, int migratetype)
+				    unsigned int order)
 {
 	if (!debug_guardpage_enabled())
 		return;
-	__clear_page_guard(zone, page, order, migratetype);
+	__clear_page_guard(zone, page, order);
 }
 #else /* CONFIG_DEBUG_PAGEALLOC */
@@ -3784,9 +3782,9 @@ static inline unsigned int debug_guardpage_minorder(void) { return 0; }
 static inline bool debug_guardpage_enabled(void) { return false; }
 static inline bool page_is_guard(struct page *page) { return false; }
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-			unsigned int order, int migratetype) { return false; }
+			unsigned int order) { return false; }
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-			unsigned int order, int migratetype) {}
+			unsigned int order) {}
 #endif /* CONFIG_DEBUG_PAGEALLOC */

 #ifdef __HAVE_ARCH_GATE_AREA


@@ -247,6 +247,11 @@ static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio,
 	int type = folio_is_file_lru(folio);
 	int zone = folio_zonenum(folio);
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
+	bool skip = false;
+
+	trace_android_vh_lru_gen_add_folio_skip(lruvec, folio, &skip);
+	if (skip)
+		return true;

 	VM_WARN_ON_ONCE_FOLIO(gen != -1, folio);
@@ -294,6 +299,11 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
 {
 	unsigned long flags;
 	int gen = folio_lru_gen(folio);
+	bool skip = false;
+
+	trace_android_vh_lru_gen_del_folio_skip(lruvec, folio, &skip);
+	if (skip)
+		return true;

 	if (gen < 0)
 		return false;
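
Both hooks follow the usual vendor-hook contract: setting *skip makes lru_gen_add_folio()/lru_gen_del_folio() return true without touching the generations, leaving placement to the vendor module. A minimal consumer sketch, assuming the register_trace_android_vh_* helpers that DECLARE_HOOK generates (the probe name and the KSM-based filter are hypothetical):

 #include <trace/hooks/mm.h>

 static void vendor_lru_gen_add_skip(void *data, struct lruvec *lruvec,
 				    struct folio *folio, bool *skip)
 {
 	/* Claim folios that a vendor-managed LRU wants to track itself. */
 	if (folio_test_ksm(folio))
 		*skip = true;
 }

 /* At module init:
  *   register_trace_android_vh_lru_gen_add_folio_skip(vendor_lru_gen_add_skip, NULL);
  */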


@@ -34,8 +34,9 @@ static inline bool is_migrate_isolate(int migratetype)
 #define REPORT_FAILURE	0x2

 void set_pageblock_migratetype(struct page *page, int migratetype);
-int move_freepages_block(struct zone *zone, struct page *page,
-			 int migratetype, int *num_movable);
+
+bool move_freepages_block_isolate(struct zone *zone, struct page *page,
+				  int migratetype);

 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 			     int migratetype, int flags, gfp_t gfp_flags);


@@ -26,12 +26,7 @@ extern unsigned long vma_pad_pages(struct vm_area_struct *vma);
 extern void madvise_vma_pad_pages(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end);

-extern struct vm_area_struct *get_pad_vma(struct vm_area_struct *vma);
-extern struct vm_area_struct *get_data_vma(struct vm_area_struct *vma);
-
 extern void show_map_pad_vma(struct vm_area_struct *vma,
-		struct vm_area_struct *pad,
 		struct seq_file *m, void *func, bool smaps);

 extern void split_pad_vma(struct vm_area_struct *vma, struct vm_area_struct *new,
@@ -57,18 +52,7 @@ static inline void madvise_vma_pad_pages(struct vm_area_struct *vma,
 {
 }

-static inline struct vm_area_struct *get_pad_vma(struct vm_area_struct *vma)
-{
-	return NULL;
-}
-
-static inline struct vm_area_struct *get_data_vma(struct vm_area_struct *vma)
-{
-	return vma;
-}
-
 static inline void show_map_pad_vma(struct vm_area_struct *vma,
-		struct vm_area_struct *pad,
 		struct seq_file *m, void *func, bool smaps)
 {
 }


@@ -180,7 +180,9 @@ static inline unsigned int __map_depth(const struct sbitmap *sb, int index)
 static inline void sbitmap_free(struct sbitmap *sb)
 {
 	free_percpu(sb->alloc_hint);
-	kvfree(sb->map);
+	if (!sb->map)
+		return;
+	kvfree(sb->map - 1);
 	sb->map = NULL;
 }
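
The "- 1" pairs with the "sb->map++" that sbitmap_init_node() now does (see the lib/sbitmap.c hunks below): element [-1] of the array is a hidden header holding map_nr, so kvfree() must be handed the pointer kvzalloc_node() actually returned. Schematically, from the two hunks:

 /* sbitmap_init_node(): */
 sb->map = kvzalloc_node(map_size, flags, node);
 *(unsigned int *)sb->map = sb->map_nr;	/* header word lives in map[-1] */
 sb->map++;				/* callers index map[0..map_nr-1] */

 /* sbitmap_free(): */
 kvfree(sb->map - 1);			/* undo the ++ before freeing */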


@@ -487,14 +487,6 @@ static inline void node_stat_sub_folio(struct folio *folio,
 	mod_node_page_state(folio_pgdat(folio), item, -folio_nr_pages(folio));
 }

-static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
-					     int migratetype)
-{
-	__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
-	if (is_migrate_cma(migratetype))
-		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
-}
-
 extern const char * const vmstat_text[];

 static inline const char *zone_stat_name(enum zone_stat_item item)


@@ -241,6 +241,26 @@
         },
         {
             "name": "vts_kernel_net_tests"
+        },
+        {
+            "name": "CtsJobSchedulerTestCases",
+            "options": [
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testCellularConstraintExecutedAndStopped"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testConnectivityConstraintExecutes_transitionNetworks"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testConnectivityConstraintExecutes_withMobile"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testEJMeteredConstraintFails_withMobile_DataSaverOn"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testMeteredConstraintFails_withMobile_DataSaverOn"
+                }
+            ]
         }
     ],
     "presubmit-large": [


@@ -9,6 +9,45 @@
 #include <linux/types.h>
 #include <linux/tracepoint.h>

+TRACE_EVENT_CONDITION(damos_before_apply,
+
+	TP_PROTO(unsigned int context_idx, unsigned int scheme_idx,
+		unsigned int target_idx, struct damon_region *r,
+		unsigned int nr_regions, bool do_trace),
+
+	TP_ARGS(context_idx, target_idx, scheme_idx, r, nr_regions, do_trace),
+
+	TP_CONDITION(do_trace),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, context_idx)
+		__field(unsigned int, scheme_idx)
+		__field(unsigned long, target_idx)
+		__field(unsigned long, start)
+		__field(unsigned long, end)
+		__field(unsigned int, nr_accesses)
+		__field(unsigned int, age)
+		__field(unsigned int, nr_regions)
+	),
+
+	TP_fast_assign(
+		__entry->context_idx = context_idx;
+		__entry->scheme_idx = scheme_idx;
+		__entry->target_idx = target_idx;
+		__entry->start = r->ar.start;
+		__entry->end = r->ar.end;
+		__entry->nr_accesses = r->nr_accesses_bp / 10000;
+		__entry->age = r->age;
+		__entry->nr_regions = nr_regions;
+	),
+
+	TP_printk("ctx_idx=%u scheme_idx=%u target_idx=%lu nr_regions=%u %lu-%lu: %u %u",
+		__entry->context_idx, __entry->scheme_idx,
+		__entry->target_idx, __entry->nr_regions,
+		__entry->start, __entry->end,
+		__entry->nr_accesses, __entry->age)
+);
+
 TRACE_EVENT(damon_aggregated,

 	TP_PROTO(struct damon_target *t, unsigned int target_id,


@@ -45,6 +45,12 @@ DECLARE_HOOK(android_vh_blk_mq_kick_requeue_list,
 	TP_PROTO(struct request_queue *q, unsigned long delay, bool *skip),
 	TP_ARGS(q, delay, skip));

+struct bio;
+DECLARE_HOOK(android_vh_check_set_ioprio,
+	TP_PROTO(struct bio *bio),
+	TP_ARGS(bio));
+
 #endif /* _TRACE_HOOK_BLK_H */
 /* This part must be outside protection */
 #include <trace/define_trace.h>


@@ -28,6 +28,13 @@ DECLARE_HOOK(android_vh_f2fs_restore_priority,
 	TP_PROTO(struct task_struct *p, int saved_prio),
 	TP_ARGS(p, saved_prio));

+DECLARE_HOOK(android_vh_ep_create_wakeup_source,
+	TP_PROTO(char *name, int len),
+	TP_ARGS(name, len));
+
+DECLARE_HOOK(android_vh_timerfd_create,
+	TP_PROTO(char *name, int len),
+	TP_ARGS(name, len));
+
 #endif /* _TRACE_HOOK_FS_H */
 /* This part must be outside protection */


@@ -11,6 +11,10 @@ DECLARE_RESTRICTED_HOOK(android_rvh_process_madvise_bypass,
 	TP_PROTO(int pidfd, const struct iovec __user *vec, size_t vlen,
 		int behavior, unsigned int flags, ssize_t *ret, bool *bypass),
 	TP_ARGS(pidfd, vec, vlen, behavior, flags, ret, bypass), 1);
+DECLARE_RESTRICTED_HOOK(android_rvh_do_madvise_bypass,
+	TP_PROTO(struct mm_struct *mm, unsigned long start,
+		size_t len_in, int behavior, int *ret, bool *bypass),
+	TP_ARGS(mm, start, len_in, behavior, ret, bypass), 1);
 struct vm_area_struct;
 DECLARE_HOOK(android_vh_update_vma_flags,
 	TP_PROTO(struct vm_area_struct *vma),
@@ -27,4 +31,4 @@ DECLARE_HOOK(android_vh_madvise_pageout_bypass,
 #endif

 #include <trace/define_trace.h>


@@ -22,6 +22,14 @@ struct vm_unmapped_area_info;
 DECLARE_RESTRICTED_HOOK(android_rvh_shmem_get_folio,
 	TP_PROTO(struct shmem_inode_info *info, struct folio **folio),
 	TP_ARGS(info, folio), 2);
+DECLARE_RESTRICTED_HOOK(android_rvh_perform_reclaim,
+	TP_PROTO(int order, gfp_t gfp_mask, nodemask_t *nodemask,
+		unsigned long *progress, bool *skip),
+	TP_ARGS(order, gfp_mask, nodemask, progress, skip), 4);
+DECLARE_RESTRICTED_HOOK(android_rvh_do_traversal_lruvec_ex,
+	TP_PROTO(struct mem_cgroup *memcg, struct lruvec *lruvec,
+		bool *stop),
+	TP_ARGS(memcg, lruvec, stop), 3);
 DECLARE_HOOK(android_vh_shmem_mod_shmem,
 	TP_PROTO(struct address_space *mapping, long nr_pages),
 	TP_ARGS(mapping, nr_pages));
@@ -398,6 +406,9 @@ DECLARE_HOOK(android_vh_filemap_update_page,
 	TP_PROTO(struct address_space *mapping, struct folio *folio,
 		struct file *file),
 	TP_ARGS(mapping, folio, file));
+DECLARE_HOOK(android_vh_filemap_pages,
+	TP_PROTO(struct folio *folio),
+	TP_ARGS(folio));

 DECLARE_HOOK(android_vh_lruvec_add_folio,
 	TP_PROTO(struct lruvec *lruvec, struct folio *folio, enum lru_list lru,
@@ -408,6 +419,12 @@ DECLARE_HOOK(android_vh_lruvec_del_folio,
 	TP_PROTO(struct lruvec *lruvec, struct folio *folio, enum lru_list lru,
 		bool *skip),
 	TP_ARGS(lruvec, folio, lru, skip));
+DECLARE_HOOK(android_vh_lru_gen_add_folio_skip,
+	TP_PROTO(struct lruvec *lruvec, struct folio *folio, bool *skip),
+	TP_ARGS(lruvec, folio, skip));
+DECLARE_HOOK(android_vh_lru_gen_del_folio_skip,
+	TP_PROTO(struct lruvec *lruvec, struct folio *folio, bool *skip),
+	TP_ARGS(lruvec, folio, skip));
 DECLARE_HOOK(android_vh_add_lazyfree_bypass,
 	TP_PROTO(struct lruvec *lruvec, struct folio *folio, bool *bypass),
 	TP_ARGS(lruvec, folio, bypass));


@@ -12,6 +12,9 @@
 DECLARE_RESTRICTED_HOOK(android_rvh_set_balance_anon_file_reclaim,
 	TP_PROTO(bool *balance_anon_file_reclaim),
 	TP_ARGS(balance_anon_file_reclaim), 1);
+DECLARE_RESTRICTED_HOOK(android_rvh_kswapd_shrink_node,
+	TP_PROTO(unsigned long *nr_reclaimed),
+	TP_ARGS(nr_reclaimed), 1);
 DECLARE_HOOK(android_vh_tune_swappiness,
 	TP_PROTO(int *swappiness),
 	TP_ARGS(swappiness));
@@ -52,6 +55,15 @@ DECLARE_HOOK(android_vh_inode_lru_isolate,
 DECLARE_HOOK(android_vh_invalidate_mapping_pagevec,
 	TP_PROTO(struct address_space *mapping, bool *skip),
 	TP_ARGS(mapping, skip));
+DECLARE_HOOK(android_vh_keep_reclaimed_folio,
+	TP_PROTO(struct folio *folio, int refcount, bool *keep),
+	TP_ARGS(folio, refcount, keep));
+DECLARE_HOOK(android_vh_clear_reclaimed_folio,
+	TP_PROTO(struct folio *folio, bool reclaimed),
+	TP_ARGS(folio, reclaimed));
+DECLARE_HOOK(android_vh_evict_folios_bypass,
+	TP_PROTO(struct folio *folio, bool *bypass),
+	TP_ARGS(folio, bypass));
 enum scan_balance;
 DECLARE_HOOK(android_vh_tune_scan_type,


@@ -100,9 +100,10 @@ enum upiu_response_transaction {
 	UPIU_TRANSACTION_REJECT_UPIU	= 0x3F,
 };

-/* UPIU Read/Write flags */
+/* UPIU Read/Write flags. See also table "UPIU Flags" in the UFS standard. */
 enum {
 	UPIU_CMD_FLAGS_NONE	= 0x00,
+	UPIU_CMD_FLAGS_CP	= 0x04,
 	UPIU_CMD_FLAGS_WRITE	= 0x20,
 	UPIU_CMD_FLAGS_READ	= 0x40,
 };


@@ -248,7 +248,15 @@ struct ufs_query {
 struct ufs_dev_cmd {
 	enum dev_cmd_type type;
 	struct mutex lock;
-	struct completion *complete;
+	struct completion *complete
+	/*
+	 * Apparently the CRC generated by the ABI checker changes if an
+	 * attribute is added to a structure member. Hence the #ifndef below.
+	 */
+#ifndef __GENKSYMS__
+	__attribute__((deprecated))
+#endif
+	;
 	struct ufs_query query;
 };


@@ -2304,9 +2304,37 @@ static struct file_system_type cgroup2_fs_type = {
 };

 #ifdef CONFIG_CPUSETS
+enum cpuset_param {
+	Opt_cpuset_v2_mode,
+};
+
+static const struct fs_parameter_spec cpuset_fs_parameters[] = {
+	fsparam_flag  ("cpuset_v2_mode", Opt_cpuset_v2_mode),
+	{}
+};
+
+static int cpuset_parse_param(struct fs_context *fc, struct fs_parameter *param)
+{
+	struct cgroup_fs_context *ctx = cgroup_fc2context(fc);
+	struct fs_parse_result result;
+	int opt;
+
+	opt = fs_parse(fc, cpuset_fs_parameters, param, &result);
+	if (opt < 0)
+		return opt;
+
+	switch (opt) {
+	case Opt_cpuset_v2_mode:
+		ctx->flags |= CGRP_ROOT_CPUSET_V2_MODE;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
 static const struct fs_context_operations cpuset_fs_context_ops = {
 	.get_tree	= cgroup1_get_tree,
 	.free		= cgroup_fs_context_free,
+	.parse_param	= cpuset_parse_param,
 };

 /*
@@ -2343,6 +2371,7 @@ static int cpuset_init_fs_context(struct fs_context *fc)
 static struct file_system_type cpuset_fs_type = {
 	.name			= "cpuset",
 	.init_fs_context	= cpuset_init_fs_context,
+	.parameters		= cpuset_fs_parameters,
 	.fs_flags		= FS_USERNS_MOUNT,
 };
 #endif
#endif #endif

View File

@@ -1561,4 +1561,5 @@ struct cgroup_subsys_state *kthread_blkcg(void)
 	}
 	return NULL;
 }
+EXPORT_SYMBOL_GPL(kthread_blkcg);
 #endif


@@ -32,7 +32,13 @@ obj-$(CONFIG_MODULE_STATS) += stats.o
 $(obj)/gki_module.o: include/generated/gki_module_protected_exports.h \
 			include/generated/gki_module_unprotected.h

+ifneq ($(CONFIG_UNUSED_KSYMS_WHITELIST),)
+ALL_KMI_SYMBOLS := $(CONFIG_UNUSED_KSYMS_WHITELIST)
+else
 ALL_KMI_SYMBOLS := include/config/abi_gki_kmi_symbols
+$(ALL_KMI_SYMBOLS):
+	: > $@
+endif

 include/generated/gki_module_unprotected.h: $(ALL_KMI_SYMBOLS) \
 		$(srctree)/scripts/gen_gki_modules_headers.sh
@@ -43,10 +49,6 @@ include/generated/gki_module_unprotected.h: $(ALL_KMI_SYMBOLS) \
 # AARCH is the same as ARCH, except that arm64 becomes aarch64
 AARCH := $(if $(filter arm64,$(ARCH)),aarch64,$(ARCH))

-# Generate symbol list with union of all symbol list for ARCH
-$(ALL_KMI_SYMBOLS): $(wildcard $(srctree)/android/abi_gki_$(AARCH) $(srctree)/android/abi_gki_$(AARCH)_*)
-	$(if $(strip $^),cat $^ > $(ALL_KMI_SYMBOLS), echo "" > $(ALL_KMI_SYMBOLS))
-
 # ABI protected exports list file specific to ARCH if exists else empty
 ABI_PROTECTED_EXPORTS_FILE := $(wildcard $(srctree)/android/abi_gki_protected_exports_$(AARCH))
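
For context, CONFIG_UNUSED_KSYMS_WHITELIST is the existing Kconfig string option naming a symbol-list file. With the hunk above, a build that sets it reuses that file directly as ALL_KMI_SYMBOLS, and only an unset option falls back to creating an empty include/config/abi_gki_kmi_symbols (the ": > $@" recipe). An illustrative fragment; the path is an example, not mandated by this patch:

 CONFIG_TRIM_UNUSED_KSYMS=y
 CONFIG_UNUSED_KSYMS_WHITELIST="android/abi_gki_aarch64"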


@@ -16,14 +16,14 @@ static int __regset_get(struct task_struct *target,
 	if (size > regset->n * regset->size)
 		size = regset->n * regset->size;
 	if (!p) {
-		to_free = p = kzalloc(size, GFP_KERNEL);
+		to_free = p = kvzalloc(size, GFP_KERNEL);
 		if (!p)
 			return -ENOMEM;
 	}
 	res = regset->regset_get(target, regset,
 			(struct membuf){.p = p, .left = size});
 	if (res < 0) {
-		kfree(to_free);
+		kvfree(to_free);
 		return res;
 	}
 	*data = p;
@@ -71,6 +71,6 @@ int copy_regset_to_user(struct task_struct *target,
 	ret = regset_get_alloc(target, regset, size, &buf);
 	if (ret > 0)
 		ret = copy_to_user(data, buf, ret) ? -EFAULT : 0;
-	kfree(buf);
+	kvfree(buf);
 	return ret;
 }


@@ -236,6 +236,26 @@
                     "include-filter": "kselftest_x86_test_mremap_vdso"
                 }
             ]
+        },
+        {
+            "name": "CtsJobSchedulerTestCases",
+            "options": [
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testCellularConstraintExecutedAndStopped"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testConnectivityConstraintExecutes_transitionNetworks"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testConnectivityConstraintExecutes_withMobile"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testEJMeteredConstraintFails_withMobile_DataSaverOn"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testMeteredConstraintFails_withMobile_DataSaverOn"
+                }
+            ]
         }
     ],
     "presubmit-large": [


@@ -100,24 +100,28 @@ static inline void update_alloc_hint_after_get(struct sbitmap *sb,
  *
  * Return: Returns the spinlock corresponding to index.
  */
-static spinlock_t *sbitmap_spinlock(struct sbitmap_word *map,
-				    unsigned int map_nr, unsigned int index)
+static spinlock_t *sbitmap_spinlock(struct sbitmap *sb, unsigned int index)
 {
-	spinlock_t *base_lock = (spinlock_t *)(&map[map_nr - index]);
+	const unsigned int max_map_nr = *(unsigned int *)&sb->map[-1];
+	spinlock_t *const base_lock = (spinlock_t *)
+		round_up((uintptr_t)&sb->map[max_map_nr],
+			 __alignof__(spinlock_t));

+	WARN_ON_ONCE(index < 0 || index >= sb->map_nr);
+	BUG_ON(((unsigned long)base_lock % __alignof__(spinlock_t)));
 	return &base_lock[index];
 }

 /*
  * See if we have deferred clears that we can batch move
  */
-static inline bool sbitmap_deferred_clear(struct sbitmap_word *map,
+static inline bool sbitmap_deferred_clear(struct sbitmap *sb,
+	struct sbitmap_word *map,
 	unsigned int depth, unsigned int alloc_hint, bool wrap,
 	unsigned int map_nr, unsigned int index)
 {
 	unsigned long mask, word_mask;
-	spinlock_t *swap_lock = sbitmap_spinlock(map, map_nr, index);
+	spinlock_t *swap_lock = sbitmap_spinlock(sb, index);

 	guard(spinlock_irqsave)(swap_lock);
@@ -183,13 +187,17 @@ int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
 		sb->alloc_hint = NULL;
 	}

-	/* Due to 72d04bdcf3f7 ("sbitmap: fix io hung due to race on sbitmap_word
-	 * ::cleared") directly adding spinlock_t swap_1ock to struct sbitmap_word
-	 * in sbitmap.h, KMI was damaged. In order to achieve functionality without
-	 * damaging KMI, we can only apply for a block of memory with a size of
-	 * map_nr * (sizeof (* sb ->map)+sizeof(spinlock_t)) to ensure that each
-	 * struct sbitmap-word receives protection from spinlock.
-	 * The actual memory distribution used is as follows:
+	/*
+	 * Commit 72d04bdcf3f7 ("sbitmap: fix io hung due to race on
+	 * sbitmap_word::cleared") broke the KMI by adding `spinlock_t
+	 * swap_lock` in struct sbitmap_word in sbitmap.h. Restore the KMI by
+	 * making sb->map larger and by storing the size of the sb->map array
+	 * and the spinlock instances in that array.
+	 *
+	 * The memory layout of sb->map is as follows:
+	 * ----------------------
+	 * struct sbitmap_word[-1] - only the first four bytes are used to
+	 * store max_map_nr.
 	 * ----------------------
 	 * struct sbitmap_word[0]
 	 * ......................
@@ -199,17 +207,23 @@ int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
 	 * .......................
 	 * spinlock_t swap_lock[n]
 	 * ----------------------
-	 * sbitmap_word[0] corresponds to swap_lock[0], and sbitmap_word[n]
-	 * corresponds to swap_lock[n], and so on
+	 *
+	 * sbitmap_word[0] corresponds to swap_lock[0], and sbitmap_word[n]
+	 * corresponds to swap_lock[n], and so on.
 	 */
-	sb->map = kvzalloc_node(sb->map_nr * (sizeof(*sb->map) + sizeof(spinlock_t)), flags, node);
+	const size_t map_size = round_up((sb->map_nr + 1) * sizeof(*sb->map),
+					 __alignof__(spinlock_t))
+				+ sb->map_nr * sizeof(spinlock_t);
+	sb->map = kvzalloc_node(map_size, flags, node);
 	if (!sb->map) {
 		free_percpu(sb->alloc_hint);
 		return -ENOMEM;
 	}
+	*(unsigned int *)sb->map = sb->map_nr;
+	sb->map++;

 	for (i = 0; i < sb->map_nr; i++) {
-		spinlock_t *swap_lock = sbitmap_spinlock(&sb->map[i], sb->map_nr, i);
+		spinlock_t *swap_lock = sbitmap_spinlock(sb, i);

 		spin_lock_init(swap_lock);
 	}
@@ -224,7 +238,7 @@ void sbitmap_resize(struct sbitmap *sb, unsigned int depth)
 	unsigned int i;

 	for (i = 0; i < sb->map_nr; i++)
-		sbitmap_deferred_clear(&sb->map[i], 0, 0, 0, sb->map_nr, i);
+		sbitmap_deferred_clear(sb, &sb->map[i], 0, 0, 0, sb->map_nr, i);

 	sb->depth = depth;
 	sb->map_nr = DIV_ROUND_UP(sb->depth, bits_per_word);
@@ -265,7 +279,8 @@ static int __sbitmap_get_word(unsigned long *word, unsigned long depth,
 	return nr;
 }

-static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
+static int sbitmap_find_bit_in_word(struct sbitmap *sb,
+				    struct sbitmap_word *map,
 				    unsigned int depth,
 				    unsigned int alloc_hint,
 				    bool wrap,
@@ -279,7 +294,7 @@ static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
 					alloc_hint, wrap);
 		if (nr != -1)
 			break;
-		if (!sbitmap_deferred_clear(map, depth, alloc_hint, wrap, map_nr, index))
+		if (!sbitmap_deferred_clear(sb, map, depth, alloc_hint, wrap, map_nr, index))
 			break;
 	} while (1);
@@ -296,7 +311,7 @@ static int sbitmap_find_bit(struct sbitmap *sb,
 	int nr = -1;

 	for (i = 0; i < sb->map_nr; i++) {
-		nr = sbitmap_find_bit_in_word(&sb->map[index],
+		nr = sbitmap_find_bit_in_word(sb, &sb->map[index],
 					      min_t(unsigned int,
 						    __map_depth(sb, index),
 						    depth),
@@ -602,7 +617,7 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
 		unsigned int map_depth = __map_depth(sb, index);
 		unsigned long val;

-		sbitmap_deferred_clear(map, 0, 0, 0, sb->map_nr, index);
+		sbitmap_deferred_clear(sb, map, 0, 0, 0, sb->map_nr, index);
 		val = READ_ONCE(map->word);
 		if (val == (1UL << (map_depth - 1)) - 1)
 			goto next;


@@ -128,6 +128,7 @@ ssize_t strscpy(char *dest, const char *src, size_t count)
 	if (count == 0 || WARN_ON_ONCE(count > INT_MAX))
 		return -E2BIG;

+#ifndef CONFIG_DCACHE_WORD_ACCESS
 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 	/*
 	 * If src is unaligned, don't cross a page boundary,
@@ -142,12 +143,14 @@ ssize_t strscpy(char *dest, const char *src, size_t count)
 	/* If src or dest is unaligned, don't do word-at-a-time. */
 	if (((long) dest | (long) src) & (sizeof(long) - 1))
 		max = 0;
+#endif
 #endif

 	/*
-	 * read_word_at_a_time() below may read uninitialized bytes after the
-	 * trailing zero and use them in comparisons. Disable this optimization
-	 * under KMSAN to prevent false positive reports.
+	 * load_unaligned_zeropad() or read_word_at_a_time() below may read
+	 * uninitialized bytes after the trailing zero and use them in
+	 * comparisons. Disable this optimization under KMSAN to prevent
+	 * false positive reports.
 	 */
 	if (IS_ENABLED(CONFIG_KMSAN))
 		max = 0;
@@ -155,7 +158,11 @@ ssize_t strscpy(char *dest, const char *src, size_t count)
 	while (max >= sizeof(unsigned long)) {
 		unsigned long c, data;

+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		c = load_unaligned_zeropad(src+res);
+#else
 		c = read_word_at_a_time(src+res);
+#endif
 		if (has_zero(c, &data, &constants)) {
 			data = prep_zero_mask(c, data, &constants);
 			data = create_zero_mask(data);


@@ -225,6 +225,26 @@
                     "include-filter": "kselftest_x86_test_mremap_vdso"
                 }
             ]
+        },
+        {
+            "name": "CtsJobSchedulerTestCases",
+            "options": [
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testCellularConstraintExecutedAndStopped"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testConnectivityConstraintExecutes_transitionNetworks"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testConnectivityConstraintExecutes_withMobile"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testEJMeteredConstraintFails_withMobile_DataSaverOn"
+                },
+                {
+                    "include-filter": "android.jobscheduler.cts.ConnectivityConstraintTest#testMeteredConstraintFails_withMobile_DataSaverOn"
+                }
+            ]
         }
     ],
     "kernel-presubmit": [


@@ -94,6 +94,7 @@ static void damon_test_aggregate(struct kunit *test)
 	for (ir = 0; ir < 3; ir++) {
 		r = damon_new_region(saddr[it][ir], eaddr[it][ir]);
 		r->nr_accesses = accesses[it][ir];
+		r->nr_accesses_bp = accesses[it][ir] * 10000;
 		damon_add_region(r, t);
 	}
 	it++;
@@ -147,9 +148,11 @@ static void damon_test_merge_two(struct kunit *test)
 	t = damon_new_target();
 	r = damon_new_region(0, 100);
 	r->nr_accesses = 10;
+	r->nr_accesses_bp = 100000;
 	damon_add_region(r, t);
 	r2 = damon_new_region(100, 300);
 	r2->nr_accesses = 20;
+	r2->nr_accesses_bp = 200000;
 	damon_add_region(r2, t);

 	damon_merge_two_regions(t, r, r2);
@@ -196,6 +199,7 @@ static void damon_test_merge_regions_of(struct kunit *test)
 	for (i = 0; i < ARRAY_SIZE(sa); i++) {
 		r = damon_new_region(sa[i], ea[i]);
 		r->nr_accesses = nrs[i];
+		r->nr_accesses_bp = nrs[i] * 10000;
 		damon_add_region(r, t);
 	}
@@ -297,6 +301,7 @@ static void damon_test_update_monitoring_result(struct kunit *test)
 	struct damon_region *r = damon_new_region(3, 7);

 	r->nr_accesses = 15;
+	r->nr_accesses_bp = 150000;
 	r->age = 20;

 	new_attrs = (struct damon_attrs){
@@ -341,6 +346,21 @@ static void damon_test_set_attrs(struct kunit *test)
 	KUNIT_EXPECT_EQ(test, damon_set_attrs(c, &invalid_attrs), -EINVAL);
 }

+static void damon_test_moving_sum(struct kunit *test)
+{
+	unsigned int mvsum = 50000, nomvsum = 50000, len_window = 10;
+	unsigned int new_values[] = {10000, 0, 10000, 0, 0, 0, 10000, 0, 0, 0};
+	unsigned int expects[] = {55000, 50000, 55000, 50000, 45000, 40000,
+			45000, 40000, 35000, 30000};
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(new_values); i++) {
+		mvsum = damon_moving_sum(mvsum, nomvsum, len_window,
+				new_values[i]);
+		KUNIT_EXPECT_EQ(test, mvsum, expects[i]);
+	}
+}
+
 static void damos_test_new_filter(struct kunit *test)
 {
 	struct damos_filter *filter;
@@ -425,6 +445,7 @@ static struct kunit_case damon_test_cases[] = {
 	KUNIT_CASE(damon_test_set_regions),
 	KUNIT_CASE(damon_test_update_monitoring_result),
 	KUNIT_CASE(damon_test_set_attrs),
+	KUNIT_CASE(damon_test_moving_sum),
 	KUNIT_CASE(damos_test_new_filter),
 	KUNIT_CASE(damos_test_filter_out),
 	{},


@@ -128,6 +128,7 @@ struct damon_region *damon_new_region(unsigned long start, unsigned long end)
 	region->ar.start = start;
 	region->ar.end = end;
 	region->nr_accesses = 0;
+	region->nr_accesses_bp = 0;
 	INIT_LIST_HEAD(&region->list);

 	region->age = 0;
@@ -525,6 +526,7 @@ static void damon_update_monitoring_result(struct damon_region *r,
 {
 	r->nr_accesses = damon_nr_accesses_for_new_attrs(r->nr_accesses,
 			old_attrs, new_attrs);
+	r->nr_accesses_bp = r->nr_accesses * 10000;
 	r->age = damon_age_for_new_attrs(r->age, old_attrs, new_attrs);
 }
@@ -788,12 +790,13 @@ static void damon_split_region_at(struct damon_target *t,
 static bool __damos_valid_target(struct damon_region *r, struct damos *s)
 {
 	unsigned long sz;
+	unsigned int nr_accesses = r->nr_accesses_bp / 10000;

 	sz = damon_sz_region(r);
 	return s->pattern.min_sz_region <= sz &&
 		sz <= s->pattern.max_sz_region &&
-		s->pattern.min_nr_accesses <= r->nr_accesses &&
-		r->nr_accesses <= s->pattern.max_nr_accesses &&
+		s->pattern.min_nr_accesses <= nr_accesses &&
+		nr_accesses <= s->pattern.max_nr_accesses &&
 		s->pattern.min_age_region <= r->age &&
 		r->age <= s->pattern.max_age_region;
 }
@@ -948,6 +951,33 @@ static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
 	struct timespec64 begin, end;
 	unsigned long sz_applied = 0;
 	int err = 0;
+	/*
+	 * We plan to support multiple context per kdamond, as DAMON sysfs
+	 * implies with 'nr_contexts' file.  Nevertheless, only single context
+	 * per kdamond is supported for now.  So, we can simply use '0' context
+	 * index here.
+	 */
+	unsigned int cidx = 0;
+	struct damos *siter;		/* schemes iterator */
+	unsigned int sidx = 0;
+	struct damon_target *titer;	/* targets iterator */
+	unsigned int tidx = 0;
+	bool do_trace = false;
+
+	/* get indices for trace_damos_before_apply() */
+	if (trace_damos_before_apply_enabled()) {
+		damon_for_each_scheme(siter, c) {
+			if (siter == s)
+				break;
+			sidx++;
+		}
+		damon_for_each_target(titer, c) {
+			if (titer == t)
+				break;
+			tidx++;
+		}
+		do_trace = true;
+	}

 	if (c->ops.apply_scheme) {
 		if (quota->esz && quota->charged_sz + sz > quota->esz) {
@@ -962,8 +992,11 @@ static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
 		ktime_get_coarse_ts64(&begin);
 		if (c->callback.before_damos_apply)
 			err = c->callback.before_damos_apply(c, t, r, s);
-		if (!err)
+		if (!err) {
+			trace_damos_before_apply(cidx, sidx, tidx, r,
+					damon_nr_regions(t), do_trace);
 			sz_applied = c->ops.apply_scheme(c, t, r, s);
+		}
 		ktime_get_coarse_ts64(&end);
 		quota->total_charged_ns += timespec64_to_ns(&end) -
 			timespec64_to_ns(&begin);
@@ -1127,6 +1160,7 @@ static void damon_merge_two_regions(struct damon_target *t,
 	l->nr_accesses = (l->nr_accesses * sz_l + r->nr_accesses * sz_r) /
 			(sz_l + sz_r);
+	l->nr_accesses_bp = l->nr_accesses * 10000;
 	l->age = (l->age * sz_l + r->age * sz_r) / (sz_l + sz_r);
 	l->ar.end = r->ar.end;
 	damon_destroy_region(r, t);
@@ -1216,6 +1250,7 @@ static void damon_split_region_at(struct damon_target *t,
 	new->age = r->age;
 	new->last_nr_accesses = r->last_nr_accesses;
 	new->nr_accesses = r->nr_accesses;
+	new->nr_accesses_bp = r->nr_accesses_bp;

 	damon_insert_region(new, r, damon_next_region(r), t);
 }
@@ -1597,6 +1632,76 @@ int damon_set_region_biggest_system_ram_default(struct damon_target *t,
 	return damon_set_regions(t, &addr_range, 1);
 }

+/*
+ * damon_moving_sum() - Calculate an inferred moving sum value.
+ * @mvsum:	Inferred sum of the last @len_window values.
+ * @nomvsum:	Non-moving sum of the last discrete @len_window window values.
+ * @len_window:	The number of last values to take care of.
+ * @new_value:	New value that will be added to the pseudo moving sum.
+ *
+ * Moving sum (moving average * window size) is good for handling noise, but
+ * the cost of keeping past values can be high for arbitrary window size.  This
+ * function implements a lightweight pseudo moving sum function that doesn't
+ * keep the past window values.
+ *
+ * It simply assumes there was no noise in the past, and get the no-noise
+ * assumed past value to drop from @nomvsum and @len_window.  @nomvsum is a
+ * non-moving sum of the last window.  For example, if @len_window is 10 and we
+ * have 25 values, @nomvsum is the sum of the 11th to 20th values of the 25
+ * values.  Hence, this function simply drops @nomvsum / @len_window from
+ * given @mvsum and add @new_value.
+ *
+ * For example, if @len_window is 10 and @nomvsum is 50, the last 10 values for
+ * the last window could be vary, e.g., 0, 10, 0, 10, 0, 10, 0, 0, 0, 20.  For
+ * calculating next moving sum with a new value, we should drop 0 from 50 and
+ * add the new value.  However, this function assumes it got value 5 for each
+ * of the last ten times.  Based on the assumption, when the next value is
+ * measured, it drops the assumed past value, 5 from the current sum, and add
+ * the new value to get the updated pseudo-moving average.
+ *
+ * This means the value could have errors, but the errors will be disappeared
+ * for every @len_window aligned calls.  For example, if @len_window is 10, the
+ * pseudo moving sum with 11th value to 19th value would have an error.  But
+ * the sum with 20th value will not have the error.
+ *
+ * Return: Pseudo-moving average after getting the @new_value.
+ */
+static unsigned int damon_moving_sum(unsigned int mvsum, unsigned int nomvsum,
+		unsigned int len_window, unsigned int new_value)
+{
+	return mvsum - nomvsum / len_window + new_value;
+}
+
+/**
+ * damon_update_region_access_rate() - Update the access rate of a region.
+ * @r:		The DAMON region to update for its access check result.
+ * @accessed:	Whether the region has accessed during last sampling interval.
+ * @attrs:	The damon_attrs of the DAMON context.
+ *
+ * Update the access rate of a region with the region's last sampling interval
+ * access check result.
+ *
+ * Usually this will be called by &damon_operations->check_accesses callback.
+ */
+void damon_update_region_access_rate(struct damon_region *r, bool accessed,
+		struct damon_attrs *attrs)
+{
+	unsigned int len_window = 1;
+
+	/*
+	 * sample_interval can be zero, but cannot be larger than
+	 * aggr_interval, owing to validation of damon_set_attrs().
+	 */
+	if (attrs->sample_interval)
+		len_window = attrs->aggr_interval / attrs->sample_interval;
+	r->nr_accesses_bp = damon_moving_sum(r->nr_accesses_bp,
+			r->last_nr_accesses * 10000, len_window,
+			accessed ? 10000 : 0);
+
+	if (accessed)
+		r->nr_accesses++;
+}
+
 static int __init damon_init(void)
 {
 	damon_region_cache = KMEM_CACHE(damon_region, 0);
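
As a quick sanity check of damon_moving_sum() against the first step of damon_test_moving_sum() in the core-test.h hunk above: with mvsum = 50000, nomvsum = 50000, len_window = 10 and new_value = 10000,

 50000 - 50000 / 10 + 10000 = 55000

which is exactly expects[0]. Feeding the next value (0) drops another 5000 and adds nothing, giving 50000, i.e. expects[1].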


@@ -148,7 +148,8 @@ out:
 	return accessed;
 }

-static void __damon_pa_check_access(struct damon_region *r)
+static void __damon_pa_check_access(struct damon_region *r,
+		struct damon_attrs *attrs)
 {
 	static unsigned long last_addr;
 	static unsigned long last_folio_sz = PAGE_SIZE;
@@ -157,14 +158,12 @@ static void __damon_pa_check_access(struct damon_region *r)
 	/* If the region is in the last checked page, reuse the result */
 	if (ALIGN_DOWN(last_addr, last_folio_sz) ==
 				ALIGN_DOWN(r->sampling_addr, last_folio_sz)) {
-		if (last_accessed)
-			r->nr_accesses++;
+		damon_update_region_access_rate(r, last_accessed, attrs);
 		return;
 	}

 	last_accessed = damon_pa_young(r->sampling_addr, &last_folio_sz);
-	if (last_accessed)
-		r->nr_accesses++;
+	damon_update_region_access_rate(r, last_accessed, attrs);

 	last_addr = r->sampling_addr;
 }
@@ -177,7 +176,7 @@ static unsigned int damon_pa_check_accesses(struct damon_ctx *ctx)

 	damon_for_each_target(t, ctx) {
 		damon_for_each_region(r, t) {
-			__damon_pa_check_access(r);
+			__damon_pa_check_access(r, &ctx->attrs);
 			max_nr_accesses = max(r->nr_accesses, max_nr_accesses);
 		}
 	}


@@ -31,7 +31,7 @@ static struct damon_sysfs_scheme_region *damon_sysfs_scheme_region_alloc(
 		return NULL;
 	sysfs_region->kobj = (struct kobject){};
 	sysfs_region->ar = region->ar;
-	sysfs_region->nr_accesses = region->nr_accesses;
+	sysfs_region->nr_accesses = region->nr_accesses_bp / 10000;
 	sysfs_region->age = region->age;
 	INIT_LIST_HEAD(&sysfs_region->list);
 	return sysfs_region;


@@ -560,23 +560,27 @@ static bool damon_va_young(struct mm_struct *mm, unsigned long addr,
  * r	the region to be checked
  */
 static void __damon_va_check_access(struct mm_struct *mm,
-				struct damon_region *r, bool same_target)
+				struct damon_region *r, bool same_target,
+				struct damon_attrs *attrs)
 {
 	static unsigned long last_addr;
 	static unsigned long last_folio_sz = PAGE_SIZE;
 	static bool last_accessed;

+	if (!mm) {
+		damon_update_region_access_rate(r, false, attrs);
+		return;
+	}
+
 	/* If the region is in the last checked page, reuse the result */
 	if (same_target && (ALIGN_DOWN(last_addr, last_folio_sz) ==
 				ALIGN_DOWN(r->sampling_addr, last_folio_sz))) {
-		if (last_accessed)
-			r->nr_accesses++;
+		damon_update_region_access_rate(r, last_accessed, attrs);
 		return;
 	}

 	last_accessed = damon_va_young(mm, r->sampling_addr, &last_folio_sz);
-	if (last_accessed)
-		r->nr_accesses++;
+	damon_update_region_access_rate(r, last_accessed, attrs);

 	last_addr = r->sampling_addr;
 }
@@ -591,15 +595,15 @@ static unsigned int damon_va_check_accesses(struct damon_ctx *ctx)

 	damon_for_each_target(t, ctx) {
 		mm = damon_get_mm(t);
-		if (!mm)
-			continue;
 		same_target = false;
 		damon_for_each_region(r, t) {
-			__damon_va_check_access(mm, r, same_target);
+			__damon_va_check_access(mm, r, same_target,
+					&ctx->attrs);
 			max_nr_accesses = max(r->nr_accesses, max_nr_accesses);
 			same_target = true;
 		}
-		mmput(mm);
+		if (mm)
+			mmput(mm);
 	}

 	return max_nr_accesses;
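
One behavioral consequence of the vaddr.c hunks, stated here as my reading of the code rather than anything the patch itself spells out: a target whose mm has gone away is no longer skipped, so its regions keep receiving idle samples and their nr_accesses_bp keeps decaying instead of freezing at the last measured value. Roughly:

 /* damon_get_mm() returned NULL for this target: */
 __damon_va_check_access(NULL, r, same_target, &ctx->attrs);
 	/* -> damon_update_region_access_rate(r, false, attrs), which
 	 * subtracts last_nr_accesses * 10000 / len_window from
 	 * r->nr_accesses_bp and adds 0; r->nr_accesses is untouched. */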


@@ -32,8 +32,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
 }
 early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);

-bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
-		      int migratetype)
+bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order)
 {
 	if (order >= debug_guardpage_minorder())
 		return false;
@@ -41,19 +40,12 @@ bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
 	__SetPageGuard(page);
 	INIT_LIST_HEAD(&page->buddy_list);
 	set_page_private(page, order);

-	/* Guard pages are not available for any usage */
-	if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, -(1 << order), migratetype);

 	return true;
 }

-void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order,
-			int migratetype)
+void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order)
 {
 	__ClearPageGuard(page);
 	set_page_private(page, 0);

-	if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, (1 << order), migratetype);
 }


@@ -43,6 +43,7 @@
 #include <linux/psi.h>
 #include <linux/ramfs.h>
 #include <linux/page_idle.h>
+#include <linux/page_size_compat.h>
 #include <linux/migrate.h>
 #include <linux/pipe_fs_i.h>
 #include <linux/splice.h>
@@ -3727,6 +3728,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 		last_pgoff = xas.xa_index;
 		end = folio->index + folio_nr_pages(folio) - 1;
 		nr_pages = min(end, end_pgoff) - xas.xa_index + 1;
+		trace_android_vh_filemap_pages(folio);

 		if (!folio_test_large(folio))
 			ret |= filemap_map_order0_folio(vmf,
@@ -4373,6 +4375,17 @@ resched:
 		}
 	}
 	rcu_read_unlock();
+
+	/* Adjust the counts if emulating the page size */
+	if (__PAGE_SIZE > PAGE_SIZE) {
+		unsigned int nr_sub_pages = __PAGE_SIZE / PAGE_SIZE;
+
+		cs->nr_cache /= nr_sub_pages;
+		cs->nr_dirty /= nr_sub_pages;
+		cs->nr_writeback /= nr_sub_pages;
+		cs->nr_evicted /= nr_sub_pages;
+		cs->nr_recently_evicted /= nr_sub_pages;
+	}
 }

 /*
/* /*

View File

@@ -735,10 +735,6 @@ extern void *memmap_alloc(phys_addr_t size, phys_addr_t align,
 void memmap_init_range(unsigned long, int, unsigned long, unsigned long,
 		unsigned long, enum meminit_context, struct vmem_altmap *, int);

-int split_free_page(struct page *free_page,
-			unsigned int order, unsigned long split_pfn_offset);
-
 #if defined CONFIG_COMPACTION || defined CONFIG_CMA

 /*
@@ -1216,11 +1212,6 @@ static inline bool is_migrate_highatomic(enum migratetype migratetype)
 	return migratetype == MIGRATE_HIGHATOMIC;
 }

-static inline bool is_migrate_highatomic_page(struct page *page)
-{
-	return get_pageblock_migratetype(page) == MIGRATE_HIGHATOMIC;
-}
-
 void setup_zone_pageset(struct zone *zone);

 struct migration_target_control {


@@ -1043,6 +1043,7 @@ static void kasan_memcmp(struct kunit *test)
 static void kasan_strings(struct kunit *test)
 {
         char *ptr;
+        char *src;
         size_t size = 24;

         /*
@@ -1054,6 +1055,25 @@ static void kasan_strings(struct kunit *test)
         ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

+        src = kmalloc(KASAN_GRANULE_SIZE, GFP_KERNEL | __GFP_ZERO);
+        strscpy(src, "f0cacc1a0000000", KASAN_GRANULE_SIZE);
+
+        /*
+         * Make sure that strscpy() does not trigger KASAN if it overreads into
+         * poisoned memory.
+         *
+         * The expected size does not include the terminator '\0'
+         * so it is (KASAN_GRANULE_SIZE - 2) ==
+         * KASAN_GRANULE_SIZE - ("initial removed character" + "\0").
+         */
+        KUNIT_EXPECT_EQ(test, KASAN_GRANULE_SIZE - 2,
+                        strscpy(ptr, src + 1, KASAN_GRANULE_SIZE));
+
+        /* strscpy should fail if the first byte is unreadable. */
+        KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
+                                              KASAN_GRANULE_SIZE));
+
+        kfree(src);
         kfree(ptr);

         /*
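The new assertions lean on strscpy()'s return contract: the number of characters copied, excluding the terminating NUL, or -E2BIG if the destination was too small. Worked through with the generic-KASAN granule of 8 bytes (the arithmetic is identical for larger granules, since the source literal always fills the granule-sized buffer):

/*
 * src = kmalloc(8); strscpy(src, "f0cacc1a0000000", 8)
 *   -> src holds "f0cacc1" + '\0' (truncated to fill the granule)
 *
 * strscpy(ptr, src + 1, 8)
 *   -> copies "0cacc1" + '\0', returns 6 == KASAN_GRANULE_SIZE - 2
 *
 * strscpy(ptr, src + 8, 8)
 *   -> the first byte read is past the allocation: KASAN must report it
 */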


@@ -1544,6 +1544,14 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
 SYSCALL_DEFINE3(madvise, unsigned long, start, size_t, len_in, int, behavior)
 {
+        bool bypass = false;
+        int ret;
+
+        trace_android_rvh_do_madvise_bypass(current->mm, start,
+                        len_in, behavior, &ret, &bypass);
+        if (bypass)
+                return ret;
+
         return do_madvise(current->mm, start, len_in, behavior);
 }
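trace_android_rvh_* denotes a restricted vendor hook: a single handler, attached at module load, never detached, and invoked synchronously in the syscall path. A hypothetical vendor-side handler, sketched under the assumption that the hook declaration lives in the usual trace/hooks/ header; oem_should_skip() is a placeholder for whatever policy the vendor applies:

#include <linux/module.h>
#include <trace/hooks/mm.h>     /* assumed location of the hook decl */

/* Placeholder policy supplied by the vendor module. */
static bool oem_should_skip(struct mm_struct *mm, int behavior)
{
        return false;
}

static void oem_madvise_bypass(void *unused, struct mm_struct *mm,
                               unsigned long start, size_t len_in,
                               int behavior, int *ret, bool *bypass)
{
        if (oem_should_skip(mm, behavior)) {
                *ret = 0;        /* report success to the caller... */
                *bypass = true;  /* ...and skip do_madvise() entirely */
        }
}

static int __init oem_madvise_init(void)
{
        return register_trace_android_rvh_do_madvise_bypass(oem_madvise_bypass,
                                                            NULL);
}
module_init(oem_madvise_init);
MODULE_LICENSE("GPL");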


@@ -1434,9 +1434,17 @@ void do_traversal_all_lruvec(void)
         memcg = mem_cgroup_iter(NULL, NULL, NULL);
         do {
                 struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
+                bool stop = false;

                 trace_android_vh_do_traversal_lruvec(lruvec);
+                trace_android_rvh_do_traversal_lruvec_ex(memcg, lruvec,
+                                &stop);
+                if (stop) {
+                        mem_cgroup_iter_break(NULL, memcg);
+                        break;
+                }

                 memcg = mem_cgroup_iter(NULL, memcg, NULL);
         } while (memcg);
 }
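The stop flag exists because a mem_cgroup_iter() walk holds a reference on its current position; bailing out without mem_cgroup_iter_break() would leak it. A hypothetical handler that ends the walk early once it has seen enough:

/* Hypothetical handler: stop after inspecting a fixed number of
 * lruvecs per walk (counter and limit are illustrative state). */
static atomic_t oem_visited = ATOMIC_INIT(0);
#define OEM_LRUVEC_LIMIT 32

static void oem_traversal_ex(void *unused, struct mem_cgroup *memcg,
                             struct lruvec *lruvec, bool *stop)
{
        if (atomic_inc_return(&oem_visited) >= OEM_LRUVEC_LIMIT)
                *stop = true;
}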

mm/mmap.c

@@ -2703,14 +2703,14 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
         return do_vmi_munmap(&vmi, mm, start, len, uf, false);
 }

-static unsigned long __mmap_region(struct file *file, unsigned long addr,
+unsigned long mmap_region(struct file *file, unsigned long addr,
                 unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
                 struct list_head *uf)
 {
         struct mm_struct *mm = current->mm;
         struct vm_area_struct *vma = NULL;
         struct vm_area_struct *next, *prev, *merge;
-        pgoff_t pglen = PHYS_PFN(len);
+        pgoff_t pglen = len >> PAGE_SHIFT;
         unsigned long charged = 0;
         unsigned long end = addr + len;
         unsigned long merge_start = addr, merge_end = end;
@@ -2810,26 +2810,25 @@ cannot_expand:
         vma->vm_page_prot = vm_get_page_prot(vm_flags);
         vma->vm_pgoff = pgoff;

-        if (vma_iter_prealloc(&vmi, vma)) {
-                error = -ENOMEM;
-                goto free_vma;
-        }
-
         if (file) {
+                if (vm_flags & VM_SHARED) {
+                        error = mapping_map_writable(file->f_mapping);
+                        if (error)
+                                goto free_vma;
+                }
+
                 vma->vm_file = get_file(file);
                 error = mmap_file(file, vma);
                 if (error)
-                        goto unmap_and_free_file_vma;
-
-                /* Drivers cannot alter the address of the VMA. */
-                WARN_ON_ONCE(addr != vma->vm_start);
+                        goto unmap_and_free_vma;

                 /*
-                 * Drivers should not permit writability when previously it was
-                 * disallowed.
+                 * Expansion is handled above, merging is handled below.
+                 * Drivers should not alter the address of the VMA.
                  */
-                VM_WARN_ON_ONCE(vm_flags != vma->vm_flags &&
-                                !(vm_flags & VM_MAYWRITE) &&
-                                (vma->vm_flags & VM_MAYWRITE));
+                error = -EINVAL;
+                if (WARN_ON((addr != vma->vm_start)))
+                        goto close_and_free_vma;

                 vma_iter_config(&vmi, addr, end);
                 /*
@@ -2841,7 +2840,6 @@ cannot_expand:
                                 vma->vm_end, vma->vm_flags, NULL,
                                 vma->vm_file, vma->vm_pgoff, NULL,
                                 NULL_VM_UFFD_CTX, NULL);
-
                 if (merge) {
                         /*
                          * ->mmap() can change vma->vm_file and fput
@@ -2855,7 +2853,7 @@ cannot_expand:
                         vma = merge;
                         /* Update vm_flags to pick up the change. */
                         vm_flags = vma->vm_flags;
-                        goto file_expanded;
+                        goto unmap_writable;
                 }
         }

@@ -2863,15 +2861,24 @@ cannot_expand:
         } else if (vm_flags & VM_SHARED) {
                 error = shmem_zero_setup(vma);
                 if (error)
-                        goto free_iter_vma;
+                        goto free_vma;
         } else {
                 vma_set_anonymous(vma);
         }

-#ifdef CONFIG_SPARC64
-        /* TODO: Fix SPARC ADI! */
-        WARN_ON_ONCE(!arch_validate_flags(vm_flags));
-#endif
+        if (map_deny_write_exec(vma->vm_flags, vma->vm_flags)) {
+                error = -EACCES;
+                goto close_and_free_vma;
+        }
+
+        /* Allow architectures to sanity-check the vm_flags */
+        error = -EINVAL;
+        if (!arch_validate_flags(vma->vm_flags))
+                goto close_and_free_vma;
+
+        error = -ENOMEM;
+        if (vma_iter_prealloc(&vmi, vma))
+                goto close_and_free_vma;

         /* Lock the VMA since it is modified after insertion into VMA tree */
         vma_start_write(vma);
@@ -2894,7 +2901,10 @@ cannot_expand:
          */
         khugepaged_enter_vma(vma, vma->vm_flags);

-file_expanded:
+        /* Once vma denies write, undo our temporary denial count */
+unmap_writable:
+        if (file && vm_flags & VM_SHARED)
+                mapping_unmap_writable(file->f_mapping);
         file = vma->vm_file;
         ksm_add_vma(vma);
 expanded:
@@ -2926,60 +2936,33 @@ expanded:
         trace_android_vh_mmap_region(vma, addr);
+        validate_mm(mm);
         return addr;

-unmap_and_free_file_vma:
-        fput(vma->vm_file);
-        vma->vm_file = NULL;
-
-        vma_iter_set(&vmi, vma->vm_end);
-        /* Undo any partial mapping done by a device driver. */
-        unmap_region(mm, &vmi.mas, vma, prev, next, vma->vm_start,
-                     vma->vm_end, vma->vm_end, true);
-free_iter_vma:
-        vma_iter_free(&vmi);
+close_and_free_vma:
+        vma_close(vma);
+
+        if (file || vma->vm_file) {
+unmap_and_free_vma:
+                fput(vma->vm_file);
+                vma->vm_file = NULL;
+
+                vma_iter_set(&vmi, vma->vm_end);
+                /* Undo any partial mapping done by a device driver. */
+                unmap_region(mm, &vmi.mas, vma, prev, next, vma->vm_start,
+                             vma->vm_end, vma->vm_end, true);
+        }
+        if (file && (vm_flags & VM_SHARED))
+                mapping_unmap_writable(file->f_mapping);
 free_vma:
         vm_area_free(vma);
 unacct_error:
         if (charged)
                 vm_unacct_memory(charged);
+        validate_mm(mm);
         return error;
 }

-unsigned long mmap_region(struct file *file, unsigned long addr,
-                unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
-                struct list_head *uf)
-{
-        unsigned long ret;
-        bool writable_file_mapping = false;
-
-        /* Check to see if MDWE is applicable. */
-        if (map_deny_write_exec(vm_flags, vm_flags))
-                return -EACCES;
-
-        /* Allow architectures to sanity-check the vm_flags. */
-        if (!arch_validate_flags(vm_flags))
-                return -EINVAL;
-
-        /* Map writable and ensure this isn't a sealed memfd. */
-        if (file && (vm_flags & VM_SHARED)) {
-                int error = mapping_map_writable(file->f_mapping);
-
-                if (error)
-                        return error;
-
-                writable_file_mapping = true;
-        }
-
-        ret = __mmap_region(file, addr, len, vm_flags, pgoff, uf);
-
-        /* Clear our write mapping regardless of error. */
-        if (writable_file_mapping)
-                mapping_unmap_writable(file->f_mapping);
-
-        validate_mm(current->mm);
-        return ret;
-}
-
 static int __vm_munmap(unsigned long start, size_t len, bool unlock)
 {
         int ret;
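This hunk restores the pre-split mmap_region(): the MDWE and arch_validate_flags() checks move back inside the function, the __mmap_region() wrapper is dropped, and the close_and_free_vma/unmap_and_free_vma unwind returns. The MDWE predicate used here takes the old and new flag sets; a sketch of its semantics, paraphrased from the mainline helper rather than quoted from this tree:

/* Sketch of map_deny_write_exec() semantics with PR_SET_MDWE active:
 * refuse W^X-violating mappings and refuse adding exec to a mapping
 * that did not have it. */
static bool mdwe_denies(unsigned long old_flags, unsigned long new_flags)
{
        if (!test_bit(MMF_HAS_MDWE, &current->mm->flags))
                return false;

        if ((new_flags & VM_WRITE) && (new_flags & VM_EXEC))
                return true;    /* simultaneously writable and executable */

        if (!(old_flags & VM_EXEC) && (new_flags & VM_EXEC))
                return true;    /* making a non-exec mapping executable */

        return false;
}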

(File diff suppressed because it is too large.)


@@ -179,15 +179,11 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
         unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
                         migratetype, isol_flags);
         if (!unmovable) {
-                unsigned long nr_pages;
-                int mt = get_pageblock_migratetype(page);
-
-                set_pageblock_migratetype(page, MIGRATE_ISOLATE);
+                if (!move_freepages_block_isolate(zone, page, MIGRATE_ISOLATE)) {
+                        spin_unlock_irqrestore(&zone->lock, flags);
+                        return -EBUSY;
+                }
                 zone->nr_isolate_pageblock++;
-                nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE,
-                                NULL);
-                __mod_zone_freepage_state(zone, -nr_pages, mt);
-
                 spin_unlock_irqrestore(&zone->lock, flags);
                 return 0;
         }
@@ -207,7 +203,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 static void unset_migratetype_isolate(struct page *page, int migratetype)
 {
         struct zone *zone;
-        unsigned long flags, nr_pages;
+        unsigned long flags;
         bool isolated_page = false;
         unsigned int order;
         struct page *buddy;
@@ -253,12 +249,15 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
          * allocation.
          */
         if (!isolated_page) {
-                nr_pages = move_freepages_block(zone, page, migratetype, NULL);
-                __mod_zone_freepage_state(zone, nr_pages, migratetype);
-        }
-        set_pageblock_migratetype(page, migratetype);
-        if (isolated_page)
+                /*
+                 * Isolating this block already succeeded, so this
+                 * should not fail on zone boundaries.
+                 */
+                WARN_ON_ONCE(!move_freepages_block_isolate(zone, page, migratetype));
+        } else {
+                set_pageblock_migratetype(page, migratetype);
                 __putback_isolated_page(page, order, migratetype);
+        }
         zone->nr_isolate_pageblock--;
 out:
         spin_unlock_irqrestore(&zone->lock, flags);
@@ -367,26 +366,29 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
                 VM_BUG_ON(!page);
                 pfn = page_to_pfn(page);
-                /*
-                 * start_pfn is MAX_ORDER_NR_PAGES aligned, if there is any
-                 * free pages in [start_pfn, boundary_pfn), its head page will
-                 * always be in the range.
-                 */
+
                 if (PageBuddy(page)) {
                         int order = buddy_order(page);

-                        if (pfn + (1UL << order) > boundary_pfn) {
-                                /* free page changed before split, check it again */
-                                if (split_free_page(page, order, boundary_pfn - pfn))
-                                        continue;
-                        }
+                        /* move_freepages_block_isolate() handled this */
+                        VM_WARN_ON_ONCE(pfn + (1 << order) > boundary_pfn);

                         pfn += 1UL << order;
                         continue;
                 }
+
                 /*
-                 * migrate compound pages then let the free page handling code
-                 * above do the rest. If migration is not possible, just fail.
+                 * If a compound page is straddling our block, attempt
+                 * to migrate it out of the way.
+                 *
+                 * We don't have to worry about this creating a large
+                 * free page that straddles into our block: gigantic
+                 * pages are freed as order-0 chunks, and LRU pages
+                 * (currently) do not exceed pageblock_order.
+                 *
+                 * The block of interest has already been marked
+                 * MIGRATE_ISOLATE above, so when migration is done it
+                 * will free its pages onto the correct freelists.
                  */
                 if (PageCompound(page)) {
                         struct page *head = compound_head(page);
@@ -397,16 +399,10 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
                                 pfn = head_pfn + nr_pages;
                                 continue;
                         }
+
 #if defined CONFIG_COMPACTION || defined CONFIG_CMA
-                        /*
-                         * hugetlb, lru compound (THP), and movable compound pages
-                         * can be migrated. Otherwise, fail the isolation.
-                         */
-                        if (PageHuge(page) || PageLRU(page) || __PageMovable(page)) {
-                                int order;
-                                unsigned long outer_pfn;
+                        if (PageHuge(page)) {
                                 int page_mt = get_pageblock_migratetype(page);
-                                bool isolate_page = !is_migrate_isolate_page(page);
                                 struct compact_control cc = {
                                         .nr_migratepages = 0,
                                         .order = -1,
@@ -419,56 +415,26 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
                                 };
                                 INIT_LIST_HEAD(&cc.migratepages);

-                                /*
-                                 * XXX: mark the page as MIGRATE_ISOLATE so that
-                                 * no one else can grab the freed page after migration.
-                                 * Ideally, the page should be freed as two separate
-                                 * pages to be added into separate migratetype free
-                                 * lists.
-                                 */
-                                if (isolate_page) {
-                                        ret = set_migratetype_isolate(page, page_mt,
-                                                flags, head_pfn, head_pfn + nr_pages);
-                                        if (ret)
-                                                goto failed;
-                                }
-
                                 ret = __alloc_contig_migrate_range(&cc, head_pfn,
                                                 head_pfn + nr_pages, page_mt);
-
-                                /*
-                                 * restore the page's migratetype so that it can
-                                 * be split into separate migratetype free lists
-                                 * later.
-                                 */
-                                if (isolate_page)
-                                        unset_migratetype_isolate(page, page_mt);
-
                                 if (ret)
                                         goto failed;
-                                /*
-                                 * reset pfn to the head of the free page, so
-                                 * that the free page handling code above can split
-                                 * the free page to the right migratetype list.
-                                 *
-                                 * head_pfn is not used here as a hugetlb page order
-                                 * can be bigger than MAX_ORDER, but after it is
-                                 * freed, the free page order is not. Use pfn within
-                                 * the range to find the head of the free page.
-                                 */
-                                order = 0;
-                                outer_pfn = pfn;
-                                while (!PageBuddy(pfn_to_page(outer_pfn))) {
-                                        /* stop if we cannot find the free page */
-                                        if (++order > MAX_ORDER)
-                                                goto failed;
-                                        outer_pfn &= ~0UL << order;
-                                }
-                                pfn = outer_pfn;
+
+                                pfn = head_pfn + nr_pages;
                                 continue;
-                        } else
+                        }
+
+                        /*
+                         * These pages are movable too, but they're
+                         * not expected to exceed pageblock_order.
+                         *
+                         * Let us know when they do, so we can add
+                         * proper free and split handling for them.
+                         */
+                        VM_WARN_ON_ONCE_PAGE(PageLRU(page), page);
+                        VM_WARN_ON_ONCE_PAGE(__PageMovable(page), page);
 #endif
-                                goto failed;
+                        goto failed;
                 }

                 pfn++;
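The simplifications above rest on the contract of move_freepages_block_isolate(): it moves an entire pageblock's free pages to (or from) MIGRATE_ISOLATE under zone->lock, splitting free pages that straddle block boundaries internally, and returns false only when the block cannot be safely claimed (for instance when it straddles a zone boundary). A usage sketch mirroring the error handling in set_migratetype_isolate():

/* Usage sketch: claim one pageblock for isolation. */
static int isolate_one_block(struct zone *zone, struct page *page)
{
        unsigned long flags;
        int ret = 0;

        spin_lock_irqsave(&zone->lock, flags);
        if (move_freepages_block_isolate(zone, page, MIGRATE_ISOLATE))
                zone->nr_isolate_pageblock++;
        else
                ret = -EBUSY;   /* block could not be claimed */
        spin_unlock_irqrestore(&zone->lock, flags);

        return ret;
}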


@@ -271,10 +271,10 @@ static const struct vm_operations_struct pad_vma_ops = {
 };

 /*
- * Returns a new VMA representing the padding in @vma, if no padding
- * in @vma returns NULL.
+ * Returns a new VMA representing the padding in @vma;
+ * returns NULL if no padding in @vma or allocation failed.
  */
-struct vm_area_struct *get_pad_vma(struct vm_area_struct *vma)
+static struct vm_area_struct *get_pad_vma(struct vm_area_struct *vma)
 {
         struct vm_area_struct *pad;

@@ -282,6 +282,10 @@ struct vm_area_struct *get_pad_vma(struct vm_area_struct *vma)
                 return NULL;

         pad = kzalloc(sizeof(struct vm_area_struct), GFP_KERNEL);
+        if (!pad) {
+                pr_warn("Page size migration: Failed to allocate padding VMA");
+                return NULL;
+        }

         memcpy(pad, vma, sizeof(struct vm_area_struct));

@@ -306,34 +310,14 @@ struct vm_area_struct *get_pad_vma(struct vm_area_struct *vma)
         return pad;
 }

-/*
- * Returns a new VMA exclusing the padding from @vma; if no padding in
- * @vma returns @vma.
- */
-struct vm_area_struct *get_data_vma(struct vm_area_struct *vma)
-{
-        struct vm_area_struct *data;
-
-        if (!is_pgsize_migration_enabled() || !(vma->vm_flags & VM_PAD_MASK))
-                return vma;
-
-        data = kzalloc(sizeof(struct vm_area_struct), GFP_KERNEL);
-
-        memcpy(data, vma, sizeof(struct vm_area_struct));
-
-        /* Adjust the end to the start of the padding section */
-        data->vm_end = VMA_PAD_START(data);
-
-        return data;
-}
-
 /*
  * Calls the show_pad_vma_fn on the @pad VMA, and frees the copies of @vma
  * and @pad.
  */
-void show_map_pad_vma(struct vm_area_struct *vma, struct vm_area_struct *pad,
-                      struct seq_file *m, void *func, bool smaps)
+void show_map_pad_vma(struct vm_area_struct *vma, struct seq_file *m,
+                      void *func, bool smaps)
 {
+        struct vm_area_struct *pad = get_pad_vma(vma);
+
         if (!pad)
                 return;

@@ -349,13 +333,21 @@ void show_map_pad_vma(struct vm_area_struct *vma, struct vm_area_struct *pad,
          */
         BUG_ON(!vma);

+        /* The pad VMA should be anonymous. */
+        BUG_ON(pad->vm_file);
+
+        /* The pad VMA should be PROT_NONE. */
+        BUG_ON(pad->vm_flags & (VM_READ|VM_WRITE|VM_EXEC));
+
+        /* The pad VMA itself cannot have padding; infinite recursion */
+        BUG_ON(pad->vm_flags & VM_PAD_MASK);
+
         if (smaps)
                 ((show_pad_smaps_fn)func)(m, pad);
         else
                 ((show_pad_maps_fn)func)(m, pad);

         kfree(pad);
-        kfree(vma);
 }

 /*
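For orientation: on 16 KiB-emulation builds, ELF segments are padded out to 16 KiB boundaries and the padding is kept at the tail of the VMA as an inaccessible region. A worked example of the geometry, with hypothetical addresses (a 20 KiB segment padded to 32 KiB on a 4 KiB-page kernel):

/*
 * Hypothetical layout:
 *
 *   vma->vm_start      = 0x7f0000000000
 *   VMA_PAD_START(vma) = 0x7f0000005000   (data ends: 20 KiB)
 *   vma->vm_end        = 0x7f0000008000   (padded to 32 KiB)
 *
 * get_pad_vma() returns a heap copy narrowed to
 * [VMA_PAD_START(vma), vm_end) with R/W/X cleared. The data part is
 * now printed from the real VMA itself rather than a copy, so only
 * @pad needs to be freed here.
 */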


@@ -1479,8 +1479,15 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
          * same address_space.
          */
         if (reclaimed && folio_is_file_lru(folio) &&
-            !mapping_exiting(mapping) && !dax_mapping(mapping))
+            !mapping_exiting(mapping) && !dax_mapping(mapping)) {
+                bool keep = false;
+
+                trace_android_vh_keep_reclaimed_folio(folio, refcount, &keep);
+                if (keep)
+                        goto cannot_free;
                 shadow = workingset_eviction(folio, target_memcg);
+        }
+        trace_android_vh_clear_reclaimed_folio(folio, reclaimed);
         __filemap_remove_folio(folio, shadow);
         xa_unlock_irq(&mapping->i_pages);
         if (mapping_shrinkable(mapping))
@@ -5354,6 +5361,12 @@ retry:
                                 type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);

                 list_for_each_entry_safe_reverse(folio, next, &list, lru) {
+                        bool bypass = false;
+
+                        trace_android_vh_evict_folios_bypass(folio, &bypass);
+                        if (bypass)
+                                continue;
+
                         if (!folio_evictable(folio)) {
                                 list_del(&folio->lru);
                                 folio_putback_lru(folio);
@@ -7549,6 +7562,7 @@ static bool kswapd_shrink_node(pg_data_t *pgdat,
                 sc->nr_to_reclaim += max(high_wmark_pages(zone), SWAP_CLUSTER_MAX);
         }

+        trace_android_rvh_kswapd_shrink_node(&sc->nr_to_reclaim);
         /*
          * Historically care was taken to put equal pressure on all zones but
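The keep path in __remove_mapping() matters because it runs with the i_pages lock held and the folio's refcount frozen; a veto at that point must unwind through the existing cannot_free label rather than returning directly. The eviction-bypass hook is simpler, since a bypassed folio just stays on the local list. A hypothetical handler shape for it (the policy shown is illustrative):

/* Hypothetical handler: skip evicting folios a vendor policy wants
 * to retain in the page cache a little longer. */
static void oem_evict_bypass(void *unused, struct folio *folio, bool *bypass)
{
        /* Illustrative policy: never evict folios marked as workingset. */
        if (folio_test_workingset(folio))
                *bypass = true;
}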

(Some files were not shown because too many files have changed in this diff.)