off_t is a long, which was breaking things on 32-bit land. Now
buildids work on 32-bit land.
[root@ana ~]# uname -a
Linux ana.ghostprotocols.net 2.6.31.6-162.fc12.i686 #1 SMP Fri Dec 4 01:09:09 EST 2009 i686 i686 i386 GNU/Linux
[root@ana ~]# perf buildid-list | tail -5
136ee6792ba2ae57870ecd87369f4ae3194d5b27 /lib/libreadline.so.6.0
d202dcb1ad48d140065783657d37ae3f2d9ab83f /usr/bin/gdb
0a56c0c00dcc2e9e581ae9997f31957c9c4671df /usr/lib/libdwarf.so.0.0
5f9e6ac95241cbb3227608e0ff2a2e0cbbe72439 /home/acme/bin/perf
925d19eccc2ddb1c9d74dd178a011426f1b124a8 /bin/sleep
[root@ana ~]#
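For reference, a minimal userspace sketch (not part of the patch) of why
off_t trips up 32-bit builds: unless the build defines
_FILE_OFFSET_BITS=64, off_t is only 32 bits wide there, so code that
assumes a 64-bit offset type silently misbehaves; an explicit u64
sidesteps the problem.

#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>

int main(void)
{
        /* On a 32-bit build without -D_FILE_OFFSET_BITS=64 this prints 4,
         * while uint64_t (the kernel's u64) is 8 bytes everywhere. */
        printf("sizeof(off_t)    = %zu\n", sizeof(off_t));
        printf("sizeof(uint64_t) = %zu\n", sizeof(uint64_t));
        return 0;
}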
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1260396578-19116-2-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Before:
$ ./perf kmem
...
-l, --line <num> show n lines
--raw-ip show raw ip instead of symbol
After:
$ ./perf kmem
...
-l, --line <num> show n lines
--raw-ip show raw ip instead of symbol
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
LKML-Reference: <4B20A1A9.3040104@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The core ftrace code (trace_clock_local) calls sched_clock()
directly, so we don't want to recursively trigger the ftrace
code. Rather than update every sched_clock() definition, tag
the prototype for everyone as notrace.
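A rough userspace sketch of the same idea (the function names below are
illustrative; only the no_instrument_function attribute, which backs the
kernel's notrace, is real): building with gcc -finstrument-functions
routes every call through profiling hooks unless the function is tagged,
so a clock read used by the tracer must carry the tag to avoid recursing
into it.

#include <stdio.h>

#define notrace __attribute__((no_instrument_function))

/* hooks installed by -finstrument-functions; they must themselves be
 * notrace or they would recurse endlessly */
void notrace __cyg_profile_func_enter(void *fn, void *caller) { }
void notrace __cyg_profile_func_exit(void *fn, void *caller) { }

/* stand-in for sched_clock(): called from the trace path, so it must
 * never be instrumented itself */
static unsigned long long notrace toy_sched_clock(void)
{
        return 42;
}

int main(void)
{
        printf("clock = %llu\n", toy_sched_clock());
        return 0;
}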
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <1260407223-10900-1-git-send-email-vapier@gentoo.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
In the current code, a task's execution time is obtained by reading
the '/proc/<pid>/sched' file. This is wrong if the task was created
by pthread_create(), because every thread of a process has the same pid.
This approach also has two drawbacks:
1: 'perf sched replay' can't work if the kernel is not
compiled with the 'CONFIG_SCHED_DEBUG' option
2: the perf tool has to depend on the proc filesystem
So, this patch uses PERF_COUNT_SW_TASK_CLOCK to get the task's
execution time instead of reading the /proc file.
Changelog v2 -> v3:
use PERF_COUNT_SW_TASK_CLOCK instead of rusage(), as per Ingo's
suggestion
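A minimal userspace sketch of the counter this switches to (not the
perf-sched code itself; the raw syscall wrapper is spelled out because
glibc has no perf_event_open wrapper): a PERF_COUNT_SW_TASK_CLOCK event
opened on the calling thread reports that thread's execution time in
nanoseconds, with no CONFIG_SCHED_DEBUG or /proc dependency.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                int cpu, int group_fd, unsigned long flags)
{
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
        struct perf_event_attr attr;
        unsigned long long runtime_ns = 0;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_SOFTWARE;
        attr.config = PERF_COUNT_SW_TASK_CLOCK;

        /* pid = 0, cpu = -1: measure the calling thread on any cpu */
        fd = sys_perf_event_open(&attr, 0, -1, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }

        /* ... run the workload to be measured here ... */

        if (read(fd, &runtime_ns, sizeof(runtime_ns)) != sizeof(runtime_ns))
                runtime_ns = 0;
        printf("task clock: %llu ns\n", runtime_ns);
        close(fd);
        return 0;
}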
Reported-by: Torok Edwin <edwintorok@gmail.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Xiao Guangrong <ericxiao.gr@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <4B1F7322.80103@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
When we define the common event fields in kprobe, the error handling
is inverted and we return immediately in case of success. We then
never define the kprobe-specific fields (ip and nargs) and the
kretprobe-specific fields (func, ret_ip, nargs); they only get
defined when we fail to create the common fields.
The most visible consequence is that we can't create filters for
k(ret)probe-specific fields.
This patch re-inverts the success/error handling to fix it.
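A stripped-down illustration of the control-flow bug (the helper names
below are made up; this is not the kprobe code itself): the common-field
setup returned on success instead of on error, so the probe-specific
fields were only ever defined on the failure path.

#include <stdio.h>

/* stand-ins for the real field-definition helpers */
static int define_common_fields(void) { return 0; }  /* 0 on success */
static int define_kprobe_fields(void) { return 0; }  /* ip, nargs, ... */

static int kprobe_event_define_fields(void)
{
        int ret = define_common_fields();

        /* the buggy version effectively did "if (!ret) return ret;",
         * skipping the probe-specific fields whenever the common
         * fields were created successfully */
        if (ret)
                return ret;

        return define_kprobe_fields();
}

int main(void)
{
        printf("define_fields: %d\n", kprobe_event_define_fields());
        return 0;
}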
Reported-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Li Zefan <lizf@cn.fujitsu.com>
LKML-Reference: <1260263815-5167-1-git-send-regression-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Currently, when ptrace needs to modify a breakpoint, e.g. disabling
it or changing its address, type or len, it calls
modify_user_hw_breakpoint(). The latter performs the heavy and
racy task of unregistering the old breakpoint and registering a new
one.
This is racy because someone else might steal the reserved breakpoint
slot from under us, which is undesired as the breakpoint is only
supposed to be modified, sometimes in the middle of a debugging
workflow. We don't want our slot to be stolen in the middle.
So instead of unregistering/registering the breakpoint, just
disable it while we modify its fields and re-enable it
afterwards if necessary.
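The ordering can be sketched with illustrative stubs (this is not the
kernel API, just the shape of the fix): keep the event registered,
disable it, update the fields, then re-enable, so the debug-register
slot never becomes free for someone else to grab in between.

#include <stdio.h>

/* toy breakpoint; the real one is a perf_event with breakpoint attrs */
struct toy_bp {
        int enabled;
        unsigned long addr;
        int type, len;
};

static void toy_bp_disable(struct toy_bp *bp) { bp->enabled = 0; }
static void toy_bp_enable(struct toy_bp *bp)  { bp->enabled = 1; }

/* new flow: the slot stays reserved while the fields are updated,
 * unlike the old unregister + register sequence */
static void toy_modify_bp(struct toy_bp *bp, unsigned long addr,
                          int type, int len, int reenable)
{
        toy_bp_disable(bp);
        bp->addr = addr;
        bp->type = type;
        bp->len  = len;
        if (reenable)
                toy_bp_enable(bp);
}

int main(void)
{
        struct toy_bp bp = { 1, 0x1000, 0, 1 };

        toy_modify_bp(&bp, 0x2000, 1, 4, 1);
        printf("addr=%#lx enabled=%d\n", bp.addr, bp.enabled);
        return 0;
}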
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Prasad <prasad@linux.vnet.ibm.com>
LKML-Reference: <1260347148-5519-1-git-send-regression-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Whatever the context nature of a breakpoint, we always perform the
following constraint checks before allocating it a slot:
- Check the number of pinned breakpoints bound to the concerned cpus
- Check the max number of task-bound breakpoints that belong
to a task.
- Add both and see if we have a remaining slot for the new breakpoint
This is the right thing to do when we are about to register a cpu-only
bound breakpoint. But not if we are dealing with a task-bound
breakpoint. What we want in this case is:
- Check the number of pinned breakpoints bound to the concerned cpus
- Check the number of breakpoints that already belong to the task
to which the breakpoint to register is bound.
- Add both
This fixes a regression that made the "firefox -g" command fail to
register breakpoints once we deal with a secondary thread.
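The difference in the accounting can be shown with a toy check (numbers
and helper names are illustrative; HBP_NUM is 4 on x86): for a
task-bound breakpoint, what gets added to the per-cpu pinned count is
the number of breakpoints this task already owns, not the per-cpu
maximum over all tasks.

#include <stdio.h>

#define HBP_NUM 4   /* debug-register slots per cpu on x86 */

/* old, too strict: adds the worst case over all tasks on the cpu */
static int old_fits(int cpu_pinned, int max_task_pinned_on_cpu)
{
        return cpu_pinned + max_task_pinned_on_cpu + 1 <= HBP_NUM;
}

/* fixed: adds only what *this* task already owns */
static int new_fits(int cpu_pinned, int this_task_pinned)
{
        return cpu_pinned + this_task_pinned + 1 <= HBP_NUM;
}

int main(void)
{
        /* e.g. another thread already pinned 4 breakpoints, ours has 0 */
        printf("old: %d\n", old_fits(0, 4));  /* 0: wrongly rejected */
        printf("new: %d\n", new_fits(0, 0));  /* 1: slot available */
        return 0;
}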
Reported-by: Walt <w41ter@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Prasad <prasad@linux.vnet.ibm.com>
The perf attrs used to set up breakpoint parameters are often allocated
on the stack and not zeroed out before calling hw_breakpoint_init().
Handle the zeroing from this helper to avoid random attribute values
coming from the stack.
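A minimal sketch of what such a helper has to do, assuming the uapi
perf_event_attr layout (this mirrors, but is not, the kernel's
hw_breakpoint_init()): a stack-allocated attr contains garbage, so the
helper zeroes the whole struct before filling in breakpoint defaults.

#include <stdio.h>
#include <string.h>
#include <linux/perf_event.h>

static void bp_attr_init(struct perf_event_attr *attr)
{
        memset(attr, 0, sizeof(*attr));   /* wipe whatever the stack held */
        attr->type = PERF_TYPE_BREAKPOINT;
        attr->size = sizeof(*attr);
        attr->pinned = 1;                 /* breakpoints want a dedicated slot */
        attr->sample_period = 1;          /* fire on every hit */
}

int main(void)
{
        struct perf_event_attr attr;      /* on the stack, uninitialized */

        bp_attr_init(&attr);
        printf("type=%u size=%u pinned=%llu\n",
               attr.type, attr.size, (unsigned long long)attr.pinned);
        return 0;
}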
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Prasad <prasad@linux.vnet.ibm.com>
When I added the xs callbacks into perf, I forgot to re-check
the no-libperl case. This patch fixes the undefined reference
error for that.
Reported-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1260153712.6564.4.camel@tropicana>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
- util/header.c
"len" is aligned to 64, so it ends up writing past the end of the
long_name buffer. Use "zero_buf" to write the aligned (padding) area.
- util/trace-event-read.c
"size" does not include the nul byte. So allocate room for it and set '\0'.
- util/trace-event-parse.c
It needs parentheses to calculate the correct size.
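The trace-event-read.c case is the classic off-by-one; a standalone
illustration (generic code, not the actual perf function): when a length
field counts only the payload, the reader has to allocate one extra byte
and terminate the string itself.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *read_string(const char *src, size_t size)
{
        char *buf = malloc(size + 1);     /* +1: 'size' excludes the nul byte */

        if (!buf)
                return NULL;
        memcpy(buf, src, size);
        buf[size] = '\0';
        return buf;
}

int main(void)
{
        char *s = read_string("ftrace", 6);

        if (s) {
                printf("%s\n", s);
                free(s);
        }
        return 0;
}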
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <87d42s8iiu.fsf_-_@devron.myhome.or.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Currently, sample event data is parsed separately by each command,
and each parser assumes the data does not include other fields. (E.g.
timechart, trace, etc. can't parse an event if it includes
PERF_SAMPLE_CALLCHAIN.)
So, even if we record a superset of the data for multiple commands at
a time, the commands can't parse it.
To fix this, this makes a common sample event parser, and uses it to
parse sample events correctly. (PERF_SAMPLE_READ is unsupported
for now though, as it doesn't appear to be used.)
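The layout the common parser has to follow can be sketched like this (a
partial, illustrative subset of the sample fields, not the perf-tools
parser; little-endian layout assumed for the pid/tid pair): each field
is present only if its bit is set in sample_type, and the fields appear
in a fixed order, so a consumer that ignores e.g. PERF_SAMPLE_CALLCHAIN
misreads everything after it.

#include <stdio.h>
#include <stdint.h>
#include <linux/perf_event.h>

struct sample {
        uint64_t ip, time;
        uint32_t pid, tid;
};

/* walks a few of the optional fields in their on-record order */
static const uint64_t *parse_sample(const uint64_t *p, uint64_t sample_type,
                                    struct sample *s)
{
        if (sample_type & PERF_SAMPLE_IP)
                s->ip = *p++;
        if (sample_type & PERF_SAMPLE_TID) {
                const uint32_t *t = (const uint32_t *)p;
                s->pid = t[0];
                s->tid = t[1];
                p++;
        }
        if (sample_type & PERF_SAMPLE_TIME)
                s->time = *p++;
        if (sample_type & PERF_SAMPLE_CALLCHAIN) {
                uint64_t nr = *p++;       /* nr entries, then nr instruction ptrs */
                p += nr;                  /* skipping keeps later fields aligned */
        }
        return p;
}

int main(void)
{
        /* synthetic record: ip, pid/tid, time, empty callchain */
        uint64_t buf[] = { 0xdeadbeef, (2ULL << 32) | 1, 1000, 0 };
        struct sample s = { 0 };
        uint64_t type = PERF_SAMPLE_IP | PERF_SAMPLE_TID |
                        PERF_SAMPLE_TIME | PERF_SAMPLE_CALLCHAIN;

        parse_sample(buf, type, &s);
        printf("ip=%#llx pid=%u tid=%u time=%llu\n",
               (unsigned long long)s.ip, s.pid, s.tid,
               (unsigned long long)s.time);
        return 0;
}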
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <87hbs48imv.fsf@devron.myhome.or.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Fix the following warning:
arch/x86/tools/test_get_len.c: In function "main":
arch/x86/tools/test_get_len.c:116: warning: unused variable "c"
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
When we enter an irq, two things can happen to preserve the link
to the previous frame pointer:
- If we were in an irq already, we don't switch to the irq stack
as we are already on it. We just need to save the previous frame
pointer and link the new one to it.
- Otherwise we need another level of indirection. We enter the irq on
the previous stack, save the previous bp there, and make bp point
to its saved address. Then we switch to the irq stack and
push bp another time, but onto the new stack. This gives two levels to
dereference instead of one.
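The second case can be mimicked in plain C with two arrays standing in
for the stacks (purely illustrative, no real unwinder code): the word
saved at the top of the irq stack is the address of the slot that holds
the previous bp, so recovering it takes two dereferences.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uintptr_t task_stack[4], irq_stack[4];
        uintptr_t prev_bp = 0x1000;               /* bp before the irq */

        /* irq entry: save bp on the interrupted stack, then push the
         * address of that save slot onto the irq stack */
        task_stack[3] = prev_bp;
        irq_stack[3] = (uintptr_t)&task_stack[3];

        /* unwinder crossing the stack switch: one dereference reaches
         * the save slot, a second one the real previous bp */
        uintptr_t *slot = (uintptr_t *)irq_stack[3];
        printf("recovered bp = %#lx\n", (unsigned long)*slot);
        return 0;
}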
In the second case, the current stacktrace code omits the second level
and loses frame pointer accuracy. The stack that follows is then
considered unreliable.
Handling this makes the perf callchain happier.
Before:
43.94% [k] _raw_read_lock
|
--- _read_lock
|
|--60.53%-- send_sigio
| __kill_fasync
| kill_fasync
| evdev_pass_event
| evdev_event
| input_pass_event
| input_handle_event
| input_event
| synaptics_process_byte
| psmouse_handle_byte
| psmouse_interrupt
| serio_interrupt
| i8042_interrupt
| handle_IRQ_event
| handle_edge_irq
| handle_irq
| __irqentry_text_start
| ret_from_intr
| |
| |--30.43%-- __select
| |
| |--17.39%-- 0x454f15
| |
| |--13.04%-- __read
| |
| |--13.04%-- vread_hpet
| |
| |--13.04%-- _xcb_lock_io
| |
| --13.04%-- 0x7f630878ce8
After:
50.00% [k] _raw_read_lock
|
--- _read_lock
|
|--98.97%-- send_sigio
| __kill_fasync
| kill_fasync
| evdev_pass_event
| evdev_event
| input_pass_event
| input_handle_event
| input_event
| |
| |--96.88%-- synaptics_process_byte
| | psmouse_handle_byte
| | psmouse_interrupt
| | serio_interrupt
| | i8042_interrupt
| | handle_IRQ_event
| | handle_edge_irq
| | handle_irq
| | __irqentry_text_start
| | ret_from_intr
| | |
| | |--39.78%-- __const_udelay
| | | |
| | | |--91.89%-- ath5k_hw_register_timeout
| | | | ath5k_hw_noise_floor_calibration
| | | | ath5k_hw_reset
| | | | ath5k_reset
| | | | ath5k_config
| | | | ieee80211_hw_config
| | | | |
| | | | |--88.24%-- ieee80211_scan_work
| | | | | worker_thread
| | | | | kthread
| | | | | child_rip
| | | | |
| | | | --11.76%-- ieee80211_scan_completed
| | | | ieee80211_scan_work
| | | | worker_thread
| | | | kthread
| | | | child_rip
| | | |
| | | --8.11%-- ath5k_hw_noise_floor_calibration
| | | ath5k_hw_reset
| | | ath5k_reset
| | | ath5k_config
Note: this not only affects perf events but also x86-64
stacktraces. They were considered unreliable once we left
the irq stack frame.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: "K. Prasad" <prasad@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>