Updated the kernel from 2.6.32-279.1.1.el6.x86_64.debug to 2.6.32-431.20.3.el6.x86_64.debug on a test system, and now I reliably get a lockdep splat in one of my tests.

The problem also occurs on 2.6.32-431.17.1.el6.x86_64.debug and 2.6.32-431.11.2.el6.x86_64.debug.

Output is:
=============================================
[ INFO: possible recursive locking detected ]
2.6.32-431.17.1.el6.x86_64.debug #1
---------------------------------------------
python/16644 is trying to acquire lock:
(&rq->lock/1){..-...}, at: [<ffffffff8105c393>] double_lock_balance+0x53/0x90

but task is already holding lock:
(&rq->lock/1){..-...}, at: [<ffffffff8105c3bf>] double_lock_balance+0x7f/0x90

other info that might help us debug this:
1 lock held by python/16644:
#0: (&rq->lock/1){..-...}, at: [<ffffffff8105c3bf>] double_lock_balance+0x7f/0x90

stack backtrace:
Pid: 16644, comm: python Not tainted 2.6.32-431.17.1.el6.x86_64.debug #1
Call Trace:
[<ffffffff810bc540>] ? __lock_acquire+0x11b0/0x1560
[<ffffffff810b716d>] ? trace_hardirqs_off+0xd/0x10
[<ffffffff810a8acf>] ? cpu_clock+0x6f/0x80
[<ffffffff8105c3bf>] ? double_lock_balance+0x7f/0x90
[<ffffffff810bc994>] ? lock_acquire+0xa4/0x120
[<ffffffff8105c393>] ? double_lock_balance+0x53/0x90
[<ffffffff812a6709>] ? cpumask_next_and+0x29/0x50
[<ffffffff8155ea44>] ? _spin_lock_nested+0x34/0x70
[<ffffffff8105c393>] ? double_lock_balance+0x53/0x90
[<ffffffff8105cb8e>] ? find_lowest_rq+0x10e/0x150
[<ffffffff8105c393>] ? double_lock_balance+0x53/0x90
[<ffffffff8106e85e>] ? push_rt_task+0xce/0x2a0
[<ffffffff8106eb50>] ? post_schedule_rt+0x20/0x30
[<ffffffff8155b324>] ? thread_return+0x15d/0x7d9
[<ffffffff8155e880>] ? _spin_unlock_irqrestore+0x40/0x80
[<ffffffff810bab8d>] ? trace_hardirqs_on_caller+0x14d/0x190
[<ffffffff810a123e>] ? prepare_to_wait+0x4e/0x80
[<ffffffffa037124f>] ? dahdi_chan_read+0x13f/0x420 [dahdi]
[<ffffffff810bb5cd>] ? __lock_acquire+0x23d/0x1560
[<ffffffff810a0f10>] ? autoremove_wake_function+0x0/0x40
[<ffffffff810156d3>] ? native_sched_clock+0x13/0x80
[<ffffffff81014bf9>] ? sched_clock+0x9/0x10
[<ffffffff810a899d>] ? sched_clock_cpu+0xcd/0x110
[<ffffffff810b716d>] ? trace_hardirqs_off+0xd/0x10
[<ffffffff810a8acf>] ? cpu_clock+0x6f/0x80
[<ffffffff810ba28d>] ? lock_release_holdtime+0x3d/0x190
[<ffffffff81247bd6>] ? security_file_permission+0x16/0x20
[<ffffffff811a6c35>] ? vfs_read+0xb5/0x1a0
[<ffffffff811a76f6>] ? fget_light+0x66/0x100
[<ffffffff811a6d71>] ? sys_read+0x51/0x90
[<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b