0006853: clvm does not work after upgrade to 6.5

I have a 2-node cluster that uses clvm and GFS2 to provide and manage shared storage for KVM.

After upgrading from CentOS 6.4 to 6.5, clvmd (started as a cluster resource) hangs. It hangs only when both nodes are up; a single-node startup does not exhibit this behavior.

I don't see anything interesting in the system log; after some time, hung task messages appear:

Dec 18 16:46:28 virtstud01 kernel: INFO: task clvmd:7613 blocked for more than 120 seconds.
Dec 18 16:46:28 virtstud01 kernel: Not tainted 2.6.32-431.1.2.0.1.el6.x86_64 #1
Dec 18 16:46:28 virtstud01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 18 16:46:28 virtstud01 kernel: clvmd D 0000000000000001 0 7613 1 0x00000080
Dec 18 16:46:28 virtstud01 kernel: ffff88080b4bbc80 0000000000000086 0000000000000000 ffff88080b4bbc08
Dec 18 16:46:28 virtstud01 kernel: ffffffff81059216 ffff88080b4bbc18 ffff880821f82080 ffff88080b4bbc18
Dec 18 16:46:28 virtstud01 kernel: ffff88081df9e638 ffff88080b4bbfd8 000000000000fbc8 ffff88081df9e638
Dec 18 16:46:28 virtstud01 kernel: Call Trace:
Dec 18 16:46:28 virtstud01 kernel: [<ffffffff81059216>] ? enqueue_task+0x66/0x80
Dec 18 16:46:28 virtstud01 kernel: [<ffffffff81065c5e>] ? try_to_wake_up+0x24e/0x3e0
Dec 18 16:46:28 virtstud01 kernel: [<ffffffff81529f95>] rwsem_down_failed_common+0x95/0x1d0
Dec 18 16:46:28 virtstud01 kernel: [<ffffffff8152a126>] rwsem_down_read_failed+0x26/0x30
Dec 18 16:46:28 virtstud01 kernel: [<ffffffff8128e864>] call_rwsem_down_read_failed+0x14/0x30
Dec 18 16:46:28 virtstud01 kernel: [<ffffffff81529624>] ? down_read+0x24/0x30
Dec 18 16:46:28 virtstud01 kernel: [<ffffffffa039f257>] dlm_user_request+0x47/0x1b0 [dlm]
Dec 18 16:46:28 virtstud01 kernel: [<ffffffffa03abd46>] ? device_write+0x66/0x720 [dlm]
Dec 18 16:46:28 virtstud01 kernel: [<ffffffff8116f713>] ? kmem_cache_alloc_trace+0x1a3/0x1b0
Dec 18 16:46:28 virtstud01 kernel: [<ffffffffa03ac2a7>] device_write+0x5c7/0x720 [dlm]
Dec 18 16:46:28 virtstud01 kernel: [<ffffffff812263d6>] ? security_file_permission+0x16/0x20
Dec 18 16:46:28 virtstud01 kernel: [<ffffffff81188f88>] vfs_write+0xb8/0x1a0
Dec 18 16:46:28 virtstud01 kernel: [<ffffffff81189881>] sys_write+0x51/0x90
Dec 18 16:46:28 virtstud01 kernel: [<ffffffff8152ab4e>] ? do_device_not_available+0xe/0x10
Dec 18 16:46:28 virtstud01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b

With "ps auxf" I can see that the cluster startup calls the clvm init script, which in turn calls vgdisplay and some awk. Running vgdisplay by hand at this point causes that shell to hang as well.

If I terminate all of these processes with kill -9, I am able to run vgdisplay again; it then complains that it cannot obtain the "cluster lock" and does not display any clustered LVM volumes.
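As a diagnostic sketch for this kind of hang (assuming the stock cman/DLM/clvmd stack on CentOS 6; "clvmd" is the default lockspace name and may differ), the following commands can show whether the DLM lockspace behind clvmd is the part that is stuck:

# check cluster membership and quorum on both nodes
cman_tool status
cman_tool nodes

# list DLM lockspaces; a healthy clvmd normally owns a lockspace named "clvmd"
dlm_tool ls

# dump the lock state of that lockspace to see which node holds the blocking lock
dlm_tool lockdebug clvmd

# confirm LVM is configured for clustered locking (locking_type = 3 in /etc/lvm/lvm.conf)
lvm dumpconfig global/locking_type

If dlm_tool ls shows the lockspace stuck in an unfinished membership change (for example waiting on fencing), the hang is at the DLM/cluster layer rather than in LVM itself.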
