Channel: CentOS Bug Tracker - Issues

0006396: Crash of tgtd while doing a backup

I have several VMs running under KVM. Every week I create a backup, essentially by creating snapshot volumes on the guests and using an Open-iSCSI initiator to connect to a backup disk on the physical host. Then I copy the data over (this is essentially dd, although I use my own script so I can also limit bandwidth if needed).

On the physical host I use tgtd from scsi-target-utils (1.0.24).

Since the update from CentOS 6.3 to 6.4 I am seeing high swap usage during the backup, and tgtd sometimes crashes. From /var/log/messages:

messages-20130414:Apr 12 21:33:24 falcon tgtd: conn_close(101) connection closed, 0x1da5768 1
messages-20130414:Apr 12 21:33:24 falcon tgtd: conn_close(107) sesson 0x1da5a30 1
messages-20130414:Apr 12 21:33:26 falcon tgtd: conn_close(101) connection closed, 0x1da5768 1
messages-20130414:Apr 12 21:36:12 falcon tgtd: conn_close(101) connection closed, 0x1da5768 1
messages-20130414:Apr 12 21:36:12 falcon tgtd: conn_close(107) sesson 0x1da5a30 1
messages-20130414:Apr 12 21:36:14 falcon tgtd: conn_close(101) connection closed, 0x1da5768 1
messages-20130414:Apr 12 23:01:19 falcon kernel: tgtd[8878]: segfault at 360b8b0 ip 0000000000405fef sp 00007fff3488a430 error 6 in tgtd[400000+3c000]
messages-20130414:Apr 12 23:01:19 falcon tgtd: conn_close(101) connection closed, 0x1da5768 22
messages-20130414:Apr 12 23:01:19 falcon tgtd: conn_close(107) sesson 0x1da5a30 1
messages-20130414:Apr 12 23:01:19 falcon tgtd: conn_close(138) Forcing release of tx task 0x315a2c0 6c9 1
messages-20130414:Apr 12 23:01:20 falcon abrt[7230]: Saved core dump of pid 8878 (/usr/sbin/tgtd) to /var/spool/abrt/ccpp-2013-04-12-23:01:19-8878 (206032896 bytes)

When I examine the swap usage, I see that all swap is being used by qemu-kvm processes.
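One quick way to confirm that observation is to sum per-process swap usage from /proc. This is a generic sketch (not a command from the report), and it assumes a kernel that exposes the VmSwap field in /proc/&lt;pid&gt;/status; on kernels without that field it simply prints nothing:

```shell
# Sketch: list the processes holding the most swap, by reading the
# Name and VmSwap fields from /proc/<pid>/status.
for d in /proc/[0-9]*; do
    awk '/^Name:/   { name = $2 }
         /^VmSwap:/ { if ($2 > 0) print $2 " kB\t" name }' \
        "$d/status" 2>/dev/null
done | sort -rn | head
```

On the reporter's host this should show qemu-kvm processes at the top of the list.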
Also, I am seeing high CPU load during the backup, which is strange: it looks like what I saw some time ago when the KVM guests were not yet using asynchronous IO, but the guests are using asynchronous IO.

See also the mail I sent to the KVM mailing list (http://www.mail-archive.com/kvm@vger.kernel.org/msg88272.html). I did not get a response to this mail, but I suspect the issue has more to do with tgtd than with KVM, or that it is some subtle interoperability issue.
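The bandwidth-limited dd copy described above could be sketched roughly as follows. This is a minimal illustration, not the reporter's actual script: in the real setup the source would be the guest's snapshot volume and the destination the iSCSI-attached backup disk; here temp files stand in for both so the sketch is runnable as-is:

```shell
# Minimal sketch of a bandwidth-limited dd copy (placeholder paths,
# not the reporter's script). SRC/DST are temp files here; really
# they would be the snapshot volume and the iSCSI backup disk.
SRC=$(mktemp); DST=$(mktemp)
dd if=/dev/zero of="$SRC" bs=1024 count=3072 2>/dev/null  # 3 MiB of demo data

BS=$((1024 * 1024))   # copy in 1 MiB blocks
CHUNK=1               # blocks per tick: with sleep 1, caps at ~1 MiB/s
total=$(( ($(wc -c < "$SRC") + BS - 1) / BS ))
seek=0
while [ "$seek" -lt "$total" ]; do
    dd if="$SRC" of="$DST" bs="$BS" count="$CHUNK" \
       skip="$seek" seek="$seek" conv=notrunc 2>/dev/null
    seek=$((seek + CHUNK))
    sleep 1           # crude rate limiter: CHUNK * BS bytes per second
done
cmp -s "$SRC" "$DST" && echo "copy OK"
```

A tool like pv or dd's own progress reporting could replace the sleep-based throttle; the point is only that the copy runs in fixed-size chunks at a capped rate.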
