libguestfs Java binding library (JNI) crash.<br />
<br />
You need to apply the patch <a href="https://www.redhat.com/archives/libguestfs/2018-August/msg00131.html">https://www.redhat.com/archives/libguestfs/2018-August/msg00131.html</a><br />
<br />
or use the libguestfs version from GitHub at <a href="https://github.com/libguestfs/libguestfs/tree/stable-1.38">https://github.com/libguestfs/libguestfs/tree/stable-1.38</a>, which already contains this patch.<br />
<br />
I think you are using the tar.gz version from <a href="http://download.libguestfs.org/1.38-stable/">http://download.libguestfs.org/1.38-stable/</a>, which was made before this patch.<br />
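<br />
If it helps, a minimal sketch of building from the patched stable branch (the build steps are generic; adjust to your environment):<br />
<br />
====<br />
# Fetch the stable-1.38 branch, which already carries the fix<br />
git clone -b stable-1.38 https://github.com/libguestfs/libguestfs.git<br />
cd libguestfs<br />
# The Java bindings are built when a JDK is available at configure time<br />
./autogen.sh<br />
make && sudo make install<br />
====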
↧
0017335: libguestfs-java-devel crash
↧
0017337: Enable btrfs module in kernel plus.
Red Hat has disabled the btrfs module; please enable this module in kernel-plus.<br />
<br />
CONFIG_BTRFS_FS=m
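<br />
For reference, a quick sketch to check whether a given kernel build ships btrfs (standard /boot config path assumed):<br />
<br />
====<br />
# "=m" means built as a module; "is not set" means disabled<br />
grep CONFIG_BTRFS_FS /boot/config-$(uname -r)<br />
# If enabled as a module, it should load:<br />
modprobe btrfs && lsmod | grep btrfs<br />
====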
↧
0017292: Bonding not failing over in mode=1 under 2.6.32-754.28.1 (...27.1 works OK)
Note: I'm copying RHBZ #1828604 as that bug is set to private (I am the reporter).<br />
<br />
Summary: <br />
<br />
The bonding driver doesn't fail over when the link drops on mode=1 (active-passive) bonds under kernel-2.6.32-754.28.1.el6.x86_64; it works under kernel-2.6.32-754.27.1.el6.x86_64.<br />
<br />
Full:<br />
<br />
With a two-interface active-passive bond, issuing 'ifdown <link1>' works; the backup link takes over. However, if you unplug a cable, /proc/net/bonding/<bond> shows the active interface as 'down', yet it remains the in-use interface, so traffic over the bond fails.<br />
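<br />
For anyone reproducing this, a small sketch to watch the bond state while pulling the cable (interface names as in the configuration below):<br />
<br />
====<br />
# Refresh the active slave and MII states once a second during the failure<br />
watch -n1 "grep -E 'Currently Active Slave|MII Status' /proc/net/bonding/sn_bond1"<br />
====<br />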
<br />
Configuration:<br />
<br />
====<br />
[<a href="mailto:root@an-a02n02">root@an-a02n02</a> ~]# cat /etc/sysconfig/network-scripts/ifcfg-sn_link1 <br />
# Generated by: [InstallManifest.pm] on: [2020-03-24, 19:33:15].<br />
# Storage Network - Link 1<br />
DEVICE="sn_link1"<br />
NM_CONTROLLED="no"<br />
BOOTPROTO="none"<br />
ONBOOT="yes"<br />
SLAVE="yes"<br />
MASTER="sn_bond1"<br />
<br />
[<a href="mailto:root@an-a02n02">root@an-a02n02</a> ~]# cat /etc/sysconfig/network-scripts/ifcfg-sn_link2<br />
# Generated by: [InstallManifest.pm] on: [2020-03-24, 19:33:15].<br />
# Storage Network - Link 2<br />
DEVICE="sn_link2"<br />
NM_CONTROLLED="no"<br />
BOOTPROTO="none"<br />
ONBOOT="yes"<br />
SLAVE="yes"<br />
MASTER="sn_bond1"<br />
<br />
[<a href="mailto:root@an-a02n02">root@an-a02n02</a> ~]# cat /etc/sysconfig/network-scripts/ifcfg-sn_bond1 <br />
# Generated by: [InstallManifest.pm] on: [2020-03-24, 19:33:15].<br />
# Storage Network - Bond 1<br />
DEVICE="sn_bond1"<br />
BOOTPROTO="static"<br />
ONBOOT="yes"<br />
BONDING_OPTS="mode=1 miimon=100 use_carrier=1 updelay=120000 downdelay=0 primary=sn_link1 primary_reselect=always"<br />
IPADDR="10.10.20.2"<br />
NETMASK="255.255.0.0"<br />
DEFROUTE="no"<br />
====<br />
<br />
-=] Under 2.6.32-754.27.1.el6.x86_64 [=-<br />
<br />
/proc/net/bonding/sn_bond1 pre-failure:<br />
<br />
====<br />
[<a href="mailto:root@an-a02n02">root@an-a02n02</a> ~]# cat /proc/net/bonding/sn_bond1 <br />
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)<br />
<br />
Bonding Mode: fault-tolerance (active-backup)<br />
Primary Slave: sn_link1 (primary_reselect always)<br />
Currently Active Slave: sn_link1<br />
MII Status: up<br />
MII Polling Interval (ms): 100<br />
Up Delay (ms): 120000<br />
Down Delay (ms): 0<br />
<br />
Slave Interface: sn_link1<br />
MII Status: up<br />
Speed: 10000 Mbps<br />
Duplex: full<br />
Link Failure Count: 0<br />
Permanent HW addr: b4:96:91:4f:10:15<br />
Slave queue ID: 0<br />
<br />
Slave Interface: sn_link2<br />
MII Status: up<br />
Speed: 10000 Mbps<br />
Duplex: full<br />
Link Failure Count: 0<br />
Permanent HW addr: b4:96:91:4f:10:14<br />
Slave queue ID: 0<br />
====<br />
<br />
/var/log/messages when failing sn_link1:<br />
<br />
====<br />
Apr 27 17:22:01 an-a02n02 kernel: ixgbe 0000:05:00.1: sn_link1: NIC Link is Down<br />
Apr 27 17:22:01 an-a02n02 kernel: sn_bond1: link status definitely down for interface sn_link1, disabling it<br />
Apr 27 17:22:01 an-a02n02 kernel: sn_bond1: making interface sn_link2 the new active one<br />
====<br />
<br />
/proc/net/bonding/sn_bond1 post-failure:<br />
<br />
====<br />
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)<br />
<br />
Bonding Mode: fault-tolerance (active-backup)<br />
Primary Slave: sn_link1 (primary_reselect always)<br />
Currently Active Slave: sn_link2<br />
MII Status: up<br />
MII Polling Interval (ms): 100<br />
Up Delay (ms): 120000<br />
Down Delay (ms): 0<br />
<br />
Slave Interface: sn_link1<br />
MII Status: down<br />
Speed: Unknown<br />
Duplex: Unknown<br />
Link Failure Count: 1<br />
Permanent HW addr: b4:96:91:4f:10:15<br />
Slave queue ID: 0<br />
<br />
Slave Interface: sn_link2<br />
MII Status: up<br />
Speed: 10000 Mbps<br />
Duplex: full<br />
Link Failure Count: 0<br />
Permanent HW addr: b4:96:91:4f:10:14<br />
Slave queue ID: 0<br />
====<br />
<br />
Worked fine.<br />
<br />
-=] Under 2.6.32-754.28.1.el6.x86_64 [=-<br />
<br />
/proc/net/bonding/sn_bond1 pre-failure:<br />
<br />
====<br />
[<a href="mailto:root@an-a02n02">root@an-a02n02</a> ~]# cat /proc/net/bonding/sn_bond1 <br />
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)<br />
<br />
Bonding Mode: fault-tolerance (active-backup)<br />
Primary Slave: sn_link1 (primary_reselect always)<br />
Currently Active Slave: sn_link1<br />
MII Status: up<br />
MII Polling Interval (ms): 100<br />
Up Delay (ms): 120000<br />
Down Delay (ms): 0<br />
<br />
Slave Interface: sn_link1<br />
MII Status: up<br />
Speed: 10000 Mbps<br />
Duplex: full<br />
Link Failure Count: 0<br />
Permanent HW addr: b4:96:91:4f:10:15<br />
Slave queue ID: 0<br />
<br />
Slave Interface: sn_link2<br />
MII Status: up<br />
Speed: 10000 Mbps<br />
Duplex: full<br />
Link Failure Count: 0<br />
Permanent HW addr: b4:96:91:4f:10:14<br />
Slave queue ID: 0<br />
====<br />
<br />
/var/log/messages when failing sn_link1 (just the one line...):<br />
<br />
====<br />
Apr 27 17:32:08 an-a02n02 kernel: ixgbe 0000:05:00.1: sn_link1: NIC Link is Down<br />
====<br />
<br />
/proc/net/bonding/sn_bond1 post-failure:<br />
<br />
====<br />
[<a href="mailto:root@an-a02n02">root@an-a02n02</a> ~]# cat /proc/net/bonding/sn_bond1 <br />
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)<br />
<br />
Bonding Mode: fault-tolerance (active-backup)<br />
Primary Slave: sn_link1 (primary_reselect always)<br />
Currently Active Slave: sn_link1<br />
MII Status: up<br />
MII Polling Interval (ms): 100<br />
Up Delay (ms): 120000<br />
Down Delay (ms): 0<br />
<br />
Slave Interface: sn_link1<br />
MII Status: down<br />
Speed: Unknown<br />
Duplex: Unknown<br />
Link Failure Count: 0<br />
Permanent HW addr: b4:96:91:4f:10:15<br />
Slave queue ID: 0<br />
<br />
Slave Interface: sn_link2<br />
MII Status: up<br />
Speed: 10000 Mbps<br />
Duplex: full<br />
Link Failure Count: 0<br />
Permanent HW addr: b4:96:91:4f:10:14<br />
Slave queue ID: 0<br />
====
↧
0017338: Unable to use ISO located on Samba share to install VM
In the ISO selection screen when creating a new VM with Virtual Machine Manager, I click Browse > Browse Local > Other Locations, then under "Networks" I select my already-mounted Samba share, navigate to the correct subfolder, pick my torrent-downloaded and verified CentOS 7 DVD ISO, and click the Open button. It drops me back to the "New VM" dialogue, where it is apparently unable to detect the OS type; I manually specify Linux/CentOS 7, click Forward, and get this error: "Validating install media '/run/usr/1000/gvfs/smb-share:server=192.168.11.60,share=media/Installers/OSes/Centos-7-x86_64-DVD-2003/CentOS-7-x86_64-DVD-2003.iso' failed: Could not start storage pool: cannot open directory 'run/user/1000/gvfs/smb-share:server=192.168.11.60,share=media/Installers/OSes/Centos-7-x86_64-DVD-2003': Permission denied". However, manually mounting the same Samba share in my home folder and then, via Browse > Browse Local, selecting my home directory, the mount point, the subfolder, and the same ISO works perfectly.<br />
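<br />
A workaround sketch consistent with that last observation: mount the share with mount.cifs outside of gvfs, so libvirt can traverse the path (server, share, mount point, and credentials below are placeholders):<br />
<br />
====<br />
sudo mkdir -p /mnt/media<br />
sudo mount -t cifs //192.168.11.60/media /mnt/media -o ro,username=USER<br />
# then point Browse Local at /mnt/media/... instead of the gvfs path<br />
====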
↧
0017312: Spectrum Scale 5.0.4.3 and 5.0.3.2 (gpfs) on CentOS 7.8: make World fails
Invoking Kbuild...<br />
/usr/bin/make -C /usr/src/kernels/3.10.0-1127.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \<br />
if [ $? -ne 0 ]; then \<br />
exit 1;\<br />
fi<br />
make[2]: Entering directory `/usr/src/kernels/3.10.0-1127.el7.x86_64'<br />
LD /usr/lpp/mmfs/src/gpl-linux/built-in.o<br />
CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o<br />
CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o<br />
CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o<br />
CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o<br />
LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o<br />
CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o<br />
LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o<br />
CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o<br />
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:61:0,<br />
from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:<br />
/usr/lpp/mmfs/src/gpl-linux/kx.c: In function ‘reopen_file’:<br />
/usr/lpp/mmfs/src/gpl-linux/kx.c:5743:7: error: implicit declaration of function ‘file_release_write’ [-Werror=implicit-function-declaration]<br />
file_release_write(fP);<br />
^<br />
cc1: some warnings being treated as errors<br />
make[3]: *** [/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o] Error 1<br />
make[2]: *** [_module_/usr/lpp/mmfs/src/gpl-linux] Error 2<br />
make[2]: Leaving directory `/usr/src/kernels/3.10.0-1127.el7.x86_64'<br />
make[1]: *** [modules] Error 1<br />
make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux'<br />
make: *** [Modules] Error 1
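<br />
A diagnostic sketch (kernel source path taken from the log above): if the following grep returns nothing, the 3.10.0-1127 headers no longer declare the function, which would explain the implicit-declaration error.<br />
<br />
====<br />
grep -rn "file_release_write" /usr/src/kernels/3.10.0-1127.el7.x86_64/include/<br />
====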
↧
0017302: *-azure-common-release tags locked
Hi,<br />
<br />
It seems that for some reason the *-azure-common-release tags I'm using to release packages are now locked [1].<br />
<br />
This wasn't the case with our last release in March. I'm not sure if this is an error or if there has been some change in the process that I'm not aware of. Is there something I should do to re-enable these tags?<br />
<br />
[1]: <a href="https://cbs.centos.org/koji/taginfo?tagID=1562">https://cbs.centos.org/koji/taginfo?tagID=1562</a> and <a href="https://cbs.centos.org/koji/taginfo?tagID=1558">https://cbs.centos.org/koji/taginfo?tagID=1558</a>
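<br />
For reference, a sketch of checking the lock flag from the koji CLI (the cbs profile name and the tag name placeholder are assumptions; see [1] for the actual tags):<br />
<br />
====<br />
koji -p cbs taginfo <name>-azure-common-release<br />
# the output includes whether the tag is locked<br />
====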
↧
0017241: Create CI project for rpki-client
I'm the Fedora/EPEL package maintainer and an upstream contributor of OpenBSD "rpki-client" (portable) and would like to support the "rpki-client" portable project by having CentOS builders run CI builds (and later maybe further tests) on CentOS's CI. Not sure if it's relevant, but as it's a small project, we're happy with fewer resources.
↧
0017339: kernel panic on the second load of the mlx4_core kernel module
On boot, the mlx4_core kernel module properly finds a Mellanox ConnectX-2 VPD device. The device does not get attached to the kernel module the first time, as shown by the syslog messages:<br />
[ 62.792147] mlx4_core 0000:05:00.0: command 0x4 timed out (go bit not cleared)<br />
[ 62.792150] mlx4_core 0000:05:00.0: device is going to be reset<br />
[ 62.798085] mlx4_core 0000:05:00.0: crdump: FW doesn't support health buffer access, skipping<br />
[ 63.799505] mlx4_core 0000:05:00.0: device was reset successfully<br />
[ 63.805637] mlx4_core 0000:05:00.0: QUERY_FW command failed, aborting<br />
[ 63.812112] mlx4_core 0000:05:00.0: Failed to init fw, aborting.<br />
[ 64.819389] mlx4_core: probe of 0000:05:00.0 failed with error -5<br />
[ 64.833322] pps_core: LinuxPPS API ver. 1 registered<br />
<br />
After removing the mlx4_core module, running modprobe again causes a kernel panic.<br />
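<br />
For completeness, the reload sequence that triggers the panic here, as a plain sketch:<br />
<br />
====<br />
modprobe -r mlx4_core   # unload after the failed first probe<br />
modprobe mlx4_core      # second load -> kernel panic on this system<br />
====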
↧
0017340: [abrt] lvm2: lv_add_segment(): lvm killed by SIGSEGV
Description of problem:<br />
- fresh CentOS 7 VM on a Debian host<br />
- added four 10 GiB VirtIO virtual disks to the VM<br />
- in the VM, created one LVM partition on each disk (vdb1, vdc1, vdd1, vde1), then:<br />
<br />
pvcreate /dev/vdb1<br />
pvcreate /dev/vdc1<br />
vgcreate storage /dev/vdb1<br />
vgdisplay storage<br />
lvcreate --name lv_test -l 2559 storage<br />
mkfs -t ext4 /dev/mapper/storage-lv_test<br />
mount /dev/storage/lv_test /mnt<br />
vim /mnt/testfile<br />
umount /mnt<br />
<br />
vgextend storage /dev/vdc1<br />
lvextend --size +5G /dev/storage/lv_test<br />
e2fsck -f /dev/storage/lv_test<br />
resize2fs /dev/storage/lv_test<br />
mount /dev/storage/lv_test /mnt<br />
vim /mnt/testfile2<br />
umount /mnt<br />
pvcreate /dev/vdd1<br />
vgextend storage /dev/vdd1<br />
pvcreate /dev/vde1<br />
vgextend storage /dev/vde1<br />
lvconvert --type raid5 --stripes 2 storage/lv_test<br />
lvconvert --type raid5 --stripes 2 storage/lv_test<br />
<br />
lvextend -l +2500 /dev/storage/lv_test<br />
<br />
The last command resulted in a segfault.<br />
<br />
Version-Release number of selected component:<br />
lvm2-2.02.186-7.el7_8.1<br />
<br />
Truncated backtrace:<br />
Thread no. 1 (10 frames)<br />
#0 lv_add_segment<br />
#1 _lv_extend_layered_lv<br />
#2 lv_extend<br />
#3 _lvresize_volume<br />
#4 lv_resize<br />
#5 _lvresize_single<br />
#6 process_each_vg<br />
#7 lvresize<br />
#8 lvm_run_command<br />
#9 lvm2_main
↧
0017341: dovecot missing dh.pem for ssl communication
After updating dovecot, my SSL configuration stopped working because the ssl_dh parameter was missing from /etc/dovecot/conf.d/10-ssl.conf.<br />
I had to generate dh.pem and add it to 10-ssl.conf myself.
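<br />
For others hitting this, a sketch of the steps I mean (the key size is a choice; ssl_dh is the Dovecot 2.3 setting):<br />
<br />
====<br />
# Generate DH parameters; this can take a while<br />
openssl dhparam -out /etc/dovecot/dh.pem 4096<br />
# In /etc/dovecot/conf.d/10-ssl.conf add:<br />
#   ssl_dh = </etc/dovecot/dh.pem<br />
systemctl restart dovecot<br />
====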
↧
0017342: [abrt] tracker: pcache1PinPage(): tracker-store killed by SIGSEGV
Version-Release number of selected component:<br />
tracker-1.10.5-6.el7<br />
<br />
Truncated backtrace:<br />
Thread no. 1 (10 frames)<br />
#0 pcache1PinPage at /lib64/libsqlite3.so.0<br />
#1 pcache1Fetch at /lib64/libsqlite3.so.0<br />
#2 sqlite3PcacheFetch at /lib64/libsqlite3.so.0<br />
#3 sqlite3PagerAcquire at /lib64/libsqlite3.so.0<br />
#4 btreeGetPage at /lib64/libsqlite3.so.0<br />
#5 getOverflowPage at /lib64/libsqlite3.so.0<br />
#6 accessPayload at /lib64/libsqlite3.so.0<br />
#7 sqlite3VdbeMemFromBtree at /lib64/libsqlite3.so.0<br />
#8 sqlite3VdbeExec at /lib64/libsqlite3.so.0<br />
#9 sqlite3_step at /lib64/libsqlite3.so.0
↧
0017343: Cloud Images don't output to tty0
The kernel command line is missing "console=tty0", so there is no output on tty0. While tty0 is not really used in most cloud environments, it is still useful to have this console enabled.<br />
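<br />
As a workaround on an affected instance, a sketch using grubby (verify the resulting command line against your image before rebooting):<br />
<br />
====<br />
# Append console=tty0 to all installed kernels<br />
grubby --update-kernel=ALL --args="console=tty0"<br />
# Verify:<br />
grubby --info=ALL | grep ^args<br />
====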
↧
0017324: CPU hot add feature in ESXi causes CentOS 7.x VM to crash due to a race condition when free memory in the guest VM is low.
What problem/issue/behavior are you having trouble with? What do you expect to see?<br />
<br />
When free memory in a CentOS 7.7 guest VM (tested kernel version: 3.10.0-1062.12.1.el7.x86_64) running in a VMware environment (tested ESXi 6.7) is below 110-120 MB, a CPU hot add operation can cause the VM to panic unexpectedly. The same issue has been found on Red Hat 7.7 and confirmed there as a bug that needs a fix; see <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1819807">https://bugzilla.redhat.com/show_bug.cgi?id=1819807</a>. The issue affects not just 7.7 but every version of CentOS 7.x.<br />
<br />
Following is the log "core.txt" from CentOS 7.7 (kernel version: 3.10.0-1062.12.1.el7.x86_64) when the panic happens during the CPU hot add.<br />
<br />
<br />
The system crashes because an invalid (NULL) pointer is dereferenced. vmcore.txt shows the following panic signature; all of the panics report similar symptoms.<br />
[ 92.164060] CPU8 has been hot-added<br />
[ 92.166979] CPU9 has been hot-added<br />
[ 92.169032] CPU10 has been hot-added<br />
[ 92.170138] CPU11 has been hot-added<br />
[ 93.841222] smpboot: Booting Node 0 Processor 11 APIC 0x16<br />
[ 93.841809] Disabled fast string operations<br />
[ 93.842964] smpboot: CPU 11 Converting physical 22 to logical package 8<br />
[ 93.843003] Skipped synchronization checks as TSC is reliable.<br />
[ 93.915347] BUG: unable to handle kernel NULL pointer dereference at (null)<br />
[ 93.915353] IP: [<ffffffffb21a11bb>] __list_add+0x1b/0xc0<br />
[ 93.915361] PGD 0<br />
[ 93.915364] Oops: 0000 [#1] SMP<br />
[ 93.915367] Modules linked in: tcp_lp fuse xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 tun devlink ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat iptable_mangle iptable_security iptable_raw nf_conntrack ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vmw_vsock_vmci_transport vsock sunrpc ppdev sb_edac iosf_mbi crc32_pclmul vmw_balloon ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd joydev pcspkr sg vmw_vmci i2c_piix4 parport_pc parport ip_tables xfs libcrc32c sr_mod cdrom ata_generic pata_acpi<br />
[ 93.915401] sd_mod crc_t10dif vmwgfx crct10dif_generic drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm ahci crct10dif_pclmul crct10dif_common crc32c_intel libahci drm ata_piix nfit serio_raw libata libnvdimm vmxnet3 vmw_pvscsi drm_panel_orientation_quirks dm_mirror dm_region_hash dm_log dm_mod<br />
[ 93.915417] CPU: 11 PID: 3568 Comm: systemd-udevd Kdump: loaded Not tainted 3.10.0-1062.12.1.el7.x86_64 #1<br />
[ 93.915419] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 12/12/2018<br />
[ 93.915421] task: ffff8eaca7675230 ti: ffff8ead8c378000 task.ti: ffff8ead8c378000<br />
[ 93.915423] RIP: 0010:[<ffffffffb21a11bb>] [<ffffffffb21a11bb>] __list_add+0x1b/0xc0<br />
[ 93.915426] RSP: 0018:ffff8ead8c37b508 EFLAGS: 00010246<br />
[ 93.915427] RAX: 00000000ffffffff RBX: ffff8ead8c37b530 RCX: 0000000000000000<br />
[ 93.915429] RDX: ffff8eae2a6d80b0 RSI: 0000000000000000 RDI: ffff8ead8c37b530<br />
[ 93.915431] RBP: ffff8ead8c37b520 R08: 0000000000000000 R09: 0000000000000002<br />
[ 93.915433] R10: ffffffffb2b5b2c0 R11: ffffffffffffffff R12: ffff8eae2a6d80b0<br />
[ 93.915434] R13: 0000000000000000 R14: 00000000ffffffff R15: ffff8eae2a6d80b0<br />
[ 93.915437] FS: 00007f9d423788c0(0000) GS:ffff8eae2a6c0000(0000) knlGS:0000000000000000<br />
[ 93.915439] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033<br />
[ 93.915440] CR2: 0000000000000000 CR3: 000000018e32c000 CR4: 00000000001607e0<br />
[ 93.915472] Call Trace:<br />
[ 93.915479] [<ffffffffb257f8b6>] __mutex_lock_slowpath+0xa6/0x1d0<br />
[ 93.915485] [<ffffffffb257ecaf>] mutex_lock+0x1f/0x2f<br />
[ 93.915490] [<ffffffffb1fe6bab>] get_swap_page+0x9b/0x1b0<br />
[ 93.915494] [<ffffffffb20075c9>] add_to_swap+0x19/0x80<br />
[ 93.915499] [<ffffffffb1fd26cb>] shrink_page_list+0x69b/0xc30<br />
[ 93.915503] [<ffffffffb1fd3286>] shrink_inactive_list+0x1c6/0x5d0<br />
[ 93.915506] [<ffffffffb1fd3d85>] shrink_lruvec+0x385/0x740<br />
[ 93.915509] [<ffffffffb1fd41b6>] shrink_zone+0x76/0x1a0<br />
[ 93.915512] [<ffffffffb1fd46a0>] do_try_to_free_pages+0xf0/0x520<br />
[ 93.915516] [<ffffffffb2024b5e>] ? ___slab_alloc+0x24e/0x4f0<br />
[ 93.915519] [<ffffffffb1fd4bcc>] try_to_free_pages+0xfc/0x180<br />
[ 93.915522] [<ffffffffb1fc87f1>] __alloc_pages_nodemask+0x831/0xbe0<br />
[ 93.915527] [<ffffffffb2109700>] ? selinux_mmap_addr+0x50/0x60<br />
[ 93.915531] [<ffffffffb2016ba8>] alloc_pages_current+0x98/0x110<br />
[ 93.915533] [<ffffffffb20247c3>] new_slab+0x393/0x4e0<br />
[ 93.915536] [<ffffffffb2024cbc>] ___slab_alloc+0x3ac/0x4f0<br />
[ 93.915539] [<ffffffffb1ffa71c>] ? mmap_region+0x38c/0x670<br />
[ 93.915542] [<ffffffffb210a3db>] ? cred_has_capability+0x6b/0x120<br />
[ 93.915545] [<ffffffffb1ffa71c>] ? mmap_region+0x38c/0x670<br />
[ 93.915548] [<ffffffffb257760f>] __slab_alloc+0x40/0x5c<br />
[ 93.915550] [<ffffffffb20250db>] kmem_cache_alloc+0x19b/0x1f0<br />
[ 93.915553] [<ffffffffb1ffa71c>] ? mmap_region+0x38c/0x670<br />
[ 93.915555] [<ffffffffb1ffa71c>] mmap_region+0x38c/0x670<br />
[ 93.915558] [<ffffffffb1ffad78>] do_mmap+0x378/0x530<br />
[ 93.915560] [<ffffffffb210a9b0>] ? file_map_prot_check+0xd0/0xd0<br />
[ 93.915563] [<ffffffffb1fddfe0>] vm_mmap_pgoff+0xd0/0x120<br />
[ 93.915566] [<ffffffffb1ff8c26>] SyS_mmap_pgoff+0x116/0x270<br />
[ 93.915572] [<ffffffffb1e31f12>] SyS_mmap+0x22/0x30<br />
[ 93.915575] [<ffffffffb258dede>] system_call_fastpath+0x25/0x2a<br />
[ 93.915577] Code: ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 55 48 89 e5 41 55 49 89 f5 41 54 49 89 d4 53 4c 8b 42 08 48 89 fb 49 39 f0 75 2a <4d> 8b 45 00 4d 39 c4 75 68 4c 39 e3 74 3e 4c 39 eb 74 39 49 89<br />
[ 93.915597] RIP [<ffffffffb21a11bb>] __list_add+0x1b/0xc0<br />
[ 93.915600] RSP <ffff8ead8c37b508><br />
[ 93.915601] CR2: 0000000000000000<br />
[<a href="mailto:root@centos77">root@centos77</a>_client 127.0.0.1-2020-02-27-10:25:44]#<br />
<br />
<br />
Where are you experiencing the behavior? What environment?<br />
Encountered in production and reproducible in a lab setup.<br />
<br />
When does the behavior occur? Frequency? Repeatedly? At certain times?<br />
<br />
Consistent failure. It appears to be a race condition that can occur whenever a CPU hot add is performed. When memory pressure is created (refer to the document for steps to reproduce), this occurs consistently; a rough sketch follows.<br />
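<br />
For reference, a rough reproduction sketch as I understand the trigger (the memory-pressure tool and sizes are my choices; the sysfs path is the standard way a hot-added CPU is brought online):<br />
<br />
====<br />
# In the guest: consume memory until free memory drops near 100 MB<br />
stress --vm 4 --vm-bytes 900M &   # adjust to the guest's RAM<br />
# In ESXi: hot-add vCPUs to the running VM, then in the guest:<br />
echo 1 > /sys/devices/system/cpu/cpu8/online   # CPU number is an example<br />
====<br />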
<br />
What information can you provide around timeframes and the business impact?<br />
This has a significant business impact, as it prevents the safe use of the CPU hot add feature for guests running CentOS.
↧
0017000: Krb5LoginModule.attemptAuthentication KrbException: Message stream modified (41)
After updating OpenJDK from 1.8.0_232-b09 to 1.8.0_242-b08, a KrbException: Message stream modified (41) occurs.<br />
<br />
Login configuration:<br />
<br />
serverSecurityDomain {<br />
com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true debug=true keyTab="/etc/some.keytab" doNotPrompt=true storeKey=true realm=someRealm principal="somePrincipal";<br />
};<br />
<br />
/etc/krb5.conf:<br />
<br />
# Configuration snippets may be placed in this directory as well<br />
includedir /etc/krb5.conf.d/<br />
<br />
[logging]<br />
default = FILE:/var/log/krb5libs.log<br />
kdc = FILE:/var/log/krb5kdc.log<br />
admin_server = FILE:/var/log/kadmind.log<br />
<br />
[libdefaults]<br />
dns_lookup_realm = false<br />
ticket_lifetime = 24h<br />
renew_lifetime = 7d<br />
forwardable = true<br />
rdns = false<br />
# default_realm = EXAMPLE.COM<br />
default_ccache_name = KEYRING:persistent:%{uid}<br />
<br />
[realms]<br />
# EXAMPLE.COM = {<br />
# kdc = kerberos.example.com<br />
# admin_server = kerberos.example.com<br />
# }<br />
<br />
[domain_realm]<br />
# .example.com = EXAMPLE.COM<br />
# example.com = EXAMPLE.COM
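<br />
A commonly reported workaround for "Message stream modified (41)" after 8u242 (my assumption is that the Kerberos cross-realm referrals support added around that update is involved; treat this as a sketch, not a confirmed fix):<br />
<br />
====<br />
# Disable the new KDC referrals behaviour for this JVM<br />
java -Dsun.security.krb5.disableReferrals=true -jar yourapp.jar<br />
====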
↧
0017344: [abrt] firefox: mozilla::CycleCollectedJSRuntime::JSObjectsTenured()(): firefox killed by SIGSEGV
Version-Release number of selected component:<br />
firefox-68.7.0-2.el7.centos<br />
<br />
Truncated backtrace:<br />
Thread no. 1 (8 frames)<br />
#0 mozilla::CycleCollectedJSRuntime::JSObjectsTenured() at /usr/lib64/firefox/libxul.so<br />
#1 js::Nursery::doCollection(JS::GCReason, js::gc::TenureCountCache&) at /usr/lib64/firefox/libxul.so<br />
#2 js::Nursery::collect(JS::GCReason) at /usr/lib64/firefox/libxul.so<br />
#3 js::gc::GCRuntime::minorGC(JS::GCReason, js::gcstats::PhaseKind) at /usr/lib64/firefox/libxul.so<br />
#4 JSObject* js::AllocateObject<(js::AllowGC)1>(JSContext*, js::gc::AllocKind, unsigned long, js::gc::InitialHeap, js::Class const*) at /usr/lib64/firefox/libxul.so<br />
#5 NewObject(JSContext*, JS::Handle<js::ObjectGroup*>, js::gc::AllocKind, js::NewObjectKind, unsigned int) at /usr/lib64/firefox/libxul.so<br />
#6 js::NewObjectWithGivenTaggedProto(JSContext*, js::Class const*, JS::Handle<js::TaggedProto>, js::gc::AllocKind, js::NewObjectKind, unsigned int) at /usr/lib64/firefox/libxul.so<br />
#7 js::NewArrayIteratorObject(JSContext*, js::NewObjectKind) at /usr/lib64/firefox/libxul.so
↧
0017345: [abrt] policycoreutils: cil_list_destroy(): semodule killed by SIGSEGV
Version-Release number of selected component:<br />
policycoreutils-2.5-34.el7<br />
<br />
Truncated backtrace:<br />
Thread no. 1 (10 frames)<br />
#0 cil_list_destroy at ../cil/src/cil_list.c:67<br />
#1 cil_reset_classperms at ../cil/src/cil_reset_ast.c:46<br />
#2 cil_reset_classperms_list at ../cil/src/cil_reset_ast.c:73<br />
#3 cil_reset_avrule at ../cil/src/cil_reset_ast.c:198<br />
#4 __cil_reset_node at ../cil/src/cil_reset_ast.c:476<br />
#5 cil_tree_walk_core at ../cil/src/cil_tree.c:272<br />
#6 cil_tree_walk at ../cil/src/cil_tree.c:316<br />
#7 cil_tree_walk_core at ../cil/src/cil_tree.c:284<br />
#8 cil_tree_walk at ../cil/src/cil_tree.c:316<br />
#9 cil_tree_walk_core at ../cil/src/cil_tree.c:284
↧
0017346: [abrt] gnome-disk-utility: g_dbus_object_get_interface(): gnome-disks killed by SIGSEGV
Description of problem:<br />
Trying to format an external drive.<br />
<br />
Version-Release number of selected component:<br />
gnome-disk-utility-3.28.3-1.el7<br />
<br />
Truncated backtrace:<br />
Thread no. 1 (8 frames)<br />
#0 g_dbus_object_get_interface at gdbusobject.c:149<br />
#1 udisks_object_get_drive at /lib64/libudisks2.so.0<br />
#2 gdu_utils_get_all_contained_objects at ../src/libgdu/gduutils.c:1194<br />
#3 gdu_utils_is_in_use_full at ../src/libgdu/gduutils.c:1276<br />
#4 unuse_data_iterate at ../src/libgdu/gduutils.c:1472<br />
#5 gdu_utils_ensure_unused_list at ../src/libgdu/gduutils.c:1566<br />
#6 gdu_window_ensure_unused_list at ../src/disks/gduwindow.c:4376<br />
#7 gdu_window_ensure_unused at ../src/disks/gduwindow.c:4409
↧
0017347: Computer locks when accessing network drive via (Places) extension
The following GNOME Shell Extension completely blocks my desktop when trying to access network drives:<br />
<br />
Places Status Indicator (GNOME Shell Extension)<br />
Version: 3.32.1-10.el8<br />
Updated: Never<br />
Category: Add-ons → Shell Extensions<br />
License: Free<br />
Source: AppStream<br />
Installed Size: 22.1 kb<br />
<br />
With this extension enabled, the credentials popup locks the desktop and doesn't allow me to cancel or dismiss it. However, accessing network drives through Nautilus works fine.<br />
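<br />
If anyone else gets locked out, a sketch for recovering from a text console (Ctrl+Alt+F3; the UUID below is my guess for the stock Places Status Indicator):<br />
<br />
====<br />
# As the affected user: list enabled extensions, then re-set the list without<br />
# places-menu@gnome-shell-extensions.gcampax.github.com<br />
gsettings get org.gnome.shell enabled-extensions<br />
gsettings set org.gnome.shell enabled-extensions "[]"   # disables all extensions<br />
====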
↧
0017348: SELinux is preventing /usr/lib64/firefox/plugin-container;5eb626f0 (deleted) from 'setattr' accesses on the directory cache.
Description of problem:<br />
SELinux is preventing /usr/lib64/firefox/plugin-container;5eb626f0 (deleted) from 'setattr' accesses on the directory cache.<br />
<br />
***** Plugin mozplugger (99.1 confidence) suggests ************************<br />
<br />
If you want to use the plugin package<br />
Then you must turn off SELinux controls on the Firefox plugins.<br />
Do<br />
# setsebool -P unconfined_mozilla_plugin_transition 0<br />
<br />
***** Plugin catchall (1.81 confidence) suggests **************************<br />
<br />
If you believe that plugin-container;5eb626f0 (deleted) should be allowed setattr access on the cache directory by default.<br />
Then you should report this as a bug.<br />
You can generate a local policy module to allow this access.<br />
Do<br />
allow this access for now by executing:<br />
# ausearch -c 'Web Content' --raw | audit2allow -M my-WebContent<br />
# semodule -i my-WebContent.pp<br />
<br />
Additional Information:<br />
Source Context unconfined_u:unconfined_r:mozilla_plugin_t:s0-s0:c<br />
0.c1023<br />
Target Context system_u:object_r:lib_t:s0<br />
Target Objects cache [ dir ]<br />
Source Web Content<br />
Source Path /usr/lib64/firefox/plugin-container;5eb626f0<br />
(deleted)<br />
Port <Unknown><br />
Host (removed)<br />
Source RPM Packages <br />
Target RPM Packages <br />
Policy RPM selinux-policy-3.13.1-166.el7.noarch selinux-<br />
policy-3.13.1-266.el7.noarch<br />
Selinux Enabled True<br />
Policy Type targeted<br />
Enforcing Mode Enforcing<br />
Host Name (removed)<br />
Platform Linux (removed) 3.10.0-693.el7.x86_64 #1 SMP Tue<br />
Aug 22 21:09:27 UTC 2017 x86_64 x86_64<br />
Alert Count 1<br />
First Seen 2020-05-09 09:22:10 +0530<br />
Last Seen 2020-05-09 09:22:10 +0530<br />
Local ID ad10fb1a-58f1-4b27-82c1-49e4b2e146ab<br />
<br />
Raw Audit Messages<br />
type=AVC msg=audit(1588996330.60:389): avc: denied { setattr } for pid=3459 comm=57656220436F6E74656E74 name="cache" dev="dm-0" ino=50558343 scontext=unconfined_u:unconfined_r:mozilla_plugin_t:s0-s0:c0.c1023 tcontext=system_u:object_r:lib_t:s0 tclass=dir<br />
<br />
<br />
type=SYSCALL msg=audit(1588996330.60:389): arch=x86_64 syscall=chmod success=no exit=EACCES a0=7f80fa8eabe0 a1=1ed a2=7f80fa8eabf9 a3=7ffd309a3740 items=0 ppid=3396 pid=3459 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=(none) ses=2 comm=Web Content exe=/usr/lib64/firefox/plugin-container;5eb626f0 (deleted) subj=unconfined_u:unconfined_r:mozilla_plugin_t:s0-s0:c0.c1023 key=(null)<br />
<br />
Hash: Web Content,mozilla_plugin_t,lib_t,dir,setattr<br />
<br />
Version-Release number of selected component:<br />
selinux-policy-3.13.1-166.el7.noarch<br />
selinux-policy-3.13.1-266.el7.noarch
↧
0016939: dlm package missing from repo
The Distributed Lock Manager (dlm) package that was part of the base OS in 7.7 now appears to be part of the upstream Resilient Storage suite. Sadly, this package is missing from the current 8.0 and 8.1 distributions. I've checked various mirrors and cannot find this package, while its sister package dlm-libs is present.<br />
<br />
This package is needed for the proper functioning of a number of "cluster" applications like GFS2 and clvm.<br />
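<br />
For anyone verifying, a quick sketch against the shipped repos (repo IDs are from a default CentOS 8 install and may differ):<br />
<br />
====<br />
# dlm-libs resolves, dlm itself does not<br />
dnf list dlm dlm-libs<br />
dnf repoquery --disablerepo='*' --enablerepo=BaseOS,AppStream 'dlm*'<br />
====<br />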
<br />
Thank you guys for all the great work and all the recent HA packages. Please let me know if I can assist in any way.<br />
<br />
<br />
Cheers,<br />
<br />
Christian
↧