Channel: CentOS Bug Tracker - Issues
Viewing all 19115 articles

0016765: [abrt] gnome-settings-daemon: settings_backend_path_changed(): gsd-keyboard killed by SIGSEGV

Description of problem:
Hi. I was installing different services at the time, so I can't say exactly what caused this crash; I need a little help from you as the support team.
Kind regards

Version-Release number of selected component:
gnome-settings-daemon-3.28.1-5.el7

Truncated backtrace:
Thread no. 1 (3 frames)
#0 settings_backend_path_changed at gsettings.c:460
#1 g_settings_backend_invoke_closure at gsettingsbackend.c:267
#7 gtk_main at gtkmain.c:1323

0016766: Generic Cloud Image

Is there a plan to build and publish a GenericCloud image for CentOS 8?

0016145: SELinux is preventing /usr/libexec/dovecot/auth from 'write' accesses on the file passwd.db.

Description of problem:
SELinux is preventing /usr/libexec/dovecot/auth from 'write' accesses on the file passwd.db.

***** Plugin catchall (100. confidence) suggests **************************

If you believe that auth should be allowed write access on the passwd.db file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'auth' --raw | audit2allow -M my-auth
# semodule -i my-auth.pp

Additional Information:
Source Context                system_u:system_r:dovecot_auth_t:s0
Target Context                unconfined_u:object_r:postfix_spool_t:s0
Target Objects                passwd.db [ file ]
Source                        auth
Source Path                   /usr/libexec/dovecot/auth
Port                          <Unknown>
Host                          (removed)
Source RPM Packages
Target RPM Packages
Policy RPM                    selinux-policy-3.13.1-229.el7_6.12.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     (removed)
Platform                      Linux (removed) 3.10.0-957.12.2.el7.x86_64 #1 SMP Tue May 14 21:24:32 UTC 2019 x86_64 x86_64
Alert Count                   24
First Seen                    2019-06-03 22:59:15 CDT
Last Seen                     2019-06-03 23:08:17 CDT
Local ID                      98cb3e32-7f87-4b4c-bdaf-49ee3affe16e

Raw Audit Messages
type=AVC msg=audit(1559621297.370:496): avc: denied { write } for pid=19173 comm="auth" name="passwd.db" dev="dm-3" ino=10197 scontext=system_u:system_r:dovecot_auth_t:s0 tcontext=unconfined_u:object_r:postfix_spool_t:s0 tclass=file permissive=0

Hash: auth,dovecot_auth_t,postfix_spool_t,file,write

Version-Release number of selected component:
selinux-policy-3.13.1-229.el7_6.12.noarch
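For reference, the allow rule that audit2allow would generate here can be read straight off the AVC record. A minimal sketch (the AVC string below is abbreviated from the raw message above; this is not a substitute for actually running the ausearch | audit2allow pipeline):

```shell
# Extract the SELinux source type, target type, object class, and
# permission from the denial, and print the corresponding allow rule.
avc='avc: denied { write } for pid=19173 comm="auth" name="passwd.db" scontext=system_u:system_r:dovecot_auth_t:s0 tcontext=unconfined_u:object_r:postfix_spool_t:s0 tclass=file'
stype=$(echo "$avc" | grep -o 'scontext=[^ ]*' | cut -d: -f3)
ttype=$(echo "$avc" | grep -o 'tcontext=[^ ]*' | cut -d: -f3)
tclass=$(echo "$avc" | grep -o 'tclass=[^ ]*' | cut -d= -f2)
perm=$(echo "$avc" | sed -n 's/.*{ \([a-z_]*\) }.*/\1/p')
echo "allow $stype $ttype:$tclass $perm;"
```

Note the target label, postfix_spool_t, hints that passwd.db may simply be mislabeled rather than the policy being wrong; running restorecon on the file is worth trying before installing a custom module.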

0016767: SELinux is preventing /usr/sbin/dnsmasq from 'search' accesses on the directory /var/lib/NetworkManager/dnsmasq-wlp2s0.leases.

Description of problem:
SELinux is preventing /usr/sbin/dnsmasq from 'search' accesses on the directory /var/lib/NetworkManager/dnsmasq-wlp2s0.leases.

***** Plugin catchall (100. confidence) suggests **************************

If you believe that dnsmasq should be allowed search access on the dnsmasq-wlp2s0.leases directory by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'dnsmasq' --raw | audit2allow -M my-dnsmasq
# semodule -i my-dnsmasq.pp

Additional Information:
Source Context                system_u:system_r:dnsmasq_t:s0
Target Context                system_u:object_r:NetworkManager_var_lib_t:s0
Target Objects                /var/lib/NetworkManager/dnsmasq-wlp2s0.leases [ dir ]
Source                        dnsmasq
Source Path                   /usr/sbin/dnsmasq
Port                          <Unknown>
Host                          (removed)
Source RPM Packages           dnsmasq-2.76-10.el7_7.1.x86_64
Target RPM Packages
Policy RPM                    selinux-policy-3.13.1-252.el7.1.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     (removed)
Platform                      Linux (removed) 3.10.0-1062.4.1.el7.x86_64 #1 SMP Fri Oct 18 17:15:30 UTC 2019 x86_64 x86_64
Alert Count                   1
First Seen                    2019-11-26 21:57:16 IST
Last Seen                     2019-11-26 21:57:16 IST
Local ID                      275bc850-175a-4053-bd75-01f19f21c762

Raw Audit Messages
type=AVC msg=audit(1574785636.133:2182): avc: denied { search } for pid=9777 comm="dnsmasq" name="NetworkManager" dev="dm-0" ino=34810610 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:NetworkManager_var_lib_t:s0 tclass=dir permissive=0

type=SYSCALL msg=audit(1574785636.133:2182): arch=x86_64 syscall=open success=no exit=EACCES a0=55c9ce121070 a1=442 a2=1b6 a3=24 items=1 ppid=1602 pid=9777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=dnsmasq exe=/usr/sbin/dnsmasq subj=system_u:system_r:dnsmasq_t:s0 key=(null)

type=CWD msg=audit(1574785636.133:2182): cwd=/

type=PATH msg=audit(1574785636.133:2182): item=0 name=/var/lib/NetworkManager/dnsmasq-wlp2s0.leases objtype=UNKNOWN cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0

Hash: dnsmasq,dnsmasq_t,NetworkManager_var_lib_t,dir,search

Version-Release number of selected component:
selinux-policy-3.13.1-252.el7.1.noarch

0016768: please add s390x builders to CBS

Would you please consider adding s390x builders to CBS?

This would allow us to build Ceph for s390x. The OpenShift engineering team is working on adding s390x support in OpenShift, and we want to have Ceph CentOS builds available to work with that.

0016769: package qgpgme-devel is omitted from the CentOS-Stream repositories

CentOS-Stream provides package qgpgme, which includes shared libraries, in the Stream-AppStream repository, but it does not provide the corresponding qgpgme-devel package in Stream-PowerTools (nor in Stream-BaseOS or Stream-AppStream).

0016770: Something is up on pagure's pipeline

The pipeline fails with "The requested operation failed as no inventory is available" and a traceback.

https://ci.centos.org/job/pagure-pr/2970/console

0016771: CICO node get fails with no inventory available

We are facing issues while running CI on centos/container-index. When doing "cico node get" from the slave, it fails with:

The requested operation failed as no inventory is available

https://ci.centos.org/job/centos-container-index-ci/61/console

It was running fine yesterday; did something change today?

0016726: mlx4_core driver does not support ConnectX-2 cards

The mlx4_core kernel module supports Mellanox ConnectX-2 cards as long as the kernel is compiled with CONFIG_MLX4_CORE_GEN2. As per https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/mellanox/mlx4/Kconfig?h=v5.4-rc7 this is the default setting for recent kernels.

CentOS 8 for some reason disabled this setting, so ConnectX-2 cards no longer work.

0016635: kernel wrongly identifies iwlwifi as the driver for Intel Wireless 3945

$ lspci -knn | grep 3945
10:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 3945ABG [Golan] Network Connection [8086:4222] (rev 02)
$ grep 8086 /lib/modules/4.18.0-80.11.2.el8_0.x86_64/modules.* | grep 4222
/lib/modules/4.18.0-80.11.2.el8_0.x86_64/modules.alias:alias pci:v00008086d00000891sv*sd00004222bc*sc*i* iwlwifi

But https://wireless.wiki.kernel.org/en/users/drivers/iwlegacy says:
iwlegacy is the wireless driver for Intel's 3945 and 4965 wireless chips.
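One detail worth checking before filing against iwlwifi: decoding the matched modules.alias entry shows the "4222" hit is in the subsystem-device field (sd00004222), while the main device ID of that alias is 0891, a different chip. A small sketch (the alias string is copied verbatim from the grep output above):

```shell
# Split the PCI modalias into its device and subsystem-device fields.
alias='pci:v00008086d00000891sv*sd00004222bc*sc*i*'
device=$(echo "$alias" | sed -n 's/^pci:v[0-9A-F]*d0000\([0-9A-F]*\)sv.*/\1/p')
subsys=$(echo "$alias" | sed -n 's/.*sd0000\([0-9A-F]*\)bc.*/\1/p')
echo "device=$device subsys=$subsys"
```

So the alias shown does not actually claim PCI ID 8086:4222; the 3945 itself is handled upstream by iwlegacy (iwl3945), which the el8 kernel does not appear to ship, leaving the card without a driver.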

0016772: Access to Jenkins slave slave04

Hello, I am a member of the App SRE team in the Service Delivery organization at Red Hat.

I would like to get access to slave04.ci.centos.org (via jump.ci.centos.org).
My Red Hat user name is: mafriedm
My public SSH key is:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7iJZu4VCE574n9HTcfwf6dsJtlI+Xqr+srIu+H8UxoqOZlON76bQu/RweoT/tV5E+j3ctf5AVmwJ428ckGVrECeBJP3w861h1KDrW25q+ccoALHFjZJJ48l+mZaCJuHVs0oV2lDkqEqZvu+DmRw1G9xlajsdynDkJO4Ygu4JvHx/vpavCCaHWY1vsFI1JaxaZ5Ia4GR2JHaRflMk+qwC07y5i1oLXPITeT5DAGPe4zTSLtpQRGIJBqtC1YARzipmHmv0h9e/2iHg6F3gOIfKGsDfeBUJssc5yykqxdweM775F5Bwh96bF0sRPHNVczKkvrWoqnN4H8lji9FYU6ouV mafriedm@redhat.com

Thanks in advance,

0016773: qemu-kvm: kvm_init_vcpu failed: Cannot allocate memory

I am facing a qemu-kvm memory allocation issue during online guest migration between two hosts. I have activated guest placement to NUMA nodes. The destination host should have enough free memory (using hugepages) in NUMA node 1 for the migrated guest (consuming 16 GB of RAM).

guest memory configuration:
<memory unit='KiB'>16777216</memory>
<currentMemory unit='KiB'>16777216</currentMemory>
<memoryBacking>
  <hugepages>
    <page size='2048' unit='KiB' nodeset='0'/>
  </hugepages>
</memoryBacking>

source host:
2019-11-27 11:06:28.523+0000: 2562914: info : libvirt version: 4.5.0, package: 23.el7_7.1 (CentOS BuildSystem <http://bugs.centos.org>, 2019-09-13-18:01:52, x86-02.bsys.centos.org)
2019-11-27 11:06:28.523+0000: 2562914: info : hostname: ***
2019-11-27 11:06:28.523+0000: 2562914: error : virNetClientProgramDispatchError:174 : internal error: qemu unexpectedly closed the monitor: 2019-11-27T11:06:28.307440Z qemu-kvm: kvm_init_vcpu failed: Cannot allocate memory

cat /proc/meminfo | grep Huge
AnonHugePages:    256000 kB
HugePages_Total:   20480
HugePages_Free:    12288
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

numastat -czs libvirt kvm qemu

Per-node process memory usage (in MBs)
PID               Node 0  Node 1  Total
---------------   ------  ------  -----
338873 (qemu-kvm      14   16444  16458  <-- migration of this instance fails on memory allocation
2562889 (libvirt      12      11     23
---------------   ------  ------  -----
Total                 26   16455  16481

destination host:

2019-11-27 11:06:28.319+0000: 2612064: info : hostname: ***
2019-11-27 11:06:28.319+0000: 2612064: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
2019-11-27 11:06:28.319+0000: 2612064: error : qemuProcessReportLogError:1923 : internal error: qemu unexpectedly closed the monitor: 2019-11-27T11:06:28.307440Z qemu-kvm: kvm_init_vcpu failed: Cannot allocate memory

cat /proc/meminfo | grep Huge
AnonHugePages:    327680 kB
HugePages_Total:   20480
HugePages_Free:    15872
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

numastat -czs libvirt kvm qemu
Per-node process memory usage (in MBs)
PID               Node 0  Node 1  Total
---------------   ------  ------  -----
2616135 (qemu-kv    4157      10   4166
2618045 (qemu-kv    4144      10   4153
2617183 (qemu-kv    1075      10   1085
2612064 (libvirt       9      16     25
---------------   ------  ------  -----
Total               9385      45   9430

The same issue has been reported in the last two comments of https://bugzilla.redhat.com/show_bug.cgi?id=1010885.
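The arithmetic of the failure can be sanity-checked from the numbers above. A small sketch (all values copied from the report; the sysfs path in the comment is the standard per-node hugepage counter):

```shell
# Pages a 16 GiB guest needs with 2 MiB hugepages, versus what the
# destination reports free. Note HugePages_Free in /proc/meminfo is a
# host-wide total, not per NUMA node; per-node availability (see
# /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages)
# is what matters for a NUMA-pinned guest.
guest_kib=16777216
page_kib=2048
pages_needed=$((guest_kib / page_kib))
dest_free_total=15872
echo "needed=$pages_needed free_total=$dest_free_total"
```

Since the host-wide free count (15872) comfortably exceeds the 8192 pages the guest needs, the ENOMEM likely reflects where the pages must come from rather than total hugepage exhaustion, which the /proc/meminfo counter cannot show.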

0016774: [abrt] gnome-terminal: poll_for_event(): gnome-terminal-server killed by SIGABRT

Description of problem:
I was testing different colors for gnome-terminal. I had "black on light yellow" selected when I chose the built-in scheme Rxvt, and the crash happened.

Version-Release number of selected component:
gnome-terminal-3.28.2-2.el7

Truncated backtrace:
Thread no. 1 (10 frames)
#4 poll_for_event at xcb_io.c:260
#5 poll_for_response at xcb_io.c:278
#7 XEventsQueued at Pending.c:43
#8 _cairo_xlib_shm_info_cleanup at cairo-xlib-surface-shm.c:481
#9 _cairo_xlib_shm_info_create at cairo-xlib-surface-shm.c:640
#10 _cairo_xlib_shm_surface_create at cairo-xlib-surface-shm.c:830
#11 _cairo_xlib_surface_create_shm at cairo-xlib-surface-shm.c:1156
#12 _cairo_xlib_surface_create_similar_shm at cairo-xlib-surface-shm.c:1181
#13 INT_cairo_surface_create_similar_image at cairo-surface.c:595
#14 gdk_cairo_set_source_pixbuf at gdkcairo.c:339

0016775: pulseaudio is started as user pcp by systemd and always fails as the home directory /var/lib/pcp appears inaccessible

systemd tries to start pulseaudio twice per hour, and it always fails.
From /var/log/messages:
Nov 24 03:20:03 in7 systemd[1]: Starting system activity accounting tool...
Nov 24 03:20:03 in7 systemd[1]: Started system activity accounting tool.
Nov 24 03:25:01 in7 systemd[1]: Started /run/user/992 mount wrapper.
Nov 24 03:25:01 in7 systemd[1]: Created slice User Slice of UID 992.
Nov 24 03:25:01 in7 systemd[1]: Starting User Manager for UID 992...
Nov 24 03:25:01 in7 systemd[1]: Started Session 472 of user pcp.
Nov 24 03:25:01 in7 systemd[30699]: Listening on Sound System.
Nov 24 03:25:01 in7 systemd[30699]: Reached target Paths.
Nov 24 03:25:01 in7 systemd[30699]: Listening on Multimedia System.
Nov 24 03:25:01 in7 systemd[30699]: Reached target Timers.
Nov 24 03:25:01 in7 systemd[30699]: Starting D-Bus User Message Bus Socket.
Nov 24 03:25:01 in7 systemd[30699]: Listening on D-Bus User Message Bus Socket.
Nov 24 03:25:01 in7 systemd[30699]: Reached target Sockets.
Nov 24 03:25:01 in7 systemd[30699]: Reached target Basic System.
Nov 24 03:25:01 in7 systemd[1]: Started User Manager for UID 992.
Nov 24 03:25:01 in7 systemd[30699]: Starting Sound Service...
Nov 24 03:25:01 in7 pulseaudio[30724]: E: [pulseaudio] core-util.c: Home directory not accessible: Permission denied
Nov 24 03:25:01 in7 systemd[30699]: pulseaudio.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 03:25:01 in7 systemd[30699]: pulseaudio.service: Failed with result 'exit-code'.
Nov 24 03:25:01 in7 systemd[30699]: Failed to start Sound Service.
Nov 24 03:25:01 in7 systemd[30699]: Reached target Default.
Nov 24 03:25:01 in7 systemd[30699]: Startup finished in 216ms.
Nov 24 03:25:01 in7 systemd[30699]: pulseaudio.service: Service RestartSec=100ms expired, scheduling restart.
Nov 24 03:25:01 in7 systemd[30699]: pulseaudio.service: Scheduled restart job, restart counter is at 1.
Nov 24 03:25:01 in7 systemd[30699]: Stopped Sound Service.
Nov 24 03:25:01 in7 systemd[30699]: Starting Sound Service...
Nov 24 03:25:02 in7 pulseaudio[31130]: E: [pulseaudio] core-util.c: Home directory not accessible: Permission denied
Nov 24 03:25:02 in7 systemd[30699]: pulseaudio.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 03:25:02 in7 systemd[30699]: pulseaudio.service: Failed with result 'exit-code'.
....
Some debugging shows that the inaccessible home directory is /var/lib/pcp, owned by root:root with mode 755.

0016776: Crash if ratelimit taken into use with unbound-control

Unbound 1.6.6 crashes if ratelimit is taken into use with unbound-control instead of in unbound.conf.

It is a known bug, corrected in unbound 1.7.3:
https://nlnetlabs.nl/pipermail/unbound-users/2018-June/010714.html
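Until upgrading to a fixed build (1.7.3 or later), a workaround consistent with the linked thread is to configure the limit in unbound.conf at startup rather than enabling it at runtime with unbound-control. A sketch (the 1000 queries/second value is illustrative only):

```
server:
    # Set ratelimit here instead of via unbound-control,
    # which triggers the crash in 1.6.6.
    ratelimit: 1000
```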

0016572: Unable to log in to CentOS 8 when using VNC.

I'm using a Windows 10 machine with VirtualBox 6.
I have successfully installed CentOS 8 and TigerVNC.
I have successfully connected to the CentOS 8 machine via TigerVNC (I am using RealVNC vncviewer).

Unfortunately, when trying to log in, the "next" button is constantly being "clicked". This prevents me from typing in the password, and instead I keep receiving an "Authentication error" message.

I suspect the VNC viewer is sending an "input" that causes the "next" button to be "clicked".

0016777: Fresh install, software update asks user to accept a CentOS 7 GPG Key

First software update, first time in GNOME after install. I deselected only the few kernel updates because I do not want the kernel updated.

The offending package is sssd-proxy. I tried unselecting the sssd packages, but it must be a dependency, and there are some 760 software updates required.

This is extremely suspicious because I've installed this exact version of 7 from the same media the same way a few times and never got this prompt. The key displayed does match a portion I see on the website: F4A80EB5, but the source is just a local folder, and I do not feel comfortable with this since I never received it before.

I will skip updates for now.

0016778: remove device failed: Device or resource busy

Example code is as follows:
[root@localhost ]# cat hello/hello.c
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void) {
    printk(KERN_ALERT "Hello, world\n");
    return 0;
}

static void __exit hello_exit(void) {
    printk(KERN_ALERT "Goodbye, cruel world\n");
}

MODULE_LICENSE("Dual BSD/GPL");

module_init(hello_init);
module_exit(hello_exit);

Recurrence operations:
[root@localhost ]# insmod hello.ko
[root@localhost ]# dmesg
[ 2479.030163] Hello, world
[root@localhost ]# rmmod hello
rmmod: ERROR: could not remove 'hello': Device or resource busy
rmmod: ERROR: could not remove module hello: Device or resource busy
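For anyone reproducing this, an out-of-tree module like hello.c is normally built with a small kbuild Makefile. A sketch (assumes the kernel-devel headers for the running kernel are installed; recipe lines must be tab-indented):

```make
# Build hello.ko out of tree against the running kernel's build dir.
obj-m := hello.o

KDIR := /lib/modules/$(shell uname -r)/build

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean
```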


0016553: High availability packages missing

All of the high availability (HA) packages are missing from the 8.0 release. The HA packages were provided in the 7.0 release. The Red Hat 8.0 HA packages which are missing are:
awscli
booth
booth-arbitrator
booth-core
booth-site
booth-test
clufter-bin
clufter-cli
clufter-common
clufter-lib-ccs
clufter-lib-general
clufter-lib-pcs
corosync
corosync-qdevice
corosync-qnetd
corosynclib-devel
fence-agents-aliyun
fence-agents-aws
fence-agents-azure-arm
fence-agents-gce
libknet1
libknet1-compress-bzip2-plugin
libknet1-compress-lz4-plugin
libknet1-compress-lzma-plugin
libknet1-compress-lzo2-plugin
libknet1-compress-plugins-all
libknet1-compress-zlib-plugin
libknet1-crypto-nss-plugin
libknet1-crypto-openssl-plugin
libknet1-crypto-plugins-all
libknet1-plugins-all
pacemaker
pacemaker-cli
pacemaker-cts
pacemaker-doc
pacemaker-libs-devel
pacemaker-nagios-plugins-metadata
pacemaker-remote
pcs
pcs-snmp
python3-azure-sdk
python3-boto3
python3-botocore
python3-clufter
python3-fasteners
python3-gflags
python3-google-api-client
python3-httplib2
python3-oauth2client
python3-s3transfer
python3-uritemplate
resource-agents
resource-agents-aliyun
resource-agents-gcp

The key packages which are needed are:
corosync
corosynclib-devel
pacemaker
pacemaker-cli
pacemaker-doc
pacemaker-libs-devel
pcs
resource-agents
and any dependencies.

