Channel: CentOS Bug Tracker - Issues

0016246: Kernel 2.6.32-754.17.1 kernel panic with KVM guest on startup

When starting up a KVM guest, the host system crashes and reboots.

0016232: CentOS 7 not supported for c5.24xlarge instance

This CentOS 7 image (https://aws.amazon.com/marketplace/pp/Centosorg-CentOS-7-x8664-with-Updates-HVM/B00O7WM7QW) does not support the c5.24xlarge instance type.

Can you please enable support for c5.24xlarge on this image in AWS?
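c5 instances require ENA (Elastic Network Adapter) networking, so one plausible cause (an assumption, not confirmed in the report) is that the marketplace AMI does not have the ENA flag set. A minimal way to check, using a hypothetical AMI ID as a placeholder:

# Verify whether the AMI advertises ENA support (required by c5 instances).
# ami-0123456789abcdef0 is a placeholder, not the real marketplace AMI ID.
aws ec2 describe-images --image-ids ami-0123456789abcdef0 --query 'Images[].EnaSupport'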

0016247: Access credentials for jenkins-container-linux.apps.ci.centos.org

No one who currently works on Container Linux has access to jenkins-container-linux.apps.ci.centos.org. How can we get login credentials (apparently to CentOS OpenShift)?

0015543: [abrt] gnome-abrt: __init__.py:18::ImportError: /lib64/libcairo.so.2: undefined symbol: FT_Get_Var_Design_Coordinates

Version-Release number of selected component:
gnome-abrt-0.3.4-8.el7

Truncated backtrace:
#1 <module> in /usr/lib64/python2.7/site-packages/gnome_abrt/wrappers/__init__.py:18
#2 <module> in /usr/bin/gnome-abrt:26
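The missing symbol FT_Get_Var_Design_Coordinates comes from FreeType, so this usually indicates that libcairo is resolving against an older libfreetype than it was built for (an assumption about the cause; the report does not confirm it). A quick way to check:

# Which FreeType library does libcairo actually load?
ldd /lib64/libcairo.so.2 | grep freetype
# Does that library export the symbol libcairo needs?
nm -D /lib64/libfreetype.so.6 | grep FT_Get_Var_Design_Coordinates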

0016248: python-gunicorn-18.0-2.el7.src.rpm is not available for download

The package python-gunicorn-18.0-2.el7.noarch.rpm is available from the extras directory of the CentOS download site (e.g., https://sjc.edge.kernel.org/centos/7.6.1810/extras/x86_64/Packages/python-gunicorn-18.0-2.el7.noarch.rpm), but the source package the binary says it is built from (python-gunicorn-18.0-2.el7.src.rpm) is not available for download (e.g., from http://vault.centos.org/7.6.1810/extras/Source/SPackages). As a result, the RPMFind page for python-gunicorn (http://rpmfind.net/linux/RPM/centos/extras/7.6.1810/x86_64/Packages/python-gunicorn-18.0-2.el7.noarch.html) has a broken link for the source RPM (http://vault.centos.org/7.6.1810/extras/Source/SPackages/python-gunicorn-18.0-2.el7.src.rpm).

0016249: Blivet picks wrong LV as root with multiple PVs and fails to put all in crypttab

With a custom partitioning scheme using multiple PVs, VGs, and LVs (LVM2) with LUKS (that is, at least three LUKS PVs), the first boot (and every subsequent boot) fails because kickstart/anaconda/blivet picks the wrong PV as the one holding the root LV and does not mount root.
Also, /etc/crypttab has only one entry instead of three, and that entry is not the one with the rootfs LV.

0016024: Add CodeReady containers project to CI

Hi, I am a member of the CodeReady Containers team, raising this bug to add the following project to CentOS CI:
https://github.com/code-ready/crc

The CodeReady Containers project is focused on bringing a minimal OpenShift 4.0 (or newer) cluster to your local laptop or desktop computer.
We need to run CI tests and require virtualization for them, as crc launches OpenShift 4.x inside a VM.
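For reference, a generic sanity check for whether a CI node can host the VM that crc needs (a sketch, not a CentOS CI-specific procedure):

# Check for hardware virtualization extensions (Intel VT-x or AMD-V):
grep -c -E 'vmx|svm' /proc/cpuinfo
# Verify the KVM device is present and accessible to the job:
ls -l /dev/kvm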

0016250: Kickstart fails to add >1 LUKS partition to crypttab

With a custom partitioning scheme using multiple PVs, VGs, and LVs (LVM2) with LUKS (that is, at least three LUKS PVs), /etc/crypttab has only one entry instead of three, and that entry is not the one with the rootfs LV.
As a result, secondary LUKS partitions are not started on boot by default.
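For reference, a correctly generated /etc/crypttab for three LUKS PVs would be expected to carry one entry per device, along these lines (the UUIDs below are hypothetical placeholders):

# /etc/crypttab -- one entry per LUKS PV; UUIDs are placeholders
luks-11111111-2222-3333-4444-555555555555 UUID=11111111-2222-3333-4444-555555555555 none
luks-66666666-7777-8888-9999-aaaaaaaaaaaa UUID=66666666-7777-8888-9999-aaaaaaaaaaaa none
luks-bbbbbbbb-cccc-dddd-eeee-ffffffffffff UUID=bbbbbbbb-cccc-dddd-eeee-ffffffffffff none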

0010875: [abrt] kdelibs: KCrash::defaultCrashHandler(int)(): kdeinit4 killed by SIGSEGV

Description of problem:
It appeared to be an overload (all CPUs at 100%).

Version-Release number of selected component:
kdelibs-4.14.8-5.el7_2

Truncated backtrace:
Thread no. 1 (10 frames)
#1 KCrash::defaultCrashHandler(int) at /lib64/libkdeui.so.5
#3 data at ../../src/corelib/tools/qscopedpointer.h:135
#4 qGetPtrHelper<QScopedPointer<QObjectData> > at ../../src/corelib/global/qglobal.h:2457
#5 d_func at thread/qthread.h:130
#6 QThread::exit at thread/qthread.cpp:567
#7 QThread::quit at thread/qthread.cpp:588
#8 ColorD::~ColorD at /usr/src/debug/colord-kde-0.3.0/colord-kded/ColorD.cpp:105
#10 Kded::~Kded() at /usr/lib64/libkdeinit4_kded4.so
#12 kdemain at /usr/lib64/libkdeinit4_kded4.so
#13 launch(int, char const*, char const*, char const*, int, char const*, bool, char const*, bool, char const*)

0016251: kernel: bnx2x NIC Link is Down

Hi all,

We are facing an issue on our machine (3.10.0-957.12.2): a NIC suddenly goes down and cannot be brought back up.
Here are the log messages:

kernel: bnx2x 0000:81:00.0 ens2f0: NIC Link is Down
NetworkManager[5749]: <info> [1562666817.7957] device (ens2f0): enslaved to non-master-type device ovs-system; ignoring

We use this machine as an OpenStack (Pike) compute node with Open vSwitch (openvswitch-2.9.0-3).

We would appreciate it if someone could help us.
Thanks,
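A few generic diagnostics that could narrow this down (a sketch, assuming the NIC is an Open vSwitch uplink, as the NetworkManager message suggests):

# Driver and firmware versions, and any bnx2x errors around the link drop:
ethtool -i ens2f0
dmesg | grep bnx2x
# Which OVS bridge is the NIC attached to?
ovs-vsctl show
# Attempt to force the link back up:
ip link set ens2f0 up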

0016252: [abrt] yum: misc.py:1163:decompress:OSError: [Errno 2] No such file or directory: '/var/cache/yum/x86_64/7/epel/gen/primary_d...

Version-Release number of selected component:
yum-3.4.3-161.el7.centos

Truncated backtrace:
misc.py:1163:decompress:OSError: [Errno 2] No such file or directory: '/var/cache/yum/x86_64/7/epel/gen/primary_db.sqlite'

Traceback (most recent call last):
  File "/usr/bin/yum", line 29, in <module>
    yummain.user_main(sys.argv[1:], exit_code=True)
  File "/usr/share/yum-cli/yummain.py", line 375, in user_main
    errcode = main(args)
  File "/usr/share/yum-cli/yummain.py", line 184, in main
    result, resultmsgs = base.doCommands()
  File "/usr/share/yum-cli/cli.py", line 585, in doCommands
    return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)
  File "/usr/share/yum-cli/yumcommands.py", line 446, in doCommand
    return base.installPkgs(extcmds, basecmd=basecmd)
  File "/usr/share/yum-cli/cli.py", line 1017, in installPkgs
    txmbrs = self.install(pattern=arg)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 4851, in install
    mypkgs = self.pkgSack.returnPackages(patterns=pats,
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1075, in <lambda>
    pkgSack = property(fget=lambda self: self._getSacks(),
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 778, in _getSacks
    self.repos.populateSack(which=repos)
  File "/usr/lib/python2.7/site-packages/yum/repos.py", line 386, in populateSack
    sack.populate(repo, mdtype, callback, cacheonly)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 242, in populate
    mydbtype)
  File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 287, in _check_uncompressed_db_gen
    cached=repo.cache)
  File "/usr/lib/python2.7/site-packages/yum/misc.py", line 1176, in repo_gen_decompress
    return decompress(filename, dest=dest, check_timestamps=True)
  File "/usr/lib/python2.7/site-packages/yum/misc.py", line 1163, in decompress
    os.utime(out, (fi.st_mtime, fi.st_mtime))
OSError: [Errno 2] No such file or directory: '/var/cache/yum/x86_64/7/epel/gen/primary_db.sqlite'

Local variables in innermost frame:
check_timestamps: True
dest: '/var/cache/yum/x86_64/7/epel/gen/primary_db.sqlite'
filename: '/var/cache/yum/x86_64/7/epel/a49726e8193938d2e6f3b21b7da6d00e0cf9b7e52b08be3466ce752403e0702a-primary.sqlite.bz2'
fo: None
fi: posix.stat_result(st_mode=33188, st_ino=10860717, st_dev=64768L, st_nlink=1, st_uid=0, st_gid=0, st_size=7076537, st_atime=1562636708, st_mtime=1562458571, st_ctime=1562636708)
out: '/var/cache/yum/x86_64/7/epel/gen/primary_db.sqlite'
fn_only: False
ztype: 'bz2'
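The traceback shows yum's decompressed-metadata cache file vanishing between the decompress and the os.utime() call, so a common workaround (not a root-cause fix) is to rebuild the cache:

# Drop all cached metadata, including the generated sqlite databases:
yum clean all
# Rebuild the cache on the next run:
yum makecache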

0016253: SELinux is preventing ps from 'sys_ptrace' accesses on the cap_userns labeled mozilla_plugin_t.

Description of problem:
SELinux is preventing ps from 'sys_ptrace' accesses on the cap_userns labeled mozilla_plugin_t.

***** Plugin mozplugger (99.1 confidence) suggests ************************

If you want to use the plugin package
Then you must turn off SELinux controls on the Firefox plugins.
Do
# setsebool -P unconfined_mozilla_plugin_transition 0

***** Plugin catchall (1.81 confidence) suggests **************************

If you believe that ps should be allowed sys_ptrace access on cap_userns labeled mozilla_plugin_t by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'ps' --raw | audit2allow -M my-ps
# semodule -i my-ps.pp

Additional Information:
Source Context        unconfined_u:unconfined_r:mozilla_plugin_t:s0-s0:c0.c1023
Target Context        unconfined_u:unconfined_r:mozilla_plugin_t:s0-s0:c0.c1023
Target Objects        Unknown [ cap_userns ]
Source                ps
Source Path           ps
Port                  <Unknown>
Host                  (removed)
Source RPM Packages
Target RPM Packages
Policy RPM            selinux-policy-3.13.1-229.el7_6.12.noarch
Selinux Enabled       True
Policy Type           targeted
Enforcing Mode        Enforcing
Host Name             (removed)
Platform              Linux (removed) 5.2.0-1.el7.elrepo.x86_64 #1 SMP Mon Jul 8 09:37:45 EDT 2019 x86_64 x86_64
Alert Count           6
First Seen            2019-07-09 11:06:31 -03
Last Seen             2019-07-09 11:06:31 -03
Local ID              ec3a4a36-f319-46a3-9d7b-2cf222e91276

Raw Audit Messages
type=AVC msg=audit(1562681191.381:237): avc: denied { sys_ptrace } for pid=5238 comm="ps" capability=19 scontext=unconfined_u:unconfined_r:mozilla_plugin_t:s0-s0:c0.c1023 tcontext=unconfined_u:unconfined_r:mozilla_plugin_t:s0-s0:c0.c1023 tclass=cap_userns permissive=0

Hash: ps,mozilla_plugin_t,mozilla_plugin_t,cap_userns,sys_ptrace

Version-Release number of selected component:
selinux-policy-3.13.1-229.el7_6.12.noarch

0016254: SELinux is preventing /usr/sbin/tcpdump from 'ioctl' accesses on the file /home/grid/app/grid/diagsnap/nodo1/evt_1_20190709-1...

Description of problem:
I was installing Oracle Grid when the system restarted without finishing the installation.
SELinux is preventing /usr/sbin/tcpdump from 'ioctl' accesses on the file /home/grid/app/grid/diagsnap/nodo1/evt_1_20190709-141720/tcpdump_enp0s8.trc.

***** Plugin catchall (100. confidence) suggests **************************

If you believe that tcpdump should be allowed ioctl access on the tcpdump_enp0s8.trc file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'tcpdump' --raw | audit2allow -M my-tcpdump
# semodule -i my-tcpdump.pp

Additional Information:
Source Context        system_u:system_r:netutils_t:s0
Target Context        system_u:object_r:user_home_t:s0
Target Objects        /home/grid/app/grid/diagsnap/nodo1/evt_1_20190709-141720/tcpdump_enp0s8.trc [ file ]
Source                tcpdump
Source Path           /usr/sbin/tcpdump
Port                  <Unknown>
Host                  (removed)
Source RPM Packages   tcpdump-4.9.2-3.el7.x86_64
Target RPM Packages
Policy RPM            selinux-policy-3.13.1-229.el7_6.12.noarch
Selinux Enabled       True
Policy Type           targeted
Enforcing Mode        Enforcing
Host Name             (removed)
Platform              Linux (removed) 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64
Alert Count           851
First Seen            2019-07-09 11:29:51 EDT
Last Seen             2019-07-09 14:17:21 EDT
Local ID              3f4aaf9d-4e0e-49eb-895c-eb1e41c33373

Raw Audit Messages
type=AVC msg=audit(1562696241.42:3547): avc: denied { ioctl } for pid=1605 comm="tcpdump" path="/home/grid/app/grid/diagsnap/nodo1/evt_1_20190709-141720/tcpdump_enp0s8.trc" dev="dm-2" ino=102543772 ioctlcmd=5401 scontext=system_u:system_r:netutils_t:s0 tcontext=system_u:object_r:user_home_t:s0 tclass=file permissive=0

type=SYSCALL msg=audit(1562696241.42:3547): arch=x86_64 syscall=ioctl success=no exit=EACCES a0=1 a1=5401 a2=7ffd87f87da0 a3=7ffd87f878a0 items=0 ppid=1594 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=tcpdump exe=/usr/sbin/tcpdump subj=system_u:system_r:netutils_t:s0 key=(null)

Hash: tcpdump,netutils_t,user_home_t,file,ioctl

Version-Release number of selected component:
selinux-policy-3.13.1-229.el7_6.12.noarch

0016256: SELinux is preventing /usr/libexec/snapd/snapd from 'getattr' accesses on the file /proc/<pid>/stat.

Description of problem:
Run snap install without sudo (and then enter the password interactively).
SELinux is preventing /usr/libexec/snapd/snapd from 'getattr' accesses on the file /proc/<pid>/stat.

***** Plugin catchall (100. confidence) suggests **************************

If you believe that snapd should be allowed getattr access on the stat file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'snapd' --raw | audit2allow -M my-snapd
# semodule -i my-snapd.pp

Additional Information:
Source Context        system_u:system_r:snappy_t:s0
Target Context        system_u:system_r:unconfined_service_t:s0
Target Objects        /proc/<pid>/stat [ file ]
Source                snapd
Source Path           /usr/libexec/snapd/snapd
Port                  <Unknown>
Host                  (removed)
Source RPM Packages   snapd-2.39.2-1.el7.x86_64
Target RPM Packages
Policy RPM            selinux-policy-3.13.1-229.el7_6.12.noarch
Selinux Enabled       True
Policy Type           targeted
Enforcing Mode        Enforcing
Host Name             (removed)
Platform              Linux (removed) 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64
Alert Count           1
First Seen            2019-07-10 11:38:37 HKT
Last Seen             2019-07-10 11:38:37 HKT
Local ID              84bc0ad4-bd7a-4be2-b217-c8d5555196a6

Raw Audit Messages
type=AVC msg=audit(1562729917.74:15794): avc: denied { getattr } for pid=13934 comm="snapd" path="/proc/21191/stat" dev="proc" ino=503589881 scontext=system_u:system_r:snappy_t:s0 tcontext=system_u:system_r:unconfined_service_t:s0 tclass=file permissive=1

type=SYSCALL msg=audit(1562729917.74:15794): arch=x86_64 syscall=fstat success=yes exit=0 a0=5 a1=c0005f8518 a2=0 a3=0 items=0 ppid=1 pid=13934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=snapd exe=/usr/libexec/snapd/snapd subj=system_u:system_r:snappy_t:s0 key=(null)

Hash: snapd,snappy_t,unconfined_service_t,file,getattr

Version-Release number of selected component:
selinux-policy-3.13.1-229.el7_6.12.noarch

0016257: pam_loginuid prevents login in unprivileged containers

pam_loginuid prevents login via ssh in unprivileged containers because it cannot write /proc/self/loginuid, even as namespaced root. Upstream was patched years ago (https://github.com/linux-pam/linux-pam/commit/2e62d5aea3f5ac267cfa54f0ea1f8c07ac85a95a#diff-8322fbd4507ee14b865167c196cb78d2) to work around the issue in user namespaces.
Could you please apply the patch?

Thanks.
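Until the patch is applied, a common local workaround is to disable the module for the affected service inside the container (the sshd service file here is an assumption; adjust to whichever PAM service is actually in use):

# /etc/pam.d/sshd inside the container -- comment out the loginuid module:
#session    required     pam_loginuid.so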

0016258: systemd-ci-slave01 is struggling under heavy load

Hello,

First of all, sorry for opening a duplicate of https://bugs.centos.org/view.php?id=16120, but I needed to bump the priority/severity of this issue, as it is currently snapping at our heels.

As mentioned in the previous issue, I wonder (and right now even hope) whether there are enough resources to bump the number of executors on systemd-ci-slave01 (again). We are working on some optimizations to make the jobs faster, but even with those, the 6 executors simply won't be enough (see the statistics at https://ci.centos.org/computer/systemd-ci-slave01/load-statistics?type=hour).

Thanks!

0016259: yum exits successfully when package name is misspelled

When yum install is passed a misspelled package name, it still exits with status 0 as long as another, valid package name is also passed.
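A minimal reproduction sketch (the package names are illustrative: httpd is assumed to exist in the repos, httppd is the typo):

# yum installs httpd, prints "No package httppd available.", yet exits 0:
yum install -y httpd httppd
echo $?   # prints 0, masking the failure in scripts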

0015078: [abrt] gnome-shell: settings_backend_path_changed(): gnome-shell killed by SIGSEGV

Version-Release number of selected component:
gnome-shell-3.26.2-5.el7

Truncated backtrace:
Thread no. 1 (3 frames)
#0 settings_backend_path_changed at gsettings.c:460
#1 g_settings_backend_invoke_closure at gsettingsbackend.c:267
#7 meta_run at core/main.c:650

0013611: [abrt] kernel: WARNING: at fs/locks.c:2304 locks_remove_flock+0x1ba/0x1d0()

Description of problem:
There seems to be a problem related to the use of TrueCrypt, Samba, and/or NTFS.
I can't pinpoint when the problem occurs; to me as a user it seems to occur randomly.

Situation: My laptop runs Fedora 25 on an LVM.
From that Fedora system, a second encrypted LVM partition is mounted containing two LVM volumes,
one for data and another for KVM virtual machines.
Directories on the data volume are shared (to the inside-VM world) using Samba.

I usually work on/with/from my virtual machines, some Windows, some CentOS, each providing a
contained environment for my work for different customers, using the Samba shares to store shared data.

One CentOS virtual machine I consider 'my' workstation. There I receive mail, write documents, etc.,
including using TrueCrypt for encrypted containers. I use TrueCrypt because I have found no reason
not to, and I have found no (other) encryption product I trust.

In this environment (my personal workstation) the error occurs, when TrueCrypt is serving several
TrueCrypt containers located on different media: a USB stick containing an NTFS (or FAT32) file system,
a Samba share, and/or a 'local' XFS filesystem.

What do I see:
At some point Nautilus can't open directories anymore. That is when this issue occurs, and that is how far
my knowledge goes.

Programs involved:
Host system: HP ZBook G3 i7, 32 GB, 1 TB SSD
Fedora Linux: 4.11.10-200.fc25.x86_64
samba.x86_64: 2:4.5.11.0.fc25
libvirt / KVM: 2.2.1-2.fc25
CentOS guest: (2 CPUs, 2 GB memory)
CentOS 7: 3.10.0-514.26.2.el7.x86_64
ntfs-3g.x86_64: 2:2017.3.23-1.el7
TrueCrypt: truecrypt-7.1a-x64

Version-Release number of selected component:
kernel

Truncated backtrace:
WARNING: at fs/locks.c:2304 locks_remove_flock+0x1ba/0x1d0()
leftover lock: dev=0:44 ino=11272594 type=0 flags=0x1 start=1073741826 end=1073742335
Modules linked in: vfat fat twofish_generic twofish_avx_x86_64 twofish_x86_64_3way twofish_x86_64 twofish_common drbg ansi_cprng serpent_avx2 serpent_avx_x86_64 serpent_sse2_x86_64 serpent_generic xts dm_crypt loop sd_mod crc_t10dif crct10dif_generic uas usb_storage fuse uinput xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 tun arc4 md4 nls_utf8 cifs dns_resolver ip6t_rpfilter ipt_REJECT nf_reject_ipv4 ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter
iosf_mbi crc32_pclmul snd_hda_codec_generic ghash_clmulni_intel ppdev snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device aesni_intel lrw gf128mul glue_helper ablk_helper cryptd snd_pcm sg snd_timer pcspkr snd virtio_balloon soundcore i2c_piix4 parport_pc parport nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c sr_mod cdrom ata_generic pata_acpi virtio_net virtio_console virtio_blk ata_piix qxl crct10dif_pclmul crct10dif_common drm_kms_helper syscopyarea crc32c_intel sysfillrect sysimgblt fb_sys_fops ttm serio_raw drm virtio_pci virtio_ring i2c_core virtio libata floppy dm_mirror dm_region_hash dm_log dm_mod
CPU: 0 PID: 4404 Comm: mozStorage #1 Not tainted 3.10.0-514.16.1.el7.x86_64 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.9.1-1.fc24 04/01/2014
ffff880046ff3d28 000000008971dac6 ffff880046ff3ce0 ffffffff81686ac3
ffff880046ff3d18 ffffffff81085cb0 ffff88004cf627a8 ffff880060b3e900
ffff88004cf62660 ffff88004cf626e8 ffff880078ee6b20 ffff880046ff3d80
Call Trace:
[<ffffffff81686ac3>] dump_stack+0x19/0x1b
[<ffffffff81085cb0>] warn_slowpath_common+0x70/0xb0
[<ffffffff81085d4c>] warn_slowpath_fmt+0x5c/0x80
[<ffffffff8125591a>] locks_remove_flock+0x1ba/0x1d0
[<ffffffff8120055d>] __fput+0xbd/0x260
[<ffffffff8120083e>] ____fput+0xe/0x10
[<ffffffff810ad1e7>] task_work_run+0xa7/0xe0
[<ffffffff8102ab22>] do_notify_resume+0x92/0xb0
[<ffffffff8169743d>] int_signal+0x12/0x17

0016260: Users in git.centos.org can't delete their own forks

I can't delete a fork I created from an existing project in git.centos.org or git.stg.centos.org. Examples:

https://git.stg.centos.org/fork/amoralej/rpms/openstack-aodh
https://git.centos.org/fork/amoralej/rpms/389-ds-base/

forked by my user "amoralej".

I think this is an important problem for users adopting the fork/pull-request workflow.