ansible-playbook [core 2.17.13]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-nSC
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.11 (main, Jun 4 2025, 00:00:00) [GCC 14.2.1 20250110 (Red Hat 14.2.1-8)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_swap.yml *******************************************************
1 plays in /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml

PLAY [Test management of swap] *************************************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:2
Tuesday 22 July 2025 08:33:54 -0400 (0:00:00.048) 0:00:00.050 **********
[WARNING]: Platform linux on host managed-node11 is using the discovered Python
interpreter at /usr/bin/python3.12, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
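The interpreter-discovery warning above can be avoided by pinning the interpreter explicitly. A minimal sketch, assuming a YAML-format inventory and that /usr/bin/python3.12 is the interpreter you want to keep on managed-node11 (ansible_python_interpreter is a standard Ansible variable; the surrounding inventory layout here is hypothetical):

all:
  hosts:
    managed-node11:
      # Pin the interpreter so discovery (and this warning) is skipped.
      ansible_python_interpreter: /usr/bin/python3.12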
ok: [managed-node11]

TASK [Include role to ensure packages are installed] ***************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:10
Tuesday 22 July 2025 08:33:57 -0400 (0:00:02.881) 0:00:02.931 **********
included: fedora.linux_system_roles.storage for managed-node11

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
Tuesday 22 July 2025 08:33:57 -0400 (0:00:00.237) 0:00:03.169 **********
included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node11

TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Tuesday 22 July 2025 08:33:58 -0400 (0:00:00.143) 0:00:03.313 **********
skipping: [managed-node11] => {
    "changed": false,
    "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
Tuesday 22 July 2025 08:33:58 -0400 (0:00:00.264) 0:00:03.578 **********
skipping: [managed-node11] => (item=RedHat.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node11] => (item=CentOS.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "CentOS.yml",
    "skip_reason": "Conditional result was False"
}
ok: [managed-node11] => (item=CentOS_10.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-fs",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "xfsprogs",
            "stratisd",
            "stratis-cli",
            "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}",
            "vdo"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_10.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_10.yml"
}

TASK [fedora.linux_system_roles.storage : Check if system is ostree] ***********
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
Tuesday 22 July 2025 08:33:58 -0400 (0:00:00.253) 0:00:03.831 **********
ok: [managed-node11] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}
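The CentOS_10.yml vars file loaded above defines blivet_package_list with an inline Jinja item, so one package name resolves per architecture at run time. A sketch of that pattern (an abridged, hypothetical excerpt, not the full vars file):

# vars/CentOS_10.yml (abridged sketch)
blivet_package_list:
  - python3-blivet
  - xfsprogs
  # Resolved per host: s390x gets the s390-specific libblockdev package.
  - "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}"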
TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30
Tuesday 22 July 2025 08:33:59 -0400 (0:00:01.052) 0:00:04.885 **********
ok: [managed-node11] => {
    "ansible_facts": {
        "__storage_is_ostree": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5
Tuesday 22 July 2025 08:33:59 -0400 (0:00:00.120) 0:00:05.005 **********
ok: [managed-node11] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9
Tuesday 22 July 2025 08:33:59 -0400 (0:00:00.047) 0:00:05.053 **********
ok: [managed-node11] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
Tuesday 22 July 2025 08:33:59 -0400 (0:00:00.072) 0:00:05.125 **********
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node11

TASK [fedora.linux_system_roles.storage : Make sure blivet is available] *******
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
Tuesday 22 July 2025 08:34:00 -0400 (0:00:00.205) 0:00:05.331 **********
ok: [managed-node11] => {
    "changed": false,
    "rc": 0,
    "results": []
}

MSG:

Nothing to do

TASK [fedora.linux_system_roles.storage : Show storage_pools] ******************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9
Tuesday 22 July 2025 08:34:01 -0400 (0:00:01.512) 0:00:06.843 **********
ok: [managed-node11] => {
    "storage_pools | d([])": []
}

TASK [fedora.linux_system_roles.storage : Show storage_volumes] ****************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14
Tuesday 22 July 2025 08:34:01 -0400 (0:00:00.153) 0:00:06.998 **********
ok: [managed-node11] => {
    "storage_volumes | d([])": []
}

TASK [fedora.linux_system_roles.storage : Get required packages] ***************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
Tuesday 22 July 2025 08:34:01 -0400 (0:00:00.174) 0:00:07.174 **********
[WARNING]: Module invocation had junk after the JSON data: sys:1:
DeprecationWarning: builtin type swigvarlink has no __module__ attribute
ok: [managed-node11] => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "leaves": [],
    "mounts": [],
    "packages": [],
    "pools": [],
    "volumes": []
}

TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32
Tuesday 22 July 2025 08:34:03 -0400 (0:00:01.706) 0:00:08.881 **********
included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node11
TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2
Tuesday 22 July 2025 08:34:03 -0400 (0:00:00.157) 0:00:09.039 **********
skipping: [managed-node11] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13
Tuesday 22 July 2025 08:34:03 -0400 (0:00:00.114) 0:00:09.153 **********
skipping: [managed-node11] => {
    "changed": false,
    "false_condition": "install_copr | d(false) | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19
Tuesday 22 July 2025 08:34:04 -0400 (0:00:00.127) 0:00:09.280 **********
skipping: [managed-node11] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38
Tuesday 22 July 2025 08:34:04 -0400 (0:00:00.096) 0:00:09.377 **********
ok: [managed-node11] => {
    "changed": false,
    "rc": 0,
    "results": []
}

MSG:

Nothing to do

TASK [fedora.linux_system_roles.storage : Get service facts] *******************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
Tuesday 22 July 2025 08:34:05 -0400 (0:00:01.100) 0:00:10.478 **********
ok: [managed-node11] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "apt-daily.service": { "name": "apt-daily.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service",
"source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fcoe.service": { "name": "fcoe.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "fips-crypto-policy-overlay.service": { "name": "fips-crypto-policy-overlay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "iscsi-shutdown.service": { "name": "iscsi-shutdown.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsi.service": { "name": "iscsi.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsid.service": { "name": "iscsid.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", 
"source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-activation-early.service": { "name": "lvm2-activation-early.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_multipath.service": { "name": "modprobe@dm_multipath.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "multipathd.service": { "name": "multipathd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": 
"systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "inactive", "status": "static" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "raid-check.service": { "name": "raid-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "rbdmap.service": { "name": "rbdmap.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", 
"state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-unix-local@.service": { "name": "sshd-unix-local@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd-vsock@.service": { "name": "sshd-vsock@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "stratis-fstab-setup-with-network@.service": { "name": "stratis-fstab-setup-with-network@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratis-fstab-setup@.service": { "name": "stratis-fstab-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratisd-min-postinitrd.service": { "name": "stratisd-min-postinitrd.service", "source": "systemd", "state": "inactive", "status": "static" }, "stratisd.service": { "name": "stratisd.service", "source": "systemd", "state": 
"stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { 
"name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": 
"systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", 
"state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-userdbd.service": { "name": "systemd-userdbd.service", "source": "systemd", "state": "running", "status": "indirect" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 Tuesday 22 July 2025 08:34:07 -0400 (0:00:02.439) 0:00:12.918 ********** ok: [managed-node11] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64 Tuesday 22 July 2025 08:34:07 -0400 (0:00:00.182) 0:00:13.100 
TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64
Tuesday 22 July 2025 08:34:07 -0400 (0:00:00.182) 0:00:13.100 **********
skipping: [managed-node11] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70
Tuesday 22 July 2025 08:34:07 -0400 (0:00:00.064) 0:00:13.165 **********
ok: [managed-node11] => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "leaves": [],
    "mounts": [],
    "packages": [],
    "pools": [],
    "volumes": []
}

TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85
Tuesday 22 July 2025 08:34:08 -0400 (0:00:00.795) 0:00:13.960 **********
skipping: [managed-node11] => {
    "changed": false,
    "false_condition": "storage_udevadm_trigger | d(false)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ******
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92
Tuesday 22 July 2025 08:34:08 -0400 (0:00:00.142) 0:00:14.102 **********
ok: [managed-node11] => {
    "changed": false,
    "stat": {
        "atime": 1753187123.4364877,
        "attr_flags": "",
        "attributes": [],
        "block_size": 4096,
        "blocks": 8,
        "charset": "us-ascii",
        "checksum": "016bd7ce6cb6b233647ba6b5c21ac99bb7146610",
        "ctime": 1750750281.8033595,
        "dev": 51714,
        "device_type": 0,
        "executable": false,
        "exists": true,
        "gid": 0,
        "gr_name": "root",
        "inode": 4194435,
        "isblk": false,
        "ischr": false,
        "isdir": false,
        "isfifo": false,
        "isgid": false,
        "islnk": false,
        "isreg": true,
        "issock": false,
        "isuid": false,
        "mimetype": "text/plain",
        "mode": "0644",
        "mtime": 1750750281.8033595,
        "nlink": 1,
        "path": "/etc/fstab",
        "pw_name": "root",
        "readable": true,
        "rgrp": true,
        "roth": true,
        "rusr": true,
        "size": 1344,
        "uid": 0,
        "version": "3162749339",
        "wgrp": false,
        "woth": false,
        "writeable": true,
        "wusr": true,
        "xgrp": false,
        "xoth": false,
        "xusr": false
    }
}

TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97
Tuesday 22 July 2025 08:34:09 -0400 (0:00:00.587) 0:00:14.689 **********
skipping: [managed-node11] => {
    "changed": false,
    "false_condition": "blivet_output is changed",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115
Tuesday 22 July 2025 08:34:09 -0400 (0:00:00.051) 0:00:14.741 **********
skipping: [managed-node11] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Show blivet_output] ******************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121
Tuesday 22 July 2025 08:34:09 -0400 (0:00:00.059) 0:00:14.800 **********
ok: [managed-node11] => {
    "blivet_output": {
        "actions": [],
        "changed": false,
        "crypts": [],
        "failed": false,
        "leaves": [],
        "mounts": [],
        "packages": [],
        "pools": [],
        "volumes": []
    }
}
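blivet_output above is empty across the board (no actions, mounts, crypts, or packages): with no storage_pools or storage_volumes defined, the role computes nothing to do, which is what the packages-only include at tests_swap.yml:10 relies on. A minimal sketch of that kind of invocation (assuming include_role as the mechanism; not the test's literal task):

- name: Include role to ensure packages are installed
  ansible.builtin.include_role:
    name: fedora.linux_system_roles.storage
  # No storage_pools/storage_volumes vars: the blivet run is a no-op.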
TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130
Tuesday 22 July 2025 08:34:09 -0400 (0:00:00.072) 0:00:14.872 **********
ok: [managed-node11] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134
Tuesday 22 July 2025 08:34:09 -0400 (0:00:00.080) 0:00:14.953 **********
ok: [managed-node11] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] **************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150
Tuesday 22 July 2025 08:34:09 -0400 (0:00:00.056) 0:00:15.010 **********
skipping: [managed-node11] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161
Tuesday 22 July 2025 08:34:09 -0400 (0:00:00.056) 0:00:15.066 **********
skipping: [managed-node11] => {
    "changed": false,
    "false_condition": "blivet_output['mounts'] | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set up new/current mounts] ***********
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166
Tuesday 22 July 2025 08:34:09 -0400 (0:00:00.054) 0:00:15.120 **********
skipping: [managed-node11] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177
Tuesday 22 July 2025 08:34:10 -0400 (0:00:00.102) 0:00:15.223 **********
skipping: [managed-node11] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189
Tuesday 22 July 2025 08:34:10 -0400 (0:00:00.123) 0:00:15.346 **********
skipping: [managed-node11] => {
    "changed": false,
    "false_condition": "blivet_output['mounts'] | length > 0",
    "skip_reason": "Conditional result was False"
}
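Both fstab-refresh tasks above are gated on blivet_output['mounts'] | length > 0, so they only fire when blivet actually changed mounts. A sketch of that gating (hypothetical task body; the condition string is taken verbatim from the skip output above):

- name: Tell systemd to refresh its view of /etc/fstab
  ansible.builtin.systemd:
    daemon_reload: true  # re-read generated mount units after fstab edits
  when: blivet_output['mounts'] | length > 0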
"readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "1830666913", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Manage /etc/crypttab to account for changes we just made] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202 Tuesday 22 July 2025 08:34:10 -0400 (0:00:00.455) 0:00:15.867 ********** skipping: [managed-node11] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Update facts] ************************ task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 Tuesday 22 July 2025 08:34:10 -0400 (0:00:00.022) 0:00:15.890 ********** ok: [managed-node11] TASK [Mark tasks to be skipped] ************************************************ task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:14 Tuesday 22 July 2025 08:34:11 -0400 (0:00:00.893) 0:00:16.783 ********** ok: [managed-node11] => { "ansible_facts": { "storage_skip_checks": [ "blivet_available", "packages_installed", "service_facts" ] }, "changed": false } TASK [Get unused disks for swap] *********************************************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:22 Tuesday 22 July 2025 08:34:11 -0400 (0:00:00.060) 0:00:16.843 ********** included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml for managed-node11 TASK [Ensure test packages] **************************************************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2 Tuesday 22 July 2025 08:34:11 -0400 (0:00:00.049) 0:00:16.893 ********** ok: [managed-node11] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [Find unused disks in the system] ***************************************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11 Tuesday 22 July 2025 08:34:12 -0400 (0:00:00.916) 0:00:17.810 ********** ok: [managed-node11] => { "changed": false, "disks": "Unable to find unused disk", "info": [ "Line: NAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "filename [xvda2] is a partition", "filename [xvda1] is a partition", "Disk [/dev/xvda] attrs [{'type': 'disk', 'size': '268435456000', 'fstype': '', 'ssize': '512'}] has partitions" ] } TASK [Debug why there are no unused disks] ************************************* task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20 Tuesday 22 July 2025 08:34:13 -0400 (0:00:00.838) 0:00:18.648 ********** ok: [managed-node11] => { "changed": false, "cmd": "set -x\nexec 1>&2\nlsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC\njournalctl -ex\n", "delta": 
"0:00:00.030716", "end": "2025-07-22 08:34:14.076142", "rc": 0, "start": "2025-07-22 08:34:14.045426" } STDERR: + exec + lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC NAME="/dev/xvda" TYPE="disk" SIZE="268435456000" FSTYPE="" LOG-SEC="512" NAME="/dev/xvda1" TYPE="part" SIZE="1048576" FSTYPE="" LOG-SEC="512" NAME="/dev/xvda2" TYPE="part" SIZE="268433341952" FSTYPE="xfs" LOG-SEC="512" + journalctl -ex Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'systemd-portabled' will not be installed, because command 'portablectl' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'systemd-portabled' will not be installed, because command '/usr/lib/systemd/systemd-portabled' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'connman' will not be installed, because command 'connmand' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'connman' will not be installed, because command 'connmanctl' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'connman' will not be installed, because command 'connmand-wait-online' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'btrfs' will not be installed, because command 'btrfs' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'dmraid' will not be installed, because command 'dmraid' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'mdraid' will not be installed, because command 'mdadm' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'multipath' will not be installed, because command 'multipath' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'crypt-gpg' will not be installed, because command 'gpg' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'pcsc' will not be installed, because command 'pcscd' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'cifs' will not be installed, because command 'mount.cifs' could not be found! 
Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'iscsi' will not be installed, because command 'iscsiadm' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'iscsi' will not be installed, because command 'iscsid' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'nvmf' will not be installed, because command 'nvme' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'squash-squashfs' will not be installed, because command 'mksquashfs' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'squash-squashfs' will not be installed, because command 'unsquashfs' could not be found! Jul 22 08:25:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Module 'busybox' will not be installed, because command 'busybox' could not be found! Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: bash *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: shell-interpreter *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: systemd *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: fips *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: fips-crypto-policies *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: systemd-ask-password *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: systemd-initrd *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: systemd-journald *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: systemd-modules-load *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: systemd-sysctl *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com chronyd[680]: Selected source 10.2.32.38 Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: systemd-sysusers *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: systemd-tmpfiles *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: systemd-udevd *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: rngd *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: i18n *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: drm *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: prefixdevname *** Jul 22 08:25:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: kernel-modules *** Jul 22 08:25:29 
ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: kernel-modules-extra *** Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: kernel-modules-extra: configuration source "/run/depmod.d" does not exist Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: kernel-modules-extra: configuration source "/lib/depmod.d" does not exist Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf" Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: fstab-sys *** Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: rootfs-block *** Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: squash-erofs *** Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: terminfo *** Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: udev-rules *** Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: dracut-systemd *** Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: usrmount *** Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: base *** Jul 22 08:25:29 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: fs-lib *** Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: kdumpbase *** Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: memstrack *** Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: microcode_ctl-fw_dir_override *** Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: microcode_ctl module: mangling fw_dir Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware" Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"... Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: microcode_ctl: intel: caveats check for kernel version "6.12.0-98.el10.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel" to fw_dir variable Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"... Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: microcode_ctl: configuration "intel-06-4f-01" is ignored Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"... 
Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: microcode_ctl: configuration "intel-06-8f-08" is ignored Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: microcode_ctl: final fw_dir: "/usr/share/microcode_ctl/ucode_with_caveats/intel /lib/firmware/updates /lib/firmware" Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: openssl *** Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: shutdown *** Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including module: squash-lib *** Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Including modules done *** Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Installing kernel module dependencies *** Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Installing kernel module dependencies done *** Jul 22 08:25:30 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Resolving executable dependencies *** Jul 22 08:25:31 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Resolving executable dependencies done *** Jul 22 08:25:31 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Hardlinking files *** Jul 22 08:25:31 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Mode: real Jul 22 08:25:31 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Method: sha256 Jul 22 08:25:31 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Files: 550 Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Linked: 26 files Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Compared: 0 xattrs Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Compared: 44 files Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Saved: 14.25 MiB Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Duration: 0.163434 seconds Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Hardlinking files done *** Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Generating early-microcode cpio image *** Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Constructing GenuineIntel.bin *** Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Constructing GenuineIntel.bin *** Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Store current command line parameters *** Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: Stored kernel commandline: Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: No dracut internal kernel commandline stored in the initramfs Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Squashing the files inside the initramfs *** Jul 22 08:25:32 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Jul 22 08:25:47 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Squashing the files inside the initramfs done *** Jul 22 08:25:47 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Creating image file '/boot/initramfs-6.12.0-98.el10.x86_64kdump.img' *** Jul 22 08:25:47 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com dracut[1553]: *** Creating initramfs image file '/boot/initramfs-6.12.0-98.el10.x86_64kdump.img' done *** Jul 22 08:25:48 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com kdumpctl[911]: kdump: kexec: loaded kdump kernel Jul 22 08:25:48 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com kdumpctl[911]: kdump: Starting kdump: [OK] Jul 22 08:25:48 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished kdump.service - Crash recovery kernel arming. ░░ Subject: A start job for unit kdump.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit kdump.service has finished successfully. ░░ ░░ The job identifier is 269. Jul 22 08:25:48 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: Startup finished in 1.241s (kernel) + 3.408s (initrd) + 37.957s (userspace) = 42.608s. ░░ Subject: System start-up is now complete ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ All system services necessary queued for starting at boot have been ░░ started. Note that this does not mean that the machine is now idle as services ░░ might still be busy with completing start-up. ░░ ░░ Kernel start-up required 1241739 microseconds. ░░ ░░ Initrd start-up required 3408673 microseconds. ░░ ░░ Userspace start-up required 37957947 microseconds. Jul 22 08:25:52 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: systemd-hostnamed.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-hostnamed.service has successfully entered the 'dead' state. Jul 22 08:26:34 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com chronyd[680]: Selected source 158.51.99.19 (2.centos.pool.ntp.org) Jul 22 08:27:12 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com sshd-session[4455]: Accepted publickey for root from 10.30.34.89 port 44014 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:27:12 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: New session 1 of user root. ░░ Subject: A new session 1 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 1 has been created for the user root. ░░ ░░ The leading process of the session is 4455. Jul 22 08:27:12 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: Created slice user-0.slice - User Slice of UID 0. ░░ Subject: A start job for unit user-0.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-0.slice has finished successfully. ░░ ░░ The job identifier is 509. Jul 22 08:27:12 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0... 
░░ Subject: A start job for unit user-runtime-dir@0.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-runtime-dir@0.service has begun execution. ░░ ░░ The job identifier is 508. Jul 22 08:27:12 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0. ░░ Subject: A start job for unit user-runtime-dir@0.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-runtime-dir@0.service has finished successfully. ░░ ░░ The job identifier is 508. Jul 22 08:27:12 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting user@0.service - User Manager for UID 0... ░░ Subject: A start job for unit user@0.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user@0.service has begun execution. ░░ ░░ The job identifier is 588. Jul 22 08:27:12 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: New session 2 of user root. ░░ Subject: A new session 2 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 2 has been created for the user root. ░░ ░░ The leading process of the session is 4460. Jul 22 08:27:12 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com (systemd)[4460]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Queued start job for default target default.target. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Created slice app.slice - User Application Slice. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 8. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 6. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 4. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Reached target paths.target - Paths. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 12. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Reached target timers.target - Timers. 
░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 3. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Starting dbus.socket - D-Bus User Message Bus Socket... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 11. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 7. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Listening on dbus.socket - D-Bus User Message Bus Socket. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 11. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 7. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Reached target sockets.target - Sockets. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 10. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Reached target basic.target - Basic System. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 2. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Reached target default.target - Main User Target. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 1. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Startup finished in 114ms. ░░ Subject: User manager start-up is now complete ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The user manager instance for user 0 has been started. All services queued ░░ for starting have been started. Note that other services might still be starting ░░ up or be started at any later time. ░░ ░░ Startup of the manager took 114782 microseconds. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started user@0.service - User Manager for UID 0. 
░░ Subject: A start job for unit user@0.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user@0.service has finished successfully. ░░ ░░ The job identifier is 588. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started session-1.scope - Session 1 of User root. ░░ Subject: A start job for unit session-1.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-1.scope has finished successfully. ░░ ░░ The job identifier is 669. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com sshd-session[4455]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com sshd-session[4471]: Received disconnect from 10.30.34.89 port 44014:11: disconnected by user Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com sshd-session[4471]: Disconnected from user root 10.30.34.89 port 44014 Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com sshd-session[4455]: pam_unix(sshd:session): session closed for user root Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: session-1.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-1.scope has successfully entered the 'dead' state. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: Session 1 logged out. Waiting for processes to exit. Jul 22 08:27:13 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: Removed session 1. ░░ Subject: Session 1 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 1 has been terminated. Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com sshd-session[4508]: Accepted publickey for root from 10.31.9.41 port 59830 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com sshd-session[4509]: Accepted publickey for root from 10.31.9.41 port 59838 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: New session 3 of user root. ░░ Subject: A new session 3 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 3 has been created for the user root. ░░ ░░ The leading process of the session is 4508. Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started session-3.scope - Session 3 of User root. ░░ Subject: A start job for unit session-3.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-3.scope has finished successfully. ░░ ░░ The job identifier is 751. Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: New session 4 of user root. ░░ Subject: A new session 4 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 4 has been created for the user root. ░░ ░░ The leading process of the session is 4509. 
Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com sshd-session[4508]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started session-4.scope - Session 4 of User root. ░░ Subject: A start job for unit session-4.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-4.scope has finished successfully. ░░ ░░ The job identifier is 833. Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com sshd-session[4509]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com sshd-session[4515]: Received disconnect from 10.31.9.41 port 59838:11: disconnected by user Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com sshd-session[4515]: Disconnected from user root 10.31.9.41 port 59838 Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com sshd-session[4509]: pam_unix(sshd:session): session closed for user root Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: session-4.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-4.scope has successfully entered the 'dead' state. Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: Session 4 logged out. Waiting for processes to exit. Jul 22 08:27:17 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: Removed session 4. ░░ Subject: Session 4 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 4 has been terminated. Jul 22 08:27:27 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com unknown: Running test '/Prepare-managed-node/tests/prep_managed_node' (serial number 1) with reboot count 0 and test restart count 0. (Be aware the test name is sanitized!) Jul 22 08:27:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting systemd-hostnamed.service - Hostname Service... ░░ Subject: A start job for unit systemd-hostnamed.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.service has begun execution. ░░ ░░ The job identifier is 915. Jul 22 08:27:28 ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started systemd-hostnamed.service - Hostname Service. ░░ Subject: A start job for unit systemd-hostnamed.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.service has finished successfully. ░░ ░░ The job identifier is 915. Jul 22 08:27:28 managed-node11 systemd-hostnamed[6377]: Hostname set to <managed-node11> (static) Jul 22 08:27:28 managed-node11 NetworkManager[719]: <info>  [1753187248.0712] hostname: static hostname changed from "ip-10-31-41-77.testing-farm.us-east-1.aws.redhat.com" to "managed-node11" Jul 22 08:27:28 managed-node11 systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service...
░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has begun execution. ░░ ░░ The job identifier is 993. Jul 22 08:27:28 managed-node11 systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. ░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully. ░░ ░░ The job identifier is 993. Jul 22 08:27:29 managed-node11 unknown: Leaving test '/Prepare-managed-node/tests/prep_managed_node' (serial number 1). (Be aware the test name is sanitized!) Jul 22 08:27:38 managed-node11 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Jul 22 08:27:58 managed-node11 systemd[1]: systemd-hostnamed.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-hostnamed.service has successfully entered the 'dead' state. Jul 22 08:28:00 managed-node11 sshd-session[7425]: Accepted publickey for root from 10.31.42.212 port 35274 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:00 managed-node11 systemd-logind[665]: New session 5 of user root. ░░ Subject: A new session 5 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 5 has been created for the user root. ░░ ░░ The leading process of the session is 7425. Jul 22 08:28:00 managed-node11 systemd[1]: Started session-5.scope - Session 5 of User root. ░░ Subject: A start job for unit session-5.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-5.scope has finished successfully. ░░ ░░ The job identifier is 1072. Jul 22 08:28:00 managed-node11 sshd-session[7425]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:00 managed-node11 sshd-session[7428]: Received disconnect from 10.31.42.212 port 35274:11: disconnected by user Jul 22 08:28:00 managed-node11 sshd-session[7428]: Disconnected from user root 10.31.42.212 port 35274 Jul 22 08:28:00 managed-node11 sshd-session[7425]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:00 managed-node11 systemd[1]: session-5.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-5.scope has successfully entered the 'dead' state. Jul 22 08:28:00 managed-node11 systemd-logind[665]: Session 5 logged out. Waiting for processes to exit. Jul 22 08:28:00 managed-node11 systemd-logind[665]: Removed session 5. ░░ Subject: Session 5 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 5 has been terminated. 
Jul 22 08:28:00 managed-node11 sshd-session[7453]: Accepted publickey for root from 10.31.42.212 port 35276 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:00 managed-node11 systemd-logind[665]: New session 6 of user root. ░░ Subject: A new session 6 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 6 has been created for the user root. ░░ ░░ The leading process of the session is 7453. Jul 22 08:28:00 managed-node11 systemd[1]: Started session-6.scope - Session 6 of User root. ░░ Subject: A start job for unit session-6.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-6.scope has finished successfully. ░░ ░░ The job identifier is 1154. Jul 22 08:28:00 managed-node11 sshd-session[7453]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:00 managed-node11 sshd-session[7456]: Received disconnect from 10.31.42.212 port 35276:11: disconnected by user Jul 22 08:28:00 managed-node11 sshd-session[7456]: Disconnected from user root 10.31.42.212 port 35276 Jul 22 08:28:00 managed-node11 sshd-session[7453]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:00 managed-node11 systemd[1]: session-6.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-6.scope has successfully entered the 'dead' state. Jul 22 08:28:00 managed-node11 systemd-logind[665]: Session 6 logged out. Waiting for processes to exit. Jul 22 08:28:00 managed-node11 systemd-logind[665]: Removed session 6. ░░ Subject: Session 6 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 6 has been terminated. Jul 22 08:28:11 managed-node11 sshd-session[7485]: Accepted publickey for root from 10.31.42.212 port 39112 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:11 managed-node11 systemd-logind[665]: New session 7 of user root. ░░ Subject: A new session 7 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 7 has been created for the user root. ░░ ░░ The leading process of the session is 7485. Jul 22 08:28:11 managed-node11 systemd[1]: Started session-7.scope - Session 7 of User root. ░░ Subject: A start job for unit session-7.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-7.scope has finished successfully. ░░ ░░ The job identifier is 1236. Jul 22 08:28:11 managed-node11 sshd-session[7485]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:11 managed-node11 sshd-session[7488]: Received disconnect from 10.31.42.212 port 39112:11: disconnected by user Jul 22 08:28:11 managed-node11 sshd-session[7488]: Disconnected from user root 10.31.42.212 port 39112 Jul 22 08:28:11 managed-node11 sshd-session[7485]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:11 managed-node11 systemd[1]: session-7.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-7.scope has successfully entered the 'dead' state. 
Jul 22 08:28:11 managed-node11 systemd-logind[665]: Session 7 logged out. Waiting for processes to exit. Jul 22 08:28:11 managed-node11 systemd-logind[665]: Removed session 7. ░░ Subject: Session 7 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 7 has been terminated. Jul 22 08:28:13 managed-node11 sshd-session[7514]: Accepted publickey for root from 10.31.42.212 port 47318 ssh2: ECDSA SHA256:WU7noZiQSxkQHAT4JsTwkz7sTow5ig7aO2gcgaqEwOg Jul 22 08:28:13 managed-node11 systemd-logind[665]: New session 8 of user root. ░░ Subject: A new session 8 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 8 has been created for the user root. ░░ ░░ The leading process of the session is 7514. Jul 22 08:28:13 managed-node11 systemd[1]: Started session-8.scope - Session 8 of User root. ░░ Subject: A start job for unit session-8.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-8.scope has finished successfully. ░░ ░░ The job identifier is 1318. Jul 22 08:28:13 managed-node11 sshd-session[7514]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:14 managed-node11 python3.12[7691]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:28:15 managed-node11 sudo[7869]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqphkrcuyaguidahdjgniosntbzpyrks ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187295.5700445-8243-203175579933760/AnsiballZ_setup.py' Jul 22 08:28:15 managed-node11 sudo[7869]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:15 managed-node11 python3.12[7872]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:28:16 managed-node11 sudo[7869]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:17 managed-node11 sudo[8050]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gquzpkacionmhirnyqaqkgeaojjfmede ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187296.717417-8263-113130969396679/AnsiballZ_stat.py' Jul 22 08:28:17 managed-node11 sudo[8050]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:17 managed-node11 python3.12[8053]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:28:17 managed-node11 sudo[8050]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:17 managed-node11 sudo[8202]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kplwknyshszsgttlkfzkbwukpvbkmpzp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187297.4787958-8324-56478776738903/AnsiballZ_dnf.py' Jul 22 08:28:17 managed-node11 sudo[8202]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:17 managed-node11 python3.12[8205]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] 
state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:26 managed-node11 groupadd[8310]: group added to /etc/group: name=clevis, GID=993 Jul 22 08:28:26 managed-node11 groupadd[8310]: group added to /etc/gshadow: name=clevis Jul 22 08:28:26 managed-node11 groupadd[8310]: new group: name=clevis, GID=993 Jul 22 08:28:26 managed-node11 useradd[8312]: new user: name=clevis, UID=993, GID=993, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none Jul 22 08:28:26 managed-node11 usermod[8316]: add 'clevis' to group 'tss' Jul 22 08:28:26 managed-node11 usermod[8316]: add 'clevis' to shadow group 'tss' Jul 22 08:28:26 managed-node11 dbus-broker-launch[640]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reolad request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reladed again. Jul 22 08:28:26 managed-node11 dbus-broker-launch[640]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reolad request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reladed again. Jul 22 08:28:26 managed-node11 groupadd[8323]: group added to /etc/group: name=polkitd, GID=114 Jul 22 08:28:26 managed-node11 groupadd[8323]: group added to /etc/gshadow: name=polkitd Jul 22 08:28:26 managed-node11 groupadd[8323]: new group: name=polkitd, GID=114 Jul 22 08:28:26 managed-node11 useradd[8326]: new user: name=polkitd, UID=114, GID=114, home=/, shell=/sbin/nologin, from=none Jul 22 08:28:26 managed-node11 dbus-broker-launch[640]: Noticed file-system modification, trigger reload. 
░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reolad request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reladed again. Jul 22 08:28:26 managed-node11 dbus-broker-launch[640]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reolad request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reladed again. Jul 22 08:28:27 managed-node11 systemd[1]: Listening on pcscd.socket - PC/SC Smart Card Daemon Activation Socket. ░░ Subject: A start job for unit pcscd.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit pcscd.socket has finished successfully. ░░ ░░ The job identifier is 1404. Jul 22 08:28:27 managed-node11 dbus-broker-launch[640]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reolad request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reladed again. Jul 22 08:28:27 managed-node11 dbus-broker-launch[640]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reolad request. 
Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reladed again. Jul 22 08:28:27 managed-node11 systemd[1]: Started run-p8356-i8656.service - [systemd-run] /usr/bin/systemctl start man-db-cache-update. ░░ Subject: A start job for unit run-p8356-i8656.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit run-p8356-i8656.service has finished successfully. ░░ ░░ The job identifier is 1485. Jul 22 08:28:27 managed-node11 systemctl[8357]: Warning: The unit file, source configuration file or drop-ins of man-db-cache-update.service changed on disk. Run 'systemctl daemon-reload' to reload units. Jul 22 08:28:28 managed-node11 systemd[1]: Starting man-db-cache-update.service... ░░ Subject: A start job for unit man-db-cache-update.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit man-db-cache-update.service has begun execution. ░░ ░░ The job identifier is 1563. Jul 22 08:28:28 managed-node11 systemd[1]: Reload requested from client PID 8360 ('systemctl') (unit session-8.scope)... Jul 22 08:28:28 managed-node11 systemd[1]: Reloading... Jul 22 08:28:28 managed-node11 systemd-rc-local-generator[8401]: /etc/rc.d/rc.local is not marked executable, skipping. Jul 22 08:28:28 managed-node11 systemd[1]: Reloading finished in 257 ms. Jul 22 08:28:28 managed-node11 systemd[1]: Queuing reload/restart jobs for marked units… Jul 22 08:28:28 managed-node11 systemd[1]: Reloading user@0.service - User Manager for UID 0... ░░ Subject: A reload job for unit user@0.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A reload job for unit user@0.service has begun execution. ░░ ░░ The job identifier is 1641. Jul 22 08:28:28 managed-node11 systemd[4460]: Received SIGRTMIN+25 from PID 1 (systemd). Jul 22 08:28:28 managed-node11 systemd[4460]: Reexecuting. Jul 22 08:28:28 managed-node11 systemd[1]: Reloaded user@0.service - User Manager for UID 0. ░░ Subject: A reload job for unit user@0.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A reload job for unit user@0.service has finished. ░░ ░░ The job identifier is 1641 and the job result is done. 
Jul 22 08:28:29 managed-node11 sudo[8202]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:29 managed-node11 sudo[9128]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmskbpzkanpvudizocxggceklfxyndqw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187309.3618844-9094-50997737976197/AnsiballZ_blivet.py' Jul 22 08:28:29 managed-node11 sudo[9128]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:30 managed-node11 python3.12[9131]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:28:30 managed-node11 sudo[9128]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:30 managed-node11 systemd[1]: man-db-cache-update.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit man-db-cache-update.service has successfully entered the 'dead' state. Jul 22 08:28:30 managed-node11 systemd[1]: Finished man-db-cache-update.service. ░░ Subject: A start job for unit man-db-cache-update.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit man-db-cache-update.service has finished successfully. ░░ ░░ The job identifier is 1563. Jul 22 08:28:30 managed-node11 systemd[1]: man-db-cache-update.service: Consumed 1.042s CPU time, 37.8M memory peak. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit man-db-cache-update.service completed and consumed the indicated resources. Jul 22 08:28:30 managed-node11 systemd[1]: run-p8356-i8656.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-p8356-i8656.service has successfully entered the 'dead' state. 
Jul 22 08:28:30 managed-node11 sudo[9292]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxykoivbhnrivopiewbwnwgsdsuqbdnw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187310.710718-9184-281027595087800/AnsiballZ_dnf.py' Jul 22 08:28:30 managed-node11 sudo[9292]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:31 managed-node11 python3.12[9295]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:31 managed-node11 sudo[9292]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:31 managed-node11 sudo[9451]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skokxklttwlamvllxtnbifzngkqxywrw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187311.5418386-9291-229912675459621/AnsiballZ_service_facts.py' Jul 22 08:28:31 managed-node11 sudo[9451]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:31 managed-node11 python3.12[9454]: ansible-service_facts Invoked Jul 22 08:28:33 managed-node11 sudo[9451]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:33 managed-node11 sudo[9721]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzlqqbrbgfwpqewoglyibddxtwlndbrb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187313.7664104-9482-38658910875209/AnsiballZ_blivet.py' Jul 22 08:28:33 managed-node11 sudo[9721]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:34 managed-node11 python3.12[9724]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:28:34 
managed-node11 sudo[9721]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:34 managed-node11 sudo[9881]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixtwueosffasoynpfcuecwxjaylkezaw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187314.4458163-9509-261769756835126/AnsiballZ_stat.py' Jul 22 08:28:34 managed-node11 sudo[9881]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:34 managed-node11 python3.12[9884]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:28:34 managed-node11 sudo[9881]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:35 managed-node11 sudo[10041]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqwopwccxtazcnjrdxtlrvspjjpkcdtk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187315.3918424-9590-87334815679117/AnsiballZ_stat.py' Jul 22 08:28:35 managed-node11 sudo[10041]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:35 managed-node11 python3.12[10044]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:28:35 managed-node11 sudo[10041]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:36 managed-node11 sudo[10201]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vstgzrbgatrzczenzhdsoweamdgeyncf ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187315.8680313-9618-238727526186775/AnsiballZ_setup.py' Jul 22 08:28:36 managed-node11 sudo[10201]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:36 managed-node11 python3.12[10204]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:28:36 managed-node11 sudo[10201]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:37 managed-node11 sudo[10388]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mntyryxacnkinlgwsbppjoehrqbykacj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187317.4278388-9723-64368431758194/AnsiballZ_dnf.py' Jul 22 08:28:37 managed-node11 sudo[10388]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:37 managed-node11 python3.12[10391]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:38 managed-node11 sudo[10388]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:39 managed-node11 sudo[10547]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouvrftaioaobrgbjwjfkksvglnbtvteq ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187318.5368328-9880-14392095670827/AnsiballZ_find_unused_disk.py' Jul 22 08:28:39 managed-node11 sudo[10547]: pam_unix(sudo:session): session opened for 
user root(uid=0) by root(uid=0) Jul 22 08:28:39 managed-node11 python3.12[10550]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 with_interface=scsi max_size=0 match_sector_size=False Jul 22 08:28:39 managed-node11 sudo[10547]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:40 managed-node11 sudo[10707]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htldxpxnimfvhdxdbgogleoqfcynanvp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187319.5998547-10019-165649863773329/AnsiballZ_command.py' Jul 22 08:28:40 managed-node11 sudo[10707]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:40 managed-node11 python3.12[10710]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:28:40 managed-node11 sudo[10707]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:41 managed-node11 sshd-session[10738]: Accepted publickey for root from 10.31.42.212 port 49806 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:41 managed-node11 systemd-logind[665]: New session 9 of user root. ░░ Subject: A new session 9 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 9 has been created for the user root. ░░ ░░ The leading process of the session is 10738. Jul 22 08:28:41 managed-node11 systemd[1]: Started session-9.scope - Session 9 of User root. ░░ Subject: A start job for unit session-9.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-9.scope has finished successfully. ░░ ░░ The job identifier is 1642. Jul 22 08:28:41 managed-node11 sshd-session[10738]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:41 managed-node11 sshd-session[10741]: Received disconnect from 10.31.42.212 port 49806:11: disconnected by user Jul 22 08:28:41 managed-node11 sshd-session[10741]: Disconnected from user root 10.31.42.212 port 49806 Jul 22 08:28:41 managed-node11 sshd-session[10738]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:41 managed-node11 systemd[1]: session-9.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-9.scope has successfully entered the 'dead' state. Jul 22 08:28:41 managed-node11 systemd-logind[665]: Session 9 logged out. Waiting for processes to exit. Jul 22 08:28:41 managed-node11 systemd-logind[665]: Removed session 9. ░░ Subject: Session 9 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 9 has been terminated. Jul 22 08:28:41 managed-node11 sshd-session[10768]: Accepted publickey for root from 10.31.42.212 port 39172 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:41 managed-node11 systemd-logind[665]: New session 10 of user root. 
░░ Subject: A new session 10 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 10 has been created for the user root. ░░ ░░ The leading process of the session is 10768. Jul 22 08:28:41 managed-node11 systemd[1]: Started session-10.scope - Session 10 of User root. ░░ Subject: A start job for unit session-10.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-10.scope has finished successfully. ░░ ░░ The job identifier is 1727. Jul 22 08:28:41 managed-node11 sshd-session[10768]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:41 managed-node11 sshd-session[10771]: Received disconnect from 10.31.42.212 port 39172:11: disconnected by user Jul 22 08:28:41 managed-node11 sshd-session[10771]: Disconnected from user root 10.31.42.212 port 39172 Jul 22 08:28:41 managed-node11 sshd-session[10768]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:41 managed-node11 systemd[1]: session-10.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-10.scope has successfully entered the 'dead' state. Jul 22 08:28:41 managed-node11 systemd-logind[665]: Session 10 logged out. Waiting for processes to exit. Jul 22 08:28:41 managed-node11 systemd-logind[665]: Removed session 10. ░░ Subject: Session 10 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 10 has been terminated. Jul 22 08:28:43 managed-node11 sudo[10978]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvkmfazyeazcsyjnysdalzyxmsvxlnuk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187322.5446758-10386-129106831454341/AnsiballZ_setup.py' Jul 22 08:28:43 managed-node11 sudo[10978]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:43 managed-node11 python3.12[10981]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:28:44 managed-node11 sudo[10978]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:45 managed-node11 sudo[11165]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgnpkolardjnztqoaillokkadolbiuki ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187324.7864256-10598-37815502030203/AnsiballZ_stat.py' Jul 22 08:28:45 managed-node11 sudo[11165]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:45 managed-node11 python3.12[11168]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:28:45 managed-node11 sudo[11165]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:47 managed-node11 sudo[11323]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfgxweilbtzemcjchpgouvoywboqahbb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187326.475815-10695-50243920140801/AnsiballZ_dnf.py' Jul 22 08:28:47 managed-node11 sudo[11323]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:47 managed-node11 python3.12[11326]: ansible-ansible.legacy.dnf Invoked with 
name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:47 managed-node11 sudo[11323]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:50 managed-node11 sudo[11482]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmrvvbhtjwpelafzozoqvxpxdjiurxuc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187328.8501117-10973-260033879039379/AnsiballZ_blivet.py' Jul 22 08:28:50 managed-node11 sudo[11482]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:50 managed-node11 python3.12[11485]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:28:50 managed-node11 sudo[11482]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:52 managed-node11 sudo[11642]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hajilmozdjmpmbfngtyzvwhoxwtoruvf ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187331.8120239-11174-148014455550223/AnsiballZ_dnf.py' Jul 22 08:28:52 managed-node11 sudo[11642]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:52 managed-node11 python3.12[11645]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True 
install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:52 managed-node11 sudo[11642]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:54 managed-node11 sudo[11802]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltblytkvwzuzgsqoudrfbgifqiuwlfdd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187332.8940525-11351-179660953631320/AnsiballZ_service_facts.py' Jul 22 08:28:54 managed-node11 sudo[11802]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:54 managed-node11 python3.12[11806]: ansible-service_facts Invoked Jul 22 08:28:55 managed-node11 sudo[11802]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:57 managed-node11 sudo[12073]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkeimclreiykpgoxkhhlmiqxixnguwja ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187337.136198-11791-236952784984438/AnsiballZ_blivet.py' Jul 22 08:28:57 managed-node11 sudo[12073]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:57 managed-node11 python3.12[12076]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:28:57 managed-node11 sudo[12073]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:58 managed-node11 sudo[12233]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwmxfuiudjlacejgbzpypkzgedxugqwj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187338.5697308-11999-184766414986369/AnsiballZ_stat.py' Jul 22 08:28:58 managed-node11 sudo[12233]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:59 managed-node11 python3.12[12236]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:28:59 managed-node11 sudo[12233]: pam_unix(sudo:session): session closed 
for user root Jul 22 08:29:02 managed-node11 sudo[12393]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvuwwsxdlhoepkycpbnuodcefqlutxfg ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187341.9415207-12497-252797546123129/AnsiballZ_stat.py' Jul 22 08:29:02 managed-node11 sudo[12393]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:02 managed-node11 python3.12[12397]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:02 managed-node11 sudo[12393]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:03 managed-node11 sudo[12554]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qckhqsnpuxowucjaluidcuiuajcekrfs ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187342.8520696-12567-216751998267915/AnsiballZ_setup.py' Jul 22 08:29:03 managed-node11 sudo[12554]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:03 managed-node11 python3.12[12557]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:03 managed-node11 sudo[12554]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:05 managed-node11 sudo[12741]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqfajaqumkvplsdxtjxiocwdkxuqrxvm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187344.947441-12728-127059794675719/AnsiballZ_dnf.py' Jul 22 08:29:05 managed-node11 sudo[12741]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:05 managed-node11 python3.12[12744]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:05 managed-node11 sudo[12741]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:07 managed-node11 sudo[12900]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhqiphzfmbchlhanfzmvceqauvosialw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187346.4069197-12878-85856603572381/AnsiballZ_find_unused_disk.py' Jul 22 08:29:07 managed-node11 sudo[12900]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:07 managed-node11 python3.12[12903]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=1 min_size=0 max_size=0 match_sector_size=False with_interface=None Jul 22 08:29:07 managed-node11 sudo[12900]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:09 managed-node11 sudo[13060]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrbqphetawawodgbhdzcbyczbppyvszr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187348.036083-13015-254467029762772/AnsiballZ_command.py' Jul 22 08:29:09 managed-node11 sudo[13060]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 
08:29:09 managed-node11 python3.12[13063]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:29:09 managed-node11 sudo[13060]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:11 managed-node11 sshd-session[13091]: Accepted publickey for root from 10.31.42.212 port 41248 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:11 managed-node11 systemd-logind[665]: New session 11 of user root. ░░ Subject: A new session 11 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 11 has been created for the user root. ░░ ░░ The leading process of the session is 13091. Jul 22 08:29:11 managed-node11 systemd[1]: Started session-11.scope - Session 11 of User root. ░░ Subject: A start job for unit session-11.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-11.scope has finished successfully. ░░ ░░ The job identifier is 1812. Jul 22 08:29:11 managed-node11 sshd-session[13091]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:11 managed-node11 sshd-session[13094]: Received disconnect from 10.31.42.212 port 41248:11: disconnected by user Jul 22 08:29:11 managed-node11 sshd-session[13094]: Disconnected from user root 10.31.42.212 port 41248 Jul 22 08:29:11 managed-node11 sshd-session[13091]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:11 managed-node11 systemd[1]: session-11.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-11.scope has successfully entered the 'dead' state. Jul 22 08:29:11 managed-node11 systemd-logind[665]: Session 11 logged out. Waiting for processes to exit. Jul 22 08:29:11 managed-node11 systemd-logind[665]: Removed session 11. ░░ Subject: Session 11 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 11 has been terminated. Jul 22 08:29:11 managed-node11 sshd-session[13121]: Accepted publickey for root from 10.31.42.212 port 59722 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:11 managed-node11 systemd-logind[665]: New session 12 of user root. ░░ Subject: A new session 12 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 12 has been created for the user root. ░░ ░░ The leading process of the session is 13121. Jul 22 08:29:11 managed-node11 systemd[1]: Started session-12.scope - Session 12 of User root. ░░ Subject: A start job for unit session-12.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-12.scope has finished successfully. ░░ ░░ The job identifier is 1897. 
Jul 22 08:29:11 managed-node11 sshd-session[13121]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:11 managed-node11 sshd-session[13124]: Received disconnect from 10.31.42.212 port 59722:11: disconnected by user Jul 22 08:29:11 managed-node11 sshd-session[13124]: Disconnected from user root 10.31.42.212 port 59722 Jul 22 08:29:11 managed-node11 sshd-session[13121]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:11 managed-node11 systemd[1]: session-12.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-12.scope has successfully entered the 'dead' state. Jul 22 08:29:11 managed-node11 systemd-logind[665]: Session 12 logged out. Waiting for processes to exit. Jul 22 08:29:11 managed-node11 systemd-logind[665]: Removed session 12. ░░ Subject: Session 12 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 12 has been terminated. Jul 22 08:29:16 managed-node11 python3.12[13331]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:17 managed-node11 sudo[13515]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jagdyyuxvuutxwzzmgnlkovdstsfzwkh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187357.5804358-14436-168757974848661/AnsiballZ_setup.py' Jul 22 08:29:17 managed-node11 sudo[13515]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:18 managed-node11 python3.12[13518]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:18 managed-node11 sudo[13515]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:20 managed-node11 sudo[13702]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryykicyfkpisddufrkkvwlvhysoivawn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187359.8190978-14716-51503231349296/AnsiballZ_stat.py' Jul 22 08:29:20 managed-node11 sudo[13702]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:20 managed-node11 python3.12[13705]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:20 managed-node11 sudo[13702]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:22 managed-node11 sudo[13860]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbgmyqemllikikddnrvunmveueyoleof ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187361.8731434-14975-234562901371225/AnsiballZ_dnf.py' Jul 22 08:29:22 managed-node11 sudo[13860]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:23 managed-node11 python3.12[13863]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True 
install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:23 managed-node11 sudo[13860]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:25 managed-node11 sudo[14019]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aytlgjxgplkqgvlorhfeiawelwwmdtbd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187364.3384075-15132-245475061842223/AnsiballZ_blivet.py' Jul 22 08:29:25 managed-node11 sudo[14019]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:26 managed-node11 python3.12[14022]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:29:26 managed-node11 sudo[14019]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:28 managed-node11 sudo[14179]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whzddguqbboxnsjqtpimfxbqalcaqvzf ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187367.8326838-15342-239648548040894/AnsiballZ_dnf.py' Jul 22 08:29:28 managed-node11 sudo[14179]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:28 managed-node11 python3.12[14182]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:28 managed-node11 sudo[14179]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:30 managed-node11 sudo[14338]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo 
BECOME-SUCCESS-wzpcqjmqxaikeajxnyjoyzpqahfmhesg ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187369.156237-15488-28988332105511/AnsiballZ_service_facts.py' Jul 22 08:29:30 managed-node11 sudo[14338]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:30 managed-node11 python3.12[14341]: ansible-service_facts Invoked Jul 22 08:29:32 managed-node11 sudo[14338]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:34 managed-node11 sudo[14608]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kppsafoapiagkjleafailusznvhjfmbt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187374.0703537-15915-27942758975889/AnsiballZ_blivet.py' Jul 22 08:29:34 managed-node11 sudo[14608]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:34 managed-node11 python3.12[14611]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:29:34 managed-node11 sudo[14608]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:35 managed-node11 sudo[14768]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upzmimpteiudfqetzdxwdydrurgsplxe ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187375.4859788-16107-263728783823316/AnsiballZ_stat.py' Jul 22 08:29:35 managed-node11 sudo[14768]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:35 managed-node11 python3.12[14771]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:35 managed-node11 sudo[14768]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:39 managed-node11 sudo[14928]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bolaobnmrmqeecfegbrbvckpoztiwlnt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187379.2757874-16463-154618393026563/AnsiballZ_stat.py' Jul 22 08:29:39 managed-node11 sudo[14928]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:39 managed-node11 python3.12[14931]: ansible-stat 
Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:39 managed-node11 sudo[14928]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:41 managed-node11 sudo[15088]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmypvhjhevigvorlynufbfuutiwpqyxn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187380.5502236-16542-49662705352362/AnsiballZ_setup.py' Jul 22 08:29:41 managed-node11 sudo[15088]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:41 managed-node11 python3.12[15091]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:41 managed-node11 sudo[15088]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:43 managed-node11 sudo[15275]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjrkdfpzgppgbdhojqewnumfrogrkkzb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187383.167615-16750-201593910584448/AnsiballZ_dnf.py' Jul 22 08:29:43 managed-node11 sudo[15275]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:43 managed-node11 python3.12[15278]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:44 managed-node11 sudo[15275]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:46 managed-node11 sudo[15434]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bawvbbwhqyangdlnhvdezxcwvvcsqgxd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187384.6841905-16853-273892456983047/AnsiballZ_find_unused_disk.py' Jul 22 08:29:46 managed-node11 sudo[15434]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:46 managed-node11 python3.12[15437]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=10g max_return=1 with_interface=scsi max_size=0 match_sector_size=False Jul 22 08:29:46 managed-node11 sudo[15434]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:48 managed-node11 sudo[15594]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmnstepjcozbfjafmdlahdfzovufwilc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187386.765737-17155-181358109090461/AnsiballZ_command.py' Jul 22 08:29:48 managed-node11 sudo[15594]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:48 managed-node11 python3.12[15597]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:29:48 managed-node11 sudo[15594]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:50 
managed-node11 sshd-session[15625]: Accepted publickey for root from 10.31.42.212 port 38762 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:50 managed-node11 systemd-logind[665]: New session 13 of user root. ░░ Subject: A new session 13 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 13 has been created for the user root. ░░ ░░ The leading process of the session is 15625. Jul 22 08:29:50 managed-node11 systemd[1]: Started session-13.scope - Session 13 of User root. ░░ Subject: A start job for unit session-13.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-13.scope has finished successfully. ░░ ░░ The job identifier is 1982. Jul 22 08:29:50 managed-node11 sshd-session[15625]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:50 managed-node11 sshd-session[15628]: Received disconnect from 10.31.42.212 port 38762:11: disconnected by user Jul 22 08:29:50 managed-node11 sshd-session[15628]: Disconnected from user root 10.31.42.212 port 38762 Jul 22 08:29:50 managed-node11 sshd-session[15625]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:50 managed-node11 systemd[1]: session-13.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-13.scope has successfully entered the 'dead' state. Jul 22 08:29:50 managed-node11 systemd-logind[665]: Session 13 logged out. Waiting for processes to exit. Jul 22 08:29:50 managed-node11 systemd-logind[665]: Removed session 13. ░░ Subject: Session 13 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 13 has been terminated. Jul 22 08:29:51 managed-node11 sshd-session[15655]: Accepted publickey for root from 10.31.42.212 port 38768 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:51 managed-node11 systemd-logind[665]: New session 14 of user root. ░░ Subject: A new session 14 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 14 has been created for the user root. ░░ ░░ The leading process of the session is 15655. Jul 22 08:29:51 managed-node11 systemd[1]: Started session-14.scope - Session 14 of User root. ░░ Subject: A start job for unit session-14.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-14.scope has finished successfully. ░░ ░░ The job identifier is 2067. Jul 22 08:29:51 managed-node11 sshd-session[15655]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:51 managed-node11 sshd-session[15658]: Received disconnect from 10.31.42.212 port 38768:11: disconnected by user Jul 22 08:29:51 managed-node11 sshd-session[15658]: Disconnected from user root 10.31.42.212 port 38768 Jul 22 08:29:51 managed-node11 sshd-session[15655]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:51 managed-node11 systemd[1]: session-14.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-14.scope has successfully entered the 'dead' state. 
Jul 22 08:29:51 managed-node11 systemd-logind[665]: Session 14 logged out. Waiting for processes to exit. Jul 22 08:29:51 managed-node11 systemd-logind[665]: Removed session 14. ░░ Subject: Session 14 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 14 has been terminated. Jul 22 08:30:02 managed-node11 sshd-session[15685]: Accepted publickey for root from 10.31.42.212 port 41074 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:02 managed-node11 systemd-logind[665]: New session 15 of user root. ░░ Subject: A new session 15 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 15 has been created for the user root. ░░ ░░ The leading process of the session is 15685. Jul 22 08:30:02 managed-node11 systemd[1]: Started session-15.scope - Session 15 of User root. ░░ Subject: A start job for unit session-15.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-15.scope has finished successfully. ░░ ░░ The job identifier is 2152. Jul 22 08:30:02 managed-node11 sshd-session[15685]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:02 managed-node11 sshd-session[15688]: Received disconnect from 10.31.42.212 port 41074:11: disconnected by user Jul 22 08:30:02 managed-node11 sshd-session[15688]: Disconnected from user root 10.31.42.212 port 41074 Jul 22 08:30:02 managed-node11 sshd-session[15685]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:02 managed-node11 systemd[1]: session-15.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-15.scope has successfully entered the 'dead' state. Jul 22 08:30:02 managed-node11 systemd-logind[665]: Session 15 logged out. Waiting for processes to exit. Jul 22 08:30:02 managed-node11 systemd-logind[665]: Removed session 15. ░░ Subject: Session 15 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 15 has been terminated. 
Jul 22 08:30:09 managed-node11 sudo[15895]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfmbrbxssakkhsfjlrpalenccxklgjsd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187407.4444113-19722-69034203488650/AnsiballZ_setup.py' Jul 22 08:30:09 managed-node11 sudo[15895]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:09 managed-node11 python3.12[15898]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:09 managed-node11 sudo[15895]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:12 managed-node11 sudo[16082]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iixeehvacjipwasbiegujdijrvrxuslt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187411.6108584-20014-121851128953172/AnsiballZ_stat.py' Jul 22 08:30:12 managed-node11 sudo[16082]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:13 managed-node11 python3.12[16085]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:13 managed-node11 sudo[16082]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:16 managed-node11 sudo[16240]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjkxnxsyckhwvhncgqyxqfdkkqjknfbu ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187414.412021-20250-188154280800429/AnsiballZ_dnf.py' Jul 22 08:30:16 managed-node11 sudo[16240]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:16 managed-node11 python3.12[16243]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:16 managed-node11 sudo[16240]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:18 managed-node11 sudo[16399]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-algphwlhqplxtamvrzpwpuetwgrgongi ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187417.5880642-20852-236278713388123/AnsiballZ_blivet.py' Jul 22 08:30:18 managed-node11 sudo[16399]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:19 managed-node11 python3.12[16402]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 
'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:30:19 managed-node11 sudo[16399]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:20 managed-node11 sudo[16559]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eygenhdwtnrqxsyhjvmkthbhfqcrngbh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187420.5629776-21229-228421449427793/AnsiballZ_dnf.py' Jul 22 08:30:20 managed-node11 sudo[16559]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:21 managed-node11 python3.12[16562]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:21 managed-node11 sudo[16559]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:22 managed-node11 sudo[16719]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqwvopjpuiarfoxopnfpleeubogkdauu ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187421.7267723-21447-158217461705801/AnsiballZ_service_facts.py' Jul 22 08:30:22 managed-node11 sudo[16719]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:22 managed-node11 python3.12[16723]: ansible-service_facts Invoked Jul 22 08:30:24 managed-node11 sudo[16719]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:25 managed-node11 sudo[16990]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gogzwejeujhoigavahnrolebdmwyqetn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187425.344455-22063-18943008848082/AnsiballZ_blivet.py' Jul 22 08:30:25 managed-node11 sudo[16990]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:25 managed-node11 python3.12[16993]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 
'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:30:25 managed-node11 sudo[16990]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:26 managed-node11 sudo[17150]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lckpffvzregvtgywfknnklkpouoqlzly ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187426.370569-22276-191813356283440/AnsiballZ_stat.py' Jul 22 08:30:26 managed-node11 sudo[17150]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:26 managed-node11 python3.12[17153]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:26 managed-node11 sudo[17150]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:28 managed-node11 sudo[17310]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhvccnwixcusiaaxfhmkepqcepxfwwpr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187428.6295786-22532-265542796822965/AnsiballZ_stat.py' Jul 22 08:30:28 managed-node11 sudo[17310]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:29 managed-node11 python3.12[17314]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:29 managed-node11 sudo[17310]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:29 managed-node11 sudo[17471]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wldtqunvpogjtzolouagajzuresakjix ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187429.3432553-22566-262469887134287/AnsiballZ_setup.py' Jul 22 08:30:29 managed-node11 sudo[17471]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:29 managed-node11 python3.12[17474]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:30 managed-node11 sudo[17471]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:30 managed-node11 sudo[17658]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jraktbtduiabnccldfqqlhvprzzsnydp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187430.692872-22675-218251589092567/AnsiballZ_blivet.py' Jul 22 08:30:30 managed-node11 sudo[17658]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) 
Jul 22 08:30:31 managed-node11 python3.12[17661]: ansible-fedora.linux_system_roles.blivet Invoked with packages_only=True pools=[{'name': 'foo', 'type': 'lvm', 'state': 'present', 'disks': [], 'encryption': False, 'volumes': [{'name': 'test1', 'type': 'lvm', 'fs_type': 'xfs', 'state': 'present', 'mount_point': '/foo', 'encryption': False, 'cache_devices': [], 'raid_disks': [], 'thin': False, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'size': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}], 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'encryption_clevis_pin': None, 'encryption_tang_url': None, 'encryption_tang_thumbprint': None, 'grow_to_fill': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_metadata_version': None, 'raid_chunk_size': None, 'shared': None}] volumes=[] pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True diskvolume_mkfs_option_map={} uses_kmod_kvdo=False disklabel_type=None use_partitions=None Jul 22 08:30:31 managed-node11 sudo[17658]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:32 managed-node11 sudo[17831]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwbgswxblgidinwbpqjmyntdntvqoafa ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187432.5117016-22840-37999987737283/AnsiballZ_blivet.py' Jul 22 08:30:32 managed-node11 sudo[17831]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:33 managed-node11 python3.12[17834]: ansible-fedora.linux_system_roles.blivet Invoked with packages_only=True pools=[] volumes=[{'name': 'foo', 'type': 'disk', 'state': 'present', 'disks': [], 'fs_type': 'ext4', 'encryption': False, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 
'fs_label': None, 'mount_options': None, 'mount_point': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'size': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_metadata_version': None, 'raid_chunk_size': None}] pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True diskvolume_mkfs_option_map={} uses_kmod_kvdo=False disklabel_type=None use_partitions=None Jul 22 08:30:33 managed-node11 sudo[17831]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:34 managed-node11 systemd[1]: Starting logrotate.service - Rotate log files... ░░ Subject: A start job for unit logrotate.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit logrotate.service has begun execution. ░░ ░░ The job identifier is 2273. Jul 22 08:30:34 managed-node11 sudo[18003]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbteodchklwxbddtcpzgyejfvvylzpsv ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187434.2464514-23054-125772708961600/AnsiballZ_blivet.py' Jul 22 08:30:34 managed-node11 sudo[18003]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:34 managed-node11 systemd[1]: logrotate.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit logrotate.service has successfully entered the 'dead' state. Jul 22 08:30:34 managed-node11 systemd[1]: Finished logrotate.service - Rotate log files. ░░ Subject: A start job for unit logrotate.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit logrotate.service has finished successfully. ░░ ░░ The job identifier is 2273. 
Jul 22 08:30:34 managed-node11 python3.12[18006]: ansible-fedora.linux_system_roles.blivet Invoked with packages_only=True pools=[] volumes=[{'name': 'foo', 'type': 'disk', 'state': 'present', 'disks': [], 'fs_type': 'swap', 'encryption': False, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'mount_options': None, 'mount_point': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'size': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_metadata_version': None, 'raid_chunk_size': None}] pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True diskvolume_mkfs_option_map={} uses_kmod_kvdo=False disklabel_type=None use_partitions=None
Jul 22 08:30:35 managed-node11 sudo[18003]: pam_unix(sudo:session): session closed for user root
Jul 22 08:30:36 managed-node11 sshd-session[18048]: Accepted publickey for root from 10.31.42.212 port 60534 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:30:36 managed-node11 systemd-logind[665]: New session 16 of user root.
░░ Subject: A new session 16 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 16 has been created for the user root.
░░
░░ The leading process of the session is 18048.
Jul 22 08:30:36 managed-node11 systemd[1]: Started session-16.scope - Session 16 of User root.
░░ Subject: A start job for unit session-16.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-16.scope has finished successfully.
░░
░░ The job identifier is 2374.
Jul 22 08:30:36 managed-node11 sshd-session[18048]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:30:36 managed-node11 sshd-session[18051]: Received disconnect from 10.31.42.212 port 60534:11: disconnected by user
Jul 22 08:30:36 managed-node11 sshd-session[18051]: Disconnected from user root 10.31.42.212 port 60534
Jul 22 08:30:36 managed-node11 sshd-session[18048]: pam_unix(sshd:session): session closed for user root
Jul 22 08:30:36 managed-node11 systemd[1]: session-16.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-16.scope has successfully entered the 'dead' state.
Jul 22 08:30:36 managed-node11 systemd-logind[665]: Session 16 logged out. Waiting for processes to exit.
Jul 22 08:30:36 managed-node11 systemd-logind[665]: Removed session 16.
░░ Subject: Session 16 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 16 has been terminated.
Jul 22 08:30:41 managed-node11 sshd-session[18078]: Accepted publickey for root from 10.31.42.212 port 37884 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:30:41 managed-node11 systemd-logind[665]: New session 17 of user root.
░░ Subject: A new session 17 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 17 has been created for the user root.
░░
░░ The leading process of the session is 18078.
Jul 22 08:30:41 managed-node11 systemd[1]: Started session-17.scope - Session 17 of User root.
░░ Subject: A start job for unit session-17.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-17.scope has finished successfully.
░░
░░ The job identifier is 2459.
Jul 22 08:30:41 managed-node11 sshd-session[18078]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:30:41 managed-node11 sshd-session[18081]: Received disconnect from 10.31.42.212 port 37884:11: disconnected by user
Jul 22 08:30:41 managed-node11 sshd-session[18081]: Disconnected from user root 10.31.42.212 port 37884
Jul 22 08:30:41 managed-node11 sshd-session[18078]: pam_unix(sshd:session): session closed for user root
Jul 22 08:30:41 managed-node11 systemd-logind[665]: Session 17 logged out. Waiting for processes to exit.
Jul 22 08:30:41 managed-node11 systemd[1]: session-17.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-17.scope has successfully entered the 'dead' state.
Jul 22 08:30:41 managed-node11 systemd-logind[665]: Removed session 17.
░░ Subject: Session 17 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 17 has been terminated.
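The 08:30:34 call above describes a whole-disk swap volume, the core case of this swap test. In role terms that corresponds to input like the following sketch (inferred from the logged parameters; the disks value is hypothetical):

    - name: Format an unused disk as swap (illustrative sketch)
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.storage
      vars:
        storage_volumes:
          - name: foo
            type: disk
            disks: "{{ unused_disks }}"  # hypothetical variable
            fs_type: swap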
Jul 22 08:30:47 managed-node11 sudo[18288]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxsbulawjjvecksxtsogeecugrhssfvk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187445.750348-24506-69352993076994/AnsiballZ_setup.py'
Jul 22 08:30:47 managed-node11 sudo[18288]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:30:48 managed-node11 python3.12[18291]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:30:48 managed-node11 sudo[18288]: pam_unix(sudo:session): session closed for user root
Jul 22 08:30:49 managed-node11 sudo[18475]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trewanokjonjgewfudrlugfmqdfeiqbm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187449.5179396-24956-223542542376922/AnsiballZ_stat.py'
Jul 22 08:30:49 managed-node11 sudo[18475]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:30:50 managed-node11 python3.12[18478]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:30:50 managed-node11 sudo[18475]: pam_unix(sudo:session): session closed for user root
Jul 22 08:30:51 managed-node11 sudo[18633]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kykflcsxklaeujohjcimrtnbjifwxhpx ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187451.1310103-25240-111373992076186/AnsiballZ_dnf.py'
Jul 22 08:30:51 managed-node11 sudo[18633]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:30:51 managed-node11 python3.12[18636]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:30:52 managed-node11 sudo[18633]: pam_unix(sudo:session): session closed for user root
Jul 22 08:30:53 managed-node11 sudo[18792]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyhmwaklrudgyjritvrbeiametyjqjip ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187453.1313636-25375-38722408926672/AnsiballZ_blivet.py'
Jul 22 08:30:53 managed-node11 sudo[18792]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:30:54 managed-node11 python3.12[18795]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={}
Jul 22 08:30:54 managed-node11 sudo[18792]: pam_unix(sudo:session): session closed for user root
Jul 22 08:30:55 managed-node11 sudo[18952]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgauzzvvobmhbmjgddrhbrzqeqfudxvf ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187455.5260332-25626-9273947033126/AnsiballZ_dnf.py'
Jul 22 08:30:55 managed-node11 sudo[18952]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:30:55 managed-node11 python3.12[18955]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:30:56 managed-node11 sudo[18952]: pam_unix(sudo:session): session closed for user root
Jul 22 08:30:57 managed-node11 sudo[19111]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctdlkumzvcxuskljwwyzhgcuaicnphdq ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187456.599141-25884-207576762105786/AnsiballZ_service_facts.py'
Jul 22 08:30:57 managed-node11 sudo[19111]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:30:57 managed-node11 python3.12[19114]: ansible-service_facts Invoked
Jul 22 08:30:59 managed-node11 sudo[19111]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:00 managed-node11 sudo[19381]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukzuwtkokjjpcnbuthikhxezrowqmcgb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187459.9784455-26227-208516252418842/AnsiballZ_blivet.py'
Jul 22 08:31:00 managed-node11 sudo[19381]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:00 managed-node11 python3.12[19384]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={}
Jul 22 08:31:00 managed-node11 sudo[19381]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:01 managed-node11 sudo[19541]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkzvrpceuuuzvoxlflwvajkcdaunudog ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187461.0708213-26281-173957532946257/AnsiballZ_stat.py'
Jul 22 08:31:01 managed-node11 sudo[19541]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:01 managed-node11 python3.12[19544]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:31:01 managed-node11 sudo[19541]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:04 managed-node11 sudo[19701]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anysrvncxcvawnxeusgdmzxyufeftldh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187463.9202576-26583-107053051880674/AnsiballZ_stat.py'
Jul 22 08:31:04 managed-node11 sudo[19701]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:04 managed-node11 python3.12[19704]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:31:04 managed-node11 sudo[19701]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:04 managed-node11 sudo[19861]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eusrnvdsdywexettqlefbdkwxlqvazag ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187464.7465565-26799-101566967822965/AnsiballZ_setup.py'
Jul 22 08:31:04 managed-node11 sudo[19861]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:05 managed-node11 python3.12[19864]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:31:05 managed-node11 sudo[19861]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:06 managed-node11 sudo[20048]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urqowcuhdxnhmsabufcszncdejxrikbd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187466.6023877-27035-55053584728505/AnsiballZ_dnf.py'
Jul 22 08:31:06 managed-node11 sudo[20048]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:07 managed-node11 python3.12[20051]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:31:07 managed-node11 sudo[20048]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:08 managed-node11 sudo[20207]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjhmckepopsbjyabssmranrloqvesyfw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187467.8084893-27133-180811826302002/AnsiballZ_find_unused_disk.py'
Jul 22 08:31:08 managed-node11 sudo[20207]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:08 managed-node11 python3.12[20210]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=2 min_size=0 max_size=0 match_sector_size=False with_interface=None
Jul 22 08:31:08 managed-node11 sudo[20207]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:10 managed-node11 sudo[20367]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eugmwiiccrpdchscavzshziafwinrdmt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187469.0390491-27236-118377020123742/AnsiballZ_command.py'
Jul 22 08:31:10 managed-node11 sudo[20367]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:10 managed-node11 python3.12[20370]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 22 08:31:10 managed-node11 sudo[20367]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:11 managed-node11 sshd-session[20398]: Accepted publickey for root from 10.31.42.212 port 60026 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:31:11 managed-node11 systemd-logind[665]: New session 18 of user root.
░░ Subject: A new session 18 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 18 has been created for the user root.
░░
░░ The leading process of the session is 20398.
Jul 22 08:31:11 managed-node11 systemd[1]: Started session-18.scope - Session 18 of User root.
░░ Subject: A start job for unit session-18.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-18.scope has finished successfully.
░░
░░ The job identifier is 2544.
Jul 22 08:31:11 managed-node11 sshd-session[20398]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:12 managed-node11 sshd-session[20401]: Received disconnect from 10.31.42.212 port 60026:11: disconnected by user
Jul 22 08:31:12 managed-node11 sshd-session[20401]: Disconnected from user root 10.31.42.212 port 60026
Jul 22 08:31:12 managed-node11 sshd-session[20398]: pam_unix(sshd:session): session closed for user root
Jul 22 08:31:12 managed-node11 systemd[1]: session-18.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-18.scope has successfully entered the 'dead' state.
Jul 22 08:31:12 managed-node11 systemd-logind[665]: Session 18 logged out. Waiting for processes to exit.
Jul 22 08:31:12 managed-node11 systemd-logind[665]: Removed session 18.
░░ Subject: Session 18 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 18 has been terminated.
Jul 22 08:31:12 managed-node11 sshd-session[20428]: Accepted publickey for root from 10.31.42.212 port 60030 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:31:12 managed-node11 systemd-logind[665]: New session 19 of user root.
░░ Subject: A new session 19 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 19 has been created for the user root.
░░
░░ The leading process of the session is 20428.
Jul 22 08:31:12 managed-node11 systemd[1]: Started session-19.scope - Session 19 of User root.
░░ Subject: A start job for unit session-19.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-19.scope has finished successfully.
░░
░░ The job identifier is 2629.
Jul 22 08:31:12 managed-node11 sshd-session[20428]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:12 managed-node11 sshd-session[20431]: Received disconnect from 10.31.42.212 port 60030:11: disconnected by user
Jul 22 08:31:12 managed-node11 sshd-session[20431]: Disconnected from user root 10.31.42.212 port 60030
Jul 22 08:31:12 managed-node11 sshd-session[20428]: pam_unix(sshd:session): session closed for user root
Jul 22 08:31:12 managed-node11 systemd[1]: session-19.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-19.scope has successfully entered the 'dead' state.
Jul 22 08:31:12 managed-node11 systemd-logind[665]: Session 19 logged out. Waiting for processes to exit.
Jul 22 08:31:12 managed-node11 systemd-logind[665]: Removed session 19.
░░ Subject: Session 19 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 19 has been terminated.
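The ansible.legacy.command call at 08:31:10 above is the suite's failure-diagnostics step; its _raw_params flatten to a small shell snippet. Reconstructed as a task, it would read roughly as follows (the task name is hypothetical; the commands are exactly as logged):

    - name: Dump block-device state and the journal for debugging (illustrative sketch)
      ansible.builtin.shell: |
        set -x
        exec 1>&2
        lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
        journalctl -ex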
Jul 22 08:31:17 managed-node11 python3.12[20638]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:31:18 managed-node11 sudo[20822]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezeorwmxerzdzxqkcvvvwhtjdugxanvv ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187478.6055815-28430-142553845445867/AnsiballZ_setup.py'
Jul 22 08:31:18 managed-node11 sudo[20822]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:19 managed-node11 python3.12[20825]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:31:19 managed-node11 sudo[20822]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:21 managed-node11 sudo[21009]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqhhhwpcyqyzpncjlvinbntlvgcyepjt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187480.7839062-28801-105511059374569/AnsiballZ_stat.py'
Jul 22 08:31:21 managed-node11 sudo[21009]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:21 managed-node11 python3.12[21013]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:31:21 managed-node11 sudo[21009]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:22 managed-node11 sudo[21168]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfevhkfnoqrpgyhxornmzmgunoxjxdeh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187481.9269423-28849-58274195636368/AnsiballZ_dnf.py'
Jul 22 08:31:22 managed-node11 sudo[21168]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:22 managed-node11 python3.12[21172]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:31:23 managed-node11 sudo[21168]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:24 managed-node11 sudo[21328]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbntjhxhojsdictgmdpgvvjehlwdsiru ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187483.5587192-28972-215068592626348/AnsiballZ_blivet.py'
Jul 22 08:31:24 managed-node11 sudo[21328]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:24 managed-node11 python3.12[21331]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={}
Jul 22 08:31:24 managed-node11 sudo[21328]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:26 managed-node11 sudo[21488]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbfzbqdtfyttlijbzbbxjhpezmebitcq ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187486.206062-29129-201807260876845/AnsiballZ_dnf.py'
Jul 22 08:31:26 managed-node11 sudo[21488]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:26 managed-node11 python3.12[21491]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:31:27 managed-node11 sudo[21488]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:28 managed-node11 sudo[21647]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euaxazmgkonukpqpcaomkbtdotqqsrhg ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187487.3637264-29306-9266384183747/AnsiballZ_service_facts.py'
Jul 22 08:31:28 managed-node11 sudo[21647]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:28 managed-node11 python3.12[21650]: ansible-service_facts Invoked
Jul 22 08:31:30 managed-node11 sudo[21647]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:31 managed-node11 sudo[21918]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzkjpltwvnxjjhkqwagkuksvhadbwqsm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187491.1523442-29784-134244222615945/AnsiballZ_blivet.py'
Jul 22 08:31:31 managed-node11 sudo[21918]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:31 managed-node11 python3.12[21921]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={}
Jul 22 08:31:31 managed-node11 sudo[21918]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:32 managed-node11 sudo[22078]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrvswdwhmrdgxhiksifwsncdktzcyodv ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187492.2230444-29906-239079028440130/AnsiballZ_stat.py'
Jul 22 08:31:32 managed-node11 sudo[22078]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:32 managed-node11 python3.12[22081]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:31:32 managed-node11 sudo[22078]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:34 managed-node11 sudo[22238]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ludrqgwlytsabzdnuhxhlubueqzfgxni ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187494.408483-30141-183822056757752/AnsiballZ_stat.py'
Jul 22 08:31:34 managed-node11 sudo[22238]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:34 managed-node11 python3.12[22241]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:31:34 managed-node11 sudo[22238]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:35 managed-node11 sudo[22398]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlsmdtqkrsnhpszmoexrhmfgscdzikxk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187495.2774103-30208-51845970187854/AnsiballZ_setup.py'
Jul 22 08:31:35 managed-node11 sudo[22398]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:35 managed-node11 python3.12[22401]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:31:36 managed-node11 sudo[22398]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:37 managed-node11 sudo[22585]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eboebsyabnrdarpjzvxrgdkwwjcclowz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187497.0902283-30516-215605299868355/AnsiballZ_dnf.py'
Jul 22 08:31:37 managed-node11 sudo[22585]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:37 managed-node11 python3.12[22588]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:31:37 managed-node11 sudo[22585]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:39 managed-node11 sudo[22744]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilrjtjpobfqgzkpecamocwpkggeopkap ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187498.2750087-30726-18310403886045/AnsiballZ_find_unused_disk.py'
Jul 22 08:31:39 managed-node11 sudo[22744]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:39 managed-node11 python3.12[22747]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 with_interface=scsi max_size=0 match_sector_size=False
Jul 22 08:31:39 managed-node11 sudo[22744]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:40 managed-node11 sudo[22904]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaagjaiyrtbhhcaijrtamiqwtgdvszmz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187499.4860919-30808-214607499604815/AnsiballZ_command.py'
Jul 22 08:31:40 managed-node11 sudo[22904]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:40 managed-node11 python3.12[22907]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 22 08:31:40 managed-node11 sudo[22904]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:42 managed-node11 sshd-session[22935]: Accepted publickey for root from 10.31.42.212 port 57570 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:31:42 managed-node11 systemd-logind[665]: New session 20 of user root.
░░ Subject: A new session 20 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 20 has been created for the user root.
░░
░░ The leading process of the session is 22935.
Jul 22 08:31:42 managed-node11 systemd[1]: Started session-20.scope - Session 20 of User root.
░░ Subject: A start job for unit session-20.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-20.scope has finished successfully.
░░
░░ The job identifier is 2714.
Jul 22 08:31:42 managed-node11 sshd-session[22935]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:42 managed-node11 sshd-session[22938]: Received disconnect from 10.31.42.212 port 57570:11: disconnected by user
Jul 22 08:31:42 managed-node11 sshd-session[22938]: Disconnected from user root 10.31.42.212 port 57570
Jul 22 08:31:42 managed-node11 sshd-session[22935]: pam_unix(sshd:session): session closed for user root
Jul 22 08:31:42 managed-node11 systemd[1]: session-20.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-20.scope has successfully entered the 'dead' state.
Jul 22 08:31:42 managed-node11 systemd-logind[665]: Session 20 logged out. Waiting for processes to exit.
Jul 22 08:31:42 managed-node11 systemd-logind[665]: Removed session 20.
░░ Subject: Session 20 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 20 has been terminated.
Jul 22 08:31:42 managed-node11 sshd-session[22965]: Accepted publickey for root from 10.31.42.212 port 57572 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:31:42 managed-node11 systemd-logind[665]: New session 21 of user root.
░░ Subject: A new session 21 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 21 has been created for the user root.
░░
░░ The leading process of the session is 22965.
Jul 22 08:31:42 managed-node11 systemd[1]: Started session-21.scope - Session 21 of User root.
░░ Subject: A start job for unit session-21.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-21.scope has finished successfully.
░░
░░ The job identifier is 2799.
Jul 22 08:31:42 managed-node11 sshd-session[22965]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:42 managed-node11 sshd-session[22968]: Received disconnect from 10.31.42.212 port 57572:11: disconnected by user
Jul 22 08:31:42 managed-node11 sshd-session[22968]: Disconnected from user root 10.31.42.212 port 57572
Jul 22 08:31:42 managed-node11 sshd-session[22965]: pam_unix(sshd:session): session closed for user root
Jul 22 08:31:42 managed-node11 systemd-logind[665]: Session 21 logged out. Waiting for processes to exit.
Jul 22 08:31:42 managed-node11 systemd[1]: session-21.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-21.scope has successfully entered the 'dead' state.
Jul 22 08:31:42 managed-node11 systemd-logind[665]: Removed session 21.
░░ Subject: Session 21 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 21 has been terminated.
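The find_unused_disk call at 08:31:39 above picks the disk the test will format: at least 5g, one result, SCSI interface. Expressed as a task, the logged parameters would read like this sketch (module FQCN and parameters taken from the log; task name and register variable are hypothetical):

    - name: Find one unused SCSI disk of at least 5g (illustrative sketch)
      fedora.linux_system_roles.find_unused_disk:
        min_size: 5g
        max_return: 1
        with_interface: scsi
      register: unused_disks_return  # hypothetical name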
Jul 22 08:31:47 managed-node11 sudo[23175]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgciwwuvywxiynmqfxctibrkprqtkxhd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187505.3254263-31550-212423962853148/AnsiballZ_setup.py'
Jul 22 08:31:47 managed-node11 sudo[23175]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:47 managed-node11 python3.12[23178]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:31:47 managed-node11 sudo[23175]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:50 managed-node11 sudo[23362]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diskxfuoielfbelhklghedebzvzgrqlc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187509.2952635-31937-255268731681849/AnsiballZ_stat.py'
Jul 22 08:31:50 managed-node11 sudo[23362]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:50 managed-node11 python3.12[23365]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:31:50 managed-node11 sudo[23362]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:52 managed-node11 sudo[23520]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bffvlhkzkmcqunaqvtxdpmadrwfoyvdb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187511.3000104-32351-36805934684440/AnsiballZ_dnf.py'
Jul 22 08:31:52 managed-node11 sudo[23520]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:52 managed-node11 python3.12[23523]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:31:52 managed-node11 sudo[23520]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:54 managed-node11 sudo[23679]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukogqzjsorvctkpmlyvuoidcmysaikah ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187513.571717-32510-165917768917163/AnsiballZ_blivet.py'
Jul 22 08:31:54 managed-node11 sudo[23679]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:54 managed-node11 python3.12[23682]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={}
Jul 22 08:31:54 managed-node11 sudo[23679]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:56 managed-node11 sudo[23839]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvcdkhbaldczcjrqjsyzyjsgsophtlyy ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187516.0614228-32745-31707096165663/AnsiballZ_dnf.py'
Jul 22 08:31:56 managed-node11 sudo[23839]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:56 managed-node11 python3.12[23842]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:31:56 managed-node11 sudo[23839]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:58 managed-node11 sudo[23998]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frydqrhsnpmggffpsxwwpdstzegeldpc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187517.3067589-32883-193513449722041/AnsiballZ_service_facts.py'
Jul 22 08:31:58 managed-node11 sudo[23998]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:58 managed-node11 python3.12[24001]: ansible-service_facts Invoked
Jul 22 08:32:00 managed-node11 sudo[23998]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:01 managed-node11 sudo[24268]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cptfnbmxazfslzhkcubxaegsymtqyzyr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187521.1735606-33375-23221545537276/AnsiballZ_blivet.py'
Jul 22 08:32:01 managed-node11 sudo[24268]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:01 managed-node11 python3.12[24271]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={}
Jul 22 08:32:01 managed-node11 sudo[24268]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:02 managed-node11 sudo[24428]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfejglhoyutnkjohddwokfdhzptqswvp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187522.1768117-33482-125141685488832/AnsiballZ_stat.py'
Jul 22 08:32:02 managed-node11 sudo[24428]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:02 managed-node11 python3.12[24431]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:32:02 managed-node11 sudo[24428]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:05 managed-node11 sudo[24588]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqcdmferkvfuipjnehjqgahdcoiecnqs ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187524.9662895-33748-256254838364711/AnsiballZ_stat.py'
Jul 22 08:32:05 managed-node11 sudo[24588]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:05 managed-node11 python3.12[24591]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:32:05 managed-node11 sudo[24588]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:06 managed-node11 sudo[24748]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uemydnkifronbicyqvkwrzcxsfbnxfdh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187525.8141296-33914-32229910822572/AnsiballZ_setup.py'
Jul 22 08:32:06 managed-node11 sudo[24748]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:06 managed-node11 python3.12[24751]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:32:06 managed-node11 sudo[24748]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:07 managed-node11 sudo[24935]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwztcywgdamiferggcdsqjfyrxqxjsji ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187527.5731676-34174-161975880930246/AnsiballZ_dnf.py'
Jul 22 08:32:07 managed-node11 sudo[24935]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:07 managed-node11 python3.12[24938]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:32:08 managed-node11 sudo[24935]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:09 managed-node11 sudo[25094]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvwdctngnhbrwcjnwubeqyfkblpbxwwp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187528.849048-34303-154784779569791/AnsiballZ_find_unused_disk.py'
Jul 22 08:32:09 managed-node11 sudo[25094]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:09 managed-node11 python3.12[25097]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=2 match_sector_size=True max_size=0 with_interface=None
Jul 22 08:32:09 managed-node11 sudo[25094]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:10 managed-node11 sshd-session[25125]: Accepted publickey for root from 10.31.42.212 port 33158 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:32:10 managed-node11 systemd-logind[665]: New session 22 of user root.
░░ Subject: A new session 22 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 22 has been created for the user root.
░░
░░ The leading process of the session is 25125.
Jul 22 08:32:10 managed-node11 systemd[1]: Started session-22.scope - Session 22 of User root.
░░ Subject: A start job for unit session-22.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-22.scope has finished successfully.
░░
░░ The job identifier is 2884.
Jul 22 08:32:10 managed-node11 sshd-session[25125]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:10 managed-node11 sshd-session[25128]: Received disconnect from 10.31.42.212 port 33158:11: disconnected by user
Jul 22 08:32:10 managed-node11 sshd-session[25128]: Disconnected from user root 10.31.42.212 port 33158
Jul 22 08:32:10 managed-node11 sshd-session[25125]: pam_unix(sshd:session): session closed for user root
Jul 22 08:32:10 managed-node11 systemd[1]: session-22.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-22.scope has successfully entered the 'dead' state.
Jul 22 08:32:10 managed-node11 systemd-logind[665]: Session 22 logged out. Waiting for processes to exit.
Jul 22 08:32:10 managed-node11 systemd-logind[665]: Removed session 22.
░░ Subject: Session 22 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 22 has been terminated.
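Each test iteration re-runs the same idempotent package check; the dnf entries at 08:31:52 and 08:32:28 correspond to a task like the following sketch (package list exactly as logged, already resolved for a non-s390x host; task name hypothetical):

    - name: Ensure blivet and its libblockdev backends are present (illustrative sketch)
      ansible.builtin.dnf:
        name:
          - python3-blivet
          - libblockdev-crypto
          - libblockdev-dm
          - libblockdev-fs
          - libblockdev-lvm
          - libblockdev-mdraid
          - libblockdev-swap
          - xfsprogs
          - stratisd
          - stratis-cli
          - libblockdev
          - vdo
        state: present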
Jul 22 08:32:11 managed-node11 sshd-session[25155]: Accepted publickey for root from 10.31.42.212 port 33174 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:32:11 managed-node11 systemd-logind[665]: New session 23 of user root.
░░ Subject: A new session 23 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 23 has been created for the user root. ░░ ░░ The leading process of the session is 25155.
Jul 22 08:32:11 managed-node11 systemd[1]: Started session-23.scope - Session 23 of User root.
░░ Subject: A start job for unit session-23.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-23.scope has finished successfully. ░░ ░░ The job identifier is 2969.
Jul 22 08:32:11 managed-node11 sshd-session[25155]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:11 managed-node11 sshd-session[25158]: Received disconnect from 10.31.42.212 port 33174:11: disconnected by user
Jul 22 08:32:11 managed-node11 sshd-session[25158]: Disconnected from user root 10.31.42.212 port 33174
Jul 22 08:32:11 managed-node11 sshd-session[25155]: pam_unix(sshd:session): session closed for user root
Jul 22 08:32:11 managed-node11 systemd[1]: session-23.scope: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-23.scope has successfully entered the 'dead' state.
Jul 22 08:32:11 managed-node11 systemd-logind[665]: Session 23 logged out. Waiting for processes to exit.
Jul 22 08:32:11 managed-node11 systemd-logind[665]: Removed session 23.
░░ Subject: Session 23 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 23 has been terminated.
Jul 22 08:32:18 managed-node11 sshd-session[25185]: Accepted publickey for root from 10.31.42.212 port 39026 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:32:18 managed-node11 systemd-logind[665]: New session 24 of user root.
░░ Subject: A new session 24 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 24 has been created for the user root. ░░ ░░ The leading process of the session is 25185.
Jul 22 08:32:18 managed-node11 systemd[1]: Started session-24.scope - Session 24 of User root.
░░ Subject: A start job for unit session-24.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-24.scope has finished successfully. ░░ ░░ The job identifier is 3054.
Jul 22 08:32:18 managed-node11 sshd-session[25185]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:18 managed-node11 sshd-session[25188]: Received disconnect from 10.31.42.212 port 39026:11: disconnected by user
Jul 22 08:32:18 managed-node11 sshd-session[25188]: Disconnected from user root 10.31.42.212 port 39026
Jul 22 08:32:18 managed-node11 sshd-session[25185]: pam_unix(sshd:session): session closed for user root
Jul 22 08:32:18 managed-node11 systemd[1]: session-24.scope: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-24.scope has successfully entered the 'dead' state.
Jul 22 08:32:18 managed-node11 systemd-logind[665]: Session 24 logged out. Waiting for processes to exit.
Jul 22 08:32:18 managed-node11 systemd-logind[665]: Removed session 24.
░░ Subject: Session 24 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 24 has been terminated.
Jul 22 08:32:23 managed-node11 sudo[25395]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnlofltsvsmwfmtlmlxmjlfrkbgsosmo ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187542.1990516-35975-21479444752490/AnsiballZ_setup.py'
Jul 22 08:32:23 managed-node11 sudo[25395]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:24 managed-node11 python3.12[25398]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:32:24 managed-node11 sudo[25395]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:26 managed-node11 sudo[25582]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqqlwyomepdamprktqvtpwrjqishiuqu ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187545.96113-36248-108574569524000/AnsiballZ_stat.py'
Jul 22 08:32:26 managed-node11 sudo[25582]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:26 managed-node11 python3.12[25585]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:32:26 managed-node11 sudo[25582]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:28 managed-node11 sudo[25740]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzgwvsuozpatzyrbpotduimhomhihgsj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187547.768694-36479-265352013994325/AnsiballZ_dnf.py'
Jul 22 08:32:28 managed-node11 sudo[25740]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:28 managed-node11 python3.12[25744]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:32:29 managed-node11 sudo[25740]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:31 managed-node11 sudo[25900]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmvjbidfsqrxhjwbeeorkrjihwktkeyd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187549.9440193-36872-273786053246026/AnsiballZ_blivet.py'
Jul 22 08:32:31 managed-node11 sudo[25900]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:31 managed-node11 python3.12[25903]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={}
Jul 22 08:32:31 managed-node11 sudo[25900]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:33 managed-node11 sudo[26060]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysrmbaiaqhgjacqqjnjlorqjocafxvlv ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187552.721253-37045-200140353053547/AnsiballZ_dnf.py'
Jul 22 08:32:33 managed-node11 sudo[26060]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:33 managed-node11 python3.12[26064]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:32:33 managed-node11 sudo[26060]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:34 managed-node11 sudo[26220]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajiupowefwngfebzyktegvexgfelwpvo ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187553.8165026-37352-5380095990658/AnsiballZ_service_facts.py'
Jul 22 08:32:34 managed-node11 sudo[26220]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:34 managed-node11 python3.12[26223]: ansible-service_facts Invoked
Jul 22 08:32:36 managed-node11 sudo[26220]: pam_unix(sudo:session): session closed for user root
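Every blivet invocation in this run passes pools=[] and volumes=[]: the storage role is being applied with empty pool and volume lists, first with packages_only=True/safe_mode=True to compute required packages and then with packages_only=False/safe_mode=False to converge state. A minimal sketch of the caller side, assuming the role's public storage_pools/storage_volumes variables (the task name is illustrative, not quoted from the test):

    - name: Apply the storage role with no pools or volumes
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.storage
      vars:
        storage_pools: []
        storage_volumes: []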
Jul 22 08:32:36 managed-node11 sudo[26490]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaxgveadsszmpqmkxuevthzmcjbxmdqb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187556.5130234-37966-216977392612187/AnsiballZ_blivet.py'
Jul 22 08:32:36 managed-node11 sudo[26490]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:36 managed-node11 python3.12[26493]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={}
Jul 22 08:32:37 managed-node11 sudo[26490]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:37 managed-node11 sudo[26650]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baibjjdudndmijuflogzgocilcoqaccj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187557.3524404-38043-273169405265101/AnsiballZ_stat.py'
Jul 22 08:32:37 managed-node11 sudo[26650]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:37 managed-node11 python3.12[26654]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:32:37 managed-node11 sudo[26650]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:39 managed-node11 sudo[26811]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttvtegsezvrbhchqszxrrkzjdgcdomvq ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187559.5535424-38238-111108507818145/AnsiballZ_stat.py'
Jul 22 08:32:39 managed-node11 sudo[26811]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:39 managed-node11 python3.12[26814]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:32:39 managed-node11 sudo[26811]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:40 managed-node11 sudo[26971]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdqhmzchcioovlmkjtfzvgzvvxzdfxur ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187560.2862682-38298-100007321400896/AnsiballZ_setup.py'
Jul 22 08:32:40 managed-node11 sudo[26971]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:40 managed-node11 python3.12[26974]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:32:41 managed-node11 sudo[26971]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:41 managed-node11 systemd[4460]: Created slice background.slice - User Background Tasks Slice.
░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 70.
Jul 22 08:32:41 managed-node11 systemd[4460]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 69.
Jul 22 08:32:41 managed-node11 systemd[4460]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 69.
Jul 22 08:32:42 managed-node11 sudo[27160]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwigtgydwauhoqkeupgcgfrzsitdengn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187562.174656-38493-13857863723746/AnsiballZ_dnf.py'
Jul 22 08:32:42 managed-node11 sudo[27160]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:42 managed-node11 python3.12[27163]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:32:43 managed-node11 sudo[27160]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:43 managed-node11 sudo[27319]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtuazgnnkcmapgsmgepgagwlfmiadpfh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187563.173746-38650-269925581274469/AnsiballZ_find_unused_disk.py'
Jul 22 08:32:43 managed-node11 sudo[27319]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:44 managed-node11 python3.12[27322]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=10g max_return=1 max_size=0 match_sector_size=False with_interface=None
Jul 22 08:32:44 managed-node11 sudo[27319]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:44 managed-node11 sudo[27479]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iokalsvirbhnphcvirxejkgyeykvayei ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187564.3194027-38878-106344442773538/AnsiballZ_command.py'
Jul 22 08:32:44 managed-node11 sudo[27479]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:45 managed-node11 python3.12[27482]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 22 08:32:45 managed-node11 sudo[27479]: pam_unix(sudo:session): session closed for user root
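The _raw_params value above is a multi-line shell script whose newlines the journal has collapsed. Restoring the breaks (an inference from _uses_shell=True and the command structure), the 'Debug why there are no unused disks' task from the recap amounts to:

    - name: Debug why there are no unused disks
      ansible.builtin.shell: |
        set -x
        exec 1>&2
        lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
        journalctl -ex

The exec 1>&2 redirection sends all subsequent stdout to stderr, so the lsblk and journalctl output lands in the task's stderr for the failure report instead of being parsed as module output.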
Jul 22 08:32:49 managed-node11 sudo[27639]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejykdjewpsnlmfrppnjexdqnzosprnra ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187568.9340868-39267-32683480505429/AnsiballZ_blivet.py'
Jul 22 08:32:49 managed-node11 sudo[27639]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:49 managed-node11 python3.12[27642]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={}
Jul 22 08:32:49 managed-node11 sudo[27639]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:50 managed-node11 sudo[27799]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbpxvvuhfvnyxlifppifetyuspfxiikq ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187569.8141584-39326-254897028227106/AnsiballZ_stat.py'
Jul 22 08:32:50 managed-node11 sudo[27799]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:50 managed-node11 python3.12[27802]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:32:50 managed-node11 sudo[27799]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:52 managed-node11 sudo[27959]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmffthdvxysnbmxenmdhtprcsrxnmymp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187572.3649967-39509-149244966724816/AnsiballZ_stat.py'
Jul 22 08:32:52 managed-node11 sudo[27959]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:52 managed-node11 python3.12[27962]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:32:52 managed-node11 sudo[27959]: pam_unix(sudo:session): session closed for user root
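Around each blivet pass the role also stats /etc/fstab and /etc/crypttab (and /run/ostree-booted once per included run) so it can compare mount and crypt configuration before and after the change. A hedged sketch of one such check; the task name follows the 'Check if /etc/fstab is present' recap entry, while the register name is purely illustrative:

    - name: Check if /etc/fstab is present
      ansible.builtin.stat:
        path: /etc/fstab
      register: __storage_fstab_stat  # illustrative name, not taken from this log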
Jul 22 08:32:53 managed-node11 sudo[28119]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swzehmmuvorifaeenqivpmjrddmysjyn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187573.190292-39568-14905657646432/AnsiballZ_setup.py'
Jul 22 08:32:53 managed-node11 sudo[28119]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:53 managed-node11 python3.12[28122]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:32:54 managed-node11 sudo[28119]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:54 managed-node11 sshd-session[28177]: Accepted publickey for root from 10.31.42.212 port 34922 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:32:54 managed-node11 systemd-logind[665]: New session 25 of user root.
░░ Subject: A new session 25 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 25 has been created for the user root. ░░ ░░ The leading process of the session is 28177.
Jul 22 08:32:54 managed-node11 systemd[1]: Started session-25.scope - Session 25 of User root.
░░ Subject: A start job for unit session-25.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-25.scope has finished successfully. ░░ ░░ The job identifier is 3139.
Jul 22 08:32:54 managed-node11 sshd-session[28177]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:54 managed-node11 sshd-session[28180]: Received disconnect from 10.31.42.212 port 34922:11: disconnected by user
Jul 22 08:32:54 managed-node11 sshd-session[28180]: Disconnected from user root 10.31.42.212 port 34922
Jul 22 08:32:54 managed-node11 sshd-session[28177]: pam_unix(sshd:session): session closed for user root
Jul 22 08:32:54 managed-node11 systemd[1]: session-25.scope: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-25.scope has successfully entered the 'dead' state.
Jul 22 08:32:54 managed-node11 systemd-logind[665]: Session 25 logged out. Waiting for processes to exit.
Jul 22 08:32:54 managed-node11 systemd-logind[665]: Removed session 25.
░░ Subject: Session 25 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 25 has been terminated.
Jul 22 08:32:55 managed-node11 sshd-session[28207]: Accepted publickey for root from 10.31.42.212 port 34936 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:32:55 managed-node11 systemd-logind[665]: New session 26 of user root.
░░ Subject: A new session 26 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 26 has been created for the user root. ░░ ░░ The leading process of the session is 28207.
Jul 22 08:32:55 managed-node11 systemd[1]: Started session-26.scope - Session 26 of User root.
░░ Subject: A start job for unit session-26.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-26.scope has finished successfully. ░░ ░░ The job identifier is 3224.
Jul 22 08:32:55 managed-node11 sshd-session[28207]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:55 managed-node11 sshd-session[28210]: Received disconnect from 10.31.42.212 port 34936:11: disconnected by user
Jul 22 08:32:55 managed-node11 sshd-session[28210]: Disconnected from user root 10.31.42.212 port 34936
Jul 22 08:32:55 managed-node11 sshd-session[28207]: pam_unix(sshd:session): session closed for user root
Jul 22 08:32:55 managed-node11 systemd-logind[665]: Session 26 logged out. Waiting for processes to exit.
Jul 22 08:32:55 managed-node11 systemd[1]: session-26.scope: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-26.scope has successfully entered the 'dead' state.
Jul 22 08:32:55 managed-node11 systemd-logind[665]: Removed session 26.
░░ Subject: Session 26 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 26 has been terminated.
Jul 22 08:33:00 managed-node11 sudo[28417]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwrbctjfkqdbkshfygttpjalwyzqhnkw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187578.6635458-40360-276313147887303/AnsiballZ_setup.py'
Jul 22 08:33:00 managed-node11 sudo[28417]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:00 managed-node11 python3.12[28420]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:33:01 managed-node11 sudo[28417]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:02 managed-node11 sudo[28604]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-borqmwxszlcquljosdonbvfqgfuqmfsa ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187582.1074317-40902-122205785484059/AnsiballZ_stat.py'
Jul 22 08:33:02 managed-node11 sudo[28604]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:02 managed-node11 python3.12[28607]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:33:02 managed-node11 sudo[28604]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:04 managed-node11 sudo[28762]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdqpqzyihukpwaztqcfiandwpltdvzle ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187583.4535336-41077-162351418868001/AnsiballZ_dnf.py'
Jul 22 08:33:04 managed-node11 sudo[28762]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:04 managed-node11 python3.12[28765]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:33:04 managed-node11 sudo[28762]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:06 managed-node11 sudo[28921]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdbswzsttdtyjopsonczmujujyczwoiw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187585.410791-41193-178840270962566/AnsiballZ_blivet.py'
Jul 22 08:33:06 managed-node11 sudo[28921]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:06 managed-node11 python3.12[28924]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={}
Jul 22 08:33:06 managed-node11 sudo[28921]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:08 managed-node11 sudo[29081]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbxeuykppveodxnfeggafbywtpfmcbui ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187587.9746792-41516-63997080014076/AnsiballZ_dnf.py'
Jul 22 08:33:08 managed-node11 sudo[29081]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:08 managed-node11 python3.12[29084]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:33:08 managed-node11 sudo[29081]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:09 managed-node11 sudo[29240]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoiqgnqmofpzcsgpdaxyzskblhphucqy ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187589.092952-41744-27648114553872/AnsiballZ_service_facts.py'
Jul 22 08:33:09 managed-node11 sudo[29240]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:09 managed-node11 python3.12[29243]: ansible-service_facts Invoked
Jul 22 08:33:11 managed-node11 sudo[29240]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:12 managed-node11 sudo[29510]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htmpkzxtkmhlkogekfyjkbwrkvtzelsd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187592.211498-42024-76648352394949/AnsiballZ_blivet.py'
Jul 22 08:33:12 managed-node11 sudo[29510]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:12 managed-node11 python3.12[29513]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={}
Jul 22 08:33:12 managed-node11 sudo[29510]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:13 managed-node11 sudo[29670]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqrirlmsjovpvodoajzmygdsazjccnya ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187593.1719403-42130-130111782470234/AnsiballZ_stat.py'
Jul 22 08:33:13 managed-node11 sudo[29670]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:13 managed-node11 python3.12[29673]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:33:13 managed-node11 sudo[29670]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:15 managed-node11 sudo[29830]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmcvojekrtfnwztbawfblflwmquuialb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187595.564998-42366-251986090908245/AnsiballZ_stat.py'
Jul 22 08:33:15 managed-node11 sudo[29830]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:15 managed-node11 python3.12[29833]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:33:15 managed-node11 sudo[29830]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:16 managed-node11 sudo[29990]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njzkwvccgaodgpmzwqcwmwntexxmrajz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187596.3227987-42555-238598934064632/AnsiballZ_setup.py'
Jul 22 08:33:16 managed-node11 sudo[29990]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:16 managed-node11 python3.12[29993]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:33:17 managed-node11 sudo[29990]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:18 managed-node11 sudo[30177]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbndwilixyzipwtgbrziryciogvpxtzo ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187597.9721978-42829-187280566067685/AnsiballZ_dnf.py'
Jul 22 08:33:18 managed-node11 sudo[30177]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:18 managed-node11 python3.12[30180]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:33:18 managed-node11 sudo[30177]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:20 managed-node11 sudo[30336]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yglxzybhimxkljukporgmmbfumpjjbhl ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187599.1326034-42904-214871928807221/AnsiballZ_find_unused_disk.py'
Jul 22 08:33:20 managed-node11 sudo[30336]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:20 managed-node11 python3.12[30339]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=3 min_size=0 max_size=0 match_sector_size=False with_interface=None
Jul 22 08:33:20 managed-node11 sudo[30336]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:21 managed-node11 sudo[30496]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaecdhyrfyksgftabglzxuqavbiizope ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187600.5180764-43020-197389477147704/AnsiballZ_command.py'
Jul 22 08:33:21 managed-node11 sudo[30496]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:21 managed-node11 python3.12[30499]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 22 08:33:21 managed-node11 sudo[30496]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:23 managed-node11 sshd-session[30527]: Accepted publickey for root from 10.31.42.212 port 43614 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:33:23 managed-node11 systemd-logind[665]: New session 27 of user root.
░░ Subject: A new session 27 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 27 has been created for the user root. ░░ ░░ The leading process of the session is 30527.
Jul 22 08:33:23 managed-node11 systemd[1]: Started session-27.scope - Session 27 of User root.
░░ Subject: A start job for unit session-27.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-27.scope has finished successfully. ░░ ░░ The job identifier is 3309.
Jul 22 08:33:23 managed-node11 sshd-session[30527]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:23 managed-node11 sshd-session[30530]: Received disconnect from 10.31.42.212 port 43614:11: disconnected by user
Jul 22 08:33:23 managed-node11 sshd-session[30530]: Disconnected from user root 10.31.42.212 port 43614
Jul 22 08:33:23 managed-node11 sshd-session[30527]: pam_unix(sshd:session): session closed for user root
Jul 22 08:33:23 managed-node11 systemd[1]: session-27.scope: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-27.scope has successfully entered the 'dead' state.
Jul 22 08:33:23 managed-node11 systemd-logind[665]: Session 27 logged out. Waiting for processes to exit.
Jul 22 08:33:23 managed-node11 systemd-logind[665]: Removed session 27.
░░ Subject: Session 27 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 27 has been terminated.
Jul 22 08:33:23 managed-node11 sshd-session[30557]: Accepted publickey for root from 10.31.42.212 port 43630 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:33:23 managed-node11 systemd-logind[665]: New session 28 of user root.
░░ Subject: A new session 28 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 28 has been created for the user root. ░░ ░░ The leading process of the session is 30557.
Jul 22 08:33:23 managed-node11 systemd[1]: Started session-28.scope - Session 28 of User root.
░░ Subject: A start job for unit session-28.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-28.scope has finished successfully. ░░ ░░ The job identifier is 3394.
Jul 22 08:33:23 managed-node11 sshd-session[30557]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:23 managed-node11 sshd-session[30561]: Received disconnect from 10.31.42.212 port 43630:11: disconnected by user
Jul 22 08:33:23 managed-node11 sshd-session[30561]: Disconnected from user root 10.31.42.212 port 43630
Jul 22 08:33:23 managed-node11 sshd-session[30557]: pam_unix(sshd:session): session closed for user root
Jul 22 08:33:23 managed-node11 systemd[1]: session-28.scope: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-28.scope has successfully entered the 'dead' state.
Jul 22 08:33:23 managed-node11 systemd-logind[665]: Session 28 logged out. Waiting for processes to exit.
Jul 22 08:33:23 managed-node11 systemd-logind[665]: Removed session 28.
░░ Subject: Session 28 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 28 has been terminated.
Jul 22 08:33:28 managed-node11 sudo[30768]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyasewydfhiwmvnmyildguhptzxmiuvj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187607.2577856-43882-166966522322585/AnsiballZ_setup.py'
Jul 22 08:33:28 managed-node11 sudo[30768]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:29 managed-node11 python3.12[30771]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:33:29 managed-node11 sudo[30768]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:31 managed-node11 sudo[30955]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvdycelnpmarxaswajicdvpwfhgshzit ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187610.4521415-44312-20799307071809/AnsiballZ_stat.py'
Jul 22 08:33:31 managed-node11 sudo[30955]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:31 managed-node11 python3.12[30959]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:33:31 managed-node11 sudo[30955]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:32 managed-node11 sudo[31114]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlffkwaqwsqycvulpgnnqsscjvggzovc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187611.7703128-44506-173639654590432/AnsiballZ_dnf.py'
Jul 22 08:33:32 managed-node11 sudo[31114]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:32 managed-node11 python3.12[31117]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:33:33 managed-node11 sudo[31114]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:34 managed-node11 sudo[31273]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-holizjycllidpasnjccyqayxodubssdy ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187613.406639-44656-21193417425275/AnsiballZ_blivet.py'
Jul 22 08:33:34 managed-node11 sudo[31273]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:34 managed-node11 python3.12[31276]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={}
Jul 22 08:33:34 managed-node11 sudo[31273]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:36 managed-node11 sudo[31434]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzrxbypualnpsstqfltbsorbvsvienhi ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187616.1881318-44826-54406926590219/AnsiballZ_dnf.py'
Jul 22 08:33:36 managed-node11 sudo[31434]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:36 managed-node11 python3.12[31437]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:33:37 managed-node11 sudo[31434]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:38 managed-node11 sudo[31593]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sipgdbumncjddqxnfdjwhgynadzhhizy ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187617.3144805-45072-82651701008951/AnsiballZ_service_facts.py'
Jul 22 08:33:38 managed-node11 sudo[31593]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:38 managed-node11 python3.12[31596]: ansible-service_facts Invoked
Jul 22 08:33:39 managed-node11 sudo[31593]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:40 managed-node11 sudo[31863]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mezpbvvsjdemtsjgrexqvnpsfnksjans ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187620.5065691-45531-206183018435162/AnsiballZ_blivet.py'
Jul 22 08:33:40 managed-node11 sudo[31863]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:40 managed-node11 python3.12[31866]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={}
Jul 22 08:33:41 managed-node11 sudo[31863]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:41 managed-node11 sudo[32023]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oevwtokhvdxgkhvnqfvegualrofyegzt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187621.6321287-45629-13818635542521/AnsiballZ_stat.py'
Jul 22 08:33:41 managed-node11 sudo[32023]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:42 managed-node11 python3.12[32026]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:33:42 managed-node11 sudo[32023]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:44 managed-node11 sudo[32183]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfczbiajygnkvaxauhtzkcswiejhskvl ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187623.9893298-45876-191432503328250/AnsiballZ_stat.py'
Jul 22 08:33:44 managed-node11 sudo[32183]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:44 managed-node11 python3.12[32186]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:33:44 managed-node11 sudo[32183]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:45 managed-node11 sudo[32343]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msxlegqjowjmwkiitrbgkacsbzrhugdh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187624.7850575-45999-228693273148420/AnsiballZ_setup.py'
Jul 22 08:33:45 managed-node11 sudo[32343]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:45 managed-node11 python3.12[32346]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:33:45 managed-node11 sudo[32343]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:46 managed-node11 sudo[32530]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imvrfqekhgtsstbupoymouzkratbiipm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187626.575888-46261-173857320577489/AnsiballZ_dnf.py'
Jul 22 08:33:46 managed-node11 sudo[32530]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:47 managed-node11 python3.12[32533]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:33:47 managed-node11 sudo[32530]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:48 managed-node11 sudo[32689]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psguwdhhqydzeuehtebhcpwqwtcysyrf ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187627.8101609-46409-9495619348189/AnsiballZ_find_unused_disk.py'
Jul 22 08:33:48 managed-node11 sudo[32689]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:48 managed-node11 python3.12[32692]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=1 min_size=0 max_size=0 match_sector_size=False with_interface=None
Jul 22 08:33:48 managed-node11 sudo[32689]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:50 managed-node11 sudo[32849]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdpkynmssklilesvlcitfuvaldnshekk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187629.1427782-46530-130135317066439/AnsiballZ_command.py'
Jul 22 08:33:50 managed-node11 sudo[32849]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:50 managed-node11 python3.12[32852]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 22 08:33:50 managed-node11 sudo[32849]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:51 managed-node11 sshd-session[32880]: Accepted publickey for root from 10.31.42.212 port 53500 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:33:51 managed-node11 systemd-logind[665]: New session 29 of user root.
░░ Subject: A new session 29 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 29 has been created for the user root. ░░ ░░ The leading process of the session is 32880.
Jul 22 08:33:51 managed-node11 systemd[1]: Started session-29.scope - Session 29 of User root.
░░ Subject: A start job for unit session-29.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-29.scope has finished successfully. ░░ ░░ The job identifier is 3479.
Jul 22 08:33:51 managed-node11 sshd-session[32880]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:51 managed-node11 sshd-session[32883]: Received disconnect from 10.31.42.212 port 53500:11: disconnected by user
Jul 22 08:33:51 managed-node11 sshd-session[32883]: Disconnected from user root 10.31.42.212 port 53500
Jul 22 08:33:51 managed-node11 sshd-session[32880]: pam_unix(sshd:session): session closed for user root
Jul 22 08:33:51 managed-node11 systemd[1]: session-29.scope: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-29.scope has successfully entered the 'dead' state.
Jul 22 08:33:51 managed-node11 systemd-logind[665]: Session 29 logged out. Waiting for processes to exit.
Jul 22 08:33:51 managed-node11 systemd-logind[665]: Removed session 29.
░░ Subject: Session 29 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 29 has been terminated.
Jul 22 08:33:52 managed-node11 sshd-session[32910]: Accepted publickey for root from 10.31.42.212 port 53514 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:33:52 managed-node11 systemd-logind[665]: New session 30 of user root.
░░ Subject: A new session 30 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 30 has been created for the user root. ░░ ░░ The leading process of the session is 32910.
Jul 22 08:33:52 managed-node11 systemd[1]: Started session-30.scope - Session 30 of User root.
░░ Subject: A start job for unit session-30.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-30.scope has finished successfully. ░░ ░░ The job identifier is 3564.
Jul 22 08:33:52 managed-node11 sshd-session[32910]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:33:52 managed-node11 sshd-session[32913]: Received disconnect from 10.31.42.212 port 53514:11: disconnected by user
Jul 22 08:33:52 managed-node11 sshd-session[32913]: Disconnected from user root 10.31.42.212 port 53514
Jul 22 08:33:52 managed-node11 sshd-session[32910]: pam_unix(sshd:session): session closed for user root
Jul 22 08:33:52 managed-node11 systemd[1]: session-30.scope: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-30.scope has successfully entered the 'dead' state.
Jul 22 08:33:52 managed-node11 systemd-logind[665]: Session 30 logged out. Waiting for processes to exit.
Jul 22 08:33:52 managed-node11 systemd-logind[665]: Removed session 30.
░░ Subject: Session 30 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 30 has been terminated.
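Each pass through the role begins by ensuring the blivet tool stack with dnf; the same package set recurs at 08:33:04, 08:33:32, and again at 08:34:01 below. A sketch of the 'Make sure blivet is available' step from the recap, using the exact list from those invocations (the YAML shape itself is inferred, not quoted from the role):

    - name: Make sure blivet is available
      ansible.builtin.dnf:
        name:
          - python3-blivet
          - libblockdev-crypto
          - libblockdev-dm
          - libblockdev-fs
          - libblockdev-lvm
          - libblockdev-mdraid
          - libblockdev-swap
          - xfsprogs
          - stratisd
          - stratis-cli
          - libblockdev
          - vdo
        state: present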
Jul 22 08:33:56 managed-node11 python3.12[33120]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:33:59 managed-node11 python3.12[33304]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:34:01 managed-node11 python3.12[33459]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:34:03 managed-node11 python3.12[33615]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:34:04 managed-node11 python3.12[33772]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:34:05 managed-node11 python3.12[33928]: ansible-service_facts Invoked Jul 22 08:34:08 managed-node11 python3.12[34195]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 
Jul 22 08:34:09 managed-node11 python3.12[34352]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:34:10 managed-node11 python3.12[34509]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:34:11 managed-node11 python3.12[34666]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:34:12 managed-node11 python3.12[34850]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:34:13 managed-node11 python3.12[35006]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_size=0 max_return=1 match_sector_size=False with_interface=None
Jul 22 08:34:14 managed-node11 python3.12[35163]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None

TASK [Set unused_disks if necessary] *******************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:29
Tuesday 22 July 2025 08:34:14 -0400 (0:00:00.803) 0:00:19.451 **********
skipping: [managed-node11] => {
    "changed": false,
    "false_condition": "'Unable to find unused disk' not in unused_disks_return.disks",
    "skip_reason": "Conditional result was False"
}

TASK [Exit playbook when there's not enough unused disks in the system] ********
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34
Tuesday 22 July 2025 08:34:14 -0400 (0:00:00.050) 0:00:19.502 **********
fatal: [managed-node11]: FAILED! => {
    "changed": false
}

MSG:

Unable to find enough unused disks. Exiting playbook.

PLAY RECAP *********************************************************************
managed-node11 : ok=29 changed=0 unreachable=0 failed=1 skipped=15 rescued=0 ignored=0

SYSTEM ROLES ERRORS BEGIN v1
[
  {
    "ansible_version": "2.17.13",
    "end_time": "2025-07-22T12:34:14.411046+00:00",
    "host": "managed-node11",
    "message": "Unable to find enough unused disks. Exiting playbook.",
    "start_time": "2025-07-22T12:34:14.304275+00:00",
    "task_name": "Exit playbook when there's not enough unused disks in the system",
    "task_path": "/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34"
  }
]
SYSTEM ROLES ERRORS END v1
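The fatal task and the SYSTEM ROLES ERRORS record point at the same root cause: the find_unused_disk call journaled at 08:34:13 asked for a single blank disk of at least 5g (min_size=5g, max_return=1) and found none on managed-node11. A minimal sketch of the guard pattern that produced this failure, assuming the unused_disks_return register name visible in the skip condition above (the real logic lives in tests/storage/get_unused_disk.yml and may differ in detail):

    - name: Find unused disks in the system
      fedora.linux_system_roles.find_unused_disk:
        min_size: "5g"
        max_return: 1
      register: unused_disks_return

    - name: Exit playbook when there's not enough unused disks in the system
      ansible.builtin.fail:
        msg: Unable to find enough unused disks. Exiting playbook.
      when: "'Unable to find unused disk' in unused_disks_return.disks"

The remedy is environmental rather than a code change: the host typically needs an attached disk of 5 GiB or more with no filesystem, partition table, or holders before this test can proceed.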
TASKS RECAP ********************************************************************
Tuesday 22 July 2025 08:34:14 -0400 (0:00:00.110) 0:00:19.613 **********
===============================================================================
Gathering Facts --------------------------------------------------------- 2.88s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:2
fedora.linux_system_roles.storage : Get service facts ------------------- 2.44s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
fedora.linux_system_roles.storage : Get required packages --------------- 1.71s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
fedora.linux_system_roles.storage : Make sure blivet is available ------- 1.51s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
fedora.linux_system_roles.storage : Make sure required packages are installed --- 1.10s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38
fedora.linux_system_roles.storage : Check if system is ostree ----------- 1.06s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
Ensure test packages ---------------------------------------------------- 0.92s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2
fedora.linux_system_roles.storage : Update facts ------------------------ 0.89s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224
Find unused disks in the system ----------------------------------------- 0.84s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11
Debug why there are no unused disks ------------------------------------- 0.80s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20
fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state --- 0.80s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70
fedora.linux_system_roles.storage : Check if /etc/fstab is present ------ 0.59s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92
fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file --- 0.46s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197
fedora.linux_system_roles.storage : Ensure ansible_facts used by role --- 0.26s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.25s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
Include role to ensure packages are installed --------------------------- 0.24s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:10
fedora.linux_system_roles.storage : Include the appropriate provider tasks --- 0.21s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
fedora.linux_system_roles.storage : Set storage_cryptsetup_services ----- 0.18s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58
fedora.linux_system_roles.storage : Show storage_volumes ---------------- 0.18s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14
fedora.linux_system_roles.storage : Enable copr repositories if needed --- 0.16s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32
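For reference, the "Debug why there are no unused disks" entry in the recap corresponds to the shell invocation journaled at 08:34:14. Reconstructed as a task from the flattened _raw_params string in that journal entry, it is roughly:

    - name: Debug why there are no unused disks
      ansible.builtin.shell: |
        set -x
        exec 1>&2
        lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
        journalctl -ex

Redirecting stdout to stderr with exec 1>&2 is presumably what routes the lsblk listing and journal excerpt into the captured output above even though the command itself succeeds.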
Jul 22 08:34:14 managed-node11 sshd-session[35191]: Accepted publickey for root from 10.31.42.212 port 48678 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:34:14 managed-node11 systemd-logind[665]: New session 31 of user root.
░░ Subject: A new session 31 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 31 has been created for the user root.
░░
░░ The leading process of the session is 35191.
Jul 22 08:34:14 managed-node11 systemd[1]: Started session-31.scope - Session 31 of User root.
░░ Subject: A start job for unit session-31.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-31.scope has finished successfully.
░░
░░ The job identifier is 3649.
Jul 22 08:34:14 managed-node11 sshd-session[35191]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:34:14 managed-node11 sshd-session[35194]: Received disconnect from 10.31.42.212 port 48678:11: disconnected by user
Jul 22 08:34:14 managed-node11 sshd-session[35194]: Disconnected from user root 10.31.42.212 port 48678
Jul 22 08:34:14 managed-node11 sshd-session[35191]: pam_unix(sshd:session): session closed for user root
Jul 22 08:34:14 managed-node11 systemd-logind[665]: Session 31 logged out. Waiting for processes to exit.
Jul 22 08:34:14 managed-node11 systemd[1]: session-31.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-31.scope has successfully entered the 'dead' state.
Jul 22 08:34:14 managed-node11 systemd-logind[665]: Removed session 31.
░░ Subject: Session 31 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 31 has been terminated.
Jul 22 08:34:15 managed-node11 sshd-session[35221]: Accepted publickey for root from 10.31.42.212 port 48686 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:34:15 managed-node11 systemd-logind[665]: New session 32 of user root.
░░ Subject: A new session 32 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 32 has been created for the user root.
░░
░░ The leading process of the session is 35221.
Jul 22 08:34:15 managed-node11 systemd[1]: Started session-32.scope - Session 32 of User root.
░░ Subject: A start job for unit session-32.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-32.scope has finished successfully.
░░
░░ The job identifier is 3734.
Jul 22 08:34:15 managed-node11 sshd-session[35221]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
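Had a suitable disk been present, tests_swap.yml would have gone on to drive the storage role's swap management. A hedged sketch of the kind of invocation this test exercises (the volume name test1 and the unused_disks variable are illustrative, not taken from the run above; the real test may differ):

    - name: Create a disk device with swap
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.storage
      vars:
        storage_volumes:
          - name: test1
            type: disk
            disks: "{{ unused_disks }}"
            fs_type: swap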