ansible-playbook [core 2.17.13]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-nSC
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.11 (main, Jun 4 2025, 00:00:00) [GCC 14.2.1 20250110 (Red Hat 14.2.1-8)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/create-test-file.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/verify-data-preservation.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/create-test-file.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/verify-data-preservation.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/create-test-file.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/verify-data-preservation.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/create-test-file.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/verify-data-preservation.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/create-test-file.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/verify-data-preservation.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/create-test-file.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/verify-data-preservation.yml
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_luks2.yml ******************************************************
1 plays in /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml

PLAY [Test LUKS2] **************************************************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml:2
Tuesday 22 July 2025 08:31:08 -0400 (0:00:00.162) 0:00:00.162 **********
[WARNING]: Platform linux on host managed-node15 is using the discovered Python
interpreter at /usr/bin/python3.12, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
ok: [managed-node15]

TASK [Enable FIPS mode] ********************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml:20
Tuesday 22 July 2025 08:31:11 -0400 (0:00:03.023) 0:00:03.185 **********
skipping: [managed-node15] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Reboot] ******************************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml:28
Tuesday 22 July 2025 08:31:11 -0400 (0:00:00.117) 0:00:03.303 **********
skipping: [managed-node15] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Enable FIPS mode - 2] ****************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml:39
Tuesday 22 July 2025 08:31:11 -0400 (0:00:00.118) 0:00:03.422 **********
skipping: [managed-node15] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Reboot - 2] **************************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml:43
Tuesday 22 July 2025 08:31:11 -0400 (0:00:00.117) 0:00:03.539 **********
skipping: [managed-node15] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Ensure dracut-fips] ******************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml:53
Tuesday 22 July 2025 08:31:11 -0400 (0:00:00.130) 0:00:03.670 **********
skipping: [managed-node15] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Configure boot for FIPS] *************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml:59
Tuesday 22 July 2025 08:31:11 -0400 (0:00:00.121) 0:00:03.791 **********
skipping: [managed-node15] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Reboot - 3] **************************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml:68
Tuesday 22 July 2025 08:31:11 -0400 (0:00:00.105) 0:00:03.897 **********
skipping: [managed-node15] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Run the role] ************************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml:72
Tuesday 22 July 2025 08:31:11 -0400 (0:00:00.090) 0:00:03.987 **********
included: fedora.linux_system_roles.storage for managed-node15
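
All seven FIPS tasks above were skipped for the same reason: the SYSTEM_ROLES_TEST_FIPS environment variable is not set to "true" on the controller. A minimal sketch of the gating pattern, reconstructed from the skip output; the "when" expression is taken verbatim from the "false_condition" fields, while the task body is an assumption (the log does not record what a skipped task would have run):

    - name: Enable FIPS mode
      # Hypothetical body -- fips-mode-setup is one plausible implementation,
      # not confirmed by this log.
      ansible.builtin.command: fips-mode-setup --enable
      when: lookup("env", "SYSTEM_ROLES_TEST_FIPS") == "true"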
TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
Tuesday 22 July 2025 08:31:12 -0400 (0:00:00.265) 0:00:04.252 **********
included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node15

TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Tuesday 22 July 2025 08:31:12 -0400 (0:00:00.087) 0:00:04.340 **********
skipping: [managed-node15] => {
    "changed": false,
    "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
Tuesday 22 July 2025 08:31:12 -0400 (0:00:00.260) 0:00:04.600 **********
skipping: [managed-node15] => (item=RedHat.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node15] => (item=CentOS.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "CentOS.yml",
    "skip_reason": "Conditional result was False"
}
ok: [managed-node15] => (item=CentOS_10.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-fs",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "xfsprogs",
            "stratisd",
            "stratis-cli",
            "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}",
            "vdo"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_10.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_10.yml"
}
ok: [managed-node15] => (item=CentOS_10.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-fs",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "xfsprogs",
            "stratisd",
            "stratis-cli",
            "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}",
            "vdo"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_10.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_10.yml"
}

TASK [fedora.linux_system_roles.storage : Check if system is ostree] ***********
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
Tuesday 22 July 2025 08:31:12 -0400 (0:00:00.366) 0:00:04.967 **********
ok: [managed-node15] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30
Tuesday 22 July 2025 08:31:14 -0400 (0:00:01.463) 0:00:06.430 **********
ok: [managed-node15] => {
    "ansible_facts": {
        "__storage_is_ostree": false
    },
    "changed": false
}
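
The loop above tries RedHat.yml, CentOS.yml, and CentOS_10.yml in turn and loads only the candidates that exist as files; here only CentOS_10.yml matched, and it is processed twice, which suggests two loop candidates resolve to the same file name. This is where blivet_package_list comes from, including the Jinja expression that swaps in libblockdev-s390 on s390x. A minimal sketch of the pattern; the file names and the "__vars_file is file" condition appear in the log, while the loop construction and variable plumbing are assumptions:

    - name: Set platform/version specific variables
      ansible.builtin.include_vars: "{{ __vars_file }}"
      loop:
        - "{{ ansible_facts['os_family'] }}.yml"
        - "{{ ansible_facts['distribution'] }}.yml"
        - "{{ ansible_facts['distribution'] }}_{{ ansible_facts['distribution_major_version'] }}.yml"
      vars:
        __vars_file: "{{ role_path }}/vars/{{ item }}"
      when: __vars_file is file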
TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5
Tuesday 22 July 2025 08:31:14 -0400 (0:00:00.147) 0:00:06.577 **********
ok: [managed-node15] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9
Tuesday 22 July 2025 08:31:14 -0400 (0:00:00.121) 0:00:06.699 **********
ok: [managed-node15] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
Tuesday 22 July 2025 08:31:14 -0400 (0:00:00.077) 0:00:06.777 **********
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node15

TASK [fedora.linux_system_roles.storage : Make sure blivet is available] *******
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
Tuesday 22 July 2025 08:31:15 -0400 (0:00:00.272) 0:00:07.050 **********
ok: [managed-node15] => {
    "changed": false,
    "rc": 0,
    "results": []
}

MSG:

Nothing to do

TASK [fedora.linux_system_roles.storage : Show storage_pools] ******************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9
Tuesday 22 July 2025 08:31:16 -0400 (0:00:01.715) 0:00:08.765 **********
ok: [managed-node15] => {
    "storage_pools | d([])": []
}

TASK [fedora.linux_system_roles.storage : Show storage_volumes] ****************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14
Tuesday 22 July 2025 08:31:16 -0400 (0:00:00.230) 0:00:08.995 **********
ok: [managed-node15] => {
    "storage_volumes | d([])": []
}

TASK [fedora.linux_system_roles.storage : Get required packages] ***************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
Tuesday 22 July 2025 08:31:17 -0400 (0:00:00.278) 0:00:09.274 **********
[WARNING]: Module invocation had junk after the JSON data: sys:1: DeprecationWarning: builtin type swigvarlink has no __module__ attribute
ok: [managed-node15] => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "leaves": [],
    "mounts": [],
    "packages": [],
    "pools": [],
    "volumes": []
}

TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32
Tuesday 22 July 2025 08:31:19 -0400 (0:00:02.103) 0:00:11.377 **********
included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node15

TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2
Tuesday 22 July 2025 08:31:19 -0400 (0:00:00.139) 0:00:11.517 **********
skipping: [managed-node15] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] ***
task path:
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13 Tuesday 22 July 2025 08:31:19 -0400 (0:00:00.124) 0:00:11.641 ********** skipping: [managed-node15] => { "changed": false, "false_condition": "install_copr | d(false) | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************ task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19 Tuesday 22 July 2025 08:31:19 -0400 (0:00:00.152) 0:00:11.794 ********** skipping: [managed-node15] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38 Tuesday 22 July 2025 08:31:19 -0400 (0:00:00.116) 0:00:11.911 ********** ok: [managed-node15] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [fedora.linux_system_roles.storage : Get service facts] ******************* task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52 Tuesday 22 July 2025 08:31:21 -0400 (0:00:01.132) 0:00:13.043 ********** ok: [managed-node15] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "apt-daily.service": { "name": "apt-daily.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": 
"cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": 
"systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fcoe.service": { "name": "fcoe.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "fips-crypto-policy-overlay.service": { "name": "fips-crypto-policy-overlay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "iscsi-shutdown.service": { "name": "iscsi-shutdown.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsi.service": { "name": "iscsi.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsid.service": { "name": "iscsid.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-activation-early.service": { 
"name": "lvm2-activation-early.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_multipath.service": { "name": "modprobe@dm_multipath.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "multipathd.service": { "name": "multipathd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": 
"systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "inactive", "status": "static" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "raid-check.service": { "name": "raid-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "rbdmap.service": { "name": "rbdmap.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", 
"status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-unix-local@.service": { "name": "sshd-unix-local@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd-vsock@.service": { "name": "sshd-vsock@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "stratis-fstab-setup-with-network@.service": { "name": "stratis-fstab-setup-with-network@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratis-fstab-setup@.service": { "name": "stratis-fstab-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratisd-min-postinitrd.service": { "name": "stratisd-min-postinitrd.service", "source": "systemd", "state": "inactive", "status": "static" }, "stratisd.service": { "name": "stratisd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": 
"systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": 
"systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": 
"systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-userdbd.service": { "name": "systemd-userdbd.service", "source": "systemd", "state": "running", "status": "indirect" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 Tuesday 22 July 2025 08:31:23 -0400 (0:00:02.698) 0:00:15.741 ********** ok: [managed-node15] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64 Tuesday 22 July 2025 08:31:23 -0400 (0:00:00.232) 0:00:15.974 ********** skipping: [managed-node15] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 Tuesday 22 July 2025 08:31:24 -0400 (0:00:00.090) 0:00:16.064 ********** ok: 
[managed-node15] => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "leaves": [],
    "mounts": [],
    "packages": [],
    "pools": [],
    "volumes": []
}

TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85
Tuesday 22 July 2025 08:31:25 -0400 (0:00:01.166) 0:00:17.231 **********
skipping: [managed-node15] => {
    "changed": false,
    "false_condition": "storage_udevadm_trigger | d(false)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ******
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92
Tuesday 22 July 2025 08:31:25 -0400 (0:00:00.275) 0:00:17.507 **********
ok: [managed-node15] => { "changed": false, "stat": { "atime": 1753187085.0013483, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "016bd7ce6cb6b233647ba6b5c21ac99bb7146610", "ctime": 1750750281.8033595, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4194435, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1750750281.8033595, "nlink": 1, "path": "/etc/fstab", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1344, "uid": 0, "version": "3162749339", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } }

TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97
Tuesday 22 July 2025 08:31:26 -0400 (0:00:00.814) 0:00:18.322 **********
skipping: [managed-node15] => {
    "changed": false,
    "false_condition": "blivet_output is changed",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115
Tuesday 22 July 2025 08:31:26 -0400 (0:00:00.193) 0:00:18.516 **********
skipping: [managed-node15] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Show blivet_output] ******************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121
Tuesday 22 July 2025 08:31:26 -0400 (0:00:00.128) 0:00:18.644 **********
ok: [managed-node15] => {
    "blivet_output": {
        "actions": [],
        "changed": false,
        "crypts": [],
        "failed": false,
        "leaves": [],
        "mounts": [],
        "packages": [],
        "pools": [],
        "volumes": []
    }
}

TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130
Tuesday 22 July 2025 08:31:26 -0400 (0:00:00.106) 0:00:18.751 **********
ok: [managed-node15] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}
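
This first pass through the role changed nothing: blivet_output above reports empty actions, mounts, crypts, and packages because storage_pools and storage_volumes were both empty. For the LUKS2 scenario this test is named for, a later invocation would pass an encrypted volume spec. A minimal sketch of what that looks like; the encryption parameter names follow the storage role's documented interface, but every concrete value is an assumption, since no such invocation appears in this excerpt:

    - name: Run the role with a LUKS2-encrypted volume
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.storage
      vars:
        storage_volumes:
          - name: foo                      # hypothetical volume name
            type: disk
            disks: "{{ unused_disks }}"    # hypothetical; 'Get unused disks' below tries to find these
            fs_type: xfs
            encryption: true
            encryption_luks_version: luks2
            encryption_password: "{{ test_password }}"   # hypothetical variable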
TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134
Tuesday 22 July 2025 08:31:26 -0400 (0:00:00.123) 0:00:18.875 **********
ok: [managed-node15] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] **************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150
Tuesday 22 July 2025 08:31:27 -0400 (0:00:00.155) 0:00:19.030 **********
skipping: [managed-node15] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161
Tuesday 22 July 2025 08:31:27 -0400 (0:00:00.256) 0:00:19.287 **********
skipping: [managed-node15] => {
    "changed": false,
    "false_condition": "blivet_output['mounts'] | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set up new/current mounts] ***********
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166
Tuesday 22 July 2025 08:31:27 -0400 (0:00:00.245) 0:00:19.532 **********
skipping: [managed-node15] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177
Tuesday 22 July 2025 08:31:27 -0400 (0:00:00.269) 0:00:19.801 **********
skipping: [managed-node15] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189
Tuesday 22 July 2025 08:31:27 -0400 (0:00:00.179) 0:00:19.980 **********
skipping: [managed-node15] => {
    "changed": false,
    "false_condition": "blivet_output['mounts'] | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197
Tuesday 22 July 2025 08:31:28 -0400 (0:00:00.320) 0:00:20.301 **********
ok: [managed-node15] => { "changed": false, "stat": { "atime": 1753187340.0390136, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1750749389.405, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4194436, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0600", "mtime": 1750749068.122, "nlink": 1, "path": "/etc/crypttab", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "1830666913", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } }
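
One detail worth noting in the stat result above: "size": 0 together with the checksum da39a3ee5e6b4b0d3255bfef95601890afd80709 (the SHA-1 of empty input) confirms that /etc/crypttab exists but is empty, i.e. no LUKS mappings are registered yet. A minimal sketch of the check itself; the path is shown above, the register name is a hypothetical stand-in:

    - name: Retrieve facts for the /etc/crypttab file
      ansible.builtin.stat:
        path: /etc/crypttab
      register: __storage_crypttab   # hypothetical name, not shown in the log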
TASK [fedora.linux_system_roles.storage : Manage /etc/crypttab to account for changes we just made] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202
Tuesday 22 July 2025 08:31:28 -0400 (0:00:00.684) 0:00:20.985 **********
skipping: [managed-node15] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Update facts] ************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224
Tuesday 22 July 2025 08:31:29 -0400 (0:00:00.093) 0:00:21.078 **********
ok: [managed-node15]

TASK [Get unused disks] ********************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml:76
Tuesday 22 July 2025 08:31:30 -0400 (0:00:01.298) 0:00:22.377 **********
included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml for managed-node15

TASK [Ensure test packages] ****************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2
Tuesday 22 July 2025 08:31:30 -0400 (0:00:00.205) 0:00:22.583 **********
ok: [managed-node15] => {
    "changed": false,
    "rc": 0,
    "results": []
}

MSG:

Nothing to do

TASK [Find unused disks in the system] *****************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11
Tuesday 22 July 2025 08:31:31 -0400 (0:00:01.314) 0:00:23.898 **********
ok: [managed-node15] => {
    "changed": false,
    "disks": "Unable to find unused disk",
    "info": [
        "Line: NAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\" LOG-SEC=\"512\"",
        "Line: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"",
        "Line type [part] is not disk: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"",
        "Line: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"xfs\" LOG-SEC=\"512\"",
        "Line type [part] is not disk: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"xfs\" LOG-SEC=\"512\"",
        "filename [xvda2] is a partition",
        "filename [xvda1] is a partition",
        "Disk [/dev/xvda] attrs [{'type': 'disk', 'size': '268435456000', 'fstype': '', 'ssize': '512'}] has partitions"
    ]
}

TASK [Debug why there are no unused disks] *************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20
Tuesday 22 July 2025 08:31:33 -0400 (0:00:01.339) 0:00:25.238 **********
ok: [managed-node15] => {
    "changed": false,
    "cmd": "set -x\nexec 1>&2\nlsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC\njournalctl -ex\n",
    "delta": "0:00:00.032770",
    "end": "2025-07-22 08:31:34.545737",
    "rc": 0,
    "start": "2025-07-22 08:31:34.512967"
}

STDERR:

+ exec
+ lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
NAME="/dev/xvda" TYPE="disk" SIZE="268435456000" FSTYPE="" LOG-SEC="512"
NAME="/dev/xvda1" TYPE="part" SIZE="1048576" FSTYPE="" LOG-SEC="512"
NAME="/dev/xvda2" TYPE="part" SIZE="268433341952" FSTYPE="xfs" LOG-SEC="512"
+ journalctl -ex
Jul 22 08:24:33 localhost systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. ░░ Subject: A start job for unit modprobe@efi_pstore.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit modprobe@efi_pstore.service has finished successfully. ░░ ░░ The job identifier is 152. Jul 22 08:24:33 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit modprobe@fuse.service has successfully entered the 'dead' state. Jul 22 08:24:33 localhost systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. ░░ Subject: A start job for unit modprobe@fuse.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit modprobe@fuse.service has finished successfully. ░░ ░░ The job identifier is 139. Jul 22 08:24:33 localhost systemd[1]: modprobe@loop.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit modprobe@loop.service has successfully entered the 'dead' state. Jul 22 08:24:33 localhost systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. ░░ Subject: A start job for unit modprobe@loop.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit modprobe@loop.service has finished successfully. ░░ ░░ The job identifier is 190. Jul 22 08:24:33 localhost systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. ░░ Subject: A start job for unit systemd-network-generator.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-network-generator.service has finished successfully. ░░ ░░ The job identifier is 184. Jul 22 08:24:33 localhost systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. ░░ Subject: A start job for unit systemd-remount-fs.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-remount-fs.service has finished successfully. ░░ ░░ The job identifier is 129. Jul 22 08:24:33 localhost systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... ░░ Subject: A start job for unit sys-fs-fuse-connections.mount has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sys-fs-fuse-connections.mount has begun execution. ░░ ░░ The job identifier is 138. Jul 22 08:24:33 localhost systemd[1]: systemd-hwdb-update.service - Rebuild Hardware Database was skipped because of an unmet condition check (ConditionNeedsUpdate=/etc). ░░ Subject: A start job for unit systemd-hwdb-update.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hwdb-update.service has finished successfully. ░░ ░░ The job identifier is 153. Jul 22 08:24:33 localhost systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... ░░ Subject: A start job for unit systemd-journal-flush.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-journal-flush.service has begun execution. ░░ ░░ The job identifier is 195. Jul 22 08:24:33 localhost systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
░░ Subject: A start job for unit systemd-pstore.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-pstore.service has finished successfully. ░░ ░░ The job identifier is 151. Jul 22 08:24:33 localhost systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... ░░ Subject: A start job for unit systemd-random-seed.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-random-seed.service has begun execution. ░░ ░░ The job identifier is 158. Jul 22 08:24:33 localhost systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. ░░ Subject: A start job for unit systemd-repart.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-repart.service has finished successfully. ░░ ░░ The job identifier is 189. Jul 22 08:24:33 localhost systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... ░░ Subject: A start job for unit systemd-tmpfiles-setup-dev-early.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-setup-dev-early.service has begun execution. ░░ ░░ The job identifier is 199. Jul 22 08:24:33 localhost systemd[1]: systemd-tpm2-setup.service - TPM SRK Setup was skipped because of an unmet condition check (ConditionSecurity=measured-uki). ░░ Subject: A start job for unit systemd-tpm2-setup.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tpm2-setup.service has finished successfully. ░░ ░░ The job identifier is 155. Jul 22 08:24:33 localhost systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. ░░ Subject: A start job for unit sys-fs-fuse-connections.mount has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sys-fs-fuse-connections.mount has finished successfully. ░░ ░░ The job identifier is 138. Jul 22 08:24:33 localhost systemd-journald[520]: Runtime Journal (/run/log/journal/ec26813eb00c31d347a70fa23c3a6af7) is 8M, max 69.2M, 61.2M free. ░░ Subject: Disk space used by the journal ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ Runtime Journal (/run/log/journal/ec26813eb00c31d347a70fa23c3a6af7) is currently using 8M. ░░ Maximum allowed usage is set to 69.2M. ░░ Leaving at least 34.6M free (of currently available 676.4M of disk space). ░░ Enforced usage limit is thus 69.2M, of which 61.2M are still available. ░░ ░░ The limits controlling how much disk space is used by the journal may ░░ be configured with SystemMaxUse=, SystemKeepFree=, SystemMaxFileSize=, ░░ RuntimeMaxUse=, RuntimeKeepFree=, RuntimeMaxFileSize= settings in ░░ /etc/systemd/journald.conf. See journald.conf(5) for details. Jul 22 08:24:33 localhost systemd-journald[520]: Received client request to flush runtime journal. Jul 22 08:24:33 localhost systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
░░ Subject: A start job for unit systemd-journal-flush.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-journal-flush.service has finished successfully. ░░ ░░ The job identifier is 195. Jul 22 08:24:33 localhost systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. ░░ Subject: A start job for unit systemd-udev-load-credentials.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-udev-load-credentials.service has finished successfully. ░░ ░░ The job identifier is 164. Jul 22 08:24:33 localhost systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. ░░ Subject: A start job for unit systemd-random-seed.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-random-seed.service has finished successfully. ░░ ░░ The job identifier is 158. Jul 22 08:24:33 localhost systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. ░░ Subject: A start job for unit systemd-sysctl.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-sysctl.service has finished successfully. ░░ ░░ The job identifier is 193. Jul 22 08:24:33 localhost systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. ░░ Subject: A start job for unit systemd-udev-trigger.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-udev-trigger.service has finished successfully. ░░ ░░ The job identifier is 162. Jul 22 08:24:33 localhost systemd[1]: Starting systemd-userdbd.service - User Database Manager... ░░ Subject: A start job for unit systemd-userdbd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-userdbd.service has begun execution. ░░ ░░ The job identifier is 294. Jul 22 08:24:33 localhost systemd[1]: Started systemd-userdbd.service - User Database Manager. ░░ Subject: A start job for unit systemd-userdbd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-userdbd.service has finished successfully. ░░ ░░ The job identifier is 294. Jul 22 08:24:33 localhost systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. ░░ Subject: A start job for unit systemd-tmpfiles-setup-dev-early.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-setup-dev-early.service has finished successfully. ░░ ░░ The job identifier is 199. Jul 22 08:24:33 localhost systemd[1]: systemd-sysusers.service - Create System Users was skipped because no trigger condition checks were met. ░░ Subject: A start job for unit systemd-sysusers.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-sysusers.service has finished successfully. ░░ ░░ The job identifier is 197. Jul 22 08:24:33 localhost systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
░░ Subject: A start job for unit systemd-tmpfiles-setup-dev.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-setup-dev.service has begun execution. ░░ ░░ The job identifier is 145. Jul 22 08:24:34 localhost systemd[1]: Finished lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. ░░ Subject: A start job for unit lvm2-monitor.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit lvm2-monitor.service has finished successfully. ░░ ░░ The job identifier is 171. Jul 22 08:24:34 localhost systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. ░░ Subject: A start job for unit systemd-tmpfiles-setup-dev.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-setup-dev.service has finished successfully. ░░ ░░ The job identifier is 145. Jul 22 08:24:34 localhost systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. ░░ Subject: A start job for unit local-fs-pre.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit local-fs-pre.target has finished successfully. ░░ ░░ The job identifier is 130. Jul 22 08:24:34 localhost systemd[1]: Reached target local-fs.target - Local File Systems. ░░ Subject: A start job for unit local-fs.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit local-fs.target has finished successfully. ░░ ░░ The job identifier is 127. Jul 22 08:24:34 localhost systemd[1]: Listening on systemd-bootctl.socket - Boot Entries Service Socket. ░░ Subject: A start job for unit systemd-bootctl.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-bootctl.socket has finished successfully. ░░ ░░ The job identifier is 213. Jul 22 08:24:34 localhost systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. ░░ Subject: A start job for unit systemd-sysext.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-sysext.socket has finished successfully. ░░ ░░ The job identifier is 207. Jul 22 08:24:34 localhost systemd[1]: selinux-autorelabel-mark.service - Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux). ░░ Subject: A start job for unit selinux-autorelabel-mark.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit selinux-autorelabel-mark.service has finished successfully. ░░ ░░ The job identifier is 154. Jul 22 08:24:34 localhost systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. ░░ Subject: A start job for unit systemd-binfmt.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-binfmt.service has finished successfully. ░░ ░░ The job identifier is 177. 
Jul 22 08:24:34 localhost systemd[1]: systemd-boot-random-seed.service - Update Boot Loader Random Seed was skipped because no trigger condition checks were met. ░░ Subject: A start job for unit systemd-boot-random-seed.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-boot-random-seed.service has finished successfully. ░░ ░░ The job identifier is 146. Jul 22 08:24:34 localhost systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. ░░ Subject: A start job for unit systemd-confext.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-confext.service has finished successfully. ░░ ░░ The job identifier is 134. Jul 22 08:24:34 localhost systemd[1]: systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/ was skipped because no trigger condition checks were met. ░░ Subject: A start job for unit systemd-sysext.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-sysext.service has finished successfully. ░░ ░░ The job identifier is 186. Jul 22 08:24:34 localhost systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... ░░ Subject: A start job for unit systemd-tmpfiles-setup.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-setup.service has begun execution. ░░ ░░ The job identifier is 183. Jul 22 08:24:34 localhost systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... ░░ Subject: A start job for unit systemd-udevd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-udevd.service has begun execution. ░░ ░░ The job identifier is 163. Jul 22 08:24:34 localhost systemd-udevd[566]: Using default interface naming scheme 'rhel-10.0'. Jul 22 08:24:34 localhost systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. ░░ Subject: A start job for unit systemd-tmpfiles-setup.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-setup.service has finished successfully. ░░ ░░ The job identifier is 183. Jul 22 08:24:34 localhost systemd[1]: Mounting var-lib-nfs-rpc_pipefs.mount - RPC Pipe File System... ░░ Subject: A start job for unit var-lib-nfs-rpc_pipefs.mount has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit var-lib-nfs-rpc_pipefs.mount has begun execution. ░░ ░░ The job identifier is 249. Jul 22 08:24:34 localhost systemd[1]: Starting auditd.service - Security Audit Logging Service... ░░ Subject: A start job for unit auditd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit auditd.service has begun execution. ░░ ░░ The job identifier is 231. Jul 22 08:24:34 localhost systemd[1]: ldconfig.service - Rebuild Dynamic Linker Cache was skipped because no trigger condition checks were met. 
░░ Subject: A start job for unit ldconfig.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit ldconfig.service has finished successfully. ░░ ░░ The job identifier is 178. Jul 22 08:24:34 localhost systemd[1]: Starting rpcbind.service - RPC Bind... ░░ Subject: A start job for unit rpcbind.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpcbind.service has begun execution. ░░ ░░ The job identifier is 276. Jul 22 08:24:34 localhost systemd[1]: systemd-firstboot.service - First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes). ░░ Subject: A start job for unit systemd-firstboot.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-firstboot.service has finished successfully. ░░ ░░ The job identifier is 169. Jul 22 08:24:34 localhost systemd[1]: first-boot-complete.target - First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes). ░░ Subject: A start job for unit first-boot-complete.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit first-boot-complete.target has finished successfully. ░░ ░░ The job identifier is 159. Jul 22 08:24:34 localhost systemd[1]: systemd-journal-catalog-update.service - Rebuild Journal Catalog was skipped because of an unmet condition check (ConditionNeedsUpdate=/var). ░░ Subject: A start job for unit systemd-journal-catalog-update.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-journal-catalog-update.service has finished successfully. ░░ ░░ The job identifier is 176. Jul 22 08:24:34 localhost systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... ░░ Subject: A start job for unit systemd-machine-id-commit.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-machine-id-commit.service has begun execution. ░░ ░░ The job identifier is 198. Jul 22 08:24:34 localhost systemd[1]: systemd-update-done.service - Update is Completed was skipped because no trigger condition checks were met. ░░ Subject: A start job for unit systemd-update-done.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-update-done.service has finished successfully. ░░ ░░ The job identifier is 182. Jul 22 08:24:34 localhost systemd[1]: etc-machine\x2did.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit etc-machine\x2did.mount has successfully entered the 'dead' state. Jul 22 08:24:34 localhost systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. ░░ Subject: A start job for unit systemd-machine-id-commit.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-machine-id-commit.service has finished successfully. ░░ ░░ The job identifier is 198. Jul 22 08:24:34 localhost kernel: RPC: Registered named UNIX socket transport module. Jul 22 08:24:34 localhost kernel: RPC: Registered udp transport module. 
Jul 22 08:24:34 localhost kernel: RPC: Registered tcp transport module. Jul 22 08:24:34 localhost kernel: RPC: Registered tcp-with-tls transport module. Jul 22 08:24:34 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 22 08:24:34 localhost systemd[1]: Mounted var-lib-nfs-rpc_pipefs.mount - RPC Pipe File System. ░░ Subject: A start job for unit var-lib-nfs-rpc_pipefs.mount has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit var-lib-nfs-rpc_pipefs.mount has finished successfully. ░░ ░░ The job identifier is 249. Jul 22 08:24:34 localhost systemd[1]: Reached target rpc_pipefs.target. ░░ Subject: A start job for unit rpc_pipefs.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc_pipefs.target has finished successfully. ░░ ░░ The job identifier is 248. Jul 22 08:24:34 localhost auditd[581]: No plugins found, not dispatching events Jul 22 08:24:34 localhost auditd[581]: Init complete, auditd 4.0.3 listening for events (startup state enable) Jul 22 08:24:34 localhost systemd[1]: Started auditd.service - Security Audit Logging Service. ░░ Subject: A start job for unit auditd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit auditd.service has finished successfully. ░░ ░░ The job identifier is 231. Jul 22 08:24:34 localhost systemd[1]: Starting audit-rules.service - Load Audit Rules... ░░ Subject: A start job for unit audit-rules.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit audit-rules.service has begun execution. ░░ ░░ The job identifier is 230. Jul 22 08:24:34 localhost systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... ░░ Subject: A start job for unit systemd-update-utmp.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-update-utmp.service has begun execution. ░░ ░░ The job identifier is 261. Jul 22 08:24:34 localhost systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. ░░ Subject: A start job for unit systemd-update-utmp.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-update-utmp.service has finished successfully. ░░ ░░ The job identifier is 261. Jul 22 08:24:34 localhost systemd[1]: Started rpcbind.service - RPC Bind. ░░ Subject: A start job for unit rpcbind.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpcbind.service has finished successfully. ░░ ░░ The job identifier is 276. Jul 22 08:24:34 localhost systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. ░░ Subject: A start job for unit systemd-udevd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-udevd.service has finished successfully. ░░ ░░ The job identifier is 163. Jul 22 08:24:34 localhost systemd[1]: Reached target sysinit.target - System Initialization. 
░░ Subject: A start job for unit sysinit.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sysinit.target has finished successfully. ░░ ░░ The job identifier is 123. Jul 22 08:24:34 localhost systemd[1]: Started dnf-makecache.timer - dnf makecache --timer. ░░ Subject: A start job for unit dnf-makecache.timer has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dnf-makecache.timer has finished successfully. ░░ ░░ The job identifier is 219. Jul 22 08:24:34 localhost systemd[1]: Started fstrim.timer - Discard unused filesystem blocks once a week. ░░ Subject: A start job for unit fstrim.timer has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit fstrim.timer has finished successfully. ░░ ░░ The job identifier is 226. Jul 22 08:24:34 localhost systemd[1]: Started logrotate.timer - Daily rotation of log files. ░░ Subject: A start job for unit logrotate.timer has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit logrotate.timer has finished successfully. ░░ ░░ The job identifier is 227. Jul 22 08:24:34 localhost systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. ░░ Subject: A start job for unit systemd-tmpfiles-clean.timer has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-tmpfiles-clean.timer has finished successfully. ░░ ░░ The job identifier is 225. Jul 22 08:24:34 localhost systemd[1]: Reached target timers.target - Timer Units. ░░ Subject: A start job for unit timers.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit timers.target has finished successfully. ░░ ░░ The job identifier is 218. Jul 22 08:24:34 localhost systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. ░░ Subject: A start job for unit dbus.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dbus.socket has finished successfully. ░░ ░░ The job identifier is 202. Jul 22 08:24:34 localhost systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). ░░ Subject: A start job for unit sshd-unix-local.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-unix-local.socket has finished successfully. ░░ ░░ The job identifier is 210. Jul 22 08:24:34 localhost systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). ░░ Subject: A start job for unit sshd-vsock.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-vsock.socket has finished successfully. ░░ ░░ The job identifier is 216. Jul 22 08:24:34 localhost systemd[1]: Reached target ssh-access.target - SSH Access Available. ░░ Subject: A start job for unit ssh-access.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit ssh-access.target has finished successfully. ░░ ░░ The job identifier is 217. 
Jul 22 08:24:34 localhost systemd[1]: Listening on sssd-kcm.socket - SSSD Kerberos Cache Manager responder socket. ░░ Subject: A start job for unit sssd-kcm.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sssd-kcm.socket has finished successfully. ░░ ░░ The job identifier is 203. Jul 22 08:24:34 localhost systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. ░░ Subject: A start job for unit systemd-hostnamed.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.socket has finished successfully. ░░ ░░ The job identifier is 208. Jul 22 08:24:34 localhost systemd[1]: Reached target sockets.target - Socket Units. ░░ Subject: A start job for unit sockets.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sockets.target has finished successfully. ░░ ░░ The job identifier is 201. Jul 22 08:24:34 localhost systemd[1]: Starting dbus-broker.service - D-Bus System Message Bus... ░░ Subject: A start job for unit dbus-broker.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dbus-broker.service has begun execution. ░░ ░░ The job identifier is 223. Jul 22 08:24:34 localhost systemd[1]: systemd-pcrphase-sysinit.service - TPM PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionSecurity=measured-uki). ░░ Subject: A start job for unit systemd-pcrphase-sysinit.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-pcrphase-sysinit.service has finished successfully. ░░ ░░ The job identifier is 173. Jul 22 08:24:34 localhost systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. ░░ Subject: A start job for unit dev-ttyS0.device has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dev-ttyS0.device has finished successfully. ░░ ░░ The job identifier is 253. Jul 22 08:24:34 localhost systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... ░░ Subject: A start job for unit modprobe@configfs.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit modprobe@configfs.service has begun execution. ░░ ░░ The job identifier is 301. Jul 22 08:24:34 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit modprobe@configfs.service has successfully entered the 'dead' state. Jul 22 08:24:34 localhost systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. ░░ Subject: A start job for unit modprobe@configfs.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit modprobe@configfs.service has finished successfully. ░░ ░░ The job identifier is 301. 
Jul 22 08:24:34 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input5 Jul 22 08:24:34 localhost kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 22 08:24:34 localhost augenrules[585]: /usr/sbin/augenrules: No change Jul 22 08:24:34 localhost augenrules[632]: No rules Jul 22 08:24:34 localhost augenrules[632]: enabled 1 Jul 22 08:24:34 localhost augenrules[632]: failure 1 Jul 22 08:24:34 localhost augenrules[632]: pid 581 Jul 22 08:24:34 localhost augenrules[632]: rate_limit 0 Jul 22 08:24:34 localhost augenrules[632]: backlog_limit 8192 Jul 22 08:24:34 localhost augenrules[632]: lost 0 Jul 22 08:24:34 localhost augenrules[632]: backlog 0 Jul 22 08:24:34 localhost augenrules[632]: backlog_wait_time 60000 Jul 22 08:24:34 localhost augenrules[632]: backlog_wait_time_actual 0 Jul 22 08:24:34 localhost augenrules[632]: enabled 1 Jul 22 08:24:34 localhost augenrules[632]: failure 1 Jul 22 08:24:34 localhost augenrules[632]: pid 581 Jul 22 08:24:34 localhost augenrules[632]: rate_limit 0 Jul 22 08:24:34 localhost augenrules[632]: backlog_limit 8192 Jul 22 08:24:34 localhost augenrules[632]: lost 0 Jul 22 08:24:34 localhost augenrules[632]: backlog 4 Jul 22 08:24:34 localhost augenrules[632]: backlog_wait_time 60000 Jul 22 08:24:34 localhost augenrules[632]: backlog_wait_time_actual 0 Jul 22 08:24:34 localhost augenrules[632]: enabled 1 Jul 22 08:24:34 localhost augenrules[632]: failure 1 Jul 22 08:24:34 localhost augenrules[632]: pid 581 Jul 22 08:24:34 localhost augenrules[632]: rate_limit 0 Jul 22 08:24:34 localhost augenrules[632]: backlog_limit 8192 Jul 22 08:24:34 localhost augenrules[632]: lost 0 Jul 22 08:24:34 localhost augenrules[632]: backlog 4 Jul 22 08:24:34 localhost augenrules[632]: backlog_wait_time 60000 Jul 22 08:24:34 localhost augenrules[632]: backlog_wait_time_actual 0 Jul 22 08:24:34 localhost systemd[1]: audit-rules.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit audit-rules.service has successfully entered the 'dead' state. Jul 22 08:24:34 localhost systemd[1]: Finished audit-rules.service - Load Audit Rules. ░░ Subject: A start job for unit audit-rules.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit audit-rules.service has finished successfully. ░░ ░░ The job identifier is 230. Jul 22 08:24:34 localhost kernel: cirrus-qemu 0000:00:02.0: vgaarb: deactivate vga console Jul 22 08:24:34 localhost kernel: Console: switching to colour dummy device 80x25 Jul 22 08:24:34 localhost kernel: [drm] Initialized cirrus-qemu 2.0.0 for 0000:00:02.0 on minor 0 Jul 22 08:24:34 localhost kernel: fbcon: cirrus-qemudrmf (fb0) is primary device Jul 22 08:24:34 localhost kernel: Console: switching to colour frame buffer device 128x48 Jul 22 08:24:34 localhost kernel: cirrus-qemu 0000:00:02.0: [drm] fb0: cirrus-qemudrmf frame buffer device Jul 22 08:24:34 localhost systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... ░░ Subject: A start job for unit systemd-vconsole-setup.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-vconsole-setup.service has begun execution. ░░ ░░ The job identifier is 308. 
Jul 22 08:24:34 localhost kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 655360 ms ovfl timer Jul 22 08:24:34 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-vconsole-setup.service has successfully entered the 'dead' state. Jul 22 08:24:34 localhost systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. ░░ Subject: A stop job for unit systemd-vconsole-setup.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit systemd-vconsole-setup.service has finished. ░░ ░░ The job identifier is 308 and the job result is done. Jul 22 08:24:34 localhost systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... ░░ Subject: A start job for unit systemd-vconsole-setup.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-vconsole-setup.service has begun execution. ░░ ░░ The job identifier is 308. Jul 22 08:24:34 localhost (udev-worker)[603]: Network interface NamePolicy= disabled on kernel command line. Jul 22 08:24:34 localhost systemd[1]: Started dbus-broker.service - D-Bus System Message Bus. ░░ Subject: A start job for unit dbus-broker.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dbus-broker.service has finished successfully. ░░ ░░ The job identifier is 223. Jul 22 08:24:34 localhost systemd[1]: Reached target basic.target - Basic System. ░░ Subject: A start job for unit basic.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit basic.target has finished successfully. ░░ ░░ The job identifier is 122. Jul 22 08:24:34 localhost dbus-broker-launch[620]: Ready Jul 22 08:24:34 localhost systemd[1]: Starting chronyd.service - NTP client/server... ░░ Subject: A start job for unit chronyd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit chronyd.service has begun execution. ░░ ░░ The job identifier is 277. Jul 22 08:24:34 localhost systemd[1]: Starting cloud-init-local.service - Cloud-init: Local Stage (pre-network)... ░░ Subject: A start job for unit cloud-init-local.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init-local.service has begun execution. ░░ ░░ The job identifier is 271. Jul 22 08:24:34 localhost systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... ░░ Subject: A start job for unit dracut-shutdown.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dracut-shutdown.service has begun execution. ░░ ░░ The job identifier is 124. Jul 22 08:24:35 localhost systemd[1]: Started irqbalance.service - irqbalance daemon. ░░ Subject: A start job for unit irqbalance.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit irqbalance.service has finished successfully. ░░ ░░ The job identifier is 259. Jul 22 08:24:35 localhost systemd[1]: Started rngd.service - Hardware RNG Entropy Gatherer Daemon. 
░░ Subject: A start job for unit rngd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rngd.service has finished successfully. ░░ ░░ The job identifier is 232. Jul 22 08:24:35 localhost systemd[1]: ssh-host-keys-migration.service - Update OpenSSH host key permissions was skipped because of an unmet condition check (ConditionPathExists=!/var/lib/.ssh-host-keys-migration). ░░ Subject: A start job for unit ssh-host-keys-migration.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit ssh-host-keys-migration.service has finished successfully. ░░ ░░ The job identifier is 239. Jul 22 08:24:35 localhost systemd[1]: sshd-keygen@ecdsa.service - OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). ░░ Subject: A start job for unit sshd-keygen@ecdsa.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen@ecdsa.service has finished successfully. ░░ ░░ The job identifier is 235. Jul 22 08:24:35 localhost systemd[1]: sshd-keygen@ed25519.service - OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). ░░ Subject: A start job for unit sshd-keygen@ed25519.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen@ed25519.service has finished successfully. ░░ ░░ The job identifier is 238. Jul 22 08:24:35 localhost systemd[1]: sshd-keygen@rsa.service - OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). ░░ Subject: A start job for unit sshd-keygen@rsa.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen@rsa.service has finished successfully. ░░ ░░ The job identifier is 237. Jul 22 08:24:35 localhost systemd[1]: Reached target sshd-keygen.target. ░░ Subject: A start job for unit sshd-keygen.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen.target has finished successfully. ░░ ░░ The job identifier is 234. Jul 22 08:24:35 localhost systemd[1]: sssd.service - System Security Services Daemon was skipped because no trigger condition checks were met. ░░ Subject: A start job for unit sssd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sssd.service has finished successfully. ░░ ░░ The job identifier is 264. Jul 22 08:24:35 localhost systemd[1]: Reached target nss-user-lookup.target - User and Group Name Lookups. ░░ Subject: A start job for unit nss-user-lookup.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit nss-user-lookup.target has finished successfully. ░░ ░░ The job identifier is 265. Jul 22 08:24:35 localhost systemd[1]: Starting systemd-logind.service - User Login Management... 
░░ Subject: A start job for unit systemd-logind.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-logind.service has begun execution. ░░ ░░ The job identifier is 273. Jul 22 08:24:35 localhost systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. ░░ Subject: A start job for unit dracut-shutdown.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dracut-shutdown.service has finished successfully. ░░ ░░ The job identifier is 124. Jul 22 08:24:35 localhost systemd-logind[661]: New seat seat0. ░░ Subject: A new seat seat0 is now available ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new seat seat0 has been configured and is now available. Jul 22 08:24:35 localhost systemd-logind[661]: Watching system buttons on /dev/input/event0 (Power Button) Jul 22 08:24:35 localhost systemd-logind[661]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 22 08:24:35 localhost systemd-logind[661]: Watching system buttons on /dev/input/event2 (AT Translated Set 2 keyboard) Jul 22 08:24:35 localhost systemd[1]: Started systemd-logind.service - User Login Management. ░░ Subject: A start job for unit systemd-logind.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-logind.service has finished successfully. ░░ ░░ The job identifier is 273. Jul 22 08:24:35 localhost systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. ░░ Subject: A start job for unit systemd-vconsole-setup.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-vconsole-setup.service has finished successfully. ░░ ░░ The job identifier is 308. Jul 22 08:24:35 localhost rngd[658]: Disabling 7: PKCS11 Entropy generator (pkcs11) Jul 22 08:24:35 localhost rngd[658]: Disabling 5: NIST Network Entropy Beacon (nist) Jul 22 08:24:35 localhost rngd[658]: Disabling 9: Qrypt quantum entropy beacon (qrypt) Jul 22 08:24:35 localhost rngd[658]: Disabling 10: Named pipe entropy input (namedpipe) Jul 22 08:24:35 localhost rngd[658]: Initializing available sources Jul 22 08:24:35 localhost rngd[658]: [hwrng ]: Initialization Failed Jul 22 08:24:35 localhost rngd[658]: [rdrand]: Enabling RDRAND rng support Jul 22 08:24:35 localhost rngd[658]: [rdrand]: Initialized Jul 22 08:24:35 localhost rngd[658]: [jitter]: JITTER timeout set to 5 sec Jul 22 08:24:35 localhost rngd[658]: [jitter]: Initializing AES buffer Jul 22 08:24:35 localhost chronyd[677]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Jul 22 08:24:35 localhost chronyd[677]: Frequency 0.000 +/- 1000000.000 ppm read from /var/lib/chrony/drift Jul 22 08:24:35 localhost chronyd[677]: Loaded seccomp filter (level 2) Jul 22 08:24:35 localhost systemd[1]: Started chronyd.service - NTP client/server. ░░ Subject: A start job for unit chronyd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit chronyd.service has finished successfully. ░░ ░░ The job identifier is 277. 
Jul 22 08:24:40 localhost rngd[658]: [jitter]: Unable to obtain AES key, disabling JITTER source Jul 22 08:24:40 localhost rngd[658]: [jitter]: Initialization Failed Jul 22 08:24:40 localhost rngd[658]: Process privileges have been dropped to 2:2 Jul 22 08:24:40 localhost cloud-init[682]: Cloud-init v. 24.4-5.el10 running 'init-local' at Tue, 22 Jul 2025 12:24:40 +0000. Up 17.01 seconds. Jul 22 08:24:40 localhost dhcpcd[685]: dhcpcd-10.0.6 starting Jul 22 08:24:40 localhost kernel: 8021q: 802.1Q VLAN Support v1.8 Jul 22 08:24:40 localhost systemd[1]: Listening on systemd-rfkill.socket - Load/Save RF Kill Switch Status /dev/rfkill Watch. ░░ Subject: A start job for unit systemd-rfkill.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-rfkill.socket has finished successfully. ░░ ░░ The job identifier is 317. Jul 22 08:24:41 localhost kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database Jul 22 08:24:41 localhost kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7' Jul 22 08:24:41 localhost kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600' Jul 22 08:24:41 localhost dhcpcd[688]: DUID 00:01:00:01:30:12:3f:89:12:54:47:e6:76:21 Jul 22 08:24:41 localhost dhcpcd[688]: eth0: IAID 47:e6:76:21 Jul 22 08:24:41 localhost kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 Jul 22 08:24:41 localhost kernel: cfg80211: failed to load regulatory.db Jul 22 08:24:42 localhost dhcpcd[688]: eth0: soliciting a DHCP lease Jul 22 08:24:42 localhost dhcpcd[688]: eth0: offered 10.31.10.48 from 10.31.8.1 Jul 22 08:24:42 localhost dhcpcd[688]: eth0: leased 10.31.10.48 for 3600 seconds Jul 22 08:24:42 localhost dhcpcd[688]: eth0: adding route to 10.31.8.0/22 Jul 22 08:24:42 localhost dhcpcd[688]: eth0: adding default route via 10.31.8.1 Jul 22 08:24:42 localhost dhcpcd[688]: control command: dhcpcd --dumplease --ipv4only eth0 Jul 22 08:24:42 localhost systemd[1]: Starting systemd-hostnamed.service - Hostname Service... ░░ Subject: A start job for unit systemd-hostnamed.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.service has begun execution. ░░ ░░ The job identifier is 326. Jul 22 08:24:42 localhost systemd[1]: Started systemd-hostnamed.service - Hostname Service. ░░ Subject: A start job for unit systemd-hostnamed.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.service has finished successfully. ░░ ░░ The job identifier is 326. Jul 22 08:24:42 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd-hostnamed[709]: Hostname set to <ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com> (static) Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished cloud-init-local.service - Cloud-init: Local Stage (pre-network). ░░ Subject: A start job for unit cloud-init-local.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init-local.service has finished successfully. ░░ ░░ The job identifier is 271. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target network-pre.target - Preparation for Network.
░░ Subject: A start job for unit network-pre.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit network-pre.target has finished successfully. ░░ ░░ The job identifier is 185. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting NetworkManager.service - Network Manager... ░░ Subject: A start job for unit NetworkManager.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager.service has begun execution. ░░ ░░ The job identifier is 222. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.5112] NetworkManager (version 1.53.91-1.el10) is starting... (boot:94c1ff1b-9922-4755-b06c-efb4a8dee671) Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.5114] Read config: /etc/NetworkManager/NetworkManager.conf, /etc/NetworkManager/conf.d/30-cloud-init-ip6-addr-gen-mode.conf Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.5591] manager[0x5565ddb349c0]: monitoring kernel firmware directory '/lib/firmware'. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.5623] hostname: hostname: using hostnamed Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.5623] hostname: static hostname changed from (none) to "ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com" Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.5632] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.5639] manager[0x5565ddb349c0]: rfkill: Wi-Fi hardware radio set enabled Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.5639] manager[0x5565ddb349c0]: rfkill: WWAN hardware radio set enabled Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.5711] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.5712] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.5712] manager: Networking is enabled by state file Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.5720] settings: Loaded settings plugin: keyfile (internal) Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service... ░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has begun execution. ░░ ░░ The job identifier is 404.
Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6068] dhcp: init: Using DHCP client 'internal' Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6070] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6081] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6086] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6092] device (lo): Activation: starting connection 'lo' (cb45aacf-cbd2-4a3a-9c98-0519a53dc374) Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6099] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6102] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started NetworkManager.service - Network Manager. ░░ Subject: A start job for unit NetworkManager.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager.service has finished successfully. ░░ ░░ The job identifier is 222. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6135] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target network.target - Network. ░░ Subject: A start job for unit network.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit network.target has finished successfully. ░░ ░░ The job identifier is 224.
Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6157] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6158] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6160] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6161] device (eth0): carrier: link connected Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6164] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6168] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6173] policy: auto-activating connection 'cloud-init eth0' (1dd9a779-d327-56e1-8454-c65e2556c12c) Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6177] device (eth0): Activation: starting connection 'cloud-init eth0' (1dd9a779-d327-56e1-8454-c65e2556c12c) Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6178] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6180] manager: NetworkManager state is now CONNECTING Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting NetworkManager-wait-online.service - Network Manager Wait Online... ░░ Subject: A start job for unit NetworkManager-wait-online.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-wait-online.service has begun execution. ░░ ░░ The job identifier is 221. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6203] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6208] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6211] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting gssproxy.service - GSSAPI Proxy Daemon... ░░ Subject: A start job for unit gssproxy.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit gssproxy.service has begun execution. ░░ ░░ The job identifier is 246.
Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: <info>  [1753187083.6270] dhcp4 (eth0): state changed new lease, address=10.31.10.48, acd pending Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started gssproxy.service - GSSAPI Proxy Daemon. ░░ Subject: A start job for unit gssproxy.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit gssproxy.service has finished successfully. ░░ ░░ The job identifier is 246. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: rpc-gssd.service - RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). ░░ Subject: A start job for unit rpc-gssd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc-gssd.service has finished successfully. ░░ ░░ The job identifier is 247. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target nfs-client.target - NFS client services. ░░ Subject: A start job for unit nfs-client.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit nfs-client.target has finished successfully. ░░ ░░ The job identifier is 242. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. ░░ Subject: A start job for unit remote-fs-pre.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit remote-fs-pre.target has finished successfully. ░░ ░░ The job identifier is 243. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. ░░ Subject: A start job for unit remote-cryptsetup.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit remote-cryptsetup.target has finished successfully. ░░ ░░ The job identifier is 281. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target remote-fs.target - Remote File Systems. ░░ Subject: A start job for unit remote-fs.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit remote-fs.target has finished successfully. ░░ ░░ The job identifier is 241. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: systemd-pcrphase.service - TPM PCR Barrier (User) was skipped because of an unmet condition check (ConditionSecurity=measured-uki). ░░ Subject: A start job for unit systemd-pcrphase.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-pcrphase.service has finished successfully. ░░ ░░ The job identifier is 161. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. ░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully.
░░ ░░ The job identifier is 404. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: [1753187083.7273] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: [1753187083.7277] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: [1753187083.7285] device (lo): Activation: successful, device activated. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: [1753187083.7860] dhcp4 (eth0): state changed new lease, address=10.31.10.48 Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: [1753187083.7871] policy: set 'cloud-init eth0' (eth0) as default for IPv4 routing and DNS Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: [1753187083.7917] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: [1753187083.7946] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: [1753187083.7953] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full') Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: [1753187083.7958] manager: NetworkManager state is now CONNECTED_SITE Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: [1753187083.7974] device (eth0): Activation: successful, device activated. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: [1753187083.7993] manager: NetworkManager state is now CONNECTED_GLOBAL Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com NetworkManager[716]: [1753187083.7997] manager: startup complete Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished NetworkManager-wait-online.service - Network Manager Wait Online. ░░ Subject: A start job for unit NetworkManager-wait-online.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-wait-online.service has finished successfully. ░░ ░░ The job identifier is 221. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting cloud-init.service - Cloud-init: Network Stage... ░░ Subject: A start job for unit cloud-init.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init.service has begun execution. ░░ ░░ The job identifier is 270. Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com chronyd[677]: Added source 10.11.160.238 Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com chronyd[677]: Added source 10.18.100.10 Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com chronyd[677]: Added source 10.2.32.37 Jul 22 08:24:43 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com chronyd[677]: Added source 10.2.32.38 Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: Cloud-init v. 
24.4-5.el10 running 'init' at Tue, 22 Jul 2025 12:24:44 +0000. Up 20.49 seconds.
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: |  eth0  | True |         10.31.10.48          | 255.255.252.0 | global | 12:54:47:e6:76:21 |
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: |  eth0  | True | fe80::1054:47ff:fee6:7621/64 |       .       |  link  | 12:54:47:e6:76:21 |
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: ++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: +-------+-------------+-----------+---------------+-----------+-------+
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: | Route | Destination |  Gateway  |    Genmask    | Interface | Flags |
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: +-------+-------------+-----------+---------------+-----------+-------+
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: |   0   |   0.0.0.0   | 10.31.8.1 |    0.0.0.0    |    eth0   |   UG  |
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: |   1   |  10.31.8.0  |  0.0.0.0  | 255.255.252.0 |    eth0   |   U   |
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: +-------+-------------+-----------+---------------+-----------+-------+
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: +-------+-------------+---------+-----------+-------+
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: +-------+-------------+---------+-----------+-------+
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: |   0   |  fe80::/64  |    ::   |    eth0   |   U   |
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: |   2   |  multicast  |    ::   |    eth0   |   U   |
Jul 22 08:24:44 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: ci-info: +-------+-------------+---------+-----------+-------+
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: Generating public/private rsa key pair.
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: The key fingerprint is:
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: SHA256:X/3nw8csp9oXlPtc14OsbFRpm7714NU23jYx9bitqIU root@ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: The key's randomart image is:
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: +---[RSA 3072]----+
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | |
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | |
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | . .|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | = o.|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | S = *.=|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | . + =.B*|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | E + oO#|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | = ==O/|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | o.oo*OB|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: +----[SHA256]-----+
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: Generating public/private ecdsa key pair.
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: The key fingerprint is:
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: SHA256:3Eq7ost1oQwMuf7z+lf0AMNeroZLR6Lj52m+1wux7bE root@ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: The key's randomart image is:
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: +---[ECDSA 256]---+
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | . |
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | . + . |
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | o . = |
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | + ..o.+ |
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | . o. +So.o |
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | . oooo+B. . |
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | .. o++*oo |
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | oo.=oo+.o |
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | *%B=. E. |
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: +----[SHA256]-----+
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: Generating public/private ed25519 key pair.
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: The key fingerprint is:
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: SHA256:bO7dTmZ8agzxQ/1chHdqm5pze2vOXmVlHLakyk5aeic root@ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: The key's randomart image is:
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: +--[ED25519 256]--+
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | = |
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | =.*|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | o ==|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | . o o +.o|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | S O . =+|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | o B.o o.+|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | .o E==. .|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | . ..=Ooo.o|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: | . .o+o=B.|
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[804]: +----[SHA256]-----+
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished cloud-init.service - Cloud-init: Network Stage.
░░ Subject: A start job for unit cloud-init.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit cloud-init.service has finished successfully.
░░
░░ The job identifier is 270.
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target cloud-config.target - Cloud-config availability.
░░ Subject: A start job for unit cloud-config.target has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit cloud-config.target has finished successfully.
░░
░░ The job identifier is 269.
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target network-online.target - Network is Online.
░░ Subject: A start job for unit network-online.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit network-online.target has finished successfully. ░░ ░░ The job identifier is 220. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting cloud-config.service - Cloud-init: Config Stage... ░░ Subject: A start job for unit cloud-config.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-config.service has begun execution. ░░ ░░ The job identifier is 268. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting kdump.service - Crash recovery kernel arming... ░░ Subject: A start job for unit kdump.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit kdump.service has begun execution. ░░ ░░ The job identifier is 263. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting restraintd.service - The restraint harness.... ░░ Subject: A start job for unit restraintd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit restraintd.service has begun execution. ░░ ░░ The job identifier is 266. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting rpc-statd-notify.service - Notify NFS peers of a restart... ░░ Subject: A start job for unit rpc-statd-notify.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc-statd-notify.service has begun execution. ░░ ░░ The job identifier is 244. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting rsyslog.service - System Logging Service... ░░ Subject: A start job for unit rsyslog.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rsyslog.service has begun execution. ░░ ░░ The job identifier is 240. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting sshd.service - OpenSSH server daemon... ░░ Subject: A start job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 233. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sm-notify[890]: Version 2.8.3 starting Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... ░░ Subject: A start job for unit systemd-user-sessions.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-user-sessions.service has begun execution. ░░ ░░ The job identifier is 280. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started restraintd.service - The restraint harness.. ░░ Subject: A start job for unit restraintd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit restraintd.service has finished successfully. ░░ ░░ The job identifier is 266. 
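[editor's note] The ci-info tables above are internally consistent: eth0 holds 10.31.10.48 with netmask 255.255.252.0 (a /22), the connected 'U' route covers 10.31.8.0/22, and the default gateway 10.31.8.1 sits inside that subnet. A quick check with Python's ipaddress module, shown as a sketch:

    import ipaddress

    iface = ipaddress.ip_interface("10.31.10.48/255.255.252.0")
    gateway = ipaddress.ip_address("10.31.8.1")

    print(iface.network)             # 10.31.8.0/22  (the connected 'U' route)
    print(iface.network.netmask)     # 255.255.252.0 (the Genmask column)
    print(gateway in iface.network)  # True: the default gateway is on-link
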
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started rpc-statd-notify.service - Notify NFS peers of a restart. ░░ Subject: A start job for unit rpc-statd-notify.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc-statd-notify.service has finished successfully. ░░ ░░ The job identifier is 244. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd[894]: Server listening on 0.0.0.0 port 22. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd[894]: Server listening on :: port 22. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started sshd.service - OpenSSH server daemon. ░░ Subject: A start job for unit sshd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has finished successfully. ░░ ░░ The job identifier is 233. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. ░░ Subject: A start job for unit systemd-user-sessions.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-user-sessions.service has finished successfully. ░░ ░░ The job identifier is 280. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started crond.service - Command Scheduler. ░░ Subject: A start job for unit crond.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit crond.service has finished successfully. ░░ ░░ The job identifier is 229. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started getty@tty1.service - Getty on tty1. ░░ Subject: A start job for unit getty@tty1.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit getty@tty1.service has finished successfully. ░░ ░░ The job identifier is 257. 
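[editor's note] The irqbalance messages that follow are harmless on this kind of guest: irqbalance tries to rewrite /proc/irq/<n>/smp_affinity, the kernel refuses for interrupts it will not migrate (the timer IRQ 0 and, plausibly on a paravirtualized instance like this one, event-channel IRQs), and irqbalance then marks each refused IRQ unmanaged and moves on. A sketch of the same probe, assuming Linux and root; exactly which errno comes back is kernel- and IRQ-dependent:

    import os

    def probe_irq(irq: int) -> str:
        """Try to write an IRQ's current affinity mask back to itself,
        which succeeds only if the kernel allows migrating that IRQ."""
        path = f"/proc/irq/{irq}/smp_affinity"
        try:
            with open(path) as f:
                mask = f.read().strip()
            with open(path, "w") as f:
                f.write(mask)
            return f"IRQ {irq}: affinity {mask} is writable"
        except OSError as e:
            # irqbalance logs these as 'Cannot change IRQ N affinity: ...'
            return f"IRQ {irq}: {e.strerror}"

    for irq in (0, 48):
        print(probe_irq(irq))
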
Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 0 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 0 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 48 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 48 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 49 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 49 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 50 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 50 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 51 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 51 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 52 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 52 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 53 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 53 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 54 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 54 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 55 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 55 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 56 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 56 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 57 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 57 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 58 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 58 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: Cannot change IRQ 59 affinity: Permission denied Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com irqbalance[657]: IRQ 59 affinity is now unmanaged Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
░░ Subject: A start job for unit serial-getty@ttyS0.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit serial-getty@ttyS0.service has finished successfully. ░░ ░░ The job identifier is 252. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target getty.target - Login Prompts. ░░ Subject: A start job for unit getty.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit getty.target has finished successfully. ░░ ░░ The job identifier is 251. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com crond[904]: (CRON) STARTUP (1.7.0) Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com crond[904]: (CRON) INFO (Syslog will be used instead of sendmail.) Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com crond[904]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 38% if used.) Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com crond[904]: (CRON) INFO (running with inotify support) Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com rsyslogd[892]: [origin software="rsyslogd" swVersion="8.2506.0-1.el10" x-pid="892" x-info="https://www.rsyslog.com"] start Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started rsyslog.service - System Logging Service. ░░ Subject: A start job for unit rsyslog.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rsyslog.service has finished successfully. ░░ ░░ The job identifier is 240. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target multi-user.target - Multi-User System. ░░ Subject: A start job for unit multi-user.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit multi-user.target has finished successfully. ░░ ░░ The job identifier is 121. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... ░░ Subject: A start job for unit systemd-update-utmp-runlevel.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-update-utmp-runlevel.service has begun execution. ░░ ░░ The job identifier is 260. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-update-utmp-runlevel.service has successfully entered the 'dead' state. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. ░░ Subject: A start job for unit systemd-update-utmp-runlevel.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-update-utmp-runlevel.service has finished successfully. ░░ ░░ The job identifier is 260. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com rsyslogd[892]: imjournal: journal files changed, reloading... 
[v8.2506.0-1.el10 try https://www.rsyslog.com/e/0 ] Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[948]: Cloud-init v. 24.4-5.el10 running 'modules:config' at Tue, 22 Jul 2025 12:24:45 +0000. Up 22.23 seconds. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Stopping sshd.service - OpenSSH server daemon... ░░ Subject: A stop job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 507. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd[894]: Received signal 15; terminating. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: sshd.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit sshd.service has successfully entered the 'dead' state. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Stopped sshd.service - OpenSSH server daemon. ░░ Subject: A stop job for unit sshd.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit sshd.service has finished. ░░ ░░ The job identifier is 507 and the job result is done. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting sshd.service - OpenSSH server daemon... ░░ Subject: A start job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 507. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com restraintd[897]: Listening on http://localhost:8081 Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd[952]: Server listening on 0.0.0.0 port 22. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd[952]: Server listening on :: port 22. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started sshd.service - OpenSSH server daemon. ░░ Subject: A start job for unit sshd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has finished successfully. ░░ ░░ The job identifier is 507. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished cloud-config.service - Cloud-init: Config Stage. ░░ Subject: A start job for unit cloud-config.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-config.service has finished successfully. ░░ ░░ The job identifier is 268. Jul 22 08:24:45 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting cloud-final.service - Cloud-init: Final Stage... ░░ Subject: A start job for unit cloud-final.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-final.service has begun execution. ░░ ░░ The job identifier is 272. Jul 22 08:24:46 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[989]: Cloud-init v. 24.4-5.el10 running 'modules:final' at Tue, 22 Jul 2025 12:24:46 +0000. Up 22.67 seconds. 
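[editor's note] Each cloud-init stage logs its own uptime ('Up 20.49 seconds' for init, 22.23 for modules:config, 22.67 for modules:final), so stage-to-stage gaps can be read straight off the journal. A small sketch; the regex is an assumption tailored to the lines shown here:

    import re

    UP = re.compile(r"running '([^']+)' at .*?\. Up ([0-9.]+) seconds")

    lines = [
        "Cloud-init v. 24.4-5.el10 running 'init' at Tue, 22 Jul 2025 12:24:44 +0000. Up 20.49 seconds.",
        "Cloud-init v. 24.4-5.el10 running 'modules:config' at Tue, 22 Jul 2025 12:24:45 +0000. Up 22.23 seconds.",
        "Cloud-init v. 24.4-5.el10 running 'modules:final' at Tue, 22 Jul 2025 12:24:46 +0000. Up 22.67 seconds.",
    ]

    stages = [(m.group(1), float(m.group(2))) for m in map(UP.search, lines)]
    for (prev, t0), (cur, t1) in zip(stages, stages[1:]):
        print(f"{prev} -> {cur}: {t1 - t0:.2f}s")
    # init -> modules:config: 1.74s
    # modules:config -> modules:final: 0.44s
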
Jul 22 08:24:46 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[991]: ############################################################# Jul 22 08:24:46 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[992]: -----BEGIN SSH HOST KEY FINGERPRINTS----- Jul 22 08:24:46 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[1000]: 256 SHA256:3Eq7ost1oQwMuf7z+lf0AMNeroZLR6Lj52m+1wux7bE root@ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com (ECDSA) Jul 22 08:24:46 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[1007]: 256 SHA256:bO7dTmZ8agzxQ/1chHdqm5pze2vOXmVlHLakyk5aeic root@ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com (ED25519) Jul 22 08:24:46 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[1012]: 3072 SHA256:X/3nw8csp9oXlPtc14OsbFRpm7714NU23jYx9bitqIU root@ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com (RSA) Jul 22 08:24:46 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[1016]: -----END SSH HOST KEY FINGERPRINTS----- Jul 22 08:24:46 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[1020]: ############################################################# Jul 22 08:24:46 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com cloud-init[989]: Cloud-init v. 24.4-5.el10 finished at Tue, 22 Jul 2025 12:24:46 +0000. Datasource DataSourceEc2Local. Up 22.78 seconds Jul 22 08:24:46 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished cloud-final.service - Cloud-init: Final Stage. ░░ Subject: A start job for unit cloud-final.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-final.service has finished successfully. ░░ ░░ The job identifier is 272. Jul 22 08:24:46 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target cloud-init.target - Cloud-init target. ░░ Subject: A start job for unit cloud-init.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init.target has finished successfully. ░░ ░░ The job identifier is 267. Jul 22 08:24:46 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com kdumpctl[909]: kdump: Detected change(s) in the following file(s): /etc/fstab Jul 22 08:24:48 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com kernel: block xvda: the capability attribute has been deprecated. 
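[editor's note] The SHA256 fingerprints cloud-init prints above are the unpadded base64 of a SHA-256 digest over the binary key blob from the .pub file, so they can be recomputed independently. A minimal sketch, assuming it runs on the host with those key files in place:

    import base64
    import hashlib

    def ssh_sha256_fingerprint(pub_line: str) -> str:
        """Recompute an OpenSSH SHA256 fingerprint: sha256 over the decoded
        key blob, base64-encoded with the trailing '=' padding stripped."""
        blob = base64.b64decode(pub_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # e.g. against the ed25519 host key generated earlier in this log:
    with open("/etc/ssh/ssh_host_ed25519_key.pub") as f:
        print(ssh_sha256_fingerprint(f.read()))
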
Jul 22 08:24:48 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com kdumpctl[909]: kdump: Rebuilding /boot/initramfs-6.12.0-98.el10.x86_64kdump.img Jul 22 08:24:49 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1425]: dracut-105-4.el10 Jul 22 08:24:49 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1428]: Executing: /usr/bin/dracut --list-modules Jul 22 08:24:49 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1484]: dracut-105-4.el10 Jul 22 08:24:49 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1487]: Executing: /usr/bin/dracut --list-modules Jul 22 08:24:49 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1548]: dracut-105-4.el10 Jul 22 08:24:49 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics --aggressive-strip --mount "/dev/disk/by-uuid/0a4c0384-ac05-49a1-bf2b-0105495224f1 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --add squash-erofs --squash-compressor lzma --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-6.12.0-98.el10.x86_64kdump.img 6.12.0-98.el10.x86_64 Jul 22 08:24:49 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com chronyd[677]: Selected source 10.2.32.37 Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-bsod' will not be installed, because command '/usr/lib/systemd/systemd-bsod' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-pcrphase' will not be installed, because command '/usr/lib/systemd/systemd-pcrphase' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-portabled' will not be installed, because command 'portablectl' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-portabled' will not be installed, because command '/usr/lib/systemd/systemd-portabled' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'connman' will not be installed, because command 'connmand' could not be found! 
Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'connman' will not be installed, because command 'connmanctl' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'connman' will not be installed, because command 'connmand-wait-online' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'plymouth' will not be installed, because it's in the list to be omitted! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'btrfs' will not be installed, because command 'btrfs' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'dmraid' will not be installed, because command 'dmraid' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com chronyd[677]: Received KoD RATE from 172.235.32.243 Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'mdraid' will not be installed, because command 'mdadm' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'multipath' will not be installed, because command 'multipath' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'crypt-gpg' will not be installed, because command 'gpg' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'pcsc' will not be installed, because command 'pcscd' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'cifs' will not be installed, because command 'mount.cifs' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'hwdb' will not be installed, because it's in the list to be omitted! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'iscsi' will not be installed, because command 'iscsiadm' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'iscsi' will not be installed, because command 'iscsid' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'nvmf' will not be installed, because command 'nvme' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'resume' will not be installed, because it's in the list to be omitted! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'squash-squashfs' will not be installed, because command 'mksquashfs' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'squash-squashfs' will not be installed, because command 'unsquashfs' could not be found! Jul 22 08:24:50 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'biosdevname' will not be installed, because command 'biosdevname' could not be found! 
Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'busybox' will not be installed, because command 'busybox' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'earlykdump' will not be installed, because it's in the list to be omitted! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-bsod' will not be installed, because command '/usr/lib/systemd/systemd-bsod' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-pcrphase' will not be installed, because command '/usr/lib/systemd/systemd-pcrphase' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-portabled' will not be installed, because command 'portablectl' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-portabled' will not be installed, because command '/usr/lib/systemd/systemd-portabled' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'connman' will not be installed, because command 'connmand' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'connman' will not be installed, because command 'connmanctl' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'connman' will not be installed, because command 'connmand-wait-online' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'btrfs' will not be installed, because command 'btrfs' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'dmraid' will not be installed, because command 'dmraid' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'mdraid' will not be installed, because command 'mdadm' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'multipath' will not be installed, because command 'multipath' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'crypt-gpg' will not be installed, because command 'gpg' could not be found! 
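[editor's note] Every 'will not be installed' line in this stretch is the same test repeated: dracut pulls a module into the initramfs only if the commands it wraps exist on the host (and skips modules explicitly omitted, like plymouth and resume). An equivalent lookup in Python, as a sketch; the command list is copied from the messages around here:

    import shutil

    # commands dracut looked for on this host, per the log above
    required = {
        "btrfs": ["btrfs"],
        "mdraid": ["mdadm"],
        "multipath": ["multipath"],
        "iscsi": ["iscsi-iname", "iscsiadm", "iscsid"],
        "nvmf": ["nvme"],
        "busybox": ["busybox"],
    }

    for module, commands in required.items():
        missing = [c for c in commands if shutil.which(c) is None]
        if missing:
            print(f"Module '{module}' will not be installed, "
                  f"missing: {', '.join(missing)}")
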
Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'pcsc' will not be installed, because command 'pcscd' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'cifs' will not be installed, because command 'mount.cifs' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'iscsi' will not be installed, because command 'iscsiadm' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'iscsi' will not be installed, because command 'iscsid' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'nvmf' will not be installed, because command 'nvme' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'squash-squashfs' will not be installed, because command 'mksquashfs' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'squash-squashfs' will not be installed, because command 'unsquashfs' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Module 'busybox' will not be installed, because command 'busybox' could not be found! Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: bash *** Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: shell-interpreter *** Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: systemd *** Jul 22 08:24:51 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: fips *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: fips-crypto-policies *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: systemd-ask-password *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: systemd-initrd *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: systemd-journald *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: systemd-modules-load *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: systemd-sysctl *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: systemd-sysusers *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: systemd-tmpfiles *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: systemd-udevd *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: rngd *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: i18n *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: drm *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com 
dracut[1551]: *** Including module: prefixdevname *** Jul 22 08:24:52 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: kernel-modules *** Jul 22 08:24:53 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: kernel-modules-extra *** Jul 22 08:24:53 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: kernel-modules-extra: configuration source "/run/depmod.d" does not exist Jul 22 08:24:53 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: kernel-modules-extra: configuration source "/lib/depmod.d" does not exist Jul 22 08:24:53 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf" Jul 22 08:24:53 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories Jul 22 08:24:53 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: fstab-sys *** Jul 22 08:24:53 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: rootfs-block *** Jul 22 08:24:53 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: squash-erofs *** Jul 22 08:24:53 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: terminfo *** Jul 22 08:24:53 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: udev-rules *** Jul 22 08:24:53 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Jul 22 08:24:53 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: dracut-systemd *** Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: usrmount *** Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: base *** Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: fs-lib *** Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: kdumpbase *** Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: memstrack *** Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: microcode_ctl-fw_dir_override *** Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: microcode_ctl module: mangling fw_dir Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware" Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"... 
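[editor's note] Before packing the image, dracut runs a hardlink pass over the staging tree (reported below as 'Mode: real', 'Method: sha256', 'Linked: 24 files', 'Saved: 14.22 MiB'): files with identical SHA-256 content are collapsed into hard links so each byte sequence is stored once. A minimal sketch of that technique, assuming a scratch directory you are free to modify; the path is hypothetical:

    import hashlib
    import os

    def hardlink_duplicates(root: str) -> int:
        """Replace files that have identical sha256 content with hard
        links to the first copy seen; returns the number of bytes saved."""
        first_seen, saved = {}, 0
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                if os.path.islink(path) or not os.path.isfile(path):
                    continue
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                if digest in first_seen:
                    saved += os.path.getsize(path)
                    os.unlink(path)
                    os.link(first_seen[digest], path)  # same inode, one copy
                else:
                    first_seen[digest] = path
        return saved

    print(hardlink_duplicates("/tmp/staging"), "bytes saved")  # hypothetical tree
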
Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: microcode_ctl: intel: caveats check for kernel version "6.12.0-98.el10.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel" to fw_dir variable Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"... Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: microcode_ctl: configuration "intel-06-4f-01" is ignored Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"... Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: microcode_ctl: configuration "intel-06-8f-08" is ignored Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: microcode_ctl: final fw_dir: "/usr/share/microcode_ctl/ucode_with_caveats/intel /lib/firmware/updates /lib/firmware" Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: openssl *** Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: shutdown *** Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including module: squash-lib *** Jul 22 08:24:54 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Including modules done *** Jul 22 08:24:55 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Installing kernel module dependencies *** Jul 22 08:24:55 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Installing kernel module dependencies done *** Jul 22 08:24:55 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Resolving executable dependencies *** Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Resolving executable dependencies done *** Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Hardlinking files *** Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Mode: real Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Method: sha256 Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Files: 550 Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Linked: 24 files Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Compared: 0 xattrs Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Compared: 42 files Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Saved: 14.22 MiB Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Duration: 0.206929 seconds Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Hardlinking files done *** Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Generating early-microcode cpio image *** Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Constructing GenuineIntel.bin *** Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Constructing GenuineIntel.bin *** Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com 
dracut[1551]: *** Store current command line parameters *** Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: Stored kernel commandline: Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: No dracut internal kernel commandline stored in the initramfs Jul 22 08:24:56 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Squashing the files inside the initramfs *** Jul 22 08:25:11 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Squashing the files inside the initramfs done *** Jul 22 08:25:11 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Creating image file '/boot/initramfs-6.12.0-98.el10.x86_64kdump.img' *** Jul 22 08:25:11 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com dracut[1551]: *** Creating initramfs image file '/boot/initramfs-6.12.0-98.el10.x86_64kdump.img' done *** Jul 22 08:25:12 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com kdumpctl[909]: kdump: kexec: loaded kdump kernel Jul 22 08:25:12 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com kdumpctl[909]: kdump: Starting kdump: [OK] Jul 22 08:25:12 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished kdump.service - Crash recovery kernel arming. ░░ Subject: A start job for unit kdump.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit kdump.service has finished successfully. ░░ ░░ The job identifier is 263. Jul 22 08:25:12 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Startup finished in 1.241s (kernel) + 4.132s (initrd) + 43.680s (userspace) = 49.054s. ░░ Subject: System start-up is now complete ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ All system services necessary queued for starting at boot have been ░░ started. Note that this does not mean that the machine is now idle as services ░░ might still be busy with completing start-up. ░░ ░░ Kernel start-up required 1241337 microseconds. ░░ ░░ Initrd start-up required 4132356 microseconds. ░░ ░░ Userspace start-up required 43680980 microseconds. Jul 22 08:25:13 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: systemd-hostnamed.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-hostnamed.service has successfully entered the 'dead' state. Jul 22 08:25:55 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com chronyd[677]: Selected source 158.51.99.19 (2.centos.pool.ntp.org) Jul 22 08:26:40 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting fstrim.service - Discard unused blocks on filesystems from /etc/fstab... ░░ Subject: A start job for unit fstrim.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit fstrim.service has begun execution. ░░ ░░ The job identifier is 508. Jul 22 08:26:40 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: fstrim.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit fstrim.service has successfully entered the 'dead' state. Jul 22 08:26:40 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished fstrim.service - Discard unused blocks on filesystems from /etc/fstab. 
░░ Subject: A start job for unit fstrim.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit fstrim.service has finished successfully. ░░ ░░ The job identifier is 508. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd-session[4454]: Accepted publickey for root from 10.30.33.122 port 47762 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Created slice user-0.slice - User Slice of UID 0. ░░ Subject: A start job for unit user-0.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-0.slice has finished successfully. ░░ ░░ The job identifier is 664. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0... ░░ Subject: A start job for unit user-runtime-dir@0.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-runtime-dir@0.service has begun execution. ░░ ░░ The job identifier is 586. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd-logind[661]: New session 1 of user root. ░░ Subject: A new session 1 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 1 has been created for the user root. ░░ ░░ The leading process of the session is 4454. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0. ░░ Subject: A start job for unit user-runtime-dir@0.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-runtime-dir@0.service has finished successfully. ░░ ░░ The job identifier is 586. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting user@0.service - User Manager for UID 0... ░░ Subject: A start job for unit user@0.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user@0.service has begun execution. ░░ ░░ The job identifier is 666. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd-logind[661]: New session 2 of user root. ░░ Subject: A new session 2 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 2 has been created for the user root. ░░ ░░ The leading process of the session is 4459. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com (systemd)[4459]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Queued start job for default target default.target. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Created slice app.slice - User Application Slice. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 10. 
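[editor's note] An aside on the 'Startup finished' record a few entries back: it reports the same figures twice, in seconds and in microseconds, and they check out. 1,241,337 us + 4,132,356 us + 43,680,980 us = 49,054,673 us, which systemd displays truncated to the millisecond as 1.241s + 4.132s + 43.680s = 49.054s. In code:

    kernel, initrd, userspace = 1_241_337, 4_132_356, 43_680_980  # microseconds

    total = kernel + initrd + userspace
    print(total)  # 49054673 -> the reported 49.054s

    for label, usec in [("kernel", kernel), ("initrd", initrd),
                        ("userspace", userspace), ("total", total)]:
        print(f"{label}: {usec // 1000 / 1000:.3f}s")  # truncate to ms like systemd
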
Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 4. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 6. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Reached target paths.target - Paths. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 7. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Reached target timers.target - Timers. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 3. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Starting dbus.socket - D-Bus User Message Bus Socket... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 9. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 12. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Listening on dbus.socket - D-Bus User Message Bus Socket. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 9. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Reached target sockets.target - Sockets. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 8. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 12. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Reached target basic.target - Basic System. 
░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 2. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Reached target default.target - Main User Target. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 1. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[4459]: Startup finished in 114ms. ░░ Subject: User manager start-up is now complete ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The user manager instance for user 0 has been started. All services queued ░░ for starting have been started. Note that other services might still be starting ░░ up or be started at any later time. ░░ ░░ Startup of the manager took 114421 microseconds. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started user@0.service - User Manager for UID 0. ░░ Subject: A start job for unit user@0.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user@0.service has finished successfully. ░░ ░░ The job identifier is 666. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started session-1.scope - Session 1 of User root. ░░ Subject: A start job for unit session-1.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-1.scope has finished successfully. ░░ ░░ The job identifier is 747. Jul 22 08:26:47 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd-session[4454]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:26:48 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd-session[4470]: Received disconnect from 10.30.33.122 port 47762:11: disconnected by user Jul 22 08:26:48 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd-session[4470]: Disconnected from user root 10.30.33.122 port 47762 Jul 22 08:26:48 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd-session[4454]: pam_unix(sshd:session): session closed for user root Jul 22 08:26:48 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: session-1.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-1.scope has successfully entered the 'dead' state. Jul 22 08:26:48 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd-logind[661]: Session 1 logged out. Waiting for processes to exit. Jul 22 08:26:48 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd-logind[661]: Removed session 1. ░░ Subject: Session 1 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 1 has been terminated. 
Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd-session[4507]: Accepted publickey for root from 10.31.9.41 port 59638 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd-session[4508]: Accepted publickey for root from 10.31.9.41 port 59652 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd-logind[661]: New session 3 of user root. ░░ Subject: A new session 3 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 3 has been created for the user root. ░░ ░░ The leading process of the session is 4507. Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started session-3.scope - Session 3 of User root. ░░ Subject: A start job for unit session-3.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-3.scope has finished successfully. ░░ ░░ The job identifier is 829. Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd-logind[661]: New session 4 of user root. ░░ Subject: A new session 4 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 4 has been created for the user root. ░░ ░░ The leading process of the session is 4508. Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd-session[4507]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started session-4.scope - Session 4 of User root. ░░ Subject: A start job for unit session-4.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-4.scope has finished successfully. ░░ ░░ The job identifier is 911. Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd-session[4508]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd-session[4514]: Received disconnect from 10.31.9.41 port 59652:11: disconnected by user Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd-session[4514]: Disconnected from user root 10.31.9.41 port 59652 Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com sshd-session[4508]: pam_unix(sshd:session): session closed for user root Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: session-4.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-4.scope has successfully entered the 'dead' state. Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd-logind[661]: Session 4 logged out. Waiting for processes to exit. Jul 22 08:26:57 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd-logind[661]: Removed session 4. ░░ Subject: Session 4 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 4 has been terminated. 
Jul 22 08:27:27 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com unknown: Running test '/Prepare-managed-node/tests/prep_managed_node' (serial number 1) with reboot count 0 and test restart count 0. (Be aware the test name is sanitized!) Jul 22 08:27:28 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting systemd-hostnamed.service - Hostname Service... ░░ Subject: A start job for unit systemd-hostnamed.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.service has begun execution. ░░ ░░ The job identifier is 993. Jul 22 08:27:28 ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started systemd-hostnamed.service - Hostname Service. ░░ Subject: A start job for unit systemd-hostnamed.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.service has finished successfully. ░░ ░░ The job identifier is 993. Jul 22 08:27:28 managed-node15 systemd-hostnamed[6376]: Hostname set to <managed-node15> (static) Jul 22 08:27:28 managed-node15 NetworkManager[716]: <info>  [1753187248.0854] hostname: static hostname changed from "ip-10-31-10-48.testing-farm.us-east-1.aws.redhat.com" to "managed-node15" Jul 22 08:27:28 managed-node15 systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service... ░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has begun execution. ░░ ░░ The job identifier is 1071. Jul 22 08:27:28 managed-node15 systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. ░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully. ░░ ░░ The job identifier is 1071. Jul 22 08:27:29 managed-node15 unknown: Leaving test '/Prepare-managed-node/tests/prep_managed_node' (serial number 1). (Be aware the test name is sanitized!) Jul 22 08:27:38 managed-node15 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Jul 22 08:27:58 managed-node15 systemd[1]: systemd-hostnamed.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-hostnamed.service has successfully entered the 'dead' state. Jul 22 08:28:01 managed-node15 sshd-session[7424]: Accepted publickey for root from 10.31.42.212 port 56428 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:01 managed-node15 systemd-logind[661]: New session 5 of user root. ░░ Subject: A new session 5 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 5 has been created for the user root. ░░ ░░ The leading process of the session is 7424. Jul 22 08:28:01 managed-node15 systemd[1]: Started session-5.scope - Session 5 of User root.
░░ Subject: A start job for unit session-5.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-5.scope has finished successfully. ░░ ░░ The job identifier is 1150. Jul 22 08:28:01 managed-node15 sshd-session[7424]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:01 managed-node15 sshd-session[7427]: Received disconnect from 10.31.42.212 port 56428:11: disconnected by user Jul 22 08:28:01 managed-node15 sshd-session[7427]: Disconnected from user root 10.31.42.212 port 56428 Jul 22 08:28:01 managed-node15 sshd-session[7424]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:01 managed-node15 systemd[1]: session-5.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-5.scope has successfully entered the 'dead' state. Jul 22 08:28:01 managed-node15 systemd-logind[661]: Session 5 logged out. Waiting for processes to exit. Jul 22 08:28:01 managed-node15 systemd-logind[661]: Removed session 5. ░░ Subject: Session 5 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 5 has been terminated. Jul 22 08:28:01 managed-node15 sshd-session[7452]: Accepted publickey for root from 10.31.42.212 port 56434 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:01 managed-node15 systemd-logind[661]: New session 6 of user root. ░░ Subject: A new session 6 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 6 has been created for the user root. ░░ ░░ The leading process of the session is 7452. Jul 22 08:28:01 managed-node15 systemd[1]: Started session-6.scope - Session 6 of User root. ░░ Subject: A start job for unit session-6.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-6.scope has finished successfully. ░░ ░░ The job identifier is 1232. Jul 22 08:28:01 managed-node15 sshd-session[7452]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:01 managed-node15 sshd-session[7455]: Received disconnect from 10.31.42.212 port 56434:11: disconnected by user Jul 22 08:28:01 managed-node15 sshd-session[7455]: Disconnected from user root 10.31.42.212 port 56434 Jul 22 08:28:01 managed-node15 sshd-session[7452]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:01 managed-node15 systemd[1]: session-6.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-6.scope has successfully entered the 'dead' state. Jul 22 08:28:01 managed-node15 systemd-logind[661]: Session 6 logged out. Waiting for processes to exit. Jul 22 08:28:01 managed-node15 systemd-logind[661]: Removed session 6. ░░ Subject: Session 6 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 6 has been terminated. Jul 22 08:28:25 managed-node15 sshd-session[7482]: Accepted publickey for root from 10.31.42.212 port 49286 ssh2: ECDSA SHA256:WU7noZiQSxkQHAT4JsTwkz7sTow5ig7aO2gcgaqEwOg Jul 22 08:28:25 managed-node15 systemd-logind[661]: New session 7 of user root. 
░░ Subject: A new session 7 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 7 has been created for the user root. ░░ ░░ The leading process of the session is 7482. Jul 22 08:28:25 managed-node15 systemd[1]: Started session-7.scope - Session 7 of User root. ░░ Subject: A start job for unit session-7.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-7.scope has finished successfully. ░░ ░░ The job identifier is 1314. Jul 22 08:28:25 managed-node15 sshd-session[7482]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:27 managed-node15 sudo[7659]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoopooufulvxgwmhlntmcrautdcehkiw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187306.8581624-8917-46181066610879/AnsiballZ_setup.py' Jul 22 08:28:27 managed-node15 sudo[7659]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:28 managed-node15 python3.12[7662]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:28:28 managed-node15 sudo[7659]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:29 managed-node15 sudo[7840]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsenpqvilvhkkvykqhemzupzxjvljknb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187309.094527-9086-54397456468417/AnsiballZ_stat.py' Jul 22 08:28:29 managed-node15 sudo[7840]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:29 managed-node15 python3.12[7843]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:28:29 managed-node15 sudo[7840]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:30 managed-node15 sudo[7992]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rthvuqfwwnyrvqbnokqafukxxtsnwkzl ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187309.942286-9130-69841956597602/AnsiballZ_dnf.py' Jul 22 08:28:30 managed-node15 sudo[7992]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:30 managed-node15 python3.12[7995]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:41 managed-node15 groupadd[8100]: group added to /etc/group: name=clevis, GID=993 Jul 22 08:28:41 managed-node15 groupadd[8100]: group added to /etc/gshadow: name=clevis Jul 22 08:28:41 managed-node15 groupadd[8100]: new group: name=clevis, GID=993 Jul 22 08:28:41 
managed-node15 useradd[8102]: new user: name=clevis, UID=993, GID=993, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none Jul 22 08:28:41 managed-node15 usermod[8106]: add 'clevis' to group 'tss' Jul 22 08:28:41 managed-node15 usermod[8106]: add 'clevis' to shadow group 'tss' Jul 22 08:28:41 managed-node15 dbus-broker-launch[620]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reload request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reloaded again. Jul 22 08:28:41 managed-node15 dbus-broker-launch[620]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reload request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reloaded again. Jul 22 08:28:41 managed-node15 groupadd[8113]: group added to /etc/group: name=polkitd, GID=114 Jul 22 08:28:41 managed-node15 groupadd[8113]: group added to /etc/gshadow: name=polkitd Jul 22 08:28:41 managed-node15 groupadd[8113]: new group: name=polkitd, GID=114 Jul 22 08:28:41 managed-node15 useradd[8116]: new user: name=polkitd, UID=114, GID=114, home=/, shell=/sbin/nologin, from=none Jul 22 08:28:41 managed-node15 dbus-broker-launch[620]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reload request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reloaded again. Jul 22 08:28:41 managed-node15 dbus-broker-launch[620]: Noticed file-system modification, trigger reload.
░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reload request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reloaded again. Jul 22 08:28:41 managed-node15 systemd[1]: Listening on pcscd.socket - PC/SC Smart Card Daemon Activation Socket. ░░ Subject: A start job for unit pcscd.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit pcscd.socket has finished successfully. ░░ ░░ The job identifier is 1400. Jul 22 08:28:42 managed-node15 dbus-broker-launch[620]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reload request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reloaded again. Jul 22 08:28:42 managed-node15 dbus-broker-launch[620]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reload request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reloaded again. Jul 22 08:28:45 managed-node15 systemd[1]: Started run-p8146-i8446.service - [systemd-run] /usr/bin/systemctl start man-db-cache-update. ░░ Subject: A start job for unit run-p8146-i8446.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit run-p8146-i8446.service has finished successfully. ░░ ░░ The job identifier is 1481. Jul 22 08:28:45 managed-node15 systemd[1]: Starting man-db-cache-update.service...
░░ Subject: A start job for unit man-db-cache-update.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit man-db-cache-update.service has begun execution. ░░ ░░ The job identifier is 1559. Jul 22 08:28:45 managed-node15 systemctl[8147]: Warning: The unit file, source configuration file or drop-ins of man-db-cache-update.service changed on disk. Run 'systemctl daemon-reload' to reload units. Jul 22 08:28:45 managed-node15 systemd[1]: Reload requested from client PID 8150 ('systemctl') (unit session-7.scope)... Jul 22 08:28:45 managed-node15 systemd[1]: Reloading... Jul 22 08:28:45 managed-node15 systemd-rc-local-generator[8187]: /etc/rc.d/rc.local is not marked executable, skipping. Jul 22 08:28:45 managed-node15 systemd[1]: Reloading finished in 225 ms. Jul 22 08:28:45 managed-node15 systemd[1]: Queuing reload/restart jobs for marked units… Jul 22 08:28:45 managed-node15 systemd[1]: Reloading user@0.service - User Manager for UID 0... ░░ Subject: A reload job for unit user@0.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A reload job for unit user@0.service has begun execution. ░░ ░░ The job identifier is 1637. Jul 22 08:28:45 managed-node15 systemd[4459]: Received SIGRTMIN+25 from PID 1 (systemd). Jul 22 08:28:45 managed-node15 systemd[4459]: Reexecuting. Jul 22 08:28:45 managed-node15 systemd[1]: Reloaded user@0.service - User Manager for UID 0. ░░ Subject: A reload job for unit user@0.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A reload job for unit user@0.service has finished. ░░ ░░ The job identifier is 1637 and the job result is done. Jul 22 08:28:46 managed-node15 sudo[7992]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:47 managed-node15 sudo[8693]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcltpdlkwcdrrcztwdkpmqfjryyzruqt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187326.9964168-10797-79452900558212/AnsiballZ_blivet.py' Jul 22 08:28:47 managed-node15 sudo[8693]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:48 managed-node15 python3.12[8696]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': 
False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:28:48 managed-node15 sudo[8693]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:48 managed-node15 systemd[1]: man-db-cache-update.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit man-db-cache-update.service has successfully entered the 'dead' state. Jul 22 08:28:48 managed-node15 systemd[1]: Finished man-db-cache-update.service. ░░ Subject: A start job for unit man-db-cache-update.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit man-db-cache-update.service has finished successfully. ░░ ░░ The job identifier is 1559. Jul 22 08:28:48 managed-node15 systemd[1]: run-p8146-i8446.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-p8146-i8446.service has successfully entered the 'dead' state. Jul 22 08:28:49 managed-node15 sudo[8861]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdmuypavwmzbutsxpolaaquqrlibygha ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187329.4625895-10999-1131655552162/AnsiballZ_dnf.py' Jul 22 08:28:49 managed-node15 sudo[8861]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:50 managed-node15 python3.12[8864]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:50 managed-node15 sudo[8861]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:51 managed-node15 sudo[9020]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfobcqimpxrihcehaxkknhhyrzgnppqh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187330.7501402-11137-137217593060680/AnsiballZ_service_facts.py' Jul 22 08:28:51 managed-node15 sudo[9020]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:51 managed-node15 python3.12[9023]: ansible-service_facts Invoked Jul 22 08:28:53 managed-node15 sudo[9020]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:55 managed-node15 sudo[9290]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwpluvrxqhzzzwdsuyhhmcvoidqxcrva ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187334.7192438-11553-105689661651166/AnsiballZ_blivet.py' Jul 22 08:28:55 managed-node15 sudo[9290]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:55 managed-node15 python3.12[9293]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 
'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:28:55 managed-node15 sudo[9290]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:56 managed-node15 sudo[9450]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvdvdqtjmspoxtxdjgyazmaogjaavpdd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187336.0624468-11716-249619971277426/AnsiballZ_stat.py' Jul 22 08:28:56 managed-node15 sudo[9450]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:56 managed-node15 python3.12[9453]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:28:56 managed-node15 sudo[9450]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:59 managed-node15 sudo[9610]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yauzfhyajtlgjrtlapooueexwykatmns ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187339.5641263-12203-146039076397345/AnsiballZ_stat.py' Jul 22 08:28:59 managed-node15 sudo[9610]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:00 managed-node15 python3.12[9613]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:00 managed-node15 sudo[9610]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:01 managed-node15 sudo[9770]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrvvusfnlxsygbingjmwiucrbdsbmfbd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187340.6174865-12348-239464446005188/AnsiballZ_setup.py' Jul 22 08:29:01 managed-node15 sudo[9770]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:01 managed-node15 python3.12[9773]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:01 managed-node15 sudo[9770]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:03 managed-node15 sudo[9957]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rutqzciylsicaykpooclmforardpafwe ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187342.954506-12558-202285352269312/AnsiballZ_dnf.py' Jul 22 08:29:03 managed-node15 sudo[9957]: 
pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:03 managed-node15 python3.12[9960]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:03 managed-node15 sudo[9957]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:05 managed-node15 sudo[10116]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfqpxadqgsdxombcnxojbvylfnthuhgy ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187344.3261468-12653-278243845146859/AnsiballZ_find_unused_disk.py' Jul 22 08:29:05 managed-node15 sudo[10116]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:05 managed-node15 python3.12[10119]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 max_size=0 match_sector_size=False with_interface=None Jul 22 08:29:05 managed-node15 sudo[10116]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:07 managed-node15 sudo[10276]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqrvdwghwduyorogkjbljqthcwlqhwrp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187345.9061544-12860-144962546636928/AnsiballZ_command.py' Jul 22 08:29:07 managed-node15 sudo[10276]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:07 managed-node15 python3.12[10279]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:29:07 managed-node15 sudo[10276]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:09 managed-node15 sshd-session[10307]: Accepted publickey for root from 10.31.42.212 port 41926 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:09 managed-node15 systemd-logind[661]: New session 8 of user root. ░░ Subject: A new session 8 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 8 has been created for the user root. ░░ ░░ The leading process of the session is 10307. Jul 22 08:29:09 managed-node15 systemd[1]: Started session-8.scope - Session 8 of User root. ░░ Subject: A start job for unit session-8.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-8.scope has finished successfully. ░░ ░░ The job identifier is 1638. 
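[Annotation] The fedora.linux_system_roles.find_unused_disk invocation logged above (min_size=5g, max_return=1) corresponds to a single test task that picks a scratch disk for the LUKS2 scenario. A minimal sketch of that task, assuming a hypothetical task name and register variable (only the module name and the min_size/max_return values come from the logged invocation):

    - name: Find an unused disk for the test   # task name is an assumption
      fedora.linux_system_roles.find_unused_disk:
        min_size: 5g      # value from the logged invocation
        max_return: 1     # value from the logged invocation
      register: unused_disks   # register name is an assumption

The module is expected to report only disks that carry no existing file system or partition table, so the test can format and encrypt them without destroying data.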
Jul 22 08:29:09 managed-node15 sshd-session[10307]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:09 managed-node15 sshd-session[10310]: Received disconnect from 10.31.42.212 port 41926:11: disconnected by user Jul 22 08:29:09 managed-node15 sshd-session[10310]: Disconnected from user root 10.31.42.212 port 41926 Jul 22 08:29:09 managed-node15 sshd-session[10307]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:09 managed-node15 systemd[1]: session-8.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-8.scope has successfully entered the 'dead' state. Jul 22 08:29:09 managed-node15 systemd-logind[661]: Session 8 logged out. Waiting for processes to exit. Jul 22 08:29:09 managed-node15 systemd-logind[661]: Removed session 8. ░░ Subject: Session 8 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 8 has been terminated. Jul 22 08:29:09 managed-node15 sshd-session[10337]: Accepted publickey for root from 10.31.42.212 port 41934 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:09 managed-node15 systemd-logind[661]: New session 9 of user root. ░░ Subject: A new session 9 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 9 has been created for the user root. ░░ ░░ The leading process of the session is 10337. Jul 22 08:29:09 managed-node15 systemd[1]: Started session-9.scope - Session 9 of User root. ░░ Subject: A start job for unit session-9.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-9.scope has finished successfully. ░░ ░░ The job identifier is 1723. Jul 22 08:29:09 managed-node15 sshd-session[10337]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:09 managed-node15 sshd-session[10340]: Received disconnect from 10.31.42.212 port 41934:11: disconnected by user Jul 22 08:29:09 managed-node15 sshd-session[10340]: Disconnected from user root 10.31.42.212 port 41934 Jul 22 08:29:09 managed-node15 sshd-session[10337]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:09 managed-node15 systemd-logind[661]: Session 9 logged out. Waiting for processes to exit. Jul 22 08:29:09 managed-node15 systemd[1]: session-9.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-9.scope has successfully entered the 'dead' state. Jul 22 08:29:09 managed-node15 systemd-logind[661]: Removed session 9. ░░ Subject: Session 9 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 9 has been terminated. Jul 22 08:29:19 managed-node15 sshd-session[10367]: Accepted publickey for root from 10.31.42.212 port 55080 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:19 managed-node15 systemd-logind[661]: New session 10 of user root. ░░ Subject: A new session 10 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 10 has been created for the user root. 
░░ ░░ The leading process of the session is 10367. Jul 22 08:29:19 managed-node15 systemd[1]: Started session-10.scope - Session 10 of User root. ░░ Subject: A start job for unit session-10.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-10.scope has finished successfully. ░░ ░░ The job identifier is 1808. Jul 22 08:29:19 managed-node15 sshd-session[10367]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:19 managed-node15 sshd-session[10370]: Received disconnect from 10.31.42.212 port 55080:11: disconnected by user Jul 22 08:29:19 managed-node15 sshd-session[10370]: Disconnected from user root 10.31.42.212 port 55080 Jul 22 08:29:19 managed-node15 sshd-session[10367]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:19 managed-node15 systemd[1]: session-10.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-10.scope has successfully entered the 'dead' state. Jul 22 08:29:19 managed-node15 systemd-logind[661]: Session 10 logged out. Waiting for processes to exit. Jul 22 08:29:19 managed-node15 systemd-logind[661]: Removed session 10. ░░ Subject: Session 10 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 10 has been terminated. Jul 22 08:29:26 managed-node15 sudo[10577]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpvollohcrqmdjqobqftcwskqfydjihg ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187363.850283-15085-178809497306568/AnsiballZ_setup.py' Jul 22 08:29:26 managed-node15 sudo[10577]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:26 managed-node15 python3.12[10580]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:26 managed-node15 sudo[10577]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:30 managed-node15 sudo[10765]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnguinpfjgyxlvokctyfzrnhtqkfapza ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187368.8229945-15419-213432637530052/AnsiballZ_stat.py' Jul 22 08:29:30 managed-node15 sudo[10765]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:30 managed-node15 python3.12[10768]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:30 managed-node15 sudo[10765]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:32 managed-node15 sudo[10923]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmtuhjkbczgtbulpsbpdbglpljhrerkf ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187371.2896478-15670-80892170020858/AnsiballZ_dnf.py' Jul 22 08:29:32 managed-node15 sudo[10923]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:32 managed-node15 python3.12[10926]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False 
allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:33 managed-node15 sudo[10923]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:35 managed-node15 sudo[11082]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oywkjeyhhyaorldsqnwzffxgbuewbnvr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187374.3741424-15949-261502983698833/AnsiballZ_blivet.py' Jul 22 08:29:35 managed-node15 sudo[11082]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:36 managed-node15 python3.12[11085]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:29:36 managed-node15 sudo[11082]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:38 managed-node15 sudo[11242]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekbvnvtvhljcicblvmyczgfvqnxiogso ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187378.138898-16314-100994473831852/AnsiballZ_dnf.py' Jul 22 08:29:38 managed-node15 sudo[11242]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:38 managed-node15 python3.12[11246]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None 
releasever=None Jul 22 08:29:39 managed-node15 sudo[11242]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:41 managed-node15 sudo[11402]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfycivgfsebcrpundjbenrnhwdyrqavw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187379.609619-16478-250619385585969/AnsiballZ_service_facts.py' Jul 22 08:29:41 managed-node15 sudo[11402]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:41 managed-node15 python3.12[11405]: ansible-service_facts Invoked Jul 22 08:29:43 managed-node15 sudo[11402]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:45 managed-node15 sudo[11672]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqgebepjgveqnqgxidqyszopiivrnkhj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187384.726484-16840-225644163449676/AnsiballZ_blivet.py' Jul 22 08:29:45 managed-node15 sudo[11672]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:45 managed-node15 python3.12[11676]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:29:45 managed-node15 sudo[11672]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:46 managed-node15 sudo[11833]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrkeggkgpgwiuqvbzihynyfqhjdjgkmf ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187386.2214634-17089-221423825924624/AnsiballZ_stat.py' Jul 22 08:29:46 managed-node15 sudo[11833]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:46 managed-node15 python3.12[11836]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:46 managed-node15 sudo[11833]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:50 managed-node15 sudo[11993]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzacibdvuwfucahstxfpjwlwycvzykux ; /usr/bin/python3.12 
/root/.ansible/tmp/ansible-tmp-1753187390.035483-17587-11145151613165/AnsiballZ_stat.py' Jul 22 08:29:50 managed-node15 sudo[11993]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:50 managed-node15 python3.12[11996]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:50 managed-node15 sudo[11993]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:51 managed-node15 sudo[12153]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auectrwmqstnjoxcnkukugdgqpdvanyi ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187391.1760826-17766-56649704697751/AnsiballZ_setup.py' Jul 22 08:29:51 managed-node15 sudo[12153]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:51 managed-node15 python3.12[12156]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:52 managed-node15 sudo[12153]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:53 managed-node15 sudo[12340]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozbxfnenywimxesfjvadvaiqoseidbto ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187393.3321002-18211-244937987360210/AnsiballZ_dnf.py' Jul 22 08:29:53 managed-node15 sudo[12340]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:53 managed-node15 python3.12[12343]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:54 managed-node15 sudo[12340]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:55 managed-node15 sudo[12499]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bacjpkmyezukjsclkqmyapmctqocwyfd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187394.6571472-18383-168819710876259/AnsiballZ_find_unused_disk.py' Jul 22 08:29:55 managed-node15 sudo[12499]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:55 managed-node15 python3.12[12502]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=1 min_size=0 max_size=0 match_sector_size=False with_interface=None Jul 22 08:29:55 managed-node15 sudo[12499]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:57 managed-node15 sudo[12659]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvxhuknptcgokcgoebhsmgvajnvodrhr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187396.127144-18581-190444709750670/AnsiballZ_command.py' Jul 22 08:29:57 managed-node15 sudo[12659]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:57 managed-node15 python3.12[12662]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex 
_uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:29:57 managed-node15 sudo[12659]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:59 managed-node15 sshd-session[12690]: Accepted publickey for root from 10.31.42.212 port 36316 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:59 managed-node15 systemd-logind[661]: New session 11 of user root. ░░ Subject: A new session 11 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 11 has been created for the user root. ░░ ░░ The leading process of the session is 12690. Jul 22 08:29:59 managed-node15 systemd[1]: Started session-11.scope - Session 11 of User root. ░░ Subject: A start job for unit session-11.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-11.scope has finished successfully. ░░ ░░ The job identifier is 1893. Jul 22 08:29:59 managed-node15 sshd-session[12690]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:59 managed-node15 sshd-session[12693]: Received disconnect from 10.31.42.212 port 36316:11: disconnected by user Jul 22 08:29:59 managed-node15 sshd-session[12693]: Disconnected from user root 10.31.42.212 port 36316 Jul 22 08:29:59 managed-node15 sshd-session[12690]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:59 managed-node15 systemd[1]: session-11.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-11.scope has successfully entered the 'dead' state. Jul 22 08:29:59 managed-node15 systemd-logind[661]: Session 11 logged out. Waiting for processes to exit. Jul 22 08:29:59 managed-node15 systemd-logind[661]: Removed session 11. ░░ Subject: Session 11 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 11 has been terminated. Jul 22 08:29:59 managed-node15 sshd-session[12720]: Accepted publickey for root from 10.31.42.212 port 36324 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:59 managed-node15 systemd-logind[661]: New session 12 of user root. ░░ Subject: A new session 12 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 12 has been created for the user root. ░░ ░░ The leading process of the session is 12720. Jul 22 08:29:59 managed-node15 systemd[1]: Started session-12.scope - Session 12 of User root. ░░ Subject: A start job for unit session-12.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-12.scope has finished successfully. ░░ ░░ The job identifier is 1978. 
Jul 22 08:29:59 managed-node15 sshd-session[12720]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:59 managed-node15 sshd-session[12723]: Received disconnect from 10.31.42.212 port 36324:11: disconnected by user Jul 22 08:29:59 managed-node15 sshd-session[12723]: Disconnected from user root 10.31.42.212 port 36324 Jul 22 08:29:59 managed-node15 sshd-session[12720]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:59 managed-node15 systemd-logind[661]: Session 12 logged out. Waiting for processes to exit. Jul 22 08:29:59 managed-node15 systemd[1]: session-12.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-12.scope has successfully entered the 'dead' state. Jul 22 08:29:59 managed-node15 systemd-logind[661]: Removed session 12. ░░ Subject: Session 12 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 12 has been terminated. Jul 22 08:30:10 managed-node15 sshd-session[12750]: Accepted publickey for root from 10.31.42.212 port 35346 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:10 managed-node15 systemd[1]: Starting logrotate.service - Rotate log files... ░░ Subject: A start job for unit logrotate.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit logrotate.service has begun execution. ░░ ░░ The job identifier is 2063. Jul 22 08:30:10 managed-node15 systemd-logind[661]: New session 13 of user root. ░░ Subject: A new session 13 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 13 has been created for the user root. ░░ ░░ The leading process of the session is 12750. Jul 22 08:30:10 managed-node15 systemd[1]: Started session-13.scope - Session 13 of User root. ░░ Subject: A start job for unit session-13.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-13.scope has finished successfully. ░░ ░░ The job identifier is 2144. Jul 22 08:30:10 managed-node15 sshd-session[12750]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:10 managed-node15 systemd[1]: logrotate.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit logrotate.service has successfully entered the 'dead' state. Jul 22 08:30:10 managed-node15 systemd[1]: Finished logrotate.service - Rotate log files. ░░ Subject: A start job for unit logrotate.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit logrotate.service has finished successfully. ░░ ░░ The job identifier is 2063. Jul 22 08:30:10 managed-node15 sshd-session[12754]: Received disconnect from 10.31.42.212 port 35346:11: disconnected by user Jul 22 08:30:10 managed-node15 sshd-session[12754]: Disconnected from user root 10.31.42.212 port 35346 Jul 22 08:30:10 managed-node15 sshd-session[12750]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:10 managed-node15 systemd[1]: session-13.scope: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-13.scope has successfully entered the 'dead' state. Jul 22 08:30:10 managed-node15 systemd-logind[661]: Session 13 logged out. Waiting for processes to exit. Jul 22 08:30:10 managed-node15 systemd-logind[661]: Removed session 13. ░░ Subject: Session 13 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 13 has been terminated. Jul 22 08:30:18 managed-node15 sudo[12964]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyhmpwykmtdqxnxizuofgeamvjypjiaj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187416.8975127-20694-156084360565533/AnsiballZ_setup.py' Jul 22 08:30:18 managed-node15 sudo[12964]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:19 managed-node15 python3.12[12967]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:19 managed-node15 sudo[12964]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:21 managed-node15 sudo[13151]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otsoscjaulgvlwqydzflyauyidsccldv ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187421.0173428-21330-116792403451008/AnsiballZ_stat.py' Jul 22 08:30:21 managed-node15 sudo[13151]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:21 managed-node15 python3.12[13154]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:21 managed-node15 sudo[13151]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:23 managed-node15 sudo[13309]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rucxvolkvcvimdehbqobgmgawxvyybhz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187423.0186505-21636-116012700848875/AnsiballZ_dnf.py' Jul 22 08:30:23 managed-node15 sudo[13309]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:23 managed-node15 python3.12[13312]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:24 managed-node15 sudo[13309]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:26 managed-node15 sudo[13468]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qalrhidylozbynngzgrrhzregneeaqqn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187424.9793863-22030-243370930349923/AnsiballZ_blivet.py' Jul 22 08:30:26 managed-node15 sudo[13468]: pam_unix(sudo:session): session opened for user 
root(uid=0) by root(uid=0) Jul 22 08:30:26 managed-node15 python3.12[13471]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:30:26 managed-node15 sudo[13468]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:27 managed-node15 sudo[13628]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbutfbhbsxckgftdfeyfrmuashdrqkzu ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187427.3712645-22399-127719178355171/AnsiballZ_dnf.py' Jul 22 08:30:27 managed-node15 sudo[13628]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:27 managed-node15 python3.12[13631]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:28 managed-node15 sudo[13628]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:29 managed-node15 sudo[13787]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onuvzhjjpaaddwouxhfwfyvewjmfbkoz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187428.5709605-22526-281065246846448/AnsiballZ_service_facts.py' Jul 22 08:30:29 managed-node15 sudo[13787]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:29 managed-node15 python3.12[13790]: ansible-service_facts Invoked Jul 22 08:30:31 managed-node15 sudo[13787]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:32 managed-node15 sudo[14057]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvsecpppqrkdjctyzkpjqhbzlhpqjegm ; /usr/bin/python3.12 
/root/.ansible/tmp/ansible-tmp-1753187432.211074-22816-219836423010217/AnsiballZ_blivet.py' Jul 22 08:30:32 managed-node15 sudo[14057]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:32 managed-node15 python3.12[14060]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:30:32 managed-node15 sudo[14057]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:33 managed-node15 sudo[14217]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waptpohesfhtdowcubhjfvdrwtlaxbgm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187433.259143-22965-232396762223271/AnsiballZ_stat.py' Jul 22 08:30:33 managed-node15 sudo[14217]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:33 managed-node15 python3.12[14220]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:33 managed-node15 sudo[14217]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:35 managed-node15 sudo[14377]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmmbwmhuxbilpgtfurtjlhohwskdtvxi ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187435.6269462-23144-31177471852002/AnsiballZ_stat.py' Jul 22 08:30:35 managed-node15 sudo[14377]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:36 managed-node15 python3.12[14380]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:36 managed-node15 sudo[14377]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:36 managed-node15 sudo[14537]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpndltnxflpjoijcootagjxfgmggebdz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187436.426643-23256-153724024753894/AnsiballZ_setup.py' Jul 22 08:30:36 managed-node15 sudo[14537]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:37 managed-node15 
python3.12[14540]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:37 managed-node15 sudo[14537]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:38 managed-node15 sudo[14724]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tonzslemfdavenlnvlpucvciwbynwmlw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187438.3841176-23598-205065087579716/AnsiballZ_dnf.py' Jul 22 08:30:38 managed-node15 sudo[14724]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:38 managed-node15 python3.12[14727]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:39 managed-node15 sudo[14724]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:40 managed-node15 sudo[14883]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlepevarhyuzebcbrtqanhvordnkxdln ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187439.729491-23677-154534666161341/AnsiballZ_find_unused_disk.py' Jul 22 08:30:40 managed-node15 sudo[14883]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:40 managed-node15 python3.12[14886]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=10g max_return=1 max_size=0 match_sector_size=False with_interface=None Jul 22 08:30:40 managed-node15 sudo[14883]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:42 managed-node15 sudo[15043]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sraowovmwmzfzjvnubmxrojwpodcorfu ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187441.1020143-23795-152183479457750/AnsiballZ_command.py' Jul 22 08:30:42 managed-node15 sudo[15043]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:42 managed-node15 python3.12[15046]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:30:42 managed-node15 sudo[15043]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:43 managed-node15 sshd-session[15074]: Accepted publickey for root from 10.31.42.212 port 45876 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:43 managed-node15 systemd-logind[661]: New session 14 of user root. ░░ Subject: A new session 14 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 14 has been created for the user root. ░░ ░░ The leading process of the session is 15074. Jul 22 08:30:43 managed-node15 systemd[1]: Started session-14.scope - Session 14 of User root. 
░░ Subject: A start job for unit session-14.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-14.scope has finished successfully. ░░ ░░ The job identifier is 2229. Jul 22 08:30:43 managed-node15 sshd-session[15074]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:43 managed-node15 sshd-session[15077]: Received disconnect from 10.31.42.212 port 45876:11: disconnected by user Jul 22 08:30:43 managed-node15 sshd-session[15077]: Disconnected from user root 10.31.42.212 port 45876 Jul 22 08:30:43 managed-node15 sshd-session[15074]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:43 managed-node15 systemd[1]: session-14.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-14.scope has successfully entered the 'dead' state. Jul 22 08:30:43 managed-node15 systemd-logind[661]: Session 14 logged out. Waiting for processes to exit. Jul 22 08:30:43 managed-node15 systemd-logind[661]: Removed session 14. ░░ Subject: Session 14 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 14 has been terminated. Jul 22 08:30:43 managed-node15 sshd-session[15104]: Accepted publickey for root from 10.31.42.212 port 45880 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:43 managed-node15 systemd-logind[661]: New session 15 of user root. ░░ Subject: A new session 15 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 15 has been created for the user root. ░░ ░░ The leading process of the session is 15104. Jul 22 08:30:43 managed-node15 systemd[1]: Started session-15.scope - Session 15 of User root. ░░ Subject: A start job for unit session-15.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-15.scope has finished successfully. ░░ ░░ The job identifier is 2314. Jul 22 08:30:43 managed-node15 sshd-session[15104]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:43 managed-node15 sshd-session[15107]: Received disconnect from 10.31.42.212 port 45880:11: disconnected by user Jul 22 08:30:43 managed-node15 sshd-session[15107]: Disconnected from user root 10.31.42.212 port 45880 Jul 22 08:30:43 managed-node15 sshd-session[15104]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:43 managed-node15 systemd[1]: session-15.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-15.scope has successfully entered the 'dead' state. Jul 22 08:30:43 managed-node15 systemd-logind[661]: Session 15 logged out. Waiting for processes to exit. Jul 22 08:30:43 managed-node15 systemd-logind[661]: Removed session 15. ░░ Subject: Session 15 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 15 has been terminated. 
Jul 22 08:30:49 managed-node15 sshd-session[15134]: Accepted publickey for root from 10.31.42.212 port 45890 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:49 managed-node15 systemd-logind[661]: New session 16 of user root. ░░ Subject: A new session 16 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 16 has been created for the user root. ░░ ░░ The leading process of the session is 15134. Jul 22 08:30:49 managed-node15 systemd[1]: Started session-16.scope - Session 16 of User root. ░░ Subject: A start job for unit session-16.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-16.scope has finished successfully. ░░ ░░ The job identifier is 2399. Jul 22 08:30:49 managed-node15 sshd-session[15134]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:49 managed-node15 sshd-session[15137]: Received disconnect from 10.31.42.212 port 45890:11: disconnected by user Jul 22 08:30:49 managed-node15 sshd-session[15137]: Disconnected from user root 10.31.42.212 port 45890 Jul 22 08:30:49 managed-node15 sshd-session[15134]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:49 managed-node15 systemd[1]: session-16.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-16.scope has successfully entered the 'dead' state. Jul 22 08:30:49 managed-node15 systemd-logind[661]: Session 16 logged out. Waiting for processes to exit. Jul 22 08:30:49 managed-node15 systemd-logind[661]: Removed session 16. ░░ Subject: Session 16 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 16 has been terminated. Jul 22 08:30:55 managed-node15 sshd-session[15164]: Accepted publickey for root from 10.31.42.212 port 53120 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:55 managed-node15 systemd-logind[661]: New session 17 of user root. ░░ Subject: A new session 17 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 17 has been created for the user root. ░░ ░░ The leading process of the session is 15164. Jul 22 08:30:55 managed-node15 systemd[1]: Started session-17.scope - Session 17 of User root. ░░ Subject: A start job for unit session-17.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-17.scope has finished successfully. ░░ ░░ The job identifier is 2484. Jul 22 08:30:55 managed-node15 sshd-session[15164]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:55 managed-node15 sshd-session[15167]: Received disconnect from 10.31.42.212 port 53120:11: disconnected by user Jul 22 08:30:55 managed-node15 sshd-session[15167]: Disconnected from user root 10.31.42.212 port 53120 Jul 22 08:30:55 managed-node15 sshd-session[15164]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:55 managed-node15 systemd[1]: session-17.scope: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-17.scope has successfully entered the 'dead' state. Jul 22 08:30:55 managed-node15 systemd-logind[661]: Session 17 logged out. Waiting for processes to exit. Jul 22 08:30:55 managed-node15 systemd-logind[661]: Removed session 17. ░░ Subject: Session 17 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 17 has been terminated. Jul 22 08:31:03 managed-node15 sshd-session[15194]: Accepted publickey for root from 10.31.42.212 port 37854 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:31:03 managed-node15 systemd-logind[661]: New session 18 of user root. ░░ Subject: A new session 18 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 18 has been created for the user root. ░░ ░░ The leading process of the session is 15194. Jul 22 08:31:03 managed-node15 systemd[1]: Started session-18.scope - Session 18 of User root. ░░ Subject: A start job for unit session-18.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-18.scope has finished successfully. ░░ ░░ The job identifier is 2569. Jul 22 08:31:03 managed-node15 sshd-session[15194]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:03 managed-node15 sshd-session[15198]: Received disconnect from 10.31.42.212 port 37854:11: disconnected by user Jul 22 08:31:03 managed-node15 sshd-session[15198]: Disconnected from user root 10.31.42.212 port 37854 Jul 22 08:31:03 managed-node15 sshd-session[15194]: pam_unix(sshd:session): session closed for user root Jul 22 08:31:03 managed-node15 systemd[1]: session-18.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-18.scope has successfully entered the 'dead' state. Jul 22 08:31:03 managed-node15 systemd-logind[661]: Session 18 logged out. Waiting for processes to exit. Jul 22 08:31:03 managed-node15 systemd-logind[661]: Removed session 18. ░░ Subject: Session 18 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 18 has been terminated. 
Jul 22 08:31:10 managed-node15 sudo[15405]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpjxgtaiijcebrjllllaczjauwxfkdcx ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187468.3139958-27190-29990833066211/AnsiballZ_setup.py' Jul 22 08:31:10 managed-node15 sudo[15405]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:10 managed-node15 python3.12[15408]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:31:10 managed-node15 sudo[15405]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:14 managed-node15 sudo[15592]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrafrxwquhiwfczxufoznhxoxyipkgke ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187473.1723597-27676-182880064372061/AnsiballZ_stat.py' Jul 22 08:31:14 managed-node15 sudo[15592]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:14 managed-node15 python3.12[15595]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:14 managed-node15 sudo[15592]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:16 managed-node15 sudo[15750]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acgumpnhwcvahwfjxxovswoqcmkwippb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187475.285589-28048-253546334077721/AnsiballZ_dnf.py' Jul 22 08:31:16 managed-node15 sudo[15750]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:16 managed-node15 python3.12[15753]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:16 managed-node15 sudo[15750]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:18 managed-node15 sudo[15909]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vewypupeiukdvzsecetlwnykyboomdff ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187477.6666353-28283-143182433570172/AnsiballZ_blivet.py' Jul 22 08:31:18 managed-node15 sudo[15909]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:19 managed-node15 python3.12[15913]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 
'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:31:19 managed-node15 sudo[15909]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:20 managed-node15 sudo[16070]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anhniygbdukrkagyqeszoclrpoklrprf ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187480.0702808-28633-116636809001732/AnsiballZ_dnf.py' Jul 22 08:31:20 managed-node15 sudo[16070]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:20 managed-node15 python3.12[16073]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:20 managed-node15 sudo[16070]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:21 managed-node15 sudo[16230]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daxhanjeoghorgpmxfsuvqzkysunnsao ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187481.1585383-28819-33206157333960/AnsiballZ_service_facts.py' Jul 22 08:31:21 managed-node15 sudo[16230]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:21 managed-node15 python3.12[16233]: ansible-service_facts Invoked Jul 22 08:31:23 managed-node15 sudo[16230]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:24 managed-node15 sudo[16500]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uztnhdfjrjttxcuqmosysbryxlmghmrb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187484.417572-29006-280494806589610/AnsiballZ_blivet.py' Jul 22 08:31:24 managed-node15 sudo[16500]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:25 managed-node15 python3.12[16503]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 
'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:31:25 managed-node15 sudo[16500]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:26 managed-node15 sudo[16660]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aihcpeajmmedmytwdbmrhcsvqppbnilk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187485.704973-29099-4644117374091/AnsiballZ_stat.py' Jul 22 08:31:26 managed-node15 sudo[16660]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:26 managed-node15 python3.12[16663]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:26 managed-node15 sudo[16660]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:28 managed-node15 sudo[16820]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxpicpawpiffjqcqoxvzqksfzqggqxza ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187488.4333653-29425-53294066599821/AnsiballZ_stat.py' Jul 22 08:31:28 managed-node15 sudo[16820]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:28 managed-node15 python3.12[16823]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:28 managed-node15 sudo[16820]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:29 managed-node15 sudo[16980]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfrsiwphipjrqkcgsrahevvwaykioqqz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187489.1977088-29560-26325215174969/AnsiballZ_setup.py' Jul 22 08:31:29 managed-node15 sudo[16980]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:29 managed-node15 python3.12[16983]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:31:30 managed-node15 sudo[16980]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:31 managed-node15 sudo[17167]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmehvcoousgzmaclkiobgzvjghutgszd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187490.8803105-29733-16640847073870/AnsiballZ_dnf.py' Jul 22 08:31:31 managed-node15 sudo[17167]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 
08:31:31 managed-node15 python3.12[17170]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:31 managed-node15 sudo[17167]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:32 managed-node15 sudo[17326]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsrvfwqrgekrhyvseuzwgfpkacwftysn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187492.2433274-29875-42210789015694/AnsiballZ_find_unused_disk.py' Jul 22 08:31:32 managed-node15 sudo[17326]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:33 managed-node15 python3.12[17329]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 max_size=0 match_sector_size=False with_interface=None Jul 22 08:31:33 managed-node15 sudo[17326]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:34 managed-node15 sudo[17486]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixjgcmlaegzotedlumykvkmbawztjwcl ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187493.3671567-30028-22722218583494/AnsiballZ_command.py' Jul 22 08:31:34 managed-node15 sudo[17486]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:34 managed-node15 python3.12[17489]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None TASK [Set unused_disks if necessary] ******************************************* task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:29 Tuesday 22 July 2025 08:31:34 -0400 (0:00:01.647) 0:00:26.885 ********** skipping: [managed-node15] => { "changed": false, "false_condition": "'Unable to find unused disk' not in unused_disks_return.disks", "skip_reason": "Conditional result was False" } TASK [Exit playbook when there's not enough unused disks in the system] ******** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34 Tuesday 22 July 2025 08:31:34 -0400 (0:00:00.102) 0:00:26.987 ********** fatal: [managed-node15]: FAILED! => { "changed": false } MSG: Unable to find enough unused disks. Exiting playbook. PLAY RECAP ********************************************************************* managed-node15 : ok=28 changed=0 unreachable=0 failed=1 skipped=22 rescued=0 ignored=0 SYSTEM ROLES ERRORS BEGIN v1 [ { "ansible_version": "2.17.13", "end_time": "2025-07-22T12:31:35.127530+00:00Z", "host": "managed-node15", "message": "Unable to find enough unused disks. 
Exiting playbook.", "start_time": "2025-07-22T12:31:34.996397+00:00Z", "task_name": "Exit playbook when there's not enough unused disks in the system", "task_path": "/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34" } ] SYSTEM ROLES ERRORS END v1 TASKS RECAP ******************************************************************** Tuesday 22 July 2025 08:31:35 -0400 (0:00:00.139) 0:00:27.128 ********** =============================================================================== Gathering Facts --------------------------------------------------------- 3.02s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml:2 fedora.linux_system_roles.storage : Get service facts ------------------- 2.70s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52 fedora.linux_system_roles.storage : Get required packages --------------- 2.10s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19 fedora.linux_system_roles.storage : Make sure blivet is available ------- 1.72s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2 Debug why there are no unused disks ------------------------------------- 1.65s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20 fedora.linux_system_roles.storage : Check if system is ostree ----------- 1.46s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25 Find unused disks in the system ----------------------------------------- 1.34s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11 Ensure test packages ---------------------------------------------------- 1.31s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2 fedora.linux_system_roles.storage : Update facts ------------------------ 1.30s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state --- 1.17s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 fedora.linux_system_roles.storage : Make sure required packages are installed --- 1.13s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38 fedora.linux_system_roles.storage : Check if /etc/fstab is present ------ 0.81s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92 fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file --- 0.68s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197 fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.37s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7 fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab --- 0.32s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189 fedora.linux_system_roles.storage : Show storage_volumes ---------------- 0.28s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14 
fedora.linux_system_roles.storage : Workaround for udev issue on some platforms --- 0.28s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85 fedora.linux_system_roles.storage : Include the appropriate provider tasks --- 0.27s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13 fedora.linux_system_roles.storage : Set up new/current mounts ----------- 0.27s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166 Run the role ------------------------------------------------------------ 0.27s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks2.yml:72 Jul 22 08:31:10 managed-node15 sudo[15405]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpjxgtaiijcebrjllllaczjauwxfkdcx ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187468.3139958-27190-29990833066211/AnsiballZ_setup.py' Jul 22 08:31:10 managed-node15 sudo[15405]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:10 managed-node15 python3.12[15408]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:31:10 managed-node15 sudo[15405]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:14 managed-node15 sudo[15592]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrafrxwquhiwfczxufoznhxoxyipkgke ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187473.1723597-27676-182880064372061/AnsiballZ_stat.py' Jul 22 08:31:14 managed-node15 sudo[15592]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:14 managed-node15 python3.12[15595]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:14 managed-node15 sudo[15592]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:16 managed-node15 sudo[15750]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acgumpnhwcvahwfjxxovswoqcmkwippb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187475.285589-28048-253546334077721/AnsiballZ_dnf.py' Jul 22 08:31:16 managed-node15 sudo[15750]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:16 managed-node15 python3.12[15753]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:16 managed-node15 sudo[15750]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:18 managed-node15 sudo[15909]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vewypupeiukdvzsecetlwnykyboomdff ; 
/usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187477.6666353-28283-143182433570172/AnsiballZ_blivet.py'
Jul 22 08:31:18 managed-node15 sudo[15909]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:19 managed-node15 python3.12[15913]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={}
Jul 22 08:31:19 managed-node15 sudo[15909]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:20 managed-node15 sudo[16070]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anhniygbdukrkagyqeszoclrpoklrprf ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187480.0702808-28633-116636809001732/AnsiballZ_dnf.py'
Jul 22 08:31:20 managed-node15 sudo[16070]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:20 managed-node15 python3.12[16073]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:31:20 managed-node15 sudo[16070]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:21 managed-node15 sudo[16230]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daxhanjeoghorgpmxfsuvqzkysunnsao ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187481.1585383-28819-33206157333960/AnsiballZ_service_facts.py'
Jul 22 08:31:21 managed-node15 sudo[16230]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:21 managed-node15 python3.12[16233]: ansible-service_facts Invoked
Jul 22 08:31:23 managed-node15 sudo[16230]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:24 managed-node15 sudo[16500]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uztnhdfjrjttxcuqmosysbryxlmghmrb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187484.417572-29006-280494806589610/AnsiballZ_blivet.py'
Jul 22 08:31:24 managed-node15 sudo[16500]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:25 managed-node15 python3.12[16503]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={}
Jul 22 08:31:25 managed-node15 sudo[16500]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:26 managed-node15 sudo[16660]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aihcpeajmmedmytwdbmrhcsvqppbnilk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187485.704973-29099-4644117374091/AnsiballZ_stat.py'
Jul 22 08:31:26 managed-node15 sudo[16660]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:26 managed-node15 python3.12[16663]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:31:26 managed-node15 sudo[16660]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:28 managed-node15 sudo[16820]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxpicpawpiffjqcqoxvzqksfzqggqxza ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187488.4333653-29425-53294066599821/AnsiballZ_stat.py'
Jul 22 08:31:28 managed-node15 sudo[16820]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:28 managed-node15 python3.12[16823]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:31:28 managed-node15 sudo[16820]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:29 managed-node15 sudo[16980]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfrsiwphipjrqkcgsrahevvwaykioqqz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187489.1977088-29560-26325215174969/AnsiballZ_setup.py'
Jul 22 08:31:29 managed-node15 sudo[16980]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:29 managed-node15 python3.12[16983]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:31:30 managed-node15 sudo[16980]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:31 managed-node15 sudo[17167]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmehvcoousgzmaclkiobgzvjghutgszd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187490.8803105-29733-16640847073870/AnsiballZ_dnf.py'
Jul 22 08:31:31 managed-node15 sudo[17167]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:31 managed-node15 python3.12[17170]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:31:31 managed-node15 sudo[17167]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:32 managed-node15 sudo[17326]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsrvfwqrgekrhyvseuzwgfpkacwftysn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187492.2433274-29875-42210789015694/AnsiballZ_find_unused_disk.py'
Jul 22 08:31:32 managed-node15 sudo[17326]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:33 managed-node15 python3.12[17329]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 max_size=0 match_sector_size=False with_interface=None
Jul 22 08:31:33 managed-node15 sudo[17326]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:34 managed-node15 sudo[17486]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixjgcmlaegzotedlumykvkmbawztjwcl ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187493.3671567-30028-22722218583494/AnsiballZ_command.py'
Jul 22 08:31:34 managed-node15 sudo[17486]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:34 managed-node15 python3.12[17489]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 22 08:31:34 managed-node15 sudo[17486]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:36 managed-node15 sshd-session[17517]: Accepted publickey for root from 10.31.42.212 port 42188 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:31:36 managed-node15 systemd-logind[661]: New session 19 of user root.
░░ Subject: A new session 19 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 19 has been created for the user root.
░░
░░ The leading process of the session is 17517.
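For readability, the fedora.linux_system_roles.find_unused_disk invocation logged at 08:31:33 above corresponds to a task of roughly the following shape. This is a minimal sketch: the task name and register variable are assumptions, while the module name and parameter values are taken from the journal entry itself (parameters logged at their defaults are omitted).

    - name: Find an unused disk of at least 5 GiB   # hypothetical task name
      fedora.linux_system_roles.find_unused_disk:
        min_size: "5g"   # from the log: min_size=5g
        max_return: 1    # from the log: max_return=1
      register: unused_disks   # assumed variable name

Presumably the returned device is the scratch disk used for the LUKS2 scenarios that follow.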
Jul 22 08:31:36 managed-node15 systemd[1]: Started session-19.scope - Session 19 of User root.
░░ Subject: A start job for unit session-19.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-19.scope has finished successfully.
░░
░░ The job identifier is 2654.
Jul 22 08:31:36 managed-node15 sshd-session[17517]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:31:36 managed-node15 sshd-session[17520]: Received disconnect from 10.31.42.212 port 42188:11: disconnected by user
Jul 22 08:31:36 managed-node15 sshd-session[17520]: Disconnected from user root 10.31.42.212 port 42188
Jul 22 08:31:36 managed-node15 sshd-session[17517]: pam_unix(sshd:session): session closed for user root
Jul 22 08:31:36 managed-node15 systemd[1]: session-19.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-19.scope has successfully entered the 'dead' state.
Jul 22 08:31:36 managed-node15 systemd-logind[661]: Session 19 logged out. Waiting for processes to exit.
Jul 22 08:31:36 managed-node15 systemd-logind[661]: Removed session 19.
░░ Subject: Session 19 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 19 has been terminated.
Jul 22 08:31:36 managed-node15 sshd-session[17547]: Accepted publickey for root from 10.31.42.212 port 42196 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:31:36 managed-node15 systemd-logind[661]: New session 20 of user root.
░░ Subject: A new session 20 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 20 has been created for the user root.
░░
░░ The leading process of the session is 17547.
Jul 22 08:31:36 managed-node15 systemd[1]: Started session-20.scope - Session 20 of User root.
░░ Subject: A start job for unit session-20.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-20.scope has finished successfully.
░░
░░ The job identifier is 2739.
Jul 22 08:31:36 managed-node15 sshd-session[17547]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
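The ansible.legacy.command entry at 08:31:34 above appears to be a diagnostics probe; the journal flattened its _raw_params onto one line, but unfolded they form a four-line shell script. A sketch of the equivalent task follows (the task name is an assumption; the commands and the _uses_shell=True flag come straight from the log):

    - name: Gather device diagnostics   # hypothetical task name
      ansible.builtin.shell: |
        set -x
        exec 1>&2
        lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
        journalctl -ex

Here set -x traces each command as it runs, and exec 1>&2 redirects stdout into stderr, so the lsblk device listing and the journalctl excerpt arrive in a single captured stream in the task result.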