ansible-playbook [core 2.17.14]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-RsQ
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.12 (main, Jan 16 2026, 00:00:00) [GCC 14.3.1 20251022 (Red Hat 14.3.1-4)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_quadlet_basic.yml **********************************************
2 plays in /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml

PLAY [all] *********************************************************************

TASK [Include vault variables] *************************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:5
Saturday 14 February 2026  11:47:03 -0500 (0:00:00.029)       0:00:00.029 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_test_password": {
            "__ansible_vault": "$ANSIBLE_VAULT;1.1;AES256\n35383939616163653333633431363463313831383037386236646138333162396161356130303461\n3932623930643263313563336163316337643562333936360a363538636631313039343233383732\n38666530383538656639363465313230343533386130303833336434303438333161656262346562\n3362626538613031640a663330613638366132356534363534353239616666653466353961323533\n6565\n"
        },
        "mysql_container_root_password": {
            "__ansible_vault": "$ANSIBLE_VAULT;1.1;AES256\n61333932373230333539663035366431326163363166363036323963623131363530326231303634\n6635326161643165363366323062333334363730376631660a393566366139353861656364656661\n38653463363837336639363032646433666361646535366137303464623261313663643336306465\n6264663730656337310a343962353137386238383064646533366433333437303566656433386233\n34343235326665646661623131643335313236313131353661386338343366316261643634653633\n3832313034366536616531323963333234326461353130303532\n"
        }
    },
    "ansible_included_var_files": [
        "/tmp/podman-KwE/tests/vars/vault-variables.yml"
    ],
    "changed": false
}

PLAY [Ensure that the role can manage quadlet specs] ***************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:9
Saturday 14 February 2026  11:47:03 -0500 (0:00:00.021)       0:00:00.050 *****
[WARNING]: Platform linux on host managed-node2 is using the discovered Python
interpreter at /usr/bin/python3.12, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
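For reference, the two vault-encrypted values above are loaded from `/tmp/podman-KwE/tests/vars/vault-variables.yml`. A minimal sketch of what such a vars file looks like, with the ciphertext elided and the plaintext value purely illustrative (not taken from this run):

```yaml
# vars/vault-variables.yml -- hypothetical reconstruction; each entry can be
# produced with, e.g.:
#   ansible-vault encrypt_string --name '__podman_test_password' 'example-password'
__podman_test_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...ciphertext elided...
mysql_container_root_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...ciphertext elided...
```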
ok: [managed-node2]

TASK [Test is only supported on x86_64] ****************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:59
Saturday 14 February 2026  11:47:04 -0500 (0:00:01.188)       0:00:01.239 *****
skipping: [managed-node2] => {
    "false_condition": "ansible_facts[\"architecture\"] != \"x86_64\""
}

TASK [End test] ****************************************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:66
Saturday 14 February 2026  11:47:04 -0500 (0:00:00.020)       0:00:01.259 *****
META: end_play conditional evaluated to False, continuing play
skipping: [managed-node2] => {
    "skip_reason": "end_play conditional evaluated to False, continuing play"
}
MSG:
end_play

TASK [Run role - do not pull images] *******************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:74
Saturday 14 February 2026  11:47:04 -0500 (0:00:00.007)       0:00:01.267 *****
included: fedora.linux_system_roles.podman for managed-node2

TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3
Saturday 14 February 2026  11:47:04 -0500 (0:00:00.074)       0:00:01.342 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] ****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3
Saturday 14 February 2026  11:47:04 -0500 (0:00:00.034)       0:00:01.376 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11
Saturday 14 February 2026  11:47:04 -0500 (0:00:00.037)       0:00:01.413 *****
ok: [managed-node2] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16
Saturday 14 February 2026  11:47:05 -0500 (0:00:00.517)       0:00:01.931 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_is_ostree": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Check if transactional-update exists in /sbin] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:23
Saturday 14 February 2026  11:47:05 -0500 (0:00:00.032)       0:00:01.963 *****
ok: [managed-node2] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.podman : Set flag if transactional-update exists] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:28
Saturday 14 February 2026  11:47:05 -0500 (0:00:00.368)       0:00:02.332 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_is_transactional": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:32
Saturday 14 February 2026  11:47:05 -0500 (0:00:00.031)       0:00:02.364 *****
[WARNING]: TASK: fedora.linux_system_roles.podman : Set platform/version
specific variables: The loop variable '__vars_file' is already in use. You
should set the `loop_var` value in the `loop_control` option for the task to
something else to avoid variable collisions and unexpected behavior.
skipping: [managed-node2] => (item=RedHat.yml) => {
    "__vars_file": "RedHat.yml",
    "ansible_loop_var": "__vars_file",
    "changed": false,
    "false_condition": "__vars_file is file",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node2] => (item=CentOS.yml) => {
    "__vars_file": "CentOS.yml",
    "ansible_loop_var": "__vars_file",
    "changed": false,
    "false_condition": "__vars_file is file",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node2] => (item=CentOS_10.yml) => {
    "__vars_file": "CentOS_10.yml",
    "ansible_loop_var": "__vars_file",
    "changed": false,
    "false_condition": "__vars_file is file",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node2] => (item=CentOS_10.yml) => {
    "__vars_file": "CentOS_10.yml",
    "ansible_loop_var": "__vars_file",
    "changed": false,
    "false_condition": "__vars_file is file",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node2] => {
    "changed": false
}
MSG:
All items skipped

TASK [fedora.linux_system_roles.podman : Run systemctl] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:52
Saturday 14 February 2026  11:47:05 -0500 (0:00:00.048)       0:00:02.412 *****
ok: [managed-node2] => {
    "changed": false,
    "cmd": [
        "systemctl",
        "is-system-running"
    ],
    "delta": "0:00:00.009200",
    "end": "2026-02-14 11:47:06.426746",
    "failed_when_result": false,
    "rc": 0,
    "start": "2026-02-14 11:47:06.417546"
}

STDOUT:

running

TASK [fedora.linux_system_roles.podman : Require installed systemd] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:60
Saturday 14 February 2026  11:47:06 -0500 (0:00:00.499)       0:00:02.912 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "\"No such file or
directory\" in __is_system_running.msg | d(\"\")",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set flag to indicate that systemd runtime operations are available] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:65
Saturday 14 February 2026  11:47:06 -0500 (0:00:00.051)       0:00:02.963 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_is_booted": true
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Gather the package facts] *************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
Saturday 14 February 2026  11:47:06 -0500 (0:00:00.034)       0:00:02.998 *****
ok: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Enable copr if requested] *************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10
Saturday 14 February 2026  11:47:07 -0500 (0:00:01.327)       0:00:04.325 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_use_copr | d(false)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14
Saturday 14 February 2026  11:47:07 -0500 (0:00:00.039)       0:00:04.365 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "(__podman_packages | difference(ansible_facts.packages)) | list | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Notify user that reboot is needed to apply changes] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28
Saturday 14 February 2026  11:47:07 -0500 (0:00:00.042)       0:00:04.407 *****
skipping: [managed-node2] => {
    "false_condition": "__podman_is_transactional | d(false)"
}

TASK [fedora.linux_system_roles.podman : Reboot transactional update systems] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:33
Saturday 14 February 2026  11:47:08 -0500 (0:00:00.032)       0:00:04.440 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_is_transactional | d(false)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if reboot is needed and not set] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:38
Saturday 14 February 2026  11:47:08 -0500 (0:00:00.037)       0:00:04.477 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_is_transactional | d(false)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get podman version] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:46
Saturday 14 February 2026  11:47:08 -0500 (0:00:00.032)       0:00:04.510 *****
ok: [managed-node2] => {
    "changed": false,
    "cmd": [
        "podman",
        "--version"
    ],
    "delta": "0:00:00.023933",
    "end": "2026-02-14 11:47:08.433521",
    "rc": 0,
    "start": "2026-02-14 11:47:08.409588"
}

STDOUT:

podman version 5.6.0

TASK [fedora.linux_system_roles.podman : Set podman version] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:52
Saturday 14 February 2026  11:47:08 -0500 (0:00:00.033)       0:00:04.927 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "podman_version": "5.6.0"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56
Saturday 14 February 2026  11:47:08 -0500 (0:00:00.033)       0:00:04.961 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_version is version(\"4.2\", \"<\")",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:63
Saturday 14 February 2026  11:47:08 -0500 (0:00:00.035)       0:00:04.997 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_version is version(\"4.4\", \"<\")",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:73
Saturday 14 February 2026  11:47:08 -0500 (0:00:00.058)       0:00:05.055 *****
META: end_host conditional evaluated to False, continuing execution for managed-node2
skipping: [managed-node2] => {
    "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2"
}
MSG:
end_host conditional evaluated to false, continuing execution for managed-node2

TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80
Saturday 14 February 2026  11:47:08 -0500 (0:00:00.049)       0:00:05.105 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__has_type_pod or __has_pod_file_ext or __has_pod_file_src_ext or __has_pod_template_src_ext or __has_pod_template_src_ext_j2",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later
for Pod quadlets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:96
Saturday 14 February 2026  11:47:08 -0500 (0:00:00.049)       0:00:05.154 *****
META: end_host conditional evaluated to False, continuing execution for managed-node2
skipping: [managed-node2] => {
    "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2"
}
MSG:
end_host conditional evaluated to false, continuing execution for managed-node2

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:109
Saturday 14 February 2026  11:47:08 -0500 (0:00:00.029)       0:00:05.183 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026  11:47:08 -0500 (0:00:00.043)       0:00:05.227 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "getent_passwd": {
            "root": [
                "x",
                "0",
                "0",
                "Super User",
                "/root",
                "/bin/bash"
            ]
        }
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026  11:47:09 -0500 (0:00:00.475)       0:00:05.702 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026  11:47:09 -0500 (0:00:00.037)       0:00:05.740 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_group": "0"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026  11:47:09 -0500 (0:00:00.040)       0:00:05.781 *****
ok: [managed-node2] => {
    "changed": false,
    "stat": {
        "atime": 1771087317.803642,
        "attr_flags": "",
        "attributes": [],
        "block_size": 4096,
        "blocks": 32,
        "charset": "binary",
        "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a",
        "ctime": 1771087310.205592,
        "dev": 51714,
        "device_type": 0,
        "executable": true,
        "exists": true,
        "gid": 0,
        "gr_name": "root",
        "inode": 9163113,
        "isblk": false,
        "ischr": false,
        "isdir": false,
        "isfifo": false,
        "isgid": false,
        "islnk": false,
        "isreg": true,
        "issock": false,
        "isuid": false,
        "mimetype": "application/x-pie-executable",
        "mode": "0755",
        "mtime": 1764201600.0,
        "nlink": 1,
        "path": "/usr/bin/getsubids",
        "pw_name": "root",
        "readable": true,
        "rgrp": true,
        "roth": true,
        "rusr": true,
        "size": 15560,
        "uid": 0,
        "version": "2735365168",
        "wgrp": false,
        "woth": false,
        "writeable": true,
        "wusr": true,
        "xgrp": true,
        "xoth": true,
        "xusr": true
    }
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026  11:47:09 -0500 (0:00:00.395)       0:00:06.177 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_handle_user not in [\"root\", \"0\"]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026  11:47:09 -0500 (0:00:00.032)       0:00:06.209 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_handle_user not in [\"root\", \"0\"]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026  11:47:09 -0500 (0:00:00.029)       0:00:06.238 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_handle_user not in [\"root\", \"0\"]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026  11:47:09 -0500 (0:00:00.018)       0:00:06.257 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026  11:47:09 -0500 (0:00:00.021)       0:00:06.279 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026  11:47:09 -0500 (0:00:00.020)       0:00:06.300 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026  11:47:09 -0500 (0:00:00.055)       0:00:06.355 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026  11:47:09 -0500 (0:00:00.024)       0:00:06.380 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set config file paths] ****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:115
Saturday 14 February 2026  11:47:09 -0500 (0:00:00.020)       0:00:06.400 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf",
        "__podman_parent_mode": "0755",
        "__podman_parent_path": "/etc/containers",
        "__podman_policy_json_file": "/etc/containers/policy.json",
        "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf",
        "__podman_storage_conf_file": "/etc/containers/storage.conf"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Handle container.conf.d] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:126
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.052)       0:00:06.453 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure containers.d exists]
***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.036)       0:00:06.489 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_containers_conf | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Update container config file] *********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.018)       0:00:06.508 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_containers_conf | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] *************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.018)       0:00:06.526 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.036)       0:00:06.563 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_registries_conf | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Update registries config file] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.018)       0:00:06.582 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_registries_conf | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Handle storage.conf] ******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:132
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.020)       0:00:06.602 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:7
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.043)       0:00:06.646 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_storage_conf | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Update storage config file] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:15
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.024)       0:00:06.671 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_storage_conf | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Handle policy.json] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:135
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.033)       0:00:06.704 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:8
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.044)       0:00:06.748 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_policy_json | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:16
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.022)       0:00:06.771 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_policy_json | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get the existing policy.json] *********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:21
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.017)       0:00:06.789 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_policy_json | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Write new policy.json file] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:27
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.018)       0:00:06.807 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_policy_json | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [Manage firewall for specified ports] *************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:141
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.017)       0:00:06.825 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_firewall | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [Manage selinux for specified ports] **************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:148
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.019)       0:00:06.844 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_selinux_ports | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:155
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.018)       0:00:06.862 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_cancel_user_linger": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] *******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:159
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.018)       0:00:06.880 *****
skipping: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Handle credential files - present] ****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:168
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.014)       0:00:06.895 *****
skipping: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Handle secrets] ***********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:177
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.016)       0:00:06.911 *****
skipping: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK
[fedora.linux_system_roles.podman : Handle Kubernetes specifications] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:184
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.015)       0:00:06.926 *****
skipping: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Handle Quadlet specifications] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:191
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.015)       0:00:06.942 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))

TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.063)       0:00:07.005 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_quadlet_file_src": "",
        "__podman_quadlet_spec": {
            "Container": {
                "ContainerName": "nopull",
                "Image": "quay.io/libpod/testimage:20210610"
            },
            "Install": {
                "WantedBy": "default.target"
            }
        },
        "__podman_quadlet_str": "",
        "__podman_quadlet_template_src": ""
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.045)       0:00:07.050 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_continue_if_pull_fails": false,
        "__podman_pull_image": false,
        "__podman_state": "created",
        "__podman_systemd_unit_scope": "",
        "__podman_user": "root"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.041)       0:00:07.092 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_quadlet_spec | length == 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.030)       0:00:07.123 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_quadlet_name": "nopull",
        "__podman_quadlet_type": "container",
        "__podman_rootless": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.054)       0:00:07.178 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.057)       0:00:07.235 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.038)       0:00:07.274 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.035)       0:00:07.309 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_group": "0"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026  11:47:10 -0500 (0:00:00.047)       0:00:07.357 *****
ok: [managed-node2] => {
    "changed": false,
    "stat": {
        "atime": 1771087317.803642,
        "attr_flags": "",
        "attributes": [],
        "block_size": 4096,
        "blocks": 32,
        "charset": "binary",
        "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a",
        "ctime": 1771087310.205592,
        "dev": 51714,
        "device_type": 0,
        "executable": true,
        "exists": true,
        "gid": 0,
        "gr_name": "root",
        "inode": 9163113,
        "isblk": false,
        "ischr": false,
        "isdir": false,
        "isfifo": false,
        "isgid": false,
        "islnk": false,
        "isreg": true,
        "issock": false,
        "isuid": false,
        "mimetype": "application/x-pie-executable",
        "mode": "0755",
        "mtime": 1764201600.0,
        "nlink": 1,
        "path": "/usr/bin/getsubids",
        "pw_name": "root",
        "readable": true,
        "rgrp": true,
        "roth": true,
        "rusr": true,
        "size": 15560,
        "uid": 0,
        "version": "2735365168",
        "wgrp": false,
        "woth": false,
        "writeable": true,
        "wusr": true,
        "xgrp": true,
        "xoth": true,
        "xusr": true
    }
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026  11:47:11
-0500 (0:00:00.409) 0:00:07.767 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.046) 0:00:07.814 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.021) 0:00:07.835 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.020) 0:00:07.855 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.020) 0:00:07.875 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : 
Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.021) 0:00:07.896 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.019) 0:00:07.916 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.019) 0:00:07.936 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.019) 0:00:07.956 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": false, "__podman_images_found": [ "quay.io/libpod/testimage:20210610" ], "__podman_kube_yamls_raw": "", "__podman_service_name": "nopull.service", "__podman_systemd_scope": "system", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.036) 0:00:07.992 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.026) 0:00:08.019 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_kube_yamls_raw | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.021) 0:00:08.040 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [ "quay.io/libpod/testimage:20210610" ], "__podman_quadlet_file": "/etc/containers/systemd/nopull.container", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.059) 0:00:08.099 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.027) 0:00:08.127 ***** skipping: [managed-node2] => { "changed": false, 
"false_condition": "__podman_state == \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.016) 0:00:08.144 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:2 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.044) 0:00:08.189 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.030) 0:00:08.219 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.017) 0:00:08.236 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 
February 2026 11:47:11 -0500 (0:00:00.019) 0:00:08.256 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create host directories] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:7 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.018) 0:00:08.274 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:18 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.015) 0:00:08.289 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.032) 0:00:08.322 ***** skipping: [managed-node2] => (item=None) => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle images when not booted] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:25 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.024) 0:00:08.346 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional 
result was False" } TASK [fedora.linux_system_roles.podman : Ensure the quadlet directory is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:21 Saturday 14 February 2026 11:47:11 -0500 (0:00:00.016) 0:00:08.363 ***** ok: [managed-node2] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/containers/systemd", "secontext": "system_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is copied] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:32 Saturday 14 February 2026 11:47:12 -0500 (0:00:00.505) 0:00:08.868 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_file_src | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file content is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:44 Saturday 14 February 2026 11:47:12 -0500 (0:00:00.021) 0:00:08.890 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_str | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is present] ******* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:57 Saturday 14 February 2026 11:47:12 -0500 (0:00:00.020) 0:00:08.910 ***** changed: [managed-node2] => { "changed": true, "checksum": "670d64fc68a9768edb20cad26df2acc703542d85", "dest": "/etc/containers/systemd/nopull.container", "gid": 0, "group": "root", "md5sum": "cedb6667f6cd1b033fe06e2810fe6b19", "mode": "0644", "owner": "root", "secontext": 
"system_u:object_r:etc_t:s0", "size": 151, "src": "/root/.ansible/tmp/ansible-tmp-1771087632.5337622-22450-265320183664731/.source.container", "state": "file", "uid": 0 } TASK [fedora.linux_system_roles.podman : Reload systemctl] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:69 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.831) 0:00:09.742 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_activate_systemd_unit | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Start service] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:98 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.040) 0:00:09.782 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_activate_systemd_unit | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Restart service] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:115 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.034) 0:00:09.817 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_activate_systemd_unit | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Cancel linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:198 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.027) 0:00:09.845 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Handle credential files - absent] ***** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:205 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.019) 0:00:09.864 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - absent] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:214 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.016) 0:00:09.881 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [Verify image not pulled] ************************************************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:91 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.023) 0:00:09.905 ***** ok: [managed-node2] => { "changed": false } MSG: All assertions passed TASK [Run role - try to pull bogus image] ************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:95 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.019) 0:00:09.924 ***** included: fedora.linux_system_roles.podman for managed-node2 TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.069) 0:00:09.994 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] **** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.058) 0:00:10.052 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.026) 0:00:10.078 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.022) 0:00:10.100 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if transactional-update exists in /sbin] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:23 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.020) 0:00:10.121 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag if transactional-update exists] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:28 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.029) 0:00:10.150 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not 
__podman_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:32 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.029) 0:00:10.179 ***** skipping: [managed-node2] => (item=RedHat.yml) => { "__vars_file": "RedHat.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS.yml) => { "__vars_file": "CentOS.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.podman : Run systemctl] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:52 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.045) 0:00:10.225 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Require installed systemd] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:60 Saturday 14 February 2026 11:47:13 -0500 
(0:00:00.026) 0:00:10.251 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate that systemd runtime operations are available] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:65 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.018) 0:00:10.270 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Gather the package facts] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 Saturday 14 February 2026 11:47:13 -0500 (0:00:00.024) 0:00:10.295 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Enable copr if requested] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10 Saturday 14 February 2026 11:47:14 -0500 (0:00:01.035) 0:00:11.331 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_use_copr | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14 Saturday 14 February 2026 11:47:14 -0500 (0:00:00.031) 0:00:11.362 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "(__podman_packages | difference(ansible_facts.packages)) | list | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Notify user that reboot is needed 
to apply changes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28 Saturday 14 February 2026 11:47:14 -0500 (0:00:00.039) 0:00:11.402 ***** skipping: [managed-node2] => { "false_condition": "__podman_is_transactional | d(false)" } TASK [fedora.linux_system_roles.podman : Reboot transactional update systems] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:33 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.032) 0:00:11.434 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if reboot is needed and not set] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:38 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.020) 0:00:11.455 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get podman version] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:46 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.022) 0:00:11.478 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.021637", "end": "2026-02-14 11:47:15.386051", "rc": 0, "start": "2026-02-14 11:47:15.364414" } STDOUT: podman version 5.6.0 TASK [fedora.linux_system_roles.podman : Set podman version] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:52 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.411) 0:00:11.889 ***** ok: [managed-node2] => { "ansible_facts": { "podman_version": "5.6.0" }, "changed": false } TASK 
[fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.026) 0:00:11.915 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.2\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:63 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.021) 0:00:11.937 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.4\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:73 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.020) 0:00:11.957 ***** META: end_host conditional evaluated to False, continuing execution for managed-node2 skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" } MSG: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.020) 0:00:11.978 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__has_type_pod or __has_pod_file_ext or __has_pod_file_src_ext or __has_pod_template_src_ext or __has_pod_template_src_ext_j2", "skip_reason": "Conditional result 
was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:96 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.036) 0:00:12.014 ***** META: end_host conditional evaluated to False, continuing execution for managed-node2 skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" } MSG: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:109 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.023) 0:00:12.038 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.040) 0:00:12.078 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.023) 0:00:12.101 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set 
group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.023) 0:00:12.125 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:47:15 -0500 (0:00:00.028) 0:00:12.154 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.386) 0:00:12.540 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] 
*** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.023) 0:00:12.563 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.023) 0:00:12.587 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.027) 0:00:12.615 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.031) 0:00:12.646 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.030) 0:00:12.677 ***** skipping: [managed-node2] => { "changed": false, 
"false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.034) 0:00:12.712 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.074) 0:00:12.787 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set config file paths] **************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:115 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.034) 0:00:12.821 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_parent_mode": "0755", "__podman_parent_path": "/etc/containers", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle container.conf.d] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:126 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.090) 0:00:12.911 ***** included: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.060) 0:00:12.972 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update container config file] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.032) 0:00:13.004 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.030) 0:00:13.035 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.062) 0:00:13.098 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update registries config file] ******** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.032) 0:00:13.130 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle storage.conf] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:132 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.029) 0:00:13.160 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:7 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.063) 0:00:13.223 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update storage config file] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:15 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.032) 0:00:13.256 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle policy.json] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:135 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.030) 0:00:13.287 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for 
managed-node2 TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:8 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.067) 0:00:13.354 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:16 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.031) 0:00:13.386 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get the existing policy.json] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:21 Saturday 14 February 2026 11:47:16 -0500 (0:00:00.033) 0:00:13.419 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Write new policy.json file] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:27 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.032) 0:00:13.451 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [Manage firewall for specified ports] ************************************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:141 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.029) 0:00:13.481 ***** 
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_firewall | length > 0", "skip_reason": "Conditional result was False" } TASK [Manage selinux for specified ports] ************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:148 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.030) 0:00:13.512 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_selinux_ports | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:155 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.020) 0:00:13.532 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] ******* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:159 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.022) 0:00:13.555 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle credential files - present] **** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:168 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.015) 0:00:13.571 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle secrets] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:177 Saturday 14 February 
2026 11:47:17 -0500 (0:00:00.019) 0:00:13.591 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle Kubernetes specifications] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:184 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.018) 0:00:13.609 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle Quadlet specifications] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:191 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.022) 0:00:13.631 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log)) TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.056) 0:00:13.688 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Container": { "ContainerName": "bogus", "Image": "this_is_a_bogus_image" }, "Install": { "WantedBy": "default.target" } }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.030) 0:00:13.719 ***** ok: [managed-node2] => { "ansible_facts": { 
"__podman_continue_if_pull_fails": true, "__podman_pull_image": true, "__podman_state": "created", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.027) 0:00:13.746 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.018) 0:00:13.765 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "bogus", "__podman_quadlet_type": "container", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.034) 0:00:13.800 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.035) 0:00:13.835 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.027) 0:00:13.862 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.023) 0:00:13.885 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.067) 0:00:13.953 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK 
[fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.397) 0:00:14.350 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:47:17 -0500 (0:00:00.024) 0:00:14.374 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.056) 0:00:14.430 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.029) 0:00:14.460 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:47:18 
-0500 (0:00:00.020) 0:00:14.481 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.023) 0:00:14.505 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.022) 0:00:14.527 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.020) 0:00:14.548 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.020) 0:00:14.568 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": false, "__podman_images_found": [ "this_is_a_bogus_image" ], "__podman_kube_yamls_raw": "", "__podman_service_name": "bogus.service", 
"__podman_systemd_scope": "system", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.036) 0:00:14.604 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.029) 0:00:14.633 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_kube_yamls_raw | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.021) 0:00:14.655 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [ "this_is_a_bogus_image" ], "__podman_quadlet_file": "/etc/containers/systemd/bogus.container", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.066) 0:00:14.721 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.043) 0:00:14.764 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state == \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.032) 0:00:14.797 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:2 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.075) 0:00:14.872 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.051) 0:00:14.924 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.031) 0:00:14.956 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional 
result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.032) 0:00:14.989 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create host directories] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:7 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.044) 0:00:15.033 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:18 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.027) 0:00:15.061 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2 Saturday 14 February 2026 11:47:18 -0500 (0:00:00.053) 0:00:15.114 ***** ok: [managed-node2] => (item=None) => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle images when not booted] ******** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:25 Saturday 14 February 2026 11:47:19 -0500 (0:00:00.674) 0:00:15.789 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure the quadlet directory is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:21 Saturday 14 February 2026 11:47:19 -0500 (0:00:00.018) 0:00:15.807 ***** ok: [managed-node2] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/containers/systemd", "secontext": "system_u:object_r:etc_t:s0", "size": 30, "state": "directory", "uid": 0 } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is copied] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:32 Saturday 14 February 2026 11:47:19 -0500 (0:00:00.415) 0:00:16.223 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_file_src | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file content is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:44 Saturday 14 February 2026 11:47:19 -0500 (0:00:00.021) 0:00:16.245 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_str | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is present] ******* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:57 Saturday 14 February 2026 11:47:19 -0500 (0:00:00.021) 
0:00:16.267 *****
changed: [managed-node2] => { "changed": true, "checksum": "1d087e679d135214e8ac9ccaf33b2222916efb7f", "dest": "/etc/containers/systemd/bogus.container", "gid": 0, "group": "root", "md5sum": "97480a9a73734d9f8007d2c06e7fed1f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 138, "src": "/root/.ansible/tmp/ansible-tmp-1771087639.8898952-22814-104891175911053/.source.container", "state": "file", "uid": 0 }

TASK [fedora.linux_system_roles.podman : Reload systemctl] *********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:69
Saturday 14 February 2026 11:47:20 -0500 (0:00:00.728) 0:00:16.995 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_activate_systemd_unit | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Start service] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:98
Saturday 14 February 2026 11:47:20 -0500 (0:00:00.021) 0:00:17.017 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_activate_systemd_unit | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Restart service] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:115
Saturday 14 February 2026 11:47:20 -0500 (0:00:00.035) 0:00:17.047 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_activate_systemd_unit | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Cancel linger] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:198
Saturday 14 February 2026 11:47:20 -0500 (0:00:00.035) 0:00:17.082 *****
skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" }

TASK [fedora.linux_system_roles.podman : Handle credential files - absent] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:205
Saturday 14 February 2026 11:47:20 -0500 (0:00:00.015) 0:00:17.098 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle certs.d files - absent] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:214
Saturday 14 February 2026 11:47:20 -0500 (0:00:00.019) 0:00:17.118 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [Verify image not pulled and no error] ************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:111
Saturday 14 February 2026 11:47:20 -0500 (0:00:00.021) 0:00:17.139 *****
ok: [managed-node2] => { "changed": false }
MSG: All assertions passed

TASK [Cleanup] *****************************************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:118
Saturday 14 February 2026 11:47:20 -0500 (0:00:00.037) 0:00:17.177 *****
included: fedora.linux_system_roles.podman for managed-node2 => (item=nopull)
included: fedora.linux_system_roles.podman for managed-node2 => (item=bogus)

TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3
Saturday 14 February 2026 11:47:21 -0500 (0:00:00.257) 0:00:17.434 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] ****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3
Saturday 14 February 2026 11:47:21 -0500 (0:00:00.038) 0:00:17.473 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11
Saturday 14 February 2026 11:47:21 -0500 (0:00:00.033) 0:00:17.507 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16
Saturday 14 February 2026 11:47:21 -0500 (0:00:00.022) 0:00:17.530 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check if transactional-update exists in /sbin] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:23
Saturday 14 February 2026 11:47:21 -0500 (0:00:00.019) 0:00:17.549 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set flag if transactional-update exists] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:28
Saturday 14 February 2026 11:47:21 -0500 (0:00:00.018) 0:00:17.567 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:32
Saturday 14 February 2026 11:47:21 -0500 (0:00:00.021) 0:00:17.589 *****
skipping: [managed-node2] => (item=RedHat.yml) => { "__vars_file": "RedHat.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" }
skipping: [managed-node2] => (item=CentOS.yml) => { "__vars_file": "CentOS.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" }
skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" }
skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" }
skipping: [managed-node2] => { "changed": false }
MSG: All items skipped

TASK [fedora.linux_system_roles.podman : Run systemctl] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:52
Saturday 14 February 2026 11:47:21 -0500 (0:00:00.043) 0:00:17.633 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Require installed systemd] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:60
Saturday 14 February 2026 11:47:21 -0500 (0:00:00.021) 0:00:17.654 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set flag to indicate that systemd runtime operations are available] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:65
Saturday 14 February 2026 11:47:21 -0500 (0:00:00.020) 0:00:17.674 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Gather the package facts] *************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
Saturday 14 February 2026 11:47:21 -0500 (0:00:00.020) 0:00:17.695 *****
ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Enable copr if requested] *************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.986) 0:00:18.681 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_use_copr | d(false)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.020) 0:00:18.702 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "(__podman_packages | difference(ansible_facts.packages)) | list | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Notify user that reboot is needed to apply changes] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.025) 0:00:18.727 *****
skipping: [managed-node2] => { "false_condition": "__podman_is_transactional | d(false)" }

TASK [fedora.linux_system_roles.podman : Reboot transactional update systems] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:33
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.019) 0:00:18.747 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if reboot is needed and not set] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:38
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.018) 0:00:18.766 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get podman version] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:46
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.021) 0:00:18.787 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.022423", "end": "2026-02-14 11:47:22.711458", "rc": 0, "start": "2026-02-14 11:47:22.689035" }
STDOUT: podman version 5.6.0

TASK [fedora.linux_system_roles.podman : Set podman version] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:52
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.419) 0:00:19.206 *****
ok: [managed-node2] => { "ansible_facts": { "podman_version": "5.6.0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.039) 0:00:19.246 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.2\", \"<\")", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:63
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.029) 0:00:19.276 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.4\", \"<\")", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:73
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.036) 0:00:19.313 *****
META: end_host conditional evaluated to False, continuing execution for managed-node2
skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" }
MSG: end_host conditional evaluated to false, continuing execution for managed-node2

TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.025) 0:00:19.338 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__has_type_pod or __has_pod_file_ext or __has_pod_file_src_ext or __has_pod_template_src_ext or __has_pod_template_src_ext_j2", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:96
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.042) 0:00:19.381 *****
META: end_host conditional evaluated to False, continuing execution for managed-node2
skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" }
MSG: end_host conditional evaluated to false, continuing execution for managed-node2

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:109
Saturday 14 February 2026 11:47:22 -0500 (0:00:00.029) 0:00:19.410 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.043) 0:00:19.454 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.032) 0:00:19.486 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.023) 0:00:19.509 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.032) 0:00:19.542 *****
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.376) 0:00:19.919 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.022) 0:00:19.941 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.021) 0:00:19.962 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.020) 0:00:19.982 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.021) 0:00:20.004 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.052) 0:00:20.057 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.022) 0:00:20.079 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.020) 0:00:20.100 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set config file paths] ****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:115
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.020) 0:00:20.121 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_parent_mode": "0755", "__podman_parent_path": "/etc/containers", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Handle container.conf.d] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:126
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.067) 0:00:20.188 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.036) 0:00:20.224 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Update container config file] *********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.019) 0:00:20.244 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] *************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.020) 0:00:20.265 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.051) 0:00:20.317 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Update registries config file] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.021) 0:00:20.339 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Handle storage.conf] ******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:132
Saturday 14 February 2026 11:47:23 -0500 (0:00:00.028) 0:00:20.367 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:7
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.065) 0:00:20.433 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Update storage config file] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:15
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.034) 0:00:20.467 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Handle policy.json] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:135
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.032) 0:00:20.500 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:8
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.070) 0:00:20.570 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:16
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.030) 0:00:20.601 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get the existing policy.json] *********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:21
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.021) 0:00:20.622 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Write new policy.json file] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:27
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.021) 0:00:20.644 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" }

TASK [Manage firewall for specified ports] *************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:141
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.026) 0:00:20.670 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_firewall | length > 0", "skip_reason": "Conditional result was False" }

TASK [Manage selinux for specified ports] **************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:148
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.019) 0:00:20.690 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_selinux_ports | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:155
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.025) 0:00:20.715 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] *******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:159
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.019) 0:00:20.734 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle credential files - present] ****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:168
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.020) 0:00:20.755 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle secrets] ***********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:177
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.016) 0:00:20.771 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle Kubernetes specifications] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:184
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.016) 0:00:20.787 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle Quadlet specifications] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:191
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.015) 0:00:20.802 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))

TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.052) 0:00:20.855 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.028) 0:00:20.884 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.028) 0:00:20.913 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.022) 0:00:20.935 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "nopull", "__podman_quadlet_type": "container", "__podman_rootless": false }, "changed": false }

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.037) 0:00:20.973 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.051) 0:00:21.025 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.038) 0:00:21.063 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.037) 0:00:21.100 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:47:24 -0500 (0:00:00.048) 0:00:21.149 *****
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.444) 0:00:21.593 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.023) 0:00:21.617 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.025) 0:00:21.642 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.030) 0:00:21.673 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.037) 0:00:21.710 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.035) 0:00:21.746 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.034) 0:00:21.780 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.023) 0:00:21.804 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.026) 0:00:21.830 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "nopull.service", "__podman_systemd_scope": "system", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.042) 0:00:21.872 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get kube yaml contents] ***************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.034) 0:00:21.907 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.018) 0:00:21.926 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/nopull.container", "__podman_volumes": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108
Saturday 14 February 2026 11:47:25 -0500 (0:00:00.068) 0:00:21.994 *****
ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK
[fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:47:25 -0500 (0:00:00.025) 0:00:22.019 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 14 February 2026 11:47:25 -0500 (0:00:00.054) 0:00:22.074 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 14 February 2026 11:47:25 -0500 (0:00:00.017) 0:00:22.092 ***** ok: [managed-node2] => { "changed": false, "failed_when_result": false } MSG: Could not find the requested service nopull.service: host TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:35 Saturday 14 February 2026 11:47:26 -0500 (0:00:00.765) 0:00:22.857 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087633.2452886, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "670d64fc68a9768edb20cad26df2acc703542d85", "ctime": 1771087633.2482886, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 356516044, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": 
false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1771087632.8762865, "nlink": 1, "path": "/etc/containers/systemd/nopull.container", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 151, "uid": 0, "version": "247161805", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:40 Saturday 14 February 2026 11:47:26 -0500 (0:00:00.405) 0:00:23.263 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Slurp quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:6 Saturday 14 February 2026 11:47:26 -0500 (0:00:00.055) 0:00:23.319 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:12 Saturday 14 February 2026 11:47:27 -0500 (0:00:00.528) 0:00:23.848 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet yaml file] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:20 Saturday 14 February 2026 11:47:27 -0500 (0:00:00.044) 0:00:23.892 ***** skipping: 
[managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Reset raw variable] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:28 Saturday 14 February 2026 11:47:27 -0500 (0:00:00.033) 0:00:23.926 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_raw": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:44 Saturday 14 February 2026 11:47:27 -0500 (0:00:00.031) 0:00:23.957 ***** changed: [managed-node2] => { "changed": true, "path": "/etc/containers/systemd/nopull.container", "state": "absent" } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:52 Saturday 14 February 2026 11:47:27 -0500 (0:00:00.410) 0:00:24.368 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:60 Saturday 14 February 2026 11:47:27 -0500 (0:00:00.019) 0:00:24.387 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:70 Saturday 14 February 2026 11:47:28 -0500 (0:00:00.747) 0:00:25.134 ***** ok: [managed-node2] => { 
"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.451) 0:00:25.586 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:133 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.050) 0:00:25.637 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.039) 0:00:25.676 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_prune_images | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:148 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.033) 0:00:25.710 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 
February 2026 11:47:29 -0500 (0:00:00.057) 0:00:25.768 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.033) 0:00:25.802 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.030) 0:00:25.833 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:158 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.032) 0:00:25.865 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:167 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.035) 0:00:25.901 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** 
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:176 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.036) 0:00:25.937 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:185 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.034) 0:00:25.972 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:194 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.028) 0:00:26.001 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - pods] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:204 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.025) 0:00:26.027 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.059) 0:00:26.086 ***** skipping: [managed-node2] => { "changed": false, 
"false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.029) 0:00:26.116 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Cancel linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:198 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.021) 0:00:26.138 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Handle credential files - absent] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:205 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.016) 0:00:26.155 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - absent] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:214 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.018) 0:00:26.174 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.022) 0:00:26.196 ***** 
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] **** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.033) 0:00:26.230 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.025) 0:00:26.255 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.018) 0:00:26.274 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if transactional-update exists in /sbin] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:23 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.018) 0:00:26.293 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag if transactional-update exists] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:28 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.021) 0:00:26.314 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:32 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.023) 0:00:26.337 ***** skipping: [managed-node2] => (item=RedHat.yml) => { "__vars_file": "RedHat.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS.yml) => { "__vars_file": "CentOS.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.podman : Run systemctl] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:52 Saturday 14 February 2026 11:47:29 -0500 (0:00:00.054) 0:00:26.392 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was 
False" } TASK [fedora.linux_system_roles.podman : Require installed systemd] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:60 Saturday 14 February 2026 11:47:30 -0500 (0:00:00.039) 0:00:26.432 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate that systemd runtime operations are available] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:65 Saturday 14 February 2026 11:47:30 -0500 (0:00:00.035) 0:00:26.467 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Gather the package facts] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 Saturday 14 February 2026 11:47:30 -0500 (0:00:00.020) 0:00:26.487 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Enable copr if requested] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10 Saturday 14 February 2026 11:47:31 -0500 (0:00:01.024) 0:00:27.512 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_use_copr | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14 Saturday 14 February 2026 11:47:31 -0500 (0:00:00.020) 0:00:27.533 ***** skipping: [managed-node2] => 
{ "changed": false, "false_condition": "(__podman_packages | difference(ansible_facts.packages)) | list | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Notify user that reboot is needed to apply changes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28 Saturday 14 February 2026 11:47:31 -0500 (0:00:00.025) 0:00:27.559 ***** skipping: [managed-node2] => { "false_condition": "__podman_is_transactional | d(false)" } TASK [fedora.linux_system_roles.podman : Reboot transactional update systems] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:33 Saturday 14 February 2026 11:47:31 -0500 (0:00:00.024) 0:00:27.583 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if reboot is needed and not set] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:38 Saturday 14 February 2026 11:47:31 -0500 (0:00:00.021) 0:00:27.605 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get podman version] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:46 Saturday 14 February 2026 11:47:31 -0500 (0:00:00.026) 0:00:27.631 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.022140", "end": "2026-02-14 11:47:31.561049", "rc": 0, "start": "2026-02-14 11:47:31.538909" } STDOUT: podman version 5.6.0 TASK [fedora.linux_system_roles.podman : Set podman version] ******************* task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:52 Saturday 14 February 2026 11:47:31 -0500 (0:00:00.411) 0:00:28.043 ***** ok: [managed-node2] => { "ansible_facts": { "podman_version": "5.6.0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56 Saturday 14 February 2026 11:47:31 -0500 (0:00:00.021) 0:00:28.064 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.2\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:63 Saturday 14 February 2026 11:47:31 -0500 (0:00:00.020) 0:00:28.085 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.4\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:73 Saturday 14 February 2026 11:47:31 -0500 (0:00:00.026) 0:00:28.111 ***** META: end_host conditional evaluated to False, continuing execution for managed-node2 skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" } MSG: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80 Saturday 14 February 2026 11:47:31 
-0500 (0:00:00.014) 0:00:28.126 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__has_type_pod or __has_pod_file_ext or __has_pod_file_src_ext or __has_pod_template_src_ext or __has_pod_template_src_ext_j2", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:96 Saturday 14 February 2026 11:47:31 -0500 (0:00:00.049) 0:00:28.175 ***** META: end_host conditional evaluated to False, continuing execution for managed-node2 skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" } MSG: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:109 Saturday 14 February 2026 11:47:31 -0500 (0:00:00.040) 0:00:28.216 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:47:31 -0500 (0:00:00.066) 0:00:28.283 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 
11:47:31 -0500 (0:00:00.025) 0:00:28.308 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:47:31 -0500 (0:00:00.025) 0:00:28.334 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:47:31 -0500 (0:00:00.035) 0:00:28.370 *****
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.431) 0:00:28.801 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.029) 0:00:28.830 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.023) 0:00:28.854 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.022) 0:00:28.876 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.026) 0:00:28.902 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.023) 0:00:28.925 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.022) 0:00:28.948 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.020) 0:00:28.969 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set config file paths] ****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:115
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.021) 0:00:28.990 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_parent_mode": "0755", "__podman_parent_path": "/etc/containers", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Handle container.conf.d] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:126
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.050) 0:00:29.041 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.037) 0:00:29.078 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Update container config file] *********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.020) 0:00:29.098 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] *************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.019) 0:00:29.118 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.038) 0:00:29.156 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Update registries config file] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.020) 0:00:29.176 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Handle storage.conf] ******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:132
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.019) 0:00:29.195 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:7
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.038) 0:00:29.234 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Update storage config file] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:15
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.019) 0:00:29.254 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Handle policy.json] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:135
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.019) 0:00:29.273 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:8
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.040) 0:00:29.314 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:16
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.022) 0:00:29.336 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get the existing policy.json] *********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:21
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.022) 0:00:29.359 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Write new policy.json file] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:27
Saturday 14 February 2026 11:47:32 -0500 (0:00:00.032) 0:00:29.391 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" }

TASK [Manage firewall for specified ports] *************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:141
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.032) 0:00:29.424 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_firewall | length > 0", "skip_reason": "Conditional result was False" }

TASK [Manage selinux for specified ports] **************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:148
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.019) 0:00:29.444 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_selinux_ports | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:155
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.021) 0:00:29.466 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] *******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:159
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.023) 0:00:29.489 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle credential files - present] ****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:168
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.018) 0:00:29.508 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle secrets] ***********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:177
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.035) 0:00:29.543 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle Kubernetes specifications] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:184
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.020) 0:00:29.564 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle Quadlet specifications] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:191
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.021) 0:00:29.586 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))

TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.055) 0:00:29.642 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.034) 0:00:29.676 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.028) 0:00:29.705 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.023) 0:00:29.728 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "bogus", "__podman_quadlet_type": "container", "__podman_rootless": false }, "changed": false }

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.036) 0:00:29.764 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.072) 0:00:29.837 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.025) 0:00:29.862 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.023) 0:00:29.886 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.034) 0:00:29.920 *****
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.408) 0:00:30.328 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.035) 0:00:30.364 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:47:33 -0500 (0:00:00.034) 0:00:30.398 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.037) 0:00:30.436 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.035) 0:00:30.472 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.034) 0:00:30.506 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.033) 0:00:30.540 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.063) 0:00:30.603 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.040) 0:00:30.644 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "bogus.service", "__podman_systemd_scope": "system", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.051) 0:00:30.695 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get kube yaml contents] ***************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.031) 0:00:30.727 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.025) 0:00:30.752 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/bogus.container", "__podman_volumes": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.073) 0:00:30.826 *****
ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Cleanup quadlets] *********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.027) 0:00:30.854 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.053) 0:00:30.907 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stop and disable service] *************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12
Saturday 14 February 2026 11:47:34 -0500 (0:00:00.019) 0:00:30.927 *****
changed: [managed-node2] => { "changed": true, "enabled": false, "failed_when_result": false, "name": "bogus.service", "state": "stopped", "status": { "AccessSELinuxContext": "system_u:object_r:systemd_generator_unit_file_t:s0", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "systemd-journald.socket sysinit.target system.slice basic.target network-online.target -.mount", "AllowIsolate": "no", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target multi-user.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "yes", "DelegateControllers": "cpu cpuset io memory pids", "Description": "bogus.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "Environment": "PODMAN_SYSTEMD_UNIT=bogus.service", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman run --name bogus --replace --rm --cgroups=split --sdnotify=conmon -d this_is_a_bogus_image ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman run --name bogus --replace --rm --cgroups=split --sdnotify=conmon -d this_is_a_bogus_image ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman rm -v -f -i bogus ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman rm -v -f -i bogus ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopPost": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman rm -v -f -i bogus ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopPostEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman rm -v -f -i bogus ; flags=ignore-failure ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/systemd/generator/bogus.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "bogus.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3098968064", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "[not set]", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "[not set]", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "bogus.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMPolicy": "continue", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "sysinit.target -.mount system.slice", "RequiresMountsFor": "/run/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "SourcePath": "/etc/containers/systemd/bogus.container", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestampMonotonic": "0", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "bogus", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-online.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "infinity" } }

TASK [fedora.linux_system_roles.podman : See if quadlet file exists] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:35
Saturday 14 February 2026 11:47:35 -0500 (0:00:00.824) 0:00:31.751 *****
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087640.5093296, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "1d087e679d135214e8ac9ccaf33b2222916efb7f", "ctime": 1771087640.5123296, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 423624911, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1771087640.241328, "nlink": 1, "path": "/etc/containers/systemd/bogus.container", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 138, "uid": 0, "version": "3508141222", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } }

TASK [fedora.linux_system_roles.podman : Parse quadlet file] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:40
Saturday 14 February 2026 11:47:35 -0500 (0:00:00.395) 0:00:32.146 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Slurp quadlet file] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:6
Saturday 14 February 2026 11:47:35 -0500 (0:00:00.032) 0:00:32.179 *****
ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Parse quadlet file] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:12
Saturday 14 February 2026 11:47:36 -0500 (0:00:00.380) 0:00:32.559 *****
ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Parse quadlet yaml file] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:20
Saturday 14 February 2026 11:47:36 -0500 (0:00:00.031) 0:00:32.591 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Reset raw variable] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:28
Saturday 14 February 2026 11:47:36 -0500 (0:00:00.022) 0:00:32.613 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_raw": null }, "changed": false }

TASK [fedora.linux_system_roles.podman : Remove quadlet file] ******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:44
Saturday 14 February 2026 11:47:36 -0500 (0:00:00.021) 0:00:32.634 *****
changed: [managed-node2] => { "changed": true, "path": "/etc/containers/systemd/bogus.container", "state": "absent" }

TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] *********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:52
Saturday 14 February 2026 11:47:36 -0500 (0:00:00.408) 0:00:33.042 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Refresh systemd] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:60
Saturday 14 February
2026 11:47:36 -0500 (0:00:00.018) 0:00:33.061 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:70 Saturday 14 February 2026 11:47:37 -0500 (0:00:00.757) 0:00:33.818 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 14 February 2026 11:47:37 -0500 (0:00:00.441) 0:00:34.260 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:133 Saturday 14 February 2026 11:47:37 -0500 (0:00:00.034) 0:00:34.295 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 14 February 2026 11:47:37 -0500 (0:00:00.023) 0:00:34.318 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_prune_images | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:148 Saturday 14 February 2026 11:47:37 -0500 (0:00:00.020) 0:00:34.338 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:47:37 -0500 (0:00:00.036) 0:00:34.374 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:47:37 -0500 (0:00:00.020) 0:00:34.395 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:47:37 -0500 (0:00:00.019) 0:00:34.415 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:158 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.019) 0:00:34.434 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result 
was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:167 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.056) 0:00:34.490 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:176 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.023) 0:00:34.514 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:185 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.021) 0:00:34.536 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:194 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.021) 0:00:34.557 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - pods] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:204 Saturday 14 February 2026 11:47:38 -0500 
(0:00:00.023) 0:00:34.581 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.022) 0:00:34.604 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.021) 0:00:34.626 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Cancel linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:198 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.017) 0:00:34.643 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Handle credential files - absent] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:205 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.017) 0:00:34.661 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - absent] ******** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:214 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.015) 0:00:34.677 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [Create user for testing] ************************************************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:130 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.025) 0:00:34.702 ***** changed: [managed-node2] => { "changed": true, "comment": "", "create_home": true, "group": 1111, "home": "/home/user_quadlet_basic", "name": "user_quadlet_basic", "shell": "/bin/bash", "state": "present", "system": false, "uid": 1111 } TASK [Handle linger for user for ostree] *************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:136 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.606) 0:00:35.309 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_ostree", "skip_reason": "Conditional result was False" } TASK [Get local machine ID] **************************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:154 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.018) 0:00:35.327 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "ansible_facts[\"distribution_version\"] is version(\"9\", \"<\")", "skip_reason": "Conditional result was False" } TASK [Skip test if cannot reboot] ********************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:160 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.027) 0:00:35.355 ***** META: end_host 
conditional evaluated to False, continuing execution for managed-node2 skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" } MSG: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [Enable cgroup controllers] *********************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:166 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.019) 0:00:35.375 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "ansible_facts[\"distribution_version\"] is version(\"9\", \"<\")", "skip_reason": "Conditional result was False" } TASK [Configure cgroups in kernel] ********************************************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:198 Saturday 14 February 2026 11:47:38 -0500 (0:00:00.025) 0:00:35.400 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "ansible_facts[\"distribution_version\"] is version(\"9\", \"<\")", "skip_reason": "Conditional result was False" } TASK [Reboot] ****************************************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:204 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.025) 0:00:35.425 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "ansible_facts[\"distribution_version\"] is version(\"9\", \"<\")", "skip_reason": "Conditional result was False" } TASK [Run the role - user] ***************************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:207 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.026) 0:00:35.452 ***** included: fedora.linux_system_roles.podman for managed-node2 TASK 
[fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.063) 0:00:35.516 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] **** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.032) 0:00:35.548 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.028) 0:00:35.576 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.024) 0:00:35.601 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if transactional-update exists in /sbin] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:23 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.020) 0:00:35.621 ***** skipping: [managed-node2] 
=> { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag if transactional-update exists] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:28 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.022) 0:00:35.644 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:32 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.023) 0:00:35.667 ***** skipping: [managed-node2] => (item=RedHat.yml) => { "__vars_file": "RedHat.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS.yml) => { "__vars_file": "CentOS.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.podman : Run systemctl] ************************ task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:52 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.052) 0:00:35.720 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Require installed systemd] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:60 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.027) 0:00:35.748 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate that systemd runtime operations are available] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:65 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.024) 0:00:35.773 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Gather the package facts] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 Saturday 14 February 2026 11:47:39 -0500 (0:00:00.019) 0:00:35.792 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Enable copr if requested] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10 Saturday 14 February 2026 11:47:40 -0500 (0:00:01.039) 0:00:36.831 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_use_copr | d(false)", "skip_reason": 
"Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14 Saturday 14 February 2026 11:47:40 -0500 (0:00:00.020) 0:00:36.852 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "(__podman_packages | difference(ansible_facts.packages)) | list | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Notify user that reboot is needed to apply changes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28 Saturday 14 February 2026 11:47:40 -0500 (0:00:00.108) 0:00:36.961 ***** skipping: [managed-node2] => { "false_condition": "__podman_is_transactional | d(false)" } TASK [fedora.linux_system_roles.podman : Reboot transactional update systems] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:33 Saturday 14 February 2026 11:47:40 -0500 (0:00:00.021) 0:00:36.982 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if reboot is needed and not set] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:38 Saturday 14 February 2026 11:47:40 -0500 (0:00:00.019) 0:00:37.002 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get podman version] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:46 Saturday 14 February 2026 11:47:40 -0500 (0:00:00.018) 0:00:37.020 ***** ok: [managed-node2] => { 
"changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.024615", "end": "2026-02-14 11:47:40.938645", "rc": 0, "start": "2026-02-14 11:47:40.914030" } STDOUT: podman version 5.6.0 TASK [fedora.linux_system_roles.podman : Set podman version] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:52 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.404) 0:00:37.425 ***** ok: [managed-node2] => { "ansible_facts": { "podman_version": "5.6.0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.023) 0:00:37.448 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.2\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:63 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.019) 0:00:37.468 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.4\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:73 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.051) 0:00:37.519 ***** META: end_host conditional evaluated to False, continuing execution for managed-node2 skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" } MSG: end_host conditional evaluated to false, 
continuing execution for managed-node2 TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.042) 0:00:37.562 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__has_type_pod or __has_pod_file_ext or __has_pod_file_src_ext or __has_pod_template_src_ext or __has_pod_template_src_ext_j2", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:96 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.047) 0:00:37.610 ***** META: end_host conditional evaluated to False, continuing execution for managed-node2 skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" } MSG: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:109 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.039) 0:00:37.649 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.039) 0:00:37.688 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in 
ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.023) 0:00:37.711 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.022) 0:00:37.733 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.028) 0:00:37.762 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, 
"wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.392) 0:00:38.155 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.022) 0:00:38.177 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.020) 0:00:38.197 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.020) 0:00:38.218 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.021) 0:00:38.239 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.020) 0:00:38.260 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.022) 0:00:38.282 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.023) 0:00:38.306 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set config file paths] **************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:115 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.021) 0:00:38.327 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_container_conf_file": 
"/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_parent_mode": "0755", "__podman_parent_path": "/etc/containers", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle container.conf.d] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:126 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.049) 0:00:38.377 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5 Saturday 14 February 2026 11:47:41 -0500 (0:00:00.034) 0:00:38.411 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update container config file] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.020) 0:00:38.432 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.019) 0:00:38.451 ***** included: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.034) 0:00:38.485 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update registries config file] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.020) 0:00:38.506 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle storage.conf] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:132 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.019) 0:00:38.525 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:7 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.036) 0:00:38.561 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update storage config file] *********** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:15 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.020) 0:00:38.582 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle policy.json] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:135 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.019) 0:00:38.601 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:8 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.074) 0:00:38.676 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:16 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.021) 0:00:38.697 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get the existing policy.json] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:21 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.019) 0:00:38.717 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": 
"Conditional result was False" } TASK [fedora.linux_system_roles.podman : Write new policy.json file] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:27 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.019) 0:00:38.736 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [Manage firewall for specified ports] ************************************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:141 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.022) 0:00:38.759 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_firewall | length > 0", "skip_reason": "Conditional result was False" } TASK [Manage selinux for specified ports] ************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:148 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.020) 0:00:38.779 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_selinux_ports | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:155 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.020) 0:00:38.799 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] ******* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:159 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.019) 0:00:38.818 ***** skipping: [managed-node2] => { "censored": "the output has been 
hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle credential files - present] **** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:168 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.024) 0:00:38.843 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle secrets] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:177 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.022) 0:00:38.865 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 => (item=(censored due to no_log)) included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 => (item=(censored due to no_log)) TASK [fedora.linux_system_roles.podman : Set variables part 1] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.077) 0:00:38.943 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_user": "user_quadlet_basic" }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.024) 0:00:38.968 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 => (item=(censored due to no_log)) TASK [fedora.linux_system_roles.podman : Get user information] 
***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.035) 0:00:39.003 ***** ok: [managed-node2] => { "ansible_facts": { "getent_passwd": { "user_quadlet_basic": [ "x", "1111", "1111", "", "/home/user_quadlet_basic", "/bin/bash" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.383) 0:00:39.387 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:47:42 -0500 (0:00:00.023) 0:00:39.410 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.030) 0:00:39.441 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.020) 0:00:39.461 ***** skipping: [managed-node2] => { "changed": false, "false_condition": 
"__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.021) 0:00:39.483 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.020) 0:00:39.504 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.020) 0:00:39.524 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.020) 0:00:39.545 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 
Saturday 14 February 2026 11:47:43 -0500 (0:00:00.021) 0:00:39.566 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.020) 0:00:39.587 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.020) 0:00:39.607 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set variables part 2] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:15 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.019) 0:00:39.627 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_rootless": true, "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:21 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.028) 0:00:39.655 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.033) 0:00:39.689 ***** changed: [managed-node2] => { "changed": true, "cmd": [ "loginctl", "enable-linger", "user_quadlet_basic" ], "delta": "0:00:00.015866", "end": "2026-02-14 11:47:43.601867", "rc": 0, "start": "2026-02-14 11:47:43.586001" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.424) 0:00:40.113 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.029) 0:00:40.143 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') == 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:26 Saturday 14 February 2026 11:47:43 -0500 (0:00:00.023) 0:00:40.166 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087663.6274612, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1771087663.7184618, "dev": 45, "device_type": 0, "executable": true, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 1, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mimetype": "inode/directory", 
"mode": "0700", "mtime": 1771087663.7184618, "nlink": 3, "path": "/run/user/1111", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 80, "uid": 1111, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [fedora.linux_system_roles.podman : Manage each secret] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42 Saturday 14 February 2026 11:47:44 -0500 (0:00:00.387) 0:00:40.554 ***** changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true } TASK [fedora.linux_system_roles.podman : Set variables part 1] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3 Saturday 14 February 2026 11:47:44 -0500 (0:00:00.811) 0:00:41.366 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_user": "user_quadlet_basic" }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7 Saturday 14 February 2026 11:47:44 -0500 (0:00:00.025) 0:00:41.391 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 => (item=(censored due to no_log)) TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.033) 0:00:41.424 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in 
ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.024) 0:00:41.449 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.024) 0:00:41.473 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.030) 0:00:41.504 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.019) 0:00:41.523 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:47:45 
-0500 (0:00:00.020) 0:00:41.544 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.054) 0:00:41.598 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.020) 0:00:41.619 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.019) 0:00:41.639 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.020) 0:00:41.659 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.022) 0:00:41.681 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.020) 0:00:41.702 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set variables part 2] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:15 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.020) 0:00:41.722 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_rootless": true, "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:21 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.027) 0:00:41.749 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.035) 0:00:41.785 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "loginctl", "enable-linger", "user_quadlet_basic" ], "delta": null, "end": null, "rc": 0, "start": 
null } STDOUT: skipped, since /var/lib/systemd/linger/user_quadlet_basic exists MSG: Did not run command since '/var/lib/systemd/linger/user_quadlet_basic' exists TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.385) 0:00:42.171 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.029) 0:00:42.201 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') == 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:26 Saturday 14 February 2026 11:47:45 -0500 (0:00:00.023) 0:00:42.224 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087663.6274612, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1771087664.65112, "dev": 45, "device_type": 0, "executable": true, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 1, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mimetype": "inode/directory", "mode": "0700", "mtime": 1771087664.65112, "nlink": 5, "path": "/run/user/1111", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 120, "uid": 1111, "version": null, "wgrp": false, "woth": 
false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [fedora.linux_system_roles.podman : Manage each secret] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42 Saturday 14 February 2026 11:47:46 -0500 (0:00:00.389) 0:00:42.613 ***** changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true } TASK [fedora.linux_system_roles.podman : Handle Kubernetes specifications] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:184 Saturday 14 February 2026 11:47:46 -0500 (0:00:00.604) 0:00:43.218 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle Quadlet specifications] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:191 Saturday 14 February 2026 11:47:46 -0500 (0:00:00.019) 0:00:43.238 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log)) included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log)) included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log)) included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log)) included: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log)) TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:47:46 -0500 (0:00:00.104) 0:00:43.342 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "[Network]\nSubnet=192.168.29.0/24\nGateway=192.168.29.1\nLabel=app=wordpress\nNetworkName=quadlet-basic-name\n", "__podman_quadlet_template_src": "templates/quadlet-basic.network.j2" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:47:46 -0500 (0:00:00.068) 0:00:43.411 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "created", "__podman_systemd_unit_scope": "", "__podman_user": "user_quadlet_basic" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:47:47 -0500 (0:00:00.028) 0:00:43.440 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_quadlet_str", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:47:47 -0500 (0:00:00.020) 0:00:43.460 ***** ok: 
[managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic", "__podman_quadlet_type": "network", "__podman_rootless": true }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:47:47 -0500 (0:00:00.036) 0:00:43.497 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:47:47 -0500 (0:00:00.035) 0:00:43.533 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:47:47 -0500 (0:00:00.023) 0:00:43.557 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:47:47 -0500 (0:00:00.023) 0:00:43.580 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:47:47 -0500 (0:00:00.030) 0:00:43.610 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:47:47 -0500 (0:00:00.389) 0:00:44.000 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "user_quadlet_basic" ], "delta": "0:00:00.004538", "end": "2026-02-14 11:47:47.902080", "rc": 0, "start": "2026-02-14 11:47:47.897542" } STDOUT: 0: user_quadlet_basic 589824 65536 TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:47:47 -0500 (0:00:00.387) 0:00:44.387 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "-g", "user_quadlet_basic" ], "delta": "0:00:00.005595", "end": 
"2026-02-14 11:47:48.293689", "rc": 0, "start": "2026-02-14 11:47:48.288094" } STDOUT: 0: user_quadlet_basic 589824 65536 TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.389) 0:00:44.777 ***** ok: [managed-node2] => { "ansible_facts": { "podman_subgid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } }, "podman_subuid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.037) 0:00:44.814 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.020) 0:00:44.835 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.022) 0:00:44.857 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in 
subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.021) 0:00:44.878 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.020) 0:00:44.899 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.019) 0:00:44.918 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-network.service", "__podman_systemd_scope": "user", "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.038) 0:00:44.956 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/home/user_quadlet_basic/.config/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.036) 0:00:44.993 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_kube_yamls_raw | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.030) 0:00:45.023 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.068) 0:00:45.092 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.032) 0:00:45.124 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state == \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.062) 0:00:45.187 ***** included: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:2 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.045) 0:00:45.232 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:47:48 -0500 (0:00:00.030) 0:00:45.263 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "loginctl", "enable-linger", "user_quadlet_basic" ], "delta": null, "end": null, "rc": 0, "start": null } STDOUT: skipped, since /var/lib/systemd/linger/user_quadlet_basic exists MSG: Did not run command since '/var/lib/systemd/linger/user_quadlet_basic' exists TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:47:49 -0500 (0:00:00.394) 0:00:45.657 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:47:49 -0500 (0:00:00.043) 0:00:45.701 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') == 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman 
: Create host directories] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:7 Saturday 14 February 2026 11:47:49 -0500 (0:00:00.035) 0:00:45.736 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:18 Saturday 14 February 2026 11:47:49 -0500 (0:00:00.027) 0:00:45.764 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2 Saturday 14 February 2026 11:47:49 -0500 (0:00:00.055) 0:00:45.820 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle images when not booted] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:25 Saturday 14 February 2026 11:47:49 -0500 (0:00:00.030) 0:00:45.851 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure the quadlet directory is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:21 Saturday 14 February 2026 11:47:49 -0500 (0:00:00.021) 0:00:45.873 ***** changed: [managed-node2] => { "changed": true, "gid": 1111, "group": "user_quadlet_basic", "mode": "0755", 
"owner": "user_quadlet_basic", "path": "/home/user_quadlet_basic/.config/containers/systemd", "secontext": "unconfined_u:object_r:config_home_t:s0", "size": 6, "state": "directory", "uid": 1111 } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is copied] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:32 Saturday 14 February 2026 11:47:49 -0500 (0:00:00.408) 0:00:46.281 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_file_src | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file content is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:44 Saturday 14 February 2026 11:47:49 -0500 (0:00:00.024) 0:00:46.306 ***** changed: [managed-node2] => { "changed": true, "checksum": "19c9b17be2af9b9deca5c3bd327f048966750682", "dest": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network", "gid": 1111, "group": "user_quadlet_basic", "md5sum": "313e9a2e5a99f80fa7023c19a1065658", "mode": "0644", "owner": "user_quadlet_basic", "secontext": "unconfined_u:object_r:config_home_t:s0", "size": 105, "src": "/root/.ansible/tmp/ansible-tmp-1771087669.9321482-24074-44070762850677/.source.network", "state": "file", "uid": 1111 } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is present] ******* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:57 Saturday 14 February 2026 11:47:50 -0500 (0:00:00.710) 0:00:47.016 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_copy_content is skipped", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Reload systemctl] ********************* task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:69 Saturday 14 February 2026 11:47:50 -0500 (0:00:00.030) 0:00:47.047 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Start service] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:98 Saturday 14 February 2026 11:47:51 -0500 (0:00:00.653) 0:00:47.701 ***** changed: [managed-node2] => { "changed": true, "name": "quadlet-basic-network.service", "state": "started", "status": { "AccessSELinuxContext": "unconfined_u:object_r:user_tmp_t:s0", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "run-user-1111.mount app.slice podman-user-wait-network-online.service basic.target -.mount", "AllowIsolate": "no", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease 
cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-network.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore --subnet 192.168.29.0/24 --gateway 192.168.29.1 --label app=wordpress quadlet-basic-name ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore --subnet 192.168.29.0/24 --gateway 192.168.29.1 --label app=wordpress quadlet-basic-name ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": 
"/run/user/1111/systemd/generator/quadlet-basic-network.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-network.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "inherit", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", 
"ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3624103936", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "[not set]", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "[not set]", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-network.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "200", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "app.slice basic.target", "RequiresMountsFor": "/run/user/1111/containers", "Restart": "no", 
"RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "app.slice", "SourcePath": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestampMonotonic": "0", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-network", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": 
"terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "podman-user-wait-network-online.service", "WantsMountsFor": "/home/user_quadlet_basic", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "infinity", "WorkingDirectory": "!/home/user_quadlet_basic" } } TASK [fedora.linux_system_roles.podman : Restart service] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:115 Saturday 14 February 2026 11:47:51 -0500 (0:00:00.685) 0:00:48.386 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_service_started is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:47:51 -0500 (0:00:00.023) 0:00:48.410 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Network": {} }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:47:52 -0500 (0:00:00.047) 0:00:48.457 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "created", "__podman_systemd_unit_scope": "", "__podman_user": "user_quadlet_basic" }, "changed": false } TASK 
[fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:47:52 -0500 (0:00:00.031) 0:00:48.489 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:47:52 -0500 (0:00:00.025) 0:00:48.515 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-unused-network", "__podman_quadlet_type": "network", "__podman_rootless": true }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:47:52 -0500 (0:00:00.040) 0:00:48.555 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:47:52 -0500 (0:00:00.044) 0:00:48.600 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:47:52 -0500 (0:00:00.027) 0:00:48.628 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:47:52 -0500 (0:00:00.022) 0:00:48.650 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:47:52 -0500 (0:00:00.029) 0:00:48.680 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:47:52 -0500 (0:00:00.385) 0:00:49.066 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "user_quadlet_basic" ], "delta": "0:00:00.004216", "end": "2026-02-14 11:47:52.979832", "rc": 0, "start": "2026-02-14 11:47:52.975616" }

STDOUT:

0: user_quadlet_basic 589824 65536

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:47:53 -0500 (0:00:00.410) 0:00:49.477 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "-g", "user_quadlet_basic" ], "delta": "0:00:00.005513", "end": "2026-02-14 11:47:53.412319", "rc": 0, "start": "2026-02-14 11:47:53.406806" }

STDOUT:

0: user_quadlet_basic 589824 65536

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:47:53 -0500 (0:00:00.427) 0:00:49.904 *****
ok: [managed-node2] => { "ansible_facts": { "podman_subgid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } }, "podman_subuid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } } }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:47:53 -0500 (0:00:00.060) 0:00:49.965 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:47:53 -0500 (0:00:00.036) 0:00:50.002 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:47:53 -0500 (0:00:00.035) 0:00:50.037 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:47:53 -0500 (0:00:00.032) 0:00:50.070 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:47:53 -0500 (0:00:00.035) 0:00:50.105 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63
Saturday 14 February 2026 11:47:53 -0500 (0:00:00.036) 0:00:50.142 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-unused-network-network.service", "__podman_systemd_scope": "user", "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73
Saturday 14 February 2026 11:47:53 -0500 (0:00:00.045)? see below
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/home/user_quadlet_basic/.config/containers/systemd" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get kube yaml contents] ***************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79
Saturday 14 February 2026 11:47:53 -0500 (0:00:00.045) 0:00:50.250 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_kube_yamls_raw | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90
Saturday 14 February 2026 11:47:53 -0500 (0:00:00.036) 0:00:50.286 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network", "__podman_volumes": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108
Saturday 14 February 2026 11:47:53 -0500 (0:00:00.103) 0:00:50.389 *****
ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Cleanup quadlets] *********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115
Saturday 14 February 2026 11:47:54 -0500 (0:00:00.099) 0:00:50.488 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state == \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Create and update quadlets] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119
Saturday 14 February 2026 11:47:54 -0500 (0:00:00.032) 0:00:50.521 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:2
Saturday 14 February 2026 11:47:54 -0500 (0:00:00.065) 0:00:50.586 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13
Saturday 14 February 2026 11:47:54 -0500 (0:00:00.036) 0:00:50.623 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "loginctl", "enable-linger", "user_quadlet_basic" ], "delta": null, "end": null, "rc": 0, "start": null }

STDOUT:

skipped, since /var/lib/systemd/linger/user_quadlet_basic exists

MSG:

Did not run command since '/var/lib/systemd/linger/user_quadlet_basic' exists

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 14 February 2026 11:47:54 -0500 (0:00:00.391) 0:00:51.014 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 14 February 2026 11:47:54 -0500 (0:00:00.033) 0:00:51.047 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') == 'absent'", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Create host directories] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:7
Saturday 14 February 2026 11:47:54 -0500 (0:00:00.023) 0:00:51.071 *****
skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" }

TASK [fedora.linux_system_roles.podman : Ensure container images are present] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:18
Saturday 14 February 2026 11:47:54 -0500 (0:00:00.023) 0:00:51.095 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure container images are present] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2
Saturday 14 February 2026 11:47:54 -0500 (0:00:00.037) 0:00:51.132 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle images when not booted] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:25
Saturday 14 February 2026 11:47:54 -0500 (0:00:00.018) 0:00:51.151 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Ensure the quadlet directory is present] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:21
Saturday 14 February 2026 11:47:54 -0500 (0:00:00.018) 0:00:51.169 *****
ok: [managed-node2] => { "changed": false, "gid": 1111, "group": "user_quadlet_basic", "mode": "0755", "owner": "user_quadlet_basic", "path": "/home/user_quadlet_basic/.config/containers/systemd", "secontext": "unconfined_u:object_r:config_home_t:s0", "size": 35, "state": "directory", "uid": 1111 }

TASK [fedora.linux_system_roles.podman : Ensure quadlet file is copied] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:32
Saturday 14 February 2026 11:47:55 -0500 (0:00:00.387) 0:00:51.557 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_file_src | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Ensure quadlet file content is present] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:44
Saturday 14 February 2026 11:47:55 -0500 (0:00:00.021) 0:00:51.578 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_str | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Ensure quadlet file is present] *******
task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:57
Saturday 14 February 2026 11:47:55 -0500 (0:00:00.021) 0:00:51.600 *****
changed: [managed-node2] => { "changed": true, "checksum": "52c9d75ecaf81203cc1f1a3b1dd00fcd25067b01", "dest": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network", "gid": 1111, "group": "user_quadlet_basic", "md5sum": "968d495367b59475979615e4884cbda2", "mode": "0644", "owner": "user_quadlet_basic", "secontext": "unconfined_u:object_r:config_home_t:s0", "size": 54, "src": "/root/.ansible/tmp/ansible-tmp-1771087675.2243838-24329-194671574787178/.source.network", "state": "file", "uid": 1111 }

TASK [fedora.linux_system_roles.podman : Reload systemctl] *********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:69
Saturday 14 February 2026 11:47:55 -0500 (0:00:00.769) 0:00:52.369 *****
ok: [managed-node2] => { "changed": false, "name": null, "status": {} }

TASK [fedora.linux_system_roles.podman : Start service] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:98
Saturday 14 February 2026 11:47:56 -0500 (0:00:00.647) 0:00:53.017 *****
changed: [managed-node2] => { "changed": true, "name": "quadlet-basic-unused-network-network.service", "state": "started", "status": { "AccessSELinuxContext": "unconfined_u:object_r:user_tmp_t:s0", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "basic.target -.mount run-user-1111.mount podman-user-wait-network-online.service app.slice", "AllowIsolate": "no", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-unused-network-network.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore systemd-quadlet-basic-unused-network ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore systemd-quadlet-basic-unused-network ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/user/1111/systemd/generator/quadlet-basic-unused-network-network.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-unused-network-network.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "inherit", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3624042496", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "[not set]", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "[not set]", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-unused-network-network.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "200", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "app.slice basic.target", "RequiresMountsFor": "/run/user/1111/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "app.slice", "SourcePath": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestampMonotonic": "0", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-unused-network-network", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "podman-user-wait-network-online.service", "WantsMountsFor": "/home/user_quadlet_basic", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "infinity", "WorkingDirectory": "!/home/user_quadlet_basic" } }

TASK [fedora.linux_system_roles.podman : Restart service] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:115
Saturday 14 February 2026 11:47:57 -0500 (0:00:00.680) 0:00:53.698 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_service_started is changed", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14
Saturday 14 February 2026 11:47:57
-0500 (0:00:00.029) 0:00:53.727 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Volume": { "VolumeName": "quadlet-basic-mysql-name" } }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25
Saturday 14 February 2026 11:47:57 -0500 (0:00:00.033) 0:00:53.761 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "created", "__podman_systemd_unit_scope": "", "__podman_user": "user_quadlet_basic" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35
Saturday 14 February 2026 11:47:57 -0500 (0:00:00.029) 0:00:53.791 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48
Saturday 14 February 2026 11:47:57 -0500 (0:00:00.020) 0:00:53.811 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-mysql", "__podman_quadlet_type": "volume", "__podman_rootless": true }, "changed": false }

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 14 February 2026 11:47:57 -0500 (0:00:00.036) 0:00:53.848 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:47:57 -0500 (0:00:00.038) 0:00:53.886 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:47:57 -0500 (0:00:00.024) 0:00:53.911 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:47:57 -0500 (0:00:00.023) 0:00:53.935 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:47:57 -0500 (0:00:00.037) 0:00:53.972 *****
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:47:57 -0500 (0:00:00.403) 0:00:54.376 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "user_quadlet_basic" ], "delta": "0:00:00.003983", "end": "2026-02-14 11:47:58.289361", "rc": 0, "start": "2026-02-14 11:47:58.285378" }

STDOUT:

0: user_quadlet_basic 589824 65536

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:47:58 -0500 (0:00:00.397) 0:00:54.773 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "-g", "user_quadlet_basic" ], "delta": "0:00:00.005566", "end": "2026-02-14 11:47:58.675081", "rc": 0, "start": "2026-02-14 11:47:58.669515" }

STDOUT:

0: user_quadlet_basic 589824 65536

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:47:58 -0500 (0:00:00.394) 0:00:55.168 *****
ok: [managed-node2] => { "ansible_facts": { "podman_subgid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } }, "podman_subuid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } } }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:47:58 -0500 (0:00:00.048) 0:00:55.216 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:47:58 -0500 (0:00:00.021) 0:00:55.238 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:47:58 -0500 (0:00:00.025) 0:00:55.263 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:47:58 -0500 (0:00:00.027) 0:00:55.290 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:47:58 -0500 (0:00:00.037) 0:00:55.327 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63
Saturday 14 February 2026 11:47:58 -0500 (0:00:00.026) 0:00:55.354 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-mysql-volume.service", "__podman_systemd_scope": "user", "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73
Saturday 14 February 2026 11:47:58 -0500 (0:00:00.041) 0:00:55.396 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/home/user_quadlet_basic/.config/containers/systemd" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get kube yaml contents] ***************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.029) 0:00:55.425 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_kube_yamls_raw | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] ***
task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.023) 0:00:55.449 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume", "__podman_volumes": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.146) 0:00:55.596 *****
ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Cleanup quadlets] *********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.028) 0:00:55.624 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state == \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Create and update quadlets] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.018) 0:00:55.643 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:2
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.045) 0:00:55.688 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.029) 0:00:55.718 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "loginctl", "enable-linger", "user_quadlet_basic" ], "delta": null, "end": null, "rc": 0, "start": null }

STDOUT:

skipped, since /var/lib/systemd/linger/user_quadlet_basic exists

MSG:

Did not run command since '/var/lib/systemd/linger/user_quadlet_basic' exists

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.412) 0:00:56.130 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.046) 0:00:56.177 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') == 'absent'", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Create host directories] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:7
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.038) 0:00:56.216 *****
skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" }

TASK [fedora.linux_system_roles.podman : Ensure container images are present] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:18
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.027) 0:00:56.243 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure container images are present] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.057) 0:00:56.300 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle images when not booted] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:25
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.031) 0:00:56.332 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Ensure the quadlet directory is present] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:21
Saturday 14 February 2026 11:47:59 -0500 (0:00:00.053) 0:00:56.385 *****
ok: [managed-node2] => { "changed": false, "gid": 1111, "group": "user_quadlet_basic", "mode": "0755", "owner": "user_quadlet_basic", "path": "/home/user_quadlet_basic/.config/containers/systemd", "secontext": "unconfined_u:object_r:config_home_t:s0", "size": 79, "state": "directory", "uid": 1111 }

TASK [fedora.linux_system_roles.podman : Ensure quadlet file is copied] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:32
Saturday 14 February 2026 11:48:00 -0500 (0:00:00.428) 0:00:56.814 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_file_src | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Ensure quadlet file content is present] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:44
Saturday 14 February 2026 11:48:00 -0500 (0:00:00.022) 0:00:56.836 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_str | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Ensure quadlet file is present] *******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:57
Saturday 14 February 2026 11:48:00 -0500 (0:00:00.036) 0:00:56.873 *****
changed: [managed-node2] => { "changed": true, "checksum": "90a3571bfc7670328fe3f8fb625585613dbd9c4a", "dest": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume", "gid": 1111, "group": "user_quadlet_basic", "md5sum": "8682d71bf3c086f228cd72389b7c9018", "mode": "0644", "owner": "user_quadlet_basic", "secontext": "unconfined_u:object_r:config_home_t:s0", "size": 89, "src": "/root/.ansible/tmp/ansible-tmp-1771087680.5019636-24577-124432255509769/.source.volume", "state": "file", "uid": 1111 }

TASK [fedora.linux_system_roles.podman : Reload systemctl] *********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:69
Saturday 14 February 2026 11:48:01 -0500 (0:00:00.741) 0:00:57.614 *****
ok: [managed-node2] => { "changed": false, "name": null, "status": {} }

TASK [fedora.linux_system_roles.podman : Start service] ************************
task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:98 Saturday 14 February 2026 11:48:01 -0500 (0:00:00.651) 0:00:58.265 ***** changed: [managed-node2] => { "changed": true, "name": "quadlet-basic-mysql-volume.service", "state": "started", "status": { "AccessSELinuxContext": "unconfined_u:object_r:user_tmp_t:s0", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "podman-user-wait-network-online.service app.slice -.mount run-user-1111.mount basic.target", "AllowIsolate": "no", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": 
"shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-mysql-volume.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore quadlet-basic-mysql-name ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore quadlet-basic-mysql-name ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/user/1111/systemd/generator/quadlet-basic-mysql-volume.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": 
"quadlet-basic-mysql-volume.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "inherit", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3624005632", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "[not set]", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", 
"MemorySwapPeak": "[not set]", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-mysql-volume.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "200", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "app.slice basic.target", "RequiresMountsFor": "/run/user/1111/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": 
"root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "app.slice", "SourcePath": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestampMonotonic": "0", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-mysql-volume", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": 
"podman-user-wait-network-online.service", "WantsMountsFor": "/home/user_quadlet_basic", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "infinity", "WorkingDirectory": "!/home/user_quadlet_basic" } }

TASK [fedora.linux_system_roles.podman : Restart service] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:115
Saturday 14 February 2026 11:48:02 -0500 (0:00:00.689) 0:00:58.955 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_service_started is changed", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14
Saturday 14 February 2026 11:48:02 -0500 (0:00:00.028) 0:00:58.983 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Volume": {} }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25
Saturday 14 February 2026 11:48:02 -0500 (0:00:00.033) 0:00:59.016 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "created", "__podman_systemd_unit_scope": "", "__podman_user": "user_quadlet_basic" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35
Saturday 14 February 2026 11:48:02 -0500 (0:00:00.030) 0:00:59.047 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48
Saturday 14 February 2026 11:48:02 -0500 (0:00:00.022) 0:00:59.069 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-unused-volume", "__podman_quadlet_type": "volume", "__podman_rootless": true }, "changed": false }

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 14 February 2026 11:48:02 -0500 (0:00:00.037) 0:00:59.107 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:48:02 -0500 (0:00:00.039) 0:00:59.146 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:48:02 -0500 (0:00:00.027) 0:00:59.174 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:48:02 -0500 (0:00:00.030) 0:00:59.205 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:48:02 -0500 (0:00:00.036) 0:00:59.241 *****
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:48:03 -0500 (0:00:00.388) 0:00:59.630 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "user_quadlet_basic" ], "delta": "0:00:00.004709", "end": "2026-02-14 11:48:03.531251", "rc": 0, "start": "2026-02-14 11:48:03.526542" }

STDOUT:
0: user_quadlet_basic 589824 65536

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:48:03 -0500 (0:00:00.383) 0:01:00.014 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "-g", "user_quadlet_basic" ], "delta": "0:00:00.005557", "end": "2026-02-14 11:48:03.917464", "rc": 0, "start": "2026-02-14 11:48:03.911907" }

STDOUT:
0: user_quadlet_basic 589824 65536

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:48:03 -0500 (0:00:00.386) 0:01:00.401 *****
ok: [managed-node2] => { "ansible_facts": { "podman_subgid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } }, "podman_subuid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } } }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.038) 0:01:00.439 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.023) 0:01:00.462 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.029) 0:01:00.492 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.033) 0:01:00.525 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.023) 0:01:00.548 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.022) 0:01:00.571 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-unused-volume-volume.service", "__podman_systemd_scope": "user", "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.038) 0:01:00.610 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/home/user_quadlet_basic/.config/containers/systemd" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get kube yaml contents] ***************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.032) 0:01:00.642 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_kube_yamls_raw | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.071) 0:01:00.714 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume", "__podman_volumes": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.065) 0:01:00.779 *****
ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Cleanup quadlets] *********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.027) 0:01:00.807 *****
skipping:
[managed-node2] => { "changed": false, "false_condition": "__podman_state == \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Create and update quadlets] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.018) 0:01:00.826 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:2
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.045) 0:01:00.872 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.031) 0:01:00.903 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "loginctl", "enable-linger", "user_quadlet_basic" ], "delta": null, "end": null, "rc": 0, "start": null }

STDOUT:
skipped, since /var/lib/systemd/linger/user_quadlet_basic exists

MSG:
Did not run command since '/var/lib/systemd/linger/user_quadlet_basic' exists

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.374) 0:01:01.278 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.031) 0:01:01.309 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') == 'absent'", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Create host directories] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:7
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.029) 0:01:01.339 *****
skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" }

TASK [fedora.linux_system_roles.podman : Ensure container images are present] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:18
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.018) 0:01:01.358 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure container images are present] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.035) 0:01:01.393 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle images when not booted] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:25
Saturday 14 February 2026 11:48:04 -0500 (0:00:00.019) 0:01:01.413 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Ensure the quadlet directory is present] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:21
Saturday 14 February 2026 11:48:05 -0500 (0:00:00.018) 0:01:01.431 *****
ok: [managed-node2] => { "changed": false, "gid": 1111, "group": "user_quadlet_basic", "mode": "0755", "owner": "user_quadlet_basic", "path": "/home/user_quadlet_basic/.config/containers/systemd", "secontext": "unconfined_u:object_r:config_home_t:s0", "size": 113, "state": "directory", "uid": 1111 }

TASK [fedora.linux_system_roles.podman : Ensure quadlet file is copied] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:32
Saturday 14 February 2026 11:48:05 -0500 (0:00:00.388) 0:01:01.820 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_file_src | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Ensure quadlet file content is present] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:44
Saturday 14 February 2026 11:48:05 -0500 (0:00:00.022) 0:01:01.843 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_str | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Ensure quadlet file is present] *******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:57
Saturday 14 February 2026 11:48:05 -0500 (0:00:00.024) 0:01:01.867 *****
changed: [managed-node2] => { "changed": true, "checksum": "fd0ae560360afa5541b866560b1e849d25e216ef", "dest": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume",
"gid": 1111, "group": "user_quadlet_basic", "md5sum": "4967598a0284ad3e296ab106829a30a2", "mode": "0644", "owner": "user_quadlet_basic", "secontext": "unconfined_u:object_r:config_home_t:s0", "size": 53, "src": "/root/.ansible/tmp/ansible-tmp-1771087685.4910405-24770-45960372293351/.source.volume", "state": "file", "uid": 1111 } TASK [fedora.linux_system_roles.podman : Reload systemctl] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:69 Saturday 14 February 2026 11:48:06 -0500 (0:00:00.735) 0:01:02.602 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Start service] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:98 Saturday 14 February 2026 11:48:06 -0500 (0:00:00.644) 0:01:03.247 ***** changed: [managed-node2] => { "changed": true, "name": "quadlet-basic-unused-volume-volume.service", "state": "started", "status": { "AccessSELinuxContext": "unconfined_u:object_r:user_tmp_t:s0", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "run-user-1111.mount basic.target podman-user-wait-network-online.service -.mount app.slice", "AllowIsolate": "no", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": 
"yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-unused-volume-volume.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore systemd-quadlet-basic-unused-volume ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore systemd-quadlet-basic-unused-volume ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": 
"root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/user/1111/systemd/generator/quadlet-basic-unused-volume-volume.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-unused-volume-volume.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "inherit", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": 
"13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3623964672", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "[not set]", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "[not set]", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-unused-volume-volume.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "200", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": 
"no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "app.slice basic.target", "RequiresMountsFor": "/run/user/1111/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "app.slice", "SourcePath": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestampMonotonic": "0", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SurviveFinalKillSignal": "no", 
"SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-unused-volume-volume", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "podman-user-wait-network-online.service", "WantsMountsFor": "/home/user_quadlet_basic", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "infinity", "WorkingDirectory": "!/home/user_quadlet_basic" } } TASK [fedora.linux_system_roles.podman : Restart service] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:115 Saturday 14 February 2026 11:48:07 -0500 (0:00:00.668) 0:01:03.916 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_service_started is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:48:07 -0500 (0:00:00.027) 0:01:03.943 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Container": { "ContainerName": "quadlet-basic-mysql-name", "Environment": [ "FOO=/bin/busybox-extras", "BAZ=test" ], "Image": "quay.io/linux-system-roles/mysql:5.6", "Network": "quadlet-basic.network", "PodmanArgs": 
"--secret=mysql_container_root_password,type=env,target=MYSQL_ROOT_PASSWORD --secret=json_secret,type=mount,target=/tmp/test.json", "Volume": "quadlet-basic-mysql.volume:/var/lib/mysql" }, "Install": { "WantedBy": "default.target" } }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:48:07 -0500 (0:00:00.032) 0:01:03.976 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "created", "__podman_systemd_unit_scope": "", "__podman_user": "user_quadlet_basic" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:48:07 -0500 (0:00:00.028) 0:01:04.005 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:48:07 -0500 (0:00:00.020) 0:01:04.025 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-mysql", "__podman_quadlet_type": "container", "__podman_rootless": true }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:48:07 -0500 (0:00:00.036) 0:01:04.061 
***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:48:07 -0500 (0:00:00.036) 0:01:04.098 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:48:07 -0500 (0:00:00.025) 0:01:04.123 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:48:07 -0500 (0:00:00.023) 0:01:04.147 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:48:07 -0500 (0:00:00.031) 0:01:04.178 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 
51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:48:08 -0500 (0:00:00.379) 0:01:04.558 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "user_quadlet_basic" ], "delta": "0:00:00.004071", "end": "2026-02-14 11:48:08.459632", "rc": 0, "start": "2026-02-14 11:48:08.455561" } STDOUT: 0: user_quadlet_basic 589824 65536 TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:48:08 -0500 (0:00:00.385) 0:01:04.944 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "-g", "user_quadlet_basic" ], "delta": "0:00:00.005949", "end": "2026-02-14 11:48:08.841334", "rc": 0, "start": "2026-02-14 11:48:08.835385" } STDOUT: 0: user_quadlet_basic 589824 65536 TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:48:08 -0500 (0:00:00.379) 0:01:05.323 ***** ok: [managed-node2] => { "ansible_facts": { 
"podman_subgid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } }, "podman_subuid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:48:08 -0500 (0:00:00.037) 0:01:05.361 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:48:08 -0500 (0:00:00.021) 0:01:05.382 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:48:08 -0500 (0:00:00.021) 0:01:05.404 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.020) 0:01:05.425 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid 
file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.020) 0:01:05.446 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.020) 0:01:05.467 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [ "quay.io/linux-system-roles/mysql:5.6" ], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-mysql.service", "__podman_systemd_scope": "user", "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.038) 0:01:05.505 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/home/user_quadlet_basic/.config/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.120) 0:01:05.626 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_kube_yamls_raw | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.029) 0:01:05.656 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [ "quay.io/linux-system-roles/mysql:5.6" ], "__podman_quadlet_file": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.063) 0:01:05.720 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.026) 0:01:05.746 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state == \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.020) 0:01:05.766 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:2 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.044) 
0:01:05.811 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.030) 0:01:05.841 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "loginctl", "enable-linger", "user_quadlet_basic" ], "delta": null, "end": null, "rc": 0, "start": null } STDOUT: skipped, since /var/lib/systemd/linger/user_quadlet_basic exists MSG: Did not run command since '/var/lib/systemd/linger/user_quadlet_basic' exists TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.384) 0:01:06.225 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.027) 0:01:06.253 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') == 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create host directories] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:7 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.024) 0:01:06.277 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Ensure container images 
are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:18 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.020) 0:01:06.298 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2 Saturday 14 February 2026 11:48:09 -0500 (0:00:00.036) 0:01:06.335 ***** changed: [managed-node2] => (item=None) => { "attempts": 1, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true } changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true } TASK [fedora.linux_system_roles.podman : Handle images when not booted] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:25 Saturday 14 February 2026 11:48:17 -0500 (0:00:07.317) 0:01:13.652 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure the quadlet directory is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:21 Saturday 14 February 2026 11:48:17 -0500 (0:00:00.042) 0:01:13.694 ***** ok: [managed-node2] => { "changed": false, "gid": 1111, "group": "user_quadlet_basic", "mode": "0755", "owner": "user_quadlet_basic", "path": "/home/user_quadlet_basic/.config/containers/systemd", "secontext": "unconfined_u:object_r:config_home_t:s0", "size": 155, "state": "directory", "uid": 1111 } TASK 
[fedora.linux_system_roles.podman : Ensure quadlet file is copied] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:32 Saturday 14 February 2026 11:48:17 -0500 (0:00:00.402) 0:01:14.096 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_file_src | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file content is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:44 Saturday 14 February 2026 11:48:17 -0500 (0:00:00.024) 0:01:14.121 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_str | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is present] ******* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:57 Saturday 14 February 2026 11:48:17 -0500 (0:00:00.022) 0:01:14.143 ***** changed: [managed-node2] => { "changed": true, "checksum": "0b6cac7929623f1059e78ef39b8b0a25169b28a6", "dest": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container", "gid": 1111, "group": "user_quadlet_basic", "md5sum": "1ede2d50fe62a3ca756acb50f2f6868e", "mode": "0644", "owner": "user_quadlet_basic", "secontext": "unconfined_u:object_r:config_home_t:s0", "size": 448, "src": "/root/.ansible/tmp/ansible-tmp-1771087697.7685003-25203-102137340282322/.source.container", "state": "file", "uid": 1111 } TASK [fedora.linux_system_roles.podman : Reload systemctl] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:69 Saturday 14 February 2026 11:48:18 -0500 (0:00:00.735) 0:01:14.878 ***** ok: 
[managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Start service] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:98 Saturday 14 February 2026 11:48:19 -0500 (0:00:00.656) 0:01:15.535 ***** changed: [managed-node2] => { "changed": true, "name": "quadlet-basic-mysql.service", "state": "started", "status": { "AccessSELinuxContext": "unconfined_u:object_r:user_tmp_t:s0", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "podman-user-wait-network-online.service quadlet-basic-network.service -.mount app.slice basic.target quadlet-basic-mysql-volume.service run-user-1111.mount", "AllowIsolate": "no", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target default.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm 
cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "yes", "DelegateControllers": "cpu cpuset io memory pids", "Description": "quadlet-basic-mysql.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "Environment": "PODMAN_SYSTEMD_UNIT=quadlet-basic-mysql.service", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman run --name quadlet-basic-mysql-name --replace --rm --cgroups=split --network quadlet-basic-name --sdnotify=conmon -d -v quadlet-basic-mysql-name:/var/lib/mysql --env BAZ=test --env FOO=/bin/busybox-extras --secret=mysql_container_root_password,type=env,target=MYSQL_ROOT_PASSWORD --secret=json_secret,type=mount,target=/tmp/test.json quay.io/linux-system-roles/mysql:5.6 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman run --name quadlet-basic-mysql-name --replace --rm --cgroups=split --network quadlet-basic-name --sdnotify=conmon -d -v quadlet-basic-mysql-name:/var/lib/mysql --env BAZ=test --env FOO=/bin/busybox-extras --secret=mysql_container_root_password,type=env,target=MYSQL_ROOT_PASSWORD --secret=json_secret,type=mount,target=/tmp/test.json quay.io/linux-system-roles/mysql:5.6 ; flags= ; start_time=[n/a] 
; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman rm -v -f -i quadlet-basic-mysql-name ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman rm -v -f -i quadlet-basic-mysql-name ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopPost": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman rm -v -f -i quadlet-basic-mysql-name ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopPostEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman rm -v -f -i quadlet-basic-mysql-name ; flags=ignore-failure ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/user/1111/systemd/generator/quadlet-basic-mysql.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-mysql.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", 
"JobTimeoutUSec": "infinity", "KeyringMode": "inherit", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3297951744", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "[not set]", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "[not set]", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": 
"root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-mysql.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMPolicy": "continue", "OOMScoreAdjust": "200", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "app.slice quadlet-basic-network.service basic.target quadlet-basic-mysql-volume.service", "RequiresMountsFor": "/run/user/1111/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", 
"RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "app.slice", "SourcePath": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestampMonotonic": "0", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-mysql", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "WantedBy": "default.target", "Wants": "podman-user-wait-network-online.service", "WantsMountsFor": "/home/user_quadlet_basic", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "infinity", "WorkingDirectory": "!/home/user_quadlet_basic" } } TASK 
[fedora.linux_system_roles.podman : Restart service] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:115 Saturday 14 February 2026 11:48:20 -0500 (0:00:00.896) 0:01:16.432 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_service_started is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Cancel linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:198 Saturday 14 February 2026 11:48:20 -0500 (0:00:00.022) 0:01:16.455 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Handle credential files - absent] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:205 Saturday 14 February 2026 11:48:20 -0500 (0:00:00.016) 0:01:16.471 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - absent] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:214 Saturday 14 February 2026 11:48:20 -0500 (0:00:00.017) 0:01:16.489 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [Check files] ************************************************************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:218 Saturday 14 February 2026 11:48:20 -0500 (0:00:00.020) 0:01:16.510 ***** ok: [managed-node2] => (item=quadlet-basic-mysql.container) => { "ansible_loop_var": 
"item", "changed": false, "cmd": [ "cat", "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container" ], "delta": "0:00:00.003839", "end": "2026-02-14 11:48:20.521339", "item": "quadlet-basic-mysql.container", "rc": 0, "start": "2026-02-14 11:48:20.517500" } STDOUT: # # Ansible managed # # system_role:podman [Install] WantedBy=default.target [Container] Image=quay.io/linux-system-roles/mysql:5.6 ContainerName=quadlet-basic-mysql-name Volume=quadlet-basic-mysql.volume:/var/lib/mysql Network=quadlet-basic.network PodmanArgs=--secret=mysql_container_root_password,type=env,target=MYSQL_ROOT_PASSWORD --secret=json_secret,type=mount,target=/tmp/test.json Environment=FOO=/bin/busybox-extras Environment=BAZ=test ok: [managed-node2] => (item=quadlet-basic.network) => { "ansible_loop_var": "item", "changed": false, "cmd": [ "cat", "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network" ], "delta": "0:00:00.003204", "end": "2026-02-14 11:48:20.967412", "item": "quadlet-basic.network", "rc": 0, "start": "2026-02-14 11:48:20.964208" } STDOUT: [Network] Subnet=192.168.29.0/24 Gateway=192.168.29.1 Label=app=wordpress NetworkName=quadlet-basic-name ok: [managed-node2] => (item=quadlet-basic-mysql.volume) => { "ansible_loop_var": "item", "changed": false, "cmd": [ "cat", "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume" ], "delta": "0:00:00.003592", "end": "2026-02-14 11:48:21.375931", "item": "quadlet-basic-mysql.volume", "rc": 0, "start": "2026-02-14 11:48:21.372339" } STDOUT: # # Ansible managed # # system_role:podman [Volume] VolumeName=quadlet-basic-mysql-name TASK [Ensure linger] *********************************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:228 Saturday 14 February 2026 11:48:21 -0500 (0:00:01.350) 0:01:17.860 ***** ok: [managed-node2] => { "changed": false, "failed_when_result": false, 
"stat": { "atime": 1771087663.593497, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1771087663.593497, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4448282, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0644", "mtime": 1771087663.593497, "nlink": 1, "path": "/var/lib/systemd/linger/user_quadlet_basic", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 0, "uid": 0, "version": "3737457893", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [Run the role - root] ***************************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:235 Saturday 14 February 2026 11:48:21 -0500 (0:00:00.436) 0:01:18.297 ***** included: fedora.linux_system_roles.podman for managed-node2 TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3 Saturday 14 February 2026 11:48:21 -0500 (0:00:00.079) 0:01:18.376 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] **** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3 Saturday 14 February 2026 11:48:21 -0500 (0:00:00.037) 0:01:18.414 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_required_facts | difference(ansible_facts.keys() | list) | length > 0", 
"skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11 Saturday 14 February 2026 11:48:22 -0500 (0:00:00.042) 0:01:18.457 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16 Saturday 14 February 2026 11:48:22 -0500 (0:00:00.031) 0:01:18.488 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if transactional-update exists in /sbin] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:23 Saturday 14 February 2026 11:48:22 -0500 (0:00:00.026) 0:01:18.515 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag if transactional-update exists] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:28 Saturday 14 February 2026 11:48:22 -0500 (0:00:00.021) 0:01:18.536 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:32 Saturday 14 February 2026 11:48:22 
-0500 (0:00:00.080) 0:01:18.616 ***** skipping: [managed-node2] => (item=RedHat.yml) => { "__vars_file": "RedHat.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS.yml) => { "__vars_file": "CentOS.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.podman : Run systemctl] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:52 Saturday 14 February 2026 11:48:22 -0500 (0:00:00.078) 0:01:18.694 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Require installed systemd] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:60 Saturday 14 February 2026 11:48:22 -0500 (0:00:00.033) 0:01:18.728 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate that systemd runtime operations are available] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:65 Saturday 14 February 2026 11:48:22 -0500 (0:00:00.033) 0:01:18.762 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Gather the package facts] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 Saturday 14 February 2026 11:48:22 -0500 (0:00:00.034) 0:01:18.797 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Enable copr if requested] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10 Saturday 14 February 2026 11:48:23 -0500 (0:00:01.035) 0:01:19.832 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_use_copr | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14 Saturday 14 February 2026 11:48:23 -0500 (0:00:00.021) 0:01:19.854 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "(__podman_packages | difference(ansible_facts.packages)) | list | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Notify user that reboot is needed to apply changes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28 Saturday 14 February 2026 11:48:23 -0500 (0:00:00.027) 0:01:19.881 ***** skipping: [managed-node2] => { "false_condition": "__podman_is_transactional | d(false)" } TASK 
[fedora.linux_system_roles.podman : Reboot transactional update systems] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:33 Saturday 14 February 2026 11:48:23 -0500 (0:00:00.020) 0:01:19.901 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if reboot is needed and not set] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:38 Saturday 14 February 2026 11:48:23 -0500 (0:00:00.019) 0:01:19.920 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get podman version] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:46 Saturday 14 February 2026 11:48:23 -0500 (0:00:00.031) 0:01:19.952 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.024241", "end": "2026-02-14 11:48:23.870813", "rc": 0, "start": "2026-02-14 11:48:23.846572" } STDOUT: podman version 5.6.0 TASK [fedora.linux_system_roles.podman : Set podman version] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:52 Saturday 14 February 2026 11:48:23 -0500 (0:00:00.413) 0:01:20.366 ***** ok: [managed-node2] => { "ansible_facts": { "podman_version": "5.6.0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56 Saturday 14 February 2026 11:48:23 -0500 (0:00:00.039) 0:01:20.405 ***** skipping: [managed-node2] => { "changed": false, 
"false_condition": "podman_version is version(\"4.2\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:63 Saturday 14 February 2026 11:48:24 -0500 (0:00:00.034) 0:01:20.439 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.4\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:73 Saturday 14 February 2026 11:48:24 -0500 (0:00:00.058) 0:01:20.498 ***** META: end_host conditional evaluated to False, continuing execution for managed-node2 skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" } MSG: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80 Saturday 14 February 2026 11:48:24 -0500 (0:00:00.043) 0:01:20.541 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__has_type_pod or __has_pod_file_ext or __has_pod_file_src_ext or __has_pod_template_src_ext or __has_pod_template_src_ext_j2", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:96 Saturday 14 February 2026 11:48:24 -0500 (0:00:00.047) 0:01:20.589 ***** META: end_host 
conditional evaluated to False, continuing execution for managed-node2 skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" } MSG: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:109 Saturday 14 February 2026 11:48:24 -0500 (0:00:00.034) 0:01:20.624 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:48:24 -0500 (0:00:00.038) 0:01:20.662 ***** ok: [managed-node2] => { "ansible_facts": { "getent_passwd": { "root": [ "x", "0", "0", "Super User", "/root", "/bin/bash" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:48:24 -0500 (0:00:00.411) 0:01:21.073 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:48:24 -0500 (0:00:00.028) 0:01:21.101 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] 
************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:48:24 -0500 (0:00:00.040) 0:01:21.141 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.407) 0:01:21.549 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.036) 0:01:21.585 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.037) 0:01:21.622 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.034) 0:01:21.656 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.023) 0:01:21.680 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.023) 0:01:21.703 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.027) 
0:01:21.731 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.022) 0:01:21.754 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set config file paths] **************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:115 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.028) 0:01:21.782 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_parent_mode": "0755", "__podman_parent_path": "/etc/containers", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle container.conf.d] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:126 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.054) 0:01:21.837 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.036) 0:01:21.874 
***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update container config file] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.068) 0:01:21.942 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.023) 0:01:21.966 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.054) 0:01:22.020 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update registries config file] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13 Saturday 14 February 2026 11:48:25 -0500 (0:00:00.023) 0:01:22.044 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle storage.conf] ****************** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:132
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.029) 0:01:22.073 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:7
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.063) 0:01:22.136 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_storage_conf | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Update storage config file] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:15
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.026) 0:01:22.163 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_storage_conf | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Handle policy.json] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:135
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.021) 0:01:22.184 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:8
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.049) 0:01:22.234 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_policy_json | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:16
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.026) 0:01:22.260 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_policy_json | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get the existing policy.json] *********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:21
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.026) 0:01:22.287 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_policy_json | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Write new policy.json file] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:27
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.020) 0:01:22.308 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_policy_json | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [Manage firewall for specified ports] *************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:141
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.022) 0:01:22.330 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_firewall | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [Manage selinux for specified ports] **************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:148
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.020) 0:01:22.350 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "podman_selinux_ports | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:155
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.021) 0:01:22.371 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_cancel_user_linger": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] *******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:159
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.020) 0:01:22.392 *****
skipping: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Handle credential files - present] ****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:168
Saturday 14 February 2026 11:48:25 -0500 (0:00:00.018) 0:01:22.410 *****
skipping: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Handle secrets] ***********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:177
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.017) 0:01:22.428 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 => (item=(censored due to no_log))
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 => (item=(censored due to no_log))

TASK [fedora.linux_system_roles.podman : Set variables part 1] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.081) 0:01:22.509 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_user": "root"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.032) 0:01:22.542 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 => (item=(censored due to no_log))

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.039) 0:01:22.582 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.032) 0:01:22.615 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.030) 0:01:22.645 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_group": "0"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.037) 0:01:22.682 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.020) 0:01:22.703 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.022) 0:01:22.725 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.020) 0:01:22.746 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.020) 0:01:22.766 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.019) 0:01:22.786 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.022) 0:01:22.808 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.021) 0:01:22.829 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.020) 0:01:22.849 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set variables part 2] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:15
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.020) 0:01:22.869 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_rootless": false,
        "__podman_xdg_runtime_dir": "/run/user/0"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:21
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.034) 0:01:22.904 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.043) 0:01:22.948 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.024) 0:01:22.972 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.089) 0:01:23.061 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:26
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.026) 0:01:23.088 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Manage each secret] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42
Saturday 14 February 2026 11:48:26 -0500 (0:00:00.019) 0:01:23.108 *****
changed: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": true
}

TASK [fedora.linux_system_roles.podman : Set variables part 1] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.532) 0:01:23.640 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_user": "root"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.027) 0:01:23.668 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 => (item=(censored due to no_log))

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.037) 0:01:23.706 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.027) 0:01:23.733 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.025) 0:01:23.758 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_group": "0"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.038) 0:01:23.797 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.024) 0:01:23.822 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.037) 0:01:23.860 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.033) 0:01:23.893 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.023) 0:01:23.917 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.022) 0:01:23.939 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.028) 0:01:23.968 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.023) 0:01:23.991 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.025) 0:01:24.017 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_check_subids | d(true)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set variables part 2] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:15
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.025) 0:01:24.042 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_rootless": false,
        "__podman_xdg_runtime_dir": "/run/user/0"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:21
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.039) 0:01:24.082 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.040) 0:01:24.122 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.021) 0:01:24.143 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.023) 0:01:24.166 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:26
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.021) 0:01:24.188 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Manage each secret] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42
Saturday 14 February 2026 11:48:27 -0500 (0:00:00.019) 0:01:24.207 *****
changed: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": true
}

TASK [fedora.linux_system_roles.podman : Handle Kubernetes specifications] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:184
Saturday 14 February 2026 11:48:28 -0500 (0:00:00.762) 0:01:24.970 *****
skipping: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Handle Quadlet specifications] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:191
Saturday 14 February 2026 11:48:28 -0500 (0:00:00.030) 0:01:25.000 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))

TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14
Saturday 14 February 2026 11:48:28 -0500 (0:00:00.153) 0:01:25.154 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_quadlet_file_src": "",
        "__podman_quadlet_spec": {},
        "__podman_quadlet_str": "[Network]\nSubnet=192.168.29.0/24\nGateway=192.168.29.1\nLabel=app=wordpress\nNetworkName=quadlet-basic-name\n",
        "__podman_quadlet_template_src": "templates/quadlet-basic.network.j2"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25
Saturday 14 February 2026 11:48:28 -0500 (0:00:00.076) 0:01:25.231 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_continue_if_pull_fails": false,
        "__podman_pull_image": true,
        "__podman_state": "created",
        "__podman_systemd_unit_scope": "",
        "__podman_user": "root"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35
Saturday 14 February 2026 11:48:28 -0500 (0:00:00.029) 0:01:25.260 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_quadlet_str",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48
Saturday 14 February 2026 11:48:28 -0500 (0:00:00.024) 0:01:25.285 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_quadlet_name": "quadlet-basic",
        "__podman_quadlet_type": "network",
        "__podman_rootless": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 14 February 2026 11:48:28 -0500 (0:00:00.037) 0:01:25.323 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:48:28 -0500 (0:00:00.037) 0:01:25.360 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:48:28 -0500 (0:00:00.027) 0:01:25.388 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:48:28 -0500 (0:00:00.025) 0:01:25.413 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_group": "0"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.032) 0:01:25.445 *****
ok: [managed-node2] => {
    "changed": false,
    "stat": {
        "atime": 1771087317.803642,
        "attr_flags": "",
        "attributes": [],
        "block_size": 4096,
        "blocks": 32,
        "charset": "binary",
        "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a",
        "ctime": 1771087310.205592,
        "dev": 51714,
        "device_type": 0,
        "executable": true,
        "exists": true,
        "gid": 0,
        "gr_name": "root",
        "inode": 9163113,
        "isblk": false,
        "ischr": false,
        "isdir": false,
        "isfifo": false,
        "isgid": false,
        "islnk": false,
        "isreg": true,
        "issock": false,
        "isuid": false,
        "mimetype": "application/x-pie-executable",
        "mode": "0755",
        "mtime": 1764201600.0,
        "nlink": 1,
        "path": "/usr/bin/getsubids",
        "pw_name": "root",
        "readable": true,
        "rgrp": true,
        "roth": true,
        "rusr": true,
        "size": 15560,
        "uid": 0,
        "version": "2735365168",
        "wgrp": false,
        "woth": false,
        "writeable": true,
        "wusr": true,
        "xgrp": true,
        "xoth": true,
        "xusr": true
    }
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.382) 0:01:25.828 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_handle_user not in [\"root\", \"0\"]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.069) 0:01:25.898 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_handle_user not in [\"root\", \"0\"]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.025) 0:01:25.923 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_handle_user not in [\"root\", \"0\"]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.030) 0:01:25.954 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.028) 0:01:25.982 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.030) 0:01:26.013 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.036) 0:01:26.049 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.036) 0:01:26.086 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.036) 0:01:26.123 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_activate_systemd_unit": true,
        "__podman_images_found": [],
        "__podman_kube_yamls_raw": "",
        "__podman_service_name": "quadlet-basic-network.service",
        "__podman_systemd_scope": "system",
        "__podman_xdg_runtime_dir": "/run/user/0"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.083) 0:01:26.206 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_quadlet_path": "/etc/containers/systemd"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Get kube yaml contents] ***************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.050) 0:01:26.256 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_kube_yamls_raw | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.039) 0:01:26.296 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_images": [],
        "__podman_quadlet_file": "/etc/containers/systemd/quadlet-basic.network",
        "__podman_volumes": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108
Saturday 14 February 2026 11:48:29 -0500 (0:00:00.105) 0:01:26.402 *****
ok: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Cleanup quadlets] *********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115
Saturday 14 February 2026 11:48:30 -0500 (0:00:00.034) 0:01:26.436 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_state == \"absent\"",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Create and update quadlets] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119
Saturday 14 February 2026 11:48:30 -0500 (0:00:00.020) 0:01:26.456 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:2
Saturday 14 February 2026 11:48:30 -0500 (0:00:00.060) 0:01:26.517 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13
Saturday 14 February 2026 11:48:30 -0500 (0:00:00.037) 0:01:26.555 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path:
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:48:30 -0500 (0:00:00.021) 0:01:26.577 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:48:30 -0500 (0:00:00.023) 0:01:26.601 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create host directories] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:7 Saturday 14 February 2026 11:48:30 -0500 (0:00:00.021) 0:01:26.622 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:18 Saturday 14 February 2026 11:48:30 -0500 (0:00:00.016) 0:01:26.639 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2 Saturday 14 February 2026 11:48:30 -0500 (0:00:00.034) 0:01:26.674 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK 
[fedora.linux_system_roles.podman : Handle images when not booted] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:25 Saturday 14 February 2026 11:48:30 -0500 (0:00:00.019) 0:01:26.694 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure the quadlet directory is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:21 Saturday 14 February 2026 11:48:30 -0500 (0:00:00.019) 0:01:26.713 ***** ok: [managed-node2] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/containers/systemd", "secontext": "system_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is copied] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:32 Saturday 14 February 2026 11:48:30 -0500 (0:00:00.415) 0:01:27.129 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_file_src | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file content is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:44 Saturday 14 February 2026 11:48:30 -0500 (0:00:00.030) 0:01:27.160 ***** changed: [managed-node2] => { "changed": true, "checksum": "19c9b17be2af9b9deca5c3bd327f048966750682", "dest": "/etc/containers/systemd/quadlet-basic.network", "gid": 0, "group": "root", "md5sum": "313e9a2e5a99f80fa7023c19a1065658", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 105, "src": 
"/root/.ansible/tmp/ansible-tmp-1771087710.8078883-25734-32145037604992/.source.network", "state": "file", "uid": 0 } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is present] ******* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:57 Saturday 14 February 2026 11:48:31 -0500 (0:00:00.837) 0:01:27.997 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_copy_content is skipped", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Reload systemctl] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:69 Saturday 14 February 2026 11:48:31 -0500 (0:00:00.019) 0:01:28.017 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Start service] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:98 Saturday 14 February 2026 11:48:32 -0500 (0:00:00.787) 0:01:28.805 ***** changed: [managed-node2] => { "changed": true, "name": "quadlet-basic-network.service", "state": "started", "status": { "AccessSELinuxContext": "system_u:object_r:systemd_generator_unit_file_t:s0", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "basic.target sysinit.target -.mount network-online.target systemd-journald.socket system.slice", "AllowIsolate": "no", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": 
"no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-network.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore --subnet 192.168.29.0/24 --gateway 192.168.29.1 --label app=wordpress quadlet-basic-name ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman 
network create --ignore --subnet 192.168.29.0/24 --gateway 192.168.29.1 --label app=wordpress quadlet-basic-name ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/systemd/generator/quadlet-basic-network.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-network.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": 
"13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "2517286912", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "[not set]", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "[not set]", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-network.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": 
"no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "-.mount sysinit.target system.slice", "RequiresMountsFor": "/run/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "SourcePath": "/etc/containers/systemd/quadlet-basic.network", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestampMonotonic": "0", 
"StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-network", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "network-online.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "infinity" } } TASK [fedora.linux_system_roles.podman : Restart service] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:115 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.626) 0:01:29.432 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_service_started is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.025) 0:01:29.457 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Network": {} }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task 
path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.030) 0:01:29.488 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "created", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.030) 0:01:29.518 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.021) 0:01:29.539 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-unused-network", "__podman_quadlet_type": "network", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.036) 0:01:29.575 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.037) 0:01:29.612 
***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.029) 0:01:29.642 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.030) 0:01:29.674 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.051) 0:01:29.725 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, 
"rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.469) 0:01:30.195 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.039) 0:01:30.234 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.038) 0:01:30.273 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.027) 0:01:30.300 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.036) 0:01:30.336 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.035) 0:01:30.372 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.023) 0:01:30.396 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:48:33 -0500 (0:00:00.026) 0:01:30.423 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.027) 
0:01:30.450 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-unused-network-network.service", "__podman_systemd_scope": "system", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.039) 0:01:30.490 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.030) 0:01:30.520 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_kube_yamls_raw | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.023) 0:01:30.544 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-basic-unused-network.network", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.065) 0:01:30.610 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the 
fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.030) 0:01:30.640 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state == \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.019) 0:01:30.660 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:2 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.045) 0:01:30.706 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.030) 0:01:30.736 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 
11:48:34 -0500 (0:00:00.020) 0:01:30.757 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.020) 0:01:30.778 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create host directories] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:7 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.019) 0:01:30.798 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:18 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.016) 0:01:30.815 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.033) 0:01:30.848 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle images when not booted] ******** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:25 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.019) 0:01:30.868 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure the quadlet directory is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:21 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.020) 0:01:30.889 ***** ok: [managed-node2] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/containers/systemd", "secontext": "system_u:object_r:etc_t:s0", "size": 35, "state": "directory", "uid": 0 } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is copied] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:32 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.404) 0:01:31.293 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_file_src | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file content is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:44 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.028) 0:01:31.322 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_str | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is present] ******* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:57 Saturday 14 February 2026 11:48:34 -0500 (0:00:00.027) 
0:01:31.350 ***** changed: [managed-node2] => { "changed": true, "checksum": "52c9d75ecaf81203cc1f1a3b1dd00fcd25067b01", "dest": "/etc/containers/systemd/quadlet-basic-unused-network.network", "gid": 0, "group": "root", "md5sum": "968d495367b59475979615e4884cbda2", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 54, "src": "/root/.ansible/tmp/ansible-tmp-1771087714.978921-25930-142517737796637/.source.network", "state": "file", "uid": 0 } TASK [fedora.linux_system_roles.podman : Reload systemctl] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:69 Saturday 14 February 2026 11:48:35 -0500 (0:00:00.730) 0:01:32.080 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Start service] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:98 Saturday 14 February 2026 11:48:36 -0500 (0:00:00.748) 0:01:32.829 ***** changed: [managed-node2] => { "changed": true, "name": "quadlet-basic-unused-network-network.service", "state": "started", "status": { "AccessSELinuxContext": "system_u:object_r:systemd_generator_unit_file_t:s0", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "network-online.target systemd-journald.socket basic.target system.slice sysinit.target -.mount", "AllowIsolate": "no", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", 
"CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-unused-network-network.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore systemd-quadlet-basic-unused-network ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore systemd-quadlet-basic-unused-network ; flags= ; start_time=[n/a] ; 
stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/systemd/generator/quadlet-basic-unused-network-network.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-unused-network-network.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", 
"LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "2512388096", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "[not set]", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "[not set]", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-unused-network-network.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", 
"ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "-.mount sysinit.target system.slice", "RequiresMountsFor": "/run/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "SourcePath": "/etc/containers/systemd/quadlet-basic-unused-network.network", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestampMonotonic": "0", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", 
"SubState": "dead", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-unused-network-network", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "network-online.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "infinity" } } TASK [fedora.linux_system_roles.podman : Restart service] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:115 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.628) 0:01:33.458 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_service_started is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.031) 0:01:33.489 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Volume": { "VolumeName": "quadlet-basic-mysql-name" } }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.054) 0:01:33.544 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "created", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.048) 0:01:33.593 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.034) 0:01:33.627 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-mysql", "__podman_quadlet_type": "volume", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.041) 0:01:33.669 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.046) 0:01:33.716 ***** skipping: 
[managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.025) 0:01:33.741 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.026) 0:01:33.768 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.080) 0:01:33.848 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, 
"roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.382) 0:01:34.231 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.023) 0:01:34.254 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.023) 0:01:34.278 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.025) 0:01:34.303 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get 
subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.024) 0:01:34.328 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.022) 0:01:34.350 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.023) 0:01:34.374 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.025) 0:01:34.400 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:48:37 -0500 (0:00:00.023) 0:01:34.423 ***** ok: [managed-node2] => { 
"ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-mysql-volume.service", "__podman_systemd_scope": "system", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.047) 0:01:34.471 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.037) 0:01:34.508 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_kube_yamls_raw | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.031) 0:01:34.540 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-basic-mysql.volume", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.099) 0:01:34.640 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", 
"changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.045) 0:01:34.685 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state == \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.031) 0:01:34.717 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:2 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.078) 0:01:34.795 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.054) 0:01:34.850 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.035) 0:01:34.885 ***** skipping: 
[managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.033) 0:01:34.919 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create host directories] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:7 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.033) 0:01:34.953 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:18 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.029) 0:01:34.982 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.050) 0:01:35.033 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle images when not booted] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:25 Saturday 14 February 
2026 11:48:38 -0500 (0:00:00.030) 0:01:35.064 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure the quadlet directory is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:21 Saturday 14 February 2026 11:48:38 -0500 (0:00:00.031) 0:01:35.096 ***** ok: [managed-node2] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/containers/systemd", "secontext": "system_u:object_r:etc_t:s0", "size": 79, "state": "directory", "uid": 0 } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is copied] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:32 Saturday 14 February 2026 11:48:39 -0500 (0:00:00.439) 0:01:35.535 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_file_src | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file content is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:44 Saturday 14 February 2026 11:48:39 -0500 (0:00:00.036) 0:01:35.571 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_str | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is present] ******* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:57 Saturday 14 February 2026 11:48:39 -0500 (0:00:00.035) 0:01:35.607 ***** changed: [managed-node2] => { "changed": true, "checksum": "90a3571bfc7670328fe3f8fb625585613dbd9c4a", "dest": 
"/etc/containers/systemd/quadlet-basic-mysql.volume", "gid": 0, "group": "root", "md5sum": "8682d71bf3c086f228cd72389b7c9018", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 89, "src": "/root/.ansible/tmp/ansible-tmp-1771087719.2426136-26105-180291594378143/.source.volume", "state": "file", "uid": 0 } TASK [fedora.linux_system_roles.podman : Reload systemctl] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:69 Saturday 14 February 2026 11:48:39 -0500 (0:00:00.769) 0:01:36.377 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Start service] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:98 Saturday 14 February 2026 11:48:40 -0500 (0:00:00.792) 0:01:37.169 ***** changed: [managed-node2] => { "changed": true, "name": "quadlet-basic-mysql-volume.service", "state": "started", "status": { "AccessSELinuxContext": "system_u:object_r:systemd_generator_unit_file_t:s0", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "-.mount sysinit.target systemd-journald.socket basic.target network-online.target system.slice", "AllowIsolate": "no", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": 
"yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-mysql-volume.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore quadlet-basic-mysql-name ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore quadlet-basic-mysql-name ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": 
"root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/systemd/generator/quadlet-basic-mysql-volume.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-mysql-volume.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", 
"LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "2536652800", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "[not set]", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "[not set]", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-mysql-volume.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", 
"ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "sysinit.target -.mount system.slice", "RequiresMountsFor": "/run/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "SourcePath": "/etc/containers/systemd/quadlet-basic-mysql.volume", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestampMonotonic": "0", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": 
"quadlet-basic-mysql-volume", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "network-online.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "infinity" } } TASK [fedora.linux_system_roles.podman : Restart service] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:115 Saturday 14 February 2026 11:48:41 -0500 (0:00:00.633) 0:01:37.803 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_service_started is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:48:41 -0500 (0:00:00.040) 0:01:37.843 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Volume": {} }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:48:41 -0500 (0:00:00.055) 0:01:37.899 ***** 
ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "created", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:48:41 -0500 (0:00:00.036) 0:01:37.936 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:48:41 -0500 (0:00:00.025) 0:01:37.962 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-unused-volume", "__podman_quadlet_type": "volume", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:48:41 -0500 (0:00:00.044) 0:01:38.007 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:48:41 -0500 (0:00:00.044) 0:01:38.051 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": 
"Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:48:41 -0500 (0:00:00.030) 0:01:38.081 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:48:41 -0500 (0:00:00.072) 0:01:38.154 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:48:41 -0500 (0:00:00.034) 0:01:38.189 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": 
true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.391) 0:01:38.580 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.062) 0:01:38.643 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.052) 0:01:38.695 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.037) 0:01:38.732 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 
2026 11:48:42 -0500 (0:00:00.038) 0:01:38.771 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.034) 0:01:38.805 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.040) 0:01:38.845 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.035) 0:01:38.881 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.037) 0:01:38.919 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": 
"quadlet-basic-unused-volume-volume.service", "__podman_systemd_scope": "system", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.061) 0:01:38.980 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.035) 0:01:39.015 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_kube_yamls_raw | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.024) 0:01:39.040 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-basic-unused-volume.volume", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.075) 0:01:39.115 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.031) 0:01:39.147 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state == \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.020) 0:01:39.167 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:2 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.044) 0:01:39.212 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.032) 0:01:39.244 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.020) 0:01:39.265 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional 
result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.019) 0:01:39.285 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create host directories] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:7 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.021) 0:01:39.306 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:18 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.020) 0:01:39.327 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.036) 0:01:39.364 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle images when not booted] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:25 Saturday 14 February 2026 11:48:42 -0500 (0:00:00.032) 0:01:39.396 ***** skipping: [managed-node2] => { "changed": false, 
"false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure the quadlet directory is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:21 Saturday 14 February 2026 11:48:43 -0500 (0:00:00.030) 0:01:39.427 ***** ok: [managed-node2] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/containers/systemd", "secontext": "system_u:object_r:etc_t:s0", "size": 113, "state": "directory", "uid": 0 } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is copied] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:32 Saturday 14 February 2026 11:48:43 -0500 (0:00:00.437) 0:01:39.864 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_file_src | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file content is present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:44 Saturday 14 February 2026 11:48:43 -0500 (0:00:00.035) 0:01:39.900 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_str | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure quadlet file is present] ******* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:57 Saturday 14 February 2026 11:48:43 -0500 (0:00:00.040) 0:01:39.941 ***** changed: [managed-node2] => { "changed": true, "checksum": "fd0ae560360afa5541b866560b1e849d25e216ef", "dest": "/etc/containers/systemd/quadlet-basic-unused-volume.volume", "gid": 0, "group": "root", "md5sum": 
"4967598a0284ad3e296ab106829a30a2", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 53, "src": "/root/.ansible/tmp/ansible-tmp-1771087723.576437-26316-237709762951197/.source.volume", "state": "file", "uid": 0 } TASK [fedora.linux_system_roles.podman : Reload systemctl] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:69 Saturday 14 February 2026 11:48:44 -0500 (0:00:00.782) 0:01:40.723 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Start service] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:98 Saturday 14 February 2026 11:48:45 -0500 (0:00:00.754) 0:01:41.477 ***** changed: [managed-node2] => { "changed": true, "name": "quadlet-basic-unused-volume-volume.service", "state": "started", "status": { "AccessSELinuxContext": "system_u:object_r:systemd_generator_unit_file_t:s0", "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "basic.target network-online.target sysinit.target system.slice -.mount systemd-journald.socket", "AllowIsolate": "no", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override 
cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "0", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-unused-volume-volume.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore systemd-quadlet-basic-unused-volume ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore systemd-quadlet-basic-unused-volume ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": 
"root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/systemd/generator/quadlet-basic-unused-volume-volume.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-unused-volume-volume.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", 
"LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "2539048960", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "[not set]", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "[not set]", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-unused-volume-volume.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", 
"ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "sysinit.target -.mount system.slice", "RequiresMountsFor": "/run/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "SourcePath": "/etc/containers/systemd/quadlet-basic-unused-volume.volume", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestampMonotonic": "0", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", 
"SyslogIdentifier": "quadlet-basic-unused-volume-volume", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "network-online.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "infinity" } } TASK [fedora.linux_system_roles.podman : Restart service] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:115 Saturday 14 February 2026 11:48:45 -0500 (0:00:00.613) 0:01:42.091 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_service_started is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:48:45 -0500 (0:00:00.023) 0:01:42.115 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Container": { "ContainerName": "quadlet-basic-mysql-name", "Environment": [ "FOO=/bin/busybox-extras", "BAZ=test" ], "Image": "quay.io/linux-system-roles/mysql:5.6", "Network": "quadlet-basic.network", "PodmanArgs": "--secret=mysql_container_root_password,type=env,target=MYSQL_ROOT_PASSWORD --secret=json_secret,type=mount,target=/tmp/test.json", 
"Volume": "quadlet-basic-mysql.volume:/var/lib/mysql" }, "Install": { "WantedBy": "default.target" } }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:48:45 -0500 (0:00:00.033) 0:01:42.148 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "created", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:48:45 -0500 (0:00:00.038) 0:01:42.187 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:48:45 -0500 (0:00:00.037) 0:01:42.224 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-mysql", "__podman_quadlet_type": "container", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:48:45 -0500 (0:00:00.043) 0:01:42.268 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 
TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:48:45 -0500 (0:00:00.041) 0:01:42.310 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:48:45 -0500 (0:00:00.085) 0:01:42.395 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:48:45 -0500 (0:00:00.026) 0:01:42.422 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.035) 0:01:42.457 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": 
false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.399) 0:01:42.857 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.042) 0:01:42.899 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.037) 0:01:42.936 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 
Saturday 14 February 2026 11:48:46 -0500 (0:00:00.039) 0:01:42.975 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.037) 0:01:43.013 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.038) 0:01:43.052 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.037) 0:01:43.089 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.036) 0:01:43.126 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.035) 0:01:43.162 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [ "quay.io/linux-system-roles/mysql:5.6" ], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-mysql.service", "__podman_systemd_scope": "system", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.067) 0:01:43.230 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.045) 0:01:43.275 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_kube_yamls_raw | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.028) 0:01:43.304 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [ "quay.io/linux-system-roles/mysql:5.6" ], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-basic-mysql.container", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set 
per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.076) 0:01:43.380 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:48:46 -0500 (0:00:00.035) 0:01:43.416 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state == \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:48:47 -0500 (0:00:00.019) 0:01:43.436 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:2 Saturday 14 February 2026 11:48:47 -0500 (0:00:00.045) 0:01:43.481 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:48:47 -0500 (0:00:00.032) 0:01:43.514 ***** skipping: [managed-node2] => { "changed": false, "false_condition": 
"__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:48:47 -0500 (0:00:00.020) 0:01:43.535 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:48:47 -0500 (0:00:00.021) 0:01:43.556 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create host directories] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:7 Saturday 14 February 2026 11:48:47 -0500 (0:00:00.020) 0:01:43.577 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:18 Saturday 14 February 2026 11:48:47 -0500 (0:00:00.017) 0:01:43.594 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure container images are present] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2 Saturday 14 February 2026 11:48:47 -0500 (0:00:00.037) 0:01:43.632 ***** failed: 
[managed-node2] (item=None) => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } fatal: [managed-node2]: FAILED! => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [Show error] ************************************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:302 Saturday 14 February 2026 11:48:51 -0500 (0:00:04.033) 0:01:47.665 ***** ok: [managed-node2] => {} MSG: { "changed": false, "failed": true, "msg": "One or more items failed", "results": [ { "_ansible_ignore_unreachable": null, "_ansible_item_label": "quay.io/linux-system-roles/mysql:5.6", "_ansible_item_result": true, "_ansible_no_log": true, "_ansible_parsed": true, "ansible_loop_var": "item", "changed": false, "failed": true, "failed_when_result": true, "invocation": { "module_args": { "arch": null, "auth_file": null, "build": { "annotation": null, "cache": true, "container_file": null, "extra_args": null, "file": null, "force_rm": false, "format": "oci", "rm": true, "target": null, "volume": null }, "ca_cert_dir": null, "executable": "podman", "force": true, "name": "quay.io/linux-system-roles/mysql:5.6", "password": null, "path": null, "pull": true, "pull_extra_args": null, "push": false, "push_args": { "compress": null, "dest": null, "extra_args": null, "format": null, "remove_signatures": null, "sign_by": null, "ssh": null, "transport": null }, "quadlet_dir": null, "quadlet_file_mode": null, "quadlet_filename": null, "quadlet_options": null, "state": "present", "tag": "latest", "username": null, "validate_certs": null } }, "item": "quay.io/linux-system-roles/mysql:5.6", "msg": "Failed to pull image quay.io/linux-system-roles/mysql:5.6", "stderr": "Trying to pull quay.io/linux-system-roles/mysql:5.6...\nGetting image source signatures\nCopying 
blob sha256:74711b52025d6eeadafdf7e48e57c6c79b1b962bbe076320dbd32c19d21ff20e\nCopying blob sha256:82f6e5dc9ef6571b55539f02c80df6c7e3eafbe1533e780efee880fb528404f8\nCopying blob sha256:ea81a90e0b6844870e4e5d158af214492ec29a553acee4218851587e67c18277\nCopying blob sha256:8e6b4edcb54da53b43a65f233c2e0a9bafd6888ac50cf736df97ba38d597da48\nCopying blob sha256:920bcc5a8004a85af85ee7c7c89de8e60c1cc105131a9b8f441ee09c5eaafe7d\nCopying blob sha256:6c156120e6acbf00169e48bdfbf41c5cce421e13146891437b93b9f073be14ff\nCopying blob sha256:876299233b4689a869cf260574765f36f6587f67662851efd6e1ba7b67df45b7\nCopying blob sha256:e66e6a922f9f0b432ca8fe7696c1a2009139e04600cfefeef3805734495f87f2\nCopying blob sha256:63ff629ef91c7b044d5e2db032c620f38027f7c2d9b5c69e3324c464ea643a88\nCopying blob sha256:0674d499d16ed94593c7d6410930312a4213cb45765be6a9dadd371012ff8fae\nCopying blob sha256:255aedc2649186302c90a5a0a7d080b9d6a1ebe5295554b5a20da55c68c6f62d\nError: unable to copy from source docker://quay.io/linux-system-roles/mysql:5.6: copying system image from manifest list: reading blob sha256:82f6e5dc9ef6571b55539f02c80df6c7e3eafbe1533e780efee880fb528404f8: Digest did not match, expected sha256:82f6e5dc9ef6571b55539f02c80df6c7e3eafbe1533e780efee880fb528404f8, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\n", "stderr_lines": [ "Trying to pull quay.io/linux-system-roles/mysql:5.6...", "Getting image source signatures", "Copying blob sha256:74711b52025d6eeadafdf7e48e57c6c79b1b962bbe076320dbd32c19d21ff20e", "Copying blob sha256:82f6e5dc9ef6571b55539f02c80df6c7e3eafbe1533e780efee880fb528404f8", "Copying blob sha256:ea81a90e0b6844870e4e5d158af214492ec29a553acee4218851587e67c18277", "Copying blob sha256:8e6b4edcb54da53b43a65f233c2e0a9bafd6888ac50cf736df97ba38d597da48", "Copying blob sha256:920bcc5a8004a85af85ee7c7c89de8e60c1cc105131a9b8f441ee09c5eaafe7d", "Copying blob sha256:6c156120e6acbf00169e48bdfbf41c5cce421e13146891437b93b9f073be14ff", "Copying blob 
sha256:876299233b4689a869cf260574765f36f6587f67662851efd6e1ba7b67df45b7", "Copying blob sha256:e66e6a922f9f0b432ca8fe7696c1a2009139e04600cfefeef3805734495f87f2", "Copying blob sha256:63ff629ef91c7b044d5e2db032c620f38027f7c2d9b5c69e3324c464ea643a88", "Copying blob sha256:0674d499d16ed94593c7d6410930312a4213cb45765be6a9dadd371012ff8fae", "Copying blob sha256:255aedc2649186302c90a5a0a7d080b9d6a1ebe5295554b5a20da55c68c6f62d", "Error: unable to copy from source docker://quay.io/linux-system-roles/mysql:5.6: copying system image from manifest list: reading blob sha256:82f6e5dc9ef6571b55539f02c80df6c7e3eafbe1533e780efee880fb528404f8: Digest did not match, expected sha256:82f6e5dc9ef6571b55539f02c80df6c7e3eafbe1533e780efee880fb528404f8, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" ], "stdout": "", "stdout_lines": [] } ], "skipped": false } TASK [Debug3] ****************************************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:306 Saturday 14 February 2026 11:48:51 -0500 (0:00:00.037) 0:01:47.703 ***** ok: [managed-node2] => { "changed": false, "cmd": "set -x\nset -o pipefail\nexec 1>&2\n#podman volume rm --all\n#podman network prune -f\npodman volume ls\npodman network ls\npodman secret ls\npodman container ls\npodman pod ls\npodman images\nsystemctl list-units | grep quadlet\n", "delta": "0:00:00.159104", "end": "2026-02-14 11:48:51.783442", "rc": 0, "start": "2026-02-14 11:48:51.624338" } STDERR: + set -o pipefail + exec + podman volume ls DRIVER VOLUME NAME local quadlet-basic-mysql-name local systemd-quadlet-basic-unused-volume + podman network ls NETWORK ID NAME DRIVER 2f259bab93aa podman bridge abf306ea2c77 podman-default-kube-network bridge 753e73850896 quadlet-basic-name bridge 01cf3213e3ed systemd-quadlet-basic-unused-network bridge + podman secret ls ID NAME DRIVER CREATED UPDATED 90451f17a3693765e0b13c8ad json_secret file 23 
seconds ago 23 seconds ago 90fe43ec6167996f11d8c921b mysql_container_root_password file 24 seconds ago 24 seconds ago + podman container ls CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES + podman pod ls POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS + podman images REPOSITORY TAG IMAGE ID CREATED SIZE quay.io/libpod/testimage 20210610 9f9ec7f2fdef 4 years ago 7.99 MB + systemctl list-units + grep quadlet quadlet-basic-mysql-volume.service loaded active exited quadlet-basic-mysql-volume.service quadlet-basic-network.service loaded active exited quadlet-basic-network.service quadlet-basic-unused-network-network.service loaded active exited quadlet-basic-unused-network-network.service quadlet-basic-unused-volume-volume.service loaded active exited quadlet-basic-unused-volume-volume.service TASK [Check AVCs] ************************************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:322 Saturday 14 February 2026 11:48:51 -0500 (0:00:00.568) 0:01:48.272 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "grep", "type=AVC", "/var/log/audit/audit.log" ], "delta": "0:00:00.004566", "end": "2026-02-14 11:48:52.163115", "failed_when_result": false, "rc": 1, "start": "2026-02-14 11:48:52.158549" } MSG: non-zero return code TASK [Dump journal] ************************************************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:327 Saturday 14 February 2026 11:48:52 -0500 (0:00:00.373) 0:01:48.646 ***** fatal: [managed-node2]: FAILED! 
=> { "changed": false, "cmd": [ "journalctl", "-ex" ], "delta": "0:00:00.033270", "end": "2026-02-14 11:48:52.586838", "failed_when_result": true, "rc": 0, "start": "2026-02-14 11:48:52.553568" } STDOUT: Feb 14 11:44:06 managed-node2 conmon[33222]: conmon 257f1d51b1075ac8c229 : winsz read side: 15, winsz write side: 16 Feb 14 11:44:06 managed-node2 systemd[1]: Started libpod-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope - libcrun container. ░░ Subject: A start job for unit libpod-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope has finished successfully. ░░ ░░ The job identifier is 2427. Feb 14 11:44:06 managed-node2 conmon[33222]: conmon 257f1d51b1075ac8c229 : container PID: 33224 Feb 14 11:44:06 managed-node2 podman[33160]: 2026-02-14 11:44:06.122843666 -0500 EST m=+0.369090254 container init 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Feb 14 11:44:06 managed-node2 podman[33160]: 2026-02-14 11:44:06.126102643 -0500 EST m=+0.372349223 container start 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Feb 14 11:44:06 managed-node2 podman[33160]: 2026-02-14 11:44:06.130281128 -0500 EST m=+0.376527469 pod start 
0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2 (image=, name=httpd2) Feb 14 11:44:06 managed-node2 python3.12[33153]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Feb 14 11:44:06 managed-node2 python3.12[33153]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod: 0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2 Container: 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 Feb 14 11:44:06 managed-node2 python3.12[33153]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2026-02-14T11:44:05-05:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2026-02-14T11:44:05-05:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2026-02-14T11:44:05-05:00" level=info msg="Setting parallel job count to 7" time="2026-02-14T11:44:05-05:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2026-02-14T11:44:05-05:00" level=info msg="Using sqlite as database backend" time="2026-02-14T11:44:05-05:00" level=debug msg="Using graph driver overlay" time="2026-02-14T11:44:05-05:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2026-02-14T11:44:05-05:00" level=debug msg="Using run root /run/containers/storage" time="2026-02-14T11:44:05-05:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2026-02-14T11:44:05-05:00" level=debug msg="Using tmp dir /run/libpod" time="2026-02-14T11:44:05-05:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2026-02-14T11:44:05-05:00" level=debug msg="Using transient store: false" time="2026-02-14T11:44:05-05:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Cached value 
indicated that overlay is supported" time="2026-02-14T11:44:05-05:00" level=debug msg="Cached value indicated that overlay is supported" time="2026-02-14T11:44:05-05:00" level=debug msg="Cached value indicated that metacopy is being used" time="2026-02-14T11:44:05-05:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2026-02-14T11:44:05-05:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2026-02-14T11:44:05-05:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2026-02-14T11:44:05-05:00" level=debug msg="Initializing event backend journald" time="2026-02-14T11:44:05-05:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2026-02-14T11:44:05-05:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2026-02-14T11:44:05-05:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2026-02-14T11:44:05-05:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" time="2026-02-14T11:44:05-05:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2026-02-14T11:44:05-05:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2026-02-14T11:44:05-05:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" time="2026-02-14T11:44:05-05:00" level=debug 
msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2026-02-14T11:44:05-05:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2026-02-14T11:44:05-05:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 bridge podman1 2026-02-14 11:42:18.681480354 -0500 EST [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2026-02-14T11:44:05-05:00" level=debug msg="Successfully loaded 2 networks" time="2026-02-14T11:44:05-05:00" level=debug msg="Pod using bridge network mode" time="2026-02-14T11:44:05-05:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice for parent machine.slice and name libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2" time="2026-02-14T11:44:05-05:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice" time="2026-02-14T11:44:05-05:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice" time="2026-02-14T11:44:05-05:00" level=debug msg="no command or entrypoint provided, and no CMD or ENTRYPOINT from image: defaulting to empty string" time="2026-02-14T11:44:05-05:00" level=debug msg="using systemd mode: false" time="2026-02-14T11:44:05-05:00" level=debug msg="setting container name 0454ffcc8b08-infra" time="2026-02-14T11:44:05-05:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2026-02-14T11:44:05-05:00" 
level=debug msg="Allocated lock 1 for container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a" time="2026-02-14T11:44:05-05:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are supported" time="2026-02-14T11:44:05-05:00" level=debug msg="Created container \"337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Container \"337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a\" has work directory \"/var/lib/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/userdata\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Container \"337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a\" has run directory \"/run/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/userdata\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2026-02-14T11:44:05-05:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2026-02-14T11:44:05-05:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2026-02-14T11:44:05-05:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2026-02-14T11:44:05-05:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2026-02-14T11:44:05-05:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Pulling image quay.io/libpod/testimage:20210610 (policy: missing)" time="2026-02-14T11:44:05-05:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2026-02-14T11:44:05-05:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2026-02-14T11:44:05-05:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2026-02-14T11:44:05-05:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2026-02-14T11:44:05-05:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2026-02-14T11:44:05-05:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2026-02-14T11:44:05-05:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2026-02-14T11:44:05-05:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2026-02-14T11:44:05-05:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2026-02-14T11:44:05-05:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2026-02-14T11:44:05-05:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2026-02-14T11:44:05-05:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2026-02-14T11:44:05-05:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2026-02-14T11:44:05-05:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2026-02-14T11:44:05-05:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2026-02-14T11:44:05-05:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2026-02-14T11:44:05-05:00" level=debug msg="using systemd mode: false" time="2026-02-14T11:44:05-05:00" level=debug msg="adding container to pod httpd2" time="2026-02-14T11:44:05-05:00" level=debug msg="setting container name httpd2-httpd2" time="2026-02-14T11:44:05-05:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" 
time="2026-02-14T11:44:05-05:00" level=info msg="Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host" time="2026-02-14T11:44:05-05:00" level=debug msg="Adding mount /proc" time="2026-02-14T11:44:05-05:00" level=debug msg="Adding mount /dev" time="2026-02-14T11:44:05-05:00" level=debug msg="Adding mount /dev/pts" time="2026-02-14T11:44:05-05:00" level=debug msg="Adding mount /dev/mqueue" time="2026-02-14T11:44:05-05:00" level=debug msg="Adding mount /sys" time="2026-02-14T11:44:05-05:00" level=debug msg="Adding mount /sys/fs/cgroup" time="2026-02-14T11:44:05-05:00" level=debug msg="Allocated lock 2 for container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2" time="2026-02-14T11:44:05-05:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Created container \"257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Container \"257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\" has work directory \"/var/lib/containers/storage/overlay-containers/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2/userdata\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Container \"257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\" has run directory \"/run/containers/storage/overlay-containers/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2/userdata\"" time="2026-02-14T11:44:05-05:00" level=debug msg="Strongconnecting node 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a" time="2026-02-14T11:44:05-05:00" level=debug msg="Pushed 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a onto stack" time="2026-02-14T11:44:05-05:00" level=debug msg="Finishing node 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a. 
Popped 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a off stack" time="2026-02-14T11:44:05-05:00" level=debug msg="Strongconnecting node 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2" time="2026-02-14T11:44:05-05:00" level=debug msg="Pushed 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 onto stack" time="2026-02-14T11:44:05-05:00" level=debug msg="Finishing node 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2. Popped 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 off stack" time="2026-02-14T11:44:05-05:00" level=debug msg="Made network namespace at /run/netns/netns-88026c48-10f7-33b6-4b2c-c0b5e250fcc3 for container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a" time="2026-02-14T11:44:05-05:00" level=debug msg="Created root filesystem for container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a at /var/lib/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/rootfs/merge" [DEBUG netavark::network::validation] Validating network namespace... [DEBUG netavark::commands::setup] Setting up... 
[INFO netavark::firewall] Using nftables firewall driver [DEBUG netavark::network::bridge] Setup network podman-default-kube-network [DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.2/24] [DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24] [DEBUG netavark::network::bridge] Using mtu 9001 from default route interface for the network [DEBUG netavark::network::sysctl] Setting sysctl value for net/ipv4/ip_forward to 1 [DEBUG netavark::network::sysctl] Setting sysctl value for net/ipv4/conf/podman1/route_localnet to 1 [DEBUG netavark::network::sysctl] Setting sysctl value for net/ipv4/conf/podman1/rp_filter to 2 [DEBUG netavark::network::sysctl] Setting sysctl value for net/ipv6/conf/eth0/autoconf to 0 [DEBUG netavark::network::sysctl] Setting sysctl value for net/ipv4/conf/eth0/arp_notify to 1 [DEBUG netavark::network::sysctl] Setting sysctl value for net/ipv4/conf/eth0/rp_filter to 2 [INFO netavark::network::netlink_route] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100) [DEBUG netavark::firewall::firewalld] Adding firewalld rules for network 10.89.0.0/24 [DEBUG netavark::firewall::firewalld] Adding subnet 10.89.0.0/24 to zone trusted as source [INFO netavark::firewall::nft] Creating container chain nv_abf306ea_10_89_0_0_nm24 [DEBUG netavark::dns::aardvark] Spawning aardvark server [DEBUG netavark::dns::aardvark] start aardvark-dns: ["systemd-run", "-q", "--scope", "/usr/libexec/podman/aardvark-dns", "--config", "/run/containers/networks/aardvark-dns", "-p", "53", "run"] [DEBUG netavark::commands::setup] { "podman-default-kube-network": StatusBlock { dns_search_domains: Some( [ "dns.podman", ], ), dns_server_ips: Some( [ 10.89.0.1, ], ), interfaces: Some( { "eth0": NetInterface { mac_address: "fe:41:3b:4d:b8:2d", subnets: Some( [ NetAddress { gateway: Some( 10.89.0.1, ), ipnet: 10.89.0.2/24, }, ], ), }, }, ), }, } [DEBUG netavark::commands::setup] Setup complete 
time="2026-02-14T11:44:05-05:00" level=debug msg="/proc/sys/crypto/fips_enabled does not contain '1', not adding FIPS mode bind mounts" time="2026-02-14T11:44:06-05:00" level=debug msg="Setting Cgroups for container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a to machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice:libpod:337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a" time="2026-02-14T11:44:06-05:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2026-02-14T11:44:06-05:00" level=debug msg="Workdir \"/\" resolved to host path \"/var/lib/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/rootfs/merge\"" time="2026-02-14T11:44:06-05:00" level=debug msg="Created OCI spec for container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a at /var/lib/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/userdata/config.json" time="2026-02-14T11:44:06-05:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice for parent machine.slice and name libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2" time="2026-02-14T11:44:06-05:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice" time="2026-02-14T11:44:06-05:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice" time="2026-02-14T11:44:06-05:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2026-02-14T11:44:06-05:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a -u 
337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/userdata -p /run/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/userdata/pidfile -n 0454ffcc8b08-infra --exit-dir /run/libpod/exits --persist-dir /run/libpod/persist/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --hooks-dir --exit-command-arg /usr/share/containers/oci/hooks.d --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --stopped-only --exit-command-arg 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a]" time="2026-02-14T11:44:06-05:00" level=info msg="Running conmon under slice 
machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice and unitName libpod-conmon-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a.scope" time="2026-02-14T11:44:06-05:00" level=debug msg="Received: 33219" time="2026-02-14T11:44:06-05:00" level=info msg="Got Conmon PID as 33217" time="2026-02-14T11:44:06-05:00" level=debug msg="Created container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a in OCI runtime" time="2026-02-14T11:44:06-05:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'" time="2026-02-14T11:44:06-05:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'" time="2026-02-14T11:44:06-05:00" level=debug msg="Starting container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a with command [/catatonit -P]" time="2026-02-14T11:44:06-05:00" level=debug msg="Started container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a" time="2026-02-14T11:44:06-05:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/VRWALXDJXGCPTG6I4RK6YKJW5Q,upperdir=/var/lib/containers/storage/overlay/aa7c321cb1b9cc03826ed0f9ed5929182186de6635ed9454bb236c5d02f2a31c/diff,workdir=/var/lib/containers/storage/overlay/aa7c321cb1b9cc03826ed0f9ed5929182186de6635ed9454bb236c5d02f2a31c/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c28,c372\"" time="2026-02-14T11:44:06-05:00" level=debug msg="Mounted container \"257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\" at \"/var/lib/containers/storage/overlay/aa7c321cb1b9cc03826ed0f9ed5929182186de6635ed9454bb236c5d02f2a31c/merged\"" time="2026-02-14T11:44:06-05:00" level=debug msg="Created root filesystem for container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 at /var/lib/containers/storage/overlay/aa7c321cb1b9cc03826ed0f9ed5929182186de6635ed9454bb236c5d02f2a31c/merged" 
time="2026-02-14T11:44:06-05:00" level=debug msg="/proc/sys/crypto/fips_enabled does not contain '1', not adding FIPS mode bind mounts" time="2026-02-14T11:44:06-05:00" level=debug msg="Setting Cgroups for container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 to machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice:libpod:257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2" time="2026-02-14T11:44:06-05:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2026-02-14T11:44:06-05:00" level=debug msg="Workdir \"/var/www\" resolved to a volume or mount" time="2026-02-14T11:44:06-05:00" level=debug msg="Created OCI spec for container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 at /var/lib/containers/storage/overlay-containers/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2/userdata/config.json" time="2026-02-14T11:44:06-05:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice for parent machine.slice and name libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2" time="2026-02-14T11:44:06-05:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice" time="2026-02-14T11:44:06-05:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice" time="2026-02-14T11:44:06-05:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2026-02-14T11:44:06-05:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 -u 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 -r /usr/bin/crun -b 
/var/lib/containers/storage/overlay-containers/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2/userdata -p /run/containers/storage/overlay-containers/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2/userdata/pidfile -n httpd2-httpd2 --exit-dir /run/libpod/exits --persist-dir /run/libpod/persist/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --hooks-dir --exit-command-arg /usr/share/containers/oci/hooks.d --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --stopped-only --exit-command-arg 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2]" time="2026-02-14T11:44:06-05:00" level=info msg="Running conmon under slice machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice and unitName 
libpod-conmon-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope" time="2026-02-14T11:44:06-05:00" level=debug msg="Received: 33224" time="2026-02-14T11:44:06-05:00" level=info msg="Got Conmon PID as 33222" time="2026-02-14T11:44:06-05:00" level=debug msg="Created container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 in OCI runtime" time="2026-02-14T11:44:06-05:00" level=debug msg="Starting container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 with command [/bin/busybox-extras httpd -f -p 80]" time="2026-02-14T11:44:06-05:00" level=debug msg="Started container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2" time="2026-02-14T11:44:06-05:00" level=debug msg="Called kube.PersistentPostRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2026-02-14T11:44:06-05:00" level=debug msg="Shutting down engines" time="2026-02-14T11:44:06-05:00" level=info msg="Received shutdown.Stop(), terminating!" PID=33160 Feb 14 11:44:06 managed-node2 python3.12[33153]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Feb 14 11:44:06 managed-node2 python3.12[33380]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:44:06 managed-node2 systemd[1]: Reload requested from client PID 33381 ('systemctl') (unit session-8.scope)... Feb 14 11:44:06 managed-node2 systemd[1]: Reloading... Feb 14 11:44:06 managed-node2 systemd-rc-local-generator[33433]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 14 11:44:06 managed-node2 systemd[1]: Reloading finished in 228 ms. 
Feb 14 11:44:07 managed-node2 python3.12[33601]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None Feb 14 11:44:07 managed-node2 systemd[1]: Reload requested from client PID 33604 ('systemctl') (unit session-8.scope)... Feb 14 11:44:07 managed-node2 systemd[1]: Reloading... Feb 14 11:44:07 managed-node2 systemd-rc-local-generator[33655]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 14 11:44:07 managed-node2 systemd[1]: Reloading finished in 220 ms. Feb 14 11:44:08 managed-node2 python3.12[33824]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:44:08 managed-node2 systemd[1]: Created slice system-podman\x2dkube.slice - Slice /system/podman-kube. ░░ Subject: A start job for unit system-podman\x2dkube.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit system-podman\x2dkube.slice has finished successfully. ░░ ░░ The job identifier is 2512. Feb 14 11:44:08 managed-node2 systemd[1]: Starting podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service - A template for running K8s workloads via podman-kube-play... ░░ Subject: A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun execution. ░░ ░░ The job identifier is 2434. 
Feb 14 11:44:08 managed-node2 podman[33828]: 2026-02-14 11:44:08.458903188 -0500 EST m=+0.025281117 pod stop 0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2 (image=, name=httpd2) Feb 14 11:44:15 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Feb 14 11:44:18 managed-node2 podman[33828]: time="2026-02-14T11:44:18-05:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd2-httpd2 in 10 seconds, resorting to SIGKILL" Feb 14 11:44:18 managed-node2 conmon[33222]: conmon 257f1d51b1075ac8c229 : container 33224 exited with status 137 Feb 14 11:44:18 managed-node2 systemd[1]: libpod-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope has successfully entered the 'dead' state. 
Feb 14 11:44:18 managed-node2 conmon[33222]: conmon 257f1d51b1075ac8c229 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice/libpod-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope/container/memory.events Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.481884669 -0500 EST m=+10.048262979 container died 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage) Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --hooks-dir /usr/share/containers/oci/hooks.d --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup --stopped-only 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2)" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=info msg="Setting parallel job count to 7" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Setting custom database backend: \"sqlite\"" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=info msg="Using sqlite as database backend" Feb 14 11:44:18 
managed-node2 systemd[1]: var-lib-containers-storage-overlay-aa7c321cb1b9cc03826ed0f9ed5929182186de6635ed9454bb236c5d02f2a31c-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-aa7c321cb1b9cc03826ed0f9ed5929182186de6635ed9454bb236c5d02f2a31c-merged.mount has successfully entered the 'dead' state. Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using graph driver overlay" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using graph root /var/lib/containers/storage" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using run root /run/containers/storage" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using tmp dir /run/libpod" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using transient store: false" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Cached value indicated that overlay is supported" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Cached value indicated that overlay is supported" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" 
level=debug msg="Cached value indicated that metacopy is being used" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Cached value indicated that native-diff is not being used" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Initializing event backend journald" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: 
time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.518945693 -0500 EST m=+10.085323581 container cleanup 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0) Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --hooks-dir /usr/share/containers/oci/hooks.d --runtime crun --storage-driver overlay --storage-opt 
overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup --stopped-only 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2)" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=debug msg="Shutting down engines" Feb 14 11:44:18 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time="2026-02-14T11:44:18-05:00" level=info msg="Received shutdown.Stop(), terminating!" PID=33840 Feb 14 11:44:18 managed-node2 systemd[1]: libpod-conmon-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-conmon-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope has successfully entered the 'dead' state. Feb 14 11:44:18 managed-node2 systemd[1]: libpod-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a.scope has successfully entered the 'dead' state. 
Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.529731368 -0500 EST m=+10.096109300 container stop 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a (image=, name=0454ffcc8b08-infra, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2) Feb 14 11:44:18 managed-node2 conmon[33217]: conmon 337f995e0e5b707c847d : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice/libpod-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a.scope/container/memory.events Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.531475473 -0500 EST m=+10.097853456 container died 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a (image=, name=0454ffcc8b08-infra) Feb 14 11:44:18 managed-node2 aardvark-dns[33213]: Received SIGHUP Feb 14 11:44:18 managed-node2 aardvark-dns[33213]: Successfully parsed config Feb 14 11:44:18 managed-node2 aardvark-dns[33213]: Listen v4 ip {} Feb 14 11:44:18 managed-node2 aardvark-dns[33213]: Listen v6 ip {} Feb 14 11:44:18 managed-node2 aardvark-dns[33213]: No configuration found stopping the sever Feb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered disabled state Feb 14 11:44:18 managed-node2 systemd[1]: run-p33207-i33208.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-p33207-i33208.scope has successfully entered the 'dead' state. 
Feb 14 11:44:18 managed-node2 kernel: veth0 (unregistering): left allmulticast mode Feb 14 11:44:18 managed-node2 kernel: veth0 (unregistering): left promiscuous mode Feb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered disabled state Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --hooks-dir /usr/share/containers/oci/hooks.d --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup --stopped-only 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a)" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=info msg="Setting parallel job count to 7" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Setting custom database backend: \"sqlite\"" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=info msg="Using sqlite as database backend" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using graph driver overlay" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using graph root /var/lib/containers/storage" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using run root /run/containers/storage" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: 
time="2026-02-14T11:44:18-05:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using tmp dir /run/libpod" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using transient store: false" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Cached value indicated that overlay is supported" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Cached value indicated that overlay is supported" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Cached value indicated that metacopy is being used" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Cached value indicated that native-diff is not being used" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Initializing event backend journald" Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.5734] device (podman1): state change: activated -> 
unmanaged (reason 'unmanaged', managed-type: 'removed') Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Configured OCI runtime ocijail initialization 
failed: no valid executable found for OCI runtime ocijail: invalid argument" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" Feb 14 11:44:18 managed-node2 systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service... ░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has begun execution. ░░ ░░ The job identifier is 2519. Feb 14 11:44:18 managed-node2 systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. ░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully. ░░ ░░ The job identifier is 2519. Feb 14 11:44:18 managed-node2 systemd[1]: run-netns-netns\x2d88026c48\x2d10f7\x2d33b6\x2d4b2c\x2dc0b5e250fcc3.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-netns-netns\x2d88026c48\x2d10f7\x2d33b6\x2d4b2c\x2dc0b5e250fcc3.mount has successfully entered the 'dead' state. Feb 14 11:44:18 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a-rootfs-merge.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a-rootfs-merge.mount has successfully entered the 'dead' state. 
Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.645299132 -0500 EST m=+10.211677129 container cleanup 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a (image=, name=0454ffcc8b08-infra, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2) Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --hooks-dir /usr/share/containers/oci/hooks.d --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup --stopped-only 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a)" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=debug msg="Shutting down engines" Feb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time="2026-02-14T11:44:18-05:00" level=info msg="Received shutdown.Stop(), terminating!" PID=33851 Feb 14 11:44:18 managed-node2 systemd[1]: libpod-conmon-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-conmon-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a.scope has successfully entered the 'dead' state. Feb 14 11:44:18 managed-node2 systemd[1]: Removed slice machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice - cgroup machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice. 
░░ Subject: A stop job for unit machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice has finished. ░░ ░░ The job identifier is 2598 and the job result is done. Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.70205527 -0500 EST m=+10.268433185 container remove 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.723359249 -0500 EST m=+10.289737186 container remove 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a (image=, name=0454ffcc8b08-infra, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2) Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.735329201 -0500 EST m=+10.301707105 pod remove 0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2 (image=, name=httpd2) Feb 14 11:44:18 managed-node2 podman[33828]: Pods stopped: Feb 14 11:44:18 managed-node2 podman[33828]: 0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2 Feb 14 11:44:18 managed-node2 podman[33828]: Pods removed: Feb 14 11:44:18 managed-node2 podman[33828]: 0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2 Feb 14 11:44:18 managed-node2 podman[33828]: Secrets removed: Feb 14 11:44:18 managed-node2 podman[33828]: Volumes removed: Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.735787226 -0500 EST m=+10.302165293 network create abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 
(name=podman-default-kube-network, type=bridge) Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.753265753 -0500 EST m=+10.319643757 container create c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Feb 14 11:44:18 managed-node2 systemd[1]: Created slice machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice - cgroup machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice. ░░ Subject: A start job for unit machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice has finished successfully. ░░ ░░ The job identifier is 2600. Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.789688005 -0500 EST m=+10.356065937 container create 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.793663888 -0500 EST m=+10.360041777 pod create dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3 (image=, name=httpd2) Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.795354104 -0500 EST m=+10.361732147 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.818008901 -0500 EST m=+10.384386884 container create 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 
(image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, io.containers.autoupdate=registry, app=test, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0) Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.818367488 -0500 EST m=+10.384745413 container restart c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Feb 14 11:44:18 managed-node2 systemd[1]: Started libpod-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3.scope - libcrun container. ░░ Subject: A start job for unit libpod-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3.scope has finished successfully. ░░ ░░ The job identifier is 2606. 
Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.872690582 -0500 EST m=+10.439068523 container init c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Feb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.876183545 -0500 EST m=+10.442561631 container start c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Feb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered blocking state Feb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered disabled state Feb 14 11:44:18 managed-node2 kernel: veth0: entered allmulticast mode Feb 14 11:44:18 managed-node2 kernel: veth0: entered promiscuous mode Feb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered blocking state Feb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered forwarding state Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.8947] manager: (podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/5) Feb 14 11:44:18 managed-node2 (udev-worker)[33859]: Network interface NamePolicy= disabled on kernel command line. Feb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered disabled state Feb 14 11:44:18 managed-node2 (udev-worker)[33860]: Network interface NamePolicy= disabled on kernel command line. 
Feb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered blocking state Feb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered forwarding state Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9052] device (podman1): carrier: link connected Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9058] device (veth0): carrier: link connected Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9060] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/6) Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9180] device (podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external') Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9187] device (podman1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external') Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9199] device (podman1): Activation: starting connection 'podman1' (ddcc90c9-9614-4d49-81ea-de66f003e113) Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9201] device (podman1): state change: disconnected -> prepare (reason 'none', managed-type: 'external') Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9204] device (podman1): state change: prepare -> config (reason 'none', managed-type: 'external') Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9207] device (podman1): state change: config -> ip-config (reason 'none', managed-type: 'external') Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9210] device (podman1): state change: ip-config -> ip-check (reason 'none', managed-type: 'external') Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9268] device (podman1): state change: ip-check -> secondaries (reason 'none', managed-type: 'external') Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9304] device (podman1): state change: 
secondaries -> activated (reason 'none', managed-type: 'external') Feb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9322] device (podman1): Activation: successful, device activated. Feb 14 11:44:18 managed-node2 systemd[1]: Started run-p33919-i33920.scope - [systemd-run] /usr/libexec/podman/aardvark-dns --config /run/containers/networks/aardvark-dns -p 53 run. ░░ Subject: A start job for unit run-p33919-i33920.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit run-p33919-i33920.scope has finished successfully. ░░ ░░ The job identifier is 2612. Feb 14 11:44:19 managed-node2 systemd[1]: Started libpod-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0.scope - libcrun container. ░░ Subject: A start job for unit libpod-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0.scope has finished successfully. ░░ ░░ The job identifier is 2618. 
Feb 14 11:44:19 managed-node2 podman[33828]: 2026-02-14 11:44:19.029725697 -0500 EST m=+10.596103734 container init 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Feb 14 11:44:19 managed-node2 podman[33828]: 2026-02-14 11:44:19.032671824 -0500 EST m=+10.599049788 container start 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service) Feb 14 11:44:19 managed-node2 systemd[1]: Started libpod-0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528.scope - libcrun container. ░░ Subject: A start job for unit libpod-0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528.scope has finished successfully. ░░ ░░ The job identifier is 2625. 
Feb 14 11:44:19 managed-node2 podman[33828]: 2026-02-14 11:44:19.071338207 -0500 EST m=+10.637716150 container init 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z) Feb 14 11:44:19 managed-node2 podman[33828]: 2026-02-14 11:44:19.073950536 -0500 EST m=+10.640328628 container start 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage) Feb 14 11:44:19 managed-node2 podman[33828]: 2026-02-14 11:44:19.078480939 -0500 EST m=+10.644858856 pod start dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3 (image=, name=httpd2) Feb 14 11:44:19 managed-node2 podman[33828]: Pod: Feb 14 11:44:19 managed-node2 podman[33828]: dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3 Feb 14 11:44:19 managed-node2 podman[33828]: Container: Feb 14 11:44:19 managed-node2 podman[33828]: 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 Feb 14 11:44:19 managed-node2 systemd[1]: Started podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service - A template for running K8s workloads via podman-kube-play. 
░░ Subject: A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished successfully. ░░ ░░ The job identifier is 2434. Feb 14 11:44:19 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a-userdata-shm.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a-userdata-shm.mount has successfully entered the 'dead' state. Feb 14 11:44:19 managed-node2 python3.12[34088]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:44:20 managed-node2 python3.12[34245]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:44:21 managed-node2 python3.12[34401]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:44:21 managed-node2 python3.12[34556]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd3-create state=directory owner=root group=root 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:44:22 managed-node2 podman[34733]: 2026-02-14 11:44:22.965161635 -0500 EST m=+0.328798382 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Feb 14 11:44:23 managed-node2 python3.12[34923]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:44:23 managed-node2 python3.12[35078]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:44:24 managed-node2 python3.12[35233]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:44:24 managed-node2 python3.12[35358]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd3.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1771087464.1235387-15297-279098182960707/.source.yml _original_basename=.o5ptkggs follow=False checksum=e4784a08bb43caa8f773f2aa113f2c5371f34613 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:44:25 
managed-node2 python3.12[35513]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.238227145 -0500 EST m=+0.013796205 network create abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 (name=podman-default-kube-network, type=bridge) Feb 14 11:44:25 managed-node2 systemd[1]: Created slice machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice - cgroup machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice. ░░ Subject: A start job for unit machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice has finished successfully. ░░ ░░ The job identifier is 2632. 
Feb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.274947218 -0500 EST m=+0.050516294 container create fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d) Feb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.279274874 -0500 EST m=+0.054843985 pod create cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d (image=, name=httpd3) Feb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.305656895 -0500 EST m=+0.081226077 container create 48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Feb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered blocking state Feb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered disabled state Feb 14 11:44:25 managed-node2 kernel: veth1: entered allmulticast mode Feb 14 11:44:25 managed-node2 kernel: veth1: entered promiscuous mode Feb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered blocking state Feb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered forwarding state Feb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered disabled state Feb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered blocking state Feb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered forwarding state Feb 14 11:44:25 managed-node2 NetworkManager[815]: [1771087465.3341] manager: (veth1): new Veth device (/org/freedesktop/NetworkManager/Devices/7) Feb 14 11:44:25 managed-node2 (udev-worker)[35531]: Network interface NamePolicy= disabled on kernel command line. 
Feb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.28240009 -0500 EST m=+0.057969381 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Feb 14 11:44:25 managed-node2 NetworkManager[815]: [1771087465.3385] device (veth1): carrier: link connected Feb 14 11:44:25 managed-node2 systemd[1]: Started libpod-conmon-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope. ░░ Subject: A start job for unit libpod-conmon-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-conmon-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope has finished successfully. ░░ ░░ The job identifier is 2638. Feb 14 11:44:25 managed-node2 systemd[1]: Started libpod-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope - libcrun container. ░░ Subject: A start job for unit libpod-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope has finished successfully. ░░ ░░ The job identifier is 2645. 
Feb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.422411406 -0500 EST m=+0.197980570 container init fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d) Feb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.426185128 -0500 EST m=+0.201754273 container start fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d) Feb 14 11:44:25 managed-node2 systemd[1]: Started libpod-conmon-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope. ░░ Subject: A start job for unit libpod-conmon-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-conmon-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope has finished successfully. ░░ ░░ The job identifier is 2652. Feb 14 11:44:25 managed-node2 systemd[1]: Started libpod-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope - libcrun container. ░░ Subject: A start job for unit libpod-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope has finished successfully. ░░ ░░ The job identifier is 2659. 
Feb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.481490168 -0500 EST m=+0.257059305 container init 48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Feb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.48418125 -0500 EST m=+0.259750473 container start 48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Feb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.488330304 -0500 EST m=+0.263899381 pod start cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d (image=, name=httpd3) Feb 14 11:44:26 managed-node2 python3.12[35723]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:44:26 managed-node2 systemd[1]: Reload requested from client PID 35724 ('systemctl') (unit session-8.scope)... Feb 14 11:44:26 managed-node2 systemd[1]: Reloading... Feb 14 11:44:26 managed-node2 systemd-rc-local-generator[35770]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 14 11:44:26 managed-node2 systemd[1]: Reloading finished in 234 ms. 
Feb 14 11:44:26 managed-node2 python3.12[35944]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None Feb 14 11:44:26 managed-node2 systemd[1]: Reload requested from client PID 35947 ('systemctl') (unit session-8.scope)... Feb 14 11:44:26 managed-node2 systemd[1]: Reloading... Feb 14 11:44:27 managed-node2 systemd-rc-local-generator[36001]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 14 11:44:27 managed-node2 systemd[1]: Reloading finished in 236 ms. Feb 14 11:44:27 managed-node2 python3.12[36168]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:44:27 managed-node2 systemd[1]: Starting podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service - A template for running K8s workloads via podman-kube-play... ░░ Subject: A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun execution. ░░ ░░ The job identifier is 2666. Feb 14 11:44:27 managed-node2 podman[36172]: 2026-02-14 11:44:27.75184531 -0500 EST m=+0.020904811 pod stop cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d (image=, name=httpd3) Feb 14 11:44:28 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. 
Feb 14 11:44:37 managed-node2 podman[36172]: time="2026-02-14T11:44:37-05:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd3-httpd3 in 10 seconds, resorting to SIGKILL" Feb 14 11:44:37 managed-node2 systemd[1]: libpod-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope has successfully entered the 'dead' state. Feb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.775367984 -0500 EST m=+10.044427533 container died 48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry) Feb 14 11:44:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay-547664d6d5241636666f9721df93db52fd32d8116fb1e7171b34d6f07db6d627-merged.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay-547664d6d5241636666f9721df93db52fd32d8116fb1e7171b34d6f07db6d627-merged.mount has successfully entered the 'dead' state. 
Feb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.808010446 -0500 EST m=+10.077069923 container cleanup 48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0) Feb 14 11:44:37 managed-node2 systemd[1]: libpod-conmon-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-conmon-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope has successfully entered the 'dead' state. Feb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.816250478 -0500 EST m=+10.085310205 container stop fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d) Feb 14 11:44:37 managed-node2 systemd[1]: libpod-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope has successfully entered the 'dead' state. 
Feb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.842391823 -0500 EST m=+10.111451498 container died fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra) Feb 14 11:44:37 managed-node2 kernel: podman1: port 2(veth1) entered disabled state Feb 14 11:44:37 managed-node2 kernel: veth1 (unregistering): left allmulticast mode Feb 14 11:44:37 managed-node2 kernel: veth1 (unregistering): left promiscuous mode Feb 14 11:44:37 managed-node2 kernel: podman1: port 2(veth1) entered disabled state Feb 14 11:44:37 managed-node2 systemd[1]: run-netns-netns\x2df18140bf\x2d08ea\x2d8813\x2d237b\x2dc0aab5f301f6.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-netns-netns\x2df18140bf\x2d08ea\x2d8813\x2d237b\x2dc0aab5f301f6.mount has successfully entered the 'dead' state. Feb 14 11:44:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028-rootfs-merge.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028-rootfs-merge.mount has successfully entered the 'dead' state. Feb 14 11:44:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028-userdata-shm.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028-userdata-shm.mount has successfully entered the 'dead' state. 
Feb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.90018477 -0500 EST m=+10.169244253 container cleanup fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d) Feb 14 11:44:37 managed-node2 systemd[1]: libpod-conmon-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-conmon-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope has successfully entered the 'dead' state. Feb 14 11:44:37 managed-node2 systemd[1]: Removed slice machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice - cgroup machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice. ░░ Subject: A stop job for unit machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice has finished. ░░ ░░ The job identifier is 2751 and the job result is done. 
Feb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.909189817 -0500 EST m=+10.178249293 pod stop cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d (image=, name=httpd3) Feb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.914377069 -0500 EST m=+10.183436549 pod stop cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d (image=, name=httpd3) Feb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.93614438 -0500 EST m=+10.205203953 container remove 48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage) Feb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.956877655 -0500 EST m=+10.225937174 container remove fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d) Feb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.963972623 -0500 EST m=+10.233032101 pod remove cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d (image=, name=httpd3) Feb 14 11:44:37 managed-node2 podman[36172]: Pods stopped: Feb 14 11:44:37 managed-node2 podman[36172]: cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d Feb 14 11:44:37 managed-node2 podman[36172]: Pods removed: Feb 14 11:44:37 managed-node2 podman[36172]: cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d Feb 14 11:44:37 managed-node2 podman[36172]: Secrets removed: Feb 14 11:44:37 managed-node2 podman[36172]: Volumes removed: Feb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.96437851 -0500 EST m=+10.233438132 network create abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 
(name=podman-default-kube-network, type=bridge) Feb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.981973764 -0500 EST m=+10.251033256 container create e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:44:37 managed-node2 systemd[1]: Created slice machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice - cgroup machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice. ░░ Subject: A start job for unit machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice has finished successfully. ░░ ░░ The job identifier is 2755. Feb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.011860762 -0500 EST m=+10.280920334 container create b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac (image=, name=891e6927a022-infra, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.015980365 -0500 EST m=+10.285039839 pod create 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5 (image=, name=httpd3) Feb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.04217597 -0500 EST m=+10.311235469 container create 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, io.containers.autoupdate=registry, 
PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, app=test) Feb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.042466927 -0500 EST m=+10.311526431 container restart e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:44:38 managed-node2 systemd[1]: Started libpod-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b.scope - libcrun container. ░░ Subject: A start job for unit libpod-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b.scope has finished successfully. ░░ ░░ The job identifier is 2761. 
Feb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.081629628 -0500 EST m=+10.350689249 container init e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.084933774 -0500 EST m=+10.353993248 container start e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered blocking state Feb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered disabled state Feb 14 11:44:38 managed-node2 kernel: veth1: entered allmulticast mode Feb 14 11:44:38 managed-node2 kernel: veth1: entered promiscuous mode Feb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered blocking state Feb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered forwarding state Feb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.018569499 -0500 EST m=+10.287629154 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Feb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered disabled state Feb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered blocking state Feb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered forwarding state Feb 14 11:44:38 managed-node2 (udev-worker)[36203]: Network interface NamePolicy= disabled on kernel command line. 
Feb 14 11:44:38 managed-node2 NetworkManager[815]: [1771087478.1092] manager: (veth1): new Veth device (/org/freedesktop/NetworkManager/Devices/8) Feb 14 11:44:38 managed-node2 NetworkManager[815]: [1771087478.1101] device (veth1): carrier: link connected Feb 14 11:44:38 managed-node2 systemd[1]: Started libpod-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac.scope - libcrun container. ░░ Subject: A start job for unit libpod-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac.scope has finished successfully. ░░ ░░ The job identifier is 2767. Feb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.178418528 -0500 EST m=+10.447478108 container init b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac (image=, name=891e6927a022-infra, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.180727983 -0500 EST m=+10.449787532 container start b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac (image=, name=891e6927a022-infra, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:44:38 managed-node2 systemd[1]: Started libpod-781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc.scope - libcrun container. 
░░ Subject: A start job for unit libpod-781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit libpod-781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc.scope has finished successfully. ░░ ░░ The job identifier is 2774. Feb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.213492942 -0500 EST m=+10.482552472 container init 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test) Feb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.215831642 -0500 EST m=+10.484891319 container start 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage) Feb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.220001057 -0500 EST m=+10.489060638 pod start 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5 (image=, name=httpd3) Feb 14 11:44:38 managed-node2 podman[36172]: Pod: Feb 14 11:44:38 managed-node2 podman[36172]: 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5 Feb 14 11:44:38 managed-node2 podman[36172]: Container: Feb 14 11:44:38 managed-node2 podman[36172]: 
781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc Feb 14 11:44:38 managed-node2 systemd[1]: Started podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service - A template for running K8s workloads via podman-kube-play. ░░ Subject: A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished successfully. ░░ ░░ The job identifier is 2666. Feb 14 11:44:38 managed-node2 sudo[36456]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwfzisldzdwyyefuftqqnwcjguwtzerm ; /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087478.5294943-15719-277869054228043/AnsiballZ_command.py' Feb 14 11:44:38 managed-node2 sudo[36456]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Feb 14 11:44:38 managed-node2 python3.12[36460]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:44:38 managed-node2 systemd[29195]: Started podman-36468.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 127. 
Feb 14 11:44:38 managed-node2 sudo[36456]: pam_unix(sudo:session): session closed for user podman_basic_user Feb 14 11:44:39 managed-node2 python3.12[36630]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:44:39 managed-node2 python3.12[36793]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:44:40 managed-node2 sudo[37006]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsnubttuzlaozlpnbdfvbxosrbpdzdqj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087479.9051843-15774-102000498703581/AnsiballZ_command.py' Feb 14 11:44:40 managed-node2 sudo[37006]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Feb 14 11:44:40 managed-node2 python3.12[37009]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --user list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd1[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:44:40 managed-node2 sudo[37006]: pam_unix(sudo:session): session closed for user podman_basic_user Feb 14 11:44:40 managed-node2 python3.12[37167]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True 
stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:44:41 managed-node2 python3.12[37325]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:44:41 managed-node2 python3.12[37483]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:44:42 managed-node2 python3.12[37641]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:44:42 managed-node2 
python3.12[37797]: ansible-ansible.legacy.command Invoked with _raw_params=ls -alrtF /tmp/lsr_no9h_drm_podman/httpd1-create _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:44:42 managed-node2 python3.12[37953]: ansible-ansible.legacy.command Invoked with _raw_params=ls -alrtF /tmp/lsr_no9h_drm_podman/httpd2-create _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:44:43 managed-node2 python3.12[38109]: ansible-ansible.legacy.command Invoked with _raw_params=ls -alrtF /tmp/lsr_no9h_drm_podman/httpd3-create _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:44:45 managed-node2 python3.12[38420]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:44:46 managed-node2 python3.12[38581]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:44:48 managed-node2 python3.12[38738]: ansible-ansible.legacy.dnf Invoked with name=['firewalld'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None 
nobest=None releasever=None Feb 14 11:44:49 managed-node2 python3.12[38894]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None Feb 14 11:44:49 managed-node2 python3.12[39051]: ansible-ansible.legacy.systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Feb 14 11:44:50 managed-node2 python3.12[39208]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] ipset_options={} protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Feb 14 11:44:51 managed-node2 python3.12[39363]: ansible-ansible.legacy.dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 14 11:44:52 managed-node2 python3.12[39519]: ansible-ansible.legacy.dnf Invoked with name=['grubby'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False 
enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 14 11:44:53 managed-node2 python3.12[39675]: ansible-ansible.legacy.dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Feb 14 11:44:54 managed-node2 python3.12[39831]: ansible-setup Invoked with filter=['ansible_selinux'] gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Feb 14 11:44:56 managed-node2 python3.12[40028]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Feb 14 11:44:56 managed-node2 python3.12[40183]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Feb 14 11:44:59 managed-node2 python3.12[40338]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Feb 14 11:45:00 managed-node2 python3.12[40494]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:45:00 managed-node2 python3.12[40651]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids podman_basic_user _uses_shell=False 
expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:45:01 managed-node2 python3.12[40807]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:45:02 managed-node2 python3.12[40963]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:45:02 managed-node2 python3.12[41119]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:45:03 managed-node2 python3.12[41274]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:45:03 managed-node2 python3.12[41429]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:45:04 managed-node2 sudo[41634]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmkqlusgchkybllnzalektonsmlqmsrd ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087503.961329-16726-18565808136635/AnsiballZ_podman_image.py' Feb 14 11:45:04 managed-node2 sudo[41634]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Feb 14 11:45:04 managed-node2 systemd[29195]: Started podman-41638.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 131. Feb 14 11:45:04 managed-node2 systemd[29195]: Started podman-41645.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 135. Feb 14 11:45:04 managed-node2 systemd[29195]: Started podman-41652.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 139. Feb 14 11:45:04 managed-node2 systemd[29195]: Started podman-41659.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 143. Feb 14 11:45:04 managed-node2 systemd[29195]: Started podman-41667.scope. 
░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 147. Feb 14 11:45:05 managed-node2 systemd[29195]: Started podman-41675.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 151. Feb 14 11:45:05 managed-node2 systemd[29195]: Started podman-41682.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 155. Feb 14 11:45:05 managed-node2 systemd[29195]: Started podman-41689.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 159. 
Feb 14 11:45:05 managed-node2 sudo[41634]: pam_unix(sudo:session): session closed for user podman_basic_user Feb 14 11:45:05 managed-node2 python3.12[41850]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:45:05 managed-node2 python3.12[42007]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:45:06 managed-node2 python3.12[42162]: ansible-ansible.legacy.stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:45:06 managed-node2 python3.12[42240]: ansible-ansible.legacy.file Invoked with owner=podman_basic_user group=3001 mode=0644 dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _original_basename=.n4pqsi5o recurse=False state=file path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:45:06 managed-node2 sudo[42445]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnxwhijhmtcundqyezuqfudfknggmlfl ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 
/var/tmp/ansible-tmp-1771087506.773379-16793-188090990846597/AnsiballZ_podman_play.py' Feb 14 11:45:06 managed-node2 sudo[42445]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Feb 14 11:45:07 managed-node2 python3.12[42448]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 14 11:45:07 managed-node2 systemd[29195]: Started podman-42456.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 163. Feb 14 11:45:07 managed-node2 systemd[29195]: Created slice user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice - cgroup user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 167. 
Feb 14 11:45:07 managed-node2 python3.12[42448]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Feb 14 11:45:07 managed-node2 python3.12[42448]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Feb 14 11:45:07 managed-node2 python3.12[42448]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2026-02-14T11:45:07-05:00" level=info msg="/bin/podman filtering at log level debug" time="2026-02-14T11:45:07-05:00" level=debug msg="Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2026-02-14T11:45:07-05:00" level=info msg="Setting parallel job count to 7" time="2026-02-14T11:45:07-05:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2026-02-14T11:45:07-05:00" level=info msg="Using sqlite as database backend" time="2026-02-14T11:45:07-05:00" level=debug msg="systemd-logind: Unknown object '/'." 
time="2026-02-14T11:45:07-05:00" level=debug msg="Using graph driver overlay" time="2026-02-14T11:45:07-05:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" time="2026-02-14T11:45:07-05:00" level=debug msg="Using run root /run/user/3001/containers" time="2026-02-14T11:45:07-05:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" time="2026-02-14T11:45:07-05:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" time="2026-02-14T11:45:07-05:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" time="2026-02-14T11:45:07-05:00" level=debug msg="Using transient store: false" time="2026-02-14T11:45:07-05:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2026-02-14T11:45:07-05:00" level=debug msg="Cached value indicated that overlay is supported" time="2026-02-14T11:45:07-05:00" level=debug msg="Cached value indicated that overlay is supported" time="2026-02-14T11:45:07-05:00" level=debug msg="Cached value indicated that metacopy is not being used" time="2026-02-14T11:45:07-05:00" level=debug msg="Cached value indicated that native-diff is usable" time="2026-02-14T11:45:07-05:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" time="2026-02-14T11:45:07-05:00" level=debug msg="Initializing event backend file" time="2026-02-14T11:45:07-05:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2026-02-14T11:45:07-05:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" time="2026-02-14T11:45:07-05:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2026-02-14T11:45:07-05:00" 
level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2026-02-14T11:45:07-05:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2026-02-14T11:45:07-05:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2026-02-14T11:45:07-05:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2026-02-14T11:45:07-05:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2026-02-14T11:45:07-05:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" time="2026-02-14T11:45:07-05:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" time="2026-02-14T11:45:07-05:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network b45c64e00bfe2c0071d8275383afdd1283c4d60a5ceed7d974f55458a724e831 bridge podman1 2026-02-14 11:43:46.509060406 -0500 EST [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2026-02-14T11:45:07-05:00" level=debug msg="Successfully loaded 2 networks" time="2026-02-14T11:45:07-05:00" level=debug msg="Pod using bridge network mode" time="2026-02-14T11:45:07-05:00" level=debug msg="Created cgroup path user.slice/user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice for parent user.slice and name libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607" time="2026-02-14T11:45:07-05:00" level=debug msg="Created cgroup 
user.slice/user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice" time="2026-02-14T11:45:07-05:00" level=debug msg="Got pod cgroup as user.slice/user-3001.slice/user@3001.service/user.slice/user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice" Error: adding pod to state: name "httpd1" is in use: pod already exists time="2026-02-14T11:45:07-05:00" level=debug msg="Shutting down engines" time="2026-02-14T11:45:07-05:00" level=info msg="Received shutdown.Stop(), terminating!" PID=42456 Feb 14 11:45:07 managed-node2 python3.12[42448]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Feb 14 11:45:07 managed-node2 sudo[42445]: pam_unix(sudo:session): session closed for user podman_basic_user Feb 14 11:45:08 managed-node2 python3.12[42617]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Feb 14 11:45:08 managed-node2 python3.12[42773]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:45:09 managed-node2 python3.12[42930]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:45:09 managed-node2 python3.12[43086]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:45:10 managed-node2 python3.12[43241]: ansible-file Invoked with 
path=/tmp/lsr_no9h_drm_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:45:11 managed-node2 podman[43419]: 2026-02-14 11:45:11.367298586 -0500 EST m=+0.482134491 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Feb 14 11:45:11 managed-node2 python3.12[43609]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:45:12 managed-node2 python3.12[43766]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:45:12 managed-node2 python3.12[43921]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:45:12 managed-node2 python3.12[43999]: ansible-ansible.legacy.file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd2.yml _original_basename=.lv3t88g0 recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd2.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None 
setype=None attributes=None Feb 14 11:45:13 managed-node2 python3.12[44154]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 14 11:45:13 managed-node2 podman[44161]: 2026-02-14 11:45:13.466695843 -0500 EST m=+0.013716044 network create abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 (name=podman-default-kube-network, type=bridge) Feb 14 11:45:13 managed-node2 systemd[1]: Created slice machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice - cgroup machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice. ░░ Subject: A start job for unit machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice has finished successfully. ░░ ░░ The job identifier is 2781. 
Feb 14 11:45:13 managed-node2 python3.12[44154]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Feb 14 11:45:13 managed-node2 python3.12[44154]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Feb 14 11:45:13 managed-node2 python3.12[44154]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2026-02-14T11:45:13-05:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2026-02-14T11:45:13-05:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2026-02-14T11:45:13-05:00" level=info msg="Setting parallel job count to 7" time="2026-02-14T11:45:13-05:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2026-02-14T11:45:13-05:00" level=info msg="Using sqlite as database backend" time="2026-02-14T11:45:13-05:00" level=debug msg="Using graph driver overlay" time="2026-02-14T11:45:13-05:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2026-02-14T11:45:13-05:00" level=debug msg="Using run root /run/containers/storage" time="2026-02-14T11:45:13-05:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2026-02-14T11:45:13-05:00" level=debug msg="Using tmp dir /run/libpod" time="2026-02-14T11:45:13-05:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2026-02-14T11:45:13-05:00" level=debug msg="Using transient store: false" time="2026-02-14T11:45:13-05:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2026-02-14T11:45:13-05:00" level=debug msg="Cached value indicated that overlay is supported" time="2026-02-14T11:45:13-05:00" level=debug msg="Cached value indicated that overlay is supported" time="2026-02-14T11:45:13-05:00" level=debug msg="Cached value indicated that metacopy is being 
used" time="2026-02-14T11:45:13-05:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2026-02-14T11:45:13-05:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2026-02-14T11:45:13-05:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2026-02-14T11:45:13-05:00" level=debug msg="Initializing event backend journald" time="2026-02-14T11:45:13-05:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2026-02-14T11:45:13-05:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2026-02-14T11:45:13-05:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2026-02-14T11:45:13-05:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2026-02-14T11:45:13-05:00" level=debug msg="Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument" time="2026-02-14T11:45:13-05:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2026-02-14T11:45:13-05:00" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument" time="2026-02-14T11:45:13-05:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2026-02-14T11:45:13-05:00" level=debug msg="Configured OCI runtime krun initialization failed: no 
valid executable found for OCI runtime krun: invalid argument" time="2026-02-14T11:45:13-05:00" level=debug msg="Using OCI runtime \"/usr/bin/crun\"" time="2026-02-14T11:45:13-05:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 bridge podman1 2026-02-14 11:42:18.681480354 -0500 EST [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2026-02-14T11:45:13-05:00" level=debug msg="Successfully loaded 2 networks" time="2026-02-14T11:45:13-05:00" level=debug msg="Pod using bridge network mode" time="2026-02-14T11:45:13-05:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice for parent machine.slice and name libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066" time="2026-02-14T11:45:13-05:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice" time="2026-02-14T11:45:13-05:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice" Error: adding pod to state: name "httpd2" is in use: pod already exists time="2026-02-14T11:45:13-05:00" level=debug msg="Shutting down engines" Feb 14 11:45:13 managed-node2 python3.12[44154]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Feb 14 11:45:14 managed-node2 python3.12[44322]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:45:15 managed-node2 python3.12[44479]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True 
stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:45:15 managed-node2 python3.12[44636]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:45:16 managed-node2 python3.12[44791]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:45:17 managed-node2 podman[44968]: 2026-02-14 11:45:17.151065988 -0500 EST m=+0.289585155 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Feb 14 11:45:17 managed-node2 python3.12[45159]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:45:18 managed-node2 python3.12[45316]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:45:18 managed-node2 python3.12[45471]: ansible-ansible.legacy.stat Invoked with 
path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:45:18 managed-node2 python3.12[45549]: ansible-ansible.legacy.file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd3.yml _original_basename=.vboa9hyz recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd3.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:45:19 managed-node2 python3.12[45704]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 14 11:45:19 managed-node2 podman[45712]: 2026-02-14 11:45:19.30811819 -0500 EST m=+0.014177431 network create abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 (name=podman-default-kube-network, type=bridge) Feb 14 11:45:19 managed-node2 systemd[1]: Created slice machine-libpod_pod_b0f4498c0b1a6a00087d92e451e47188438fd77ff1718caf8b2d35d83725f547.slice - cgroup machine-libpod_pod_b0f4498c0b1a6a00087d92e451e47188438fd77ff1718caf8b2d35d83725f547.slice. 
░░ Subject: A start job for unit machine-libpod_pod_b0f4498c0b1a6a00087d92e451e47188438fd77ff1718caf8b2d35d83725f547.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit machine-libpod_pod_b0f4498c0b1a6a00087d92e451e47188438fd77ff1718caf8b2d35d83725f547.slice has finished successfully. ░░ ░░ The job identifier is 2787. Feb 14 11:45:20 managed-node2 sudo[45924]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grsuheqpdodqrzzzbcamsddzhcuitzot ; /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087519.8040786-17251-92408683234572/AnsiballZ_command.py' Feb 14 11:45:20 managed-node2 sudo[45924]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Feb 14 11:45:20 managed-node2 python3.12[45927]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:45:20 managed-node2 systemd[29195]: Started podman-45934.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 171. 
Feb 14 11:45:20 managed-node2 sudo[45924]: pam_unix(sudo:session): session closed for user podman_basic_user
Feb 14 11:45:20 managed-node2 python3.12[46099]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:45:21 managed-node2 python3.12[46262]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:45:21 managed-node2 sudo[46475]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvrrappufczsldymgagfxtigzpflzerg ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087521.1786187-17311-51783948001152/AnsiballZ_command.py'
Feb 14 11:45:21 managed-node2 sudo[46475]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)
Feb 14 11:45:21 managed-node2 python3.12[46478]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --user list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd1[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:45:21 managed-node2 sudo[46475]: pam_unix(sudo:session): session closed for user podman_basic_user
Feb 14 11:45:21 managed-node2 python3.12[46636]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:45:22 managed-node2 python3.12[46794]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:45:22 managed-node2 python3.12[46952]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:45:23 managed-node2 python3.12[47108]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:45:23 managed-node2 python3.12[47264]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15003/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:45:25 managed-node2 python3.12[47575]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:45:26 managed-node2 python3.12[47736]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:45:28 managed-node2 python3.12[47893]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None
Feb 14 11:45:29 managed-node2 python3.12[48050]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:45:29 managed-node2 python3.12[48207]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:45:30 managed-node2 python3.12[48363]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:45:30 managed-node2 python3.12[48519]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:45:31 managed-node2 python3.12[48675]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:45:31 managed-node2 sudo[48882]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuoolknzrliqkfqtcydhnfthpyqbeewx ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087531.3850875-17892-266440434332148/AnsiballZ_systemd.py'
Feb 14 11:45:31 managed-node2 sudo[48882]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)
Feb 14 11:45:31 managed-node2 python3.12[48885]: ansible-systemd Invoked with name=podman-kube@-home-podman_basic_user-.config-containers-ansible\x2dkubernetes.d-httpd1.yml.service scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None
Feb 14 11:45:31 managed-node2 systemd[29195]: Reload requested from client PID 48888 ('systemctl')...
Feb 14 11:45:31 managed-node2 systemd[29195]: Reloading...
Feb 14 11:45:31 managed-node2 systemd[29195]: Reloading finished in 65 ms.
Feb 14 11:45:31 managed-node2 systemd[29195]: Stopping podman-kube@-home-podman_basic_user-.config-containers-ansible\x2dkubernetes.d-httpd1.yml.service - A template for running K8s workloads via podman-kube-play...
░░ Subject: A stop job for unit UNIT has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit UNIT has begun execution.
░░
░░ The job identifier is 175.
Feb 14 11:45:42 managed-node2 podman[48899]: time="2026-02-14T11:45:42-05:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd1-httpd1 in 10 seconds, resorting to SIGKILL"
Feb 14 11:45:42 managed-node2 conmon[31412]: conmon 6da07bcbaedcde56d4e1 : Failed to open cgroups file: /sys/fs/cgroup/user.slice/user-3001.slice/user@3001.service/user.slice/user-libpod_pod_39ddb6583f9470ab70075d904c51c89733c2027919e08f61d9b513b5c72bfcf1.slice/libpod-6da07bcbaedcde56d4e1f6d376605d0690fe64cd6f8d4ca51d42712f887d41cc.scope/container/memory.events
Feb 14 11:45:42 managed-node2 kernel: podman1: port 1(veth0) entered disabled state
Feb 14 11:45:42 managed-node2 kernel: veth0 (unregistering): left allmulticast mode
Feb 14 11:45:42 managed-node2 kernel: veth0 (unregistering): left promiscuous mode
Feb 14 11:45:42 managed-node2 kernel: podman1: port 1(veth0) entered disabled state
Feb 14 11:45:42 managed-node2 systemd[29195]: Removed slice user-libpod_pod_39ddb6583f9470ab70075d904c51c89733c2027919e08f61d9b513b5c72bfcf1.slice - cgroup user-libpod_pod_39ddb6583f9470ab70075d904c51c89733c2027919e08f61d9b513b5c72bfcf1.slice.
░░ Subject: A stop job for unit UNIT has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit UNIT has finished.
░░
░░ The job identifier is 176 and the job result is done.
Feb 14 11:45:42 managed-node2 conmon[31368]: conmon 5b8a8d5d0430c64be456 : Failed to open cgroups file: /sys/fs/cgroup/user.slice/user-3001.slice/user@3001.service/user.slice/libpod-5b8a8d5d0430c64be4560d51ae0a7e88111b69a3710627acb43738bf92a81739.scope/container/memory.events
Feb 14 11:45:42 managed-node2 podman[48899]: Pods stopped:
Feb 14 11:45:42 managed-node2 podman[48899]: 39ddb6583f9470ab70075d904c51c89733c2027919e08f61d9b513b5c72bfcf1
Feb 14 11:45:42 managed-node2 podman[48899]: Pods removed:
Feb 14 11:45:42 managed-node2 podman[48899]: 39ddb6583f9470ab70075d904c51c89733c2027919e08f61d9b513b5c72bfcf1
Feb 14 11:45:42 managed-node2 podman[48899]: Secrets removed:
Feb 14 11:45:42 managed-node2 podman[48899]: Volumes removed:
Feb 14 11:45:42 managed-node2 systemd[29195]: Stopped podman-kube@-home-podman_basic_user-.config-containers-ansible\x2dkubernetes.d-httpd1.yml.service - A template for running K8s workloads via podman-kube-play.
░░ Subject: A stop job for unit UNIT has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit UNIT has finished.
░░
░░ The job identifier is 175 and the job result is done.
Feb 14 11:45:42 managed-node2 systemd[29195]: podman-kube@-home-podman_basic_user-.config-containers-ansible\x2dkubernetes.d-httpd1.yml.service: Consumed 621ms CPU time, 64.1M memory peak.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit UNIT completed and consumed the indicated resources.
Feb 14 11:45:42 managed-node2 sudo[48882]: pam_unix(sudo:session): session closed for user podman_basic_user
Feb 14 11:45:42 managed-node2 python3.12[49104]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:45:43 managed-node2 sudo[49312]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zolhlvibkwbjlkciijuxwwzhkfjcpmeg ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087542.9204967-18233-260601492252058/AnsiballZ_podman_play.py'
Feb 14 11:45:43 managed-node2 sudo[49312]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)
Feb 14 11:45:43 managed-node2 python3.12[49315]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 14 11:45:43 managed-node2 python3.12[49315]: ansible-containers.podman.podman_play version: 5.6.0, kube file /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
Feb 14 11:45:43 managed-node2 systemd[29195]: Started podman-49322.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 180.
Feb 14 11:45:43 managed-node2 python3.12[49315]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman kube play --down /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
Feb 14 11:45:43 managed-node2 python3.12[49315]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed:
Feb 14 11:45:43 managed-node2 python3.12[49315]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr:
Feb 14 11:45:43 managed-node2 python3.12[49315]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0
Feb 14 11:45:43 managed-node2 sudo[49312]: pam_unix(sudo:session): session closed for user podman_basic_user
Feb 14 11:45:43 managed-node2 python3.12[49484]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:45:44 managed-node2 python3.12[49639]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None
Feb 14 11:45:45 managed-node2 python3.12[49795]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:45:46 managed-node2 python3.12[49952]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:45:46 managed-node2 python3.12[50108]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None
Feb 14 11:45:46 managed-node2 systemd[1]: Reload requested from client PID 50111 ('systemctl') (unit session-8.scope)...
Feb 14 11:45:46 managed-node2 systemd[1]: Reloading...
Feb 14 11:45:46 managed-node2 systemd-rc-local-generator[50150]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 14 11:45:46 managed-node2 systemd[1]: Reloading finished in 246 ms.
Feb 14 11:45:47 managed-node2 systemd[1]: Stopping podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service - A template for running K8s workloads via podman-kube-play...
░░ Subject: A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun execution.
░░
░░ The job identifier is 2794.
Feb 14 11:45:47 managed-node2 podman[50178]: 2026-02-14 11:45:47.057544426 -0500 EST m=+0.022922833 pod stop dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3 (image=, name=httpd2)
Feb 14 11:45:57 managed-node2 podman[50178]: time="2026-02-14T11:45:57-05:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd2-httpd2 in 10 seconds, resorting to SIGKILL"
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.075883411 -0500 EST m=+10.041261947 container stop 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z)
Feb 14 11:45:57 managed-node2 systemd[1]: libpod-0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit libpod-0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528.scope has successfully entered the 'dead' state.
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.104222413 -0500 EST m=+10.069600901 container died 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0)
Feb 14 11:45:57 managed-node2 systemd[1]: var-lib-containers-storage-overlay-7d74a1e1f62f6341c88dc81407340e28d99c68a78b273dc960b703e2edb884e7-merged.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay-7d74a1e1f62f6341c88dc81407340e28d99c68a78b273dc960b703e2edb884e7-merged.mount has successfully entered the 'dead' state.
Feb 14 11:45:57 managed-node2 systemd[5488]: Created slice background.slice - User Background Tasks Slice.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 20.
Feb 14 11:45:57 managed-node2 systemd[5488]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
░░ Subject: A start job for unit UNIT has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has begun execution.
░░
░░ The job identifier is 19.
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.136412765 -0500 EST m=+10.101791145 container cleanup 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0)
Feb 14 11:45:57 managed-node2 systemd[5488]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 19.
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.148326182 -0500 EST m=+10.113704699 container stop 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service)
Feb 14 11:45:57 managed-node2 systemd[1]: libpod-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit libpod-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0.scope has successfully entered the 'dead' state.
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.174172879 -0500 EST m=+10.139551469 container died 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service)
Feb 14 11:45:57 managed-node2 kernel: podman1: port 1(veth0) entered disabled state
Feb 14 11:45:57 managed-node2 kernel: veth0 (unregistering): left allmulticast mode
Feb 14 11:45:57 managed-node2 kernel: veth0 (unregistering): left promiscuous mode
Feb 14 11:45:57 managed-node2 kernel: podman1: port 1(veth0) entered disabled state
Feb 14 11:45:57 managed-node2 systemd[1]: run-netns-netns\x2de4ab7e41\x2da9bf\x2d3ef1\x2d2d12\x2ddbe2c8a39368.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit run-netns-netns\x2de4ab7e41\x2da9bf\x2d3ef1\x2d2d12\x2ddbe2c8a39368.mount has successfully entered the 'dead' state.
Feb 14 11:45:57 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0-rootfs-merge.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay\x2dcontainers-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0-rootfs-merge.mount has successfully entered the 'dead' state.
Feb 14 11:45:57 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0-userdata-shm.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay\x2dcontainers-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0-userdata-shm.mount has successfully entered the 'dead' state.
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.23444651 -0500 EST m=+10.199825019 container cleanup 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service)
Feb 14 11:45:57 managed-node2 systemd[1]: Removed slice machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice - cgroup machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice.
░░ Subject: A stop job for unit machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice has finished.
░░
░░ The job identifier is 2795 and the job result is done.
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.242083364 -0500 EST m=+10.207461866 pod stop dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3 (image=, name=httpd2)
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.264794903 -0500 EST m=+10.230173312 container remove 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service, app=test)
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.285758663 -0500 EST m=+10.251137072 container remove 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service)
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.293078382 -0500 EST m=+10.258456765 pod remove dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3 (image=, name=httpd2)
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.29594165 -0500 EST m=+10.261320164 container kill c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service)
Feb 14 11:45:57 managed-node2 systemd[1]: libpod-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit libpod-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3.scope has successfully entered the 'dead' state.
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.30088823 -0500 EST m=+10.266266621 container died c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service)
Feb 14 11:45:57 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3-rootfs-merge.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay\x2dcontainers-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3-rootfs-merge.mount has successfully entered the 'dead' state.
Feb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.355078722 -0500 EST m=+10.320457131 container remove c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service)
Feb 14 11:45:57 managed-node2 podman[50178]: Pods stopped:
Feb 14 11:45:57 managed-node2 podman[50178]: dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3
Feb 14 11:45:57 managed-node2 podman[50178]: Pods removed:
Feb 14 11:45:57 managed-node2 podman[50178]: dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3
Feb 14 11:45:57 managed-node2 podman[50178]: Secrets removed:
Feb 14 11:45:57 managed-node2 podman[50178]: Volumes removed:
Feb 14 11:45:57 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has successfully entered the 'dead' state.
Feb 14 11:45:57 managed-node2 systemd[1]: Stopped podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service - A template for running K8s workloads via podman-kube-play.
░░ Subject: A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished.
░░
░░ The job identifier is 2794 and the job result is done.
Feb 14 11:45:57 managed-node2 python3.12[50381]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:45:58 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3-userdata-shm.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay\x2dcontainers-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3-userdata-shm.mount has successfully entered the 'dead' state.
Feb 14 11:45:58 managed-node2 python3.12[50538]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 14 11:45:58 managed-node2 python3.12[50538]: ansible-containers.podman.podman_play version: 5.6.0, kube file /etc/containers/ansible-kubernetes.d/httpd2.yml
Feb 14 11:45:58 managed-node2 python3.12[50538]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /etc/containers/ansible-kubernetes.d/httpd2.yml
Feb 14 11:45:58 managed-node2 python3.12[50538]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed:
Feb 14 11:45:58 managed-node2 python3.12[50538]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr:
Feb 14 11:45:58 managed-node2 python3.12[50538]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0
Feb 14 11:45:58 managed-node2 python3.12[50706]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:45:59 managed-node2 python3.12[50861]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:46:00 managed-node2 python3.12[51018]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:46:01 managed-node2 python3.12[51174]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None
Feb 14 11:46:01 managed-node2 systemd[1]: Reload requested from client PID 51177 ('systemctl') (unit session-8.scope)...
Feb 14 11:46:01 managed-node2 systemd[1]: Reloading...
Feb 14 11:46:01 managed-node2 systemd-rc-local-generator[51215]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 14 11:46:01 managed-node2 systemd[1]: Reloading finished in 228 ms.
Feb 14 11:46:01 managed-node2 systemd[1]: Stopping podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service - A template for running K8s workloads via podman-kube-play...
░░ Subject: A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun execution.
░░
░░ The job identifier is 2798.
Feb 14 11:46:01 managed-node2 podman[51244]: 2026-02-14 11:46:01.595932777 -0500 EST m=+0.023865329 pod stop 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5 (image=, name=httpd3)
Feb 14 11:46:11 managed-node2 podman[51244]: time="2026-02-14T11:46:11-05:00" level=warning msg="StopSignal SIGTERM failed to stop container httpd3-httpd3 in 10 seconds, resorting to SIGKILL"
Feb 14 11:46:11 managed-node2 systemd[1]: libpod-781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit libpod-781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc.scope has successfully entered the 'dead' state.
Feb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.624723575 -0500 EST m=+10.052656258 container died 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry)
Feb 14 11:46:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay-aa73ec333742c02f48305550cfb2c13cad6c09f505663b91204e062523eb0502-merged.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay-aa73ec333742c02f48305550cfb2c13cad6c09f505663b91204e062523eb0502-merged.mount has successfully entered the 'dead' state.
Feb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.657360503 -0500 EST m=+10.085293056 container cleanup 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage)
Feb 14 11:46:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:46:11 managed-node2 systemd[1]: libpod-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit libpod-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac.scope has successfully entered the 'dead' state.
Feb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.667356994 -0500 EST m=+10.095289630 container died b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac (image=, name=891e6927a022-infra, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service)
Feb 14 11:46:11 managed-node2 kernel: podman1: port 2(veth1) entered disabled state
Feb 14 11:46:11 managed-node2 kernel: veth1 (unregistering): left allmulticast mode
Feb 14 11:46:11 managed-node2 kernel: veth1 (unregistering): left promiscuous mode
Feb 14 11:46:11 managed-node2 kernel: podman1: port 2(veth1) entered disabled state
Feb 14 11:46:11 managed-node2 systemd[1]: run-p33919-i33920.scope: Deactivated successfully.
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-p33919-i33920.scope has successfully entered the 'dead' state. Feb 14 11:46:11 managed-node2 NetworkManager[815]: [1771087571.6978] device (podman1): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed') Feb 14 11:46:11 managed-node2 systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service... ░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has begun execution. ░░ ░░ The job identifier is 2799. Feb 14 11:46:11 managed-node2 systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. ░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully. ░░ ░░ The job identifier is 2799. Feb 14 11:46:11 managed-node2 systemd[1]: run-netns-netns\x2d21d381d5\x2dd108\x2d36c8\x2d3925\x2df88b1712188a.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-netns-netns\x2d21d381d5\x2dd108\x2d36c8\x2d3925\x2df88b1712188a.mount has successfully entered the 'dead' state. Feb 14 11:46:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac-rootfs-merge.mount: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac-rootfs-merge.mount has successfully entered the 'dead' state. Feb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.780271382 -0500 EST m=+10.208203998 container cleanup b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac (image=, name=891e6927a022-infra, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:46:11 managed-node2 systemd[1]: Removed slice machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice - cgroup machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice. ░░ Subject: A stop job for unit machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice has finished. ░░ ░░ The job identifier is 2878 and the job result is done. 
Feb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.807588953 -0500 EST m=+10.235521507 container remove 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.829454691 -0500 EST m=+10.257387246 container remove b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac (image=, name=891e6927a022-infra, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.836846027 -0500 EST m=+10.264778551 pod remove 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5 (image=, name=httpd3) Feb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.840052556 -0500 EST m=+10.267985686 container kill e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:46:11 managed-node2 systemd[1]: libpod-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit libpod-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b.scope has successfully entered the 'dead' state. 
Feb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.845916667 -0500 EST m=+10.273849494 container died e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.896294322 -0500 EST m=+10.324226877 container remove e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service) Feb 14 11:46:11 managed-node2 podman[51244]: Pods stopped: Feb 14 11:46:11 managed-node2 podman[51244]: 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5 Feb 14 11:46:11 managed-node2 podman[51244]: Pods removed: Feb 14 11:46:11 managed-node2 podman[51244]: 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5 Feb 14 11:46:11 managed-node2 podman[51244]: Secrets removed: Feb 14 11:46:11 managed-node2 podman[51244]: Volumes removed: Feb 14 11:46:11 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has successfully entered the 'dead' state. Feb 14 11:46:11 managed-node2 systemd[1]: Stopped podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service - A template for running K8s workloads via podman-kube-play. ░░ Subject: A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished. ░░ ░░ The job identifier is 2798 and the job result is done. 
Feb 14 11:46:12 managed-node2 python3.12[51460]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:12 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:46:12 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac-userdata-shm.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac-userdata-shm.mount has successfully entered the 'dead' state. Feb 14 11:46:12 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b-rootfs-merge.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b-rootfs-merge.mount has successfully entered the 'dead' state. Feb 14 11:46:12 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b-userdata-shm.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay\x2dcontainers-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b-userdata-shm.mount has successfully entered the 'dead' state. 
Feb 14 11:46:12 managed-node2 python3.12[51617]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Feb 14 11:46:12 managed-node2 python3.12[51617]: ansible-containers.podman.podman_play version: 5.6.0, kube file /etc/containers/ansible-kubernetes.d/httpd3.yml Feb 14 11:46:12 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
Feb 14 11:46:13 managed-node2 python3.12[51785]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:46:14 managed-node2 python3.12[51940]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None Feb 14 11:46:14 managed-node2 python3.12[52096]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:15 managed-node2 sudo[52303]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqishaqoeeauqjityygmflzpaqldnowu ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087574.7806642-19332-58185055903523/AnsiballZ_podman_container_info.py' Feb 14 11:46:15 managed-node2 sudo[52303]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Feb 14 11:46:15 managed-node2 python3.12[52306]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Feb 14 11:46:15 managed-node2 systemd[29195]: Started podman-52307.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 184. 
Feb 14 11:46:15 managed-node2 sudo[52303]: pam_unix(sudo:session): session closed for user podman_basic_user Feb 14 11:46:15 managed-node2 sudo[52519]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hangkhtqwiganrwnkxxhsiofgprcvkkx ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087575.4226453-19357-159026990468623/AnsiballZ_command.py' Feb 14 11:46:15 managed-node2 sudo[52519]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Feb 14 11:46:15 managed-node2 python3.12[52522]: ansible-ansible.legacy.command Invoked with _raw_params=podman network ls -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:15 managed-node2 systemd[29195]: Started podman-52523.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 188. 
Feb 14 11:46:15 managed-node2 sudo[52519]: pam_unix(sudo:session): session closed for user podman_basic_user Feb 14 11:46:16 managed-node2 sudo[52734]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gomapidfsfoypmttsyusbxhkhjdpprqy ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087575.9556694-19387-69229158351339/AnsiballZ_command.py' Feb 14 11:46:16 managed-node2 sudo[52734]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Feb 14 11:46:16 managed-node2 python3.12[52737]: ansible-ansible.legacy.command Invoked with _raw_params=podman secret ls -n -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:16 managed-node2 systemd[29195]: Started podman-52738.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 192. Feb 14 11:46:16 managed-node2 sudo[52734]: pam_unix(sudo:session): session closed for user podman_basic_user Feb 14 11:46:16 managed-node2 python3.12[52901]: ansible-ansible.legacy.command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None Feb 14 11:46:16 managed-node2 systemd[1]: Stopping user@3001.service - User Manager for UID 3001... ░░ Subject: A stop job for unit user@3001.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user@3001.service has begun execution. ░░ ░░ The job identifier is 2881. 
Feb 14 11:46:16 managed-node2 systemd[29195]: Activating special unit exit.target... Feb 14 11:46:16 managed-node2 systemd[29195]: Stopping podman-pause-37bd2e87.scope... ░░ Subject: A stop job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has begun execution. ░░ ░░ The job identifier is 216. Feb 14 11:46:16 managed-node2 systemd[29195]: Removed slice app-podman\x2dkube.slice - Slice /app/podman-kube. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 203 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: app-podman\x2dkube.slice: Consumed 621ms CPU time, 64.1M memory peak. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit UNIT completed and consumed the indicated resources. Feb 14 11:46:16 managed-node2 systemd[29195]: Removed slice user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice - cgroup user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 215 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Stopped target default.target - Main User Target. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 202 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Stopped target basic.target - Basic System. 
░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 200 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Stopped target paths.target - Paths. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 204 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Stopped target sockets.target - Sockets. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 211 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Stopped target timers.target - Timers. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 205 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Stopped grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 210 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Stopped systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 207 and the job result is done. Feb 14 11:46:16 managed-node2 dbus-broker[29751]: Dispatched 2413 messages @ 3(±15)μs / message. 
░░ Subject: Dispatched 2413 messages ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ This message is printed by dbus-broker when shutting down. It includes metric ░░ information collected during the runtime of dbus-broker. ░░ ░░ The message lists the number of dispatched messages ░░ (in this case 2413) as well as the mean time to ░░ handling a single message. The time measurements exclude the time spent on ░░ writing to and reading from the kernel. Feb 14 11:46:16 managed-node2 systemd[29195]: Stopping dbus-broker.service - D-Bus User Message Bus... ░░ Subject: A stop job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has begun execution. ░░ ░░ The job identifier is 213. Feb 14 11:46:16 managed-node2 systemd[29195]: Stopped systemd-tmpfiles-setup.service - Create User Files and Directories. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 206 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Stopped dbus-broker.service - D-Bus User Message Bus. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 213 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Stopped podman-pause-37bd2e87.scope. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 216 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Removed slice session.slice - User Core Session Slice. 
░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 212 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Removed slice user.slice - Slice /user. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 214 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Closed dbus.socket - D-Bus User Message Bus Socket. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 217 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: Removed slice app.slice - User Application Slice. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 218 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[29195]: app.slice: Consumed 649ms CPU time, 64.7M memory peak. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit UNIT completed and consumed the indicated resources. Feb 14 11:46:16 managed-node2 systemd[29195]: Reached target shutdown.target - Shutdown. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 199. Feb 14 11:46:16 managed-node2 systemd[29195]: Finished systemd-exit.service - Exit the Session. 
░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 197. Feb 14 11:46:16 managed-node2 systemd[29195]: Reached target exit.target - Exit the Session. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 196. Feb 14 11:46:16 managed-node2 systemd-logind[768]: Removed session 10. ░░ Subject: Session 10 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 10 has been terminated. Feb 14 11:46:16 managed-node2 systemd[1]: user@3001.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit user@3001.service has successfully entered the 'dead' state. Feb 14 11:46:16 managed-node2 systemd[1]: Stopped user@3001.service - User Manager for UID 3001. ░░ Subject: A stop job for unit user@3001.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user@3001.service has finished. ░░ ░░ The job identifier is 2881 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[1]: user@3001.service: Consumed 2.184s CPU time, 83.2M memory peak. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit user@3001.service completed and consumed the indicated resources. Feb 14 11:46:16 managed-node2 systemd[1]: Stopping user-runtime-dir@3001.service - User Runtime Directory /run/user/3001... 
░░ Subject: A stop job for unit user-runtime-dir@3001.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user-runtime-dir@3001.service has begun execution. ░░ ░░ The job identifier is 2880. Feb 14 11:46:16 managed-node2 systemd[1]: run-user-3001.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-user-3001.mount has successfully entered the 'dead' state. Feb 14 11:46:16 managed-node2 systemd[1]: user-runtime-dir@3001.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit user-runtime-dir@3001.service has successfully entered the 'dead' state. Feb 14 11:46:16 managed-node2 systemd[1]: Stopped user-runtime-dir@3001.service - User Runtime Directory /run/user/3001. ░░ Subject: A stop job for unit user-runtime-dir@3001.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user-runtime-dir@3001.service has finished. ░░ ░░ The job identifier is 2880 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[1]: Removed slice user-3001.slice - User Slice of UID 3001. ░░ Subject: A stop job for unit user-3001.slice has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user-3001.slice has finished. ░░ ░░ The job identifier is 2882 and the job result is done. Feb 14 11:46:16 managed-node2 systemd[1]: user-3001.slice: Consumed 2.212s CPU time, 83.2M memory peak. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit user-3001.slice completed and consumed the indicated resources. 
Feb 14 11:46:17 managed-node2 python3.12[53062]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:19 managed-node2 python3.12[53218]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:21 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Feb 14 11:46:21 managed-node2 python3.12[53375]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:24 managed-node2 python3.12[53531]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:26 managed-node2 python3.12[53687]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:29 managed-node2 python3.12[53843]: ansible-ansible.legacy.command Invoked with 
_raw_params=loginctl show-user --value -p State podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:29 managed-node2 sudo[54049]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmtkxdbcybdoesitcfkqviyhvcybnwfk ; /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087589.3860164-19842-259624477313611/AnsiballZ_command.py' Feb 14 11:46:29 managed-node2 sudo[54049]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Feb 14 11:46:29 managed-node2 python3.12[54052]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod exists httpd1 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:29 managed-node2 sudo[54049]: pam_unix(sudo:session): session closed for user podman_basic_user Feb 14 11:46:30 managed-node2 python3.12[54214]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod exists httpd2 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:30 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
Feb 14 11:46:30 managed-node2 python3.12[54376]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod exists httpd3 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:30 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:46:30 managed-node2 sudo[54588]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqjkzkyquobosfntmmeshlxejznbhuye ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087590.6654937-19897-26461502310971/AnsiballZ_command.py' Feb 14 11:46:30 managed-node2 sudo[54588]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0) Feb 14 11:46:31 managed-node2 python3.12[54591]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --user list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd1[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:31 managed-node2 sudo[54588]: pam_unix(sudo:session): session closed for user podman_basic_user Feb 14 11:46:31 managed-node2 python3.12[54749]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:31 managed-node2 
python3.12[54907]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active ' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:32 managed-node2 python3.12[55065]: ansible-stat Invoked with path=/var/lib/systemd/linger/podman_basic_user follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:34 managed-node2 python3.12[55375]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:35 managed-node2 python3.12[55536]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Feb 14 11:46:35 managed-node2 python3.12[55692]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:37 managed-node2 python3.12[55850]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Feb 14 11:46:38 managed-node2 python3.12[56006]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:38 managed-node2 python3.12[56163]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:38 managed-node2 python3.12[56319]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g podman_basic_user _uses_shell=False 
expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:39 managed-node2 python3.12[56475]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:40 managed-node2 python3.12[56631]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:40 managed-node2 python3.12[56786]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:40 managed-node2 python3.12[56941]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:46:41 managed-node2 python3.12[57096]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Feb 14 11:46:42 managed-node2 python3.12[57252]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:43 managed-node2 python3.12[57409]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False 
expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:43 managed-node2 python3.12[57565]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None Feb 14 11:46:44 managed-node2 python3.12[57722]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:44 managed-node2 python3.12[57877]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:46:45 managed-node2 python3.12[58032]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:46 managed-node2 python3.12[58189]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:47 managed-node2 python3.12[58345]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None Feb 14 11:46:47 managed-node2 python3.12[58502]: ansible-stat Invoked with 
path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:48 managed-node2 python3.12[58657]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:46:48 managed-node2 python3.12[58812]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None Feb 14 11:46:49 managed-node2 python3.12[58968]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:50 managed-node2 python3.12[59123]: ansible-file Invoked with path=/etc/containers/storage.conf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:46:50 managed-node2 python3.12[59278]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:46:51 managed-node2 sshd-session[59304]: Accepted publickey for root from 10.31.12.69 port 46594 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Feb 14 11:46:51 
managed-node2 systemd-logind[768]: New session 11 of user root. ░░ Subject: A new session 11 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 11 has been created for the user root. ░░ ░░ The leading process of the session is 59304. Feb 14 11:46:51 managed-node2 systemd[1]: Started session-11.scope - Session 11 of User root. ░░ Subject: A start job for unit session-11.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-11.scope has finished successfully. ░░ ░░ The job identifier is 2884. Feb 14 11:46:51 managed-node2 sshd-session[59304]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Feb 14 11:46:51 managed-node2 sshd-session[59307]: Received disconnect from 10.31.12.69 port 46594:11: disconnected by user Feb 14 11:46:51 managed-node2 sshd-session[59307]: Disconnected from user root 10.31.12.69 port 46594 Feb 14 11:46:51 managed-node2 sshd-session[59304]: pam_unix(sshd:session): session closed for user root Feb 14 11:46:51 managed-node2 systemd[1]: session-11.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-11.scope has successfully entered the 'dead' state. Feb 14 11:46:51 managed-node2 systemd-logind[768]: Session 11 logged out. Waiting for processes to exit. Feb 14 11:46:51 managed-node2 systemd-logind[768]: Removed session 11. ░░ Subject: Session 11 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 11 has been terminated. 
Feb 14 11:46:53 managed-node2 python3.12[59514]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 14 11:46:56 managed-node2 python3.12[59698]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:56 managed-node2 python3.12[59854]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:46:57 managed-node2 python3.12[60009]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:59 managed-node2 python3.12[60320]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:46:59 managed-node2 python3.12[60481]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Feb 14 11:47:00 managed-node2 python3.12[60637]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:02 managed-node2 sshd-session[60665]: Accepted publickey for root from 10.31.12.69 port 54102 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Feb 14 11:47:02 managed-node2 systemd-logind[768]: New session 12 of user root. ░░ Subject: A new session 12 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 12 has been created for the user root. ░░ ░░ The leading process of the session is 60665. 
Feb 14 11:47:02 managed-node2 systemd[1]: Started session-12.scope - Session 12 of User root. ░░ Subject: A start job for unit session-12.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-12.scope has finished successfully. ░░ ░░ The job identifier is 2966. Feb 14 11:47:02 managed-node2 sshd-session[60665]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Feb 14 11:47:02 managed-node2 sshd-session[60668]: Received disconnect from 10.31.12.69 port 54102:11: disconnected by user Feb 14 11:47:02 managed-node2 sshd-session[60668]: Disconnected from user root 10.31.12.69 port 54102 Feb 14 11:47:02 managed-node2 sshd-session[60665]: pam_unix(sshd:session): session closed for user root Feb 14 11:47:02 managed-node2 systemd[1]: session-12.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-12.scope has successfully entered the 'dead' state. Feb 14 11:47:02 managed-node2 systemd-logind[768]: Session 12 logged out. Waiting for processes to exit. Feb 14 11:47:02 managed-node2 systemd-logind[768]: Removed session 12. ░░ Subject: Session 12 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 12 has been terminated. 
Feb 14 11:47:04 managed-node2 python3.12[60876]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Feb 14 11:47:05 managed-node2 python3.12[61060]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:05 managed-node2 python3.12[61215]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:06 managed-node2 python3.12[61370]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:08 managed-node2 python3.12[61681]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:09 managed-node2 python3.12[61844]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Feb 14 11:47:09 managed-node2 python3.12[62000]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:11 managed-node2 python3.12[62157]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:12 managed-node2 python3.12[62314]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:12 managed-node2 python3.12[62469]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/nopull.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:47:13 managed-node2 python3.12[62594]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087632.5337622-22450-265320183664731/.source.container dest=/etc/containers/systemd/nopull.container owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=670d64fc68a9768edb20cad26df2acc703542d85 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:15 managed-node2 python3.12[62904]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:16 managed-node2 python3.12[63065]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:17 managed-node2 python3.12[63222]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:19 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
Feb 14 11:47:19 managed-node2 podman[63388]: 2026-02-14 11:47:19.295521214 -0500 EST m=+0.017817019 image pull-error this_is_a_bogus_image:latest short-name resolution enforced but cannot prompt without a TTY Feb 14 11:47:19 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:47:19 managed-node2 python3.12[63550]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:20 managed-node2 python3.12[63705]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/bogus.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:47:20 managed-node2 python3.12[63830]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087639.8898952-22814-104891175911053/.source.container dest=/etc/containers/systemd/bogus.container owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=1d087e679d135214e8ac9ccaf33b2222916efb7f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:22 managed-node2 python3.12[64140]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None 
creates=None removes=None stdin=None Feb 14 11:47:23 managed-node2 python3.12[64301]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:25 managed-node2 python3.12[64458]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:26 managed-node2 python3.12[64615]: ansible-systemd Invoked with name=nopull.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None Feb 14 11:47:26 managed-node2 python3.12[64771]: ansible-stat Invoked with path=/etc/containers/systemd/nopull.container follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:27 managed-node2 python3.12[65083]: ansible-file Invoked with path=/etc/containers/systemd/nopull.container state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:28 managed-node2 python3.12[65238]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:47:28 managed-node2 systemd[1]: Reload requested from client PID 65239 ('systemctl') (unit session-8.scope)... Feb 14 11:47:28 managed-node2 systemd[1]: Reloading... Feb 14 11:47:28 managed-node2 quadlet-generator[65263]: Warning: bogus.container specifies the image "this_is_a_bogus_image" which not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details. 
Feb 14 11:47:28 managed-node2 systemd-rc-local-generator[65291]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 14 11:47:28 managed-node2 systemd[1]: Reloading finished in 210 ms. Feb 14 11:47:29 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:47:31 managed-node2 python3.12[65771]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:32 managed-node2 python3.12[65932]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:33 managed-node2 python3.12[66089]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:34 managed-node2 python3.12[66246]: ansible-systemd Invoked with name=bogus.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None Feb 14 11:47:35 managed-node2 systemd[1]: Reload requested from client PID 66249 ('systemctl') (unit session-8.scope)... Feb 14 11:47:35 managed-node2 systemd[1]: Reloading... Feb 14 11:47:35 managed-node2 quadlet-generator[66273]: Warning: bogus.container specifies the image "this_is_a_bogus_image" which not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details. Feb 14 11:47:35 managed-node2 systemd-rc-local-generator[66298]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 14 11:47:35 managed-node2 systemd[1]: Reloading finished in 219 ms. Feb 14 11:47:35 managed-node2 python3.12[66465]: ansible-stat Invoked with path=/etc/containers/systemd/bogus.container follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:36 managed-node2 python3.12[66777]: ansible-file Invoked with path=/etc/containers/systemd/bogus.container state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:37 managed-node2 python3.12[66932]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:47:37 managed-node2 systemd[1]: Reload requested from client PID 66933 ('systemctl') (unit session-8.scope)... Feb 14 11:47:37 managed-node2 systemd[1]: Reloading... Feb 14 11:47:37 managed-node2 systemd-rc-local-generator[66984]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 14 11:47:37 managed-node2 systemd[1]: Reloading finished in 210 ms. Feb 14 11:47:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
Feb 14 11:47:38 managed-node2 python3.12[67311]: ansible-user Invoked with name=user_quadlet_basic uid=1111 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on managed-node2 update_password=always group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Feb 14 11:47:38 managed-node2 useradd[67313]: new group: name=user_quadlet_basic, GID=1111 Feb 14 11:47:38 managed-node2 useradd[67313]: new user: name=user_quadlet_basic, UID=1111, GID=1111, home=/home/user_quadlet_basic, shell=/bin/bash, from=/dev/pts/0 Feb 14 11:47:38 managed-node2 rsyslogd[985]: imjournal: journal files changed, reloading... [v8.2510.0-5.el10 try https://www.rsyslog.com/e/0 ] Feb 14 11:47:38 managed-node2 rsyslogd[985]: imjournal: journal files changed, reloading... 
[v8.2510.0-5.el10 try https://www.rsyslog.com/e/0 ] Feb 14 11:47:40 managed-node2 python3.12[67624]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:41 managed-node2 python3.12[67785]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:42 managed-node2 python3.12[67942]: ansible-getent Invoked with database=passwd key=user_quadlet_basic fail_key=False service=None split=None Feb 14 11:47:43 managed-node2 python3.12[68098]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:47:43 managed-node2 systemd[1]: Created slice user-1111.slice - User Slice of UID 1111. ░░ Subject: A start job for unit user-1111.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-1111.slice has finished successfully. ░░ ░░ The job identifier is 3126. Feb 14 11:47:43 managed-node2 systemd[1]: Starting user-runtime-dir@1111.service - User Runtime Directory /run/user/1111... ░░ Subject: A start job for unit user-runtime-dir@1111.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-runtime-dir@1111.service has begun execution. ░░ ░░ The job identifier is 3048. Feb 14 11:47:43 managed-node2 systemd[1]: Finished user-runtime-dir@1111.service - User Runtime Directory /run/user/1111. 
░░ Subject: A start job for unit user-runtime-dir@1111.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-runtime-dir@1111.service has finished successfully. ░░ ░░ The job identifier is 3048. Feb 14 11:47:43 managed-node2 systemd[1]: Starting user@1111.service - User Manager for UID 1111... ░░ Subject: A start job for unit user@1111.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user@1111.service has begun execution. ░░ ░░ The job identifier is 3128. Feb 14 11:47:43 managed-node2 systemd-logind[768]: New session 13 of user user_quadlet_basic. ░░ Subject: A new session 13 has been created for user user_quadlet_basic ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 13 has been created for the user user_quadlet_basic. ░░ ░░ The leading process of the session is 68102. Feb 14 11:47:43 managed-node2 (systemd)[68102]: pam_unix(systemd-user:session): session opened for user user_quadlet_basic(uid=1111) by user_quadlet_basic(uid=0) Feb 14 11:47:43 managed-node2 systemd[68102]: Queued start job for default target default.target. Feb 14 11:47:43 managed-node2 systemd[68102]: Created slice app.slice - User Application Slice. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 5. Feb 14 11:47:43 managed-node2 systemd[68102]: Started grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 10. 
Feb 14 11:47:43 managed-node2 systemd[68102]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 9. Feb 14 11:47:43 managed-node2 systemd[68102]: Reached target paths.target - Paths. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 12. Feb 14 11:47:43 managed-node2 systemd[68102]: Reached target timers.target - Timers. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 8. Feb 14 11:47:43 managed-node2 systemd[68102]: Starting dbus.socket - D-Bus User Message Bus Socket... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 4. Feb 14 11:47:43 managed-node2 systemd[68102]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 11. Feb 14 11:47:43 managed-node2 systemd[68102]: Listening on dbus.socket - D-Bus User Message Bus Socket. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 4. 
Feb 14 11:47:43 managed-node2 systemd[68102]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 11. Feb 14 11:47:43 managed-node2 systemd[68102]: Reached target sockets.target - Sockets. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 3. Feb 14 11:47:43 managed-node2 systemd[68102]: Reached target basic.target - Basic System. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 2. Feb 14 11:47:43 managed-node2 systemd[68102]: Reached target default.target - Main User Target. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 1. Feb 14 11:47:43 managed-node2 systemd[68102]: Startup finished in 64ms. ░░ Subject: User manager start-up is now complete ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The user manager instance for user 1111 has been started. All services queued ░░ for starting have been started. Note that other services might still be starting ░░ up or be started at any later time. ░░ ░░ Startup of the manager took 64969 microseconds. Feb 14 11:47:43 managed-node2 systemd[1]: Started user@1111.service - User Manager for UID 1111. 
░░ Subject: A start job for unit user@1111.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user@1111.service has finished successfully. ░░ ░░ The job identifier is 3128. Feb 14 11:47:44 managed-node2 python3.12[68273]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:44 managed-node2 sudo[68480]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uixpklxygdemobdbhyuryznyhdkszmkn ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087664.1855588-23870-3454953199487/AnsiballZ_podman_secret.py' Feb 14 11:47:44 managed-node2 sudo[68480]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:47:44 managed-node2 systemd[68102]: Created slice session.slice - User Core Session Slice. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 14. Feb 14 11:47:44 managed-node2 systemd[68102]: Starting dbus-broker.service - D-Bus User Message Bus... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 13. Feb 14 11:47:44 managed-node2 dbus-broker-launch[68510]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Feb 14 11:47:44 managed-node2 dbus-broker-launch[68510]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Feb 14 11:47:44 managed-node2 systemd[68102]: Started dbus-broker.service - D-Bus User Message Bus. 
░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 13. Feb 14 11:47:44 managed-node2 dbus-broker-launch[68510]: Ready Feb 14 11:47:44 managed-node2 systemd[68102]: Created slice user.slice - Slice /user. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 20. Feb 14 11:47:44 managed-node2 systemd[68102]: Started podman-68496.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 19. Feb 14 11:47:44 managed-node2 systemd[68102]: Started podman-pause-57116b4d.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 23. Feb 14 11:47:44 managed-node2 systemd[68102]: Started podman-68512.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 27. Feb 14 11:47:44 managed-node2 systemd[68102]: Started podman-68519.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 31. 
Feb 14 11:47:44 managed-node2 sudo[68480]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:47:45 managed-node2 python3.12[68680]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:47:46 managed-node2 python3.12[68835]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:46 managed-node2 sudo[69042]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdbpkvqovbwkjyfrwfoiymaxsidvpmvs ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087666.2460904-23929-12811921702894/AnsiballZ_podman_secret.py' Feb 14 11:47:46 managed-node2 sudo[69042]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:47:46 managed-node2 systemd[68102]: Started podman-69053.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 35. Feb 14 11:47:46 managed-node2 systemd[68102]: Started podman-69060.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 39. Feb 14 11:47:46 managed-node2 systemd[68102]: Started podman-69068.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 43. 
Feb 14 11:47:46 managed-node2 sudo[69042]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:47:47 managed-node2 python3.12[69230]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:47 managed-node2 python3.12[69387]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:48 managed-node2 python3.12[69543]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:49 managed-node2 python3.12[69699]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:47:49 managed-node2 python3.12[69854]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:50 managed-node2 python3.12[70009]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 
14 11:47:50 managed-node2 python3.12[70134]: ansible-ansible.legacy.copy Invoked with dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network owner=user_quadlet_basic group=1111 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1771087669.9321482-24074-44070762850677/.source.network _original_basename=.19wp4gkp follow=False checksum=19c9b17be2af9b9deca5c3bd327f048966750682 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:50 managed-node2 sudo[70339]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apzsbbomwlqjxtafirxqggwpyjtxmtby ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087670.684208-24104-102282262690494/AnsiballZ_systemd.py' Feb 14 11:47:50 managed-node2 sudo[70339]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:47:51 managed-node2 python3.12[70342]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:47:51 managed-node2 systemd[68102]: Reload requested from client PID 70343 ('systemctl')... Feb 14 11:47:51 managed-node2 systemd[68102]: Reloading... Feb 14 11:47:51 managed-node2 systemd[68102]: Reloading finished in 39 ms. 
Feb 14 11:47:51 managed-node2 sudo[70339]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:47:51 managed-node2 sudo[70557]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcbyikavwgufawfbnmbhihisncpxpuwz ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087671.337951-24131-108859254372812/AnsiballZ_systemd.py' Feb 14 11:47:51 managed-node2 sudo[70557]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:47:51 managed-node2 python3.12[70560]: ansible-systemd Invoked with name=quadlet-basic-network.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:47:51 managed-node2 systemd[68102]: Starting podman-user-wait-network-online.service - Wait for system level network-online.target as user.... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 59. Feb 14 11:47:51 managed-node2 sh[70564]: active Feb 14 11:47:51 managed-node2 systemd[68102]: Finished podman-user-wait-network-online.service - Wait for system level network-online.target as user.. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 59. Feb 14 11:47:51 managed-node2 systemd[68102]: Starting quadlet-basic-network.service... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 47. 
Feb 14 11:47:51 managed-node2 quadlet-basic-network[70566]: quadlet-basic-name Feb 14 11:47:51 managed-node2 systemd[68102]: Finished quadlet-basic-network.service. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 47. Feb 14 11:47:51 managed-node2 sudo[70557]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:47:52 managed-node2 python3.12[70728]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:52 managed-node2 python3.12[70886]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:53 managed-node2 python3.12[71042]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:54 managed-node2 python3.12[71198]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:47:55 managed-node2 python3.12[71353]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:55 managed-node2 python3.12[71508]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:47:55 managed-node2 python3.12[71633]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087675.2243838-24329-194671574787178/.source.network dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network owner=user_quadlet_basic group=1111 mode=0644 follow=False _original_basename=systemd.j2 checksum=52c9d75ecaf81203cc1f1a3b1dd00fcd25067b01 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:56 managed-node2 sudo[71838]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbbqqluhqmoaicdmjhhkmabpjkqakfen ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087675.9977405-24371-137166020726145/AnsiballZ_systemd.py' Feb 14 11:47:56 managed-node2 sudo[71838]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:47:56 managed-node2 python3.12[71842]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:47:56 managed-node2 systemd[68102]: Reload requested from client PID 71843 ('systemctl')... Feb 14 11:47:56 managed-node2 systemd[68102]: Reloading... Feb 14 11:47:56 managed-node2 systemd[68102]: Reloading finished in 39 ms. 
Feb 14 11:47:56 managed-node2 sudo[71838]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:47:56 managed-node2 sudo[72058]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyqnmoirqgewpatwzwawhhrlgyanrsed ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087676.6492229-24391-168211225876343/AnsiballZ_systemd.py' Feb 14 11:47:56 managed-node2 sudo[72058]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:47:57 managed-node2 python3.12[72061]: ansible-systemd Invoked with name=quadlet-basic-unused-network-network.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:47:57 managed-node2 systemd[68102]: Starting quadlet-basic-unused-network-network.service... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 60. Feb 14 11:47:57 managed-node2 quadlet-basic-unused-network-network[72064]: systemd-quadlet-basic-unused-network Feb 14 11:47:57 managed-node2 systemd[68102]: Finished quadlet-basic-unused-network-network.service. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 60. 
Feb 14 11:47:57 managed-node2 sudo[72058]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:47:57 managed-node2 python3.12[72226]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:58 managed-node2 python3.12[72383]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:58 managed-node2 python3.12[72539]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:59 managed-node2 python3.12[72695]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:48:00 managed-node2 python3.12[72850]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:00 managed-node2 python3.12[73005]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
Feb 14 11:48:01 managed-node2 python3.12[73130]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087680.5019636-24577-124432255509769/.source.volume dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume owner=user_quadlet_basic group=1111 mode=0644 follow=False _original_basename=systemd.j2 checksum=90a3571bfc7670328fe3f8fb625585613dbd9c4a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:01 managed-node2 sudo[73335]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aizslvcfwxohxjustnktmhoxtyqdyuqo ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087681.2433515-24617-277250510433155/AnsiballZ_systemd.py' Feb 14 11:48:01 managed-node2 sudo[73335]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:01 managed-node2 python3.12[73338]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:01 managed-node2 systemd[68102]: Reload requested from client PID 73339 ('systemctl')... Feb 14 11:48:01 managed-node2 systemd[68102]: Reloading... Feb 14 11:48:01 managed-node2 systemd[68102]: Reloading finished in 39 ms. 
Feb 14 11:48:01 managed-node2 sudo[73335]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:02 managed-node2 sudo[73553]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnaimusrbkhfvcfbsqyjenyppzojpcum ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087681.8992858-24637-239185248282254/AnsiballZ_systemd.py' Feb 14 11:48:02 managed-node2 sudo[73553]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:02 managed-node2 python3.12[73556]: ansible-systemd Invoked with name=quadlet-basic-mysql-volume.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:02 managed-node2 systemd[68102]: Starting quadlet-basic-mysql-volume.service... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 73. Feb 14 11:48:02 managed-node2 quadlet-basic-mysql-volume[73559]: quadlet-basic-mysql-name Feb 14 11:48:02 managed-node2 systemd[68102]: Finished quadlet-basic-mysql-volume.service. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 73. 
Feb 14 11:48:02 managed-node2 sudo[73553]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:03 managed-node2 python3.12[73722]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:03 managed-node2 python3.12[73879]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:03 managed-node2 python3.12[74035]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:04 managed-node2 python3.12[74191]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:48:05 managed-node2 python3.12[74346]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:05 managed-node2 python3.12[74501]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True Feb 14 11:48:06 managed-node2 python3.12[74626]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087685.4910405-24770-45960372293351/.source.volume dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume owner=user_quadlet_basic group=1111 mode=0644 follow=False _original_basename=systemd.j2 checksum=fd0ae560360afa5541b866560b1e849d25e216ef backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:06 managed-node2 sudo[74831]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwpvfwjdxtnxlrjnsruhcvrbszosshmc ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087686.2318456-24792-71938151665693/AnsiballZ_systemd.py' Feb 14 11:48:06 managed-node2 sudo[74831]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:06 managed-node2 python3.12[74834]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:06 managed-node2 systemd[68102]: Reload requested from client PID 74835 ('systemctl')... Feb 14 11:48:06 managed-node2 systemd[68102]: Reloading... Feb 14 11:48:06 managed-node2 systemd[68102]: Reloading finished in 42 ms. 
Feb 14 11:48:06 managed-node2 sudo[74831]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:07 managed-node2 sudo[75049]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuthmwvkdjusmazenuymwuyactedtesj ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087686.879755-24802-91952004617364/AnsiballZ_systemd.py' Feb 14 11:48:07 managed-node2 sudo[75049]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:07 managed-node2 python3.12[75052]: ansible-systemd Invoked with name=quadlet-basic-unused-volume-volume.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:07 managed-node2 systemd[68102]: Starting quadlet-basic-unused-volume-volume.service... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 86. Feb 14 11:48:07 managed-node2 quadlet-basic-unused-volume-volume[75055]: systemd-quadlet-basic-unused-volume Feb 14 11:48:07 managed-node2 systemd[68102]: Finished quadlet-basic-unused-volume-volume.service. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 86. 
Feb 14 11:48:07 managed-node2 sudo[75049]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:08 managed-node2 python3.12[75217]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:08 managed-node2 python3.12[75374]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:08 managed-node2 python3.12[75530]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:09 managed-node2 python3.12[75686]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:48:10 managed-node2 sudo[75891]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnxglmukkluarwwhgryjkisfktidpwgs ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087689.9702964-24906-141256695692020/AnsiballZ_podman_image.py' Feb 14 11:48:10 managed-node2 sudo[75891]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:10 managed-node2 systemd[68102]: Started podman-75895.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 99. 
Feb 14 11:48:10 managed-node2 systemd[68102]: Started podman-75903.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 103. Feb 14 11:48:15 managed-node2 systemd[68102]: podman-75903.scope: Consumed 8.633s CPU time, 469.5M memory peak. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit UNIT completed and consumed the indicated resources. Feb 14 11:48:15 managed-node2 systemd[68102]: Started podman-76114.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 107. Feb 14 11:48:16 managed-node2 systemd[68102]: Started podman-76121.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 111. Feb 14 11:48:17 managed-node2 systemd[68102]: Started podman-76129.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 115. Feb 14 11:48:17 managed-node2 systemd[68102]: Started podman-76137.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 119. 
Feb 14 11:48:17 managed-node2 sudo[75891]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:17 managed-node2 python3.12[76298]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:18 managed-node2 python3.12[76453]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:48:18 managed-node2 python3.12[76578]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087697.7685003-25203-102137340282322/.source.container dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container owner=user_quadlet_basic group=1111 mode=0644 follow=False _original_basename=systemd.j2 checksum=0b6cac7929623f1059e78ef39b8b0a25169b28a6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:18 managed-node2 sudo[76783]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgfztswzveapjwryvbcfeqxxlwjdmhyv ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087698.5060833-25237-65119752173567/AnsiballZ_systemd.py' Feb 14 11:48:18 managed-node2 sudo[76783]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:18 managed-node2 python3.12[76786]: ansible-systemd 
Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:18 managed-node2 systemd[68102]: Reload requested from client PID 76787 ('systemctl')... Feb 14 11:48:18 managed-node2 systemd[68102]: Reloading... Feb 14 11:48:19 managed-node2 systemd[68102]: Reloading finished in 42 ms. Feb 14 11:48:19 managed-node2 sudo[76783]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:19 managed-node2 sudo[77002]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpzvyfzxibzygvqvywjclcqjhwmygvfl ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087699.1828828-25258-135996473326537/AnsiballZ_systemd.py' Feb 14 11:48:19 managed-node2 sudo[77002]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:19 managed-node2 python3.12[77005]: ansible-systemd Invoked with name=quadlet-basic-mysql.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:19 managed-node2 systemd[68102]: Starting quadlet-basic-mysql.service... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 123. Feb 14 11:48:19 managed-node2 systemd[68102]: Started rootless-netns-cae0da54.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 138. 
Feb 14 11:48:19 managed-node2 kernel: podman1: port 1(veth0) entered blocking state Feb 14 11:48:19 managed-node2 kernel: podman1: port 1(veth0) entered disabled state Feb 14 11:48:19 managed-node2 kernel: veth0: entered allmulticast mode Feb 14 11:48:19 managed-node2 kernel: veth0: entered promiscuous mode Feb 14 11:48:19 managed-node2 kernel: podman1: port 1(veth0) entered blocking state Feb 14 11:48:19 managed-node2 kernel: podman1: port 1(veth0) entered forwarding state Feb 14 11:48:19 managed-node2 systemd[68102]: Started run-p77036-i77037.scope - [systemd-run] /usr/libexec/podman/aardvark-dns --config /run/user/1111/containers/networks/aardvark-dns -p 53 run. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 142. Feb 14 11:48:19 managed-node2 systemd[68102]: Started quadlet-basic-mysql.service. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 123. 
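The quadlet file driving quadlet-basic-mysql.service is copied with content=NOT_LOGGING_PARAMETER, so only its checksum is visible in the log. A hypothetical minimal `quadlet-basic-mysql.container` consistent with what the journal does show (the mysql:5.6 image referenced later, and the sibling .network/.volume units) might look like this; every value below is an assumption except the image name:

```ini
# Hypothetical sketch -- the real file content is not logged.
[Container]
Image=quay.io/linux-system-roles/mysql:5.6
ContainerName=quadlet-basic-mysql
# Quadlet allows referencing sibling quadlet units by file name
Network=quadlet-basic.network
Volume=quadlet-basic-mysql.volume:/var/lib/mysql

[Install]
WantedBy=default.target
```

systemd's quadlet generator turns this file into the quadlet-basic-mysql.service unit seen starting above after the user-scope daemon-reload.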
Feb 14 11:48:19 managed-node2 quadlet-basic-mysql[77008]: eb573be648acda7fed60ce6841ee124752e10f81306eb341a44d9c0b11f44d3d Feb 14 11:48:19 managed-node2 sudo[77002]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:20 managed-node2 python3.12[77251]: ansible-ansible.legacy.command Invoked with _raw_params=cat /home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:20 managed-node2 python3.12[77407]: ansible-ansible.legacy.command Invoked with _raw_params=cat /home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:21 managed-node2 python3.12[77566]: ansible-ansible.legacy.command Invoked with _raw_params=cat /home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:21 managed-node2 python3.12[77730]: ansible-stat Invoked with path=/var/lib/systemd/linger/user_quadlet_basic follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:23 managed-node2 python3.12[78042]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:24 managed-node2 python3.12[78227]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Feb 14 11:48:25 managed-node2 python3.12[78383]: ansible-stat Invoked with 
path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:27 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:27 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:27 managed-node2 podman[78590]: 2026-02-14 11:48:27.13362275 -0500 EST m=+0.017279956 secret create 90fe43ec6167996f11d8c921b Feb 14 11:48:28 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:28 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:28 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
Feb 14 11:48:28 managed-node2 podman[78781]: 2026-02-14 11:48:28.453280763 -0500 EST m=+0.022645603 secret create 90451f17a3693765e0b13c8ad Feb 14 11:48:29 managed-node2 python3.12[78950]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:30 managed-node2 python3.12[79107]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:31 managed-node2 python3.12[79274]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/quadlet-basic.network follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:48:31 managed-node2 python3.12[79410]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/systemd/quadlet-basic.network owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1771087710.8078883-25734-32145037604992/.source.network _original_basename=.y5jiqtp_ follow=False checksum=19c9b17be2af9b9deca5c3bd327f048966750682 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:32 managed-node2 python3.12[79565]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:32 managed-node2 systemd[1]: Reload requested from client PID 79566 ('systemctl') (unit session-8.scope)... Feb 14 11:48:32 managed-node2 systemd[1]: Reloading... 
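The journal entries above repeat one pattern per quadlet spec: ensure the directory, stat the destination, copy the unit file, daemon-reload, then start the generated service. A hedged playbook-style reconstruction of that pattern, using only module names and parameters visible in the log (file names and task names are assumptions):

```yaml
# Sketch of the task pattern visible in the journal; not the role's
# actual task file.
- name: Ensure quadlet directory exists
  ansible.builtin.file:
    path: /etc/containers/systemd
    state: directory
    owner: root
    group: "0"
    mode: "0755"

- name: Install the quadlet unit file
  ansible.builtin.copy:
    src: quadlet-basic.network
    dest: /etc/containers/systemd/quadlet-basic.network
    owner: root
    group: "0"
    mode: "0644"

- name: Reload systemd so quadlet generates the service
  ansible.builtin.systemd:
    daemon_reload: true
    scope: system

- name: Start the generated unit
  ansible.builtin.systemd:
    name: quadlet-basic-network.service
    state: started
    scope: system
```

For the rootless user earlier in the log, the same tasks run with `scope: user` under become, with XDG_RUNTIME_DIR pointed at /run/user/1111.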
Feb 14 11:48:32 managed-node2 systemd-rc-local-generator[79615]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 14 11:48:32 managed-node2 systemd[1]: Reloading finished in 220 ms. Feb 14 11:48:32 managed-node2 python3.12[79785]: ansible-systemd Invoked with name=quadlet-basic-network.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:32 managed-node2 systemd[1]: Starting quadlet-basic-network.service... ░░ Subject: A start job for unit quadlet-basic-network.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-network.service has begun execution. ░░ ░░ The job identifier is 3214. Feb 14 11:48:32 managed-node2 podman[79789]: 2026-02-14 11:48:32.926261081 -0500 EST m=+0.016760408 network create 753e73850896ed526bbf5b94858e3cd2708517eedee1b4c90d154f7a2349144f (name=quadlet-basic-name, type=bridge) Feb 14 11:48:32 managed-node2 quadlet-basic-network[79789]: quadlet-basic-name Feb 14 11:48:32 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:32 managed-node2 systemd[1]: Finished quadlet-basic-network.service. ░░ Subject: A start job for unit quadlet-basic-network.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-network.service has finished successfully. ░░ ░░ The job identifier is 3214. 
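Note the created network is named `quadlet-basic-name`, not the quadlet default `systemd-quadlet-basic`, so the unit file must set the name explicitly. A minimal sketch consistent with that (the rest of the file is not logged):

```ini
# Sketch: only NetworkName is inferable from the journal output above.
[Network]
NetworkName=quadlet-basic-name
```

By contrast, the "unused" network created just below keeps the default `systemd-<filename>` name, which is what a `.network` unit without `NetworkName=` produces.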
Feb 14 11:48:33 managed-node2 python3.12[79950]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:34 managed-node2 python3.12[80108]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:35 managed-node2 python3.12[80263]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/quadlet-basic-unused-network.network follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:48:35 managed-node2 python3.12[80388]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087714.978921-25930-142517737796637/.source.network dest=/etc/containers/systemd/quadlet-basic-unused-network.network owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=52c9d75ecaf81203cc1f1a3b1dd00fcd25067b01 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:36 managed-node2 python3.12[80543]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:36 managed-node2 systemd[1]: Reload requested from client PID 80544 ('systemctl') (unit session-8.scope)... Feb 14 11:48:36 managed-node2 systemd[1]: Reloading... Feb 14 11:48:36 managed-node2 systemd-rc-local-generator[80594]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 14 11:48:36 managed-node2 systemd[1]: Reloading finished in 213 ms. Feb 14 11:48:36 managed-node2 python3.12[80763]: ansible-systemd Invoked with name=quadlet-basic-unused-network-network.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:36 managed-node2 systemd[1]: Starting quadlet-basic-unused-network-network.service... ░░ Subject: A start job for unit quadlet-basic-unused-network-network.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-unused-network-network.service has begun execution. ░░ ░░ The job identifier is 3298. Feb 14 11:48:36 managed-node2 podman[80767]: 2026-02-14 11:48:36.941382834 -0500 EST m=+0.020181531 network create 01cf3213e3ed2d0e128267f68e75813ac56a211a4b375eb13d2a19c6244fb9f3 (name=systemd-quadlet-basic-unused-network, type=bridge) Feb 14 11:48:36 managed-node2 quadlet-basic-unused-network-network[80767]: systemd-quadlet-basic-unused-network Feb 14 11:48:36 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:36 managed-node2 systemd[1]: Finished quadlet-basic-unused-network-network.service. ░░ Subject: A start job for unit quadlet-basic-unused-network-network.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-unused-network-network.service has finished successfully. ░░ ░░ The job identifier is 3298. 
Feb 14 11:48:37 managed-node2 python3.12[80929]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:39 managed-node2 python3.12[81086]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:39 managed-node2 python3.12[81241]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/quadlet-basic-mysql.volume follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:48:39 managed-node2 python3.12[81366]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087719.2426136-26105-180291594378143/.source.volume dest=/etc/containers/systemd/quadlet-basic-mysql.volume owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=90a3571bfc7670328fe3f8fb625585613dbd9c4a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:40 managed-node2 python3.12[81521]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:40 managed-node2 systemd[1]: Reload requested from client PID 81522 ('systemctl') (unit session-8.scope)... Feb 14 11:48:40 managed-node2 systemd[1]: Reloading... Feb 14 11:48:40 managed-node2 systemd-rc-local-generator[81573]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 14 11:48:40 managed-node2 systemd[1]: Reloading finished in 217 ms. Feb 14 11:48:41 managed-node2 python3.12[81742]: ansible-systemd Invoked with name=quadlet-basic-mysql-volume.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:41 managed-node2 systemd[1]: Starting quadlet-basic-mysql-volume.service... ░░ Subject: A start job for unit quadlet-basic-mysql-volume.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-mysql-volume.service has begun execution. ░░ ░░ The job identifier is 3382. Feb 14 11:48:41 managed-node2 podman[81746]: 2026-02-14 11:48:41.283012966 -0500 EST m=+0.024739231 volume create quadlet-basic-mysql-name Feb 14 11:48:41 managed-node2 quadlet-basic-mysql-volume[81746]: quadlet-basic-mysql-name Feb 14 11:48:41 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:41 managed-node2 systemd[1]: Finished quadlet-basic-mysql-volume.service. ░░ Subject: A start job for unit quadlet-basic-mysql-volume.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-mysql-volume.service has finished successfully. ░░ ░░ The job identifier is 3382. 
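As with the network, the volume comes out as `quadlet-basic-mysql-name` rather than the default `systemd-quadlet-basic-mysql`, implying an explicit `VolumeName=` in the unit. A minimal sketch (nothing else about the file is logged):

```ini
# Sketch: only VolumeName is inferable from the journal output above.
[Volume]
VolumeName=quadlet-basic-mysql-name
```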
Feb 14 11:48:42 managed-node2 python3.12[81909]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:43 managed-node2 python3.12[82066]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:43 managed-node2 python3.12[82221]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/quadlet-basic-unused-volume.volume follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:48:44 managed-node2 python3.12[82346]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087723.576437-26316-237709762951197/.source.volume dest=/etc/containers/systemd/quadlet-basic-unused-volume.volume owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=fd0ae560360afa5541b866560b1e849d25e216ef backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:44 managed-node2 python3.12[82501]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:44 managed-node2 systemd[1]: Reload requested from client PID 82502 ('systemctl') (unit session-8.scope)... Feb 14 11:48:44 managed-node2 systemd[1]: Reloading... Feb 14 11:48:44 managed-node2 systemd-rc-local-generator[82557]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 14 11:48:44 managed-node2 systemd[1]: Reloading finished in 213 ms. Feb 14 11:48:45 managed-node2 python3.12[82722]: ansible-systemd Invoked with name=quadlet-basic-unused-volume-volume.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:45 managed-node2 systemd[1]: Starting quadlet-basic-unused-volume-volume.service... ░░ Subject: A start job for unit quadlet-basic-unused-volume-volume.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-unused-volume-volume.service has begun execution. ░░ ░░ The job identifier is 3466. Feb 14 11:48:45 managed-node2 podman[82726]: 2026-02-14 11:48:45.588887323 -0500 EST m=+0.024810141 volume create systemd-quadlet-basic-unused-volume Feb 14 11:48:45 managed-node2 quadlet-basic-unused-volume-volume[82726]: systemd-quadlet-basic-unused-volume Feb 14 11:48:45 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:45 managed-node2 systemd[1]: Finished quadlet-basic-unused-volume-volume.service. ░░ Subject: A start job for unit quadlet-basic-unused-volume-volume.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-unused-volume-volume.service has finished successfully. ░░ ░░ The job identifier is 3466. Feb 14 11:48:46 managed-node2 python3.12[82888]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:47 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:51 managed-node2 podman[83053]: 2026-02-14 11:48:51.154323799 -0500 EST m=+3.494317061 image pull-error quay.io/linux-system-roles/mysql:5.6 unable to copy from source docker://quay.io/linux-system-roles/mysql:5.6: copying system image from manifest list: reading blob sha256:82f6e5dc9ef6571b55539f02c80df6c7e3eafbe1533e780efee880fb528404f8: Digest did not match, expected sha256:82f6e5dc9ef6571b55539f02c80df6c7e3eafbe1533e780efee880fb528404f8, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 Feb 14 11:48:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:51 managed-node2 python3.12[83273]: ansible-ansible.legacy.command Invoked with _raw_params=set -x
set -o pipefail
exec 1>&2
#podman volume rm --all
#podman network prune -f
podman volume ls
podman network ls
podman secret ls
podman container ls
podman pod ls
podman images
systemctl list-units | grep quadlet
_uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
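The "got" digest in the pull-error above is worth recognizing on sight: it is the SHA-256 of zero bytes, meaning podman received no blob data at all rather than corrupted data. A quick check:

```python
import hashlib

# SHA-256 of empty input -- matches the "got" digest in the pull-error,
# indicating the registry returned an empty blob.
empty_digest = hashlib.sha256(b"").hexdigest()
print(empty_digest)
# → e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```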
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:52 managed-node2 python3.12[83475]: ansible-ansible.legacy.command Invoked with _raw_params=grep type=AVC /var/log/audit/audit.log _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:52 managed-node2 python3.12[83631]: ansible-ansible.legacy.command Invoked with _raw_params=journalctl -ex _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None TASK [End test when not booted] ************************************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:333 Saturday 14 February 2026 11:48:52 -0500 (0:00:00.469) 0:01:49.115 ***** META: end_host conditional evaluated to False, continuing execution for managed-node2 skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" } MSG: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [Cleanup user] ************************************************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:339 Saturday 14 February 2026 11:48:52 -0500 (0:00:00.012) 0:01:49.128 ***** included: fedora.linux_system_roles.podman for managed-node2 TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3 Saturday 14 February 2026 11:48:52 -0500 (0:00:00.057) 0:01:49.185 ***** included: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] **** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3 Saturday 14 February 2026 11:48:52 -0500 (0:00:00.089) 0:01:49.275 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11 Saturday 14 February 2026 11:48:52 -0500 (0:00:00.028) 0:01:49.303 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16 Saturday 14 February 2026 11:48:52 -0500 (0:00:00.020) 0:01:49.324 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if transactional-update exists in /sbin] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:23 Saturday 14 February 2026 11:48:52 -0500 (0:00:00.019) 0:01:49.343 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag if transactional-update exists] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:28 Saturday 14 February 2026 11:48:52 -0500 (0:00:00.021) 0:01:49.364 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:32 Saturday 14 February 2026 11:48:52 -0500 (0:00:00.020) 0:01:49.385 ***** skipping: [managed-node2] => (item=RedHat.yml) => { "__vars_file": "RedHat.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS.yml) => { "__vars_file": "CentOS.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.podman : Run systemctl] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:52 Saturday 14 February 2026 11:48:53 -0500 (0:00:00.044) 0:01:49.429 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was 
False" } TASK [fedora.linux_system_roles.podman : Require installed systemd] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:60 Saturday 14 February 2026 11:48:53 -0500 (0:00:00.019) 0:01:49.449 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate that systemd runtime operations are available] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:65 Saturday 14 February 2026 11:48:53 -0500 (0:00:00.021) 0:01:49.470 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Gather the package facts] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 Saturday 14 February 2026 11:48:53 -0500 (0:00:00.019) 0:01:49.489 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Enable copr if requested] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10 Saturday 14 February 2026 11:48:54 -0500 (0:00:01.022) 0:01:50.512 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_use_copr | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14 Saturday 14 February 2026 11:48:54 -0500 (0:00:00.021) 0:01:50.534 ***** skipping: [managed-node2] => 
{ "changed": false, "false_condition": "(__podman_packages | difference(ansible_facts.packages)) | list | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Notify user that reboot is needed to apply changes] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28
Saturday 14 February 2026 11:48:54 -0500 (0:00:00.027) 0:01:50.561 *****
skipping: [managed-node2] => { "false_condition": "__podman_is_transactional | d(false)" }

TASK [fedora.linux_system_roles.podman : Reboot transactional update systems] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:33
Saturday 14 February 2026 11:48:54 -0500 (0:00:00.021) 0:01:50.583 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if reboot is needed and not set] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:38
Saturday 14 February 2026 11:48:54 -0500 (0:00:00.020) 0:01:50.605 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get podman version] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:46
Saturday 14 February 2026 11:48:54 -0500 (0:00:00.020) 0:01:50.626 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.022411", "end": "2026-02-14 11:48:54.537585", "rc": 0, "start": "2026-02-14 11:48:54.515174" }

STDOUT:

podman version 5.6.0

TASK [fedora.linux_system_roles.podman : Set podman version] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:52
Saturday 14 February 2026 11:48:54 -0500 (0:00:00.397) 0:01:51.023 *****
ok: [managed-node2] => { "ansible_facts": { "podman_version": "5.6.0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56
Saturday 14 February 2026 11:48:54 -0500 (0:00:00.023) 0:01:51.047 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.2\", \"<\")", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:63
Saturday 14 February 2026 11:48:54 -0500 (0:00:00.022) 0:01:51.070 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.4\", \"<\")", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:73
Saturday 14 February 2026 11:48:54 -0500 (0:00:00.048) 0:01:51.119 *****
META: end_host conditional evaluated to False, continuing execution for managed-node2
skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" }

MSG:

end_host conditional evaluated to false, continuing execution for managed-node2

TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80
Saturday 14 February 2026 11:48:54 -0500 (0:00:00.038) 0:01:51.157 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__has_type_pod or __has_pod_file_ext or __has_pod_file_src_ext or __has_pod_template_src_ext or __has_pod_template_src_ext_j2", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:96
Saturday 14 February 2026 11:48:54 -0500 (0:00:00.051) 0:01:51.209 *****
META: end_host conditional evaluated to False, continuing execution for managed-node2
skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" }

MSG:

end_host conditional evaluated to false, continuing execution for managed-node2

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:109
Saturday 14 February 2026 11:48:54 -0500 (0:00:00.039) 0:01:51.248 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:48:54 -0500 (0:00:00.039) 0:01:51.288 *****
ok: [managed-node2] => { "ansible_facts": { "getent_passwd": { "user_quadlet_basic": [ "x", "1111", "1111", "", "/home/user_quadlet_basic", "/bin/bash" ] } }, "changed": false }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:48:55 -0500 (0:00:00.024) 0:01:51.700 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:48:55 -0500 (0:00:00.033) 0:01:51.734 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:48:55 -0500 (0:00:00.380) 0:01:52.114 *****
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:48:55 -0500 (0:00:00.383) 0:01:52.498 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "user_quadlet_basic" ], "delta": "0:00:00.004143", "end": "2026-02-14 11:48:56.015524", "rc": 0, "start": "2026-02-14 11:48:56.011381" }

STDOUT:

0: user_quadlet_basic 589824 65536

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.387) 0:01:52.885 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "-g", "user_quadlet_basic" ], "delta": "0:00:00.006029", "end": "2026-02-14 11:48:56.402953", "rc": 0, "start": "2026-02-14 11:48:56.396924" }

STDOUT:

0: user_quadlet_basic 589824 65536

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.039) 0:01:52.925 *****
ok: [managed-node2] => { "ansible_facts": { "podman_subgid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } }, "podman_subuid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } } }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.022) 0:01:52.947 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.021) 0:01:52.968 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.021) 0:01:52.989 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.022) 0:01:53.012 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.079) 0:01:53.091 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set config file paths] ****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:115
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.053) 0:01:53.145 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_container_conf_file": "/home/user_quadlet_basic/.config/containers/containers.conf.d/50-systemroles.conf", "__podman_parent_mode": "0700", "__podman_parent_path": "/home/user_quadlet_basic/.config/containers", "__podman_policy_json_file": "/home/user_quadlet_basic/.config/containers/policy.json", "__podman_registries_conf_file": "/home/user_quadlet_basic/.config/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/home/user_quadlet_basic/.config/containers/storage.conf" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Handle container.conf.d] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:126
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.039) 0:01:53.184 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.020) 0:01:53.205 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Update container config file] *********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.021) 0:01:53.226 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] *************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.039) 0:01:53.266 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.020) 0:01:53.286 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Update registries config file] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.019) 0:01:53.305 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Handle storage.conf] ******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:132
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.039) 0:01:53.345 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:7
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.020) 0:01:53.366 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Update storage config file] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:15
Saturday 14 February 2026 11:48:56 -0500 (0:00:00.019) 0:01:53.385 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Handle policy.json] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:135
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.041) 0:01:53.427 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:8
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.022) 0:01:53.450 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:16
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.020) 0:01:53.471 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get the existing policy.json] *********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:21
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.021) 0:01:53.492 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Write new policy.json file] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:27
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.020) 0:01:53.513 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" }

TASK [Manage firewall for specified ports] *************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:141
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.020) 0:01:53.533 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_firewall | length > 0", "skip_reason": "Conditional result was False" }

TASK [Manage selinux for specified ports] **************************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:148
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.020) 0:01:53.553 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "podman_selinux_ports | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:155
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.022) 0:01:53.575 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] *******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:159
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.016) 0:01:53.592 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle credential files - present] ****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:168
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.016) 0:01:53.609 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle secrets] ***********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:177
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.080) 0:01:53.689 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 => (item=(censored due to no_log))
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 => (item=(censored due to no_log))

TASK [fedora.linux_system_roles.podman : Set variables part 1] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.024) 0:01:53.714 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_user": "user_quadlet_basic" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.036) 0:01:53.751 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 => (item=(censored due to no_log))

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.024) 0:01:53.775 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.023) 0:01:53.799 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.032) 0:01:53.831 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.022) 0:01:53.853 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.021) 0:01:53.875 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.020) 0:01:53.895 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.021) 0:01:53.916 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.021) 0:01:53.938 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.021) 0:01:53.959 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.020) 0:01:53.979 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.020) 0:01:54.000 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.071) 0:01:54.071 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set variables part 2] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:15
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.038) 0:01:54.110 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_rootless": true, "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:21
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.037) 0:01:54.147 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.025) 0:01:54.173 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.023) 0:01:54.196 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 14 February 2026 11:48:57 -0500 (0:00:00.029) 0:01:54.226 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [ "user_quadlet_basic" ] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:26
Saturday 14 February 2026 11:48:58 -0500 (0:00:00.386) 0:01:54.612 *****
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087663.6274612, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1771087699.8416822, "dev": 45, "device_type": 0, "executable": true, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 1, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mimetype": "inode/directory", "mode": "0700", "mtime": 1771087699.8416822, "nlink": 7, "path": "/run/user/1111", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 160, "uid": 1111, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Manage each secret] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42
Saturday 14 February 2026 11:48:58 -0500 (0:00:00.551) 0:01:55.163 *****
changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true }

TASK [fedora.linux_system_roles.podman : Set variables part 1] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3
Saturday 14 February 2026 11:48:58 -0500 (0:00:00.024) 0:01:55.188 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_user": "user_quadlet_basic" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7
Saturday 14 February 2026 11:48:58 -0500 (0:00:00.036) 0:01:55.224 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 => (item=(censored due to no_log))

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:48:58 -0500 (0:00:00.027) 0:01:55.252 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:48:58 -0500 (0:00:00.024) 0:01:55.276 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:48:58 -0500 (0:00:00.037) 0:01:55.314 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:48:58 -0500 (0:00:00.024) 0:01:55.339 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:48:58 -0500 (0:00:00.037) 0:01:55.377 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:48:58 -0500 (0:00:00.035) 0:01:55.412 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:48:59 -0500 (0:00:00.025) 0:01:55.437 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:48:59 -0500 (0:00:00.025) 0:01:55.462 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:48:59 -0500 (0:00:00.023) 0:01:55.485 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:48:59 -0500 (0:00:00.025) 0:01:55.511 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:48:59 -0500 (0:00:00.023) 0:01:55.534 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:48:59 -0500 (0:00:00.024) 0:01:55.558 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set variables part 2] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:15
Saturday 14 February 2026 11:48:59 -0500 (0:00:00.035) 0:01:55.594 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_rootless": true, "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:21
Saturday 14 February 2026 11:48:59 -0500 (0:00:00.035) 0:01:55.630 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13
Saturday 14 February 2026 11:48:59 -0500 (0:00:00.025) 0:01:55.655 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 14 February 2026 11:48:59 -0500 (0:00:00.023) 0:01:55.678 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 14 February 2026 11:48:59 -0500 (0:00:00.027) 0:01:55.705 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [ "user_quadlet_basic" ] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:26
Saturday 14 February 2026 11:48:59 -0500 (0:00:00.396) 0:01:56.102 *****
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087663.6274612, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1771087699.8416822, "dev": 45, "device_type": 0, "executable": true, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 1, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mimetype": "inode/directory", "mode": "0700", "mtime": 1771087699.8416822, "nlink": 7, "path": "/run/user/1111", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 160, "uid": 1111, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Manage each secret] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42
Saturday 14 February 2026 11:49:00 -0500 (0:00:00.536) 0:01:56.639 *****
changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true }

TASK [fedora.linux_system_roles.podman : Handle Kubernetes specifications] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:184
Saturday 14 February 2026 11:49:00 -0500 (0:00:00.018) 0:01:56.657 *****
skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Handle Quadlet specifications] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:191
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log))

TASK
[fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:49:00 -0500 (0:00:00.111) 0:01:56.768 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Container": { "ContainerName": "quadlet-basic-mysql-name", "Environment": [ "FOO=/bin/busybox-extras", "BAZ=test" ], "Image": "quay.io/linux-system-roles/mysql:5.6", "Network": "quadlet-basic.network", "PodmanArgs": "--secret=mysql_container_root_password,type=env,target=MYSQL_ROOT_PASSWORD --secret=json_secret,type=mount,target=/tmp/test.json", "Volume": "quadlet-basic-mysql.volume:/var/lib/mysql" }, "Install": { "WantedBy": "default.target" } }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:49:00 -0500 (0:00:00.031) 0:01:56.800 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "user_quadlet_basic" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:49:00 -0500 (0:00:00.031) 0:01:56.831 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:49:00 -0500 (0:00:00.020) 0:01:56.852 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-mysql", "__podman_quadlet_type": "container", "__podman_rootless": true }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:49:00 -0500 (0:00:00.037) 0:01:56.889 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:49:00 -0500 (0:00:00.114) 0:01:57.003 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:49:00 -0500 (0:00:00.030) 0:01:57.034 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:49:00 -0500 (0:00:00.025) 0:01:57.059 
***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:49:00 -0500 (0:00:00.038) 0:01:57.097 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:49:01 -0500 (0:00:00.399) 0:01:57.497 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "user_quadlet_basic" ], "delta": "0:00:00.004673", "end": "2026-02-14 11:49:01.408869", "rc": 0, "start": "2026-02-14 11:49:01.404196" } STDOUT: 0: user_quadlet_basic 589824 65536 TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 
February 2026 11:49:01 -0500 (0:00:00.394) 0:01:57.891 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "-g", "user_quadlet_basic" ], "delta": "0:00:00.006024", "end": "2026-02-14 11:49:01.814754", "rc": 0, "start": "2026-02-14 11:49:01.808730" } STDOUT: 0: user_quadlet_basic 589824 65536 TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:49:01 -0500 (0:00:00.412) 0:01:58.304 ***** ok: [managed-node2] => { "ansible_facts": { "podman_subgid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } }, "podman_subuid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:49:01 -0500 (0:00:00.043) 0:01:58.348 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:49:01 -0500 (0:00:00.032) 0:01:58.380 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:49:01 -0500 (0:00:00.029) 0:01:58.410 ***** skipping: [managed-node2] => { 
"changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:49:02 -0500 (0:00:00.024) 0:01:58.435 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:49:02 -0500 (0:00:00.027) 0:01:58.462 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:49:02 -0500 (0:00:00.026) 0:01:58.489 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [ "quay.io/linux-system-roles/mysql:5.6" ], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-mysql.service", "__podman_systemd_scope": "user", "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:49:02 -0500 (0:00:00.041) 0:01:58.531 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": 
"/home/user_quadlet_basic/.config/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:49:02 -0500 (0:00:00.033) 0:01:58.565 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:49:02 -0500 (0:00:00.020) 0:01:58.586 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [ "quay.io/linux-system-roles/mysql:5.6" ], "__podman_quadlet_file": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:49:02 -0500 (0:00:00.069) 0:01:58.656 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:49:02 -0500 (0:00:00.031) 0:01:58.688 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 14 February 2026 11:49:02 -0500 (0:00:00.063) 0:01:58.751 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087663.6274612, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1771087699.8416822, "dev": 45, "device_type": 0, "executable": true, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 1, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mimetype": "inode/directory", "mode": "0700", "mtime": 1771087699.8416822, "nlink": 7, "path": "/run/user/1111", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 160, "uid": 1111, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 14 February 2026 11:49:02 -0500 (0:00:00.439) 0:01:59.190 ***** changed: [managed-node2] => { "changed": true, "enabled": false, "failed_when_result": false, "name": "quadlet-basic-mysql.service", "state": "stopped", "status": { "AccessSELinuxContext": "unconfined_u:object_r:user_tmp_t:s0", "ActiveEnterTimestamp": "Sat 2026-02-14 11:48:19 EST", "ActiveEnterTimestampMonotonic": "584110668", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "podman-user-wait-network-online.service quadlet-basic-network.service -.mount app.slice basic.target quadlet-basic-mysql-volume.service run-user-1111.mount", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2026-02-14 11:48:19 EST", "AssertTimestampMonotonic": "583916539", 
"Before": "shutdown.target default.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "2809781000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2026-02-14 11:48:19 EST", "ConditionTimestampMonotonic": "583916534", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroup": "/user.slice/user-1111.slice/user@1111.service/app.slice/quadlet-basic-mysql.service", "ControlGroupId": "13076", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "yes", "DelegateControllers": "cpu cpuset io memory pids", "Description": "quadlet-basic-mysql.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", 
"EffectiveTasksMax": "21804", "Environment": "PODMAN_SYSTEMD_UNIT=quadlet-basic-mysql.service", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainHandoffTimestampMonotonic": "0", "ExecMainPID": "77042", "ExecMainStartTimestamp": "Sat 2026-02-14 11:48:19 EST", "ExecMainStartTimestampMonotonic": "584050001", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman run --name quadlet-basic-mysql-name --replace --rm --cgroups=split --network quadlet-basic-name --sdnotify=conmon -d -v quadlet-basic-mysql-name:/var/lib/mysql --env BAZ=test --env FOO=/bin/busybox-extras --secret=mysql_container_root_password,type=env,target=MYSQL_ROOT_PASSWORD --secret=json_secret,type=mount,target=/tmp/test.json quay.io/linux-system-roles/mysql:5.6 ; ignore_errors=no ; start_time=[Sat 2026-02-14 11:48:19 EST] ; stop_time=[n/a] ; pid=77008 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman run --name quadlet-basic-mysql-name --replace --rm --cgroups=split --network quadlet-basic-name --sdnotify=conmon -d -v quadlet-basic-mysql-name:/var/lib/mysql --env BAZ=test --env FOO=/bin/busybox-extras --secret=mysql_container_root_password,type=env,target=MYSQL_ROOT_PASSWORD --secret=json_secret,type=mount,target=/tmp/test.json quay.io/linux-system-roles/mysql:5.6 ; flags= ; start_time=[Sat 2026-02-14 11:48:19 EST] ; stop_time=[n/a] ; pid=77008 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman rm -v -f -i quadlet-basic-mysql-name ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman rm -v -f -i quadlet-basic-mysql-name ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopPost": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman rm -v -f -i quadlet-basic-mysql-name ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 
; code=(null) ; status=0/0 }", "ExecStopPostEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman rm -v -f -i quadlet-basic-mysql-name ; flags=ignore-failure ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/user/1111/systemd/generator/quadlet-basic-mysql.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-mysql.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2026-02-14 11:48:19 EST", "InactiveExitTimestampMonotonic": "583917988", "InvocationID": "1c33da15d62b4ccab0c5233ef0ba9a0a", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "inherit", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": 
"819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "77042", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "2679803904", "MemoryCurrent": "616071168", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "630767616", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "0", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "0", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-mysql.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "all", "OOMPolicy": "continue", "OOMScoreAdjust": "200", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": 
"no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "app.slice quadlet-basic-network.service basic.target quadlet-basic-mysql-volume.service", "RequiresMountsFor": "/run/user/1111/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "app.slice", "SourcePath": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", 
"StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2026-02-14 11:48:19 EST", "StateChangeTimestampMonotonic": "584110668", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-mysql", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "22", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "1min 30s", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "WantedBy": "default.target", "Wants": "podman-user-wait-network-online.service", "WantsMountsFor": "/home/user_quadlet_basic", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0", "WorkingDirectory": "!/home/user_quadlet_basic" } } TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:35 Saturday 14 February 2026 11:49:05 -0500 (0:00:02.396) 0:02:01.587 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087699.002677, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "0b6cac7929623f1059e78ef39b8b0a25169b28a6", "ctime": 1771087698.397025, "dev": 51714, "device_type": 0, "executable": false, 
"exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 155189487, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1771087698.1096716, "nlink": 1, "path": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 448, "uid": 1111, "version": "892553537", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:40 Saturday 14 February 2026 11:49:05 -0500 (0:00:00.423) 0:02:02.011 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Slurp quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:6 Saturday 14 February 2026 11:49:05 -0500 (0:00:00.040) 0:02:02.052 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:12 Saturday 14 February 2026 11:49:06 -0500 (0:00:00.382) 0:02:02.434 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet yaml file] 
************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:20 Saturday 14 February 2026 11:49:06 -0500 (0:00:00.059) 0:02:02.494 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Reset raw variable] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:28 Saturday 14 February 2026 11:49:06 -0500 (0:00:00.041) 0:02:02.535 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_raw": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:44 Saturday 14 February 2026 11:49:06 -0500 (0:00:00.038) 0:02:02.574 ***** changed: [managed-node2] => { "changed": true, "path": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container", "state": "absent" } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:52 Saturday 14 February 2026 11:49:06 -0500 (0:00:00.425) 0:02:03.000 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:60 Saturday 14 February 2026 11:49:06 -0500 (0:00:00.036) 0:02:03.036 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Remove 
managed resource] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:70 Saturday 14 February 2026 11:49:07 -0500 (0:00:00.680) 0:02:03.716 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 14 February 2026 11:49:07 -0500 (0:00:00.515) 0:02:04.232 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:133 Saturday 14 February 2026 11:49:07 -0500 (0:00:00.034) 0:02:04.267 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 14 February 2026 11:49:07 -0500 (0:00:00.024) 0:02:04.292 ***** changed: [managed-node2] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.242577", "end": "2026-02-14 11:49:08.513682", "rc": 0, "start": "2026-02-14 11:49:08.271105" } STDOUT: dd3b2a5dcb48ff61113592ed5ddd762581be4387c7bc552375a2159422aa6bf5 TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:148 Saturday 14 February 2026 
11:49:08 -0500 (0:00:00.708) 0:02:05.000 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.037) 0:02:05.037 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.025) 0:02:05.063 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.023) 0:02:05.087 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [ "user_quadlet_basic" ] }, "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:158 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.028) 0:02:05.116 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:167 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.022) 0:02:05.139 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:176 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.073) 0:02:05.212 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:185 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.025) 0:02:05.238 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:194 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.024) 0:02:05.263 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - pods] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:204 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.023) 0:02:05.286 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due 
to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.025) 0:02:05.312 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.023) 0:02:05.336 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.019) 0:02:05.355 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Volume": {} }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:49:08 -0500 (0:00:00.031) 0:02:05.387 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "user_quadlet_basic" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no 
quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:49:09 -0500 (0:00:00.047) 0:02:05.434 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:49:09 -0500 (0:00:00.023) 0:02:05.458 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-unused-volume", "__podman_quadlet_type": "volume", "__podman_rootless": true }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:49:09 -0500 (0:00:00.038) 0:02:05.496 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:49:09 -0500 (0:00:00.039) 0:02:05.536 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:49:09 -0500 
(0:00:00.026) 0:02:05.563 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:49:09 -0500 (0:00:00.025) 0:02:05.589 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:49:09 -0500 (0:00:00.034) 0:02:05.623 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:49:09 -0500 (0:00:00.397) 
0:02:06.021 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "user_quadlet_basic" ], "delta": "0:00:00.004102", "end": "2026-02-14 11:49:09.951361", "rc": 0, "start": "2026-02-14 11:49:09.947259" } STDOUT: 0: user_quadlet_basic 589824 65536 TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.418) 0:02:06.439 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "-g", "user_quadlet_basic" ], "delta": "0:00:00.005789", "end": "2026-02-14 11:49:10.362881", "rc": 0, "start": "2026-02-14 11:49:10.357092" } STDOUT: 0: user_quadlet_basic 589824 65536 TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.417) 0:02:06.856 ***** ok: [managed-node2] => { "ansible_facts": { "podman_subgid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } }, "podman_subuid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.044) 0:02:06.901 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.031) 
0:02:06.932 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.025) 0:02:06.958 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.027) 0:02:06.985 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.026) 0:02:07.011 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.024) 0:02:07.036 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-unused-volume-volume.service", 
"__podman_systemd_scope": "user", "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.038) 0:02:07.075 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/home/user_quadlet_basic/.config/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.029) 0:02:07.104 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.019) 0:02:07.124 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.069) 0:02:07.193 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.032) 0:02:07.226 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 14 February 2026 11:49:10 -0500 (0:00:00.094) 0:02:07.321 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087663.6274612, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1771087699.8416822, "dev": 45, "device_type": 0, "executable": true, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 1, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mimetype": "inode/directory", "mode": "0700", "mtime": 1771087699.8416822, "nlink": 7, "path": "/run/user/1111", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 160, "uid": 1111, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 14 February 2026 11:49:11 -0500 (0:00:00.428) 0:02:07.749 ***** changed: [managed-node2] => { "changed": true, "enabled": false, "failed_when_result": false, "name": "quadlet-basic-unused-volume-volume.service", "state": "stopped", "status": { "AccessSELinuxContext": "unconfined_u:object_r:user_tmp_t:s0", "ActiveEnterTimestamp": "Sat 2026-02-14 
11:48:07 EST", "ActiveEnterTimestampMonotonic": "571635437", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "-.mount basic.target app.slice run-user-1111.mount podman-user-wait-network-online.service", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2026-02-14 11:48:07 EST", "AssertTimestampMonotonic": "571592748", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "31238000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2026-02-14 11:48:07 EST", "ConditionTimestampMonotonic": "571592744", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "12754", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", 
"DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-unused-volume-volume.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "1", "ExecMainExitTimestamp": "Sat 2026-02-14 11:48:07 EST", "ExecMainExitTimestampMonotonic": "571635220", "ExecMainHandoffTimestamp": "Sat 2026-02-14 11:48:07 EST", "ExecMainHandoffTimestampMonotonic": "571605167", "ExecMainPID": "75055", "ExecMainStartTimestamp": "Sat 2026-02-14 11:48:07 EST", "ExecMainStartTimestampMonotonic": "571593332", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore systemd-quadlet-basic-unused-volume ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore systemd-quadlet-basic-unused-volume ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/user/1111/systemd/generator/quadlet-basic-unused-volume-volume.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no 
data]", "Id": "quadlet-basic-unused-volume-volume.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2026-02-14 11:48:07 EST", "InactiveExitTimestampMonotonic": "571593839", "InvocationID": "d5c9384453014f44bdc045afaa914fb2", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "inherit", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3495395328", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "14393344", 
"MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-unused-volume-volume.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "200", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "app.slice basic.target", "RequiresMountsFor": "/run/user/1111/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": 
"root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "app.slice", "SourcePath": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2026-02-14 11:48:07 EST", "StateChangeTimestampMonotonic": "571635437", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-unused-volume-volume", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", 
"UnitFileState": "generated", "UtmpMode": "init", "Wants": "podman-user-wait-network-online.service", "WantsMountsFor": "/home/user_quadlet_basic", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0", "WorkingDirectory": "!/home/user_quadlet_basic" } } TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:35 Saturday 14 February 2026 11:49:12 -0500 (0:00:00.711) 0:02:08.461 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087686.7226021, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "fd0ae560360afa5541b866560b1e849d25e216ef", "ctime": 1771087686.1158233, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 490733777, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1771087685.8285966, "nlink": 1, "path": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 53, "uid": 1111, "version": "1976892808", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:40 Saturday 14 February 2026 11:49:12 -0500 (0:00:00.416) 0:02:08.878 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Slurp 
quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:6 Saturday 14 February 2026 11:49:12 -0500 (0:00:00.033) 0:02:08.912 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:12 Saturday 14 February 2026 11:49:12 -0500 (0:00:00.370) 0:02:09.283 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet yaml file] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:20 Saturday 14 February 2026 11:49:12 -0500 (0:00:00.032) 0:02:09.315 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Reset raw variable] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:28 Saturday 14 February 2026 11:49:12 -0500 (0:00:00.022) 0:02:09.338 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_raw": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:44 Saturday 14 February 2026 11:49:12 -0500 (0:00:00.021) 0:02:09.359 ***** changed: [managed-node2] => { "changed": true, "path": 
"/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume", "state": "absent" } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:52 Saturday 14 February 2026 11:49:13 -0500 (0:00:00.384) 0:02:09.744 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:60 Saturday 14 February 2026 11:49:13 -0500 (0:00:00.076) 0:02:09.821 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:70 Saturday 14 February 2026 11:49:14 -0500 (0:00:00.655) 0:02:10.476 ***** changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 14 February 2026 11:49:14 -0500 (0:00:00.520) 0:02:10.997 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:133 Saturday 14 February 2026 11:49:14 
-0500 (0:00:00.034) 0:02:11.032 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 14 February 2026 11:49:14 -0500 (0:00:00.025) 0:02:11.057 ***** changed: [managed-node2] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.037075", "end": "2026-02-14 11:49:15.074383", "rc": 0, "start": "2026-02-14 11:49:15.037308" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:148 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.504) 0:02:11.562 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.046) 0:02:11.608 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.032) 0:02:11.641 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger 
cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.024) 0:02:11.665 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [ "user_quadlet_basic" ] }, "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:158 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.029) 0:02:11.694 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:167 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.025) 0:02:11.719 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:176 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.025) 0:02:11.745 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:185 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.025) 0:02:11.770 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | 
d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:194 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.024) 0:02:11.794 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - pods] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:204 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.024) 0:02:11.819 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.025) 0:02:11.844 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.024) 0:02:11.868 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 
Saturday 14 February 2026 11:49:15 -0500 (0:00:00.019) 0:02:11.888 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Volume": { "VolumeName": "quadlet-basic-mysql-name" } }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.031) 0:02:11.919 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "user_quadlet_basic" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.031) 0:02:11.951 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.022) 0:02:11.973 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-mysql", "__podman_quadlet_type": "volume", "__podman_rootless": true }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.036) 0:02:12.010 
***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.039) 0:02:12.049 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.024) 0:02:12.074 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.024) 0:02:12.098 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:49:15 -0500 (0:00:00.031) 0:02:12.129 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 
51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:49:16 -0500 (0:00:00.387) 0:02:12.516 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "user_quadlet_basic" ], "delta": "0:00:00.004378", "end": "2026-02-14 11:49:16.429280", "rc": 0, "start": "2026-02-14 11:49:16.424902" } STDOUT: 0: user_quadlet_basic 589824 65536 TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:49:16 -0500 (0:00:00.395) 0:02:12.912 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "-g", "user_quadlet_basic" ], "delta": "0:00:00.006675", "end": "2026-02-14 11:49:16.821110", "rc": 0, "start": "2026-02-14 11:49:16.814435" } STDOUT: 0: user_quadlet_basic 589824 65536 TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:49:16 -0500 (0:00:00.394) 0:02:13.307 ***** ok: [managed-node2] => { "ansible_facts": { 
"podman_subgid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } }, "podman_subuid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:49:16 -0500 (0:00:00.038) 0:02:13.345 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:49:16 -0500 (0:00:00.024) 0:02:13.370 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:49:16 -0500 (0:00:00.021) 0:02:13.392 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:49:16 -0500 (0:00:00.022) 0:02:13.414 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid 
file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:49:17 -0500 (0:00:00.021) 0:02:13.435 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:49:17 -0500 (0:00:00.023) 0:02:13.459 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-mysql-volume.service", "__podman_systemd_scope": "user", "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:49:17 -0500 (0:00:00.039) 0:02:13.498 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/home/user_quadlet_basic/.config/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:49:17 -0500 (0:00:00.030) 0:02:13.528 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:49:17 -0500 (0:00:00.018) 0:02:13.547 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:49:17 -0500 (0:00:00.068) 0:02:13.615 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:49:17 -0500 (0:00:00.027) 0:02:13.643 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 14 February 2026 11:49:17 -0500 (0:00:00.121) 0:02:13.764 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087663.6274612, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1771087699.8416822, "dev": 45, "device_type": 0, "executable": true, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 1, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": 
false, "mimetype": "inode/directory", "mode": "0700", "mtime": 1771087699.8416822, "nlink": 7, "path": "/run/user/1111", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 160, "uid": 1111, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 14 February 2026 11:49:17 -0500 (0:00:00.393) 0:02:14.157 ***** changed: [managed-node2] => { "changed": true, "enabled": false, "failed_when_result": false, "name": "quadlet-basic-mysql-volume.service", "state": "stopped", "status": { "AccessSELinuxContext": "unconfined_u:object_r:user_tmp_t:s0", "ActiveEnterTimestamp": "Sat 2026-02-14 11:48:02 EST", "ActiveEnterTimestampMonotonic": "566666269", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "app.slice run-user-1111.mount podman-user-wait-network-online.service -.mount basic.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2026-02-14 11:48:02 EST", "AssertTimestampMonotonic": "566628617", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "30370000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable 
cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2026-02-14 11:48:02 EST", "ConditionTimestampMonotonic": "566628613", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "12715", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-mysql-volume.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "1", "ExecMainExitTimestamp": "Sat 2026-02-14 11:48:02 EST", "ExecMainExitTimestampMonotonic": "566666055", "ExecMainHandoffTimestamp": "Sat 2026-02-14 11:48:02 EST", "ExecMainHandoffTimestampMonotonic": "566637179", "ExecMainPID": "73559", "ExecMainStartTimestamp": "Sat 2026-02-14 11:48:02 EST", "ExecMainStartTimestampMonotonic": "566629183", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore quadlet-basic-mysql-name ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore quadlet-basic-mysql-name ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": 
"root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/user/1111/systemd/generator/quadlet-basic-mysql-volume.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-mysql-volume.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2026-02-14 11:48:02 EST", "InactiveExitTimestampMonotonic": "566629670", "InvocationID": "7af9acacf797474c8d950274f7e3e9a7", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "inherit", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", 
"LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3495399424", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "14323712", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-mysql-volume.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "200", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", 
"ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "basic.target app.slice", "RequiresMountsFor": "/run/user/1111/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "app.slice", "SourcePath": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2026-02-14 11:48:02 EST", "StateChangeTimestampMonotonic": "566666269", 
"StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-mysql-volume", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "podman-user-wait-network-online.service", "WantsMountsFor": "/home/user_quadlet_basic", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0", "WorkingDirectory": "!/home/user_quadlet_basic" } } TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:35 Saturday 14 February 2026 11:49:18 -0500 (0:00:00.697) 0:02:14.855 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087681.7435718, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "90a3571bfc7670328fe3f8fb625585613dbd9c4a", "ctime": 1771087681.1299868, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 402653402, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1771087680.8475661, "nlink": 1, 
"path": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 89, "uid": 1111, "version": "900281072", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:40 Saturday 14 February 2026 11:49:18 -0500 (0:00:00.390) 0:02:15.245 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Slurp quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:6 Saturday 14 February 2026 11:49:18 -0500 (0:00:00.033) 0:02:15.278 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:12 Saturday 14 February 2026 11:49:19 -0500 (0:00:00.371) 0:02:15.649 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet yaml file] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:20 Saturday 14 February 2026 11:49:19 -0500 (0:00:00.031) 0:02:15.681 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' 
was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Reset raw variable] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:28 Saturday 14 February 2026 11:49:19 -0500 (0:00:00.022) 0:02:15.704 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_raw": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:44 Saturday 14 February 2026 11:49:19 -0500 (0:00:00.029) 0:02:15.733 ***** changed: [managed-node2] => { "changed": true, "path": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume", "state": "absent" } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:52 Saturday 14 February 2026 11:49:19 -0500 (0:00:00.388) 0:02:16.121 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:60 Saturday 14 February 2026 11:49:19 -0500 (0:00:00.021) 0:02:16.143 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:70 Saturday 14 February 2026 11:49:20 -0500 (0:00:00.644) 0:02:16.787 ***** changed: [managed-node2] => { "censored": "the output has been hidden due to the fact 
that 'no_log: true' was specified for this result", "changed": true } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 14 February 2026 11:49:20 -0500 (0:00:00.525) 0:02:17.312 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:133 Saturday 14 February 2026 11:49:20 -0500 (0:00:00.034) 0:02:17.347 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 14 February 2026 11:49:20 -0500 (0:00:00.024) 0:02:17.371 ***** changed: [managed-node2] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.039245", "end": "2026-02-14 11:49:21.381178", "rc": 0, "start": "2026-02-14 11:49:21.341933" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:148 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.496) 0:02:17.867 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 
14 February 2026 11:49:21 -0500 (0:00:00.036) 0:02:17.904 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.024) 0:02:17.928 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.024) 0:02:17.952 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [ "user_quadlet_basic" ] }, "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:158 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.028) 0:02:17.980 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:167 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.023) 0:02:18.004 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and 
debugging - containers] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:176 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.024) 0:02:18.029 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:185 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.022) 0:02:18.052 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:194 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.022) 0:02:18.074 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - pods] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:204 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.023) 0:02:18.098 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.025) 0:02:18.124 ***** skipping: [managed-node2] => { 
"changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.023) 0:02:18.147 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.019) 0:02:18.167 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Network": {} }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.029) 0:02:18.196 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "user_quadlet_basic" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.029) 0:02:18.226 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.020) 0:02:18.247 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-unused-network", "__podman_quadlet_type": "network", "__podman_rootless": true }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.035) 0:02:18.283 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.039) 0:02:18.322 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.024) 0:02:18.347 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.023) 0:02:18.370 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:49:21 -0500 (0:00:00.034) 0:02:18.405 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:49:22 -0500 (0:00:00.391) 0:02:18.796 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "user_quadlet_basic" ], "delta": "0:00:00.004163", "end": "2026-02-14 11:49:22.710251", "rc": 0, "start": "2026-02-14 11:49:22.706088" } STDOUT: 0: user_quadlet_basic 589824 65536 TASK [fedora.linux_system_roles.podman : Check 
with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:49:22 -0500 (0:00:00.395) 0:02:19.192 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "-g", "user_quadlet_basic" ], "delta": "0:00:00.006115", "end": "2026-02-14 11:49:23.095692", "rc": 0, "start": "2026-02-14 11:49:23.089577" } STDOUT: 0: user_quadlet_basic 589824 65536 TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.386) 0:02:19.578 ***** ok: [managed-node2] => { "ansible_facts": { "podman_subgid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } }, "podman_subuid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.040) 0:02:19.619 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.073) 0:02:19.692 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.024) 0:02:19.717 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.023) 0:02:19.741 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.023) 0:02:19.764 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.024) 0:02:19.788 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-unused-network-network.service", "__podman_systemd_scope": "user", "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.040) 0:02:19.829 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/home/user_quadlet_basic/.config/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.042) 0:02:19.872 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.020) 0:02:19.892 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.071) 0:02:19.964 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.028) 0:02:19.992 ***** included: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 14 February 2026 11:49:23 -0500 (0:00:00.055) 0:02:20.047 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087663.6274612, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1771087699.8416822, "dev": 45, "device_type": 0, "executable": true, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 1, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mimetype": "inode/directory", "mode": "0700", "mtime": 1771087699.8416822, "nlink": 7, "path": "/run/user/1111", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 160, "uid": 1111, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 14 February 2026 11:49:24 -0500 (0:00:00.391) 0:02:20.439 ***** changed: [managed-node2] => { "changed": true, "enabled": false, "failed_when_result": false, "name": "quadlet-basic-unused-network-network.service", "state": "stopped", "status": { "AccessSELinuxContext": "unconfined_u:object_r:user_tmp_t:s0", "ActiveEnterTimestamp": "Sat 2026-02-14 11:47:57 EST", "ActiveEnterTimestampMonotonic": "561419385", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "-.mount run-user-1111.mount basic.target 
podman-user-wait-network-online.service app.slice", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2026-02-14 11:47:57 EST", "AssertTimestampMonotonic": "561383444", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "28669000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2026-02-14 11:47:57 EST", "ConditionTimestampMonotonic": "561383440", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "12676", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-unused-network-network.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", 
"EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "1", "ExecMainExitTimestamp": "Sat 2026-02-14 11:47:57 EST", "ExecMainExitTimestampMonotonic": "561419156", "ExecMainHandoffTimestamp": "Sat 2026-02-14 11:47:57 EST", "ExecMainHandoffTimestampMonotonic": "561395325", "ExecMainPID": "72064", "ExecMainStartTimestamp": "Sat 2026-02-14 11:47:57 EST", "ExecMainStartTimestampMonotonic": "561383991", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore systemd-quadlet-basic-unused-network ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore systemd-quadlet-basic-unused-network ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/user/1111/systemd/generator/quadlet-basic-unused-network-network.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-unused-network-network.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 
2026-02-14 11:47:57 EST", "InactiveExitTimestampMonotonic": "561384470", "InvocationID": "943b56cf7b8d48388829570ed2eb76c9", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "inherit", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3622768640", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "14069760", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "[not 
set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-unused-network-network.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "200", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "app.slice basic.target", "RequiresMountsFor": "/run/user/1111/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": 
"root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "app.slice", "SourcePath": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2026-02-14 11:47:57 EST", "StateChangeTimestampMonotonic": "561419385", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-unused-network-network", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", 
"UnitFileState": "generated", "UtmpMode": "init", "Wants": "podman-user-wait-network-online.service", "WantsMountsFor": "/home/user_quadlet_basic", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0", "WorkingDirectory": "!/home/user_quadlet_basic" } } TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:35 Saturday 14 February 2026 11:49:24 -0500 (0:00:00.693) 0:02:21.133 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087676.4935396, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "52c9d75ecaf81203cc1f1a3b1dd00fcd25067b01", "ctime": 1771087675.88735, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 322961616, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1771087675.586534, "nlink": 1, "path": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 54, "uid": 1111, "version": "685548365", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:40 Saturday 14 February 2026 11:49:25 -0500 (0:00:00.385) 0:02:21.518 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Slurp 
quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:6 Saturday 14 February 2026 11:49:25 -0500 (0:00:00.033) 0:02:21.552 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:12 Saturday 14 February 2026 11:49:25 -0500 (0:00:00.367) 0:02:21.919 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet yaml file] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:20 Saturday 14 February 2026 11:49:25 -0500 (0:00:00.032) 0:02:21.952 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Reset raw variable] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:28 Saturday 14 February 2026 11:49:25 -0500 (0:00:00.022) 0:02:21.975 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_raw": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:44 Saturday 14 February 2026 11:49:25 -0500 (0:00:00.021) 0:02:21.997 ***** changed: [managed-node2] => { "changed": true, "path": 
"/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network", "state": "absent" } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:52 Saturday 14 February 2026 11:49:25 -0500 (0:00:00.378) 0:02:22.375 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:60 Saturday 14 February 2026 11:49:25 -0500 (0:00:00.022) 0:02:22.397 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:70 Saturday 14 February 2026 11:49:26 -0500 (0:00:00.642) 0:02:23.040 ***** changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.508) 0:02:23.548 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:133 Saturday 14 February 2026 11:49:27 
-0500 (0:00:00.032) 0:02:23.581 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.024) 0:02:23.605 ***** changed: [managed-node2] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.038579", "end": "2026-02-14 11:49:27.616780", "rc": 0, "start": "2026-02-14 11:49:27.578201" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:148 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.508) 0:02:24.114 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.051) 0:02:24.165 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.029) 0:02:24.194 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger 
cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.029) 0:02:24.223 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [ "user_quadlet_basic" ] }, "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:158 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.030) 0:02:24.254 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:167 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.029) 0:02:24.284 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:176 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.028) 0:02:24.313 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:185 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.024) 0:02:24.337 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | 
d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:194 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.023) 0:02:24.361 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - pods] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:204 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.025) 0:02:24.386 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214 Saturday 14 February 2026 11:49:27 -0500 (0:00:00.023) 0:02:24.409 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:49:28 -0500 (0:00:00.023) 0:02:24.433 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 
Saturday 14 February 2026 11:49:28 -0500 (0:00:00.018) 0:02:24.451 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "[Network]\nSubnet=192.168.29.0/24\nGateway=192.168.29.1\nLabel=app=wordpress\nNetworkName=quadlet-basic-name\n", "__podman_quadlet_template_src": "templates/quadlet-basic.network.j2" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25
Saturday 14 February 2026 11:49:28 -0500 (0:00:00.160) 0:02:24.611 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "user_quadlet_basic" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35
Saturday 14 February 2026 11:49:28 -0500 (0:00:00.095) 0:02:24.707 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_quadlet_str", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48
Saturday 14 February 2026 11:49:28 -0500 (0:00:00.025) 0:02:24.732 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic", "__podman_quadlet_type": "network", "__podman_rootless": true }, "changed": false }

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 14 February 2026 11:49:28 -0500 (0:00:00.039) 0:02:24.771 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:49:28 -0500 (0:00:00.039) 0:02:24.811 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:49:28 -0500 (0:00:00.026) 0:02:24.837 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:49:28 -0500 (0:00:00.024) 0:02:24.862 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_group": "1111" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:49:28 -0500 (0:00:00.034) 0:02:24.896 *****
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:49:28 -0500 (0:00:00.391) 0:02:25.287 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "user_quadlet_basic" ], "delta": "0:00:00.004120", "end": "2026-02-14 11:49:29.204981", "rc": 0, "start": "2026-02-14 11:49:29.200861" }

STDOUT:

0: user_quadlet_basic 589824 65536

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:49:29 -0500 (0:00:00.399) 0:02:25.687 *****
ok: [managed-node2] => { "changed": false, "cmd": [ "getsubids", "-g", "user_quadlet_basic" ], "delta": "0:00:00.005180", "end": "2026-02-14 11:49:29.589693", "rc": 0, "start": "2026-02-14 11:49:29.584513" }

STDOUT:

0: user_quadlet_basic 589824 65536

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:49:29 -0500 (0:00:00.384) 0:02:26.072 *****
ok: [managed-node2] => { "ansible_facts": { "podman_subgid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } }, "podman_subuid_info": { "user_quadlet_basic": { "range": 65536, "start": 589824 } } }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:49:29 -0500 (0:00:00.046) 0:02:26.118 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:49:29 -0500 (0:00:00.036) 0:02:26.154 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:49:29 -0500 (0:00:00.032) 0:02:26.186 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:49:29 -0500 (0:00:00.026) 0:02:26.213 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:49:29 -0500 (0:00:00.025) 0:02:26.238 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63
Saturday 14 February 2026 11:49:29 -0500 (0:00:00.028) 0:02:26.266 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-network.service", "__podman_systemd_scope": "user", "__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73
Saturday 14 February 2026 11:49:29 -0500 (0:00:00.048) 0:02:26.314 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/home/user_quadlet_basic/.config/containers/systemd" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Get kube yaml contents] ***************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79
Saturday 14 February 2026 11:49:29 -0500 (0:00:00.034) 0:02:26.348 *****
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] ***
task path:
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90
Saturday 14 February 2026 11:49:29 -0500 (0:00:00.021) 0:02:26.369 *****
ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network", "__podman_volumes": [] }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108
Saturday 14 February 2026 11:49:30 -0500 (0:00:00.063) 0:02:26.433 *****
ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Cleanup quadlets] *********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115
Saturday 14 February 2026 11:49:30 -0500 (0:00:00.028) 0:02:26.461 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4
Saturday 14 February 2026 11:49:30 -0500 (0:00:00.055) 0:02:26.516 *****
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087663.6274612, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1771087699.8416822, "dev": 45, "device_type": 0, "executable": true, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 1, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mimetype": "inode/directory", "mode": "0700", "mtime": 1771087699.8416822, "nlink": 7, "path": "/run/user/1111", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 160, "uid": 1111, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Stop and disable service] *************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12
Saturday 14 February 2026 11:49:30 -0500 (0:00:00.387) 0:02:26.903 *****
changed: [managed-node2] => { "changed": true, "enabled": false, "failed_when_result": false, "name": "quadlet-basic-network.service", "state": "stopped", "status": { "AccessSELinuxContext": "unconfined_u:object_r:user_tmp_t:s0", "ActiveEnterTimestamp": "Sat 2026-02-14 11:47:51 EST", "ActiveEnterTimestampMonotonic": "556101920", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "-.mount app.slice basic.target run-user-1111.mount podman-user-wait-network-online.service", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2026-02-14 11:47:51 EST", "AssertTimestampMonotonic": "556070434", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "30331000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2026-02-14 11:47:51 EST", "ConditionTimestampMonotonic": "556070431", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "12637", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-network.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "1", "ExecMainExitTimestamp": "Sat 2026-02-14 11:47:51 EST", "ExecMainExitTimestampMonotonic": "556101708", "ExecMainHandoffTimestamp": "Sat 2026-02-14 11:47:51 EST", "ExecMainHandoffTimestampMonotonic": "556076194", "ExecMainPID": "70566", "ExecMainStartTimestamp": "Sat 2026-02-14 11:47:51 EST", "ExecMainStartTimestampMonotonic": "556070906", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore --subnet 192.168.29.0/24 --gateway 192.168.29.1 --label app=wordpress quadlet-basic-name ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore --subnet 192.168.29.0/24 --gateway 192.168.29.1 --label app=wordpress quadlet-basic-name ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/user/1111/systemd/generator/quadlet-basic-network.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-network.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2026-02-14 11:47:51 EST", "InactiveExitTimestampMonotonic": "556071454", "InvocationID": "5c6d19014cc042d4bbdef242ecc89430", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "inherit", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC":
"13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3622768640", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "14364672", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-network.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "200", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", 
"PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "app.slice basic.target", "RequiresMountsFor": "/run/user/1111/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "app.slice", "SourcePath": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": 
"Sat 2026-02-14 11:47:51 EST", "StateChangeTimestampMonotonic": "556101920", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-network", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "podman-user-wait-network-online.service", "WantsMountsFor": "/home/user_quadlet_basic", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0", "WorkingDirectory": "!/home/user_quadlet_basic" } } TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:35 Saturday 14 February 2026 11:49:31 -0500 (0:00:00.676) 0:02:27.580 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087671.1805072, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "19c9b17be2af9b9deca5c3bd327f048966750682", "ctime": 1771087670.528382, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 243269836, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": 
"text/plain", "mode": "0644", "mtime": 1771087670.2455015, "nlink": 1, "path": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 105, "uid": 1111, "version": "4144534517", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:40 Saturday 14 February 2026 11:49:31 -0500 (0:00:00.390) 0:02:27.971 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Slurp quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:6 Saturday 14 February 2026 11:49:31 -0500 (0:00:00.032) 0:02:28.003 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:12 Saturday 14 February 2026 11:49:31 -0500 (0:00:00.367) 0:02:28.371 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet yaml file] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:20 Saturday 14 February 2026 11:49:31 -0500 (0:00:00.034) 0:02:28.405 ***** skipping: [managed-node2] => { 
"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Reset raw variable] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:28 Saturday 14 February 2026 11:49:32 -0500 (0:00:00.023) 0:02:28.429 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_raw": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:44 Saturday 14 February 2026 11:49:32 -0500 (0:00:00.026) 0:02:28.455 ***** changed: [managed-node2] => { "changed": true, "path": "/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network", "state": "absent" } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:52 Saturday 14 February 2026 11:49:32 -0500 (0:00:00.393) 0:02:28.849 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:60 Saturday 14 February 2026 11:49:32 -0500 (0:00:00.026) 0:02:28.875 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:70 Saturday 14 February 2026 11:49:33 -0500 (0:00:00.650) 0:02:29.526 ***** changed: 
[managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 14 February 2026 11:49:33 -0500 (0:00:00.505) 0:02:30.032 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:133 Saturday 14 February 2026 11:49:33 -0500 (0:00:00.035) 0:02:30.068 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 14 February 2026 11:49:33 -0500 (0:00:00.038) 0:02:30.106 ***** changed: [managed-node2] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.040297", "end": "2026-02-14 11:49:34.108702", "rc": 0, "start": "2026-02-14 11:49:34.068405" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:148 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.488) 0:02:30.594 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.092) 0:02:30.687 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.026) 0:02:30.713 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_item_state | d('present') != 'absent'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.024) 0:02:30.738 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [ "user_quadlet_basic" ] }, "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:158 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.029) 0:02:30.767 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:167 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.027) 0:02:30.795 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | 
d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:176 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.025) 0:02:30.820 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:185 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.025) 0:02:30.846 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:194 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.029) 0:02:30.875 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - pods] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:204 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.039) 0:02:30.914 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.036) 0:02:30.951 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_test_debug | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.030) 0:02:30.981 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Cancel linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:198 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.020) 0:02:31.001 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml for managed-node2 => (item=user_quadlet_basic) TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml:4 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.099) 0:02:31.101 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_linger_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set cancel linger vars] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml:11 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.041) 0:02:31.142 ***** ok: [managed-node2] => { "ansible_facts": { 
"__podman_xdg_runtime_dir": "/run/user/1111" }, "changed": false } TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml:16 Saturday 14 February 2026 11:49:34 -0500 (0:00:00.032) 0:02:31.175 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087663.6274612, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "ctime": 1771087699.8416822, "dev": 45, "device_type": 0, "executable": true, "exists": true, "gid": 1111, "gr_name": "user_quadlet_basic", "inode": 1, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mimetype": "inode/directory", "mode": "0700", "mtime": 1771087699.8416822, "nlink": 7, "path": "/run/user/1111", "pw_name": "user_quadlet_basic", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 160, "uid": 1111, "version": null, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": true } } TASK [fedora.linux_system_roles.podman : Gather facts for containers] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml:21 Saturday 14 February 2026 11:49:35 -0500 (0:00:00.378) 0:02:31.553 ***** ok: [managed-node2] => { "changed": false, "containers": [] } TASK [fedora.linux_system_roles.podman : Gather facts for networks] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml:30 Saturday 14 February 2026 11:49:35 -0500 (0:00:00.573) 0:02:32.127 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "network", "ls", "-q" ], "delta": "0:00:00.035974", "end": "2026-02-14 11:49:36.148569", "rc": 0, "start": "2026-02-14 11:49:36.112595" } STDOUT: podman TASK 
[fedora.linux_system_roles.podman : Gather secrets] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml:40 Saturday 14 February 2026 11:49:36 -0500 (0:00:00.509) 0:02:32.637 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "secret", "ls", "-n", "-q" ], "delta": "0:00:00.035893", "end": "2026-02-14 11:49:36.644538", "rc": 0, "start": "2026-02-14 11:49:36.608645" } TASK [fedora.linux_system_roles.podman : Cancel linger if no more resources are in use] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml:50 Saturday 14 February 2026 11:49:36 -0500 (0:00:00.508) 0:02:33.145 ***** changed: [managed-node2] => { "changed": true, "cmd": [ "loginctl", "disable-linger", "user_quadlet_basic" ], "delta": "0:00:00.007236", "end": "2026-02-14 11:49:37.101007", "rc": 0, "start": "2026-02-14 11:49:37.093771" } TASK [fedora.linux_system_roles.podman : Wait for user session to exit closing state] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml:62 Saturday 14 February 2026 11:49:37 -0500 (0:00:00.461) 0:02:33.607 ***** FAILED - RETRYING: [managed-node2]: Wait for user session to exit closing state (10 retries left). FAILED - RETRYING: [managed-node2]: Wait for user session to exit closing state (9 retries left). FAILED - RETRYING: [managed-node2]: Wait for user session to exit closing state (8 retries left). FAILED - RETRYING: [managed-node2]: Wait for user session to exit closing state (7 retries left). FAILED - RETRYING: [managed-node2]: Wait for user session to exit closing state (6 retries left). 
ok: [managed-node2] => { "attempts": 6, "changed": false, "cmd": [ "loginctl", "show-user", "--value", "-p", "State", "user_quadlet_basic" ], "delta": "0:00:00.006933", "end": "2026-02-14 11:49:49.371497", "failed_when_result": false, "rc": 1, "start": "2026-02-14 11:49:49.364564" } STDERR: Failed to get user: User ID 1111 is not logged in or lingering MSG: non-zero return code TASK [fedora.linux_system_roles.podman : Stop logind] ************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml:82 Saturday 14 February 2026 11:49:49 -0500 (0:00:12.248) 0:02:45.855 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__user_state is failed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Wait for user session to exit closing state] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml:87 Saturday 14 February 2026 11:49:49 -0500 (0:00:00.021) 0:02:45.877 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__user_state is failed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Restart logind] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml:98 Saturday 14 February 2026 11:49:49 -0500 (0:00:00.021) 0:02:45.898 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__user_state is failed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle credential files - absent] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:205 Saturday 14 February 2026 11:49:49 -0500 (0:00:00.019) 0:02:45.918 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: 
true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - absent] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:214 Saturday 14 February 2026 11:49:49 -0500 (0:00:00.017) 0:02:45.936 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [Remove test user] ******************************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:349 Saturday 14 February 2026 11:49:49 -0500 (0:00:00.026) 0:02:45.963 ***** changed: [managed-node2] => { "changed": true, "force": false, "name": "user_quadlet_basic", "remove": false, "state": "absent" } TASK [Cleanup system - root] *************************************************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:355 Saturday 14 February 2026 11:49:50 -0500 (0:00:00.503) 0:02:46.467 ***** included: fedora.linux_system_roles.podman for managed-node2 TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3 Saturday 14 February 2026 11:49:50 -0500 (0:00:00.126) 0:02:46.593 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] **** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3 Saturday 14 February 2026 11:49:50 -0500 (0:00:00.041) 0:02:46.635 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_required_facts | 
difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11 Saturday 14 February 2026 11:49:50 -0500 (0:00:00.034) 0:02:46.669 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16 Saturday 14 February 2026 11:49:50 -0500 (0:00:00.020) 0:02:46.690 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if transactional-update exists in /sbin] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:23 Saturday 14 February 2026 11:49:50 -0500 (0:00:00.023) 0:02:46.713 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag if transactional-update exists] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:28 Saturday 14 February 2026 11:49:50 -0500 (0:00:00.020) 0:02:46.734 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_transactional is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:32 Saturday 14 February 2026 11:49:50 -0500 (0:00:00.019) 0:02:46.754 ***** skipping: [managed-node2] => (item=RedHat.yml) => { "__vars_file": "RedHat.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS.yml) => { "__vars_file": "CentOS.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=CentOS_10.yml) => { "__vars_file": "CentOS_10.yml", "ansible_loop_var": "__vars_file", "changed": false, "false_condition": "__vars_file is file", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.podman : Run systemctl] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:52 Saturday 14 February 2026 11:49:50 -0500 (0:00:00.044) 0:02:46.799 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Require installed systemd] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:60 Saturday 14 February 2026 11:49:50 -0500 (0:00:00.020) 0:02:46.819 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } 
TASK [fedora.linux_system_roles.podman : Set flag to indicate that systemd runtime operations are available] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:65 Saturday 14 February 2026 11:49:50 -0500 (0:00:00.081) 0:02:46.901 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_booted is not defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Gather the package facts] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 Saturday 14 February 2026 11:49:50 -0500 (0:00:00.024) 0:02:46.925 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Enable copr if requested] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10 Saturday 14 February 2026 11:49:51 -0500 (0:00:01.036) 0:02:47.961 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_use_copr | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14 Saturday 14 February 2026 11:49:51 -0500 (0:00:00.025) 0:02:47.987 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "(__podman_packages | difference(ansible_facts.packages)) | list | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Notify user that reboot is needed to apply changes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28 Saturday 14 February 2026 11:49:51 -0500 (0:00:00.034) 
0:02:48.021 ***** skipping: [managed-node2] => { "false_condition": "__podman_is_transactional | d(false)" } TASK [fedora.linux_system_roles.podman : Reboot transactional update systems] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:33 Saturday 14 February 2026 11:49:51 -0500 (0:00:00.025) 0:02:48.047 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if reboot is needed and not set] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:38 Saturday 14 February 2026 11:49:51 -0500 (0:00:00.022) 0:02:48.070 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_is_transactional | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get podman version] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:46 Saturday 14 February 2026 11:49:51 -0500 (0:00:00.021) 0:02:48.092 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.024485", "end": "2026-02-14 11:49:52.003320", "rc": 0, "start": "2026-02-14 11:49:51.978835" } STDOUT: podman version 5.6.0 TASK [fedora.linux_system_roles.podman : Set podman version] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:52 Saturday 14 February 2026 11:49:52 -0500 (0:00:00.394) 0:02:48.486 ***** ok: [managed-node2] => { "ansible_facts": { "podman_version": "5.6.0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56 Saturday 14 
February 2026 11:49:52 -0500 (0:00:00.024) 0:02:48.511 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.2\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:63 Saturday 14 February 2026 11:49:52 -0500 (0:00:00.022) 0:02:48.533 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_version is version(\"4.4\", \"<\")", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:73 Saturday 14 February 2026 11:49:52 -0500 (0:00:00.049) 0:02:48.582 ***** META: end_host conditional evaluated to False, continuing execution for managed-node2 skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" } MSG: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80 Saturday 14 February 2026 11:49:52 -0500 (0:00:00.049) 0:02:48.631 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__has_type_pod or __has_pod_file_ext or __has_pod_file_src_ext or __has_pod_template_src_ext or __has_pod_template_src_ext_j2", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:96 Saturday 14 February 2026 11:49:52 -0500 (0:00:00.056) 0:02:48.688 ***** META: end_host conditional evaluated to False, continuing execution for managed-node2 skipping: [managed-node2] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node2" } MSG: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:109 Saturday 14 February 2026 11:49:52 -0500 (0:00:00.040) 0:02:48.728 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:49:52 -0500 (0:00:00.043) 0:02:48.771 ***** ok: [managed-node2] => { "ansible_facts": { "getent_passwd": { "root": [ "x", "0", "0", "Super User", "/root", "/bin/bash" ] } }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:49:52 -0500 (0:00:00.404) 0:02:49.175 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:49:52 -0500 
(0:00:00.039) 0:02:49.214 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:49:52 -0500 (0:00:00.040) 0:02:49.255 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.402) 0:02:49.658 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.037) 0:02:49.696 ***** 
skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.030) 0:02:49.726 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.029) 0:02:49.756 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.027) 0:02:49.784 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.028) 0:02:49.812 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task 
path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.023) 0:02:49.836 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.024) 0:02:49.860 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set config file paths] **************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:115 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.023) 0:02:49.884 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_parent_mode": "0755", "__podman_parent_path": "/etc/containers", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle container.conf.d] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:126 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.053) 0:02:49.937 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] *********** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.040) 0:02:49.978 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update container config file] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.021) 0:02:49.999 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_containers_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.027) 0:02:50.027 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.120) 0:02:50.147 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update registries config file] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.024) 0:02:50.172 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_registries_conf 
| length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle storage.conf] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:132 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.026) 0:02:50.198 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:7 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.050) 0:02:50.249 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update storage config file] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:15 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.026) 0:02:50.275 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_storage_conf | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle policy.json] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:135 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.021) 0:02:50.296 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:8 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.042) 0:02:50.338 
***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:16 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.022) 0:02:50.361 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get the existing policy.json] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:21 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.022) 0:02:50.383 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Write new policy.json file] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:27 Saturday 14 February 2026 11:49:53 -0500 (0:00:00.023) 0:02:50.407 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_policy_json | length > 0", "skip_reason": "Conditional result was False" } TASK [Manage firewall for specified ports] ************************************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:141 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.021) 0:02:50.429 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_firewall | length > 0", "skip_reason": "Conditional result was False" } TASK [Manage selinux for specified ports] ************************************** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:148 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.022) 0:02:50.451 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "podman_selinux_ports | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:155 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.022) 0:02:50.474 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] ******* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:159 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.026) 0:02:50.501 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle credential files - present] **** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:168 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.025) 0:02:50.526 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle secrets] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:177 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.029) 0:02:50.555 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 => (item=(censored due to no_log)) included: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 => (item=(censored due to no_log)) TASK [fedora.linux_system_roles.podman : Set variables part 1] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.096) 0:02:50.651 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.031) 0:02:50.683 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 => (item=(censored due to no_log)) TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.055) 0:02:50.739 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.026) 0:02:50.765 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.024) 0:02:50.790 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.034) 0:02:50.824 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.023) 0:02:50.848 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.021) 0:02:50.869 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.021) 0:02:50.891 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.021) 0:02:50.913 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.023) 0:02:50.937 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.024) 0:02:50.962 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.025) 0:02:50.987 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.035) 0:02:51.023 ***** skipping: 
[managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set variables part 2] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:15 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.028) 0:02:51.051 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_rootless": false, "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:21 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.033) 0:02:51.085 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.042) 0:02:51.128 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.031) 0:02:51.159 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 
Saturday 14 February 2026 11:49:54 -0500 (0:00:00.022) 0:02:51.182 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:26 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.023) 0:02:51.205 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Manage each secret] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42 Saturday 14 February 2026 11:49:54 -0500 (0:00:00.021) 0:02:51.226 ***** changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true } TASK [fedora.linux_system_roles.podman : Set variables part 1] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.481) 0:02:51.708 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.035) 0:02:51.743 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 => (item=(censored due to no_log)) TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.043) 0:02:51.787 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.032) 0:02:51.819 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.027) 0:02:51.846 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.054) 0:02:51.901 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.025) 0:02:51.926 ***** skipping: [managed-node2] => { "changed": false, "false_condition": 
"__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.028) 0:02:51.954 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.023) 0:02:51.977 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.035) 0:02:52.013 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.033) 0:02:52.047 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 
Saturday 14 February 2026 11:49:55 -0500 (0:00:00.038) 0:02:52.085 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.034) 0:02:52.120 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.023) 0:02:52.144 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_check_subids | d(true)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set variables part 2] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:15 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.028) 0:02:52.172 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_rootless": false, "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:21 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.039) 0:02:52.211 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.038) 0:02:52.250 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.023) 0:02:52.273 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.020) 0:02:52.293 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:26 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.020) 0:02:52.314 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Manage each secret] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42 Saturday 14 February 2026 11:49:55 -0500 (0:00:00.020) 0:02:52.335 ***** changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", 
"changed": true } TASK [fedora.linux_system_roles.podman : Handle Kubernetes specifications] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:184 Saturday 14 February 2026 11:49:56 -0500 (0:00:00.423) 0:02:52.758 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle Quadlet specifications] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:191 Saturday 14 February 2026 11:49:56 -0500 (0:00:00.029) 0:02:52.787 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log)) included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log)) included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log)) included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log)) included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml for managed-node2 => (item=(censored due to no_log)) TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:49:56 -0500 (0:00:00.115) 0:02:52.903 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Container": { "ContainerName": 
"quadlet-basic-mysql-name", "Environment": [ "FOO=/bin/busybox-extras", "BAZ=test" ], "Image": "quay.io/linux-system-roles/mysql:5.6", "Network": "quadlet-basic.network", "PodmanArgs": "--secret=mysql_container_root_password,type=env,target=MYSQL_ROOT_PASSWORD --secret=json_secret,type=mount,target=/tmp/test.json", "Volume": "quadlet-basic-mysql.volume:/var/lib/mysql" }, "Install": { "WantedBy": "default.target" } }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:49:56 -0500 (0:00:00.032) 0:02:52.935 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:49:56 -0500 (0:00:00.030) 0:02:52.965 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:49:56 -0500 (0:00:00.021) 0:02:52.987 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-mysql", "__podman_quadlet_type": "container", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57
Saturday 14 February 2026 11:49:56 -0500 (0:00:00.039) 0:02:53.023 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 14 February 2026 11:49:56 -0500 (0:00:00.026) 0:02:53.062 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 14 February 2026 11:49:56 -0500 (0:00:00.025) 0:02:53.089 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 14 February 2026 11:49:56 -0500 (0:00:00.033) 0:02:53.114 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_group": "0"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 14 February 2026 11:49:56 -0500 (0:00:00.389) 0:02:53.148 *****
ok: [managed-node2] => {
    "changed": false,
    "stat": {
        "atime": 1771087317.803642,
        "attr_flags": "",
        "attributes": [],
        "block_size": 4096,
        "blocks": 32,
        "charset": "binary",
        "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a",
        "ctime": 1771087310.205592,
        "dev": 51714,
        "device_type": 0,
        "executable": true,
        "exists": true,
        "gid": 0,
        "gr_name": "root",
        "inode": 9163113,
        "isblk": false,
        "ischr": false,
        "isdir": false,
        "isfifo": false,
        "isgid": false,
        "islnk": false,
        "isreg": true,
        "issock": false,
        "isuid": false,
        "mimetype": "application/x-pie-executable",
        "mode": "0755",
        "mtime": 1764201600.0,
        "nlink": 1,
        "path": "/usr/bin/getsubids",
        "pw_name": "root",
        "readable": true,
        "rgrp": true,
        "roth": true,
        "rusr": true,
        "size": 15560,
        "uid": 0,
        "version": "2735365168",
        "wgrp": false,
        "woth": false,
        "writeable": true,
        "wusr": true,
        "xgrp": true,
        "xoth": true,
        "xusr": true
    }
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.023) 0:02:53.537 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_handle_user not in [\"root\", \"0\"]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.022) 0:02:53.561 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_handle_user not in [\"root\", \"0\"]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.022) 0:02:53.583 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_handle_user not in [\"root\", \"0\"]",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.089) 0:02:53.606 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.025) 0:02:53.695 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.023) 0:02:53.721 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.024) 0:02:53.745 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.024) 0:02:53.769 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_stat_getsubids.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.053) 0:02:53.794 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_activate_systemd_unit": true,
        "__podman_images_found": [
            "quay.io/linux-system-roles/mysql:5.6"
        ],
        "__podman_kube_yamls_raw": "",
        "__podman_service_name": "quadlet-basic-mysql.service",
        "__podman_systemd_scope": "system",
        "__podman_xdg_runtime_dir": "/run/user/0"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.033) 0:02:53.848 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_quadlet_path": "/etc/containers/systemd"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Get kube yaml contents] ***************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.020) 0:02:53.881 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_state != \"absent\"",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.068) 0:02:53.902 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_images": [
            "quay.io/linux-system-roles/mysql:5.6"
        ],
        "__podman_quadlet_file": "/etc/containers/systemd/quadlet-basic-mysql.container",
        "__podman_volumes": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.031) 0:02:53.970 *****
ok: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Cleanup quadlets] *********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.056) 0:02:54.002 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.019) 0:02:54.058 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Stop and disable service] *************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12
Saturday 14 February 2026 11:49:57 -0500 (0:00:00.547) 0:02:54.078 *****
ok: [managed-node2] => {
    "changed": false,
    "failed_when_result": false
}

MSG:

Could not find the requested service quadlet-basic-mysql.service: host

TASK
[fedora.linux_system_roles.podman : See if quadlet file exists] ***********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:35
Saturday 14 February 2026 11:49:58 -0500 (0:00:00.400) 0:02:54.626 *****
ok: [managed-node2] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.podman : Parse quadlet file] *******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:40
Saturday 14 February 2026 11:49:58 -0500 (0:00:00.022) 0:02:55.026 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_quadlet_stat.stat.exists",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Remove quadlet file] ******************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:44
Saturday 14 February 2026 11:49:58 -0500 (0:00:00.404) 0:02:55.048 *****
ok: [managed-node2] => {
    "changed": false,
    "path": "/etc/containers/systemd/quadlet-basic-mysql.container",
    "state": "absent"
}

TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] *********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:52
Saturday 14 February 2026 11:49:59 -0500 (0:00:00.033) 0:02:55.453 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "not __podman_is_booted",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Refresh systemd] **********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:60
Saturday 14 February 2026 11:49:59 -0500 (0:00:00.037) 0:02:55.487 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_file_removed is changed",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Remove managed resource] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:70
Saturday 14 February 2026 11:49:59 -0500 (0:00:00.037) 0:02:55.524 *****
skipping: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Remove volumes] ***********************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116
Saturday 14 February 2026 11:49:59 -0500 (0:00:00.054) 0:02:55.562 *****
skipping: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] *********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:133
Saturday 14 February 2026 11:49:59 -0500 (0:00:00.039) 0:02:55.616 *****
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_quadlet_parsed": null
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ********
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137
Saturday 14 February 2026 11:49:59 -0500 (0:00:00.443) 0:02:55.655 *****
changed: [managed-node2] => {
    "changed": true,
    "cmd": [
        "podman",
        "image",
        "prune",
        "--all",
        "-f"
    ],
    "delta": "0:00:00.043312",
    "end": "2026-02-14 11:49:59.615403",
    "rc": 0,
    "start": "2026-02-14 11:49:59.572091"
}

STDOUT:

9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:148
Saturday 14 February 2026 11:49:59 -0500 (0:00:00.043) 0:02:56.099 *****
included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13
Saturday 14 February 2026 11:49:59 -0500 (0:00:00.031) 0:02:56.143 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 14 February 2026 11:49:59 -0500 (0:00:00.036) 0:02:56.174 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 14 February 2026 11:49:59 -0500 (0:00:00.036) 0:02:56.211 *****
skipping: [managed-node2] => {
    "changed": false,
    "false_condition": "__podman_rootless | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : For testing and debugging - images] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:158
Saturday 14 February 2026 11:49:59 -0500 (0:00:00.436) 0:02:56.247 *****
ok: [managed-node2] => {
    "changed": false,
    "cmd": [
        "podman",
        "images",
        "-n"
    ],
    "delta": "0:00:00.029123",
    "end": "2026-02-14 11:50:00.180951",
    "rc": 0,
    "start": "2026-02-14 11:50:00.151828"
}

TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:167
Saturday 14 February 2026 11:50:00 -0500 (0:00:00.437) 0:02:56.684 *****
ok: [managed-node2] => {
    "changed": false,
    "cmd": [
        "podman",
        "volume",
        "ls",
        "-n"
    ],
    "delta": "0:00:00.029087",
    "end": "2026-02-14 11:50:00.629008",
    "rc": 0,
    "start": "2026-02-14 11:50:00.599921"
}

STDOUT:

local quadlet-basic-mysql-name
local systemd-quadlet-basic-unused-volume

TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:176
Saturday 14 February 2026 11:50:00 -0500 (0:00:00.434) 0:02:57.121 *****
ok: [managed-node2] => {
    "changed": false,
    "cmd": [
        "podman",
        "ps",
        "--noheading"
    ],
    "delta": "0:00:00.029389",
    "end": "2026-02-14 11:50:01.071895",
    "rc": 0,
    "start": "2026-02-14 11:50:01.042506"
}

TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:185
Saturday 14 February 2026 11:50:01 -0500 (0:00:00.432) 0:02:57.555 *****
ok: [managed-node2] => {
    "changed": false,
    "cmd": [
        "podman",
        "network",
        "ls",
        "-n",
        "-q"
    ],
    "delta": "0:00:00.027695",
    "end": "2026-02-14 11:50:01.488677",
    "rc": 0,
    "start": "2026-02-14 11:50:01.460982"
}

STDOUT:

podman
podman-default-kube-network
quadlet-basic-name
systemd-quadlet-basic-unused-network

TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:194
Saturday 14 February 2026 11:50:01 -0500 (0:00:00.434) 0:02:57.988 *****
ok: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : For testing and debugging - pods] *****
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:204
Saturday 14 February 2026 11:50:02 -0500 (0:00:00.418) 0:02:58.423 *****
ok: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : For testing and debugging - services] ***
task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214
Saturday 14 February 2026 11:50:02 -0500 (0:00:00.434) 0:02:58.841 *****
ok: [managed-node2] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "apt-daily.service": { "name": "apt-daily.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state":
"unknown", "status": "alias" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": 
"dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": 
"dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fips-crypto-policy-overlay.service": { "name": "fips-crypto-policy-overlay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": 
"grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": 
"logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": 
"systemd", "state": "inactive", "status": "disabled" }, "netavark-nftables-reload.service": { "name": "netavark-nftables-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", 
"status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quadlet-basic-mysql-volume.service": { "name": "quadlet-basic-mysql-volume.service", "source": "systemd", "state": "stopped", "status": "generated" }, "quadlet-basic-network.service": { "name": "quadlet-basic-network.service", "source": "systemd", "state": "stopped", "status": "generated" }, "quadlet-basic-unused-network-network.service": { "name": "quadlet-basic-unused-network-network.service", "source": "systemd", "state": "stopped", "status": "generated" }, "quadlet-basic-unused-volume-volume.service": { "name": "quadlet-basic-unused-volume-volume.service", "source": "systemd", "state": "stopped", "status": "generated" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": 
{ "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, 
"sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-unix-local@.service": { "name": "sshd-unix-local@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd-vsock@.service": { "name": "sshd-vsock@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": 
"stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": 
"systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": 
"systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" 
}, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", 
"source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": 
"systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": 
"systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-userdbd.service": { "name": "systemd-userdbd.service", "source": "systemd", "state": "running", "status": "indirect" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:50:04 -0500 (0:00:02.294) 0:03:01.135 ***** skipping: [managed-node2] => { "changed": false, "false_condition": 
"__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:50:04 -0500 (0:00:00.024) 0:03:01.160 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Volume": {} }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:50:04 -0500 (0:00:00.038) 0:03:01.198 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:50:04 -0500 (0:00:00.044) 0:03:01.242 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:50:04 -0500 (0:00:00.039) 0:03:01.281 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-unused-volume", "__podman_quadlet_type": "volume", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman 
: Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:50:04 -0500 (0:00:00.120) 0:03:01.402 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.047) 0:03:01.449 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.032) 0:03:01.481 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.029) 0:03:01.510 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.037) 0:03:01.548 ***** ok: [managed-node2] => { "changed": 
false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.402) 0:03:01.950 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.026) 0:03:01.977 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.026) 0:03:02.004 
***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.031) 0:03:02.035 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.039) 0:03:02.075 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.030) 0:03:02.106 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.042) 0:03:02.149 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task 
path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.029) 0:03:02.179 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.029) 0:03:02.208 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-unused-volume-volume.service", "__podman_systemd_scope": "system", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.053) 0:03:02.262 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.038) 0:03:02.300 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 
11:50:05 -0500 (0:00:00.026) 0:03:02.327 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-basic-unused-volume.volume", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:50:05 -0500 (0:00:00.069) 0:03:02.396 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:50:06 -0500 (0:00:00.031) 0:03:02.428 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 14 February 2026 11:50:06 -0500 (0:00:00.058) 0:03:02.486 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 14 February 2026 11:50:06 -0500 (0:00:00.023) 0:03:02.510 ***** changed: [managed-node2] => { "changed": true, "enabled": false, "failed_when_result": false, "name": "quadlet-basic-unused-volume-volume.service", "state": "stopped", "status": { "AccessSELinuxContext": 
"system_u:object_r:systemd_generator_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2026-02-14 11:48:45 EST", "ActiveEnterTimestampMonotonic": "609810253", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "-.mount system.slice network-online.target basic.target systemd-journald.socket sysinit.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2026-02-14 11:48:45 EST", "AssertTimestampMonotonic": "609764908", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "33383000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2026-02-14 11:48:45 EST", "ConditionTimestampMonotonic": "609764904", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "13423", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", 
"DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-unused-volume-volume.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "1", "ExecMainExitTimestamp": "Sat 2026-02-14 11:48:45 EST", "ExecMainExitTimestampMonotonic": "609810104", "ExecMainHandoffTimestamp": "Sat 2026-02-14 11:48:45 EST", "ExecMainHandoffTimestampMonotonic": "609772332", "ExecMainPID": "82726", "ExecMainStartTimestamp": "Sat 2026-02-14 11:48:45 EST", "ExecMainStartTimestampMonotonic": "609765631", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore systemd-quadlet-basic-unused-volume ; ignore_errors=no ; start_time=[Sat 2026-02-14 11:48:45 EST] ; stop_time=[Sat 2026-02-14 11:48:45 EST] ; pid=82726 ; code=exited ; status=0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore systemd-quadlet-basic-unused-volume ; flags= ; start_time=[Sat 2026-02-14 11:48:45 EST] ; stop_time=[Sat 2026-02-14 11:48:45 EST] ; pid=82726 ; code=exited ; status=0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/systemd/generator/quadlet-basic-unused-volume-volume.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", 
"IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-unused-volume-volume.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2026-02-14 11:48:45 EST", "InactiveExitTimestampMonotonic": "609766164", "InvocationID": "91c19d07cbfc4d4394684db6d632aa1b", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3075993600", "MemoryCurrent": "[not set]", 
"MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "14155776", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-unused-volume-volume.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "sysinit.target -.mount system.slice", "RequiresMountsFor": "/run/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", 
"RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "SourcePath": "/etc/containers/systemd/quadlet-basic-unused-volume.volume", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2026-02-14 11:48:45 EST", "StateChangeTimestampMonotonic": "609810253", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-unused-volume-volume", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", 
"TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "network-online.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:35 Saturday 14 February 2026 11:50:06 -0500 (0:00:00.836) 0:03:03.346 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087724.2288308, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "fd0ae560360afa5541b866560b1e849d25e216ef", "ctime": 1771087724.2324631, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 562036953, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1771087723.9558291, "nlink": 1, "path": "/etc/containers/systemd/quadlet-basic-unused-volume.volume", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 53, "uid": 0, "version": "718272838", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:40 Saturday 14 February 2026 11:50:07 -0500 (0:00:00.395) 0:03:03.742 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Slurp quadlet file] ******************* task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:6 Saturday 14 February 2026 11:50:07 -0500 (0:00:00.038) 0:03:03.781 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:12 Saturday 14 February 2026 11:50:07 -0500 (0:00:00.376) 0:03:04.157 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet yaml file] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:20 Saturday 14 February 2026 11:50:07 -0500 (0:00:00.033) 0:03:04.191 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Reset raw variable] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:28 Saturday 14 February 2026 11:50:07 -0500 (0:00:00.025) 0:03:04.216 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_raw": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:44 Saturday 14 February 2026 11:50:07 -0500 (0:00:00.023) 0:03:04.239 ***** changed: [managed-node2] => { "changed": true, "path": "/etc/containers/systemd/quadlet-basic-unused-volume.volume", "state": "absent" } TASK 
[fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:52 Saturday 14 February 2026 11:50:08 -0500 (0:00:00.387) 0:03:04.627 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:60 Saturday 14 February 2026 11:50:08 -0500 (0:00:00.023) 0:03:04.650 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:70 Saturday 14 February 2026 11:50:09 -0500 (0:00:00.881) 0:03:05.531 ***** changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 14 February 2026 11:50:09 -0500 (0:00:00.496) 0:03:06.028 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:133 Saturday 14 February 2026 11:50:09 -0500 (0:00:00.057) 0:03:06.086 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_parsed": null }, 
"changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 14 February 2026 11:50:09 -0500 (0:00:00.049) 0:03:06.136 ***** changed: [managed-node2] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.028658", "end": "2026-02-14 11:50:10.095201", "rc": 0, "start": "2026-02-14 11:50:10.066543" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:148 Saturday 14 February 2026 11:50:10 -0500 (0:00:00.455) 0:03:06.591 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:50:10 -0500 (0:00:00.053) 0:03:06.645 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:50:10 -0500 (0:00:00.023) 0:03:06.669 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 
11:50:10 -0500 (0:00:00.029) 0:03:06.699 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:158 Saturday 14 February 2026 11:50:10 -0500 (0:00:00.029) 0:03:06.729 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.029106", "end": "2026-02-14 11:50:10.656963", "rc": 0, "start": "2026-02-14 11:50:10.627857" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:167 Saturday 14 February 2026 11:50:10 -0500 (0:00:00.419) 0:03:07.149 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.027069", "end": "2026-02-14 11:50:11.088898", "rc": 0, "start": "2026-02-14 11:50:11.061829" } STDOUT: local quadlet-basic-mysql-name TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:176 Saturday 14 February 2026 11:50:11 -0500 (0:00:00.434) 0:03:07.584 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.028426", "end": "2026-02-14 11:50:11.510083", "rc": 0, "start": "2026-02-14 11:50:11.481657" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:185 Saturday 14 February 2026 11:50:11 -0500 (0:00:00.411) 0:03:07.995 ***** ok: [managed-node2] => { "changed": false, "cmd": [ 
"podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.028615", "end": "2026-02-14 11:50:11.922568", "rc": 0, "start": "2026-02-14 11:50:11.893953" } STDOUT: podman podman-default-kube-network quadlet-basic-name systemd-quadlet-basic-unused-network TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:194 Saturday 14 February 2026 11:50:11 -0500 (0:00:00.421) 0:03:08.417 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - pods] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:204 Saturday 14 February 2026 11:50:12 -0500 (0:00:00.430) 0:03:08.847 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214 Saturday 14 February 2026 11:50:12 -0500 (0:00:00.434) 0:03:09.282 ***** ok: [managed-node2] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "apt-daily.service": { "name": "apt-daily.service", "source": "systemd", "state": "stopped", "status": "not-found" 
}, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": 
"container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": 
"dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fips-crypto-policy-overlay.service": { "name": "fips-crypto-policy-overlay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", 
"state": "running", "status": "enabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", 
"status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": 
"stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-nftables-reload.service": { "name": "netavark-nftables-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": 
"not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quadlet-basic-mysql-volume.service": { "name": "quadlet-basic-mysql-volume.service", "source": "systemd", "state": "stopped", "status": "generated" }, "quadlet-basic-network.service": { "name": "quadlet-basic-network.service", "source": "systemd", "state": "stopped", "status": "generated" }, "quadlet-basic-unused-network-network.service": { "name": "quadlet-basic-unused-network-network.service", "source": "systemd", "state": "stopped", "status": "generated" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, 
"rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { 
"name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-unix-local@.service": { "name": "sshd-unix-local@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd-vsock@.service": { "name": "sshd-vsock@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", 
"state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", 
"state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": 
"static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", 
"source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": 
"static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": 
{ "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-userdbd.service": { "name": "systemd-userdbd.service", "source": "systemd", "state": "running", "status": "indirect" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create 
and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:50:14 -0500 (0:00:02.089) 0:03:11.371 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:50:14 -0500 (0:00:00.022) 0:03:11.393 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Volume": { "VolumeName": "quadlet-basic-mysql-name" } }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.036) 0:03:11.430 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.045) 0:03:11.475 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.033) 0:03:11.508 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-mysql", "__podman_quadlet_type": "volume", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.043) 0:03:11.551 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.048) 0:03:11.600 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.033) 0:03:11.634 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.033) 0:03:11.668 ***** 
ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.036) 0:03:11.705 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.386) 0:03:12.091 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.026) 0:03:12.117 ***** skipping: [managed-node2] => { 
"changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.023) 0:03:12.141 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.023) 0:03:12.165 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.087) 0:03:12.253 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.026) 0:03:12.279 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.025) 0:03:12.305 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.038) 0:03:12.343 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.023) 0:03:12.366 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-mysql-volume.service", "__podman_systemd_scope": "system", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:50:15 -0500 (0:00:00.041) 0:03:12.408 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:50:16 
-0500 (0:00:00.036) 0:03:12.444 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:50:16 -0500 (0:00:00.020) 0:03:12.465 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-basic-mysql.volume", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:50:16 -0500 (0:00:00.065) 0:03:12.530 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:50:16 -0500 (0:00:00.032) 0:03:12.563 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 14 February 2026 11:50:16 -0500 (0:00:00.057) 0:03:12.621 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stop and disable service] 
************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 14 February 2026 11:50:16 -0500 (0:00:00.022) 0:03:12.644 ***** changed: [managed-node2] => { "changed": true, "enabled": false, "failed_when_result": false, "name": "quadlet-basic-mysql-volume.service", "state": "stopped", "status": { "AccessSELinuxContext": "system_u:object_r:systemd_generator_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2026-02-14 11:48:41 EST", "ActiveEnterTimestampMonotonic": "605505175", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-journald.socket sysinit.target -.mount basic.target system.slice network-online.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2026-02-14 11:48:41 EST", "AssertTimestampMonotonic": "605453569", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "34657000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon 
cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2026-02-14 11:48:41 EST", "ConditionTimestampMonotonic": "605453566", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "13384", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-mysql-volume.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "1", "ExecMainExitTimestamp": "Sat 2026-02-14 11:48:41 EST", "ExecMainExitTimestampMonotonic": "605504972", "ExecMainHandoffTimestamp": "Sat 2026-02-14 11:48:41 EST", "ExecMainHandoffTimestampMonotonic": "605466536", "ExecMainPID": "81746", "ExecMainStartTimestamp": "Sat 2026-02-14 11:48:41 EST", "ExecMainStartTimestampMonotonic": "605454293", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore quadlet-basic-mysql-name ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman volume create --ignore quadlet-basic-mysql-name ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/systemd/generator/quadlet-basic-mysql-volume.service", 
"FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-mysql-volume.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2026-02-14 11:48:41 EST", "InactiveExitTimestampMonotonic": "605455376", "InvocationID": "9260166c3fca4a73a6f41b596cd38000", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", 
"ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3059724288", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "14426112", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-mysql-volume.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "sysinit.target -.mount system.slice", 
"RequiresMountsFor": "/run/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "SourcePath": "/etc/containers/systemd/quadlet-basic-mysql.volume", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2026-02-14 11:48:41 EST", "StateChangeTimestampMonotonic": "605505175", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-mysql-volume", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": 
"21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "network-online.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:35 Saturday 14 February 2026 11:50:17 -0500 (0:00:00.816) 0:03:13.460 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087719.8768044, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "90a3571bfc7670328fe3f8fb625585613dbd9c4a", "ctime": 1771087719.8805661, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 507511033, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1771087719.6078026, "nlink": 1, "path": "/etc/containers/systemd/quadlet-basic-mysql.volume", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 89, "uid": 0, "version": "2713987485", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:40 Saturday 14 February 2026 11:50:17 -0500 (0:00:00.406) 0:03:13.866 ***** included: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Slurp quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:6 Saturday 14 February 2026 11:50:17 -0500 (0:00:00.056) 0:03:13.923 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:12 Saturday 14 February 2026 11:50:17 -0500 (0:00:00.393) 0:03:14.316 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet yaml file] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:20 Saturday 14 February 2026 11:50:17 -0500 (0:00:00.032) 0:03:14.349 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Reset raw variable] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:28 Saturday 14 February 2026 11:50:17 -0500 (0:00:00.041) 0:03:14.390 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_raw": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:44 
Saturday 14 February 2026 11:50:18 -0500 (0:00:00.040) 0:03:14.431 ***** changed: [managed-node2] => { "changed": true, "path": "/etc/containers/systemd/quadlet-basic-mysql.volume", "state": "absent" } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:52 Saturday 14 February 2026 11:50:18 -0500 (0:00:00.435) 0:03:14.866 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:60 Saturday 14 February 2026 11:50:18 -0500 (0:00:00.035) 0:03:14.901 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:70 Saturday 14 February 2026 11:50:19 -0500 (0:00:00.774) 0:03:15.676 ***** changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 14 February 2026 11:50:19 -0500 (0:00:00.440) 0:03:16.117 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:133 Saturday 14 February 2026 11:50:19 -0500 (0:00:00.034) 0:03:16.151 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 14 February 2026 11:50:19 -0500 (0:00:00.025) 0:03:16.176 ***** changed: [managed-node2] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.029252", "end": "2026-02-14 11:50:20.103515", "rc": 0, "start": "2026-02-14 11:50:20.074263" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:148 Saturday 14 February 2026 11:50:20 -0500 (0:00:00.413) 0:03:16.589 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:50:20 -0500 (0:00:00.040) 0:03:16.630 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:50:20 -0500 (0:00:00.022) 0:03:16.652 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", 
"skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:50:20 -0500 (0:00:00.023) 0:03:16.675 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:158 Saturday 14 February 2026 11:50:20 -0500 (0:00:00.023) 0:03:16.699 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.028438", "end": "2026-02-14 11:50:20.622126", "rc": 0, "start": "2026-02-14 11:50:20.593688" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:167 Saturday 14 February 2026 11:50:20 -0500 (0:00:00.408) 0:03:17.107 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.028979", "end": "2026-02-14 11:50:21.040909", "rc": 0, "start": "2026-02-14 11:50:21.011930" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:176 Saturday 14 February 2026 11:50:21 -0500 (0:00:00.426) 0:03:17.534 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.028011", "end": "2026-02-14 11:50:21.466494", "rc": 0, "start": "2026-02-14 11:50:21.438483" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:185 Saturday 14 February 2026 11:50:21 -0500 (0:00:00.510) 0:03:18.045 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.030436", "end": "2026-02-14 11:50:21.979635", "rc": 0, "start": "2026-02-14 11:50:21.949199" } STDOUT: podman podman-default-kube-network quadlet-basic-name systemd-quadlet-basic-unused-network TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:194 Saturday 14 February 2026 11:50:22 -0500 (0:00:00.445) 0:03:18.490 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - pods] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:204 Saturday 14 February 2026 11:50:22 -0500 (0:00:00.435) 0:03:18.925 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214 Saturday 14 February 2026 11:50:22 -0500 (0:00:00.421) 0:03:19.347 ***** ok: [managed-node2] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, 
"NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "apt-daily.service": { "name": "apt-daily.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": 
"cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": 
"display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", 
"status": "static" }, "fips-crypto-policy-overlay.service": { "name": "fips-crypto-policy-overlay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": 
"stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", 
"status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-nftables-reload.service": { "name": "netavark-nftables-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", 
"state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quadlet-basic-network.service": { "name": "quadlet-basic-network.service", "source": "systemd", "state": "stopped", "status": "generated" }, "quadlet-basic-unused-network-network.service": { "name": "quadlet-basic-unused-network-network.service", "source": "systemd", "state": "stopped", "status": "generated" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { 
"name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": 
"systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-unix-local@.service": { "name": "sshd-unix-local@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd-vsock@.service": { "name": "sshd-vsock@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", 
"status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": 
"disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", 
"source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { 
"name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, 
"systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": 
"systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-userdbd.service": { "name": "systemd-userdbd.service", "source": "systemd", "state": "running", "status": "indirect" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", 
"source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:50:25 -0500 (0:00:02.076) 0:03:21.423 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.026) 0:03:21.450 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": { "Network": {} }, "__podman_quadlet_str": "", "__podman_quadlet_template_src": "" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.038) 0:03:21.489 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.035) 0:03:21.524 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_quadlet_spec | length == 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container 
variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.022) 0:03:21.547 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic-unused-network", "__podman_quadlet_type": "network", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.051) 0:03:21.598 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.042) 0:03:21.640 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.027) 0:03:21.668 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 
11:50:25 -0500 (0:00:00.031) 0:03:21.699 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.040) 0:03:21.739 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.393) 0:03:22.132 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.029) 
0:03:22.162 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.024) 0:03:22.186 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.039) 0:03:22.226 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.034) 0:03:22.260 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.028) 0:03:22.288 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid 
file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.023) 0:03:22.312 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.029) 0:03:22.341 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.029) 0:03:22.370 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-unused-network-network.service", "__podman_systemd_scope": "system", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:50:25 -0500 (0:00:00.044) 0:03:22.415 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 
Saturday 14 February 2026 11:50:26 -0500 (0:00:00.036) 0:03:22.451 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:50:26 -0500 (0:00:00.019) 0:03:22.471 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-basic-unused-network.network", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:50:26 -0500 (0:00:00.065) 0:03:22.537 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:50:26 -0500 (0:00:00.031) 0:03:22.569 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 14 February 2026 11:50:26 -0500 (0:00:00.075) 0:03:22.644 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 14 February 2026 11:50:26 -0500 (0:00:00.112) 0:03:22.757 ***** changed: [managed-node2] => { "changed": true, "enabled": false, "failed_when_result": false, "name": "quadlet-basic-unused-network-network.service", "state": "stopped", "status": { "AccessSELinuxContext": "system_u:object_r:systemd_generator_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2026-02-14 11:48:36 EST", "ActiveEnterTimestampMonotonic": "601163427", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-journald.socket basic.target network-online.target system.slice sysinit.target -.mount", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2026-02-14 11:48:36 EST", "AssertTimestampMonotonic": "601115720", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "34889000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin 
cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2026-02-14 11:48:36 EST", "ConditionTimestampMonotonic": "601115717", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "13345", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-unused-network-network.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "1", "ExecMainExitTimestamp": "Sat 2026-02-14 11:48:36 EST", "ExecMainExitTimestampMonotonic": "601163196", "ExecMainHandoffTimestamp": "Sat 2026-02-14 11:48:36 EST", "ExecMainHandoffTimestampMonotonic": "601128412", "ExecMainPID": "80767", "ExecMainStartTimestamp": "Sat 2026-02-14 11:48:36 EST", "ExecMainStartTimestampMonotonic": "601116560", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore systemd-quadlet-basic-unused-network ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore systemd-quadlet-basic-unused-network ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", 
"FinalKillSignal": "9", "FragmentPath": "/run/systemd/generator/quadlet-basic-unused-network-network.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-unused-network-network.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2026-02-14 11:48:36 EST", "InactiveExitTimestampMonotonic": "601117111", "InvocationID": "084dd4abf0fb4ac884f5444934703725", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", 
"LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3074560000", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "14417920", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-unused-network-network.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", "ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", 
"ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "-.mount sysinit.target system.slice", "RequiresMountsFor": "/run/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "SourcePath": "/etc/containers/systemd/quadlet-basic-unused-network.network", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2026-02-14 11:48:36 EST", "StateChangeTimestampMonotonic": "601163427", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-unused-network-network", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": 
"2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "network-online.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:35 Saturday 14 February 2026 11:50:27 -0500 (0:00:00.832) 0:03:23.590 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087715.5967782, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "52c9d75ecaf81203cc1f1a3b1dd00fcd25067b01", "ctime": 1771087715.5998106, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 461373653, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1771087715.3177764, "nlink": 1, "path": "/etc/containers/systemd/quadlet-basic-unused-network.network", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 54, "uid": 0, "version": "1305006237", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:40 Saturday 14 February 2026 11:50:27 -0500 (0:00:00.398) 0:03:23.989 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Slurp quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:6 Saturday 14 February 2026 11:50:27 -0500 (0:00:00.035) 0:03:24.024 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:12 Saturday 14 February 2026 11:50:27 -0500 (0:00:00.370) 0:03:24.395 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet yaml file] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:20 Saturday 14 February 2026 11:50:28 -0500 (0:00:00.031) 0:03:24.427 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Reset raw variable] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:28 Saturday 14 February 2026 11:50:28 -0500 (0:00:00.023) 0:03:24.450 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_raw": null }, "changed": false } TASK 
[fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:44 Saturday 14 February 2026 11:50:28 -0500 (0:00:00.024) 0:03:24.474 ***** changed: [managed-node2] => { "changed": true, "path": "/etc/containers/systemd/quadlet-basic-unused-network.network", "state": "absent" } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:52 Saturday 14 February 2026 11:50:28 -0500 (0:00:00.379) 0:03:24.854 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:60 Saturday 14 February 2026 11:50:28 -0500 (0:00:00.021) 0:03:24.875 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:70 Saturday 14 February 2026 11:50:29 -0500 (0:00:00.767) 0:03:25.642 ***** changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 14 February 2026 11:50:29 -0500 (0:00:00.483) 0:03:26.126 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 
'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:133 Saturday 14 February 2026 11:50:29 -0500 (0:00:00.041) 0:03:26.168 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 14 February 2026 11:50:29 -0500 (0:00:00.029) 0:03:26.197 ***** changed: [managed-node2] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.028690", "end": "2026-02-14 11:50:30.123758", "rc": 0, "start": "2026-02-14 11:50:30.095068" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:148 Saturday 14 February 2026 11:50:30 -0500 (0:00:00.427) 0:03:26.625 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:50:30 -0500 (0:00:00.067) 0:03:26.692 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 
2026 11:50:30 -0500 (0:00:00.039) 0:03:26.732 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:50:30 -0500 (0:00:00.062) 0:03:26.794 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:158 Saturday 14 February 2026 11:50:30 -0500 (0:00:00.048) 0:03:26.843 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.027055", "end": "2026-02-14 11:50:30.802067", "rc": 0, "start": "2026-02-14 11:50:30.775012" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:167 Saturday 14 February 2026 11:50:30 -0500 (0:00:00.444) 0:03:27.288 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.029477", "end": "2026-02-14 11:50:31.222399", "rc": 0, "start": "2026-02-14 11:50:31.192922" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:176 Saturday 14 February 2026 11:50:31 -0500 (0:00:00.419) 0:03:27.707 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.028668", "end": "2026-02-14 11:50:31.651094", 
"rc": 0, "start": "2026-02-14 11:50:31.622426" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:185 Saturday 14 February 2026 11:50:31 -0500 (0:00:00.431) 0:03:28.139 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.029589", "end": "2026-02-14 11:50:32.081253", "rc": 0, "start": "2026-02-14 11:50:32.051664" } STDOUT: podman podman-default-kube-network quadlet-basic-name TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:194 Saturday 14 February 2026 11:50:32 -0500 (0:00:00.428) 0:03:28.567 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - pods] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:204 Saturday 14 February 2026 11:50:32 -0500 (0:00:00.426) 0:03:28.994 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214 Saturday 14 February 2026 11:50:32 -0500 (0:00:00.411) 0:03:29.405 ***** ok: [managed-node2] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { 
"name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "apt-daily.service": { "name": "apt-daily.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", 
"source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", 
"source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, 
"emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fips-crypto-policy-overlay.service": { "name": "fips-crypto-policy-overlay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": 
"not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, 
"modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-nftables-reload.service": { "name": "netavark-nftables-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", 
"status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quadlet-basic-network.service": { "name": "quadlet-basic-network.service", "source": "systemd", "state": "stopped", "status": "generated" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": 
"unknown", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, 
"serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-unix-local@.service": { "name": "sshd-unix-local@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd-vsock@.service": { "name": "sshd-vsock@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": 
"sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": 
"systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": 
"systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, 
"systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", 
"state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": 
"systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": 
"disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-userdbd.service": { "name": "systemd-userdbd.service", "source": "systemd", "state": "running", "status": "indirect" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } 
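[Annotation] The tasks that follow tear down a quadlet network unit. Reconstructed from the `__podman_quadlet_str` fact and the `ExecStart` recorded later in this log (not copied from the file itself), the unit the role had written to `/etc/containers/systemd/quadlet-basic.network`, and is now cleaning up, would look like:

```ini
; Quadlet network unit rendered from templates/quadlet-basic.network.j2
; (content reconstructed from the __podman_quadlet_str fact in this log)
[Network]
Subnet=192.168.29.0/24
Gateway=192.168.29.1
Label=app=wordpress
NetworkName=quadlet-basic-name
```

On daemon-reload the quadlet generator translates this file into the transient `quadlet-basic-network.service` shown below (FragmentPath under /run/systemd/generator), whose ExecStart runs `podman network create --ignore --subnet 192.168.29.0/24 --gateway 192.168.29.1 --label app=wordpress quadlet-basic-name`.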
TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:50:35 -0500 (0:00:02.100) 0:03:31.505 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 0] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:14 Saturday 14 February 2026 11:50:35 -0500 (0:00:00.029) 0:03:31.534 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_file_src": "", "__podman_quadlet_spec": {}, "__podman_quadlet_str": "[Network]\nSubnet=192.168.29.0/24\nGateway=192.168.29.1\nLabel=app=wordpress\nNetworkName=quadlet-basic-name\n", "__podman_quadlet_template_src": "templates/quadlet-basic.network.j2" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 1] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:25 Saturday 14 February 2026 11:50:35 -0500 (0:00:00.153) 0:03:31.688 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_continue_if_pull_fails": false, "__podman_pull_image": true, "__podman_state": "absent", "__podman_systemd_unit_scope": "", "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Fail if no quadlet spec is given] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:35 Saturday 14 February 2026 11:50:35 -0500 (0:00:00.030) 0:03:31.718 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_quadlet_str", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.podman : Set per-container variables part 2] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:48 Saturday 14 February 2026 11:50:35 -0500 (0:00:00.096) 0:03:31.814 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_name": "quadlet-basic", "__podman_quadlet_type": "network", "__podman_rootless": false }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:57 Saturday 14 February 2026 11:50:35 -0500 (0:00:00.062) 0:03:31.876 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 14 February 2026 11:50:35 -0500 (0:00:00.057) 0:03:31.933 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "'getent_passwd' not in ansible_facts or __podman_handle_user not in ansible_facts['getent_passwd']", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 14 February 2026 11:50:35 -0500 (0:00:00.037) 0:03:31.971 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not ansible_facts[\"getent_passwd\"][__podman_handle_user]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 14 February 2026 11:50:35 -0500 (0:00:00.035) 0:03:32.006 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 14 February 2026 11:50:35 -0500 (0:00:00.043) 0:03:32.050 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087317.803642, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "f32c52c5ca57cf760bfb35ae49a86af53817352a", "ctime": 1771087310.205592, "dev": 51714, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 9163113, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-pie-executable", "mode": "0755", "mtime": 1764201600.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 15560, "uid": 0, "version": "2735365168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.397) 0:03:32.447 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.024) 0:03:32.472 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.025) 0:03:32.498 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_handle_user not in [\"root\", \"0\"]", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.023) 0:03:32.521 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.022) 0:03:32.544 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.025) 0:03:32.570 ***** skipping: [managed-node2] => { "changed": false, 
"false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.023) 0:03:32.594 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.023) 0:03:32.617 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_stat_getsubids.stat.exists", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 3] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:63 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.024) 0:03:32.642 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_activate_systemd_unit": true, "__podman_images_found": [], "__podman_kube_yamls_raw": "", "__podman_service_name": "quadlet-basic-network.service", "__podman_systemd_scope": "system", "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 4] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:73 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.046) 0:03:32.688 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_path": "/etc/containers/systemd" }, "changed": false } TASK [fedora.linux_system_roles.podman : 
Get kube yaml contents] *************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:79 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.047) 0:03:32.735 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set per-container variables part 5] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:90 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.031) 0:03:32.767 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_images": [], "__podman_quadlet_file": "/etc/containers/systemd/quadlet-basic.network", "__podman_volumes": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Set per-container variables part 6] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:108 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.075) 0:03:32.843 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Cleanup quadlets] ********************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:115 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.036) 0:03:32.879 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:4 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.062) 0:03:32.942 ***** skipping: 
[managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stop and disable service] ************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12 Saturday 14 February 2026 11:50:36 -0500 (0:00:00.037) 0:03:32.979 ***** changed: [managed-node2] => { "changed": true, "enabled": false, "failed_when_result": false, "name": "quadlet-basic-network.service", "state": "stopped", "status": { "AccessSELinuxContext": "system_u:object_r:systemd_generator_unit_file_t:s0", "ActiveEnterTimestamp": "Sat 2026-02-14 11:48:32 EST", "ActiveEnterTimestampMonotonic": "597148907", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "basic.target system.slice -.mount network-online.target systemd-journald.socket sysinit.target", "AllowIsolate": "no", "AssertResult": "yes", "AssertTimestamp": "Sat 2026-02-14 11:48:32 EST", "AssertTimestampMonotonic": "597106269", "Before": "shutdown.target", "BindLogSockets": "no", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "yes", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "35094000", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanLiveMount": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource 
cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore", "CleanResult": "success", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2026-02-14 11:48:32 EST", "ConditionTimestampMonotonic": "597106266", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroupId": "13306", "ControlPID": "0", "CoredumpFilter": "0x33", "CoredumpReceive": "no", "DebugInvocation": "no", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "DefaultStartupMemoryLow": "0", "Delegate": "no", "Description": "quadlet-basic-network.service", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveMemoryHigh": "3630997504", "EffectiveMemoryMax": "3630997504", "EffectiveTasksMax": "21804", "ExecMainCode": "1", "ExecMainExitTimestamp": "Sat 2026-02-14 11:48:32 EST", "ExecMainExitTimestampMonotonic": "597148716", "ExecMainHandoffTimestamp": "Sat 2026-02-14 11:48:32 EST", "ExecMainHandoffTimestampMonotonic": "597116519", "ExecMainPID": "79789", "ExecMainStartTimestamp": "Sat 2026-02-14 11:48:32 EST", "ExecMainStartTimestampMonotonic": "597107028", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore --subnet 192.168.29.0/24 --gateway 192.168.29.1 --label app=wordpress quadlet-basic-name ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartEx": "{ path=/usr/bin/podman ; argv[]=/usr/bin/podman network create --ignore --subnet 192.168.29.0/24 --gateway 192.168.29.1 --label app=wordpress quadlet-basic-name ; flags= ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExitType": "main", "ExtensionImagePolicy": 
"root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FileDescriptorStorePreserve": "restart", "FinalKillSignal": "9", "FragmentPath": "/run/systemd/generator/quadlet-basic-network.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOReadBytes": "[not set]", "IOReadOperations": "[not set]", "IOSchedulingClass": "2", "IOSchedulingPriority": "4", "IOWeight": "[not set]", "IOWriteBytes": "[not set]", "IOWriteOperations": "[not set]", "IPAccounting": "no", "IPEgressBytes": "[no data]", "IPEgressPackets": "[no data]", "IPIngressBytes": "[no data]", "IPIngressPackets": "[no data]", "Id": "quadlet-basic-network.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2026-02-14 11:48:32 EST", "InactiveExitTimestampMonotonic": "597107550", "InvocationID": "c3b503d774d044e2b272914ff1cb5fd7", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "8388608", "LimitMEMLOCKSoft": "8388608", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "524288", "LimitNOFILESoft": "1024", "LimitNPROC": "13627", "LimitNPROCSoft": "13627", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", 
"LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "13627", "LimitSIGPENDINGSoft": "13627", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LiveMountResult": "success", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "ManagedOOMMemoryPressure": "auto", "ManagedOOMMemoryPressureDurationUSec": "[not set]", "ManagedOOMMemoryPressureLimit": "0", "ManagedOOMPreference": "none", "ManagedOOMSwap": "auto", "MemoryAccounting": "yes", "MemoryAvailable": "3072376832", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryKSM": "no", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemoryPeak": "14045184", "MemoryPressureThresholdUSec": "200ms", "MemoryPressureWatch": "auto", "MemorySwapCurrent": "[not set]", "MemorySwapMax": "infinity", "MemorySwapPeak": "0", "MemoryZSwapCurrent": "[not set]", "MemoryZSwapMax": "infinity", "MemoryZSwapWriteback": "yes", "MountAPIVFS": "no", "MountImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAPolicy": "n/a", "Names": "quadlet-basic-network.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMPolicy": "stop", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "OnSuccessJobMode": "fail", "Perpetual": "no", "PrivateDevices": "no", "PrivateIPC": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivatePIDs": "no", "PrivateTmp": "no", "PrivateTmpEx": "no", "PrivateUsers": "no", "PrivateUsersEx": "no", "ProcSubset": "all", "ProtectClock": "no", "ProtectControlGroups": "no", 
"ProtectControlGroupsEx": "no", "ProtectHome": "no", "ProtectHostname": "no", "ProtectKernelLogs": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectProc": "default", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "ReloadResult": "success", "ReloadSignal": "1", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "system.slice sysinit.target -.mount", "RequiresMountsFor": "/run/containers", "Restart": "no", "RestartKillSignal": "15", "RestartMaxDelayUSec": "infinity", "RestartMode": "normal", "RestartSteps": "0", "RestartUSec": "100ms", "RestartUSecNext": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RootEphemeral": "no", "RootImagePolicy": "root=verity+signed+encrypted+unprotected+absent:usr=verity+signed+encrypted+unprotected+absent:home=encrypted+unprotected+absent:srv=encrypted+unprotected+absent:tmp=encrypted+unprotected+absent:var=encrypted+unprotected+absent", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "RuntimeRandomizedExtraUSec": "0", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SetLoginEnvironment": "no", "Slice": "system.slice", "SourcePath": "/etc/containers/systemd/quadlet-basic.network", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StartupMemoryHigh": "infinity", "StartupMemoryLow": "0", "StartupMemoryMax": "infinity", "StartupMemorySwapMax": "infinity", "StartupMemoryZSwapMax": "infinity", "StateChangeTimestamp": "Sat 2026-02-14 11:48:32 EST", "StateChangeTimestampMonotonic": "597148907", "StateDirectoryMode": "0755", "StatusErrno": "0", 
"StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SurviveFinalKillSignal": "no", "SyslogFacility": "3", "SyslogIdentifier": "quadlet-basic-network", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "2147483646", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "21804", "TimeoutAbortUSec": "1min 30s", "TimeoutCleanUSec": "infinity", "TimeoutStartFailureMode": "terminate", "TimeoutStartUSec": "infinity", "TimeoutStopFailureMode": "terminate", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "generated", "UtmpMode": "init", "Wants": "network-online.target", "WatchdogSignal": "6", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.podman : See if quadlet file exists] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:35 Saturday 14 February 2026 11:50:37 -0500 (0:00:00.820) 0:03:33.800 ***** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1771087711.5137534, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "19c9b17be2af9b9deca5c3bd327f048966750682", "ctime": 1771087711.5171118, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 415236346, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1771087711.2237515, "nlink": 1, "path": "/etc/containers/systemd/quadlet-basic.network", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 105, "uid": 0, "version": "3323148039", 
"wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:40 Saturday 14 February 2026 11:50:37 -0500 (0:00:00.392) 0:03:34.192 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Slurp quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:6 Saturday 14 February 2026 11:50:37 -0500 (0:00:00.036) 0:03:34.229 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet file] ******************* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:12 Saturday 14 February 2026 11:50:38 -0500 (0:00:00.367) 0:03:34.597 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Parse quadlet yaml file] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:20 Saturday 14 February 2026 11:50:38 -0500 (0:00:00.032) 0:03:34.630 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Reset raw variable] ******************* task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/parse_quadlet_file.yml:28 Saturday 14 February 2026 11:50:38 -0500 (0:00:00.022) 0:03:34.653 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_raw": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Remove quadlet file] ****************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:44 Saturday 14 February 2026 11:50:38 -0500 (0:00:00.022) 0:03:34.676 ***** changed: [managed-node2] => { "changed": true, "path": "/etc/containers/systemd/quadlet-basic.network", "state": "absent" } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:52 Saturday 14 February 2026 11:50:38 -0500 (0:00:00.388) 0:03:35.064 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "not __podman_is_booted", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Refresh systemd] ********************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:60 Saturday 14 February 2026 11:50:38 -0500 (0:00:00.020) 0:03:35.085 ***** ok: [managed-node2] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.podman : Remove managed resource] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:70 Saturday 14 February 2026 11:50:39 -0500 (0:00:00.761) 0:03:35.847 ***** changed: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true } TASK [fedora.linux_system_roles.podman : Remove volumes] *********************** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:116 Saturday 14 February 2026 11:50:39 -0500 (0:00:00.473) 0:03:36.320 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Clear parsed podman variable] ********* task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:133 Saturday 14 February 2026 11:50:39 -0500 (0:00:00.039) 0:03:36.360 ***** ok: [managed-node2] => { "ansible_facts": { "__podman_quadlet_parsed": null }, "changed": false } TASK [fedora.linux_system_roles.podman : Prune images no longer in use] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:137 Saturday 14 February 2026 11:50:39 -0500 (0:00:00.027) 0:03:36.388 ***** changed: [managed-node2] => { "changed": true, "cmd": [ "podman", "image", "prune", "--all", "-f" ], "delta": "0:00:00.027992", "end": "2026-02-14 11:50:40.342512", "rc": 0, "start": "2026-02-14 11:50:40.314520" } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:148 Saturday 14 February 2026 11:50:40 -0500 (0:00:00.450) 0:03:36.838 ***** included: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:13 Saturday 14 February 2026 11:50:40 -0500 (0:00:00.068) 0:03:36.907 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", 
"skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 14 February 2026 11:50:40 -0500 (0:00:00.127) 0:03:37.034 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 14 February 2026 11:50:40 -0500 (0:00:00.036) 0:03:37.070 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_rootless | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : For testing and debugging - images] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:158 Saturday 14 February 2026 11:50:40 -0500 (0:00:00.035) 0:03:37.106 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "images", "-n" ], "delta": "0:00:00.028631", "end": "2026-02-14 11:50:41.045805", "rc": 0, "start": "2026-02-14 11:50:41.017174" } TASK [fedora.linux_system_roles.podman : For testing and debugging - volumes] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:167 Saturday 14 February 2026 11:50:41 -0500 (0:00:00.427) 0:03:37.533 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "volume", "ls", "-n" ], "delta": "0:00:00.029039", "end": "2026-02-14 11:50:41.467562", "rc": 0, "start": "2026-02-14 11:50:41.438523" } TASK [fedora.linux_system_roles.podman : For testing and debugging - containers] *** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:176 Saturday 14 February 2026 11:50:41 -0500 (0:00:00.425) 0:03:37.959 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "ps", "--noheading" ], "delta": "0:00:00.027428", "end": "2026-02-14 11:50:41.890548", "rc": 0, "start": "2026-02-14 11:50:41.863120" } TASK [fedora.linux_system_roles.podman : For testing and debugging - networks] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:185 Saturday 14 February 2026 11:50:41 -0500 (0:00:00.416) 0:03:38.375 ***** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "network", "ls", "-n", "-q" ], "delta": "0:00:00.028247", "end": "2026-02-14 11:50:42.306077", "rc": 0, "start": "2026-02-14 11:50:42.277830" } STDOUT: podman podman-default-kube-network TASK [fedora.linux_system_roles.podman : For testing and debugging - secrets] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:194 Saturday 14 February 2026 11:50:42 -0500 (0:00:00.421) 0:03:38.796 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - pods] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:204 Saturday 14 February 2026 11:50:42 -0500 (0:00:00.421) 0:03:39.218 ***** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : For testing and debugging - services] *** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214 Saturday 
14 February 2026 11:50:43 -0500 (0:00:00.420) 0:03:39.639 ***** ok: [managed-node2] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "apt-daily.service": { "name": "apt-daily.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, 
"cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.fedoraproject.FirewallD1.service": { "name": "dbus-org.fedoraproject.FirewallD1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", 
"state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": 
"dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fips-crypto-policy-overlay.service": { "name": "fips-crypto-policy-overlay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "running", "status": "enabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": 
"initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "ip6tables.service": { "name": "ip6tables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ipset.service": { "name": "ipset.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iptables.service": { "name": "iptables.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "microcode.service": { "name": 
"microcode.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "netavark-dhcp-proxy.service": { "name": "netavark-dhcp-proxy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-firewalld-reload.service": { "name": "netavark-firewalld-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "netavark-nftables-reload.service": { "name": "netavark-nftables-reload.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "podman-auto-update.service": { "name": "podman-auto-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-clean-transient.service": { "name": "podman-clean-transient.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman-kube@.service": { "name": "podman-kube@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "podman-restart.service": { "name": "podman-restart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "podman.service": { "name": "podman.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, 
"quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": 
"selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-unix-local@.service": { "name": "sshd-unix-local@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd-vsock@.service": { "name": "sshd-vsock@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": 
"sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": 
"systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", 
"status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", 
"source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, 
"systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": 
"inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": 
"systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-userdbd.service": { "name": "systemd-userdbd.service", "source": "systemd", "state": "running", "status": "indirect" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, 
"user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.podman : Create and update quadlets] *********** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_quadlet_spec.yml:119 Saturday 14 February 2026 11:50:45 -0500 (0:00:02.065) 0:03:41.705 ***** skipping: [managed-node2] => { "changed": false, "false_condition": "__podman_state != \"absent\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Cancel linger] ************************ task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:198 Saturday 14 February 2026 11:50:45 -0500 (0:00:00.025) 0:03:41.731 ***** skipping: [managed-node2] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.podman : Handle credential files - absent] ***** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:205 Saturday 14 February 2026 11:50:45 -0500 (0:00:00.018) 0:03:41.749 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - absent] ******** task path: /tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:214 Saturday 14 February 2026 11:50:45 -0500 (0:00:00.017) 0:03:41.767 ***** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [Ensure no resources] ***************************************************** task path: 
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:365 Saturday 14 February 2026 11:50:45 -0500 (0:00:00.025) 0:03:41.793 ***** ok: [managed-node2] => { "changed": false } MSG: All assertions passed PLAY RECAP ********************************************************************* managed-node2 : ok=787 changed=73 unreachable=0 failed=1 skipped=970 rescued=1 ignored=0 SYSTEM ROLES ERRORS BEGIN v1 [ { "ansible_version": "2.17.14", "end_time": "2026-02-14T16:48:51.228343+00:00Z", "host": "managed-node2", "message": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "start_time": "2026-02-14T16:48:47.210352+00:00Z", "task_name": "Ensure container images are present", "task_path": "/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2" }, { "ansible_version": "2.17.14", "delta": "0:00:00.033270", "end_time": "2026-02-14 11:48:52.586838", "host": "managed-node2", "message": "", "rc": 0, "start_time": "2026-02-14 11:48:52.553568", "stdout": "Feb 14 11:44:06 managed-node2 conmon[33222]: conmon 257f1d51b1075ac8c229 : winsz read side: 15, winsz write side: 16\nFeb 14 11:44:06 managed-node2 systemd[1]: Started libpod-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope - libcrun container.\n\u2591\u2591 Subject: A start job for unit libpod-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit libpod-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2427.\nFeb 14 11:44:06 managed-node2 conmon[33222]: conmon 257f1d51b1075ac8c229 : container PID: 33224\nFeb 14 11:44:06 managed-node2 podman[33160]: 2026-02-14 11:44:06.122843666 -0500 EST 
m=+0.369090254 container init 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry)\nFeb 14 11:44:06 managed-node2 podman[33160]: 2026-02-14 11:44:06.126102643 -0500 EST m=+0.372349223 container start 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry)\nFeb 14 11:44:06 managed-node2 podman[33160]: 2026-02-14 11:44:06.130281128 -0500 EST m=+0.376527469 pod start 0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2 (image=, name=httpd2)\nFeb 14 11:44:06 managed-node2 python3.12[33153]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml\nFeb 14 11:44:06 managed-node2 python3.12[33153]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod:\n 0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2\n Container:\n 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\n \nFeb 14 11:44:06 managed-node2 python3.12[33153]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2026-02-14T11:44:05-05:00\" level=info msg=\"/usr/bin/podman filtering at log level debug\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2026-02-14T11:44:05-05:00\" level=info msg=\"Setting parallel 
job count to 7\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Using run root /run/containers/storage\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Using tmp dir /run/libpod\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Using transient store: false\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\n time=\"2026-02-14T11:44:05-05:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Initializing event backend journald\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime 
youki: invalid argument\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/crun\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 bridge podman1 2026-02-14 11:42:18.681480354 -0500 EST [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Successfully loaded 2 networks\"\n 
time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice for parent machine.slice and name libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"no command or entrypoint provided, and no CMD or ENTRYPOINT from image: defaulting to empty string\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"setting container name 0454ffcc8b08-infra\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Allocated lock 1 for container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Cached value indicated that idmapped mounts for overlay are supported\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Created container \\\"337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Container \\\"337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a\\\" has work directory \\\"/var/lib/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/userdata\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Container 
\\\"337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a\\\" has run directory \\\"/run/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/userdata\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Pulling image quay.io/libpod/testimage:20210610 (policy: missing)\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Trying 
\\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"parsed reference into 
\\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"adding container to pod httpd2\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"setting container name httpd2-httpd2\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Loading seccomp profile from 
\\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=info msg=\"Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Adding mount /proc\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Adding mount /dev\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Adding mount /dev/pts\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Adding mount /dev/mqueue\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Adding mount /sys\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Adding mount /sys/fs/cgroup\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Allocated lock 2 for container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Created container \\\"257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Container \\\"257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\\\" has work directory \\\"/var/lib/containers/storage/overlay-containers/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2/userdata\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Container \\\"257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\\\" has run directory \\\"/run/containers/storage/overlay-containers/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2/userdata\\\"\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Strongconnecting node 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Pushed 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a onto stack\"\n 
time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Finishing node 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a. Popped 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a off stack\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Strongconnecting node 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Pushed 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 onto stack\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Finishing node 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2. Popped 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 off stack\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Made network namespace at /run/netns/netns-88026c48-10f7-33b6-4b2c-c0b5e250fcc3 for container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a\"\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"Created root filesystem for container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a at /var/lib/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/rootfs/merge\"\n [DEBUG netavark::network::validation] Validating network namespace...\n [DEBUG netavark::commands::setup] Setting up...\n [INFO netavark::firewall] Using nftables firewall driver\n [DEBUG netavark::network::bridge] Setup network podman-default-kube-network\n [DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.2/24]\n [DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24]\n [DEBUG netavark::network::bridge] Using mtu 9001 from default route interface for the network\n [DEBUG netavark::network::sysctl] Setting sysctl value for net/ipv4/ip_forward to 1\n [DEBUG netavark::network::sysctl] Setting sysctl value for net/ipv4/conf/podman1/route_localnet to 1\n [DEBUG netavark::network::sysctl] 
Setting sysctl value for net/ipv4/conf/podman1/rp_filter to 2\n [DEBUG netavark::network::sysctl] Setting sysctl value for net/ipv6/conf/eth0/autoconf to 0\n [DEBUG netavark::network::sysctl] Setting sysctl value for net/ipv4/conf/eth0/arp_notify to 1\n [DEBUG netavark::network::sysctl] Setting sysctl value for net/ipv4/conf/eth0/rp_filter to 2\n [INFO netavark::network::netlink_route] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100)\n [DEBUG netavark::firewall::firewalld] Adding firewalld rules for network 10.89.0.0/24\n [DEBUG netavark::firewall::firewalld] Adding subnet 10.89.0.0/24 to zone trusted as source\n [INFO netavark::firewall::nft] Creating container chain nv_abf306ea_10_89_0_0_nm24\n [DEBUG netavark::dns::aardvark] Spawning aardvark server\n [DEBUG netavark::dns::aardvark] start aardvark-dns: [\"systemd-run\", \"-q\", \"--scope\", \"/usr/libexec/podman/aardvark-dns\", \"--config\", \"/run/containers/networks/aardvark-dns\", \"-p\", \"53\", \"run\"]\n [DEBUG netavark::commands::setup] {\n \"podman-default-kube-network\": StatusBlock {\n dns_search_domains: Some(\n [\n \"dns.podman\",\n ],\n ),\n dns_server_ips: Some(\n [\n 10.89.0.1,\n ],\n ),\n interfaces: Some(\n {\n \"eth0\": NetInterface {\n mac_address: \"fe:41:3b:4d:b8:2d\",\n subnets: Some(\n [\n NetAddress {\n gateway: Some(\n 10.89.0.1,\n ),\n ipnet: 10.89.0.2/24,\n },\n ],\n ),\n },\n },\n ),\n },\n }\n [DEBUG netavark::commands::setup] Setup complete\n time=\"2026-02-14T11:44:05-05:00\" level=debug msg=\"/proc/sys/crypto/fips_enabled does not contain '1', not adding FIPS mode bind mounts\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Setting Cgroups for container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a to machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice:libpod:337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"reading hooks from 
/usr/share/containers/oci/hooks.d\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Workdir \\\"/\\\" resolved to host path \\\"/var/lib/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/rootfs/merge\\\"\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Created OCI spec for container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a at /var/lib/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/userdata/config.json\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice for parent machine.slice and name libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a -u 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/userdata -p /run/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/userdata/pidfile -n 0454ffcc8b08-infra --exit-dir /run/libpod/exits --persist-dir /run/libpod/persist/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a --full-attach -s -l 
journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --hooks-dir --exit-command-arg /usr/share/containers/oci/hooks.d --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --stopped-only --exit-command-arg 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a]\"\n time=\"2026-02-14T11:44:06-05:00\" level=info msg=\"Running conmon under slice machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice and unitName libpod-conmon-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a.scope\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Received: 33219\"\n time=\"2026-02-14T11:44:06-05:00\" level=info msg=\"Got Conmon PID as 33217\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Created container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a in OCI runtime\"\n 
time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Adding nameserver(s) from network status of '[\\\"10.89.0.1\\\"]'\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Adding search domain(s) from network status of '[\\\"dns.podman\\\"]'\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Starting container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a with command [/catatonit -P]\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Started container 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/VRWALXDJXGCPTG6I4RK6YKJW5Q,upperdir=/var/lib/containers/storage/overlay/aa7c321cb1b9cc03826ed0f9ed5929182186de6635ed9454bb236c5d02f2a31c/diff,workdir=/var/lib/containers/storage/overlay/aa7c321cb1b9cc03826ed0f9ed5929182186de6635ed9454bb236c5d02f2a31c/work,nodev,metacopy=on,context=\\\"system_u:object_r:container_file_t:s0:c28,c372\\\"\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Mounted container \\\"257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\\\" at \\\"/var/lib/containers/storage/overlay/aa7c321cb1b9cc03826ed0f9ed5929182186de6635ed9454bb236c5d02f2a31c/merged\\\"\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Created root filesystem for container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 at /var/lib/containers/storage/overlay/aa7c321cb1b9cc03826ed0f9ed5929182186de6635ed9454bb236c5d02f2a31c/merged\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"/proc/sys/crypto/fips_enabled does not contain '1', not adding FIPS mode bind mounts\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Setting Cgroups for container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 to 
machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice:libpod:257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Workdir \\\"/var/www\\\" resolved to a volume or mount\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Created OCI spec for container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 at /var/lib/containers/storage/overlay-containers/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2/userdata/config.json\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice for parent machine.slice and name libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 -u 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2/userdata -p /run/containers/storage/overlay-containers/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2/userdata/pidfile -n httpd2-httpd2 --exit-dir /run/libpod/exits 
--persist-dir /run/libpod/persist/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --hooks-dir --exit-command-arg /usr/share/containers/oci/hooks.d --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --stopped-only --exit-command-arg 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2]\"\n time=\"2026-02-14T11:44:06-05:00\" level=info msg=\"Running conmon under slice machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice and unitName libpod-conmon-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Received: 33224\"\n time=\"2026-02-14T11:44:06-05:00\" level=info msg=\"Got Conmon PID as 33222\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug 
msg=\"Created container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 in OCI runtime\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Starting container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 with command [/bin/busybox-extras httpd -f -p 80]\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Started container 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Called kube.PersistentPostRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2026-02-14T11:44:06-05:00\" level=debug msg=\"Shutting down engines\"\n time=\"2026-02-14T11:44:06-05:00\" level=info msg=\"Received shutdown.Stop(), terminating!\" PID=33160\nFeb 14 11:44:06 managed-node2 python3.12[33153]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nFeb 14 11:44:06 managed-node2 python3.12[33380]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:44:06 managed-node2 systemd[1]: Reload requested from client PID 33381 ('systemctl') (unit session-8.scope)...\nFeb 14 11:44:06 managed-node2 systemd[1]: Reloading...\nFeb 14 11:44:06 managed-node2 systemd-rc-local-generator[33433]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:44:06 managed-node2 systemd[1]: Reloading finished in 228 ms.\nFeb 14 11:44:07 managed-node2 python3.12[33601]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None\nFeb 14 11:44:07 managed-node2 systemd[1]: Reload requested from client PID 33604 ('systemctl') (unit session-8.scope)...\nFeb 14 11:44:07 managed-node2 systemd[1]: Reloading...\nFeb 14 11:44:07 managed-node2 
systemd-rc-local-generator[33655]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:44:07 managed-node2 systemd[1]: Reloading finished in 220 ms.\nFeb 14 11:44:08 managed-node2 python3.12[33824]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None\nFeb 14 11:44:08 managed-node2 systemd[1]: Created slice system-podman\\x2dkube.slice - Slice /system/podman-kube.\n\u2591\u2591 Subject: A start job for unit system-podman\\x2dkube.slice has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit system-podman\\x2dkube.slice has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2512.\nFeb 14 11:44:08 managed-node2 systemd[1]: Starting podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service - A template for running K8s workloads via podman-kube-play...\n\u2591\u2591 Subject: A start job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2434.\nFeb 14 11:44:08 managed-node2 podman[33828]: 2026-02-14 11:44:08.458903188 -0500 EST m=+0.025281117 pod stop 0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2 (image=, name=httpd2)\nFeb 14 11:44:15 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit 
NetworkManager-dispatcher.service has successfully entered the 'dead' state.\nFeb 14 11:44:18 managed-node2 podman[33828]: time=\"2026-02-14T11:44:18-05:00\" level=warning msg=\"StopSignal SIGTERM failed to stop container httpd2-httpd2 in 10 seconds, resorting to SIGKILL\"\nFeb 14 11:44:18 managed-node2 conmon[33222]: conmon 257f1d51b1075ac8c229 : container 33224 exited with status 137\nFeb 14 11:44:18 managed-node2 systemd[1]: libpod-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope has successfully entered the 'dead' state.\nFeb 14 11:44:18 managed-node2 conmon[33222]: conmon 257f1d51b1075ac8c229 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice/libpod-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope/container/memory.events\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.481884669 -0500 EST m=+10.048262979 container died 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage)\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --hooks-dir 
/usr/share/containers/oci/hooks.d --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup --stopped-only 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2)\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=info msg=\"Setting parallel job count to 7\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=info msg=\"Using sqlite as database backend\"\nFeb 14 11:44:18 managed-node2 systemd[1]: var-lib-containers-storage-overlay-aa7c321cb1b9cc03826ed0f9ed5929182186de6635ed9454bb236c5d02f2a31c-merged.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay-aa7c321cb1b9cc03826ed0f9ed5929182186de6635ed9454bb236c5d02f2a31c-merged.mount has successfully entered the 'dead' state.\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using graph driver overlay\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using run root /run/containers/storage\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\nFeb 14 11:44:18 managed-node2 
/usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using tmp dir /run/libpod\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using transient store: false\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Initializing event backend journald\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime crun-vm initialization failed: no valid executable found 
for OCI runtime crun-vm: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using OCI runtime 
\\\"/usr/bin/crun\\\"\"\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.518945693 -0500 EST m=+10.085323581 container cleanup 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0)\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --hooks-dir /usr/share/containers/oci/hooks.d --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup --stopped-only 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2)\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Shutting down engines\"\nFeb 14 11:44:18 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33840]: time=\"2026-02-14T11:44:18-05:00\" level=info msg=\"Received shutdown.Stop(), terminating!\" PID=33840\nFeb 14 11:44:18 managed-node2 systemd[1]: libpod-conmon-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope: Deactivated 
successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-conmon-257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2.scope has successfully entered the 'dead' state.\nFeb 14 11:44:18 managed-node2 systemd[1]: libpod-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a.scope has successfully entered the 'dead' state.\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.529731368 -0500 EST m=+10.096109300 container stop 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a (image=, name=0454ffcc8b08-infra, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2)\nFeb 14 11:44:18 managed-node2 conmon[33217]: conmon 337f995e0e5b707c847d : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice/libpod-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a.scope/container/memory.events\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.531475473 -0500 EST m=+10.097853456 container died 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a (image=, name=0454ffcc8b08-infra)\nFeb 14 11:44:18 managed-node2 aardvark-dns[33213]: Received SIGHUP\nFeb 14 11:44:18 managed-node2 aardvark-dns[33213]: Successfully parsed config\nFeb 14 11:44:18 managed-node2 aardvark-dns[33213]: Listen v4 ip {}\nFeb 14 11:44:18 managed-node2 aardvark-dns[33213]: Listen v6 ip {}\nFeb 14 11:44:18 managed-node2 aardvark-dns[33213]: No configuration found stopping the sever\nFeb 14 11:44:18 managed-node2 kernel: 
podman1: port 1(veth0) entered disabled state\nFeb 14 11:44:18 managed-node2 systemd[1]: run-p33207-i33208.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit run-p33207-i33208.scope has successfully entered the 'dead' state.\nFeb 14 11:44:18 managed-node2 kernel: veth0 (unregistering): left allmulticast mode\nFeb 14 11:44:18 managed-node2 kernel: veth0 (unregistering): left promiscuous mode\nFeb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered disabled state\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --hooks-dir /usr/share/containers/oci/hooks.d --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup --stopped-only 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a)\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=info msg=\"Setting parallel job count to 7\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=info msg=\"Using sqlite as database backend\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: 
time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using graph driver overlay\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using run root /run/containers/storage\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using tmp dir /run/libpod\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using transient store: false\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: 
kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Initializing event backend journald\"\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.5734] device (podman1): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: 
time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/crun\\\"\"\nFeb 14 11:44:18 managed-node2 systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service...\n\u2591\u2591 Subject: A start job for unit NetworkManager-dispatcher.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit NetworkManager-dispatcher.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2519.\nFeb 14 11:44:18 managed-node2 systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service.\n\u2591\u2591 Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit NetworkManager-dispatcher.service has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2519.\nFeb 14 11:44:18 managed-node2 systemd[1]: run-netns-netns\\x2d88026c48\\x2d10f7\\x2d33b6\\x2d4b2c\\x2dc0b5e250fcc3.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 
Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit run-netns-netns\\x2d88026c48\\x2d10f7\\x2d33b6\\x2d4b2c\\x2dc0b5e250fcc3.mount has successfully entered the 'dead' state.\nFeb 14 11:44:18 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a-rootfs-merge.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay\\x2dcontainers-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a-rootfs-merge.mount has successfully entered the 'dead' state.\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.645299132 -0500 EST m=+10.211677129 container cleanup 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a (image=, name=0454ffcc8b08-infra, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2)\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend netavark --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --hooks-dir /usr/share/containers/oci/hooks.d --runtime crun --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend journald --syslog container cleanup --stopped-only 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a)\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=debug msg=\"Shutting down engines\"\nFeb 14 11:44:18 managed-node2 /usr/bin/podman[33851]: time=\"2026-02-14T11:44:18-05:00\" level=info msg=\"Received 
shutdown.Stop(), terminating!\" PID=33851\nFeb 14 11:44:18 managed-node2 systemd[1]: libpod-conmon-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-conmon-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a.scope has successfully entered the 'dead' state.\nFeb 14 11:44:18 managed-node2 systemd[1]: Removed slice machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice - cgroup machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice.\n\u2591\u2591 Subject: A stop job for unit machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit machine-libpod_pod_0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2.slice has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2598 and the job result is done.\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.70205527 -0500 EST m=+10.268433185 container remove 257f1d51b1075ac8c229f0d62d7b678aa26c14251ea847f34b652d947f7fe1e2 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry)\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.723359249 -0500 EST m=+10.289737186 container remove 337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a (image=, name=0454ffcc8b08-infra, pod_id=0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2)\nFeb 14 11:44:18 managed-node2 
podman[33828]: 2026-02-14 11:44:18.735329201 -0500 EST m=+10.301707105 pod remove 0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2 (image=, name=httpd2)\nFeb 14 11:44:18 managed-node2 podman[33828]: Pods stopped:\nFeb 14 11:44:18 managed-node2 podman[33828]: 0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2\nFeb 14 11:44:18 managed-node2 podman[33828]: Pods removed:\nFeb 14 11:44:18 managed-node2 podman[33828]: 0454ffcc8b080d4db100f49d8076dad871ad1e2ad275ec0a21667562f2f6c7e2\nFeb 14 11:44:18 managed-node2 podman[33828]: Secrets removed:\nFeb 14 11:44:18 managed-node2 podman[33828]: Volumes removed:\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.735787226 -0500 EST m=+10.302165293 network create abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 (name=podman-default-kube-network, type=bridge)\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.753265753 -0500 EST m=+10.319643757 container create c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:44:18 managed-node2 systemd[1]: Created slice machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice - cgroup machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice.\n\u2591\u2591 Subject: A start job for unit machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2600.\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.789688005 -0500 EST 
m=+10.356065937 container create 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.793663888 -0500 EST m=+10.360041777 pod create dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3 (image=, name=httpd2)\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.795354104 -0500 EST m=+10.361732147 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.818008901 -0500 EST m=+10.384386884 container create 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, io.containers.autoupdate=registry, app=test, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0)\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.818367488 -0500 EST m=+10.384745413 container restart c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:44:18 managed-node2 systemd[1]: Started libpod-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3.scope - libcrun container.\n\u2591\u2591 Subject: A start job for unit libpod-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 
A start job for unit libpod-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2606.\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.872690582 -0500 EST m=+10.439068523 container init c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:44:18 managed-node2 podman[33828]: 2026-02-14 11:44:18.876183545 -0500 EST m=+10.442561631 container start c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered blocking state\nFeb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered disabled state\nFeb 14 11:44:18 managed-node2 kernel: veth0: entered allmulticast mode\nFeb 14 11:44:18 managed-node2 kernel: veth0: entered promiscuous mode\nFeb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered blocking state\nFeb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered forwarding state\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.8947] manager: (podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/5)\nFeb 14 11:44:18 managed-node2 (udev-worker)[33859]: Network interface NamePolicy= disabled on kernel command line.\nFeb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered disabled state\nFeb 14 11:44:18 managed-node2 (udev-worker)[33860]: Network interface NamePolicy= disabled on kernel command line.\nFeb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered blocking state\nFeb 14 11:44:18 managed-node2 kernel: podman1: port 1(veth0) entered forwarding state\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: 
[1771087458.9052] device (podman1): carrier: link connected\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9058] device (veth0): carrier: link connected\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9060] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/6)\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9180] device (podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9187] device (podman1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9199] device (podman1): Activation: starting connection 'podman1' (ddcc90c9-9614-4d49-81ea-de66f003e113)\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9201] device (podman1): state change: disconnected -> prepare (reason 'none', managed-type: 'external')\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9204] device (podman1): state change: prepare -> config (reason 'none', managed-type: 'external')\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9207] device (podman1): state change: config -> ip-config (reason 'none', managed-type: 'external')\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9210] device (podman1): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9268] device (podman1): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9304] device (podman1): state change: secondaries -> activated (reason 'none', managed-type: 'external')\nFeb 14 11:44:18 managed-node2 NetworkManager[815]: [1771087458.9322] device (podman1): Activation: successful, device activated.\nFeb 14 11:44:18 
managed-node2 systemd[1]: Started run-p33919-i33920.scope - [systemd-run] /usr/libexec/podman/aardvark-dns --config /run/containers/networks/aardvark-dns -p 53 run.\n\u2591\u2591 Subject: A start job for unit run-p33919-i33920.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit run-p33919-i33920.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2612.\nFeb 14 11:44:19 managed-node2 systemd[1]: Started libpod-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0.scope - libcrun container.\n\u2591\u2591 Subject: A start job for unit libpod-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit libpod-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2618.\nFeb 14 11:44:19 managed-node2 podman[33828]: 2026-02-14 11:44:19.029725697 -0500 EST m=+10.596103734 container init 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:44:19 managed-node2 podman[33828]: 2026-02-14 11:44:19.032671824 -0500 EST m=+10.599049788 container start 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:44:19 managed-node2 systemd[1]: Started 
libpod-0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528.scope - libcrun container.\n\u2591\u2591 Subject: A start job for unit libpod-0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit libpod-0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2625.\nFeb 14 11:44:19 managed-node2 podman[33828]: 2026-02-14 11:44:19.071338207 -0500 EST m=+10.637716150 container init 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z)\nFeb 14 11:44:19 managed-node2 podman[33828]: 2026-02-14 11:44:19.073950536 -0500 EST m=+10.640328628 container start 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage)\nFeb 14 11:44:19 managed-node2 podman[33828]: 2026-02-14 11:44:19.078480939 -0500 EST m=+10.644858856 pod start dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3 (image=, name=httpd2)\nFeb 14 11:44:19 managed-node2 podman[33828]: Pod:\nFeb 14 11:44:19 managed-node2 podman[33828]: 
dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3\nFeb 14 11:44:19 managed-node2 podman[33828]: Container:\nFeb 14 11:44:19 managed-node2 podman[33828]: 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528\nFeb 14 11:44:19 managed-node2 systemd[1]: Started podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service - A template for running K8s workloads via podman-kube-play.\n\u2591\u2591 Subject: A start job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2434.\nFeb 14 11:44:19 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a-userdata-shm.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay\\x2dcontainers-337f995e0e5b707c847dafe95d43a1ef42f59b79182db7c8511e578c91d5dd3a-userdata-shm.mount has successfully entered the 'dead' state.\nFeb 14 11:44:19 managed-node2 python3.12[34088]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:44:20 managed-node2 python3.12[34245]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:44:21 managed-node2 
python3.12[34401]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:44:21 managed-node2 python3.12[34556]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:44:22 managed-node2 podman[34733]: 2026-02-14 11:44:22.965161635 -0500 EST m=+0.328798382 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610\nFeb 14 11:44:23 managed-node2 python3.12[34923]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:44:23 managed-node2 python3.12[35078]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:44:24 managed-node2 python3.12[35233]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 
11:44:24 managed-node2 python3.12[35358]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd3.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1771087464.1235387-15297-279098182960707/.source.yml _original_basename=.o5ptkggs follow=False checksum=e4784a08bb43caa8f773f2aa113f2c5371f34613 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:44:25 managed-node2 python3.12[35513]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nFeb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.238227145 -0500 EST m=+0.013796205 network create abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 (name=podman-default-kube-network, type=bridge)\nFeb 14 11:44:25 managed-node2 systemd[1]: Created slice machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice - cgroup machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice.\n\u2591\u2591 Subject: A start job for unit machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice has 
finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2632.\nFeb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.274947218 -0500 EST m=+0.050516294 container create fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d)\nFeb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.279274874 -0500 EST m=+0.054843985 pod create cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d (image=, name=httpd3)\nFeb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.305656895 -0500 EST m=+0.081226077 container create 48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry)\nFeb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered blocking state\nFeb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered disabled state\nFeb 14 11:44:25 managed-node2 kernel: veth1: entered allmulticast mode\nFeb 14 11:44:25 managed-node2 kernel: veth1: entered promiscuous mode\nFeb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered blocking state\nFeb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered forwarding state\nFeb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered disabled state\nFeb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered blocking state\nFeb 14 11:44:25 managed-node2 kernel: podman1: port 2(veth1) entered forwarding state\nFeb 14 11:44:25 managed-node2 NetworkManager[815]: [1771087465.3341] manager: (veth1): new Veth device (/org/freedesktop/NetworkManager/Devices/7)\nFeb 14 11:44:25 managed-node2 (udev-worker)[35531]: Network interface 
NamePolicy= disabled on kernel command line.\nFeb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.28240009 -0500 EST m=+0.057969381 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610\nFeb 14 11:44:25 managed-node2 NetworkManager[815]: [1771087465.3385] device (veth1): carrier: link connected\nFeb 14 11:44:25 managed-node2 systemd[1]: Started libpod-conmon-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope.\n\u2591\u2591 Subject: A start job for unit libpod-conmon-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit libpod-conmon-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2638.\nFeb 14 11:44:25 managed-node2 systemd[1]: Started libpod-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope - libcrun container.\n\u2591\u2591 Subject: A start job for unit libpod-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit libpod-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2645.\nFeb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.422411406 -0500 EST m=+0.197980570 container init fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d)\nFeb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.426185128 -0500 EST m=+0.201754273 container start 
fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d)\nFeb 14 11:44:25 managed-node2 systemd[1]: Started libpod-conmon-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope.\n\u2591\u2591 Subject: A start job for unit libpod-conmon-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit libpod-conmon-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2652.\nFeb 14 11:44:25 managed-node2 systemd[1]: Started libpod-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope - libcrun container.\n\u2591\u2591 Subject: A start job for unit libpod-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit libpod-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2659.\nFeb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.481490168 -0500 EST m=+0.257059305 container init 48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry)\nFeb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.48418125 -0500 EST m=+0.259750473 container start 
48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry)\nFeb 14 11:44:25 managed-node2 podman[35520]: 2026-02-14 11:44:25.488330304 -0500 EST m=+0.263899381 pod start cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d (image=, name=httpd3)\nFeb 14 11:44:26 managed-node2 python3.12[35723]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:44:26 managed-node2 systemd[1]: Reload requested from client PID 35724 ('systemctl') (unit session-8.scope)...\nFeb 14 11:44:26 managed-node2 systemd[1]: Reloading...\nFeb 14 11:44:26 managed-node2 systemd-rc-local-generator[35770]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:44:26 managed-node2 systemd[1]: Reloading finished in 234 ms.\nFeb 14 11:44:26 managed-node2 python3.12[35944]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None\nFeb 14 11:44:26 managed-node2 systemd[1]: Reload requested from client PID 35947 ('systemctl') (unit session-8.scope)...\nFeb 14 11:44:26 managed-node2 systemd[1]: Reloading...\nFeb 14 11:44:27 managed-node2 systemd-rc-local-generator[36001]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:44:27 managed-node2 systemd[1]: Reloading finished in 236 ms.\nFeb 14 11:44:27 managed-node2 python3.12[36168]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False 
enabled=None force=None masked=None\nFeb 14 11:44:27 managed-node2 systemd[1]: Starting podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service - A template for running K8s workloads via podman-kube-play...\n\u2591\u2591 Subject: A start job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2666.\nFeb 14 11:44:27 managed-node2 podman[36172]: 2026-02-14 11:44:27.75184531 -0500 EST m=+0.020904811 pod stop cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d (image=, name=httpd3)\nFeb 14 11:44:28 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state.\nFeb 14 11:44:37 managed-node2 podman[36172]: time=\"2026-02-14T11:44:37-05:00\" level=warning msg=\"StopSignal SIGTERM failed to stop container httpd3-httpd3 in 10 seconds, resorting to SIGKILL\"\nFeb 14 11:44:37 managed-node2 systemd[1]: libpod-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope has successfully entered the 'dead' state.\nFeb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.775367984 -0500 EST m=+10.044427533 container died 
48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry)\nFeb 14 11:44:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay-547664d6d5241636666f9721df93db52fd32d8116fb1e7171b34d6f07db6d627-merged.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay-547664d6d5241636666f9721df93db52fd32d8116fb1e7171b34d6f07db6d627-merged.mount has successfully entered the 'dead' state.\nFeb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.808010446 -0500 EST m=+10.077069923 container cleanup 48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0)\nFeb 14 11:44:37 managed-node2 systemd[1]: libpod-conmon-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-conmon-48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11.scope has successfully entered the 'dead' state.\nFeb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.816250478 -0500 EST m=+10.085310205 container stop fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d)\nFeb 14 11:44:37 
managed-node2 systemd[1]: libpod-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope has successfully entered the 'dead' state.\nFeb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.842391823 -0500 EST m=+10.111451498 container died fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra)\nFeb 14 11:44:37 managed-node2 kernel: podman1: port 2(veth1) entered disabled state\nFeb 14 11:44:37 managed-node2 kernel: veth1 (unregistering): left allmulticast mode\nFeb 14 11:44:37 managed-node2 kernel: veth1 (unregistering): left promiscuous mode\nFeb 14 11:44:37 managed-node2 kernel: podman1: port 2(veth1) entered disabled state\nFeb 14 11:44:37 managed-node2 systemd[1]: run-netns-netns\\x2df18140bf\\x2d08ea\\x2d8813\\x2d237b\\x2dc0aab5f301f6.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit run-netns-netns\\x2df18140bf\\x2d08ea\\x2d8813\\x2d237b\\x2dc0aab5f301f6.mount has successfully entered the 'dead' state.\nFeb 14 11:44:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028-rootfs-merge.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay\\x2dcontainers-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028-rootfs-merge.mount has successfully entered the 'dead' state.\nFeb 14 11:44:37 managed-node2 
systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028-userdata-shm.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay\\x2dcontainers-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028-userdata-shm.mount has successfully entered the 'dead' state.\nFeb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.90018477 -0500 EST m=+10.169244253 container cleanup fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d)\nFeb 14 11:44:37 managed-node2 systemd[1]: libpod-conmon-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-conmon-fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028.scope has successfully entered the 'dead' state.\nFeb 14 11:44:37 managed-node2 systemd[1]: Removed slice machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice - cgroup machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice.\n\u2591\u2591 Subject: A stop job for unit machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit machine-libpod_pod_cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d.slice has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2751 and the job result is done.\nFeb 14 11:44:37 
managed-node2 podman[36172]: 2026-02-14 11:44:37.909189817 -0500 EST m=+10.178249293 pod stop cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d (image=, name=httpd3)\nFeb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.914377069 -0500 EST m=+10.183436549 pod stop cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d (image=, name=httpd3)\nFeb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.93614438 -0500 EST m=+10.205203953 container remove 48a6ac9d595d8c152f48670592fc418ec170a6dff043ac0421dabc66b0216b11 (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d, io.buildah.version=1.21.0, io.containers.autoupdate=registry, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage)\nFeb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.956877655 -0500 EST m=+10.225937174 container remove fd9e1c588d2f199669bae7925e5313c3688923494f7b1d406d34becb44b2b028 (image=, name=cdaf404db00c-infra, pod_id=cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d)\nFeb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.963972623 -0500 EST m=+10.233032101 pod remove cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d (image=, name=httpd3)\nFeb 14 11:44:37 managed-node2 podman[36172]: Pods stopped:\nFeb 14 11:44:37 managed-node2 podman[36172]: cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d\nFeb 14 11:44:37 managed-node2 podman[36172]: Pods removed:\nFeb 14 11:44:37 managed-node2 podman[36172]: cdaf404db00cfbe305e147f5389c8c34605e187184cd384397b1aef387b9ec0d\nFeb 14 11:44:37 managed-node2 podman[36172]: Secrets removed:\nFeb 14 11:44:37 managed-node2 podman[36172]: Volumes removed:\nFeb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.96437851 -0500 EST m=+10.233438132 network create abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 
(name=podman-default-kube-network, type=bridge)\nFeb 14 11:44:37 managed-node2 podman[36172]: 2026-02-14 11:44:37.981973764 -0500 EST m=+10.251033256 container create e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:44:37 managed-node2 systemd[1]: Created slice machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice - cgroup machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice.\n\u2591\u2591 Subject: A start job for unit machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2755.\nFeb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.011860762 -0500 EST m=+10.280920334 container create b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac (image=, name=891e6927a022-infra, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.015980365 -0500 EST m=+10.285039839 pod create 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5 (image=, name=httpd3)\nFeb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.04217597 -0500 EST m=+10.311235469 container create 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, 
io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, app=test)\nFeb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.042466927 -0500 EST m=+10.311526431 container restart e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:44:38 managed-node2 systemd[1]: Started libpod-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b.scope - libcrun container.\n\u2591\u2591 Subject: A start job for unit libpod-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit libpod-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2761.\nFeb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.081629628 -0500 EST m=+10.350689249 container init e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.084933774 -0500 EST m=+10.353993248 container start e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered blocking state\nFeb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered disabled state\nFeb 14 11:44:38 managed-node2 kernel: veth1: entered 
allmulticast mode\nFeb 14 11:44:38 managed-node2 kernel: veth1: entered promiscuous mode\nFeb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered blocking state\nFeb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered forwarding state\nFeb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.018569499 -0500 EST m=+10.287629154 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610\nFeb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered disabled state\nFeb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered blocking state\nFeb 14 11:44:38 managed-node2 kernel: podman1: port 2(veth1) entered forwarding state\nFeb 14 11:44:38 managed-node2 (udev-worker)[36203]: Network interface NamePolicy= disabled on kernel command line.\nFeb 14 11:44:38 managed-node2 NetworkManager[815]: [1771087478.1092] manager: (veth1): new Veth device (/org/freedesktop/NetworkManager/Devices/8)\nFeb 14 11:44:38 managed-node2 NetworkManager[815]: [1771087478.1101] device (veth1): carrier: link connected\nFeb 14 11:44:38 managed-node2 systemd[1]: Started libpod-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac.scope - libcrun container.\n\u2591\u2591 Subject: A start job for unit libpod-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit libpod-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2767.\nFeb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.178418528 -0500 EST m=+10.447478108 container init b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac (image=, name=891e6927a022-infra, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, 
PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.180727983 -0500 EST m=+10.449787532 container start b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac (image=, name=891e6927a022-infra, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:44:38 managed-node2 systemd[1]: Started libpod-781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc.scope - libcrun container.\n\u2591\u2591 Subject: A start job for unit libpod-781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit libpod-781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2774.\nFeb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.213492942 -0500 EST m=+10.482552472 container init 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service, app=test)\nFeb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.215831642 -0500 EST m=+10.484891319 container start 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, io.buildah.version=1.21.0, 
io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage)\nFeb 14 11:44:38 managed-node2 podman[36172]: 2026-02-14 11:44:38.220001057 -0500 EST m=+10.489060638 pod start 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5 (image=, name=httpd3)\nFeb 14 11:44:38 managed-node2 podman[36172]: Pod:\nFeb 14 11:44:38 managed-node2 podman[36172]: 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5\nFeb 14 11:44:38 managed-node2 podman[36172]: Container:\nFeb 14 11:44:38 managed-node2 podman[36172]: 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc\nFeb 14 11:44:38 managed-node2 systemd[1]: Started podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service - A template for running K8s workloads via podman-kube-play.\n\u2591\u2591 Subject: A start job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2666.\nFeb 14 11:44:38 managed-node2 sudo[36456]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwfzisldzdwyyefuftqqnwcjguwtzerm ; /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087478.5294943-15719-277869054228043/AnsiballZ_command.py'\nFeb 14 11:44:38 managed-node2 sudo[36456]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:44:38 managed-node2 python3.12[36460]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True 
strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:44:38 managed-node2 systemd[29195]: Started podman-36468.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 127.\nFeb 14 11:44:38 managed-node2 sudo[36456]: pam_unix(sudo:session): session closed for user podman_basic_user\nFeb 14 11:44:39 managed-node2 python3.12[36630]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:44:39 managed-node2 python3.12[36793]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:44:40 managed-node2 sudo[37006]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsnubttuzlaozlpnbdfvbxosrbpdzdqj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087479.9051843-15774-102000498703581/AnsiballZ_command.py'\nFeb 14 11:44:40 managed-node2 sudo[37006]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:44:40 managed-node2 python3.12[37009]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail\n systemctl --user list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd1[.]yml[.]service[ ]+loaded[ ]+active '\n _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True 
argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:44:40 managed-node2 sudo[37006]: pam_unix(sudo:session): session closed for user podman_basic_user\nFeb 14 11:44:40 managed-node2 python3.12[37167]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail\n systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active '\n _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:44:41 managed-node2 python3.12[37325]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail\n systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active '\n _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:44:41 managed-node2 python3.12[37483]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:44:42 managed-node2 python3.12[37641]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False 
body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:44:42 managed-node2 python3.12[37797]: ansible-ansible.legacy.command Invoked with _raw_params=ls -alrtF /tmp/lsr_no9h_drm_podman/httpd1-create _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:44:42 managed-node2 python3.12[37953]: ansible-ansible.legacy.command Invoked with _raw_params=ls -alrtF /tmp/lsr_no9h_drm_podman/httpd2-create _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:44:43 managed-node2 python3.12[38109]: ansible-ansible.legacy.command Invoked with _raw_params=ls -alrtF /tmp/lsr_no9h_drm_podman/httpd3-create _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:44:45 managed-node2 python3.12[38420]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:44:46 managed-node2 python3.12[38581]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:44:48 managed-node2 python3.12[38738]: ansible-ansible.legacy.dnf Invoked with name=['firewalld'] 
state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None\nFeb 14 11:44:49 managed-node2 python3.12[38894]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None\nFeb 14 11:44:49 managed-node2 python3.12[39051]: ansible-ansible.legacy.systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None\nFeb 14 11:44:50 managed-node2 python3.12[39208]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] ipset_options={} protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nFeb 14 11:44:51 managed-node2 python3.12[39363]: ansible-ansible.legacy.dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False 
update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None\nFeb 14 11:44:52 managed-node2 python3.12[39519]: ansible-ansible.legacy.dnf Invoked with name=['grubby'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None\nFeb 14 11:44:53 managed-node2 python3.12[39675]: ansible-ansible.legacy.dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None\nFeb 14 11:44:54 managed-node2 python3.12[39831]: ansible-setup Invoked with filter=['ansible_selinux'] gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nFeb 14 11:44:56 managed-node2 python3.12[40028]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nFeb 14 11:44:56 managed-node2 python3.12[40183]: ansible-fedora.linux_system_roles.selinux_modules_facts 
Invoked\nFeb 14 11:44:59 managed-node2 python3.12[40338]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nFeb 14 11:45:00 managed-node2 python3.12[40494]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:45:00 managed-node2 python3.12[40651]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:01 managed-node2 python3.12[40807]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:02 managed-node2 python3.12[40963]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:02 managed-node2 python3.12[41119]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nFeb 14 11:45:03 managed-node2 python3.12[41274]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:03 managed-node2 python3.12[41429]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:04 managed-node2 sudo[41634]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmkqlusgchkybllnzalektonsmlqmsrd ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087503.961329-16726-18565808136635/AnsiballZ_podman_image.py'\nFeb 14 11:45:04 managed-node2 sudo[41634]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:45:04 managed-node2 systemd[29195]: Started podman-41638.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 131.\nFeb 14 11:45:04 managed-node2 systemd[29195]: Started podman-41645.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 135.\nFeb 14 11:45:04 managed-node2 systemd[29195]: Started podman-41652.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 
Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 139.\nFeb 14 11:45:04 managed-node2 systemd[29195]: Started podman-41659.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 143.\nFeb 14 11:45:04 managed-node2 systemd[29195]: Started podman-41667.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 147.\nFeb 14 11:45:05 managed-node2 systemd[29195]: Started podman-41675.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 151.\nFeb 14 11:45:05 managed-node2 systemd[29195]: Started podman-41682.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 155.\nFeb 14 11:45:05 managed-node2 systemd[29195]: Started podman-41689.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 
\n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 159.\nFeb 14 11:45:05 managed-node2 sudo[41634]: pam_unix(sudo:session): session closed for user podman_basic_user\nFeb 14 11:45:05 managed-node2 python3.12[41850]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:45:05 managed-node2 python3.12[42007]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:06 managed-node2 python3.12[42162]: ansible-ansible.legacy.stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:45:06 managed-node2 python3.12[42240]: ansible-ansible.legacy.file Invoked with owner=podman_basic_user group=3001 mode=0644 dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _original_basename=.n4pqsi5o recurse=False state=file path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:06 managed-node2 sudo[42445]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo 
BECOME-SUCCESS-gnxwhijhmtcundqyezuqfudfknggmlfl ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087506.773379-16793-188090990846597/AnsiballZ_podman_play.py'\nFeb 14 11:45:06 managed-node2 sudo[42445]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:45:07 managed-node2 python3.12[42448]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nFeb 14 11:45:07 managed-node2 systemd[29195]: Started podman-42456.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 163.\nFeb 14 11:45:07 managed-node2 systemd[29195]: Created slice user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice - cgroup user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 167.\nFeb 14 11:45:07 managed-node2 python3.12[42448]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube 
--start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nFeb 14 11:45:07 managed-node2 python3.12[42448]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: \nFeb 14 11:45:07 managed-node2 python3.12[42448]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2026-02-14T11:45:07-05:00\" level=info msg=\"/bin/podman filtering at log level debug\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n time=\"2026-02-14T11:45:07-05:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2026-02-14T11:45:07-05:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"systemd-logind: Unknown object '/'.\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Using transient store: false\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n 
time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2026-02-14T11:45:07-05:00\" 
level=debug msg=\"Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/crun\\\"\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network b45c64e00bfe2c0071d8275383afdd1283c4d60a5ceed7d974f55458a724e831 bridge podman1 2026-02-14 11:43:46.509060406 -0500 EST [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Created cgroup path user.slice/user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice for parent user.slice and name libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Created cgroup user.slice/user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice\"\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Got pod cgroup as user.slice/user-3001.slice/user@3001.service/user.slice/user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice\"\n Error: adding pod to state: name \"httpd1\" is in use: pod already exists\n time=\"2026-02-14T11:45:07-05:00\" level=debug msg=\"Shutting down engines\"\n time=\"2026-02-14T11:45:07-05:00\" level=info msg=\"Received shutdown.Stop(), terminating!\" PID=42456\nFeb 14 11:45:07 managed-node2 python3.12[42448]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125\nFeb 14 11:45:07 managed-node2 sudo[42445]: pam_unix(sudo:session): session closed for user podman_basic_user\nFeb 14 11:45:08 managed-node2 python3.12[42617]: ansible-getent 
Invoked with database=passwd key=root fail_key=False service=None split=None\nFeb 14 11:45:08 managed-node2 python3.12[42773]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:45:09 managed-node2 python3.12[42930]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:09 managed-node2 python3.12[43086]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:10 managed-node2 python3.12[43241]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:11 managed-node2 podman[43419]: 2026-02-14 11:45:11.367298586 -0500 EST m=+0.482134491 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610\nFeb 14 11:45:11 managed-node2 python3.12[43609]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:45:12 managed-node2 
python3.12[43766]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:12 managed-node2 python3.12[43921]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:45:12 managed-node2 python3.12[43999]: ansible-ansible.legacy.file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd2.yml _original_basename=.lv3t88g0 recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd2.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:13 managed-node2 python3.12[44154]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nFeb 14 11:45:13 managed-node2 podman[44161]: 2026-02-14 11:45:13.466695843 -0500 EST m=+0.013716044 network create abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 (name=podman-default-kube-network, 
type=bridge)\nFeb 14 11:45:13 managed-node2 systemd[1]: Created slice machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice - cgroup machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice.\n\u2591\u2591 Subject: A start job for unit machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2781.\nFeb 14 11:45:13 managed-node2 python3.12[44154]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml\nFeb 14 11:45:13 managed-node2 python3.12[44154]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: \nFeb 14 11:45:13 managed-node2 python3.12[44154]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2026-02-14T11:45:13-05:00\" level=info msg=\"/usr/bin/podman filtering at log level debug\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2026-02-14T11:45:13-05:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2026-02-14T11:45:13-05:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Using run 
root /run/containers/storage\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Using tmp dir /run/libpod\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Using transient store: false\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\n time=\"2026-02-14T11:45:13-05:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Initializing event backend journald\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Configured OCI runtime 
ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/crun\\\"\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 bridge podman1 2026-02-14 11:42:18.681480354 -0500 EST [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice for parent machine.slice and name libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Created cgroup 
machine.slice/machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice\"\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_7c894f51075a4997df9c365701af555c58cb9c561c56b21bbe388780b79b7066.slice\"\n Error: adding pod to state: name \"httpd2\" is in use: pod already exists\n time=\"2026-02-14T11:45:13-05:00\" level=debug msg=\"Shutting down engines\"\nFeb 14 11:45:13 managed-node2 python3.12[44154]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125\nFeb 14 11:45:14 managed-node2 python3.12[44322]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:45:15 managed-node2 python3.12[44479]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:15 managed-node2 python3.12[44636]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:16 managed-node2 python3.12[44791]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None 
attributes=None\nFeb 14 11:45:17 managed-node2 podman[44968]: 2026-02-14 11:45:17.151065988 -0500 EST m=+0.289585155 image pull 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610\nFeb 14 11:45:17 managed-node2 python3.12[45159]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:45:18 managed-node2 python3.12[45316]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:18 managed-node2 python3.12[45471]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:45:18 managed-node2 python3.12[45549]: ansible-ansible.legacy.file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd3.yml _original_basename=.vboa9hyz recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd3.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:19 managed-node2 python3.12[45704]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None 
username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nFeb 14 11:45:19 managed-node2 podman[45712]: 2026-02-14 11:45:19.30811819 -0500 EST m=+0.014177431 network create abf306ea2c779c5e218f6b34956c8ea6d05504ba0b442f67ab8bc2f394035cd5 (name=podman-default-kube-network, type=bridge)\nFeb 14 11:45:19 managed-node2 systemd[1]: Created slice machine-libpod_pod_b0f4498c0b1a6a00087d92e451e47188438fd77ff1718caf8b2d35d83725f547.slice - cgroup machine-libpod_pod_b0f4498c0b1a6a00087d92e451e47188438fd77ff1718caf8b2d35d83725f547.slice.\n\u2591\u2591 Subject: A start job for unit machine-libpod_pod_b0f4498c0b1a6a00087d92e451e47188438fd77ff1718caf8b2d35d83725f547.slice has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit machine-libpod_pod_b0f4498c0b1a6a00087d92e451e47188438fd77ff1718caf8b2d35d83725f547.slice has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2787.\nFeb 14 11:45:20 managed-node2 sudo[45924]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grsuheqpdodqrzzzbcamsddzhcuitzot ; /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087519.8040786-17251-92408683234572/AnsiballZ_command.py'\nFeb 14 11:45:20 managed-node2 sudo[45924]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:45:20 managed-node2 python3.12[45927]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:20 managed-node2 systemd[29195]: Started 
podman-45934.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 171.\nFeb 14 11:45:20 managed-node2 sudo[45924]: pam_unix(sudo:session): session closed for user podman_basic_user\nFeb 14 11:45:20 managed-node2 python3.12[46099]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:21 managed-node2 python3.12[46262]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:21 managed-node2 sudo[46475]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvrrappufczsldymgagfxtigzpflzerg ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087521.1786187-17311-51783948001152/AnsiballZ_command.py'\nFeb 14 11:45:21 managed-node2 sudo[46475]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:45:21 managed-node2 python3.12[46478]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail\n systemctl --user list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd1[.]yml[.]service[ ]+loaded[ ]+active '\n _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:21 managed-node2 sudo[46475]: pam_unix(sudo:session): session 
closed for user podman_basic_user\nFeb 14 11:45:21 managed-node2 python3.12[46636]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail\n systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active '\n _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:22 managed-node2 python3.12[46794]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail\n systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active '\n _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:22 managed-node2 python3.12[46952]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:23 managed-node2 python3.12[47108]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True 
unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:23 managed-node2 python3.12[47264]: ansible-ansible.legacy.uri Invoked with url=http://localhost:15003/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None ciphers=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:25 managed-node2 python3.12[47575]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:26 managed-node2 python3.12[47736]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:45:28 managed-node2 python3.12[47893]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nFeb 14 11:45:29 managed-node2 python3.12[48050]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:45:29 managed-node2 python3.12[48207]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids podman_basic_user _uses_shell=False 
expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:30 managed-node2 python3.12[48363]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:30 managed-node2 python3.12[48519]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:31 managed-node2 python3.12[48675]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:45:31 managed-node2 sudo[48882]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuoolknzrliqkfqtcydhnfthpyqbeewx ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087531.3850875-17892-266440434332148/AnsiballZ_systemd.py'\nFeb 14 11:45:31 managed-node2 sudo[48882]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:45:31 managed-node2 python3.12[48885]: ansible-systemd Invoked with name=podman-kube@-home-podman_basic_user-.config-containers-ansible\\x2dkubernetes.d-httpd1.yml.service scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None\nFeb 14 11:45:31 managed-node2 systemd[29195]: Reload requested from client PID 48888 ('systemctl')...\nFeb 14 11:45:31 managed-node2 systemd[29195]: Reloading...\nFeb 14 11:45:31 managed-node2 systemd[29195]: Reloading finished in 
65 ms.\nFeb 14 11:45:31 managed-node2 systemd[29195]: Stopping podman-kube@-home-podman_basic_user-.config-containers-ansible\\x2dkubernetes.d-httpd1.yml.service - A template for running K8s workloads via podman-kube-play...\n\u2591\u2591 Subject: A stop job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 175.\nFeb 14 11:45:42 managed-node2 podman[48899]: time=\"2026-02-14T11:45:42-05:00\" level=warning msg=\"StopSignal SIGTERM failed to stop container httpd1-httpd1 in 10 seconds, resorting to SIGKILL\"\nFeb 14 11:45:42 managed-node2 conmon[31412]: conmon 6da07bcbaedcde56d4e1 : Failed to open cgroups file: /sys/fs/cgroup/user.slice/user-3001.slice/user@3001.service/user.slice/user-libpod_pod_39ddb6583f9470ab70075d904c51c89733c2027919e08f61d9b513b5c72bfcf1.slice/libpod-6da07bcbaedcde56d4e1f6d376605d0690fe64cd6f8d4ca51d42712f887d41cc.scope/container/memory.events\nFeb 14 11:45:42 managed-node2 kernel: podman1: port 1(veth0) entered disabled state\nFeb 14 11:45:42 managed-node2 kernel: veth0 (unregistering): left allmulticast mode\nFeb 14 11:45:42 managed-node2 kernel: veth0 (unregistering): left promiscuous mode\nFeb 14 11:45:42 managed-node2 kernel: podman1: port 1(veth0) entered disabled state\nFeb 14 11:45:42 managed-node2 systemd[29195]: Removed slice user-libpod_pod_39ddb6583f9470ab70075d904c51c89733c2027919e08f61d9b513b5c72bfcf1.slice - cgroup user-libpod_pod_39ddb6583f9470ab70075d904c51c89733c2027919e08f61d9b513b5c72bfcf1.slice.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 176 and the job result is done.\nFeb 14 11:45:42 managed-node2 
conmon[31368]: conmon 5b8a8d5d0430c64be456 : Failed to open cgroups file: /sys/fs/cgroup/user.slice/user-3001.slice/user@3001.service/user.slice/libpod-5b8a8d5d0430c64be4560d51ae0a7e88111b69a3710627acb43738bf92a81739.scope/container/memory.events\nFeb 14 11:45:42 managed-node2 podman[48899]: Pods stopped:\nFeb 14 11:45:42 managed-node2 podman[48899]: 39ddb6583f9470ab70075d904c51c89733c2027919e08f61d9b513b5c72bfcf1\nFeb 14 11:45:42 managed-node2 podman[48899]: Pods removed:\nFeb 14 11:45:42 managed-node2 podman[48899]: 39ddb6583f9470ab70075d904c51c89733c2027919e08f61d9b513b5c72bfcf1\nFeb 14 11:45:42 managed-node2 podman[48899]: Secrets removed:\nFeb 14 11:45:42 managed-node2 podman[48899]: Volumes removed:\nFeb 14 11:45:42 managed-node2 systemd[29195]: Stopped podman-kube@-home-podman_basic_user-.config-containers-ansible\\x2dkubernetes.d-httpd1.yml.service - A template for running K8s workloads via podman-kube-play.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 175 and the job result is done.\nFeb 14 11:45:42 managed-node2 systemd[29195]: podman-kube@-home-podman_basic_user-.config-containers-ansible\\x2dkubernetes.d-httpd1.yml.service: Consumed 621ms CPU time, 64.1M memory peak.\n\u2591\u2591 Subject: Resources consumed by unit runtime\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit UNIT completed and consumed the indicated resources.\nFeb 14 11:45:42 managed-node2 sudo[48882]: pam_unix(sudo:session): session closed for user podman_basic_user\nFeb 14 11:45:42 managed-node2 python3.12[49104]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_mime=True get_attributes=True 
checksum_algorithm=sha1\nFeb 14 11:45:43 managed-node2 sudo[49312]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zolhlvibkwbjlkciijuxwwzhkfjcpmeg ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087542.9204967-18233-260601492252058/AnsiballZ_podman_play.py'\nFeb 14 11:45:43 managed-node2 sudo[49312]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:45:43 managed-node2 python3.12[49315]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nFeb 14 11:45:43 managed-node2 python3.12[49315]: ansible-containers.podman.podman_play version: 5.6.0, kube file /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nFeb 14 11:45:43 managed-node2 systemd[29195]: Started podman-49322.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 180.\nFeb 14 11:45:43 managed-node2 python3.12[49315]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman kube play --down /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nFeb 14 11:45:43 managed-node2 python3.12[49315]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped:\n Pods 
removed:\n Secrets removed:\n Volumes removed:\nFeb 14 11:45:43 managed-node2 python3.12[49315]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: \nFeb 14 11:45:43 managed-node2 python3.12[49315]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nFeb 14 11:45:43 managed-node2 sudo[49312]: pam_unix(sudo:session): session closed for user podman_basic_user\nFeb 14 11:45:43 managed-node2 python3.12[49484]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:44 managed-node2 python3.12[49639]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nFeb 14 11:45:45 managed-node2 python3.12[49795]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:45:46 managed-node2 python3.12[49952]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:45:46 managed-node2 python3.12[50108]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None\nFeb 14 11:45:46 managed-node2 systemd[1]: Reload requested from client PID 50111 ('systemctl') (unit session-8.scope)...\nFeb 14 11:45:46 managed-node2 
systemd[1]: Reloading...\nFeb 14 11:45:46 managed-node2 systemd-rc-local-generator[50150]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:45:46 managed-node2 systemd[1]: Reloading finished in 246 ms.\nFeb 14 11:45:47 managed-node2 systemd[1]: Stopping podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service - A template for running K8s workloads via podman-kube-play...\n\u2591\u2591 Subject: A stop job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2794.\nFeb 14 11:45:47 managed-node2 podman[50178]: 2026-02-14 11:45:47.057544426 -0500 EST m=+0.022922833 pod stop dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3 (image=, name=httpd2)\nFeb 14 11:45:57 managed-node2 podman[50178]: time=\"2026-02-14T11:45:57-05:00\" level=warning msg=\"StopSignal SIGTERM failed to stop container httpd2-httpd2 in 10 seconds, resorting to SIGKILL\"\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.075883411 -0500 EST m=+10.041261947 container stop 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z)\nFeb 14 11:45:57 managed-node2 systemd[1]: libpod-0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: 
systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528.scope has successfully entered the 'dead' state.\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.104222413 -0500 EST m=+10.069600901 container died 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0)\nFeb 14 11:45:57 managed-node2 systemd[1]: var-lib-containers-storage-overlay-7d74a1e1f62f6341c88dc81407340e28d99c68a78b273dc960b703e2edb884e7-merged.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay-7d74a1e1f62f6341c88dc81407340e28d99c68a78b273dc960b703e2edb884e7-merged.mount has successfully entered the 'dead' state.\nFeb 14 11:45:57 managed-node2 systemd[5488]: Created slice background.slice - User Background Tasks Slice.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 20.\nFeb 14 11:45:57 managed-node2 systemd[5488]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...\n\u2591\u2591 Subject: A start job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has begun 
execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 19.\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.136412765 -0500 EST m=+10.101791145 container cleanup 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0)\nFeb 14 11:45:57 managed-node2 systemd[5488]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 19.\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.148326182 -0500 EST m=+10.113704699 container stop 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:45:57 managed-node2 systemd[1]: libpod-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0.scope has successfully entered the 'dead' state.\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.174172879 -0500 EST m=+10.139551469 container 
died 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:45:57 managed-node2 kernel: podman1: port 1(veth0) entered disabled state\nFeb 14 11:45:57 managed-node2 kernel: veth0 (unregistering): left allmulticast mode\nFeb 14 11:45:57 managed-node2 kernel: veth0 (unregistering): left promiscuous mode\nFeb 14 11:45:57 managed-node2 kernel: podman1: port 1(veth0) entered disabled state\nFeb 14 11:45:57 managed-node2 systemd[1]: run-netns-netns\\x2de4ab7e41\\x2da9bf\\x2d3ef1\\x2d2d12\\x2ddbe2c8a39368.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit run-netns-netns\\x2de4ab7e41\\x2da9bf\\x2d3ef1\\x2d2d12\\x2ddbe2c8a39368.mount has successfully entered the 'dead' state.\nFeb 14 11:45:57 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0-rootfs-merge.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay\\x2dcontainers-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0-rootfs-merge.mount has successfully entered the 'dead' state.\nFeb 14 11:45:57 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0-userdata-shm.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit 
var-lib-containers-storage-overlay\\x2dcontainers-7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0-userdata-shm.mount has successfully entered the 'dead' state.\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.23444651 -0500 EST m=+10.199825019 container cleanup 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:45:57 managed-node2 systemd[1]: Removed slice machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice - cgroup machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice.\n\u2591\u2591 Subject: A stop job for unit machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit machine-libpod_pod_dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3.slice has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2795 and the job result is done.\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.242083364 -0500 EST m=+10.207461866 pod stop dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3 (image=, name=httpd2)\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.264794903 -0500 EST m=+10.230173312 container remove 0c001377f1f1d6bc97251399428e770993bec7821fd3622a03d880ee81cba528 (image=quay.io/libpod/testimage:20210610, name=httpd2-httpd2, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, 
PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service, app=test)\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.285758663 -0500 EST m=+10.251137072 container remove 7924bc6de5cb1d79ce6d5e40aeed060bdf6cc84b1b99889d2ace2defe92cd9b0 (image=, name=dafbc399bcc6-infra, pod_id=dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.293078382 -0500 EST m=+10.258456765 pod remove dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3 (image=, name=httpd2)\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.29594165 -0500 EST m=+10.261320164 container kill c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:45:57 managed-node2 systemd[1]: libpod-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3.scope has successfully entered the 'dead' state.\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.30088823 -0500 EST m=+10.266266621 container died c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:45:57 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3-rootfs-merge.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 
Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay\\x2dcontainers-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3-rootfs-merge.mount has successfully entered the 'dead' state.\nFeb 14 11:45:57 managed-node2 podman[50178]: 2026-02-14 11:45:57.355078722 -0500 EST m=+10.320457131 container remove c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3 (image=, name=cf46bf67adce-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service)\nFeb 14 11:45:57 managed-node2 podman[50178]: Pods stopped:\nFeb 14 11:45:57 managed-node2 podman[50178]: dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3\nFeb 14 11:45:57 managed-node2 podman[50178]: Pods removed:\nFeb 14 11:45:57 managed-node2 podman[50178]: dafbc399bcc697918966759b7a4e3c23bedc1702b0a1828b97d2d8a1086775e3\nFeb 14 11:45:57 managed-node2 podman[50178]: Secrets removed:\nFeb 14 11:45:57 managed-node2 podman[50178]: Volumes removed:\nFeb 14 11:45:57 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has successfully entered the 'dead' state.\nFeb 14 11:45:57 managed-node2 systemd[1]: Stopped podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service - A template for running K8s workloads via podman-kube-play.\n\u2591\u2591 Subject: A stop job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit 
podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2794 and the job result is done.\nFeb 14 11:45:57 managed-node2 python3.12[50381]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:45:58 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3-userdata-shm.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay\\x2dcontainers-c38263ce84de06922897531accc5892f008d30a900eaecd37d8438bc2a7265e3-userdata-shm.mount has successfully entered the 'dead' state.\nFeb 14 11:45:58 managed-node2 python3.12[50538]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nFeb 14 11:45:58 managed-node2 python3.12[50538]: ansible-containers.podman.podman_play version: 5.6.0, kube file /etc/containers/ansible-kubernetes.d/httpd2.yml\nFeb 14 11:45:58 managed-node2 python3.12[50538]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /etc/containers/ansible-kubernetes.d/httpd2.yml\nFeb 14 11:45:58 managed-node2 python3.12[50538]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped:\n 
Pods removed:\n Secrets removed:\n Volumes removed:\nFeb 14 11:45:58 managed-node2 python3.12[50538]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: \nFeb 14 11:45:58 managed-node2 python3.12[50538]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nFeb 14 11:45:58 managed-node2 python3.12[50706]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:45:59 managed-node2 python3.12[50861]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:00 managed-node2 python3.12[51018]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:01 managed-node2 python3.12[51174]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None\nFeb 14 11:46:01 managed-node2 systemd[1]: Reload requested from client PID 51177 ('systemctl') (unit session-8.scope)...\nFeb 14 11:46:01 managed-node2 systemd[1]: Reloading...\nFeb 14 11:46:01 managed-node2 systemd-rc-local-generator[51215]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:46:01 managed-node2 systemd[1]: Reloading finished in 228 ms.\nFeb 14 11:46:01 managed-node2 systemd[1]: Stopping 
podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service - A template for running K8s workloads via podman-kube-play...\n\u2591\u2591 Subject: A stop job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2798.\nFeb 14 11:46:01 managed-node2 podman[51244]: 2026-02-14 11:46:01.595932777 -0500 EST m=+0.023865329 pod stop 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5 (image=, name=httpd3)\nFeb 14 11:46:11 managed-node2 podman[51244]: time=\"2026-02-14T11:46:11-05:00\" level=warning msg=\"StopSignal SIGTERM failed to stop container httpd3-httpd3 in 10 seconds, resorting to SIGKILL\"\nFeb 14 11:46:11 managed-node2 systemd[1]: libpod-781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc.scope has successfully entered the 'dead' state.\nFeb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.624723575 -0500 EST m=+10.052656258 container died 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry)\nFeb 14 11:46:11 managed-node2 systemd[1]: 
var-lib-containers-storage-overlay-aa73ec333742c02f48305550cfb2c13cad6c09f505663b91204e062523eb0502-merged.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay-aa73ec333742c02f48305550cfb2c13cad6c09f505663b91204e062523eb0502-merged.mount has successfully entered the 'dead' state.\nFeb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.657360503 -0500 EST m=+10.085293056 container cleanup 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage)\nFeb 14 11:46:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:46:11 managed-node2 systemd[1]: libpod-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit libpod-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac.scope has successfully entered the 'dead' state.\nFeb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.667356994 -0500 EST m=+10.095289630 container died 
b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac (image=, name=891e6927a022-infra, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:46:11 managed-node2 kernel: podman1: port 2(veth1) entered disabled state\nFeb 14 11:46:11 managed-node2 kernel: veth1 (unregistering): left allmulticast mode\nFeb 14 11:46:11 managed-node2 kernel: veth1 (unregistering): left promiscuous mode\nFeb 14 11:46:11 managed-node2 kernel: podman1: port 2(veth1) entered disabled state\nFeb 14 11:46:11 managed-node2 systemd[1]: run-p33919-i33920.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit run-p33919-i33920.scope has successfully entered the 'dead' state.\nFeb 14 11:46:11 managed-node2 NetworkManager[815]: [1771087571.6978] device (podman1): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')\nFeb 14 11:46:11 managed-node2 systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service...\n\u2591\u2591 Subject: A start job for unit NetworkManager-dispatcher.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit NetworkManager-dispatcher.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2799.\nFeb 14 11:46:11 managed-node2 systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service.\n\u2591\u2591 Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit NetworkManager-dispatcher.service has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier 
is 2799.\nFeb 14 11:46:11 managed-node2 systemd[1]: run-netns-netns\\x2d21d381d5\\x2dd108\\x2d36c8\\x2d3925\\x2df88b1712188a.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit run-netns-netns\\x2d21d381d5\\x2dd108\\x2d36c8\\x2d3925\\x2df88b1712188a.mount has successfully entered the 'dead' state.\nFeb 14 11:46:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac-rootfs-merge.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay\\x2dcontainers-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac-rootfs-merge.mount has successfully entered the 'dead' state.\nFeb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.780271382 -0500 EST m=+10.208203998 container cleanup b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac (image=, name=891e6927a022-infra, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:46:11 managed-node2 systemd[1]: Removed slice machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice - cgroup machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice.\n\u2591\u2591 Subject: A stop job for unit machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit 
machine-libpod_pod_891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5.slice has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2878 and the job result is done.\nFeb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.807588953 -0500 EST m=+10.235521507 container remove 781e6cc74a8e0563f34583e0d4651ef219744842941061bc37ede4e890c83abc (image=quay.io/libpod/testimage:20210610, name=httpd3-httpd3, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, app=test, created_at=2021-06-10T18:55:36Z, created_by=test/system/build-testimage, io.buildah.version=1.21.0, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.829454691 -0500 EST m=+10.257387246 container remove b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac (image=, name=891e6927a022-infra, pod_id=891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.836846027 -0500 EST m=+10.264778551 pod remove 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5 (image=, name=httpd3)\nFeb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.840052556 -0500 EST m=+10.267985686 container kill e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:46:11 managed-node2 systemd[1]: libpod-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit 
libpod-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b.scope has successfully entered the 'dead' state.\nFeb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.845916667 -0500 EST m=+10.273849494 container died e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:46:11 managed-node2 podman[51244]: 2026-02-14 11:46:11.896294322 -0500 EST m=+10.324226877 container remove e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b (image=, name=1cda2b92d774-service, PODMAN_SYSTEMD_UNIT=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service)\nFeb 14 11:46:11 managed-node2 podman[51244]: Pods stopped:\nFeb 14 11:46:11 managed-node2 podman[51244]: 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5\nFeb 14 11:46:11 managed-node2 podman[51244]: Pods removed:\nFeb 14 11:46:11 managed-node2 podman[51244]: 891e6927a022991e8dc457e4835cf812a1b2c17c295bada3b4f4aa8509833ea5\nFeb 14 11:46:11 managed-node2 podman[51244]: Secrets removed:\nFeb 14 11:46:11 managed-node2 podman[51244]: Volumes removed:\nFeb 14 11:46:11 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has successfully entered the 'dead' state.\nFeb 14 11:46:11 managed-node2 systemd[1]: Stopped podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service - A template for running K8s workloads via podman-kube-play.\n\u2591\u2591 Subject: A stop job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 
Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2798 and the job result is done.\nFeb 14 11:46:12 managed-node2 python3.12[51460]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:12 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:46:12 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac-userdata-shm.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay\\x2dcontainers-b5c044a364804ae29d25f9402f25ca5f58352b3e450e2168ed8a6236c67058ac-userdata-shm.mount has successfully entered the 'dead' state.\nFeb 14 11:46:12 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b-rootfs-merge.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay\\x2dcontainers-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b-rootfs-merge.mount has successfully entered the 'dead' state.\nFeb 14 11:46:12 managed-node2 systemd[1]: 
var-lib-containers-storage-overlay\\x2dcontainers-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b-userdata-shm.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay\\x2dcontainers-e2a837a58f84bf8d3dd456008dab8ef789ff35b156c5ab091e7101a9b49d228b-userdata-shm.mount has successfully entered the 'dead' state.\nFeb 14 11:46:12 managed-node2 python3.12[51617]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nFeb 14 11:46:12 managed-node2 python3.12[51617]: ansible-containers.podman.podman_play version: 5.6.0, kube file /etc/containers/ansible-kubernetes.d/httpd3.yml\nFeb 14 11:46:12 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:46:13 managed-node2 python3.12[51785]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None 
seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:46:14 managed-node2 python3.12[51940]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None\nFeb 14 11:46:14 managed-node2 python3.12[52096]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:15 managed-node2 sudo[52303]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqishaqoeeauqjityygmflzpaqldnowu ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087574.7806642-19332-58185055903523/AnsiballZ_podman_container_info.py'\nFeb 14 11:46:15 managed-node2 sudo[52303]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:46:15 managed-node2 python3.12[52306]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None\nFeb 14 11:46:15 managed-node2 systemd[29195]: Started podman-52307.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 184.\nFeb 14 11:46:15 managed-node2 sudo[52303]: pam_unix(sudo:session): session closed for user podman_basic_user\nFeb 14 11:46:15 managed-node2 sudo[52519]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hangkhtqwiganrwnkxxhsiofgprcvkkx ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087575.4226453-19357-159026990468623/AnsiballZ_command.py'\nFeb 14 11:46:15 managed-node2 sudo[52519]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:46:15 managed-node2 python3.12[52522]: 
ansible-ansible.legacy.command Invoked with _raw_params=podman network ls -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:15 managed-node2 systemd[29195]: Started podman-52523.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 188.\nFeb 14 11:46:15 managed-node2 sudo[52519]: pam_unix(sudo:session): session closed for user podman_basic_user\nFeb 14 11:46:16 managed-node2 sudo[52734]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gomapidfsfoypmttsyusbxhkhjdpprqy ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087575.9556694-19387-69229158351339/AnsiballZ_command.py'\nFeb 14 11:46:16 managed-node2 sudo[52734]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:46:16 managed-node2 python3.12[52737]: ansible-ansible.legacy.command Invoked with _raw_params=podman secret ls -n -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:16 managed-node2 systemd[29195]: Started podman-52738.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 192.\nFeb 14 11:46:16 managed-node2 sudo[52734]: pam_unix(sudo:session): session closed for user podman_basic_user\nFeb 14 11:46:16 managed-node2 
python3.12[52901]: ansible-ansible.legacy.command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None\nFeb 14 11:46:16 managed-node2 systemd[1]: Stopping user@3001.service - User Manager for UID 3001...\n\u2591\u2591 Subject: A stop job for unit user@3001.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit user@3001.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2881.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Activating special unit exit.target...\nFeb 14 11:46:16 managed-node2 systemd[29195]: Stopping podman-pause-37bd2e87.scope...\n\u2591\u2591 Subject: A stop job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 216.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Removed slice app-podman\\x2dkube.slice - Slice /app/podman-kube.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 203 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: app-podman\\x2dkube.slice: Consumed 621ms CPU time, 64.1M memory peak.\n\u2591\u2591 Subject: Resources consumed by unit runtime\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit UNIT completed and consumed the indicated resources.\nFeb 14 11:46:16 
managed-node2 systemd[29195]: Removed slice user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice - cgroup user-libpod_pod_4d871698b496b5a13b3ae5821d008092c1c613ef59e6d2863c0fc18c5a4d4607.slice.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 215 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Stopped target default.target - Main User Target.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 202 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Stopped target basic.target - Basic System.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 200 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Stopped target paths.target - Paths.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 204 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Stopped target sockets.target - Sockets.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for 
unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 211 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Stopped target timers.target - Timers.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 205 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Stopped grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 210 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Stopped systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 207 and the job result is done.\nFeb 14 11:46:16 managed-node2 dbus-broker[29751]: Dispatched 2413 messages @ 3(\u00b115)\u03bcs / message.\n\u2591\u2591 Subject: Dispatched 2413 messages\n\u2591\u2591 Defined-By: dbus-broker\n\u2591\u2591 Support: https://groups.google.com/forum/#!forum/bus1-devel\n\u2591\u2591 \n\u2591\u2591 This message is printed by dbus-broker when shutting down. 
It includes metric\n\u2591\u2591 information collected during the runtime of dbus-broker.\n\u2591\u2591 \n\u2591\u2591 The message lists the number of dispatched messages\n\u2591\u2591 (in this case 2413) as well as the mean time to\n\u2591\u2591 handling a single message. The time measurements exclude the time spent on\n\u2591\u2591 writing to and reading from the kernel.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Stopping dbus-broker.service - D-Bus User Message Bus...\n\u2591\u2591 Subject: A stop job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 213.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Stopped systemd-tmpfiles-setup.service - Create User Files and Directories.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 206 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Stopped dbus-broker.service - D-Bus User Message Bus.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 213 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Stopped podman-pause-37bd2e87.scope.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 216 and the job result is done.\nFeb 14 
11:46:16 managed-node2 systemd[29195]: Removed slice session.slice - User Core Session Slice.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 212 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Removed slice user.slice - Slice /user.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 214 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Closed dbus.socket - D-Bus User Message Bus Socket.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 217 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Removed slice app.slice - User Application Slice.\n\u2591\u2591 Subject: A stop job for unit UNIT has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit UNIT has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 218 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[29195]: app.slice: Consumed 649ms CPU time, 64.7M memory peak.\n\u2591\u2591 Subject: Resources consumed by unit runtime\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit UNIT completed and consumed the indicated resources.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Reached target 
shutdown.target - Shutdown.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 199.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Finished systemd-exit.service - Exit the Session.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 197.\nFeb 14 11:46:16 managed-node2 systemd[29195]: Reached target exit.target - Exit the Session.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 196.\nFeb 14 11:46:16 managed-node2 systemd-logind[768]: Removed session 10.\n\u2591\u2591 Subject: Session 10 has been terminated\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 Documentation: sd-login(3)\n\u2591\u2591 \n\u2591\u2591 A session with the ID 10 has been terminated.\nFeb 14 11:46:16 managed-node2 systemd[1]: user@3001.service: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit user@3001.service has successfully entered the 'dead' state.\nFeb 14 11:46:16 managed-node2 systemd[1]: Stopped user@3001.service - User Manager for UID 3001.\n\u2591\u2591 Subject: A stop job for unit user@3001.service has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 
Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit user@3001.service has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2881 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[1]: user@3001.service: Consumed 2.184s CPU time, 83.2M memory peak.\n\u2591\u2591 Subject: Resources consumed by unit runtime\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit user@3001.service completed and consumed the indicated resources.\nFeb 14 11:46:16 managed-node2 systemd[1]: Stopping user-runtime-dir@3001.service - User Runtime Directory /run/user/3001...\n\u2591\u2591 Subject: A stop job for unit user-runtime-dir@3001.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit user-runtime-dir@3001.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2880.\nFeb 14 11:46:16 managed-node2 systemd[1]: run-user-3001.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit run-user-3001.mount has successfully entered the 'dead' state.\nFeb 14 11:46:16 managed-node2 systemd[1]: user-runtime-dir@3001.service: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit user-runtime-dir@3001.service has successfully entered the 'dead' state.\nFeb 14 11:46:16 managed-node2 systemd[1]: Stopped user-runtime-dir@3001.service - User Runtime Directory /run/user/3001.\n\u2591\u2591 Subject: A stop job for unit user-runtime-dir@3001.service has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: 
https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit user-runtime-dir@3001.service has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2880 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[1]: Removed slice user-3001.slice - User Slice of UID 3001.\n\u2591\u2591 Subject: A stop job for unit user-3001.slice has finished\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A stop job for unit user-3001.slice has finished.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2882 and the job result is done.\nFeb 14 11:46:16 managed-node2 systemd[1]: user-3001.slice: Consumed 2.212s CPU time, 83.2M memory peak.\n\u2591\u2591 Subject: Resources consumed by unit runtime\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit user-3001.slice completed and consumed the indicated resources.\nFeb 14 11:46:17 managed-node2 python3.12[53062]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:19 managed-node2 python3.12[53218]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:21 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state.\nFeb 14 
11:46:21 managed-node2 python3.12[53375]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:24 managed-node2 python3.12[53531]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:26 managed-node2 python3.12[53687]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:29 managed-node2 python3.12[53843]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:29 managed-node2 sudo[54049]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmtkxdbcybdoesitcfkqviyhvcybnwfk ; /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087589.3860164-19842-259624477313611/AnsiballZ_command.py'\nFeb 14 11:46:29 managed-node2 sudo[54049]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:46:29 managed-node2 python3.12[54052]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod exists httpd1 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 
11:46:29 managed-node2 sudo[54049]: pam_unix(sudo:session): session closed for user podman_basic_user\nFeb 14 11:46:30 managed-node2 python3.12[54214]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod exists httpd2 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:30 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:46:30 managed-node2 python3.12[54376]: ansible-ansible.legacy.command Invoked with _raw_params=podman pod exists httpd3 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:30 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:46:30 managed-node2 sudo[54588]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqjkzkyquobosfntmmeshlxejznbhuye ; XDG_RUNTIME_DIR=/run/user/3001 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087590.6654937-19897-26461502310971/AnsiballZ_command.py'\nFeb 14 11:46:30 managed-node2 sudo[54588]: pam_unix(sudo:session): session opened for user podman_basic_user(uid=3001) by root(uid=0)\nFeb 14 11:46:31 managed-node2 python3.12[54591]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail\n systemctl 
--user list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd1[.]yml[.]service[ ]+loaded[ ]+active '\n _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:31 managed-node2 sudo[54588]: pam_unix(sudo:session): session closed for user podman_basic_user\nFeb 14 11:46:31 managed-node2 python3.12[54749]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail\n systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd2[.]yml[.]service[ ]+loaded[ ]+active '\n _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:31 managed-node2 python3.12[54907]: ansible-ansible.legacy.command Invoked with _raw_params=set -euo pipefail\n systemctl --system list-units -a -l --plain | grep -E '^[ ]*podman-kube@.+-httpd3[.]yml[.]service[ ]+loaded[ ]+active '\n _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:32 managed-node2 python3.12[55065]: ansible-stat Invoked with path=/var/lib/systemd/linger/podman_basic_user follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:34 managed-node2 python3.12[55375]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:35 managed-node2 python3.12[55536]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nFeb 14 11:46:35 managed-node2 python3.12[55692]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True 
get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:37 managed-node2 python3.12[55850]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nFeb 14 11:46:38 managed-node2 python3.12[56006]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:38 managed-node2 python3.12[56163]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:38 managed-node2 python3.12[56319]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g podman_basic_user _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:39 managed-node2 python3.12[56475]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:40 managed-node2 python3.12[56631]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:40 managed-node2 python3.12[56786]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:40 managed-node2 python3.12[56941]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:46:41 managed-node2 python3.12[57096]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nFeb 14 11:46:42 managed-node2 python3.12[57252]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:43 managed-node2 python3.12[57409]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:43 managed-node2 python3.12[57565]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None\nFeb 14 11:46:44 managed-node2 python3.12[57722]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:44 managed-node2 python3.12[57877]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:46:45 managed-node2 python3.12[58032]: 
ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:46 managed-node2 python3.12[58189]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-escape --template podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:47 managed-node2 python3.12[58345]: ansible-systemd Invoked with name=podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None\nFeb 14 11:46:47 managed-node2 python3.12[58502]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:48 managed-node2 python3.12[58657]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:46:48 managed-node2 python3.12[58812]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None\nFeb 14 11:46:49 managed-node2 python3.12[58968]: ansible-stat Invoked with path=/run/user/3001 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:50 managed-node2 python3.12[59123]: ansible-file Invoked with path=/etc/containers/storage.conf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:46:50 managed-node2 python3.12[59278]: ansible-file Invoked with path=/tmp/lsr_no9h_drm_podman state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:46:51 managed-node2 sshd-session[59304]: Accepted publickey for root from 10.31.12.69 port 46594 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nFeb 14 11:46:51 managed-node2 systemd-logind[768]: New session 11 of user root.\n\u2591\u2591 Subject: A new session 11 has been created for user root\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 Documentation: sd-login(3)\n\u2591\u2591 \n\u2591\u2591 A new session with the ID 11 has been created for the user root.\n\u2591\u2591 \n\u2591\u2591 The leading process of the session is 59304.\nFeb 14 11:46:51 managed-node2 systemd[1]: Started session-11.scope - Session 11 of User root.\n\u2591\u2591 Subject: A start job for unit session-11.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit session-11.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2884.\nFeb 14 11:46:51 managed-node2 sshd-session[59304]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)\nFeb 14 11:46:51 managed-node2 sshd-session[59307]: Received disconnect from 10.31.12.69 port 46594:11: disconnected by user\nFeb 14 11:46:51 
managed-node2 sshd-session[59307]: Disconnected from user root 10.31.12.69 port 46594\nFeb 14 11:46:51 managed-node2 sshd-session[59304]: pam_unix(sshd:session): session closed for user root\nFeb 14 11:46:51 managed-node2 systemd[1]: session-11.scope: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit session-11.scope has successfully entered the 'dead' state.\nFeb 14 11:46:51 managed-node2 systemd-logind[768]: Session 11 logged out. Waiting for processes to exit.\nFeb 14 11:46:51 managed-node2 systemd-logind[768]: Removed session 11.\n\u2591\u2591 Subject: Session 11 has been terminated\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 Documentation: sd-login(3)\n\u2591\u2591 \n\u2591\u2591 A session with the ID 11 has been terminated.\nFeb 14 11:46:53 managed-node2 python3.12[59514]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d\nFeb 14 11:46:56 managed-node2 python3.12[59698]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:56 managed-node2 python3.12[59854]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:46:57 managed-node2 python3.12[60009]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:59 managed-node2 python3.12[60320]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True 
strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:46:59 managed-node2 python3.12[60481]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nFeb 14 11:47:00 managed-node2 python3.12[60637]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:02 managed-node2 sshd-session[60665]: Accepted publickey for root from 10.31.12.69 port 54102 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nFeb 14 11:47:02 managed-node2 systemd-logind[768]: New session 12 of user root.\n\u2591\u2591 Subject: A new session 12 has been created for user root\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 Documentation: sd-login(3)\n\u2591\u2591 \n\u2591\u2591 A new session with the ID 12 has been created for the user root.\n\u2591\u2591 \n\u2591\u2591 The leading process of the session is 60665.\nFeb 14 11:47:02 managed-node2 systemd[1]: Started session-12.scope - Session 12 of User root.\n\u2591\u2591 Subject: A start job for unit session-12.scope has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit session-12.scope has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2966.\nFeb 14 11:47:02 managed-node2 sshd-session[60665]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)\nFeb 14 11:47:02 managed-node2 sshd-session[60668]: Received disconnect from 10.31.12.69 port 54102:11: disconnected by user\nFeb 14 11:47:02 managed-node2 sshd-session[60668]: Disconnected from user root 10.31.12.69 port 54102\nFeb 14 11:47:02 managed-node2 sshd-session[60665]: pam_unix(sshd:session): session closed for user root\nFeb 14 11:47:02 managed-node2 systemd[1]: session-12.scope: 
Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit session-12.scope has successfully entered the 'dead' state.\nFeb 14 11:47:02 managed-node2 systemd-logind[768]: Session 12 logged out. Waiting for processes to exit.\nFeb 14 11:47:02 managed-node2 systemd-logind[768]: Removed session 12.\n\u2591\u2591 Subject: Session 12 has been terminated\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 Documentation: sd-login(3)\n\u2591\u2591 \n\u2591\u2591 A session with the ID 12 has been terminated.\nFeb 14 11:47:04 managed-node2 python3.12[60876]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d\nFeb 14 11:47:05 managed-node2 python3.12[61060]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:05 managed-node2 python3.12[61215]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:06 managed-node2 python3.12[61370]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:47:08 managed-node2 python3.12[61681]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:47:09 managed-node2 python3.12[61844]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nFeb 14 11:47:09 
managed-node2 python3.12[62000]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:11 managed-node2 python3.12[62157]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:12 managed-node2 python3.12[62314]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:47:12 managed-node2 python3.12[62469]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/nopull.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:47:13 managed-node2 python3.12[62594]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087632.5337622-22450-265320183664731/.source.container dest=/etc/containers/systemd/nopull.container owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=670d64fc68a9768edb20cad26df2acc703542d85 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:47:15 managed-node2 python3.12[62904]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:47:16 managed-node2 python3.12[63065]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False 
get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:17 managed-node2 python3.12[63222]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:19 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:47:19 managed-node2 podman[63388]: 2026-02-14 11:47:19.295521214 -0500 EST m=+0.017817019 image pull-error this_is_a_bogus_image:latest short-name resolution enforced but cannot prompt without a TTY\nFeb 14 11:47:19 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:47:19 managed-node2 python3.12[63550]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:47:20 managed-node2 python3.12[63705]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/bogus.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:47:20 managed-node2 python3.12[63830]: ansible-ansible.legacy.copy Invoked with 
src=/root/.ansible/tmp/ansible-tmp-1771087639.8898952-22814-104891175911053/.source.container dest=/etc/containers/systemd/bogus.container owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=1d087e679d135214e8ac9ccaf33b2222916efb7f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:47:22 managed-node2 python3.12[64140]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:47:23 managed-node2 python3.12[64301]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:25 managed-node2 python3.12[64458]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:26 managed-node2 python3.12[64615]: ansible-systemd Invoked with name=nopull.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None\nFeb 14 11:47:26 managed-node2 python3.12[64771]: ansible-stat Invoked with path=/etc/containers/systemd/nopull.container follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:27 managed-node2 python3.12[65083]: ansible-file Invoked with path=/etc/containers/systemd/nopull.container state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None\nFeb 14 11:47:28 managed-node2 python3.12[65238]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:47:28 managed-node2 systemd[1]: Reload requested from client PID 65239 ('systemctl') (unit session-8.scope)...\nFeb 14 11:47:28 managed-node2 systemd[1]: Reloading...\nFeb 14 11:47:28 managed-node2 quadlet-generator[65263]: Warning: bogus.container specifies the image \"this_is_a_bogus_image\" which not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details.\nFeb 14 11:47:28 managed-node2 systemd-rc-local-generator[65291]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:47:28 managed-node2 systemd[1]: Reloading finished in 210 ms.\nFeb 14 11:47:29 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:47:31 managed-node2 python3.12[65771]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:47:32 managed-node2 python3.12[65932]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:33 managed-node2 python3.12[66089]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:34 managed-node2 python3.12[66246]: ansible-systemd Invoked with 
name=bogus.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None\nFeb 14 11:47:35 managed-node2 systemd[1]: Reload requested from client PID 66249 ('systemctl') (unit session-8.scope)...\nFeb 14 11:47:35 managed-node2 systemd[1]: Reloading...\nFeb 14 11:47:35 managed-node2 quadlet-generator[66273]: Warning: bogus.container specifies the image \"this_is_a_bogus_image\" which not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details.\nFeb 14 11:47:35 managed-node2 systemd-rc-local-generator[66298]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:47:35 managed-node2 systemd[1]: Reloading finished in 219 ms.\nFeb 14 11:47:35 managed-node2 python3.12[66465]: ansible-stat Invoked with path=/etc/containers/systemd/bogus.container follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:36 managed-node2 python3.12[66777]: ansible-file Invoked with path=/etc/containers/systemd/bogus.container state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:47:37 managed-node2 python3.12[66932]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:47:37 managed-node2 systemd[1]: Reload requested from client PID 66933 ('systemctl') (unit session-8.scope)...\nFeb 14 11:47:37 managed-node2 systemd[1]: Reloading...\nFeb 14 11:47:37 managed-node2 systemd-rc-local-generator[66984]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:47:37 
managed-node2 systemd[1]: Reloading finished in 210 ms.\nFeb 14 11:47:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:47:38 managed-node2 python3.12[67311]: ansible-user Invoked with name=user_quadlet_basic uid=1111 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on managed-node2 update_password=always group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None\nFeb 14 11:47:38 managed-node2 useradd[67313]: new group: name=user_quadlet_basic, GID=1111\nFeb 14 11:47:38 managed-node2 useradd[67313]: new user: name=user_quadlet_basic, UID=1111, GID=1111, home=/home/user_quadlet_basic, shell=/bin/bash, from=/dev/pts/0\nFeb 14 11:47:38 managed-node2 rsyslogd[985]: imjournal: journal files changed, reloading... [v8.2510.0-5.el10 try https://www.rsyslog.com/e/0 ]\nFeb 14 11:47:38 managed-node2 rsyslogd[985]: imjournal: journal files changed, reloading... 
[v8.2510.0-5.el10 try https://www.rsyslog.com/e/0 ]\nFeb 14 11:47:40 managed-node2 python3.12[67624]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:47:41 managed-node2 python3.12[67785]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:42 managed-node2 python3.12[67942]: ansible-getent Invoked with database=passwd key=user_quadlet_basic fail_key=False service=None split=None\nFeb 14 11:47:43 managed-node2 python3.12[68098]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nFeb 14 11:47:43 managed-node2 systemd[1]: Created slice user-1111.slice - User Slice of UID 1111.\n\u2591\u2591 Subject: A start job for unit user-1111.slice has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit user-1111.slice has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3126.\nFeb 14 11:47:43 managed-node2 systemd[1]: Starting user-runtime-dir@1111.service - User Runtime Directory /run/user/1111...\n\u2591\u2591 Subject: A start job for unit user-runtime-dir@1111.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit user-runtime-dir@1111.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3048.\nFeb 14 11:47:43 managed-node2 systemd[1]: Finished 
user-runtime-dir@1111.service - User Runtime Directory /run/user/1111.\n\u2591\u2591 Subject: A start job for unit user-runtime-dir@1111.service has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit user-runtime-dir@1111.service has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3048.\nFeb 14 11:47:43 managed-node2 systemd[1]: Starting user@1111.service - User Manager for UID 1111...\n\u2591\u2591 Subject: A start job for unit user@1111.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit user@1111.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3128.\nFeb 14 11:47:43 managed-node2 systemd-logind[768]: New session 13 of user user_quadlet_basic.\n\u2591\u2591 Subject: A new session 13 has been created for user user_quadlet_basic\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 Documentation: sd-login(3)\n\u2591\u2591 \n\u2591\u2591 A new session with the ID 13 has been created for the user user_quadlet_basic.\n\u2591\u2591 \n\u2591\u2591 The leading process of the session is 68102.\nFeb 14 11:47:43 managed-node2 (systemd)[68102]: pam_unix(systemd-user:session): session opened for user user_quadlet_basic(uid=1111) by user_quadlet_basic(uid=0)\nFeb 14 11:47:43 managed-node2 systemd[68102]: Queued start job for default target default.target.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Created slice app.slice - User Application Slice.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 
5.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Started grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 10.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 9.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Reached target paths.target - Paths.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 12.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Reached target timers.target - Timers.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 8.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Starting dbus.socket - D-Bus User Message Bus Socket...\n\u2591\u2591 Subject: A start job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has begun 
execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 4.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...\n\u2591\u2591 Subject: A start job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 11.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Listening on dbus.socket - D-Bus User Message Bus Socket.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 4.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 11.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Reached target sockets.target - Sockets.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Reached target basic.target - Basic System.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start 
job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 2.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Reached target default.target - Main User Target.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 1.\nFeb 14 11:47:43 managed-node2 systemd[68102]: Startup finished in 64ms.\n\u2591\u2591 Subject: User manager start-up is now complete\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The user manager instance for user 1111 has been started. All services queued\n\u2591\u2591 for starting have been started. Note that other services might still be starting\n\u2591\u2591 up or be started at any later time.\n\u2591\u2591 \n\u2591\u2591 Startup of the manager took 64969 microseconds.\nFeb 14 11:47:43 managed-node2 systemd[1]: Started user@1111.service - User Manager for UID 1111.\n\u2591\u2591 Subject: A start job for unit user@1111.service has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit user@1111.service has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3128.\nFeb 14 11:47:44 managed-node2 python3.12[68273]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:44 managed-node2 sudo[68480]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uixpklxygdemobdbhyuryznyhdkszmkn ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087664.1855588-23870-3454953199487/AnsiballZ_podman_secret.py'\nFeb 14 11:47:44 
managed-node2 sudo[68480]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)\nFeb 14 11:47:44 managed-node2 systemd[68102]: Created slice session.slice - User Core Session Slice.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 14.\nFeb 14 11:47:44 managed-node2 systemd[68102]: Starting dbus-broker.service - D-Bus User Message Bus...\n\u2591\u2591 Subject: A start job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 13.\nFeb 14 11:47:44 managed-node2 dbus-broker-launch[68510]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored\nFeb 14 11:47:44 managed-node2 dbus-broker-launch[68510]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored\nFeb 14 11:47:44 managed-node2 systemd[68102]: Started dbus-broker.service - D-Bus User Message Bus.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 13.\nFeb 14 11:47:44 managed-node2 dbus-broker-launch[68510]: Ready\nFeb 14 11:47:44 managed-node2 systemd[68102]: Created slice user.slice - Slice /user.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 
\n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 20.\nFeb 14 11:47:44 managed-node2 systemd[68102]: Started podman-68496.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 19.\nFeb 14 11:47:44 managed-node2 systemd[68102]: Started podman-pause-57116b4d.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 23.\nFeb 14 11:47:44 managed-node2 systemd[68102]: Started podman-68512.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 27.\nFeb 14 11:47:44 managed-node2 systemd[68102]: Started podman-68519.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 31.\nFeb 14 11:47:44 managed-node2 sudo[68480]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 11:47:45 managed-node2 python3.12[68680]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True 
stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nFeb 14 11:47:46 managed-node2 python3.12[68835]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:46 managed-node2 sudo[69042]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdbpkvqovbwkjyfrwfoiymaxsidvpmvs ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087666.2460904-23929-12811921702894/AnsiballZ_podman_secret.py'\nFeb 14 11:47:46 managed-node2 sudo[69042]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)\nFeb 14 11:47:46 managed-node2 systemd[68102]: Started podman-69053.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 35.\nFeb 14 11:47:46 managed-node2 systemd[68102]: Started podman-69060.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 39.\nFeb 14 11:47:46 managed-node2 systemd[68102]: Started podman-69068.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 43.\nFeb 14 11:47:46 managed-node2 sudo[69042]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 
11:47:47 managed-node2 python3.12[69230]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:47 managed-node2 python3.12[69387]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:47:48 managed-node2 python3.12[69543]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:47:49 managed-node2 python3.12[69699]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nFeb 14 11:47:49 managed-node2 python3.12[69854]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:47:50 managed-node2 python3.12[70009]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:47:50 managed-node2 python3.12[70134]: ansible-ansible.legacy.copy Invoked with 
dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network owner=user_quadlet_basic group=1111 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1771087669.9321482-24074-44070762850677/.source.network _original_basename=.19wp4gkp follow=False checksum=19c9b17be2af9b9deca5c3bd327f048966750682 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:47:50 managed-node2 sudo[70339]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apzsbbomwlqjxtafirxqggwpyjtxmtby ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087670.684208-24104-102282262690494/AnsiballZ_systemd.py'\nFeb 14 11:47:50 managed-node2 sudo[70339]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)\nFeb 14 11:47:51 managed-node2 python3.12[70342]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:47:51 managed-node2 systemd[68102]: Reload requested from client PID 70343 ('systemctl')...\nFeb 14 11:47:51 managed-node2 systemd[68102]: Reloading...\nFeb 14 11:47:51 managed-node2 systemd[68102]: Reloading finished in 39 ms.\nFeb 14 11:47:51 managed-node2 sudo[70339]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 11:47:51 managed-node2 sudo[70557]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcbyikavwgufawfbnmbhihisncpxpuwz ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087671.337951-24131-108859254372812/AnsiballZ_systemd.py'\nFeb 14 11:47:51 managed-node2 sudo[70557]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)\nFeb 14 11:47:51 managed-node2 
python3.12[70560]: ansible-systemd Invoked with name=quadlet-basic-network.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None\nFeb 14 11:47:51 managed-node2 systemd[68102]: Starting podman-user-wait-network-online.service - Wait for system level network-online.target as user....\n\u2591\u2591 Subject: A start job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 59.\nFeb 14 11:47:51 managed-node2 sh[70564]: active\nFeb 14 11:47:51 managed-node2 systemd[68102]: Finished podman-user-wait-network-online.service - Wait for system level network-online.target as user..\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 59.\nFeb 14 11:47:51 managed-node2 systemd[68102]: Starting quadlet-basic-network.service...\n\u2591\u2591 Subject: A start job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 47.\nFeb 14 11:47:51 managed-node2 quadlet-basic-network[70566]: quadlet-basic-name\nFeb 14 11:47:51 managed-node2 systemd[68102]: Finished quadlet-basic-network.service.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 
47.\nFeb 14 11:47:51 managed-node2 sudo[70557]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 11:47:52 managed-node2 python3.12[70728]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:52 managed-node2 python3.12[70886]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:47:53 managed-node2 python3.12[71042]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:47:54 managed-node2 python3.12[71198]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nFeb 14 11:47:55 managed-node2 python3.12[71353]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:47:55 managed-node2 python3.12[71508]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True\nFeb 14 11:47:55 managed-node2 python3.12[71633]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087675.2243838-24329-194671574787178/.source.network dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network owner=user_quadlet_basic group=1111 mode=0644 follow=False _original_basename=systemd.j2 checksum=52c9d75ecaf81203cc1f1a3b1dd00fcd25067b01 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:47:56 managed-node2 sudo[71838]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbbqqluhqmoaicdmjhhkmabpjkqakfen ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087675.9977405-24371-137166020726145/AnsiballZ_systemd.py'\nFeb 14 11:47:56 managed-node2 sudo[71838]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)\nFeb 14 11:47:56 managed-node2 python3.12[71842]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:47:56 managed-node2 systemd[68102]: Reload requested from client PID 71843 ('systemctl')...\nFeb 14 11:47:56 managed-node2 systemd[68102]: Reloading...\nFeb 14 11:47:56 managed-node2 systemd[68102]: Reloading finished in 39 ms.\nFeb 14 11:47:56 managed-node2 sudo[71838]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 11:47:56 managed-node2 sudo[72058]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyqnmoirqgewpatwzwawhhrlgyanrsed ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087676.6492229-24391-168211225876343/AnsiballZ_systemd.py'\nFeb 14 11:47:56 managed-node2 sudo[72058]: 
pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)\nFeb 14 11:47:57 managed-node2 python3.12[72061]: ansible-systemd Invoked with name=quadlet-basic-unused-network-network.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None\nFeb 14 11:47:57 managed-node2 systemd[68102]: Starting quadlet-basic-unused-network-network.service...\n\u2591\u2591 Subject: A start job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 60.\nFeb 14 11:47:57 managed-node2 quadlet-basic-unused-network-network[72064]: systemd-quadlet-basic-unused-network\nFeb 14 11:47:57 managed-node2 systemd[68102]: Finished quadlet-basic-unused-network-network.service.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 60.\nFeb 14 11:47:57 managed-node2 sudo[72058]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 11:47:57 managed-node2 python3.12[72226]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:47:58 managed-node2 python3.12[72383]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:47:58 managed-node2 python3.12[72539]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic 
_uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:47:59 managed-node2 python3.12[72695]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nFeb 14 11:48:00 managed-node2 python3.12[72850]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:48:00 managed-node2 python3.12[73005]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:48:01 managed-node2 python3.12[73130]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087680.5019636-24577-124432255509769/.source.volume dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume owner=user_quadlet_basic group=1111 mode=0644 follow=False _original_basename=systemd.j2 checksum=90a3571bfc7670328fe3f8fb625585613dbd9c4a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:48:01 managed-node2 sudo[73335]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh 
-c 'echo BECOME-SUCCESS-aizslvcfwxohxjustnktmhoxtyqdyuqo ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087681.2433515-24617-277250510433155/AnsiballZ_systemd.py'\nFeb 14 11:48:01 managed-node2 sudo[73335]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)\nFeb 14 11:48:01 managed-node2 python3.12[73338]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:48:01 managed-node2 systemd[68102]: Reload requested from client PID 73339 ('systemctl')...\nFeb 14 11:48:01 managed-node2 systemd[68102]: Reloading...\nFeb 14 11:48:01 managed-node2 systemd[68102]: Reloading finished in 39 ms.\nFeb 14 11:48:01 managed-node2 sudo[73335]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 11:48:02 managed-node2 sudo[73553]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnaimusrbkhfvcfbsqyjenyppzojpcum ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087681.8992858-24637-239185248282254/AnsiballZ_systemd.py'\nFeb 14 11:48:02 managed-node2 sudo[73553]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)\nFeb 14 11:48:02 managed-node2 python3.12[73556]: ansible-systemd Invoked with name=quadlet-basic-mysql-volume.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None\nFeb 14 11:48:02 managed-node2 systemd[68102]: Starting quadlet-basic-mysql-volume.service...\n\u2591\u2591 Subject: A start job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 73.\nFeb 14 11:48:02 managed-node2 
quadlet-basic-mysql-volume[73559]: quadlet-basic-mysql-name\nFeb 14 11:48:02 managed-node2 systemd[68102]: Finished quadlet-basic-mysql-volume.service.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 73.\nFeb 14 11:48:02 managed-node2 sudo[73553]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 11:48:03 managed-node2 python3.12[73722]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:48:03 managed-node2 python3.12[73879]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:48:03 managed-node2 python3.12[74035]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:48:04 managed-node2 python3.12[74191]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nFeb 14 11:48:05 managed-node2 python3.12[74346]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:48:05 managed-node2 python3.12[74501]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:48:06 managed-node2 python3.12[74626]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087685.4910405-24770-45960372293351/.source.volume dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume owner=user_quadlet_basic group=1111 mode=0644 follow=False _original_basename=systemd.j2 checksum=fd0ae560360afa5541b866560b1e849d25e216ef backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:48:06 managed-node2 sudo[74831]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwpvfwjdxtnxlrjnsruhcvrbszosshmc ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087686.2318456-24792-71938151665693/AnsiballZ_systemd.py'\nFeb 14 11:48:06 managed-node2 sudo[74831]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)\nFeb 14 11:48:06 managed-node2 python3.12[74834]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:48:06 managed-node2 systemd[68102]: Reload requested from client PID 74835 ('systemctl')...\nFeb 14 11:48:06 managed-node2 systemd[68102]: Reloading...\nFeb 14 11:48:06 managed-node2 systemd[68102]: Reloading finished in 42 
ms.\nFeb 14 11:48:06 managed-node2 sudo[74831]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 11:48:07 managed-node2 sudo[75049]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuthmwvkdjusmazenuymwuyactedtesj ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087686.879755-24802-91952004617364/AnsiballZ_systemd.py'\nFeb 14 11:48:07 managed-node2 sudo[75049]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)\nFeb 14 11:48:07 managed-node2 python3.12[75052]: ansible-systemd Invoked with name=quadlet-basic-unused-volume-volume.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None\nFeb 14 11:48:07 managed-node2 systemd[68102]: Starting quadlet-basic-unused-volume-volume.service...\n\u2591\u2591 Subject: A start job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 86.\nFeb 14 11:48:07 managed-node2 quadlet-basic-unused-volume-volume[75055]: systemd-quadlet-basic-unused-volume\nFeb 14 11:48:07 managed-node2 systemd[68102]: Finished quadlet-basic-unused-volume-volume.service.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 86.\nFeb 14 11:48:07 managed-node2 sudo[75049]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 11:48:08 managed-node2 python3.12[75217]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True 
checksum_algorithm=sha1\nFeb 14 11:48:08 managed-node2 python3.12[75374]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:48:08 managed-node2 python3.12[75530]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:48:09 managed-node2 python3.12[75686]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nFeb 14 11:48:10 managed-node2 sudo[75891]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnxglmukkluarwwhgryjkisfktidpwgs ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087689.9702964-24906-141256695692020/AnsiballZ_podman_image.py'\nFeb 14 11:48:10 managed-node2 sudo[75891]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)\nFeb 14 11:48:10 managed-node2 systemd[68102]: Started podman-75895.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 99.\nFeb 14 11:48:10 managed-node2 systemd[68102]: Started podman-75903.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 
Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 103.\nFeb 14 11:48:15 managed-node2 systemd[68102]: podman-75903.scope: Consumed 8.633s CPU time, 469.5M memory peak.\n\u2591\u2591 Subject: Resources consumed by unit runtime\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit UNIT completed and consumed the indicated resources.\nFeb 14 11:48:15 managed-node2 systemd[68102]: Started podman-76114.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 107.\nFeb 14 11:48:16 managed-node2 systemd[68102]: Started podman-76121.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 111.\nFeb 14 11:48:17 managed-node2 systemd[68102]: Started podman-76129.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 115.\nFeb 14 11:48:17 managed-node2 systemd[68102]: Started podman-76137.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished 
successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 119.\nFeb 14 11:48:17 managed-node2 sudo[75891]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 11:48:17 managed-node2 python3.12[76298]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:48:18 managed-node2 python3.12[76453]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:48:18 managed-node2 python3.12[76578]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087697.7685003-25203-102137340282322/.source.container dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container owner=user_quadlet_basic group=1111 mode=0644 follow=False _original_basename=systemd.j2 checksum=0b6cac7929623f1059e78ef39b8b0a25169b28a6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:48:18 managed-node2 sudo[76783]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgfztswzveapjwryvbcfeqxxlwjdmhyv ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087698.5060833-25237-65119752173567/AnsiballZ_systemd.py'\nFeb 14 11:48:18 managed-node2 sudo[76783]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by 
root(uid=0)\nFeb 14 11:48:18 managed-node2 python3.12[76786]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:48:18 managed-node2 systemd[68102]: Reload requested from client PID 76787 ('systemctl')...\nFeb 14 11:48:18 managed-node2 systemd[68102]: Reloading...\nFeb 14 11:48:19 managed-node2 systemd[68102]: Reloading finished in 42 ms.\nFeb 14 11:48:19 managed-node2 sudo[76783]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 11:48:19 managed-node2 sudo[77002]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpzvyfzxibzygvqvywjclcqjhwmygvfl ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087699.1828828-25258-135996473326537/AnsiballZ_systemd.py'\nFeb 14 11:48:19 managed-node2 sudo[77002]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)\nFeb 14 11:48:19 managed-node2 python3.12[77005]: ansible-systemd Invoked with name=quadlet-basic-mysql.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None\nFeb 14 11:48:19 managed-node2 systemd[68102]: Starting quadlet-basic-mysql.service...\n\u2591\u2591 Subject: A start job for unit UNIT has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 123.\nFeb 14 11:48:19 managed-node2 systemd[68102]: Started rootless-netns-cae0da54.scope.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier 
is 138.\nFeb 14 11:48:19 managed-node2 kernel: podman1: port 1(veth0) entered blocking state\nFeb 14 11:48:19 managed-node2 kernel: podman1: port 1(veth0) entered disabled state\nFeb 14 11:48:19 managed-node2 kernel: veth0: entered allmulticast mode\nFeb 14 11:48:19 managed-node2 kernel: veth0: entered promiscuous mode\nFeb 14 11:48:19 managed-node2 kernel: podman1: port 1(veth0) entered blocking state\nFeb 14 11:48:19 managed-node2 kernel: podman1: port 1(veth0) entered forwarding state\nFeb 14 11:48:19 managed-node2 systemd[68102]: Started run-p77036-i77037.scope - [systemd-run] /usr/libexec/podman/aardvark-dns --config /run/user/1111/containers/networks/aardvark-dns -p 53 run.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 142.\nFeb 14 11:48:19 managed-node2 systemd[68102]: Started quadlet-basic-mysql.service.\n\u2591\u2591 Subject: A start job for unit UNIT has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit UNIT has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 123.\nFeb 14 11:48:19 managed-node2 quadlet-basic-mysql[77008]: eb573be648acda7fed60ce6841ee124752e10f81306eb341a44d9c0b11f44d3d\nFeb 14 11:48:19 managed-node2 sudo[77002]: pam_unix(sudo:session): session closed for user user_quadlet_basic\nFeb 14 11:48:20 managed-node2 python3.12[77251]: ansible-ansible.legacy.command Invoked with _raw_params=cat /home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:48:20 
managed-node2 python3.12[77407]: ansible-ansible.legacy.command Invoked with _raw_params=cat /home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:48:21 managed-node2 python3.12[77566]: ansible-ansible.legacy.command Invoked with _raw_params=cat /home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:48:21 managed-node2 python3.12[77730]: ansible-stat Invoked with path=/var/lib/systemd/linger/user_quadlet_basic follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:48:23 managed-node2 python3.12[78042]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:48:24 managed-node2 python3.12[78227]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nFeb 14 11:48:25 managed-node2 python3.12[78383]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:48:27 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:27 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 
Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:27 managed-node2 podman[78590]: 2026-02-14 11:48:27.13362275 -0500 EST m=+0.017279956 secret create 90fe43ec6167996f11d8c921b\nFeb 14 11:48:28 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:28 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:28 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:28 managed-node2 podman[78781]: 2026-02-14 11:48:28.453280763 -0500 EST m=+0.022645603 secret create 90451f17a3693765e0b13c8ad\nFeb 14 11:48:29 managed-node2 python3.12[78950]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:48:30 managed-node2 python3.12[79107]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:48:31 managed-node2 python3.12[79274]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/quadlet-basic.network follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:48:31 managed-node2 python3.12[79410]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/systemd/quadlet-basic.network owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1771087710.8078883-25734-32145037604992/.source.network _original_basename=.y5jiqtp_ follow=False checksum=19c9b17be2af9b9deca5c3bd327f048966750682 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:48:32 managed-node2 python3.12[79565]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:48:32 managed-node2 systemd[1]: Reload requested from client PID 79566 ('systemctl') (unit session-8.scope)...\nFeb 14 11:48:32 managed-node2 systemd[1]: Reloading...\nFeb 14 11:48:32 managed-node2 systemd-rc-local-generator[79615]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:48:32 managed-node2 systemd[1]: Reloading finished in 220 ms.\nFeb 14 11:48:32 managed-node2 python3.12[79785]: ansible-systemd Invoked with name=quadlet-basic-network.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None\nFeb 14 11:48:32 managed-node2 systemd[1]: Starting quadlet-basic-network.service...\n\u2591\u2591 Subject: A start job for unit 
quadlet-basic-network.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit quadlet-basic-network.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3214.\nFeb 14 11:48:32 managed-node2 podman[79789]: 2026-02-14 11:48:32.926261081 -0500 EST m=+0.016760408 network create 753e73850896ed526bbf5b94858e3cd2708517eedee1b4c90d154f7a2349144f (name=quadlet-basic-name, type=bridge)\nFeb 14 11:48:32 managed-node2 quadlet-basic-network[79789]: quadlet-basic-name\nFeb 14 11:48:32 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:32 managed-node2 systemd[1]: Finished quadlet-basic-network.service.\n\u2591\u2591 Subject: A start job for unit quadlet-basic-network.service has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit quadlet-basic-network.service has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3214.\nFeb 14 11:48:33 managed-node2 python3.12[79950]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:48:34 managed-node2 python3.12[80108]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None 
setype=None attributes=None\nFeb 14 11:48:35 managed-node2 python3.12[80263]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/quadlet-basic-unused-network.network follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:48:35 managed-node2 python3.12[80388]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087714.978921-25930-142517737796637/.source.network dest=/etc/containers/systemd/quadlet-basic-unused-network.network owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=52c9d75ecaf81203cc1f1a3b1dd00fcd25067b01 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:48:36 managed-node2 python3.12[80543]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:48:36 managed-node2 systemd[1]: Reload requested from client PID 80544 ('systemctl') (unit session-8.scope)...\nFeb 14 11:48:36 managed-node2 systemd[1]: Reloading...\nFeb 14 11:48:36 managed-node2 systemd-rc-local-generator[80594]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:48:36 managed-node2 systemd[1]: Reloading finished in 213 ms.\nFeb 14 11:48:36 managed-node2 python3.12[80763]: ansible-systemd Invoked with name=quadlet-basic-unused-network-network.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None\nFeb 14 11:48:36 managed-node2 systemd[1]: Starting quadlet-basic-unused-network-network.service...\n\u2591\u2591 Subject: A start job for unit quadlet-basic-unused-network-network.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: 
https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit quadlet-basic-unused-network-network.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3298.\nFeb 14 11:48:36 managed-node2 podman[80767]: 2026-02-14 11:48:36.941382834 -0500 EST m=+0.020181531 network create 01cf3213e3ed2d0e128267f68e75813ac56a211a4b375eb13d2a19c6244fb9f3 (name=systemd-quadlet-basic-unused-network, type=bridge)\nFeb 14 11:48:36 managed-node2 quadlet-basic-unused-network-network[80767]: systemd-quadlet-basic-unused-network\nFeb 14 11:48:36 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:36 managed-node2 systemd[1]: Finished quadlet-basic-unused-network-network.service.\n\u2591\u2591 Subject: A start job for unit quadlet-basic-unused-network-network.service has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit quadlet-basic-unused-network-network.service has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3298.\nFeb 14 11:48:37 managed-node2 python3.12[80929]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:48:39 managed-node2 python3.12[81086]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None 
setype=None attributes=None\nFeb 14 11:48:39 managed-node2 python3.12[81241]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/quadlet-basic-mysql.volume follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:48:39 managed-node2 python3.12[81366]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087719.2426136-26105-180291594378143/.source.volume dest=/etc/containers/systemd/quadlet-basic-mysql.volume owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=90a3571bfc7670328fe3f8fb625585613dbd9c4a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:48:40 managed-node2 python3.12[81521]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:48:40 managed-node2 systemd[1]: Reload requested from client PID 81522 ('systemctl') (unit session-8.scope)...\nFeb 14 11:48:40 managed-node2 systemd[1]: Reloading...\nFeb 14 11:48:40 managed-node2 systemd-rc-local-generator[81573]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:48:40 managed-node2 systemd[1]: Reloading finished in 217 ms.\nFeb 14 11:48:41 managed-node2 python3.12[81742]: ansible-systemd Invoked with name=quadlet-basic-mysql-volume.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None\nFeb 14 11:48:41 managed-node2 systemd[1]: Starting quadlet-basic-mysql-volume.service...\n\u2591\u2591 Subject: A start job for unit quadlet-basic-mysql-volume.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit 
quadlet-basic-mysql-volume.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3382.\nFeb 14 11:48:41 managed-node2 podman[81746]: 2026-02-14 11:48:41.283012966 -0500 EST m=+0.024739231 volume create quadlet-basic-mysql-name\nFeb 14 11:48:41 managed-node2 quadlet-basic-mysql-volume[81746]: quadlet-basic-mysql-name\nFeb 14 11:48:41 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:41 managed-node2 systemd[1]: Finished quadlet-basic-mysql-volume.service.\n\u2591\u2591 Subject: A start job for unit quadlet-basic-mysql-volume.service has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit quadlet-basic-mysql-volume.service has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3382.\nFeb 14 11:48:42 managed-node2 python3.12[81909]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:48:43 managed-node2 python3.12[82066]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:48:43 managed-node2 python3.12[82221]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/quadlet-basic-unused-volume.volume follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True\nFeb 14 11:48:44 managed-node2 python3.12[82346]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087723.576437-26316-237709762951197/.source.volume dest=/etc/containers/systemd/quadlet-basic-unused-volume.volume owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=fd0ae560360afa5541b866560b1e849d25e216ef backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None\nFeb 14 11:48:44 managed-node2 python3.12[82501]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None\nFeb 14 11:48:44 managed-node2 systemd[1]: Reload requested from client PID 82502 ('systemctl') (unit session-8.scope)...\nFeb 14 11:48:44 managed-node2 systemd[1]: Reloading...\nFeb 14 11:48:44 managed-node2 systemd-rc-local-generator[82557]: /etc/rc.d/rc.local is not marked executable, skipping.\nFeb 14 11:48:44 managed-node2 systemd[1]: Reloading finished in 213 ms.\nFeb 14 11:48:45 managed-node2 python3.12[82722]: ansible-systemd Invoked with name=quadlet-basic-unused-volume-volume.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None\nFeb 14 11:48:45 managed-node2 systemd[1]: Starting quadlet-basic-unused-volume-volume.service...\n\u2591\u2591 Subject: A start job for unit quadlet-basic-unused-volume-volume.service has begun execution\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit quadlet-basic-unused-volume-volume.service has begun execution.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3466.\nFeb 14 11:48:45 managed-node2 podman[82726]: 2026-02-14 11:48:45.588887323 
-0500 EST m=+0.024810141 volume create systemd-quadlet-basic-unused-volume\nFeb 14 11:48:45 managed-node2 quadlet-basic-unused-volume-volume[82726]: systemd-quadlet-basic-unused-volume\nFeb 14 11:48:45 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:45 managed-node2 systemd[1]: Finished quadlet-basic-unused-volume-volume.service.\n\u2591\u2591 Subject: A start job for unit quadlet-basic-unused-volume-volume.service has finished successfully\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 A start job for unit quadlet-basic-unused-volume-volume.service has finished successfully.\n\u2591\u2591 \n\u2591\u2591 The job identifier is 3466.\nFeb 14 11:48:46 managed-node2 python3.12[82888]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nFeb 14 11:48:47 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:51 managed-node2 podman[83053]: 2026-02-14 11:48:51.154323799 -0500 EST m=+3.494317061 image pull-error quay.io/linux-system-roles/mysql:5.6 unable to copy from source docker://quay.io/linux-system-roles/mysql:5.6: copying system image from manifest list: reading blob sha256:82f6e5dc9ef6571b55539f02c80df6c7e3eafbe1533e780efee880fb528404f8: Digest did not match, expected 
sha256:82f6e5dc9ef6571b55539f02c80df6c7e3eafbe1533e780efee880fb528404f8, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\nFeb 14 11:48:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:51 managed-node2 python3.12[83273]: ansible-ansible.legacy.command Invoked with _raw_params=set -x\n set -o pipefail\n exec 1>&2\n #podman volume rm --all\n #podman network prune -f\n podman volume ls\n podman network ls\n podman secret ls\n podman container ls\n podman pod ls\n podman images\n systemctl list-units | grep quadlet\n _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nFeb 14 11:48:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.\n\u2591\u2591 Subject: Unit succeeded\n\u2591\u2591 Defined-By: systemd\n\u2591\u2591 Support: https://access.redhat.com/support\n\u2591\u2591 \n\u2591\u2591 The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nFeb 14 11:48:52 managed-node2 python3.12[83475]: ansible-ansible.legacy.command Invoked with _raw_params=grep type=AVC /var/log/audit/audit.log _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None 
executable=None creates=None removes=None stdin=None\nFeb 14 11:48:52 managed-node2 python3.12[83631]: ansible-ansible.legacy.command Invoked with _raw_params=journalctl -ex _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None",
"task_name": "Dump journal",
"task_path": "/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:327"
}
]
SYSTEM ROLES ERRORS END v1

TASKS RECAP ********************************************************************
Saturday 14 February 2026 11:50:45 -0500 (0:00:00.037) 0:03:41.830 *****
===============================================================================
fedora.linux_system_roles.podman : Wait for user session to exit closing state -- 12.25s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cancel_linger.yml:62
fedora.linux_system_roles.podman : Ensure container images are present --- 7.32s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2
fedora.linux_system_roles.podman : Ensure container images are present --- 4.03s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_images.yml:2
fedora.linux_system_roles.podman : Stop and disable service ------------- 2.40s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:12
fedora.linux_system_roles.podman : For testing and debugging - services --- 2.29s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214
fedora.linux_system_roles.podman : For testing and debugging - services --- 2.10s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214
fedora.linux_system_roles.podman : For testing and debugging - services --- 2.09s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214
fedora.linux_system_roles.podman : For testing and debugging - services --- 2.08s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214
fedora.linux_system_roles.podman : For testing and debugging - services --- 2.07s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/cleanup_quadlet_spec.yml:214
Check files ------------------------------------------------------------- 1.35s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:218
fedora.linux_system_roles.podman : Gather the package facts ------------- 1.33s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
Gathering Facts --------------------------------------------------------- 1.19s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_basic.yml:9
fedora.linux_system_roles.podman : Gather the package facts ------------- 1.04s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
fedora.linux_system_roles.podman : Gather the package facts ------------- 1.04s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
fedora.linux_system_roles.podman : Gather the package facts ------------- 1.04s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
fedora.linux_system_roles.podman : Gather the package facts ------------- 1.04s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
fedora.linux_system_roles.podman : Gather the package facts ------------- 1.02s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
fedora.linux_system_roles.podman : Gather the package facts ------------- 1.02s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
fedora.linux_system_roles.podman : Gather the package facts ------------- 0.99s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
fedora.linux_system_roles.podman : Start service ------------------------ 0.90s
/tmp/collections-RsQ/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/create_update_quadlet_spec.yml:98
Feb 14 11:47:04 managed-node2 python3.12[60876]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 14 11:47:05 managed-node2 python3.12[61060]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:47:05 managed-node2 python3.12[61215]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:47:06 managed-node2 python3.12[61370]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:47:08 managed-node2 python3.12[61681]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:47:09 managed-node2 python3.12[61844]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None
Feb 14 11:47:09 managed-node2 python3.12[62000]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:47:11 managed-node2 python3.12[62157]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:47:12 managed-node2 python3.12[62314]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:47:12 managed-node2 python3.12[62469]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/nopull.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Feb 14 11:47:13 managed-node2 python3.12[62594]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087632.5337622-22450-265320183664731/.source.container dest=/etc/containers/systemd/nopull.container owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=670d64fc68a9768edb20cad26df2acc703542d85 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:47:15 managed-node2 python3.12[62904]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:47:16 managed-node2 python3.12[63065]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:47:17 managed-node2 python3.12[63222]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:47:19 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:47:19 managed-node2 podman[63388]: 2026-02-14 11:47:19.295521214 -0500 EST m=+0.017817019 image pull-error this_is_a_bogus_image:latest short-name resolution enforced but cannot prompt without a TTY
Feb 14 11:47:19 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:47:19 managed-node2 python3.12[63550]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:47:20 managed-node2 python3.12[63705]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/bogus.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Feb 14 11:47:20 managed-node2 python3.12[63830]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087639.8898952-22814-104891175911053/.source.container dest=/etc/containers/systemd/bogus.container owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=1d087e679d135214e8ac9ccaf33b2222916efb7f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:47:22 managed-node2 python3.12[64140]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:47:23 managed-node2 python3.12[64301]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:47:25 managed-node2 python3.12[64458]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:47:26 managed-node2 python3.12[64615]: ansible-systemd Invoked with name=nopull.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None
Feb 14 11:47:26 managed-node2 python3.12[64771]: ansible-stat Invoked with path=/etc/containers/systemd/nopull.container follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:47:27 managed-node2 python3.12[65083]: ansible-file Invoked with path=/etc/containers/systemd/nopull.container state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:47:28 managed-node2 python3.12[65238]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None
Feb 14 11:47:28 managed-node2 systemd[1]: Reload requested from client PID 65239 ('systemctl') (unit session-8.scope)...
Feb 14 11:47:28 managed-node2 systemd[1]: Reloading... Feb 14 11:47:28 managed-node2 quadlet-generator[65263]: Warning: bogus.container specifies the image "this_is_a_bogus_image" which not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details. Feb 14 11:47:28 managed-node2 systemd-rc-local-generator[65291]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 14 11:47:28 managed-node2 systemd[1]: Reloading finished in 210 ms. Feb 14 11:47:29 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:47:31 managed-node2 python3.12[65771]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:32 managed-node2 python3.12[65932]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:33 managed-node2 python3.12[66089]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:34 managed-node2 python3.12[66246]: ansible-systemd Invoked with name=bogus.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None Feb 14 11:47:35 managed-node2 systemd[1]: Reload requested from client PID 66249 ('systemctl') (unit session-8.scope)... Feb 14 11:47:35 managed-node2 systemd[1]: Reloading... 
Feb 14 11:47:35 managed-node2 quadlet-generator[66273]: Warning: bogus.container specifies the image "this_is_a_bogus_image" which not a fully qualified image name. This is not ideal for performance and security reasons. See the podman-pull manpage discussion of short-name-aliases.conf for details. Feb 14 11:47:35 managed-node2 systemd-rc-local-generator[66298]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 14 11:47:35 managed-node2 systemd[1]: Reloading finished in 219 ms. Feb 14 11:47:35 managed-node2 python3.12[66465]: ansible-stat Invoked with path=/etc/containers/systemd/bogus.container follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:36 managed-node2 python3.12[66777]: ansible-file Invoked with path=/etc/containers/systemd/bogus.container state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:37 managed-node2 python3.12[66932]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:47:37 managed-node2 systemd[1]: Reload requested from client PID 66933 ('systemctl') (unit session-8.scope)... Feb 14 11:47:37 managed-node2 systemd[1]: Reloading... Feb 14 11:47:37 managed-node2 systemd-rc-local-generator[66984]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 14 11:47:37 managed-node2 systemd[1]: Reloading finished in 210 ms. Feb 14 11:47:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:47:38 managed-node2 python3.12[67311]: ansible-user Invoked with name=user_quadlet_basic uid=1111 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on managed-node2 update_password=always group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Feb 14 11:47:38 managed-node2 useradd[67313]: new group: name=user_quadlet_basic, GID=1111 Feb 14 11:47:38 managed-node2 useradd[67313]: new user: name=user_quadlet_basic, UID=1111, GID=1111, home=/home/user_quadlet_basic, shell=/bin/bash, from=/dev/pts/0 Feb 14 11:47:38 managed-node2 rsyslogd[985]: imjournal: journal files changed, reloading... [v8.2510.0-5.el10 try https://www.rsyslog.com/e/0 ] Feb 14 11:47:38 managed-node2 rsyslogd[985]: imjournal: journal files changed, reloading... 
[v8.2510.0-5.el10 try https://www.rsyslog.com/e/0 ] Feb 14 11:47:40 managed-node2 python3.12[67624]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:41 managed-node2 python3.12[67785]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:42 managed-node2 python3.12[67942]: ansible-getent Invoked with database=passwd key=user_quadlet_basic fail_key=False service=None split=None Feb 14 11:47:43 managed-node2 python3.12[68098]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:47:43 managed-node2 systemd[1]: Created slice user-1111.slice - User Slice of UID 1111. ░░ Subject: A start job for unit user-1111.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-1111.slice has finished successfully. ░░ ░░ The job identifier is 3126. Feb 14 11:47:43 managed-node2 systemd[1]: Starting user-runtime-dir@1111.service - User Runtime Directory /run/user/1111... ░░ Subject: A start job for unit user-runtime-dir@1111.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-runtime-dir@1111.service has begun execution. ░░ ░░ The job identifier is 3048. Feb 14 11:47:43 managed-node2 systemd[1]: Finished user-runtime-dir@1111.service - User Runtime Directory /run/user/1111. 
░░ Subject: A start job for unit user-runtime-dir@1111.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-runtime-dir@1111.service has finished successfully. ░░ ░░ The job identifier is 3048. Feb 14 11:47:43 managed-node2 systemd[1]: Starting user@1111.service - User Manager for UID 1111... ░░ Subject: A start job for unit user@1111.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user@1111.service has begun execution. ░░ ░░ The job identifier is 3128. Feb 14 11:47:43 managed-node2 systemd-logind[768]: New session 13 of user user_quadlet_basic. ░░ Subject: A new session 13 has been created for user user_quadlet_basic ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 13 has been created for the user user_quadlet_basic. ░░ ░░ The leading process of the session is 68102. Feb 14 11:47:43 managed-node2 (systemd)[68102]: pam_unix(systemd-user:session): session opened for user user_quadlet_basic(uid=1111) by user_quadlet_basic(uid=0) Feb 14 11:47:43 managed-node2 systemd[68102]: Queued start job for default target default.target. Feb 14 11:47:43 managed-node2 systemd[68102]: Created slice app.slice - User Application Slice. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 5. Feb 14 11:47:43 managed-node2 systemd[68102]: Started grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 10. 
Feb 14 11:47:43 managed-node2 systemd[68102]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 9. Feb 14 11:47:43 managed-node2 systemd[68102]: Reached target paths.target - Paths. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 12. Feb 14 11:47:43 managed-node2 systemd[68102]: Reached target timers.target - Timers. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 8. Feb 14 11:47:43 managed-node2 systemd[68102]: Starting dbus.socket - D-Bus User Message Bus Socket... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 4. Feb 14 11:47:43 managed-node2 systemd[68102]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 11. Feb 14 11:47:43 managed-node2 systemd[68102]: Listening on dbus.socket - D-Bus User Message Bus Socket. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 4. 
Feb 14 11:47:43 managed-node2 systemd[68102]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 11. Feb 14 11:47:43 managed-node2 systemd[68102]: Reached target sockets.target - Sockets. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 3. Feb 14 11:47:43 managed-node2 systemd[68102]: Reached target basic.target - Basic System. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 2. Feb 14 11:47:43 managed-node2 systemd[68102]: Reached target default.target - Main User Target. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 1. Feb 14 11:47:43 managed-node2 systemd[68102]: Startup finished in 64ms. ░░ Subject: User manager start-up is now complete ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The user manager instance for user 1111 has been started. All services queued ░░ for starting have been started. Note that other services might still be starting ░░ up or be started at any later time. ░░ ░░ Startup of the manager took 64969 microseconds. Feb 14 11:47:43 managed-node2 systemd[1]: Started user@1111.service - User Manager for UID 1111. 
░░ Subject: A start job for unit user@1111.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user@1111.service has finished successfully. ░░ ░░ The job identifier is 3128. Feb 14 11:47:44 managed-node2 python3.12[68273]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:44 managed-node2 sudo[68480]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uixpklxygdemobdbhyuryznyhdkszmkn ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087664.1855588-23870-3454953199487/AnsiballZ_podman_secret.py' Feb 14 11:47:44 managed-node2 sudo[68480]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:47:44 managed-node2 systemd[68102]: Created slice session.slice - User Core Session Slice. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 14. Feb 14 11:47:44 managed-node2 systemd[68102]: Starting dbus-broker.service - D-Bus User Message Bus... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 13. Feb 14 11:47:44 managed-node2 dbus-broker-launch[68510]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Feb 14 11:47:44 managed-node2 dbus-broker-launch[68510]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Feb 14 11:47:44 managed-node2 systemd[68102]: Started dbus-broker.service - D-Bus User Message Bus. 
░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 13. Feb 14 11:47:44 managed-node2 dbus-broker-launch[68510]: Ready Feb 14 11:47:44 managed-node2 systemd[68102]: Created slice user.slice - Slice /user. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 20. Feb 14 11:47:44 managed-node2 systemd[68102]: Started podman-68496.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 19. Feb 14 11:47:44 managed-node2 systemd[68102]: Started podman-pause-57116b4d.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 23. Feb 14 11:47:44 managed-node2 systemd[68102]: Started podman-68512.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 27. Feb 14 11:47:44 managed-node2 systemd[68102]: Started podman-68519.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 31. 
Feb 14 11:47:44 managed-node2 sudo[68480]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:47:45 managed-node2 python3.12[68680]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:47:46 managed-node2 python3.12[68835]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:46 managed-node2 sudo[69042]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdbpkvqovbwkjyfrwfoiymaxsidvpmvs ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087666.2460904-23929-12811921702894/AnsiballZ_podman_secret.py' Feb 14 11:47:46 managed-node2 sudo[69042]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:47:46 managed-node2 systemd[68102]: Started podman-69053.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 35. Feb 14 11:47:46 managed-node2 systemd[68102]: Started podman-69060.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 39. Feb 14 11:47:46 managed-node2 systemd[68102]: Started podman-69068.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 43. 
Feb 14 11:47:46 managed-node2 sudo[69042]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:47:47 managed-node2 python3.12[69230]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:47 managed-node2 python3.12[69387]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:48 managed-node2 python3.12[69543]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:49 managed-node2 python3.12[69699]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:47:49 managed-node2 python3.12[69854]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:50 managed-node2 python3.12[70009]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 
14 11:47:50 managed-node2 python3.12[70134]: ansible-ansible.legacy.copy Invoked with dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network owner=user_quadlet_basic group=1111 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1771087669.9321482-24074-44070762850677/.source.network _original_basename=.19wp4gkp follow=False checksum=19c9b17be2af9b9deca5c3bd327f048966750682 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:50 managed-node2 sudo[70339]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apzsbbomwlqjxtafirxqggwpyjtxmtby ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087670.684208-24104-102282262690494/AnsiballZ_systemd.py' Feb 14 11:47:50 managed-node2 sudo[70339]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:47:51 managed-node2 python3.12[70342]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:47:51 managed-node2 systemd[68102]: Reload requested from client PID 70343 ('systemctl')... Feb 14 11:47:51 managed-node2 systemd[68102]: Reloading... Feb 14 11:47:51 managed-node2 systemd[68102]: Reloading finished in 39 ms. 
Feb 14 11:47:51 managed-node2 sudo[70339]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:47:51 managed-node2 sudo[70557]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcbyikavwgufawfbnmbhihisncpxpuwz ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087671.337951-24131-108859254372812/AnsiballZ_systemd.py' Feb 14 11:47:51 managed-node2 sudo[70557]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:47:51 managed-node2 python3.12[70560]: ansible-systemd Invoked with name=quadlet-basic-network.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:47:51 managed-node2 systemd[68102]: Starting podman-user-wait-network-online.service - Wait for system level network-online.target as user.... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 59. Feb 14 11:47:51 managed-node2 sh[70564]: active Feb 14 11:47:51 managed-node2 systemd[68102]: Finished podman-user-wait-network-online.service - Wait for system level network-online.target as user.. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 59. Feb 14 11:47:51 managed-node2 systemd[68102]: Starting quadlet-basic-network.service... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 47. 
Feb 14 11:47:51 managed-node2 quadlet-basic-network[70566]: quadlet-basic-name Feb 14 11:47:51 managed-node2 systemd[68102]: Finished quadlet-basic-network.service. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 47. Feb 14 11:47:51 managed-node2 sudo[70557]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:47:52 managed-node2 python3.12[70728]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:52 managed-node2 python3.12[70886]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:53 managed-node2 python3.12[71042]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:54 managed-node2 python3.12[71198]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:47:55 managed-node2 python3.12[71353]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:55 managed-node2 python3.12[71508]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:47:55 managed-node2 python3.12[71633]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087675.2243838-24329-194671574787178/.source.network dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network owner=user_quadlet_basic group=1111 mode=0644 follow=False _original_basename=systemd.j2 checksum=52c9d75ecaf81203cc1f1a3b1dd00fcd25067b01 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:47:56 managed-node2 sudo[71838]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbbqqluhqmoaicdmjhhkmabpjkqakfen ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087675.9977405-24371-137166020726145/AnsiballZ_systemd.py' Feb 14 11:47:56 managed-node2 sudo[71838]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:47:56 managed-node2 python3.12[71842]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:47:56 managed-node2 systemd[68102]: Reload requested from client PID 71843 ('systemctl')... Feb 14 11:47:56 managed-node2 systemd[68102]: Reloading... Feb 14 11:47:56 managed-node2 systemd[68102]: Reloading finished in 39 ms. 
Feb 14 11:47:56 managed-node2 sudo[71838]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:47:56 managed-node2 sudo[72058]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyqnmoirqgewpatwzwawhhrlgyanrsed ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087676.6492229-24391-168211225876343/AnsiballZ_systemd.py' Feb 14 11:47:56 managed-node2 sudo[72058]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:47:57 managed-node2 python3.12[72061]: ansible-systemd Invoked with name=quadlet-basic-unused-network-network.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:47:57 managed-node2 systemd[68102]: Starting quadlet-basic-unused-network-network.service... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 60. Feb 14 11:47:57 managed-node2 quadlet-basic-unused-network-network[72064]: systemd-quadlet-basic-unused-network Feb 14 11:47:57 managed-node2 systemd[68102]: Finished quadlet-basic-unused-network-network.service. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 60. 
Feb 14 11:47:57 managed-node2 sudo[72058]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:47:57 managed-node2 python3.12[72226]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:47:58 managed-node2 python3.12[72383]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:58 managed-node2 python3.12[72539]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:47:59 managed-node2 python3.12[72695]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:48:00 managed-node2 python3.12[72850]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:00 managed-node2 python3.12[73005]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
Feb 14 11:48:01 managed-node2 python3.12[73130]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087680.5019636-24577-124432255509769/.source.volume dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume owner=user_quadlet_basic group=1111 mode=0644 follow=False _original_basename=systemd.j2 checksum=90a3571bfc7670328fe3f8fb625585613dbd9c4a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:01 managed-node2 sudo[73335]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aizslvcfwxohxjustnktmhoxtyqdyuqo ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087681.2433515-24617-277250510433155/AnsiballZ_systemd.py' Feb 14 11:48:01 managed-node2 sudo[73335]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:01 managed-node2 python3.12[73338]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:01 managed-node2 systemd[68102]: Reload requested from client PID 73339 ('systemctl')... Feb 14 11:48:01 managed-node2 systemd[68102]: Reloading... Feb 14 11:48:01 managed-node2 systemd[68102]: Reloading finished in 39 ms. 
Feb 14 11:48:01 managed-node2 sudo[73335]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:02 managed-node2 sudo[73553]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnaimusrbkhfvcfbsqyjenyppzojpcum ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087681.8992858-24637-239185248282254/AnsiballZ_systemd.py' Feb 14 11:48:02 managed-node2 sudo[73553]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:02 managed-node2 python3.12[73556]: ansible-systemd Invoked with name=quadlet-basic-mysql-volume.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:02 managed-node2 systemd[68102]: Starting quadlet-basic-mysql-volume.service... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 73. Feb 14 11:48:02 managed-node2 quadlet-basic-mysql-volume[73559]: quadlet-basic-mysql-name Feb 14 11:48:02 managed-node2 systemd[68102]: Finished quadlet-basic-mysql-volume.service. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 73. 
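The start of `quadlet-basic-mysql-volume.service` above prints `quadlet-basic-mysql-name`, i.e. the created volume does not get the default `systemd-quadlet-basic-mysql` name. That implies the unit file sets `VolumeName=`. A hypothetical sketch of `quadlet-basic-mysql.volume` consistent with this output (the actual test file is not shown in the log):

```ini
# Hypothetical reconstruction, inferred from "volume create
# quadlet-basic-mysql-name" later in this log; not the verbatim test file.
[Volume]
VolumeName=quadlet-basic-mysql-name
```

By contrast, `quadlet-basic-unused-volume.volume` (deployed next) produces a volume named `systemd-quadlet-basic-unused-volume`, the default `systemd-` prefixed name, suggesting it omits `VolumeName=`.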
Feb 14 11:48:02 managed-node2 sudo[73553]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:03 managed-node2 python3.12[73722]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:03 managed-node2 python3.12[73879]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:03 managed-node2 python3.12[74035]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:04 managed-node2 python3.12[74191]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:48:05 managed-node2 python3.12[74346]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:05 managed-node2 python3.12[74501]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True Feb 14 11:48:06 managed-node2 python3.12[74626]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087685.4910405-24770-45960372293351/.source.volume dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume owner=user_quadlet_basic group=1111 mode=0644 follow=False _original_basename=systemd.j2 checksum=fd0ae560360afa5541b866560b1e849d25e216ef backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:06 managed-node2 sudo[74831]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwpvfwjdxtnxlrjnsruhcvrbszosshmc ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087686.2318456-24792-71938151665693/AnsiballZ_systemd.py' Feb 14 11:48:06 managed-node2 sudo[74831]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:06 managed-node2 python3.12[74834]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:06 managed-node2 systemd[68102]: Reload requested from client PID 74835 ('systemctl')... Feb 14 11:48:06 managed-node2 systemd[68102]: Reloading... Feb 14 11:48:06 managed-node2 systemd[68102]: Reloading finished in 42 ms. 
Feb 14 11:48:06 managed-node2 sudo[74831]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:07 managed-node2 sudo[75049]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuthmwvkdjusmazenuymwuyactedtesj ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087686.879755-24802-91952004617364/AnsiballZ_systemd.py' Feb 14 11:48:07 managed-node2 sudo[75049]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:07 managed-node2 python3.12[75052]: ansible-systemd Invoked with name=quadlet-basic-unused-volume-volume.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:07 managed-node2 systemd[68102]: Starting quadlet-basic-unused-volume-volume.service... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 86. Feb 14 11:48:07 managed-node2 quadlet-basic-unused-volume-volume[75055]: systemd-quadlet-basic-unused-volume Feb 14 11:48:07 managed-node2 systemd[68102]: Finished quadlet-basic-unused-volume-volume.service. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 86. 
Feb 14 11:48:07 managed-node2 sudo[75049]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:08 managed-node2 python3.12[75217]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:08 managed-node2 python3.12[75374]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:08 managed-node2 python3.12[75530]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:09 managed-node2 python3.12[75686]: ansible-ansible.legacy.command Invoked with creates=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl enable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Feb 14 11:48:10 managed-node2 sudo[75891]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnxglmukkluarwwhgryjkisfktidpwgs ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087689.9702964-24906-141256695692020/AnsiballZ_podman_image.py' Feb 14 11:48:10 managed-node2 sudo[75891]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:10 managed-node2 systemd[68102]: Started podman-75895.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 99. 
Feb 14 11:48:10 managed-node2 systemd[68102]: Started podman-75903.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 103. Feb 14 11:48:15 managed-node2 systemd[68102]: podman-75903.scope: Consumed 8.633s CPU time, 469.5M memory peak. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit UNIT completed and consumed the indicated resources. Feb 14 11:48:15 managed-node2 systemd[68102]: Started podman-76114.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 107. Feb 14 11:48:16 managed-node2 systemd[68102]: Started podman-76121.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 111. Feb 14 11:48:17 managed-node2 systemd[68102]: Started podman-76129.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 115. Feb 14 11:48:17 managed-node2 systemd[68102]: Started podman-76137.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 119. 
Feb 14 11:48:17 managed-node2 sudo[75891]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:17 managed-node2 python3.12[76298]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd state=directory owner=user_quadlet_basic group=1111 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:18 managed-node2 python3.12[76453]: ansible-ansible.legacy.stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:48:18 managed-node2 python3.12[76578]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087697.7685003-25203-102137340282322/.source.container dest=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container owner=user_quadlet_basic group=1111 mode=0644 follow=False _original_basename=systemd.j2 checksum=0b6cac7929623f1059e78ef39b8b0a25169b28a6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:18 managed-node2 sudo[76783]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgfztswzveapjwryvbcfeqxxlwjdmhyv ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087698.5060833-25237-65119752173567/AnsiballZ_systemd.py' Feb 14 11:48:18 managed-node2 sudo[76783]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:18 managed-node2 python3.12[76786]: ansible-systemd 
Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:18 managed-node2 systemd[68102]: Reload requested from client PID 76787 ('systemctl')... Feb 14 11:48:18 managed-node2 systemd[68102]: Reloading... Feb 14 11:48:19 managed-node2 systemd[68102]: Reloading finished in 42 ms. Feb 14 11:48:19 managed-node2 sudo[76783]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:19 managed-node2 sudo[77002]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpzvyfzxibzygvqvywjclcqjhwmygvfl ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087699.1828828-25258-135996473326537/AnsiballZ_systemd.py' Feb 14 11:48:19 managed-node2 sudo[77002]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:48:19 managed-node2 python3.12[77005]: ansible-systemd Invoked with name=quadlet-basic-mysql.service scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:19 managed-node2 systemd[68102]: Starting quadlet-basic-mysql.service... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 123. Feb 14 11:48:19 managed-node2 systemd[68102]: Started rootless-netns-cae0da54.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 138. 
Feb 14 11:48:19 managed-node2 kernel: podman1: port 1(veth0) entered blocking state Feb 14 11:48:19 managed-node2 kernel: podman1: port 1(veth0) entered disabled state Feb 14 11:48:19 managed-node2 kernel: veth0: entered allmulticast mode Feb 14 11:48:19 managed-node2 kernel: veth0: entered promiscuous mode Feb 14 11:48:19 managed-node2 kernel: podman1: port 1(veth0) entered blocking state Feb 14 11:48:19 managed-node2 kernel: podman1: port 1(veth0) entered forwarding state Feb 14 11:48:19 managed-node2 systemd[68102]: Started run-p77036-i77037.scope - [systemd-run] /usr/libexec/podman/aardvark-dns --config /run/user/1111/containers/networks/aardvark-dns -p 53 run. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 142. Feb 14 11:48:19 managed-node2 systemd[68102]: Started quadlet-basic-mysql.service. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 123. 
Feb 14 11:48:19 managed-node2 quadlet-basic-mysql[77008]: eb573be648acda7fed60ce6841ee124752e10f81306eb341a44d9c0b11f44d3d Feb 14 11:48:19 managed-node2 sudo[77002]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:48:20 managed-node2 python3.12[77251]: ansible-ansible.legacy.command Invoked with _raw_params=cat /home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:20 managed-node2 python3.12[77407]: ansible-ansible.legacy.command Invoked with _raw_params=cat /home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:21 managed-node2 python3.12[77566]: ansible-ansible.legacy.command Invoked with _raw_params=cat /home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:21 managed-node2 python3.12[77730]: ansible-stat Invoked with path=/var/lib/systemd/linger/user_quadlet_basic follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:23 managed-node2 python3.12[78042]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:48:24 managed-node2 python3.12[78227]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Feb 14 11:48:25 managed-node2 python3.12[78383]: ansible-stat Invoked with 
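The `cat` tasks above dump the deployed quadlet files, but their contents are not captured in this journal excerpt. From the surrounding log one can infer roughly what `quadlet-basic-mysql.container` contains: the service joins a quadlet-managed network (the `podman1` bridge and aardvark-dns startup above) and the role also creates podman secrets for the vaulted MySQL passwords. A loose, hypothetical sketch only; image name, secret names, and keys below are placeholders, not values from the log:

```ini
# Hypothetical sketch of a quadlet .container file of this shape;
# every value here is a placeholder, not taken from the test.
[Container]
Image=quay.io/EXAMPLE/mysql:latest
Network=quadlet-basic.network
Volume=quadlet-basic-mysql.volume:/var/lib/mysql
Secret=EXAMPLE_SECRET,type=env,target=MYSQL_ROOT_PASSWORD

[Install]
WantedBy=default.target
```

Referencing the `.network` and `.volume` units by file name is what makes quadlet order them before the container service, which matches the start-job sequence in this log.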
path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:27 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:27 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:27 managed-node2 podman[78590]: 2026-02-14 11:48:27.13362275 -0500 EST m=+0.017279956 secret create 90fe43ec6167996f11d8c921b Feb 14 11:48:28 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:28 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:28 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
Feb 14 11:48:28 managed-node2 podman[78781]: 2026-02-14 11:48:28.453280763 -0500 EST m=+0.022645603 secret create 90451f17a3693765e0b13c8ad Feb 14 11:48:29 managed-node2 python3.12[78950]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:30 managed-node2 python3.12[79107]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:31 managed-node2 python3.12[79274]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/quadlet-basic.network follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:48:31 managed-node2 python3.12[79410]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/systemd/quadlet-basic.network owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1771087710.8078883-25734-32145037604992/.source.network _original_basename=.y5jiqtp_ follow=False checksum=19c9b17be2af9b9deca5c3bd327f048966750682 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:32 managed-node2 python3.12[79565]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:32 managed-node2 systemd[1]: Reload requested from client PID 79566 ('systemctl') (unit session-8.scope)... Feb 14 11:48:32 managed-node2 systemd[1]: Reloading... 
Feb 14 11:48:32 managed-node2 systemd-rc-local-generator[79615]: /etc/rc.d/rc.local is not marked executable, skipping. Feb 14 11:48:32 managed-node2 systemd[1]: Reloading finished in 220 ms. Feb 14 11:48:32 managed-node2 python3.12[79785]: ansible-systemd Invoked with name=quadlet-basic-network.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:32 managed-node2 systemd[1]: Starting quadlet-basic-network.service... ░░ Subject: A start job for unit quadlet-basic-network.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-network.service has begun execution. ░░ ░░ The job identifier is 3214. Feb 14 11:48:32 managed-node2 podman[79789]: 2026-02-14 11:48:32.926261081 -0500 EST m=+0.016760408 network create 753e73850896ed526bbf5b94858e3cd2708517eedee1b4c90d154f7a2349144f (name=quadlet-basic-name, type=bridge) Feb 14 11:48:32 managed-node2 quadlet-basic-network[79789]: quadlet-basic-name Feb 14 11:48:32 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:32 managed-node2 systemd[1]: Finished quadlet-basic-network.service. ░░ Subject: A start job for unit quadlet-basic-network.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-network.service has finished successfully. ░░ ░░ The job identifier is 3214. 
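The rootful `network create` line above reports `name=quadlet-basic-name` rather than the default `systemd-quadlet-basic`, which implies `/etc/containers/systemd/quadlet-basic.network` sets `NetworkName=`. A hypothetical sketch consistent with that output (subnet/gateway options, if any, are not recoverable from this log):

```ini
# Hypothetical reconstruction, inferred from "network create ...
# (name=quadlet-basic-name, type=bridge)" above; not the verbatim file.
[Network]
NetworkName=quadlet-basic-name
```

The `quadlet-basic-unused-network.network` unit deployed next creates `systemd-quadlet-basic-unused-network`, the default prefixed name, so it presumably omits `NetworkName=`.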
Feb 14 11:48:33 managed-node2 python3.12[79950]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:34 managed-node2 python3.12[80108]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:35 managed-node2 python3.12[80263]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/quadlet-basic-unused-network.network follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:48:35 managed-node2 python3.12[80388]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087714.978921-25930-142517737796637/.source.network dest=/etc/containers/systemd/quadlet-basic-unused-network.network owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=52c9d75ecaf81203cc1f1a3b1dd00fcd25067b01 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:36 managed-node2 python3.12[80543]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:36 managed-node2 systemd[1]: Reload requested from client PID 80544 ('systemctl') (unit session-8.scope)... Feb 14 11:48:36 managed-node2 systemd[1]: Reloading... Feb 14 11:48:36 managed-node2 systemd-rc-local-generator[80594]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 14 11:48:36 managed-node2 systemd[1]: Reloading finished in 213 ms. Feb 14 11:48:36 managed-node2 python3.12[80763]: ansible-systemd Invoked with name=quadlet-basic-unused-network-network.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:36 managed-node2 systemd[1]: Starting quadlet-basic-unused-network-network.service... ░░ Subject: A start job for unit quadlet-basic-unused-network-network.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-unused-network-network.service has begun execution. ░░ ░░ The job identifier is 3298. Feb 14 11:48:36 managed-node2 podman[80767]: 2026-02-14 11:48:36.941382834 -0500 EST m=+0.020181531 network create 01cf3213e3ed2d0e128267f68e75813ac56a211a4b375eb13d2a19c6244fb9f3 (name=systemd-quadlet-basic-unused-network, type=bridge) Feb 14 11:48:36 managed-node2 quadlet-basic-unused-network-network[80767]: systemd-quadlet-basic-unused-network Feb 14 11:48:36 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:36 managed-node2 systemd[1]: Finished quadlet-basic-unused-network-network.service. ░░ Subject: A start job for unit quadlet-basic-unused-network-network.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-unused-network-network.service has finished successfully. ░░ ░░ The job identifier is 3298. 
Feb 14 11:48:37 managed-node2 python3.12[80929]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:48:39 managed-node2 python3.12[81086]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:39 managed-node2 python3.12[81241]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/quadlet-basic-mysql.volume follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Feb 14 11:48:39 managed-node2 python3.12[81366]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087719.2426136-26105-180291594378143/.source.volume dest=/etc/containers/systemd/quadlet-basic-mysql.volume owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=90a3571bfc7670328fe3f8fb625585613dbd9c4a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:48:40 managed-node2 python3.12[81521]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None Feb 14 11:48:40 managed-node2 systemd[1]: Reload requested from client PID 81522 ('systemctl') (unit session-8.scope)... Feb 14 11:48:40 managed-node2 systemd[1]: Reloading... Feb 14 11:48:40 managed-node2 systemd-rc-local-generator[81573]: /etc/rc.d/rc.local is not marked executable, skipping. 
Feb 14 11:48:40 managed-node2 systemd[1]: Reloading finished in 217 ms. Feb 14 11:48:41 managed-node2 python3.12[81742]: ansible-systemd Invoked with name=quadlet-basic-mysql-volume.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None Feb 14 11:48:41 managed-node2 systemd[1]: Starting quadlet-basic-mysql-volume.service... ░░ Subject: A start job for unit quadlet-basic-mysql-volume.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-mysql-volume.service has begun execution. ░░ ░░ The job identifier is 3382. Feb 14 11:48:41 managed-node2 podman[81746]: 2026-02-14 11:48:41.283012966 -0500 EST m=+0.024739231 volume create quadlet-basic-mysql-name Feb 14 11:48:41 managed-node2 quadlet-basic-mysql-volume[81746]: quadlet-basic-mysql-name Feb 14 11:48:41 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:48:41 managed-node2 systemd[1]: Finished quadlet-basic-mysql-volume.service. ░░ Subject: A start job for unit quadlet-basic-mysql-volume.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit quadlet-basic-mysql-volume.service has finished successfully. ░░ ░░ The job identifier is 3382. 
Feb 14 11:48:42 managed-node2 python3.12[81909]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:48:43 managed-node2 python3.12[82066]: ansible-file Invoked with path=/etc/containers/systemd state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:48:43 managed-node2 python3.12[82221]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/systemd/quadlet-basic-unused-volume.volume follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Feb 14 11:48:44 managed-node2 python3.12[82346]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1771087723.576437-26316-237709762951197/.source.volume dest=/etc/containers/systemd/quadlet-basic-unused-volume.volume owner=root group=0 mode=0644 follow=False _original_basename=systemd.j2 checksum=fd0ae560360afa5541b866560b1e849d25e216ef backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:48:44 managed-node2 python3.12[82501]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None
Feb 14 11:48:44 managed-node2 systemd[1]: Reload requested from client PID 82502 ('systemctl') (unit session-8.scope)...
Feb 14 11:48:44 managed-node2 systemd[1]: Reloading...
Feb 14 11:48:44 managed-node2 systemd-rc-local-generator[82557]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 14 11:48:44 managed-node2 systemd[1]: Reloading finished in 213 ms.
Feb 14 11:48:45 managed-node2 python3.12[82722]: ansible-systemd Invoked with name=quadlet-basic-unused-volume-volume.service scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None
Feb 14 11:48:45 managed-node2 systemd[1]: Starting quadlet-basic-unused-volume-volume.service...
░░ Subject: A start job for unit quadlet-basic-unused-volume-volume.service has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit quadlet-basic-unused-volume-volume.service has begun execution.
░░
░░ The job identifier is 3466.
Feb 14 11:48:45 managed-node2 podman[82726]: 2026-02-14 11:48:45.588887323 -0500 EST m=+0.024810141 volume create systemd-quadlet-basic-unused-volume
Feb 14 11:48:45 managed-node2 quadlet-basic-unused-volume-volume[82726]: systemd-quadlet-basic-unused-volume
Feb 14 11:48:45 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:48:45 managed-node2 systemd[1]: Finished quadlet-basic-unused-volume-volume.service.
░░ Subject: A start job for unit quadlet-basic-unused-volume-volume.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit quadlet-basic-unused-volume-volume.service has finished successfully.
░░
░░ The job identifier is 3466.
Feb 14 11:48:46 managed-node2 python3.12[82888]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:48:47 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:48:51 managed-node2 podman[83053]: 2026-02-14 11:48:51.154323799 -0500 EST m=+3.494317061 image pull-error quay.io/linux-system-roles/mysql:5.6 unable to copy from source docker://quay.io/linux-system-roles/mysql:5.6: copying system image from manifest list: reading blob sha256:82f6e5dc9ef6571b55539f02c80df6c7e3eafbe1533e780efee880fb528404f8: Digest did not match, expected sha256:82f6e5dc9ef6571b55539f02c80df6c7e3eafbe1533e780efee880fb528404f8, got sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Feb 14 11:48:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:48:51 managed-node2 python3.12[83273]: ansible-ansible.legacy.command Invoked with _raw_params=set -x set -o pipefail exec 1>&2 #podman volume rm --all #podman network prune -f podman volume ls podman network ls podman secret ls podman container ls podman pod ls podman images systemctl list-units | grep quadlet _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:48:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:48:52 managed-node2 python3.12[83475]: ansible-ansible.legacy.command Invoked with _raw_params=grep type=AVC /var/log/audit/audit.log _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:48:52 managed-node2 python3.12[83631]: ansible-ansible.legacy.command Invoked with _raw_params=journalctl -ex _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:48:54 managed-node2 python3.12[83942]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:48:55 managed-node2 python3.12[84103]: ansible-getent Invoked with database=passwd key=user_quadlet_basic fail_key=False service=None split=None
Feb 14 11:48:55 managed-node2 python3.12[84259]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:48:56 managed-node2 python3.12[84416]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:48:56 managed-node2 python3.12[84572]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:48:58 managed-node2 python3.12[84728]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:48:58 managed-node2 sudo[84935]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqlginxeungrwkamduyxfzaaxwxdyroa ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087738.2434657-26823-276491638289098/AnsiballZ_podman_secret.py'
Feb 14 11:48:58 managed-node2 sudo[84935]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:48:58 managed-node2 systemd[68102]: Started podman-84939.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 146.
Feb 14 11:48:58 managed-node2 sudo[84935]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:48:59 managed-node2 python3.12[85100]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:48:59 managed-node2 sudo[85307]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxkzszgzapobgwqrbldzdtfegeedzdwz ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087739.755661-26893-243767024018106/AnsiballZ_podman_secret.py'
Feb 14 11:48:59 managed-node2 sudo[85307]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:00 managed-node2 systemd[68102]: Started podman-85311.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 150.
Feb 14 11:49:00 managed-node2 sudo[85307]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:01 managed-node2 python3.12[85472]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:49:01 managed-node2 python3.12[85629]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:49:01 managed-node2 python3.12[85785]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:49:02 managed-node2 python3.12[85941]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:49:03 managed-node2 sudo[86148]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oetymnabpyjaimcngmtfesacndifqbdm ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087742.8538067-27024-205000829974343/AnsiballZ_systemd.py'
Feb 14 11:49:03 managed-node2 sudo[86148]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:03 managed-node2 python3.12[86151]: ansible-systemd Invoked with name=quadlet-basic-mysql.service scope=user state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None
Feb 14 11:49:03 managed-node2 systemd[68102]: Reload requested from client PID 86154 ('systemctl')...
Feb 14 11:49:03 managed-node2 systemd[68102]: Reloading...
Feb 14 11:49:03 managed-node2 systemd[68102]: Reloading finished in 46 ms.
Feb 14 11:49:03 managed-node2 systemd[68102]: Stopping quadlet-basic-mysql.service...
░░ Subject: A stop job for unit UNIT has begun execution
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit UNIT has begun execution.
░░
░░ The job identifier is 154.
Feb 14 11:49:04 managed-node2 kernel: podman1: port 1(veth0) entered disabled state
Feb 14 11:49:04 managed-node2 kernel: veth0 (unregistering): left allmulticast mode
Feb 14 11:49:04 managed-node2 kernel: veth0 (unregistering): left promiscuous mode
Feb 14 11:49:04 managed-node2 kernel: podman1: port 1(veth0) entered disabled state
Feb 14 11:49:05 managed-node2 quadlet-basic-mysql[86165]: quadlet-basic-mysql-name
Feb 14 11:49:05 managed-node2 systemd[68102]: Stopped quadlet-basic-mysql.service.
░░ Subject: A stop job for unit UNIT has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit UNIT has finished.
░░
░░ The job identifier is 154 and the job result is done.
Feb 14 11:49:05 managed-node2 systemd[68102]: quadlet-basic-mysql.service: Consumed 2.978s CPU time, 601.8M memory peak.
░░ Subject: Resources consumed by unit runtime
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit UNIT completed and consumed the indicated resources.
Feb 14 11:49:05 managed-node2 sudo[86148]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:05 managed-node2 python3.12[86356]: ansible-stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:49:06 managed-node2 python3.12[86668]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.container state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:49:06 managed-node2 sudo[86873]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzmdmmazblyalosrybmtplnfpgpcbedf ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087746.6781564-27176-147193157339695/AnsiballZ_systemd.py'
Feb 14 11:49:06 managed-node2 sudo[86873]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:07 managed-node2 python3.12[86876]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None
Feb 14 11:49:07 managed-node2 systemd[68102]: Reload requested from client PID 86877 ('systemctl')...
Feb 14 11:49:07 managed-node2 systemd[68102]: Reloading...
Feb 14 11:49:07 managed-node2 systemd[68102]: Reloading finished in 42 ms.
Feb 14 11:49:07 managed-node2 sudo[86873]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:07 managed-node2 sudo[87092]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqwjjzhynesjtauwiipzbdccglujczjd ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087747.3582249-27207-244105632236249/AnsiballZ_command.py'
Feb 14 11:49:07 managed-node2 sudo[87092]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:07 managed-node2 systemd[68102]: Started podman-87096.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 155.
Feb 14 11:49:07 managed-node2 sudo[87092]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:08 managed-node2 sudo[87307]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqqmrxybbbxvkkbucdcndqqrlbfehkdp ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087747.919991-27231-279750427271648/AnsiballZ_command.py'
Feb 14 11:49:08 managed-node2 sudo[87307]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:08 managed-node2 python3.12[87310]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune --all -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:49:08 managed-node2 systemd[68102]: Started podman-87311.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 159.
Feb 14 11:49:08 managed-node2 sudo[87307]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:09 managed-node2 python3.12[87473]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:49:09 managed-node2 python3.12[87630]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:49:10 managed-node2 python3.12[87786]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:49:11 managed-node2 python3.12[87942]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:49:11 managed-node2 sudo[88149]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvodzuzepnrbkwmivymjcymnuqshvjbg ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087751.3848622-27393-73162619233040/AnsiballZ_systemd.py'
Feb 14 11:49:11 managed-node2 sudo[88149]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:11 managed-node2 python3.12[88152]: ansible-systemd Invoked with name=quadlet-basic-unused-volume-volume.service scope=user state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None
Feb 14 11:49:11 managed-node2 systemd[68102]: Reload requested from client PID 88155 ('systemctl')...
Feb 14 11:49:11 managed-node2 systemd[68102]: Reloading...
Feb 14 11:49:11 managed-node2 systemd[68102]: Reloading finished in 43 ms.
Feb 14 11:49:11 managed-node2 systemd[68102]: Stopped quadlet-basic-unused-volume-volume.service.
░░ Subject: A stop job for unit UNIT has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit UNIT has finished.
░░
░░ The job identifier is 163 and the job result is done.
Feb 14 11:49:11 managed-node2 sudo[88149]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:12 managed-node2 python3.12[88320]: ansible-stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:49:13 managed-node2 python3.12[88633]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-volume.volume state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:49:13 managed-node2 sudo[88838]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwkivlhzoqbxlzmfvzcekkzvzvlebzip ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087753.450952-27463-279831161003512/AnsiballZ_systemd.py'
Feb 14 11:49:13 managed-node2 sudo[88838]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:13 managed-node2 python3.12[88841]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None
Feb 14 11:49:13 managed-node2 systemd[68102]: Reload requested from client PID 88842 ('systemctl')...
Feb 14 11:49:13 managed-node2 systemd[68102]: Reloading...
Feb 14 11:49:13 managed-node2 systemd[68102]: Reloading finished in 41 ms.
Feb 14 11:49:14 managed-node2 sudo[88838]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:14 managed-node2 sudo[89056]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpmygepwtnljuxumzgukxclskbdixccp ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087754.1153712-27473-81814955930232/AnsiballZ_command.py'
Feb 14 11:49:14 managed-node2 sudo[89056]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:14 managed-node2 systemd[68102]: Started podman-89060.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 164.
Feb 14 11:49:14 managed-node2 sudo[89056]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:14 managed-node2 sudo[89271]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoybhrcjvgqaokqlngpasewrazidkluq ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087754.6874022-27492-120816214593348/AnsiballZ_command.py'
Feb 14 11:49:14 managed-node2 sudo[89271]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:15 managed-node2 python3.12[89274]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune --all -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:49:15 managed-node2 systemd[68102]: Started podman-89275.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 168.
Feb 14 11:49:15 managed-node2 sudo[89271]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:16 managed-node2 python3.12[89436]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:49:16 managed-node2 python3.12[89593]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:49:16 managed-node2 python3.12[89749]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:49:17 managed-node2 python3.12[89906]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:49:17 managed-node2 sudo[90113]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gibktoerjxapjymikputginpkburvccc ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087757.7913702-27587-146239020171495/AnsiballZ_systemd.py'
Feb 14 11:49:18 managed-node2 sudo[90113]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:18 managed-node2 python3.12[90116]: ansible-systemd Invoked with name=quadlet-basic-mysql-volume.service scope=user state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None
Feb 14 11:49:18 managed-node2 systemd[68102]: Reload requested from client PID 90119 ('systemctl')...
Feb 14 11:49:18 managed-node2 systemd[68102]: Reloading...
Feb 14 11:49:18 managed-node2 systemd[68102]: Reloading finished in 41 ms.
Feb 14 11:49:18 managed-node2 systemd[68102]: Stopped quadlet-basic-mysql-volume.service.
░░ Subject: A stop job for unit UNIT has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit UNIT has finished.
░░
░░ The job identifier is 172 and the job result is done.
Feb 14 11:49:18 managed-node2 sudo[90113]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:18 managed-node2 python3.12[90284]: ansible-stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:49:19 managed-node2 python3.12[90597]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-mysql.volume state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:49:19 managed-node2 sudo[90802]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwwhftfkhacliwotsnetlbhkkjigkttb ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087759.7708926-27640-26822273092798/AnsiballZ_systemd.py'
Feb 14 11:49:19 managed-node2 sudo[90802]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:20 managed-node2 python3.12[90805]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None
Feb 14 11:49:20 managed-node2 systemd[68102]: Reload requested from client PID 90806 ('systemctl')...
Feb 14 11:49:20 managed-node2 systemd[68102]: Reloading...
Feb 14 11:49:20 managed-node2 systemd[68102]: Reloading finished in 39 ms.
Feb 14 11:49:20 managed-node2 sudo[90802]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:20 managed-node2 sudo[91020]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqwsypeilqfgpwwfhbymqtovyjgvpvag ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087760.4353452-27657-44728270482150/AnsiballZ_command.py'
Feb 14 11:49:20 managed-node2 sudo[91020]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:20 managed-node2 systemd[68102]: Started podman-91024.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 173.
Feb 14 11:49:20 managed-node2 sudo[91020]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:21 managed-node2 sudo[91235]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ialyturgdwpweizeyixwcmmtefzhkuri ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087760.9999917-27669-174819469644378/AnsiballZ_command.py'
Feb 14 11:49:21 managed-node2 sudo[91235]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:21 managed-node2 python3.12[91238]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune --all -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:49:21 managed-node2 systemd[68102]: Started podman-91239.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 177.
Feb 14 11:49:21 managed-node2 sudo[91235]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:22 managed-node2 python3.12[91401]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:49:22 managed-node2 python3.12[91558]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:49:23 managed-node2 python3.12[91714]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:49:23 managed-node2 python3.12[91870]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:49:24 managed-node2 sudo[92077]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwtbridrsdqzxqnxpbaauxzqmcielurg ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087764.0761387-27767-157318686785060/AnsiballZ_systemd.py'
Feb 14 11:49:24 managed-node2 sudo[92077]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:24 managed-node2 python3.12[92080]: ansible-systemd Invoked with name=quadlet-basic-unused-network-network.service scope=user state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None
Feb 14 11:49:24 managed-node2 systemd[68102]: Reload requested from client PID 92083 ('systemctl')...
Feb 14 11:49:24 managed-node2 systemd[68102]: Reloading...
Feb 14 11:49:24 managed-node2 systemd[68102]: Reloading finished in 40 ms.
Feb 14 11:49:24 managed-node2 systemd[68102]: Stopped quadlet-basic-unused-network-network.service.
░░ Subject: A stop job for unit UNIT has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit UNIT has finished.
░░
░░ The job identifier is 181 and the job result is done.
Feb 14 11:49:24 managed-node2 sudo[92077]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:25 managed-node2 python3.12[92248]: ansible-stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:49:25 managed-node2 python3.12[92560]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic-unused-network.network state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:49:26 managed-node2 sudo[92765]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjuyccyegoqhvzuphegemztzhelmdend ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087766.0256815-27824-180766593839802/AnsiballZ_systemd.py'
Feb 14 11:49:26 managed-node2 sudo[92765]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:26 managed-node2 python3.12[92768]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None
Feb 14 11:49:26 managed-node2 systemd[68102]: Reload requested from client PID 92769 ('systemctl')...
Feb 14 11:49:26 managed-node2 systemd[68102]: Reloading...
Feb 14 11:49:26 managed-node2 systemd[68102]: Reloading finished in 38 ms.
Feb 14 11:49:26 managed-node2 sudo[92765]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:26 managed-node2 sudo[92983]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haijxrgpdwqfqyvsyahfupmrberunnhz ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087766.6786122-27841-238466504516563/AnsiballZ_command.py'
Feb 14 11:49:26 managed-node2 sudo[92983]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:27 managed-node2 systemd[68102]: Started podman-92987.scope.
░░ Subject: A start job for unit UNIT has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit UNIT has finished successfully.
░░
░░ The job identifier is 182.
Feb 14 11:49:27 managed-node2 sudo[92983]: pam_unix(sudo:session): session closed for user user_quadlet_basic
Feb 14 11:49:27 managed-node2 sudo[93198]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwtizvwzrxsvcvxpilfoxrsrhlreutpx ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087767.2341135-27853-105564300176654/AnsiballZ_command.py'
Feb 14 11:49:27 managed-node2 sudo[93198]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0)
Feb 14 11:49:27 managed-node2 python3.12[93201]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune --all -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:49:27 managed-node2 systemd[68102]: Started podman-93202.scope.
░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 186. Feb 14 11:49:27 managed-node2 sudo[93198]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:49:28 managed-node2 python3.12[93363]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:49:29 managed-node2 python3.12[93520]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:49:29 managed-node2 python3.12[93676]: ansible-ansible.legacy.command Invoked with _raw_params=getsubids -g user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:49:30 managed-node2 python3.12[93832]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:49:30 managed-node2 sudo[94039]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoxnycbjpqemgbumtmoflksetdactvvy ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087770.5351803-27990-32843092903872/AnsiballZ_systemd.py' Feb 14 11:49:30 managed-node2 sudo[94039]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:49:31 managed-node2 python3.12[94042]: ansible-systemd Invoked with name=quadlet-basic-network.service scope=user state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None Feb 14 11:49:31 
managed-node2 systemd[68102]: Reload requested from client PID 94045 ('systemctl')... Feb 14 11:49:31 managed-node2 systemd[68102]: Reloading... Feb 14 11:49:31 managed-node2 systemd[68102]: Reloading finished in 37 ms. Feb 14 11:49:31 managed-node2 systemd[68102]: Stopped quadlet-basic-network.service. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 190 and the job result is done. Feb 14 11:49:31 managed-node2 sudo[94039]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:49:31 managed-node2 python3.12[94210]: ansible-stat Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:49:32 managed-node2 python3.12[94522]: ansible-file Invoked with path=/home/user_quadlet_basic/.config/containers/systemd/quadlet-basic.network state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:49:32 managed-node2 sudo[94727]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsaxygvcptvxzyovvybtdeddaqbmmana ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087772.508133-28074-27501371697623/AnsiballZ_systemd.py' Feb 14 11:49:32 managed-node2 sudo[94727]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:49:32 managed-node2 python3.12[94730]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None 
masked=None Feb 14 11:49:32 managed-node2 systemd[68102]: Reload requested from client PID 94731 ('systemctl')... Feb 14 11:49:32 managed-node2 systemd[68102]: Reloading... Feb 14 11:49:33 managed-node2 systemd[68102]: Reloading finished in 38 ms. Feb 14 11:49:33 managed-node2 sudo[94727]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:49:33 managed-node2 sudo[94946]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgaxxaumnbkjsihgougrlzpgfysnccin ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087773.1651285-28104-216449549191330/AnsiballZ_command.py' Feb 14 11:49:33 managed-node2 sudo[94946]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:49:33 managed-node2 systemd[68102]: Started podman-94950.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 191. 
Feb 14 11:49:33 managed-node2 sudo[94946]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:49:33 managed-node2 sudo[95161]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qarixxruglaaswqbddzleaocdqvieczu ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087773.7330365-28131-133992935256761/AnsiballZ_command.py' Feb 14 11:49:33 managed-node2 sudo[95161]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:49:34 managed-node2 python3.12[95164]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune --all -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:49:34 managed-node2 systemd[68102]: Started podman-95165.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 195. 
Feb 14 11:49:34 managed-node2 sudo[95161]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:49:35 managed-node2 python3.12[95327]: ansible-stat Invoked with path=/run/user/1111 follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:49:35 managed-node2 sudo[95534]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yubgkumgbtrzbtgdsgkerzztcfpxzasv ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087775.177092-28188-214978136065615/AnsiballZ_podman_container_info.py' Feb 14 11:49:35 managed-node2 sudo[95534]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:49:35 managed-node2 python3.12[95537]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Feb 14 11:49:35 managed-node2 systemd[68102]: Started podman-95538.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 199. 
Feb 14 11:49:35 managed-node2 sudo[95534]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:49:35 managed-node2 sudo[95750]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ismplcgbigklpeyfoyueasgefxfpclle ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087775.7609694-28207-235861474233237/AnsiballZ_command.py' Feb 14 11:49:35 managed-node2 sudo[95750]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:49:36 managed-node2 python3.12[95753]: ansible-ansible.legacy.command Invoked with _raw_params=podman network ls -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:49:36 managed-node2 systemd[68102]: Started podman-95754.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 203. 
Feb 14 11:49:36 managed-node2 sudo[95750]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:49:36 managed-node2 sudo[95965]: root : TTY=pts/0 ; PWD=/root ; USER=user_quadlet_basic ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbpebeajydjeadjrlrjgdweavuixaeza ; XDG_RUNTIME_DIR=/run/user/1111 /usr/bin/python3.12 /var/tmp/ansible-tmp-1771087776.2645173-28234-6211960725573/AnsiballZ_command.py' Feb 14 11:49:36 managed-node2 sudo[95965]: pam_unix(sudo:session): session opened for user user_quadlet_basic(uid=1111) by root(uid=0) Feb 14 11:49:36 managed-node2 python3.12[95968]: ansible-ansible.legacy.command Invoked with _raw_params=podman secret ls -n -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:49:36 managed-node2 systemd[68102]: Started podman-95969.scope. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 207. Feb 14 11:49:36 managed-node2 sudo[95965]: pam_unix(sudo:session): session closed for user user_quadlet_basic Feb 14 11:49:37 managed-node2 python3.12[96130]: ansible-ansible.legacy.command Invoked with removes=/var/lib/systemd/linger/user_quadlet_basic _raw_params=loginctl disable-linger user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None Feb 14 11:49:37 managed-node2 systemd[1]: Stopping user@1111.service - User Manager for UID 1111... ░░ Subject: A stop job for unit user@1111.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user@1111.service has begun execution. ░░ ░░ The job identifier is 3561. 
Feb 14 11:49:37 managed-node2 systemd[68102]: Activating special unit exit.target... Feb 14 11:49:37 managed-node2 systemd[68102]: Stopping podman-pause-57116b4d.scope... ░░ Subject: A stop job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has begun execution. ░░ ░░ The job identifier is 220. Feb 14 11:49:37 managed-node2 systemd[68102]: Stopped target default.target - Main User Target. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 217 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Stopped podman-user-wait-network-online.service - Wait for system level network-online.target as user.. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 215 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Stopped target basic.target - Basic System. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 214 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Stopped target paths.target - Paths. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 218 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Stopped target sockets.target - Sockets. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. 
░░ ░░ The job identifier is 228 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Stopped target timers.target - Timers. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 221 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Stopped grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 224 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Stopped systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 222 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Stopping dbus-broker.service - D-Bus User Message Bus... ░░ Subject: A stop job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has begun execution. ░░ ░░ The job identifier is 223. Feb 14 11:49:37 managed-node2 dbus-broker[68511]: Dispatched 4014 messages @ 1(±9)μs / message. ░░ Subject: Dispatched 4014 messages ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ This message is printed by dbus-broker when shutting down. It includes metric ░░ information collected during the runtime of dbus-broker. ░░ ░░ The message lists the number of dispatched messages ░░ (in this case 4014) as well as the mean time to ░░ handling a single message. 
The time measurements exclude the time spent on ░░ writing to and reading from the kernel. Feb 14 11:49:37 managed-node2 systemd[68102]: Stopped systemd-tmpfiles-setup.service - Create User Files and Directories. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 229 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Stopped dbus-broker.service - D-Bus User Message Bus. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 223 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Stopped podman-pause-57116b4d.scope. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 220 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Removed slice session.slice - User Core Session Slice. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 225 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Removed slice user.slice - Slice /user. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 219 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: user.slice: Consumed 8.822s CPU time, 469.6M memory peak. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit UNIT completed and consumed the indicated resources. 
Feb 14 11:49:37 managed-node2 systemd[68102]: Closed dbus.socket - D-Bus User Message Bus Socket. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 231 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: Removed slice app.slice - User Application Slice. ░░ Subject: A stop job for unit UNIT has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit UNIT has finished. ░░ ░░ The job identifier is 230 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[68102]: app.slice: Consumed 3.131s CPU time, 602.4M memory peak. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit UNIT completed and consumed the indicated resources. Feb 14 11:49:37 managed-node2 systemd[68102]: Reached target shutdown.target - Shutdown. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 213. Feb 14 11:49:37 managed-node2 systemd[68102]: Finished systemd-exit.service - Exit the Session. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 212. Feb 14 11:49:37 managed-node2 systemd[68102]: Reached target exit.target - Exit the Session. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 211. Feb 14 11:49:37 managed-node2 systemd-logind[768]: Removed session 13. 
░░ Subject: Session 13 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 13 has been terminated. Feb 14 11:49:37 managed-node2 systemd[1]: user@1111.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit user@1111.service has successfully entered the 'dead' state. Feb 14 11:49:37 managed-node2 systemd[1]: Stopped user@1111.service - User Manager for UID 1111. ░░ Subject: A stop job for unit user@1111.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user@1111.service has finished. ░░ ░░ The job identifier is 3561 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[1]: user@1111.service: Consumed 13.160s CPU time, 921.2M memory peak. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit user@1111.service completed and consumed the indicated resources. Feb 14 11:49:37 managed-node2 systemd[1]: Stopping user-runtime-dir@1111.service - User Runtime Directory /run/user/1111... ░░ Subject: A stop job for unit user-runtime-dir@1111.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user-runtime-dir@1111.service has begun execution. ░░ ░░ The job identifier is 3560. Feb 14 11:49:37 managed-node2 systemd[1]: run-user-1111.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-user-1111.mount has successfully entered the 'dead' state. Feb 14 11:49:37 managed-node2 systemd[1]: user-runtime-dir@1111.service: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit user-runtime-dir@1111.service has successfully entered the 'dead' state. Feb 14 11:49:37 managed-node2 systemd[1]: Stopped user-runtime-dir@1111.service - User Runtime Directory /run/user/1111. ░░ Subject: A stop job for unit user-runtime-dir@1111.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user-runtime-dir@1111.service has finished. ░░ ░░ The job identifier is 3560 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[1]: Removed slice user-1111.slice - User Slice of UID 1111. ░░ Subject: A stop job for unit user-1111.slice has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit user-1111.slice has finished. ░░ ░░ The job identifier is 3562 and the job result is done. Feb 14 11:49:37 managed-node2 systemd[1]: user-1111.slice: Consumed 13.187s CPU time, 921.2M memory peak. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit user-1111.slice completed and consumed the indicated resources. 
Feb 14 11:49:37 managed-node2 python3.12[96291]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:49:39 managed-node2 python3.12[96447]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:49:42 managed-node2 python3.12[96603]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:49:44 managed-node2 python3.12[96759]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:49:46 managed-node2 python3.12[96915]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:49:49 managed-node2 python3.12[97071]: ansible-ansible.legacy.command Invoked with _raw_params=loginctl show-user --value -p State user_quadlet_basic _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:49:49 managed-node2 python3.12[97227]: ansible-user 
Invoked with name=user_quadlet_basic uid=1111 state=absent non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on managed-node2 update_password=always group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Feb 14 11:49:49 managed-node2 userdel[97229]: delete user 'user_quadlet_basic' Feb 14 11:49:49 managed-node2 userdel[97229]: removed group 'user_quadlet_basic' owned by 'user_quadlet_basic' Feb 14 11:49:49 managed-node2 userdel[97229]: removed shadow group 'user_quadlet_basic' owned by 'user_quadlet_basic' Feb 14 11:49:51 managed-node2 python3.12[97539]: ansible-ansible.legacy.command Invoked with _raw_params=podman --version _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:49:52 managed-node2 python3.12[97701]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Feb 14 11:49:53 managed-node2 python3.12[97857]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:49:55 managed-node2 podman[98015]: 2026-02-14 11:49:55.163580618 -0500 EST m=+0.017687955 secret remove 90fe43ec6167996f11d8c921b Feb 14 11:49:55 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:49:56 managed-node2 podman[98177]: 2026-02-14 11:49:56.266345094 -0500 EST m=+0.014230316 secret remove 90451f17a3693765e0b13c8ad Feb 14 11:49:56 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:49:57 managed-node2 python3.12[98339]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:49:58 managed-node2 python3.12[98496]: ansible-systemd Invoked with name=quadlet-basic-mysql.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None Feb 14 11:49:58 managed-node2 python3.12[98652]: ansible-stat Invoked with path=/etc/containers/systemd/quadlet-basic-mysql.container follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Feb 14 11:49:58 managed-node2 python3.12[98807]: ansible-file Invoked with path=/etc/containers/systemd/quadlet-basic-mysql.container state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Feb 14 11:49:59 managed-node2 python3.12[98962]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune --all -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None 
executable=None creates=None removes=None stdin=None Feb 14 11:49:59 managed-node2 podman[98963]: 2026-02-14 11:49:59.612301779 -0500 EST m=+0.030744109 image untag 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f quay.io/libpod/testimage:20210610 Feb 14 11:49:59 managed-node2 podman[98963]: 2026-02-14 11:49:59.595023096 -0500 EST m=+0.013465501 image remove 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f Feb 14 11:49:59 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:50:00 managed-node2 python3.12[99125]: ansible-ansible.legacy.command Invoked with _raw_params=podman images -n _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:50:00 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:50:00 managed-node2 python3.12[99287]: ansible-ansible.legacy.command Invoked with _raw_params=podman volume ls -n _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:50:00 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
Feb 14 11:50:01 managed-node2 python3.12[99449]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --noheading _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:50:01 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:50:01 managed-node2 python3.12[99611]: ansible-ansible.legacy.command Invoked with _raw_params=podman network ls -n -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Feb 14 11:50:01 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:50:01 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Feb 14 11:50:02 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
Feb 14 11:50:02 managed-node2 python3.12[100099]: ansible-service_facts Invoked
Feb 14 11:50:05 managed-node2 python3.12[100362]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:50:06 managed-node2 python3.12[100519]: ansible-systemd Invoked with name=quadlet-basic-unused-volume-volume.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None
Feb 14 11:50:06 managed-node2 systemd[1]: Reload requested from client PID 100522 ('systemctl') (unit session-8.scope)...
Feb 14 11:50:06 managed-node2 systemd[1]: Reloading...
Feb 14 11:50:06 managed-node2 systemd-rc-local-generator[100562]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 14 11:50:06 managed-node2 systemd[1]: Reloading finished in 219 ms.
Feb 14 11:50:06 managed-node2 systemd[1]: quadlet-basic-unused-volume-volume.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit quadlet-basic-unused-volume-volume.service has successfully entered the 'dead' state.
Feb 14 11:50:06 managed-node2 systemd[1]: Stopped quadlet-basic-unused-volume-volume.service.
░░ Subject: A stop job for unit quadlet-basic-unused-volume-volume.service has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit quadlet-basic-unused-volume-volume.service has finished.
░░
░░ The job identifier is 3564 and the job result is done.
Feb 14 11:50:07 managed-node2 python3.12[100740]: ansible-stat Invoked with path=/etc/containers/systemd/quadlet-basic-unused-volume.volume follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:50:08 managed-node2 python3.12[101052]: ansible-file Invoked with path=/etc/containers/systemd/quadlet-basic-unused-volume.volume state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:50:08 managed-node2 python3.12[101207]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None
Feb 14 11:50:08 managed-node2 systemd[1]: Reload requested from client PID 101208 ('systemctl') (unit session-8.scope)...
Feb 14 11:50:08 managed-node2 systemd[1]: Reloading...
Feb 14 11:50:08 managed-node2 systemd-rc-local-generator[101263]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 14 11:50:08 managed-node2 systemd[1]: Reloading finished in 213 ms.
Feb 14 11:50:09 managed-node2 podman[101425]: 2026-02-14 11:50:09.514431155 -0500 EST m=+0.022986972 volume remove systemd-quadlet-basic-unused-volume
Feb 14 11:50:09 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:10 managed-node2 python3.12[101586]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune --all -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:10 managed-node2 python3.12[101748]: ansible-ansible.legacy.command Invoked with _raw_params=podman images -n _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:11 managed-node2 python3.12[101910]: ansible-ansible.legacy.command Invoked with _raw_params=podman volume ls -n _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:11 managed-node2 python3.12[102072]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --noheading _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:11 managed-node2 python3.12[102234]: ansible-ansible.legacy.command Invoked with _raw_params=podman network ls -n -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:12 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:12 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:13 managed-node2 python3.12[102720]: ansible-service_facts Invoked
Feb 14 11:50:15 managed-node2 python3.12[102983]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:50:16 managed-node2 python3.12[103140]: ansible-systemd Invoked with name=quadlet-basic-mysql-volume.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None
Feb 14 11:50:16 managed-node2 systemd[1]: Reload requested from client PID 103143 ('systemctl') (unit session-8.scope)...
Feb 14 11:50:16 managed-node2 systemd[1]: Reloading...
Feb 14 11:50:16 managed-node2 systemd-rc-local-generator[103189]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 14 11:50:16 managed-node2 systemd[1]: Reloading finished in 213 ms.
Feb 14 11:50:16 managed-node2 systemd[1]: quadlet-basic-mysql-volume.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit quadlet-basic-mysql-volume.service has successfully entered the 'dead' state.
Feb 14 11:50:16 managed-node2 systemd[1]: Stopped quadlet-basic-mysql-volume.service.
░░ Subject: A stop job for unit quadlet-basic-mysql-volume.service has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit quadlet-basic-mysql-volume.service has finished.
░░
░░ The job identifier is 3565 and the job result is done.
Feb 14 11:50:17 managed-node2 python3.12[103361]: ansible-stat Invoked with path=/etc/containers/systemd/quadlet-basic-mysql.volume follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:50:18 managed-node2 python3.12[103673]: ansible-file Invoked with path=/etc/containers/systemd/quadlet-basic-mysql.volume state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:50:18 managed-node2 python3.12[103828]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None
Feb 14 11:50:18 managed-node2 systemd[1]: Reload requested from client PID 103829 ('systemctl') (unit session-8.scope)...
Feb 14 11:50:18 managed-node2 systemd[1]: Reloading...
Feb 14 11:50:19 managed-node2 systemd-rc-local-generator[103882]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 14 11:50:19 managed-node2 systemd[1]: Reloading finished in 216 ms.
Feb 14 11:50:19 managed-node2 podman[104046]: 2026-02-14 11:50:19.619871876 -0500 EST m=+0.024307541 volume remove quadlet-basic-mysql-name
Feb 14 11:50:19 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:20 managed-node2 python3.12[104208]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune --all -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:20 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:20 managed-node2 python3.12[104370]: ansible-ansible.legacy.command Invoked with _raw_params=podman images -n _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:20 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:21 managed-node2 python3.12[104532]: ansible-ansible.legacy.command Invoked with _raw_params=podman volume ls -n _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:21 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:21 managed-node2 python3.12[104693]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --noheading _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:21 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:21 managed-node2 python3.12[104855]: ansible-ansible.legacy.command Invoked with _raw_params=podman network ls -n -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:21 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:22 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:22 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:23 managed-node2 python3.12[105342]: ansible-service_facts Invoked
Feb 14 11:50:25 managed-node2 python3.12[105605]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:50:26 managed-node2 python3.12[105762]: ansible-systemd Invoked with name=quadlet-basic-unused-network-network.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None
Feb 14 11:50:26 managed-node2 systemd[1]: Reload requested from client PID 105765 ('systemctl') (unit session-8.scope)...
Feb 14 11:50:26 managed-node2 systemd[1]: Reloading...
Feb 14 11:50:26 managed-node2 systemd-rc-local-generator[105809]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 14 11:50:27 managed-node2 systemd[1]: Reloading finished in 215 ms.
Feb 14 11:50:27 managed-node2 systemd[1]: quadlet-basic-unused-network-network.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit quadlet-basic-unused-network-network.service has successfully entered the 'dead' state.
Feb 14 11:50:27 managed-node2 systemd[1]: Stopped quadlet-basic-unused-network-network.service.
░░ Subject: A stop job for unit quadlet-basic-unused-network-network.service has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit quadlet-basic-unused-network-network.service has finished.
░░
░░ The job identifier is 3566 and the job result is done.
Feb 14 11:50:27 managed-node2 python3.12[105984]: ansible-stat Invoked with path=/etc/containers/systemd/quadlet-basic-unused-network.network follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:50:28 managed-node2 python3.12[106296]: ansible-file Invoked with path=/etc/containers/systemd/quadlet-basic-unused-network.network state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:50:28 managed-node2 python3.12[106451]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None
Feb 14 11:50:28 managed-node2 systemd[1]: Reload requested from client PID 106452 ('systemctl') (unit session-8.scope)...
Feb 14 11:50:28 managed-node2 systemd[1]: Reloading...
Feb 14 11:50:29 managed-node2 systemd-rc-local-generator[106503]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 14 11:50:29 managed-node2 systemd[1]: Reloading finished in 213 ms.
Feb 14 11:50:29 managed-node2 podman[106669]: 2026-02-14 11:50:29.624642825 -0500 EST m=+0.017005245 network remove 01cf3213e3ed2d0e128267f68e75813ac56a211a4b375eb13d2a19c6244fb9f3 (name=systemd-quadlet-basic-unused-network, type=bridge)
Feb 14 11:50:29 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:30 managed-node2 python3.12[106830]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune --all -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:30 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:30 managed-node2 python3.12[106992]: ansible-ansible.legacy.command Invoked with _raw_params=podman images -n _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:30 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:31 managed-node2 python3.12[107154]: ansible-ansible.legacy.command Invoked with _raw_params=podman volume ls -n _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:31 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:31 managed-node2 python3.12[107317]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --noheading _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:31 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:32 managed-node2 python3.12[107480]: ansible-ansible.legacy.command Invoked with _raw_params=podman network ls -n -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:32 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:32 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:32 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:33 managed-node2 python3.12[107967]: ansible-service_facts Invoked
Feb 14 11:50:35 managed-node2 python3.12[108230]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:50:37 managed-node2 python3.12[108387]: ansible-systemd Invoked with name=quadlet-basic-network.service scope=system state=stopped enabled=False force=True daemon_reload=False daemon_reexec=False no_block=False masked=None
Feb 14 11:50:37 managed-node2 systemd[1]: Reload requested from client PID 108390 ('systemctl') (unit session-8.scope)...
Feb 14 11:50:37 managed-node2 systemd[1]: Reloading...
Feb 14 11:50:37 managed-node2 systemd-rc-local-generator[108429]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 14 11:50:37 managed-node2 systemd[1]: Reloading finished in 218 ms.
Feb 14 11:50:37 managed-node2 systemd[1]: quadlet-basic-network.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit quadlet-basic-network.service has successfully entered the 'dead' state.
Feb 14 11:50:37 managed-node2 systemd[1]: Stopped quadlet-basic-network.service.
░░ Subject: A stop job for unit quadlet-basic-network.service has finished
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A stop job for unit quadlet-basic-network.service has finished.
░░
░░ The job identifier is 3567 and the job result is done.
Feb 14 11:50:37 managed-node2 python3.12[108608]: ansible-stat Invoked with path=/etc/containers/systemd/quadlet-basic.network follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 14 11:50:38 managed-node2 python3.12[108920]: ansible-file Invoked with path=/etc/containers/systemd/quadlet-basic.network state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 14 11:50:39 managed-node2 python3.12[109075]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None
Feb 14 11:50:39 managed-node2 systemd[1]: Reload requested from client PID 109076 ('systemctl') (unit session-8.scope)...
Feb 14 11:50:39 managed-node2 systemd[1]: Reloading...
Feb 14 11:50:39 managed-node2 systemd-rc-local-generator[109129]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 14 11:50:39 managed-node2 systemd[1]: Reloading finished in 209 ms.
Feb 14 11:50:39 managed-node2 podman[109293]: 2026-02-14 11:50:39.821238072 -0500 EST m=+0.018352802 network remove 753e73850896ed526bbf5b94858e3cd2708517eedee1b4c90d154f7a2349144f (name=quadlet-basic-name, type=bridge)
Feb 14 11:50:39 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:40 managed-node2 python3.12[109454]: ansible-ansible.legacy.command Invoked with _raw_params=podman image prune --all -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:41 managed-node2 python3.12[109616]: ansible-ansible.legacy.command Invoked with _raw_params=podman images -n _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:41 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:41 managed-node2 python3.12[109779]: ansible-ansible.legacy.command Invoked with _raw_params=podman volume ls -n _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:41 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:41 managed-node2 python3.12[109942]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --noheading _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:42 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:42 managed-node2 python3.12[110104]: ansible-ansible.legacy.command Invoked with _raw_params=podman network ls -n -q _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 14 11:50:42 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:42 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Feb 14 11:50:43 managed-node2 python3.12[110592]: ansible-service_facts Invoked
Feb 14 11:50:45 managed-node2 sshd-session[110726]: Accepted publickey for root from 10.31.12.69 port 59518 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Feb 14 11:50:45 managed-node2 systemd-logind[768]: New session 14 of user root.
░░ Subject: A new session 14 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 14 has been created for the user root.
░░
░░ The leading process of the session is 110726.
Feb 14 11:50:45 managed-node2 systemd[1]: Started session-14.scope - Session 14 of User root.
░░ Subject: A start job for unit session-14.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-14.scope has finished successfully.
░░
░░ The job identifier is 3568.
Feb 14 11:50:45 managed-node2 sshd-session[110726]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Feb 14 11:50:45 managed-node2 sshd-session[110729]: Received disconnect from 10.31.12.69 port 59518:11: disconnected by user
Feb 14 11:50:45 managed-node2 sshd-session[110729]: Disconnected from user root 10.31.12.69 port 59518
Feb 14 11:50:45 managed-node2 sshd-session[110726]: pam_unix(sshd:session): session closed for user root
Feb 14 11:50:45 managed-node2 systemd[1]: session-14.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-14.scope has successfully entered the 'dead' state.
Feb 14 11:50:45 managed-node2 systemd-logind[768]: Session 14 logged out. Waiting for processes to exit.
Feb 14 11:50:45 managed-node2 systemd-logind[768]: Removed session 14.
░░ Subject: Session 14 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 14 has been terminated.
Feb 14 11:50:46 managed-node2 sshd-session[110756]: Accepted publickey for root from 10.31.12.69 port 59530 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Feb 14 11:50:46 managed-node2 systemd-logind[768]: New session 15 of user root.
░░ Subject: A new session 15 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 15 has been created for the user root.
░░
░░ The leading process of the session is 110756.
Feb 14 11:50:46 managed-node2 systemd[1]: Started session-15.scope - Session 15 of User root.
░░ Subject: A start job for unit session-15.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-15.scope has finished successfully.
░░
░░ The job identifier is 3650.
Feb 14 11:50:46 managed-node2 sshd-session[110756]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)