ansible-playbook [core 2.17.13]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-VhC
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.11 (main, Jun 4 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-7)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_default.yml ****************************************************
1 plays in /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml

PLAY [Ensure mandatory variables are defined] **********************************

TASK [Set up test environment] *************************************************
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:10
Wednesday 20 August 2025 09:16:11 -0400 (0:00:00.080) 0:00:00.080 ******
included: fedora.linux_system_roles.ha_cluster for managed-node4

TASK [fedora.linux_system_roles.ha_cluster : Set node name to 'localhost' for single-node clusters] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:9
Wednesday 20 August 2025 09:16:11 -0400 (0:00:00.043) 0:00:00.123 ******
ok: [managed-node4] => {
    "ansible_facts": {
        "inventory_hostname": "localhost"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.ha_cluster : Ensure facts used by tests] *******
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:14
Wednesday 20 August 2025 09:16:11 -0400 (0:00:00.160) 0:00:00.284 ******
[WARNING]: Platform linux on host localhost is using the discovered Python
interpreter at /usr/bin/python3.9, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
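The interpreter-discovery WARNING above is advisory rather than fatal: Ansible found /usr/bin/python3.9 on the node and keeps using it, but a later Python installation could change what discovery picks. A minimal sketch of one way to silence it is below; ansible_python_interpreter is a standard Ansible variable, while the inventory layout is illustrative and not taken from this run.

    # Hypothetical inventory snippet (not part of this test run): pinning the
    # interpreter stops discovery from guessing, which removes the warning.
    all:
      hosts:
        managed-node4:
          ansible_python_interpreter: /usr/bin/python3.9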
ok: [managed-node4]

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:22
Wednesday 20 August 2025 09:16:13 -0400 (0:00:01.783) 0:00:02.067 ******
ok: [managed-node4] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:27
Wednesday 20 August 2025 09:16:14 -0400 (0:00:01.050) 0:00:03.118 ******
ok: [managed-node4] => {
    "ansible_facts": {
        "__ha_cluster_is_ostree": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.ha_cluster : Do not try to enable RHEL repositories] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:32
Wednesday 20 August 2025 09:16:14 -0400 (0:00:00.136) 0:00:03.254 ******
skipping: [managed-node4] => {
    "changed": false,
    "false_condition": "ansible_distribution == 'RedHat'",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Copy nss-altfiles ha_cluster users to /etc/passwd] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:41
Wednesday 20 August 2025 09:16:14 -0400 (0:00:00.166) 0:00:03.420 ******
skipping: [managed-node4] => {
    "changed": false,
    "false_condition": "__ha_cluster_is_ostree | d(false)",
    "skip_reason": "Conditional result was False"
}

TASK [Run the role] ************************************************************
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:15
Wednesday 20 August 2025 09:16:14 -0400 (0:00:00.089) 0:00:03.510 ******
included: fedora.linux_system_roles.ha_cluster for managed-node4

TASK [fedora.linux_system_roles.ha_cluster : Set platform/version specific variables] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:3
Wednesday 20 August 2025 09:16:15 -0400 (0:00:00.126) 0:00:03.636 ******
included: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml for managed-node4

TASK [fedora.linux_system_roles.ha_cluster : Ensure ansible_facts used by role] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:2
Wednesday 20 August 2025 09:16:15 -0400 (0:00:00.051) 0:00:03.687 ******
skipping: [managed-node4] => {
    "changed": false,
    "false_condition": "__ha_cluster_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:10
Wednesday 20 August 2025 09:16:15 -0400 (0:00:00.105) 0:00:03.792 ******
skipping: [managed-node4] => {
    "changed": false,
    "false_condition": "not __ha_cluster_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:15
Wednesday 20 August 2025 09:16:15 -0400 (0:00:00.052) 0:00:03.845 ******
skipping: [managed-node4] => {
    "changed": false,
    "false_condition": "not __ha_cluster_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Set platform/version specific variables] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:19
Wednesday 20 August 2025 09:16:15 -0400 (0:00:00.068) 0:00:03.913 ******
ok: [managed-node4] => (item=RedHat.yml) => {
    "ansible_facts": {
        "__ha_cluster_cloud_agents_packages": {},
        "__ha_cluster_fence_agent_packages_default": "{{ ['fence-agents-all'] + (['fence-virt'] if ansible_architecture == 'x86_64' else []) }}",
        "__ha_cluster_fullstack_node_packages": ["corosync", "libknet1-plugins-all", "resource-agents", "pacemaker"],
        "__ha_cluster_pcs_provider": "pcs-0.10",
        "__ha_cluster_qdevice_node_packages": ["corosync-qdevice", "bash", "coreutils", "curl", "grep", "nss-tools", "openssl", "sed"],
        "__ha_cluster_repos": [],
        "__ha_cluster_role_essential_packages": ["pcs", "corosync-qnetd", "openssl"],
        "__ha_cluster_sbd_packages": ["sbd"],
        "__ha_cluster_services": ["corosync", "corosync-qdevice", "pacemaker"]
    },
    "ansible_included_var_files": ["/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/RedHat.yml"],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "RedHat.yml"
}
skipping: [managed-node4] => (item=CentOS.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "CentOS.yml",
    "skip_reason": "Conditional result was False"
}
ok: [managed-node4] => (item=CentOS_9.yml) => {
    "ansible_facts": {
        "__ha_cluster_cloud_agents_packages": {
            "aarch64": ["fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt"],
            "noarch": ["fence-agents-ibm-powervs", "fence-agents-ibm-vpc"],
            "ppc64le": ["fence-agents-compute", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack"],
            "s390x": ["fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt"],
            "x86_64": ["resource-agents-cloud", "fence-agents-aliyun", "fence-agents-aws", "fence-agents-azure-arm", "fence-agents-compute", "fence-agents-gce", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack"]
        },
        "__ha_cluster_repos": [
            {"id": "highavailability", "name": "HighAvailability"},
            {"id": "resilientstorage", "name": "ResilientStorage"}
        ]
    },
    "ansible_included_var_files": ["/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_9.yml"],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_9.yml"
}

TASK [fedora.linux_system_roles.ha_cluster : Set Linux Pacemaker shell specific variables] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:42
Wednesday 20 August 2025 09:16:15 -0400 (0:00:00.198) 0:00:04.112 ******
ok: [managed-node4] => {
    "ansible_facts": {},
    "ansible_included_var_files": ["/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/shell_pcs.yml"],
    "changed": false
}

TASK [fedora.linux_system_roles.ha_cluster : Enable package repositories] ******
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:6
Wednesday 20 August 2025 09:16:15 -0400 (0:00:00.076) 0:00:04.189 ******
included: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml for managed-node4

TASK [fedora.linux_system_roles.ha_cluster : Find platform/version specific tasks to enable repositories] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml:3
Wednesday 20 August 2025 09:16:15 -0400 (0:00:00.157) 0:00:04.448 ******
ok: [managed-node4] => (item=RedHat.yml) => {
    "ansible_facts": {
        "__ha_cluster_enable_repo_tasks_file": "/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/RedHat.yml"
    },
    "ansible_loop_var": "item",
    "changed": false,
    "item": "RedHat.yml"
}
ok: [managed-node4] => (item=CentOS.yml) => {
    "ansible_facts": {
        "__ha_cluster_enable_repo_tasks_file": "/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml"
    },
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS.yml"
}
skipping: [managed-node4] => (item=CentOS_9.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__ha_cluster_enable_repo_tasks_file_candidate is file",
    "item": "CentOS_9.yml",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Run platform/version specific tasks to enable repositories] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml:21
Wednesday 20 August 2025 09:16:15 -0400 (0:00:00.198) 0:00:04.646 ******
included: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml for managed-node4

TASK [fedora.linux_system_roles.ha_cluster : List active CentOS repositories] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml:3
Wednesday 20 August 2025 09:16:16 -0400 (0:00:00.935) 0:00:05.582 ******
ok: [managed-node4] => {
    "changed": false,
    "cmd": ["dnf", "repolist"],
    "delta": "0:00:00.186060",
    "end": "2025-08-20 09:16:17.002872",
    "rc": 0,
    "start": "2025-08-20 09:16:16.816812"
}

STDOUT:

repo id                                   repo name
appstream                                 CentOS Stream 9 - AppStream
baseos                                    CentOS Stream 9 - BaseOS
beaker-client                             Beaker Client - RedHatEnterpriseLinux9
beaker-harness                            Beaker harness
beakerlib-libraries                       Copr repo for beakerlib-libraries owned by bgoncalv
copr:copr.devel.redhat.com:lpol:qa-tools  Copr repo for qa-tools owned by lpol
epel                                      Extra Packages for Enterprise Linux 9 - x86_64
epel-cisco-openh264                       Extra Packages for Enterprise Linux 9 openh264 (From Cisco) - x86_64
epel-debuginfo                            Extra Packages for Enterprise Linux 9 - x86_64 - Debug
epel-source                               Extra Packages for Enterprise Linux 9 - x86_64 - Source
extras-common                             CentOS Stream 9 - Extras packages
highavailability                          CentOS Stream 9 - HighAvailability

TASK [fedora.linux_system_roles.ha_cluster : Enable CentOS repositories] *******
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml:10
Wednesday 20 August 2025 09:16:17 -0400 (0:00:00.061) 0:00:05.643 ******
skipping: [managed-node4] => (item={'id': 'highavailability', 'name': 'HighAvailability'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "item.id not in __ha_cluster_repolist.stdout",
    "item": {
        "id": "highavailability",
        "name": "HighAvailability"
    },
    "skip_reason": "Conditional result was False"
}
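The two loops above use the same first-found platform pattern: candidate files are tried from most generic (RedHat.yml) to most specific (CentOS_9.yml), each is loaded only if it actually exists (the "__vars_file is file" / "... is file" guards visible in the skip output), and later files override earlier ones, which is why CentOS_9.yml replaces the empty __ha_cluster_repos that RedHat.yml set. A minimal sketch of the pattern, assuming a conventional role vars/ directory; the role's real candidate list is longer and the names below are illustrative:

    - name: Set platform/version specific variables
      ansible.builtin.include_vars: "{{ __vars_file }}"
      vars:
        __vars_file: "{{ role_path }}/vars/{{ item }}"
      loop:
        - "{{ ansible_os_family }}.yml"
        - "{{ ansible_distribution }}.yml"
        - "{{ ansible_distribution }}_{{ ansible_distribution_major_version }}.yml"
      when: __vars_file is file

In this run the guard is also what makes CentOS.yml skip: the role ships no vars/CentOS.yml, so only RedHat.yml and CentOS_9.yml are loaded.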
skipping: [managed-node4] => (item={'id': 'resilientstorage', 'name': 'ResilientStorage'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "item.name != \"ResilientStorage\" or ha_cluster_enable_repos_resilient_storage",
    "item": {
        "id": "resilientstorage",
        "name": "ResilientStorage"
    },
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node4] => {
    "changed": false
}

MSG:

All items skipped

TASK [fedora.linux_system_roles.ha_cluster : Install role essential packages] ***
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:11
Wednesday 20 August 2025 09:16:17 -0400 (0:00:00.061) 0:00:05.643 ******
fatal: [managed-node4]: FAILED! => {
    "changed": false,
    "rc": 1,
    "results": []
}

MSG:

Failed to download metadata for repo 'highavailability': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

TASK [Extract errors] **********************************************************
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:19
Wednesday 20 August 2025 09:16:21 -0400 (0:00:04.418) 0:00:10.062 ******
ok: [managed-node4] => {
    "ansible_facts": {
        "error_list": []
    },
    "changed": false
}

TASK [Check errors] ************************************************************
task path: /tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:24
Wednesday 20 August 2025 09:16:21 -0400 (0:00:00.119) 0:00:10.181 ******
fatal: [managed-node4]: FAILED! => {
    "assertion": "'ha_cluster_hacluster_password must be specified' in error_list",
    "changed": false,
    "evaluated_to": false
}

MSG:

Assertion failed

PLAY RECAP *********************************************************************
managed-node4              : ok=14   changed=0    unreachable=0    failed=1    skipped=6    rescued=1    ignored=0

SYSTEM ROLES ERRORS BEGIN v1
[
    {
        "ansible_version": "2.17.13",
        "end_time": "2025-08-20T13:16:21.542537+00:00Z",
        "host": "managed-node4",
        "message": "Failed to download metadata for repo 'highavailability': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried",
        "rc": 1,
        "start_time": "2025-08-20T13:16:17.129134+00:00Z",
        "task_name": "Install role essential packages",
        "task_path": "/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:11"
    },
    {
        "ansible_version": "2.17.13",
        "end_time": "2025-08-20T13:16:21.705146+00:00Z",
        "host": "managed-node4",
        "message": "Assertion failed",
        "start_time": "2025-08-20T13:16:21.667004+00:00Z",
        "task_name": "Check errors",
        "task_path": "/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:24"
    }
]
SYSTEM ROLES ERRORS END v1

TASKS RECAP ********************************************************************
Wednesday 20 August 2025 09:16:21 -0400 (0:00:00.040) 0:00:10.222 ******
===============================================================================
fedora.linux_system_roles.ha_cluster : Install role essential packages --- 4.42s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:11
fedora.linux_system_roles.ha_cluster : Ensure facts used by tests ------- 1.78s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:14
fedora.linux_system_roles.ha_cluster : Check if system is ostree -------- 1.05s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:22
fedora.linux_system_roles.ha_cluster : List active CentOS repositories --- 0.94s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml:3
fedora.linux_system_roles.ha_cluster : Set platform/version specific variables --- 0.20s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:19
fedora.linux_system_roles.ha_cluster : Run platform/version specific tasks to enable repositories --- 0.20s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml:21
fedora.linux_system_roles.ha_cluster : Do not try to enable RHEL repositories --- 0.17s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:32
fedora.linux_system_roles.ha_cluster : Set node name to 'localhost' for single-node clusters --- 0.16s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:9
fedora.linux_system_roles.ha_cluster : Find platform/version specific tasks to enable repositories --- 0.16s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml:3
fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree --- 0.14s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:27
Run the role ------------------------------------------------------------ 0.13s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:15
Extract errors ---------------------------------------------------------- 0.12s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:19
fedora.linux_system_roles.ha_cluster : Ensure ansible_facts used by role --- 0.11s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:2
fedora.linux_system_roles.ha_cluster : Enable package repositories ------ 0.10s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:6
fedora.linux_system_roles.ha_cluster : Copy nss-altfiles ha_cluster users to /etc/passwd --- 0.09s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:41
fedora.linux_system_roles.ha_cluster : Set Linux Pacemaker shell specific variables --- 0.08s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:42
fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree --- 0.07s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:15
fedora.linux_system_roles.ha_cluster : Enable CentOS repositories ------- 0.06s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml:10
fedora.linux_system_roles.ha_cluster : Check if system is ostree -------- 0.05s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:10
fedora.linux_system_roles.ha_cluster : Set platform/version specific variables --- 0.05s
/tmp/collections-VhC/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:3

Aug 20 09:16:09 managed-node4 sshd[10870]: Accepted publickey for root from 10.31.45.195 port 51858 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Aug 20 09:16:09 managed-node4 systemd-logind[607]: New session 15 of user root.
░░ Subject: A new session 15 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 15 has been created for the user root.
░░
░░ The leading process of the session is 10870.
Aug 20 09:16:09 managed-node4 systemd[1]: Started Session 15 of User root.
░░ Subject: A start job for unit session-15.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-15.scope has finished successfully.
░░
░░ The job identifier is 1591.
Aug 20 09:16:09 managed-node4 sshd[10870]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Aug 20 09:16:09 managed-node4 sshd[10873]: Received disconnect from 10.31.45.195 port 51858:11: disconnected by user
Aug 20 09:16:09 managed-node4 sshd[10873]: Disconnected from user root 10.31.45.195 port 51858
Aug 20 09:16:09 managed-node4 sshd[10870]: pam_unix(sshd:session): session closed for user root
Aug 20 09:16:09 managed-node4 systemd-logind[607]: Session 15 logged out. Waiting for processes to exit.
Aug 20 09:16:09 managed-node4 systemd[1]: session-15.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-15.scope has successfully entered the 'dead' state.
Aug 20 09:16:09 managed-node4 systemd-logind[607]: Removed session 15.
░░ Subject: Session 15 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 15 has been terminated.
Aug 20 09:16:13 managed-node4 python3.9[11071]: ansible-setup Invoked with gather_subset=['min'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Aug 20 09:16:14 managed-node4 python3.9[11224]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 20 09:16:16 managed-node4 python3.9[11373]: ansible-ansible.legacy.command Invoked with _raw_params=dnf repolist _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 20 09:16:17 managed-node4 python3.9[11523]: ansible-ansible.legacy.dnf Invoked with name=['pcs', 'corosync-qnetd', 'openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Aug 20 09:16:22 managed-node4 sshd[11584]: Accepted publickey for root from 10.31.45.195 port 59206 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Aug 20 09:16:22 managed-node4 systemd-logind[607]: New session 16 of user root.
░░ Subject: A new session 16 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 16 has been created for the user root.
░░
░░ The leading process of the session is 11584.
Aug 20 09:16:22 managed-node4 systemd[1]: Started Session 16 of User root.
░░ Subject: A start job for unit session-16.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-16.scope has finished successfully.
░░
░░ The job identifier is 1660.
Aug 20 09:16:22 managed-node4 sshd[11584]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Aug 20 09:16:22 managed-node4 sshd[11587]: Received disconnect from 10.31.45.195 port 59206:11: disconnected by user
Aug 20 09:16:22 managed-node4 sshd[11587]: Disconnected from user root 10.31.45.195 port 59206
Aug 20 09:16:22 managed-node4 sshd[11584]: pam_unix(sshd:session): session closed for user root
Aug 20 09:16:22 managed-node4 systemd-logind[607]: Session 16 logged out. Waiting for processes to exit.
Aug 20 09:16:22 managed-node4 systemd[1]: session-16.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-16.scope has successfully entered the 'dead' state.
Aug 20 09:16:22 managed-node4 systemd-logind[607]: Removed session 16.
░░ Subject: Session 16 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 16 has been terminated.
Aug 20 09:16:22 managed-node4 sshd[11612]: Accepted publickey for root from 10.31.45.195 port 59208 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Aug 20 09:16:22 managed-node4 systemd-logind[607]: New session 17 of user root.
░░ Subject: A new session 17 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 17 has been created for the user root.
░░
░░ The leading process of the session is 11612.
Aug 20 09:16:22 managed-node4 systemd[1]: Started Session 17 of User root.
░░ Subject: A start job for unit session-17.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-17.scope has finished successfully.
░░
░░ The job identifier is 1729.
Aug 20 09:16:22 managed-node4 sshd[11612]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
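Reading the failures above together: dnf could not download metadata for the already-enabled highavailability repository ("All mirrors were tried"), so the role died while installing its essential packages (pcs, corosync-qnetd, openssl) before it ever reached input validation. The test then extracted an empty error_list, and its final assertion could not hold. A sketch of that assertion as a task, reconstructed from the "Check errors" output above rather than copied from tests_default.yml:

    - name: Check errors
      ansible.builtin.assert:
        that:
          - "'ha_cluster_hacluster_password must be specified' in error_list"

In other words, this run appears to fail on infrastructure (unreachable repo mirrors) rather than on the role's password validation that the test was written to exercise.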