KB8116
Alert - A130113 - Execution of the PostThaw Script Failed
Investigating Execution of the PostThaw Script Failed issues on a Nutanix cluster.
This Nutanix article provides the information required for troubleshooting the alert 'Execution of the PostThaw Script Failed' for your Nutanix cluster.
Alert overview
The 'Execution of the PostThaw Script Failed' alert is generated when the execution of PostThaw scripts on a virtual machine has failed to start or complete, leaving the application in the virtual machine in its PreFreeze state. PostThaw scripts are used in combination with PreFreeze scripts and Nutanix Guest Tools (NGT) to allow Nutanix snapshots to be application-consistent on operating systems where VSS is not present. These scripts can also be used in combination with VSS.
Sample alert
Block Serial Number: 16SMXXXXXXXX
Output messaging
Check ID: Execution of the PostThaw Script Failed
Description: Guest VM failed to execute the post_thaw script during the creation of the application-consistent snapshot.
Resolutions: Manually run the post_thaw script on the guest VM. Fix the cause of the script failure to avoid any further execution failures. Resolve the specified reason for the failure. If you still cannot resolve the error, contact Nutanix Support.
Impact: Some of the applications that were stopped by the pre_freeze script may not start after an application-consistent snapshot is created for the VM.
Alert ID: A130133
Alert Title: Execution of the PostThaw Script Failed
Alert Message: Failed to execute the post_thaw script during the creation of the application consistent snapshot with uuid '{snapshot_uuid}' for the VM '{vm_name}'. Error: {error_message}.
Troubleshooting
There are two major components on the guest virtual machine to review when troubleshooting PostThawScriptExecutionFailed. One component is the PostThaw scripts themselves. The other is the NGT service on the guest, which triggers the script. If the PostThawScriptExecutionFailed alert has been triggered, the guest application is likely still in its PreFreeze state. Once you have identified why the script failed to execute, execute it manually to allow the application to continue running as normal.
To troubleshoot the PostThaw scripts themselves, find them and execute them manually to identify the error. Ensure that the script files still exist; otherwise, you will need to either restore them from an old backup or re-install NGT. The locations of the PostThaw scripts are:
Linux - /usr/local/sbin/post_thaw
Windows - C:\Program Files\Nutanix\scripts\post_thaw.bat
Execute these scripts manually and ensure they succeed. If they do not succeed, review the error produced by the script. If the script fails to execute, ensure the scripts meet the guidelines outlined in the Nutanix Portal documentation pre_freeze and post_thaw scripts https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide:wc-application-consistent-pre-freeze-post-thaw-guidelines-r.html.
Verify that the NGT service is running on the virtual machine. For Linux virtual machines, the status should return "active (running)":
$ sudo service ngt_guest_agent status
For Windows virtual machines, the status should return "Running":
> Get-Service "Nutanix Guest Agent"
If the NGT service is not running, make sure you have the latest version of NGT installed. (For details on upgrading NGT, see the Prism Central Guide: Upgrading NGT https://portal.nutanix.com/page/documents/details/?targetId=Prism-Central-Guide%3Amul-ngt-pc-upgrade-t.html.) With the latest version of NGT installed, if the NGT service is still not running, review the NGT logs on the guest to identify the cause. Log locations:
Linux - /usr/local/nutanix/ngt/logs
Windows - C:\Program Files\Nutanix\Logs
If you need assistance or the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com/. Collect additional information and attach it to the support case.
Collecting additional information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.
Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.
Collect a Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
nutanix@cvm$ logbay collect --aggregate=true
Attaching files to the case
To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
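For reference, a minimal sketch of what a compliant Linux post_thaw script might look like; this is an assumption-laden example, not taken from the article - the service name "myapp" is a placeholder and the actual start command depends on the application quiesced by your pre_freeze script:
#!/bin/sh
# Hypothetical post_thaw script: restart whatever the pre_freeze script quiesced.
# NGT treats a non-zero exit code as a failure, so exit 0 only on success.
systemctl start myapp || exit 1
exit 0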
KB5094
x86 security issue
Jan 3, 2018 update
For future updates, refer to the latest version of Security Advisory #07 at https://portal.nutanix.com/#/page/static/securityAdvisories
Jan 10th update: Please refer to KB 5104 for Support's public-facing response to this issue. Cases should be linked to that KB as well as the security advisory going forward.
Jan 4th update: Please refer to Security Advisory 007: http://download.nutanix.com/alerts/Security-Advisory_0007_v1.pdf
Jan 3rd update: Recently there has been discussion on public forums about a recently identified Intel CPU hardware-level bug that may have security implications. Please use the following to respond to customer queries. More information will be shared as it becomes available.
Nutanix is aware of the discussions in the public domain on a possible security vulnerability that affects x86 architecture platforms. Nutanix is currently evaluating it and, as most of the technical details are not available due to the embargo, will respond appropriately when sufficient information is available.
For future updates, refer to the latest version of Security Advisory #07 at https://portal.nutanix.com/#/page/static/securityAdvisories
KB15160
NCC health checks fail to run due to cluster_health service down
NCC health checks fail due to the cluster health service being down on one or more CVMs. Failed to execute NCC with arguments: ['/home/nutanix/ncc/bin/ncc', 'health_checks', 'run_all']
NCC health checks fail with the following error from the CVM or Prism GUI. This is due to the cluster health service being down: Failed to execute NCC with arguments: ['/home/nutanix/ncc/bin/ncc', 'health_checks', 'run_all'] This may also cause a task failure in the Prism GUI.
A task may still show up in Prism and ecli with kRunning status:
nutanix@cvm:~$ ecli task.list include_completed=false
If an NCC health check task was created and then stalled, abort it using the following command (substitute the stalled task UUID):
nutanix@cvm:~$ ergon_update_task --task_uuid='0e6bbc5f-cd49-46c8-a47b-cd9d971d95ad' --task_status=aborted
If there is a parent task, first abort the child task and then the parent task.
Restart the cluster health service (not impactful, although cluster health checks will be unavailable until the service is back up):
nutanix@cvm:~$ allssh 'genesis stop cluster_health ; cluster start ; sleep 10'
Find the health_scheduler_master (the cluster health service leader):
nutanix@cvm:~$ panacea_cli show_leaders
Re-initiate the cluster health checks on the health_scheduler_master identified above:
nutanix@cvm:~$ ncc health_checks run_all
The checks should now run successfully once the cluster health service is back up.
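As a quick, non-authoritative way to confirm the cluster health service is running again on every CVM before re-running the checks (this verification step is an addition, not from the original article):
nutanix@cvm:~$ allssh 'genesis status | grep cluster_health'
Each CVM should report cluster_health with a list of PIDs rather than an empty list.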
KB13876
Stale vCenter service connections causing Uhura to show storage as AllPathsDown
Uhura logs show AllPathsDown error even though the cluster isn't experiencing an outage.
When attempting to restore VMs from local snapshots, the restore task kicks off but does not successfully complete. Checking the ecli output, we see several kVMregister tasks failed.
nutanix@NTNX-XXXXXXXX1:~$ ecli task.list include_completed=true
Checking the Uhura logs on the Uhura leader, we see the following signature:
2022-09-15 16:32:17,421Z INFO esx_host.py:1420 DataStore GLT_VMW_201_Repl is having accessible issues AllPathsDown_Start
However, as the VMs are accessible, we can conclude the datastores are accessible as well. We can confirm this by running the following command:
nutanix@CVM:~$ hostssh "esxcfg-nas -l | grep -i una"
In the above example, the output shows a datastore prod-vm which is not available on host .126. However, for this KB to apply, there should be no unavailable datastores. If any unavailable datastores are being reported, troubleshoot those before continuing to the solution section.
If the VMs are running with no issues and there are no unavailable datastores reported, we can continue with the solution steps. It is possible that the hypervisor management services are misreporting the datastore status. To resolve this, restart the hostd and vpxa services on the ESXi hosts (a vpxa example is sketched below).
nutanix@CVM:~$ hostssh '/etc/init.d/hostd restart'
Once complete, restart genesis on all CVMs:
nutanix@CVM:~$ allssh "genesis restart"
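The vpxa restart mentioned above is not shown in the article; a hedged example using the same hostssh approach would be:
nutanix@CVM:~$ hostssh '/etc/init.d/vpxa restart'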
KB10708
Nutanix Kubernetes Engine - UI and "karbonctl login" fail after restarting aplos service on Prism Central
Restarting the "aplos" service on the Prism Central VM(s) may result in the Nutanix Kubernetes Engine UI failing to load, and the "karbonctl login" command failing with a, "Failed to get the unified token from Prism Central" error.
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services.
Restarting the aplos service on a Prism Central VM may be required for certain troubleshooting workflows. After restarting the aplos service on a Prism Central VM, such as via genesis stop aplos; cluster start, the Karbon UI in Prism Central may fail to load, instead showing only a blank screen.
Browser developer tools, such as Chrome's Developer Tools, may show a 404 Not Found error and an Uncaught TypeError when opening the Karbon UI, as shown in the following examples:
prism-pc.254207ab745929d13cff.js:1 GET https://<Prism Central IP>:9440/karbon/version?__=1612217748294 404 (Not Found)
298.17a3e1e16ab2cb480510.prism-pc.js:1 Uncaught TypeError: Cannot read property 'find' of undefined
After this issue occurs, browsing to the Nutanix Objects UI in Prism Central, then browsing to the Karbon UI, may show the Objects UI instead.
Additionally, executing karbonctl login from the Prism Central VM fails with the following error:
nutanix@PCVM:$ ~/karbon/karbonctl login --pc-username <username> --pc-password <password>
Note that there may be no log signatures to clearly indicate when this issue has occurred, but the failure of karbonctl login with the above error is a reliable indicator.
Scenario 2: Upgrade of NKE 2.2.3 to NKE 2.x may fail with the below log signatures:
2024-06-03 18:19:29,790Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) [DEBUG]: Failed to make GET request: URL: https://10.50.135.206:9440/karbon/v1-beta.1/k8s/clusters, traceback: Traceback (most recent call last):
2024-06-03 18:19:29,791Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) File "/home/nutanix/upgrade/lcm_staging/cdffef67-bbcc-4f41-80cb-d6cad453d667/release/karbon/update_tools/karbon_interface.py", line 141, in get_k8s_clusters
2024-06-03 18:19:29,791Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) (url, resp.status_code))
2024-06-03 18:19:29,791Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) Exception: Invalid response: URL: https://10.50.135.206:9440/karbon/v1-beta.1/k8s/clusters, status code: 404
2024-06-03 18:19:29,791Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37)
2024-06-03 18:19:29,791Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) [DEBUG]: Failed to pass all prechecks and cannot proceed to upgrade: {'nutanix_kubernetes_engine_k8scluster_eol_precheck': 'Failed to retrieve k8s cluster(s) managed by Nutanix Kubernetes Engine: Failed to make GET request: URL: https://x.x.x.206:9440/karbon/v1-beta.1/k8s/clusters, error: Invalid response: URL: https://10.50.135.206:9440/karbon/v1-beta.1/k8s/clusters, status code: 404'}
2024-06-03 18:19:29,791Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37)
2024-06-03 18:19:29,791Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) 2024-06-03 11:19:29.671560 Traceback (most recent call last):
2024-06-03 18:19:29,791Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) File "./nutanix/tools/lcm_helper", line 391, in update
2024-06-03 18:19:29,792Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) image, options.request_version)
2024-06-03 18:19:29,792Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) File
"/home/nutanix/upgrade/lcm_staging/cdffef67-bbcc-4f41-80cb-d6cad453d667/release/karbon/update/__init__.py", line 63, in upgrade 2024-06-03 18:19:29,792Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) "Details of failed prechecks: %s" % fail_msg) 2024-06-03 18:19:29,792Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) Exception: Nutanix Kubernetes Engine upgrade failed at precheck stage. Details of failed prechecks: {'nutanix_kubernetes_engine_k8scluster_eol_precheck': 'Failed to retrieve k8s cluster(s) managed by Nutanix Kubernetes Engine: Failed to make GET request: URL: https://x.x.x.206:9440/karbon/v1-beta.1/k8s/clusters, error: Invalid response: URL: https://x.x.x.206:9440/karbon/v1-beta.1/k8s/clusters, status code: 404'} 2024-06-03 18:19:29,792Z INFO 10209488 helper.py:148 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) 2024-06-03 18:19:29,792Z ERROR 10209488 helper.py:145 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) 2024-06-03 11:19:29.671637 EXCEPT:{"err_msg": "Update failed with error: [Nutanix Kubernetes Engine upgrade failed at precheck stage. Details of failed prechecks: {'nutanix_kubernetes_engine_k8scluster_eol_precheck': 'Failed to retrieve k8s cluster(s) managed by Nutanix Kubernetes Engine: Failed to make GET request: URL: https://x.x.x.206:9440/karbon/v1-beta.1/k8s/clusters, error: Invalid response: URL: https://x.x.x.206:9440/karbon/v1-beta.1/k8s/clusters, status code: 404'}]", "name": "release.karbon.update", "stage": 0} 2024-06-03 18:19:29,808Z INFO 10209488 metric_entity.py:1705 (x.x.x.206, update, d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37) Exception report: {'error_type': 'LcmUpdateOpError', 'kwargs': {'module_name': u'release.karbon.update', 'err_msg': u"Update failed with error: [Nutanix Kubernetes Engine upgrade failed at precheck stage. Details of failed prechecks: {'nutanix_kubernetes_engine_k8scluster_eol_precheck': 'Failed to retrieve k8s cluster(s) managed by Nutanix Kubernetes Engine: Failed to make GET request: URL: https://x.x.x.206:9440/karbon/v1-beta.1/k8s/clusters, error: Invalid response: URL: https://x.x.x.206:9440/karbon/v1-beta.1/k8s/clusters, status code: 404'}]", 'env': 'pc', 'ip': u'x.x.x.206'}} 2024-06-03 18:19:29,819Z ERROR 10209488 lcm_ops_by_pc.py:360 (update) Failed to perform operation as task d768d20a-7f9f-4c6e-4d67-6f2f0a0afd37 failed 2024-06-03 18:19:29,824Z INFO 10209488 catalog_staging.py:1351 (update) Removing staging dir: /home/nutanix/upgrade/lcm_staging/cdffef67-bbcc-4f41-80cb-d6cad453d667
To resolve this issue, restart the karbon_core service on the Prism Central VM(s):
nutanix@PCVM:$ allssh 'genesis stop karbon_core'; cluster start
If the above does not resolve the issue, refer to KB 11547 https://portal.nutanix.com/kb/11547 for resolution. The same error can occur when the ssl_terminator_config_printer is missing the karbon URLs.
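After the restart, one way to confirm the fix took effect is to retry the login that was failing (this verification step reuses the command from the description above and is not part of the original solution):
nutanix@PCVM:$ ~/karbon/karbonctl login --pc-username <username> --pc-password <password>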
KB11999
LCM inventory on PC fails with reason: "[Failed to download manifest due to LcmDownloadError(u'No registered clusters found which support multiple file catalog items',)]"
When trying to run LCM inventory on Prism Central (PC), it fails with reason "Inventory setup failed. Reason: [Failed to download manifest due to LcmDownloadError(u'No registered clusters found which support multiple file catalog items',)])"
In some scenarios, LCM inventory on Prism Central (PC) fails with the following error:
Inventory setup failed. Reason: [Failed to download manifest due to LcmDownloadError(u'No registered clusters found which support multiple file catalog items',)])
No Prism Element (PE) clusters are currently registered to this Prism Central:
nutanix@NTNX-A-PCVM:~$ ncli multicluster get-cluster-state
Registered Cluster Count: 0 [None]
LCM inventory is failing on PC because no PE clusters are registered to this Prism Central. The LCM catalog requires PC to be registered to at least one PE cluster to use the Catalog service. Since there are no PE clusters registered to PC, the catalog task fails with the 'No registered clusters found which support multiple file catalog items' error, which in turn causes LCM inventory to fail as well.
Register at least one Prism Element (PE) cluster to Prism Central and then re-run LCM inventory on Prism Central to resolve the issue. Refer to the Register Cluster with Prism Central https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v5_20:mul-register-wc-t.html section in the Prism Web Console Guide for steps on registering a PE cluster to Prism Central.
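Once a PE cluster has been registered, the registration can be re-checked with the same command shown in the description; the registered cluster count should no longer be zero (added verification step, not from the original article):
nutanix@NTNX-A-PCVM:~$ ncli multicluster get-cluster-state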
KB11630
Prism Central | Scale-out Fails with "Image Download Error"
Prism Central | Scale-out Fails with "Image Download Error"
Attempts to scale out a PC running pc.2021.3 or earlier fail with the following error message: "Image Download Error"
This issue is caused by the API returning the full version of Prism Central and not the product version; the product version should be used for version checks in Prism Central. The fix is available in pc.2021.3.0.2 and later. There is no known workaround for this issue at this time apart from upgrading the single-instance PC and then performing the scale-out operation.
""ISB-100-2019-05-30"": ""ISB-049-2017-04-20""
null
null
null
null
KB4601
NX Hardware [Power Supply] – Both power supply LEDs are Amber
Do not attempt to re-apply power to the chassis by unplugging and replugging the power cables if you see that both power supply LEDs are amber.
Internal KB - Do not share this document with partners or customers. Your support case should be escalated to Support Management's visibility ASAP.
For a single-node Ultra system (such as NX-3175): If both power supply LEDs are illuminated amber, do not attempt to re-apply power to the chassis by unplugging and plugging the power cables. If the amber LED is illuminated on both power supplies, there is a critical event such as over temperature or over current. Re-powering the chassis can cause a secondary failure such as a burnt capacitor.
For a multi-node system such as an X9 or X10 Twin-Pro system: if you observe an amber LED on both power supplies (PWS), it could be one of two conditions (PWS off in standby, or a critical event). Press the power button (do not hold it, as that will reset the power supply) and check the LEDs:
Both green - both PWS were in standby mode and the server is in normal operating condition. No action is required.
One LED green and the other PWS LED amber - the amber PWS may have a power cord connection issue (please check), or replace that power supply only.
Both LEDs amber - both PWS are in an abnormal state. Follow the solution section.
If the power supply still has standby power (1 Hz blinking green LED) and the BMC is still running, check the IPMI log for critical event notifications. If you cannot collect IPMI logs or any evidence for further troubleshooting, dispatch a chassis, nodes, and PSUs, and arrange Failure Analysis (FA) for all replaced parts.
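If command-line access to the host is available, one common way to review the IPMI event log is with ipmitool (a hedged example; the BMC web UI can be used instead):
[root@host]# ipmitool sel elist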
KB3120
How to create Windows PE (WinPE) image that contains Nutanix VirtIO drivers
Create a Windows PE (WinPE) image with integrated Nutanix VirtIO drivers for use on Nutanix AHV
This article describes how to create a Windows PE (WinPE) image with integrated Nutanix VirtIO drivers. This can be useful in troubleshooting Windows VMs running on AHV because, by default, Windows does not include a Nutanix VirtIO SCSI driver, which means that you will not have access to the VM's SCSI drives. More info on WinPE: https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/winpe-intro
Preparation steps
Download the Assessment and Deployment Kit (ADK): https://learn.microsoft.com/en-us/windows-hardware/get-started/adk-install
Install Windows PE.
Download the latest Nutanix VirtIO drivers from the Support Portal https://portal.nutanix.com/page/downloads?product=ahv&bit=VirtIO.
Inject Nutanix VirtIO drivers into WinPE
Mount the Nutanix VirtIO ISO by double-clicking on it (works starting from Windows 8/2012 and newer). As an alternative, the ISO contents can be unpacked using 7-Zip, WinRAR, or any other similar utility. In the following case, the ISO is mounted as drive "E:". Drive D: is used as a temporary disk. In this case, the Windows PE image is built based on Windows Server 2012 R2.
Click Start, and type "deployment". Right-click Deployment and Imaging Tools Environment and then select Run as administrator. Run the following commands:
C:\> copype amd64 D:\WinPE_amd64
(A typical sequence for mounting boot.wim and injecting the drivers is sketched after this article's solution.)
Show injected drivers:
C:\> Dism /Get-Drivers /Image:"d:\WinPE_amd64\mount"
Unmount the WIM:
C:\> Dism /Unmount-Image /MountDir:"d:\WinPE_amd64\mount" /commit
Optional Utilities
You can also add a few more useful utilities to WinPE:
imagex.exe - you can copy it from the ADK installation folder:
C:\> copy "C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\amd64\DISM\imagex.exe" d:\WinPE_amd64\media
BootRec.exe - it is not part of ADK, but it can be extracted from any Windows Server 2012 R2 installation ISO. After mounting it, go to \Sources\Boot. Mount the wim using dism:
C:\> Dism /Mount-Image /ImageFile:"E:\sources\boot.wim" /index:1 /MountDir:"d:\temp\mount" /readonly
Then copy BootRec.exe to the WinPE folder:
C:\> copy D:\temp\mount\Windows\System32\BootRec.exe d:\WinPE_amd64\media
Unmount the wim:
C:\> Dism /Unmount-Image /MountDir:"d:\temp\mount" /discard
Create WinPE image
Run the following command:
C:\> MakeWinPEMedia /ISO d:\WinPE_amd64 d:\temp\WinPE_amd64_virtio.iso
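The article does not list the mount and driver-injection commands explicitly. A typical hedged sequence, assuming boot.wim was created by copype under D:\WinPE_amd64 and the VirtIO ISO is mounted as E: (adjust paths and drive letters to your environment), might be:
C:\> Dism /Mount-Image /ImageFile:"d:\WinPE_amd64\media\sources\boot.wim" /index:1 /MountDir:"d:\WinPE_amd64\mount"
C:\> Dism /Add-Driver /Image:"d:\WinPE_amd64\mount" /Driver:"E:\" /Recurse
After this, the Dism /Get-Drivers and /Unmount-Image /commit commands shown earlier apply unchanged.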
KB10510
Commvault repeatedly scrolling 404 errors attempting to delete snapshots that no longer exist
Commvault has an issue in which it can remove a snapshot, then forget about it. It will continue to spam aplos logs with 404 errors as it continues to attempt the snapshot deletes.
An issue with Commvault backups can arise where it removes associated snapshots and then loses track, continuing to try to remove these snapshots later. Often there is a delay of hours or a day between when the snapshot is deleted and when Commvault returns to delete it again. The identifying characteristics are below.
Commvault proxy logs will be full of 404 errors accessing Nutanix.
Logs will indicate that Commvault has lost track of deletes it has performed, but still has a record of the snapshot UUIDs. Because of this, it will continue attempting to delete snapshots it has already deleted.
Commvault logs have entries like the following:
2156 1420 09/19 03:44:55 ###### CVLibCurl::CVLibCurlSendHttpReqInternal() - curl-err:[No error] http-resp:[404] url:[https://10.xx.xx.10:9440/api/nutanix/v3/vm_snapshots/deffc278-3872-4f93-bffa-b82fxxxxxx2] req-data:[_null_] len:[0] server-resp:[{
The DELETE messages with 404 errors in aplos.out are the key to identifying this issue on the Nutanix side. On the cluster, you will find messages like the following on multiple CVMs within aplos.out:
2020-12-15 17:07:34 UWSGI x.x.x.65 [DELETE]:/v3/vm_snapshots/3a5fa123-ae73-41bd-a7de-5d0295820758 issued by: commvaultadmin took 59 msecs, response status: 404
To quickly identify such errors, run the command below:
cvm$ allssh "grep 'DELETE.*[0-9a-f\-]\{36\}.*status: 404' data/logs/aplos* | head -n 5"
Since there could be thousands of results, we limit the output with the "head" command; remove "| head -n 5" to list them all.
Similarly, to find a list of successful delete operations, the following command can be run:
cvm$ allssh "grep 'DELETE.*[0-9a-f\-]\{36\}.*status: 202' data/logs/aplos* | head -n 5"
Commvault Support may require evidence from Nutanix that the forgotten-delete issue is present in order to pursue the issue effectively. The only way to demonstrate this is to find the successful delete (202 return code) and the subsequent unsuccessful deletes (404 return code) for the same snapshot UUID in the Nutanix aplos logs, listing them with the two commands in the Description section above and comparing the output.
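To gather that evidence for a single snapshot, a hedged example of pulling both the successful (202) and failed (404) DELETE entries for one UUID is shown below; substitute the snapshot UUID in question (the UUID here is taken from the example in the description):
cvm$ allssh "grep '3a5fa123-ae73-41bd-a7de-5d0295820758' data/logs/aplos*"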
KB16996
IPMI foundation error fatal: Detecting model
Foundation fails to detect the correct HW type of the node. Manual Phoenix works. Foundation logs and manual ipmitool invocations contain "invalid role"
Imaging through Foundation fails with:
fatal: Detecting model
Opening the foundation.log, you may see the following error:
2023-10-18 13:56:07,374Z INFO Model is 0123456789
You may also see an "Invalid Role" error in the foundation.log:
2023-10-18 13:56:15,933Z ERROR Error attempting connection to Lenovo BMC (00.00.000.00): Invalid role
When using ipmitool and running the lan print command, you will see that Cipher Suite Priv Max is all X's:
[root@NTNX~]# ipmitool lan print
Overview of the procedure: back up the FRU, do a hard BMC reset with ipmitool, then restore the IP settings and FRU. Note: The ADMIN password will be set back to the board serial number.
Steps:
1) Confirm the problem is present and make a note of the FRU list info. Check for the "invalid role" error and that Cipher Suite Priv Max is all X's.
FRU list info:
[root@NTNX- ~]# ipmitool fru list
(You can get the Chassis Serial Number for the command below from the ipmitool fru list output above.)
Confirm the "Invalid Role" error:
ipmitool -H <IP> -U ADMIN -P <Chassis Serial#> -I lanplus fru list
Cipher Suite Priv Max all X's:
[root@NTNX- ~]# ipmitool lan print
2) Back up the FRU info:
/ipmicfg fru tbackup fruinfo.txt
3) Make a note of the IP address, subnet mask, and default gateway, as they will be wiped in the next step. We can check the IP information by doing a lan print:
[root@NTNX- ~]# ipmitool lan print
4) Reset the BMC:
ipmitool raw 0x30 0x41
5) Run a lan print to see all the information. We are interested in the "Cipher Suite Priv Max" field. (Note that the IP information will be lost because we reset the BMC.)
[root@NTNX- ~]# ipmitool lan print
6) We need to set the IP information back to what it was. IP address:
[root@NTNX- ~]# ipmitool lan set 1 ipaddr <IP ADDRESS>
7) Subnet mask:
[root@NTNX- ~]# ipmitool lan set 1 netmask 255.255.252.0
8) Default gateway:
[root@NTNX- ~]# ipmitool lan set 1 defgw ipaddr <GATEWAY IP ADDRESS>
9) Enable arp, snmp, and auth:
[root@NTNX- ~]# ipmitool lan set 1 arp respond on
10) Now we need to restore the FRU data:
/ipmicfg fru trestore fruinfo.txt
11) Confirm the FRU has been populated (data below has been redacted; make sure the data is correct):
[root@NTNX- ~]# ipmitool fru list
You should now be able to complete Foundation without the "Detecting Model" error.
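As a final sanity check (a hedged addition, not part of the original steps), re-run lan print and confirm the "Cipher Suite Priv Max" field is no longer all X's:
[root@NTNX- ~]# ipmitool lan print | grep -i "Cipher Suite Priv Max"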
KB11397
File Analytics - Scan fails - Year Out of Range
This KB is designed to address the 'year is out of range' error seen on File Analytics scans.
When a File System Scan in File Analytics against one or more shares fails, we must review the reason for the failure in the following log on the File Analytics VM (FAVM): /mnt/logs/containers/analytics_gateway/metadata_collector/metadata_collector.log.ERROR
2021-02-12 01:13:26Z,141 ERROR 7599 scanner.py:collect_metadata_for_object: 1593 - Error in collecting metadata for object /tmp/AFS.local/Share_1/Folder_1/File.ini
We can see a traceback for 'year is out of range' for a specific file, in this case on the share 'Share_1'. Below are the steps to review the date of the file using the File Server VM (FSVM) command line:
1) Identify the FSVM owning the share and Top Level Directory (TLD - if this is a Distributed Share):
nutanix@FSVM:~$ afs share.owner_fsvm Share_1 path=Folder_1
2) SSH to the NVM Internal IP from the above output and use 'stat' on the "Absolute Path on the owner NVM" line plus the filename as seen in File Analytics:
nutanix@FSVM:~$ stat /zroot/shares/6d323013-1e2e-4403-8bb7-1505ba4aa16f/:418594fc-ccbf-461a-8934-5724a60b4041/7fd4155f-e2d3-4a39-af11-a3016a53fcdd/Folder_1/File.ini
Note: The modified year is 30044, which is what led to the File System Scan failure.
The cause of these invalid timestamps could be a variety of things, such as corrupted source data prior to migration from a prior NAS server. Below are the steps on how to resolve this issue.
1) SSH to the owner FSVM and move into the share path of the share and directory where this issue is occurring.
2) Run the below commands to recursively update the Access, Modify, and Change time of the affected files and directories in that path.
nutanix@FSVM:~$ touch <name of file>
Note: If there are many files and directories in the same directory having this issue, run the below command in the directory in question to update all the directories (see the hedged file-level variant after these steps).
nutanix@FSVM:~$ find . -type d -exec touch {} +
3) While on the owner FSVM, run the below command to identify the file_analytics_snapshot for the share in question.
nutanix@FSVM:~$ afs snapshot.list share_name=Share_1 filter=BACKUP
4) Delete the file_analytics_snapshot snapshot associated with this share using the following command.
nutanix@FSVM:~$ afs snapshot.remove snapshot_uuid_list=<snapshot_uuid>
5) Trigger a new scan in File Analytics after the snapshot deletion and it should succeed.
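If individual files (not only directories) in that path also carry invalid timestamps, a similar hedged approach can be used to touch them recursively (this variant is an addition, not from the original article):
nutanix@FSVM:~$ find . -type f -exec touch {} +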
KB14941
Kubelet unable to start post host image or k8s upgrade due to incorrect containerd configuration
Kubelet unable to start post host image or k8s upgrade due to incorrect containerd configuration
Customers may notice that a k8s upgrade or host image upgrade fails with worker or master nodes in NotReady state. Checking the kubelet status, you may see the kubelet stuck in the activating state:
[root@karbon-k8s-master-0 nutanix]# systemctl status kubelet-master
Checking the journalctl logs, you will see an error with the containerd runtime:
May 04 00:36:54 karbon-rgs-pa-k8s-production-8a2d23-k8s-master-0 kubelet[137078]: E0504 00:36:54.388615 137078 remote_runtime.go:86] "Version from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
Checking the kubelet systemd unit file, the runtime is configured as containerd:
[root@karbon-k8s-master-0 nutanix]# systemctl cat kubelet-master | grep runtime
Checking the containerd configuration file, you will see an unconfigured config file:
[root@karbon-k8s-master-0 nutanix]# cat /etc/containerd/config.toml
To resolve the issue, follow the steps below.
Step 1: Stop the kubelet service with the following command.
If the issue is on a master node, run the below command:
[root@karbon-k8s-master-x nutanix]# systemctl stop kubelet-master
If the issue is on a worker node, run the below command:
[root@karbon-k8s-worker-x nutanix]# systemctl stop kubelet-worker
Step 2: Run the enable_containerd.sh script to reconfigure the config.toml file.
[root@karbon-k8s-master-x nutanix]# sh /var/nutanix/containerd/enable_containerd.sh
Step 3: Confirm kubelet has started.
For a master node, run the below command:
[root@karbon-k8s-master-x nutanix]# systemctl status kubelet-master
For a worker node, run the below command:
[root@karbon-k8s-master-x nutanix]# systemctl status kubelet-worker
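As an additional hedged check (not part of the original steps), after running the script you can verify that containerd is running and that /etc/containerd/config.toml now contains a populated configuration, assuming containerd runs as a systemd service on the node:
[root@karbon-k8s-master-x nutanix]# systemctl status containerd
[root@karbon-k8s-master-x nutanix]# cat /etc/containerd/config.toml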
KB14122
Self service (Calm) brownfield operations for Azure VMs may fail with timeout
Self Service (formerly Calm) brownfield operations for Azure VMs may fail with timeout
Background
Currently, when a customer runs brownfield operations for an Azure VM, the Calm services Styx and Indra interact with Azure and run a query for all the VMs in all subscriptions configured in the Calm account. This query is done to identify the VM to import into Calm during the brownfield import operation. Customers may run brownfield operations for Azure VMs from the Calm UI or DSL. These operations may fail because of a timeout caused by scale issue CALM-32445 https://jira.nutanix.com/browse/CALM-32445.
Identification
1. Running the brownfield operation from DSL scripts, we observe a "too many 504 error responses" error message for the brownfield_import/vms/list API call. Note that from the start of the execution of the DSL script, it may take up to 30 minutes for the error message to appear because of the timeout.
[2022-06-21 06:38:59] [INFO] [calm.dsl.cli.bps:646] yy-xx-cc-bf found
2. Track the brownfield API requests in the ikat_proxy access logs to understand to which PCVM the API call was redirected and whether the error code of 504 was returned.
root@NPCVM:/home/apache/ikat_access_logs# zgrep -i 'brownfield' prism_proxy_access_log.out
The ikat_proxy service redirects the requests to the Mercury service, and the Mercury service forwards the request to the Calm service for processing. In the above case, we see a 504 error response code from PCVM .182 and .230.
3. Checking the mercury logs for the blueprint API call, we observe a kTimeout error from the local PCVM IP and port 4241. The Calm microservice styx listens for requests on port 4241 and processes the blueprint-related requests.
I20220621 06:53:50.148224Z 35481 request_processor_handle_v3_api_op.cc:1003] <HandleApiOp: op_id: 6964243 | type: POST | base_path: /calm/v3/blueprints | external | XFF: xx.xx.xx.19> Created corresponding AuthenticateOp for refresh: 6964244
The above logs indicate that the Mercury service timed out waiting on the styx service, and the Mercury service returns error code 504 to the client performing the blueprints operation.
4. The Calm microservice logs are located under the /home/docker/epsilon/log directory or the /home/docker/nucalm/log directory. The styx service relies on the indra service to fetch the VM-related information for Azure VMs, and this operation fails with "Exception: Unknown error". Below are the styx logs that indicate the unknown exception error:
[2022-06-21 06:53:50.600826Z] INFO [styx:134517:DummyThread-189] [:][cr:aeb95d93-c4c7-48ec-b037-e8750bd70bf2][pr:][rr:aeb95d93-c4c7-48ec-b037-e8750bd70bf2] calm.common.api_helpers.brownfield_helper.get_filtered_vms:81 [:::] vm_list not found in cache
5. Reviewing the indra.log, we observe that fetching the Azure VM information does succeed but takes a long time. Also, note that indra.log contains a JSON response with the Azure VM-related information. Note that this request was tracked with the ergon task completing at 07:01, while the request to the Indra service came in at 06:53.
2022-06-21 07:01:48.13713Z INFO indra 207 entry.go:314 github.com/sirupsen/logrus.(*Entry).Logf [cr:244f01d5-1b18-42b3-8ecf-28e4ca98c453][pr:244f01d5-1b18-42b3-8ecf-28e4ca98c453][rr:aeb95d93-c4c7-48ec-b037-e8750bd70bf2][logger:indra] Public IP for NI
The ecli task details of the indra service cloud sync operation show that it succeeds but takes a long time to finish.
Due to the inefficient way of querying and processing VM information from Azure, the Calm microservices may time out processing the brownfield API request. This leads to a 504 error response code being returned to the client making the brownfield operation API call (either the UI or Calm DSL). Currently, there is no workaround for this particular issue. Software defect CALM-32445 https://jira.nutanix.com/browse/CALM-32445 has optimization fixes for fetching the required VM info from the Azure account, which avoid timeouts in the services. Customers will need to upgrade Calm to 3.6 or later to fix this issue.
KB13323
Cloning UEFI VM fails with InternalException if Secure boot is enabled
Cloning UEFI VM fails with InternalException if Secure boot is enabled
A UEFI VM clone operation may fail with an InternalException error if Secure Boot is enabled for the cloned VM.
Steps to reproduce:
Create a UEFI VM without "Secure Boot".
While cloning the VM, enable the "Secure Boot" option.
The task will fail with an InternalException error.
Steps to verify the issue:
Connect to any CVM and run the following command to list all failed tasks:
nutanix@cvm:~$ ecli task.list operation_type_list=kVmClone status_list=kFailed
Get additional details from the failed task:
nutanix@cvm:~$ ecli task.get 669be5ab-98c3-4800-a9ec-9606c6bf29bb
VMs with the "Secure Boot" option should have q35 machine type, but cloning workflow does not change VM type, resulting in an error.This issue is resolved in: AOS 6.5.X family (LTS): AOS 6.5.2AOS 6.6.X family (STS): AOS 6.6 Please upgrade AOS to versions specified above or newer.WorkaroundDo not enable the "Secure Boot" option while cloning. Instead, clone the VM and once the task completes, update the clone and enable the "Secure Boot".
KB12425
NDB | How to delete a VM and Database if it had modifications done to it outside of ERA
If a DB VM that is managed by Nutanix Database Service (NDB) was modified or edited outside of NDB then there is a chance that NDB's Metadata for this VM is now corrupt and it cannot be deleted from NDB anymore. This article explains how to delete the data.
Nutanix Database Service (NDB) is formerly known as Era. In certain scenarios, when attempting to remove a database in Era, users may encounter the error message "Failed to remove the staging drive: 'cannot find the required device'". Usually, this happens when the DB has been modified or edited outside of Era; there is a chance that the metadata for this DB is now corrupt, and thus Era cannot remove the DB.
1. Delete the VM and the volume group attached to it from Prism Element to remove all entities associated with the database.
2. Remove the soft entities from Era using the commands below.
dbserver entry:
era > dbserver remove [ip=<> | id=<> ] skip_cleanup=true
db entry:
era > database remove engine=oracle_database name=<db-name> delete_time_machine=true skip_cleanup=true
""Verify all the services in CVM (Controller VM)
start and stop cluster services\t\t\tNote: \""cluster stop\"" will stop all cluster. It is a disruptive command not to be used on production cluster"": """"
KB15872
Self-Service: Application Create/Delete fails with "Rogue resume received by trl"
Application Creation or Deletion fails with Rogue resume received by trl.
Nutanix Self-Service is formerly known as Calm. In certain scenarios, deploying or deleting an application from Self-Service may fail with an error similar to:
Rogue resume received by trl 'aaa-bbb-ccc-ddd-eee' from child 'fff-ggg-hhh-iii-jjj'
This is usually noticed when Policy Engine is deployed along with Self-Service and there is a version mismatch between the two.
To check the Self-Service version, execute the following command as the nutanix user on the PCVM/CalmVM:
nutanix@PCVM:~$ docker inspect nucalm | grep VERSION
To check the Policy Engine version, use the below command:
nutanix@PCVM:~$ policy_ip=$(zkcat /appliance/logical/policy_engine/status | awk -F'"' '/ip/ {print $8}');ssh $policy_ip 'docker exec policy bash -ic "source ~/.bashrc; cat /home/policy/conf/commit_ids.txt"'
In the above scenario, the Policy Engine version is two major releases behind the Self-Service version.
To mitigate the issue, upgrade the Policy Engine to the latest compatible version. To upgrade the Policy Engine on a PCVM, use LCM. To upgrade the Policy Engine on a CalmVM, follow the Self-Service Administration and Operations Guide https://portal.nutanix.com/page/documents/details?targetId=Self-Service-Admin-Operations-Guide-v3_7_1_1:nuc-policy-engine-enable-t.html. If the issue persists after upgrading the Policy Engine, contact Nutanix Support http://portal.nutanix.com.
KB12685
NARF - Nutanix Activity Report Facilitator
Nutanix Activity Report Facilitator (NARF) is a tool to query and report performance information from Nutanix clusters.
The NARF program provides real-time and historic statistics for a running cluster.
In interactive mode, it displays a dynamic real-time view of the cluster with two sections. The top section is the list of nodes with their overall CPU, memory, and IO statistics; the bottom section is the virtual machine or volume group list with their statistics for CPU, memory, and IO. Both lists can be sorted by CPU, memory, IOPS, bandwidth, vdisks, and latency. The list of VMs can also be filtered by host.
In command-line mode (CLI), there are multiple reports available for nodes, VMs, and VGs. Reports can be run in three ways:
Live - specifying a count and an interval in seconds, much in the same way as commands like iostat or sar. The command will run the desired report N times spaced at the specified interval in seconds.
Time range - specifying a time range and an interval for the samples. The command will collect the statistics during the indicated time range spaced at the specified interval in seconds.
Exporter - specifying a count and an interval in seconds, the command will run the desired report N times spaced at the specified interval in seconds and will generate an export file in line protocol format.
NARF is integrated with Panacea and can be run as part of panacea_cli (bundled with NCC 4.6).
Interactive mode
Interactive mode can be started by running panacea_cli narf.py without parameters. When running in interactive mode, there is a series of hotkeys that can be used to sort, filter, and change the displayed information. A hotkeys panel with help on the available keys can be displayed by pressing "h". VMs can also be filtered by host, an option available with the TAB key.
Command-line mode
As explained in the description section, there are three options in command-line mode. The examples below go into detail and explain how to use these three modes.
Live mode: In live mode, it runs much in the way of commands like sar and iostat, with the possibility of specifying an interval in seconds and a number of iterations. It is necessary to indicate the entity report: "-n" for nodes, "-v" for virtual machines, and "-g" for volume groups. Optionally, it is possible to specify the sort criteria with "-s" and the report type with "-t". Not all report types are implemented for all entities; an error message will be generated if the report is not available yet. The following example displays a virtual machine IOPS report, sorted by IOPS as well, with an interval of 30 seconds, run two times. This sort of report can be run on a per-second basis, but it is important to keep in mind that Arithmos collects information every 30 seconds for most of the stats.
nutanix@CVM$ panacea_cli narf.py -vN asterix1 -t iops -s iops 30 2
Time range mode: In time range mode, a start time, end time, and sample rate can be specified. NARF will collect historic data from Arithmos and will average it at the specified intervals. Just like in live mode, it is necessary to indicate the entity report with "-n" for nodes, "-v" for virtual machines, or "-g" for volume groups. Optionally, it is possible to specify the sort criteria with "-s" and the report type with "-t". Not all report types are implemented for all entities; an error message will be generated if the report is not available yet. The minimum interval that can be specified in time range reports is 30 seconds and must be smaller than the difference between start and end time. When no data is available for a given stat, a value of "-1" is displayed. The following example displays a node latency report sorted by CPU with an interval of 3600 seconds (1 hour) during a time range of 3 hours. This is an AHV cluster and, therefore, does not have statistics available for hypervisor latency.
nutanix@CVM$ panacea_cli narf.py -nt lat -s cpu -S 2022/02/16-13:00:00 -E 2022/02/16-16:00:00 3600
Export mode: In export mode, NARF creates a line protocol file that can be loaded natively into InfluxDB. At the moment, export mode collects overall stats for nodes and virtual machines; it does not support reports for specific entities or report types.
nutanix@CVM$ panacea_cli narf.py -eS 2022/02/09-09:00:00 -E 2022/02/09-12:00:00 3600
Usage
Usage can be displayed with the "-h" parameter, like most Linux/UNIX commands.
nutanix@CVM$ panacea_cli narf.py -h
KB12810
Nutanix Files - Setting up API user on Varonis side gets "Error Code 401 Unauthorized"
If the username configured in Files for the API contains upper-case letters, API authentication may fail with "Error Code 401 Unauthorized"; lower-case letters may be required when configuring the account.
On the file server, log into the minerva leader and look for the following log messages (example user named "FilesAPIUser"). You can see in ~/data/logs/minerva_nvm_gateway.out that the "List of file server users" messages contain the API user with upper-case letters:
nutanix@FSVM:~$ grep "List of file server users" data/logs/minerva_nvm_gateway.out
A quick "allssh" command to use:
allssh 'grep "List of file server users" data/logs/minerva_nvm_gateway.out'
If you check the ~/data/logs/aplos.out logs, however, you find this:
nutanix@FSVM:~$ allssh 'grep "ERROR.*exist in the system" data/logs/aplos.out'
Go into the web UI for the partner server settings and remove the API user. Re-create the user with all lower-case letters. Then go to the partner software (such as Varonis) and try again.
KB6156
Genesis crashing "No vmknic exists in same subnet as eth0/eth2"
This article describes an issue where genesis is crashing with the message "No vmknic exists in same subnet as eth0/eth2."
While adding a new node to a cluster, the node is not being discovered because genesis is crashing and Foundation is not starting. genesis.out log shows: nutanix@CVM~$ less ~/data/logs/genesis.out
There are two possible reasons for the issue:
1. The CVM eth0 has an APIPA address. Modify /etc/sysconfig/network-scripts/ifcfg-eth0 and assign a static IP address, gateway, and network mask. Change BOOTPROTO="dhcp" to BOOTPROTO="none". Edit the file as follows (a hedged example of typical file contents is sketched below):
nutanix@CVM~$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
2. The host vmk port has an APIPA address assigned. The host vmknic IP configuration can be checked with esxcfg-vmknic -l. Assign a static IP either through IPMI/DCUI or, if there is an SSH connection through the CVM, using the command below:
[root@ESX:~] esxcfg-vmknic -i <ip> -n <net_mask> <port_group>
After making either of the changes above, reboot the CVM:
nutanix@CVM~$ cvm_shutdown -r now
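A hedged sketch of a typical static ifcfg-eth0 configuration; all addresses below are placeholders and must be replaced with values valid for your network:
DEVICE="eth0"
TYPE="Ethernet"
ONBOOT="yes"
BOOTPROTO="none"
IPADDR="10.1.1.20"
NETMASK="255.255.255.0"
GATEWAY="10.1.1.1"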
KB6828
Remove User From PC Custom Role
You may encounter a scenario where adding a user to a custom role in Prism Central, typically when using a Category-based entity assignment, creates a situation where you cannot remove the user because the role does not populate the fields in "Role Assignment" or "Manage Role", and you are therefore also unable to delete the Custom Role itself.
When using Category-based entities for Custom Roles in Prism Central, you can encounter a scenario in which the "User and groups" and "Entities" fields do not show the data the Role Assignment was created with. This creates a situation where you are unable to remove the user or group and, in turn, unable to delete the Custom Role at all.
From nuclei on the CLI of the Prism Central VM, you can parse the different Access Control Policy objects and filter them down to their associated role, then dump them to a file named "access_control_policy_by_role" (this can be named anything; just alter the line below to fit your file name).
for i in `nuclei access_control_policy.list count=400| tail -n +6 | awk '{print $1}'`; do echo ========= $i ========= >> access_control_policy_by_role; nuclei access_control_policy.get $i | grep -A2 role_reference | grep name >> access_control_policy_by_role; done
When you cat the above file, you should see output similar to the below:
nutanix@CVM:~$ cat access_control_policy_by_role
You can then delete the Role by using the below (where "default-0c604087595e" is the line above "name: BP-Test"):
nuclei access_control_policy.delete default-0c604087595e
KB13782
Calm NCInternalProjectMigration and DMProjectsVPCMigration task fail after upgrading to calm 3.5.2 or later
Domain manager fails to migrate Default project after upgrading to Calm 3.5.2 or later.
Customers upgrading Calm or Prism Central may notice that the domain manager migration fails and triggers a DomainManagerServiceDown alert in Prism Central with the following message:
Domain Manager data migration failed for release version 3.5.2, Please contact customer support. Following are the failure details,
Discovered that the Domain Manager internal service is not working: 'Domain Manager data migration failed for release version 3.6.2, Please contact customer support. Following are the failure details,
Migration failure can manifest in different ways:
Unable to manage VMs using PC after upgrading Prism Central.
New VM creation fails after upgrading PC to 2022.6 or above with ENTITY_NOT_FOUND: Category Project:_Internal does not exist
After the PC upgrade, RBAC users are unable to open the VNC console (Error fetching VM details).
Recovery plan execution fails with error: ENTITY_NOT_FOUND Category Project:_internal does not exist
The Calm UI shows both default and _internal projects with no entities associated with them.
Degraded recovery points.
Updating a VM in Prism Central fails with the error: Cannot read properties of null (reading 'status')
Projects filtering does not work, for example, when assigning a VM to a project. In the "Manage VM Ownership" popup, searching for projects with a matching keyword will not filter.
This article only applies to the above error messages from the DomainManagerServiceDown alert and the exceptions logged in the /home/nutanix/data/logs/domain_manager.out* log as shown below:
2022-08-18 17:34:16,056Z ERROR category.py:1026 Entity is not updated. Category value: default already exists [ERROR] base:44: Migration failed, migration-key: nc_internal_project_migration
Note: After the migration fails, the "domain_manager.out" log file may not be updated. If the PC log collection duration does not cover the failure date/time, "domain_manager.out" will not be collected. Collect the file manually.
Contact Nutanix Support http://portal.nutanix.com to resolve the issue.
KB8517
The HPE Smart Array P408i-p SR Gen10 Controller missing
null
If the system does not boot into the hypervisor after an abrupt power-off, follow the procedure below.
Log in to Prism and check for any warnings, errors, or alerts related to boot drive failures, and check for any alerts, errors, and warnings by logging into the iLO. If you do not find any issues, log in to the failed node via the iLO web console and check whether the system is looking for any boot media to boot from. If the boot drive is missing, try the following procedure:
Reboot the system and, during system boot, press F9 to get into the System Utilities.
Next, go to System Configuration.
Select BIOS/Platform Configuration (RBSU) under System Configuration.
Check whether both the 208i and 408i controllers are visible.
If the 408i controller is missing, get into "PCIe Device Configuration" from "BIOS/Platform Configuration (RBSU)".
Select the "Embedded RAID1: HPE Smart Array E208i-a SR Gen10".
Disable the PCIe Option ROM.
Save the changes and reboot the server.
During the reboot, press F9 once again and get into "System Utilities -> BIOS/Platform Configuration (RBSU)".
Check whether the HPE Smart Array P408i-p SR Gen10 Controller is available now. (If not, repeat the same steps for the "HPE Smart Array P408i-p SR Gen10 Controller" as well.)
Once the HPE Smart Array P408i-p SR Gen10 controller is visible, go back to the previous step and enable the Option ROM option that was disabled as part of this debugging.
Set the "HPE Smart Array P408i-p SR Gen10 Controller" as the bootable device for Legacy Boot Mode by following the path below:
System Utilities -> System Configuration -> HPE Smart Array P408i-p SR Gen10 -> Set Bootable Device(s) for Legacy Boot Mode -> Select Bootable Logical Drive -> Logical Drive 1 (Logical Drive 1) Array A -> Set as Primary Bootable Device
Set the secondary bootable device also as "Logical Drive 1 (Logical Drive 1) Array A".
Now reboot the system and check whether the system is able to find the hypervisor.
If the system is still not booting into the hypervisor after setting the "HPE Smart Array P408i-p SR Gen10" as the legacy boot device, the logical volume itself is corrupted, so create the logical volume. After creating the logical volume, follow the Nutanix reactive break-fix procedure to recover the node, described after the volume creation steps.
Steps for logical volume creation:
Reboot the system and press F9.
System Utilities.
System Configurations.
HPE Smart Array P408i-p SR Gen 10.
Array Configuration.
Create an Array.
Select the two physical disks by checking the checkbox for the two physical drives. Proceed to the next form.
Select the RAID level as "RAID 1". Proceed to the next level.
Give the logical drive a label.
Leave the remaining details as they are.
Save the changes.
Set the above newly created logical drive as the primary and secondary boot device as mentioned below. Then follow the Nutanix hypervisor break-fix procedure on the failed node (quick steps given below).
Set bootable logical drive/volume:
Select the "HPE Smart Array P408i-p SR Gen 10" controller.
Select Configure.
Under Configure, select "Set Bootable Logical Drive/Volume".
Select the above-created logical drive/volume.
Select both the "Primary" and "Secondary" Boot Logical Drive/Volume as the newly created logical volume.
Select OK and finish the process.
Nutanix Hypervisor Break-Fix Procedure - Reactive Mode
Once you have done all the above workaround steps, log in to Prism and select the failed node from the Hardware -> Diagram page.
Select "Repair Host Boot Device". Select the failed host in the drop-down and click "NEXT".
Since the hypervisor is corrupted, Prism cannot reach the node, so it will fail to discover the node UUID.
Now, click the option to download the Phoenix ISO and download phoenix.iso to your local machine.
Log in to the corresponding node's iLO page, load the downloaded phoenix.iso in virtual media, and reboot the server.
The system will boot into Phoenix mode. Once Phoenix is loaded, the installation will take place.
Once the installation is completed, log in to the Prism page again, select the same host that is undergoing repair, click "Repair Host Boot Device", select the host, and click Next.
Based on the type of hypervisor, the system may ask you to upload the hypervisor bundle. (In some cases, the hypervisor image will be taken from the node; in other scenarios, we have to upload the hypervisor image.)
Once the hypervisor is installed, the recovery is completed and the cluster will return to a normal state. You should now be able to see the hypervisor up and running.
KB13538
Application-Consistent Snapshots May Fail for Windows VMs with the Oracle VSS Writer Installed
This article describes an issue where VSS application-consistent snapshots are failing on Windows guest VMs with Oracle DB installed after they are being migrated using Nutanix Move.
This article describes an issue where VSS snapshots may fail and alert A130165 "Application-Consistent Recovery Point created at <date> for the VM <vm-name> failed because Quiescing guest VM failed or timed out" may be seen on VMs with Oracle DB installed after they are migrated using Nutanix Move.
In the Windows VM, the Volume Shadow Copy Service (VSS) shows the following sequence of events in the application event log:
Source: VSS
Source: Nutanix
Source: Nutanix
Source: Nutanix
Source: Nutanix
Log Name: Application
The command "vssadmin list writers" does not show any errors, but it does show the presence of the Oracle VSS Writer:
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
To verify the issue, disable/stop the Oracle VSS writer and then try to take an application-consistent snapshot. If the Oracle VSS Writer is the cause of the previous failures, this snapshot should succeed.
The issue is caused by the Oracle VSS writer reporting access-denied errors, possibly following a migration using Nutanix Move. Oracle Support has indicated that the VSS writer service may be disabled. Nutanix advises confirming with Oracle Support that this does not impact functionality in your environment.
KB6229
SAML Support for SSP
SAML support for Prism Central was recently introduced, however we have requests from the field for SAML support with SSP as well.
Problem Statement: SAML support for Prism Central was recently introduced, however we have requests from the field for SAML support with SSP as well. It is currently not supported. Reason for Request: As a Service Provider, we rely on SSP for all our customers to access and manage their environment. Most of them are demanding 2-factor authentication and we started implementing Nexus Hybrid Access Gateway https://www.nexusgroup.com/our-software/nexus-hybrid-access-gateway/, which is an identity provider software using SAML.
PM-1437 http://jira.nutanix.com/browse/PM-1437 is opened for tracking the progress of this feature. Please attach your cases to the FEAT and inform the customer about the same.
KB8069
FAQs - NCM Security Central
This article answers frequently asked questions regarding NCM Security Central (NCM SC).
This article answers frequently asked questions regarding Nutanix Cloud Manager Security Central (NCM SC).
Onboarding Nutanix on-prem
What are the prerequisites to configure Nutanix Clusters for security monitoring in NCM Security Central? The user should have admin access to NCM Security Central and the Nutanix Prism Central (PC) console. Additionally, the user should have the necessary access to create and set up a VM. Users are required to install an SC VM through Prism Central. The requirements for configuring the SC VM are:
4 vCPUs
8 GB RAM
40 GB HDD
One or two Ethernet interfaces
Prism Central IP address
Admin or user credentials of Prism Central
An account with NCM Security Central https://flow.nutanix.com/securitycentral
Access to portal.nutanix.com http://portal.nutanix.com
Follow the onboarding user guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-nutanix-account-add-sc-c.html for more information. Note: Check your AOS and Prism Central versions in the compatibility matrix for downloading the correct SC VM https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-scvm-compatibility-with-aos-pc-r.html.
What is the VM Vulnerability Scanner on my Nutanix clusters onboarded to NCM Security Central? NCM Security Central users can choose to integrate their Qualys VMDR to visualize and monitor vulnerabilities reported from Qualys VMDR from the NCM Security Central SaaS platform. The vulnerabilities are reported in the form of audit checks under Findings and Alerts, which helps users to share, schedule, and download reports on any vulnerability. Users can also choose to alert their team by creating ServiceNow tickets, all from the same Security Central platform. Know how to enable Qualys VMDR for your Nutanix VMs by referring to this link https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-qualys-integration-sc-c.html.
Do I need to install the SC VM on all Nutanix clusters? No, you only need to install one SC VM for every PC (Prism Central) installation. For example, if you have 1 PC managing 5 clusters, then you need to install only 1 SC VM.
Do I need to install the SC VM in the same cluster as PC? No, there is no need to install the SC VM in the cluster where the PC resides. The SC VM can be launched in any of the clusters managed by that PC. The only prerequisite is that the SC VM should have connectivity to the PC.
Can NCM Security Central be hosted in my private data center? No, NCM Security Central is available as a SaaS service only. You only have to install the SC VM in your cluster.
Can I enable NCM Security Central for specific clusters? Yes, you can enable NCM Security Central for specific clusters. You can update the list of clusters configured with NCM Security Central by referring to NCM Security Central User Guide: Configuring Nutanix Account in Security Central SaaS https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-nutanix-account-configure-nx-sc-t.html.
Where can I download the SC VM from? Use the Security Central (formerly Flow Security Central) downloads https://portal.nutanix.com/page/downloads?product=flowSecurity page to download the SC VM.
After onboarding is complete, how much time does it take for results to appear? It takes up to 3 hours for the audit results to appear, depending on the size of your infrastructure.
Can I monitor NC2-AWS workloads from NCM Security Central? Yes, onboarding your NC2-AWS instances is the same as onboarding for any Nutanix cluster.
Follow the steps in NCM Security Central User Guide: Nutanix Cloud Clusters (NC2) on AWS Onboarding https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-nutanix-cloud-clusters-onboarding-sc-c.html. The true hybrid cloud dashboard comes alive in NCM Security Central when you onboard your NC2-AWS, Nutanix on-prem, and AWS native public cloud accounts on NCM Security Central.
AWS
Why are AWS VPC Flow Logs required while adding your AWS accounts to NCM Security Central? VPC Flow Logs help users gain insights on potential ransomware attempts such as Port Scan attacks, DDoS attacks, Dictionary attacks, Data Leak attacks, etc. These alerts run through an ML-based algorithm to provide you with accurate threat detections with reduced false positives. NCM Security Central, while onboarding your AWS account, optionally asks to enable the collection of your AWS VPC Flow Logs to an AWS S3 bucket of your choice. If you have not enabled VPC Flow Logs yet, know how to enable AWS VPC Flow Logs by referring to NCM Security Central User Guide: Editing AWS Account https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-aws-account-manage-t.html. Know how to enable threat intelligence on your NCM Security Central by referring to this link https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-threat-detection-audit-enable-c.html.
Azure
Can we onboard Azure CSP and pay-as-you-go accounts in NCM Security Central? Yes, NCM Security Central can onboard any type of Azure account as it requires configuring Tenants & Subscriptions.
Why do we need to execute a PowerShell script during the Azure onboarding steps in NCM Security Central? Executing a PowerShell script eliminates the steps of manual role assignments and permissions.
Can I restrict access for my team members to only a few accounts? Yes, an NCM Security Central admin can restrict access to a limited cloud account for a user. See User configuration https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-user-add-sc-t.html.
Licensing
What licenses are required to use NCM Security Central?
For Nutanix on-prem: NCM Ultimate, NCI Ultimate, or NCI Pro + Security Add-on
For public cloud (AWS and Azure): NCM SaaS
Explore more on licensing from Nutanix Cloud Platform Software Options https://www.nutanix.com/products/cloud-platform/software-options.
How do I get access to Security Central? Log in to your My Nutanix account, scroll to Security Central, and select Start Trial.
How can NCM Security Central help customers with only an NCM Ultimate license? NCM Security Central can help users prepare for ransomware attacks by detecting potential threats such as Port Scans, Data Leaks, DDoS, etc., and can also help strengthen security posture by identifying critical misconfigurations in applications.
General
Can NCM Security Central help users in determining VM categories? Yes, using ML-based algorithms, Security Central can help you identify applications by determining the categories that need to be applied on a VM.
What kind of microsegmentation policies are supported in NCM Security Central? Currently, users can build application-based policies in Monitor Mode in NCM Security Central.
Is Flow Network Security - Next Gen (FNS NG) supported in NCM Security Central?
NCM Security Central supports FNS NG for advanced VLANs only, i.e., policy recommendations will be generated on VMs on advanced VLANs. Microsegmentation recommendations for Nutanix VPCs are still under exploration. Can I analyze logs using NCM Security Central? NCM Security Central can help users run queries on inventory and configuration of resources across AWS, Azure, and Nutanix on-prem. It can also run queries on IPFIX (network) logs to determine, for example, the amount of bandwidth of data transfer in the whole network or most chatty VMs. Can NCM Security Central help in ransomware/malware detection? Security Central offers detection of potential DDoS, Port Scan, Data Leak, Dictionary Attack, VMs communicating to malicious IP address, UEBA alerts and many more such anomaly-based use cases for ransomware detection. Is NCM Security Central for Nutanix dependent on AHV? Yes, currently, NCM Security Central only monitors AHV-based VMs. There are a few security audits that are not dependent on AHV, either user access-based or Prism Central configuration-based. Learn more about Investigate from NCM Security Central User Guide: Investigate https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-investigate-overview-sc-c.html. Can NCM Security Central identify host-level alerts? Yes, NCM Security Central can identify AWS EC2 and Nutanix VM-level alerts related to VM metadata, like ports open to the public. For Nutanix on-prem, if Qualys integration is enabled, then the user can receive OS-level vulnerability information. Learn more in NCM Security Central User Guide: Qualys Integration https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-qualys-integration-sc-c.html. How do I configure to get alerts on my Hipchat or Sumo Logic? Are these not part of NCM Security Central integrations? Users can utilize the Webhooks URL capability to get notifications of any desired platform (SIEM/SOAR). Learn more NCM Security Central User Guide: Webhook Integration https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-webhook-integration-sc-c.html. Can I restrict user actions on NCM Security Central? Yes, you can configure RBAC for users in NCM Security Central and also define read/write access at the cluster level. Refer to User Management in NCM Security Central https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-beam-administration-user-management-sc-c.html. Can a user-created custom audit check be added to any compliance policy? Yes, you can add any custom audit created by you to any custom-created compliance policy. Can I change the severity of any NCM Security Central audit? You cannot change the severity level for NCM Security Central default audits; however, you can always edit the severity level for custom audits you create. How can I see audit results related to a specific cloud resource, for example, AWS S3? NCM Security Central has a search bar for easy discovery of audit checks you want. All you need to do is simply type the text S3 and the search results will fetch you all the S3-related audits to choose from. Is there an installation guide I can follow? Yes, follow the set-up guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-nutanix-account-add-sc-c.html. Can I schedule reports to be sent to some recipients? 
Yes, you can schedule various reports to be sent over email on a daily, weekly, or monthly schedule. Learn how to schedule a report by referring to NCM Security Central User Guide: Scheduling Reports https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-security-reports-configure-t.html. How frequently are the audit results updated? The audit results are updated every 3 hours for Nutanix on-prem and are event-driven in nature for public clouds Azure and AWS. I have a VM for internal use, and I do not want NCM Security Central to report issues on it. How can I do this? You can use Suppression Rules under Settings to whitelist a VM from Security Central’s monitoring status. Learn more about Suppression Rules from NCM Security Central User Guide: Suppression Rules https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-suppression-rules-overview-aws-sc-c.html. How can I programmatically send alerts to my team’s inbox whenever there is a high-severity alert? From Settings > Integration Rules, you can create notification rules of your choice by selecting the required filters to get always updated on alerts from Security Central. Learn how to set notifications from NCM Security Central User Guide: Integration Rules https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Security-Central-User-Guide:flo-integration-rule-sc-c.html. What is the maximum size of a file that can be uploaded when marking issues as resolved in the Compliance view? A maximum of 10 MB of any file type can be uploaded while marking an issue resolved.
KB9609
Missing cluster data on Prism Central main dashboard
PC displays cluster UUID instead of cluster name and some statistic data is missing on dashboard.
After registering a Prism Element to Prism Central, on Prism Central main dashboard, PE cluster UUID is shown instead of cluster name along with some statistic data missing (No data available message). For example: Communication between PE and PC is working fine (checked with "nuclei remote_connection.health_check_all" and "nc -zvvv <PE IP> 9440"). On Prism Central, there are no errors in aplos, aplos_engine, insights_server or mercury logs.However, on the PE cluster side, prism_gateway logs are filled with the below error message: ERROR 2020-06-20 05:42:43,462 pool-5-thread-1 [] background.multicluster.ArithmosHistoricalDataSender.run:122 Error in Arithmos historical data sender On the PE side, you can confirm that the zknode is missing by running the following command: nutanix@CVM:~$ zkls /appliance/physical/clusterdatastate/<UUID>
Missing "/appliance/physical/clusterdatastate/<UUID>" zknode, prevents Prism Gateway from sending cluster data to Prism Central. To workaround the issue, you need to recreate ClusterDataState zknode, which is not by "ArithmosDataSender" thread at Prism Gateway start: Login to the cluster via SSH;Find the prism leader: nutanix@CVM:~$ curl -s localhost:2019/prism/leader|awk '{print "Prism ", $1}'; echo Restart prism using: nutanix@CVM:~$ genesis stop prism; cluster start Refresh Prism Central page and confirm cluster name is now displayed and statistic data is being received.
KB5351
LCM: Troubleshooting LCM on Dell Clusters
Troubleshooting guide for LCM (Life Cycle Manager) issues on Dell XC platforms.
Note: All Dell issues in iDRAC or PTAgent must be handled by Dell Support. LCM stands for Life Cycle Manager and is a 1-click upgrade mechanism introduced in AOS 5.0. LCM releases independently of AOS and has more frequent releases. LCM performs two major operations:
Inventory: Identifies the components that can be updated on XC.
Updates: Updates those components that need updates.
Teams involved: The following teams might be involved in handling an LCM support case on XC:
Dell Support team
Nutanix Support team
Nutanix Engineering team (LCM and Infrastructure)
LCM XC internals: LCM uses the PowerTools umbrella to perform updates on the XC cluster. iDRAC Service Module and PTAgent are two software prerequisites for performing updates on 13G platforms. Since LCM 2.4.4, LCM firmware upgrades use Redfish-based upgrades. LCM updates all components together on XC since they are packaged in a single firmware payload bundle (the bundle contains BIOS, BMC, NIC, and other applicable firmware, and dependencies are taken care of automatically). Hence, unlike NX, components are not individually updatable or visible.
Prerequisites:
iDRAC Service Module (ISM) minimum version is 3.4.0
PTAgent minimum version is 1.9.0
If customers are running a version lower than the above minimum versions, LCM will not show the available update. Instead, the customer needs to manually upgrade the cluster to the above minimum versions. Reach out to the Dell Support team for any assistance with the manual upgrade process.
Modules that can be upgraded using LCM:
Hardware Entities: A consolidated payload including NIC, HBA, BIOS, iDRAC firmware, etc. iDRAC is no longer a separate entity. It is bundled in "Hardware Entities" from LCM 2.3.2 onwards.
Dell Update Manager: The utility otherwise called PTAgent.
ISM: A lightweight OS software service called iDRAC Service Module (ISM).
For Software-Only platforms, Nutanix performs the qualification and has these listed in our compatibility matrix https://portal.nutanix.com/page/documents/compatibility-matrix. For the Dell Hardware Support matrix, refer to KB 4308 http://portal.nutanix.com/kb/4308. DELL releases a new firmware version on LCM once a quarter. However, this might be more often in case of a security vulnerability fix.
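To compare the currently installed ISM and PTAgent versions against the minimum versions above on ESXi hosts, the installed VIBs can be listed from a CVM. This is a hedged sketch; the exact VIB names for the Dell PTAgent and iSM packages can vary between releases, so the "dell" filter below is only an assumption used to narrow the output:
nutanix@cvm$ hostssh 'esxcli software vib list | grep -i dell'
If the reported versions are below the minimum versions, engage Dell Support for the manual upgrade.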
Find LCM leader
Upon encountering an LCM failure, log in to the LCM leader. Use lcm_leader as below:
nutanix@cvm$ lcm_leader
Logs
To triage LCM bugs on XC, the following logs are required:
CVM (part of NCC log collector):
/home/nutanix/data/logs/genesis*
/home/nutanix/data/logs/lcm_ops.out
/home/nutanix/data/logs/lcm_wget.out
PowerTools logs (part of NCC log collector in NCC 3.5.1), ESX:
/scratch/log/dellaum.log*
/scratch/log/PowerTools.log*
/scratch/dell/config/PTAgent.config
Note: LCM 2.3.1 automatically triggers the LCM log collector utility when an LCM inventory or upgrade operation fails. Refer to LCM Log Collection Utility https://portal.nutanix.com/kb/7288 for details.
LCM module issues
Run the below command on the LCM leader.
nutanix@cvm$ grep -r Exception /home/nutanix/data/logs/genesis*
The Exception message could be one of the following:
Underlying module failed to perform the inventory or update action: In genesis.out, you will see this message:
ERROR exception.py:47 LCM Exception [LcmRecoverableError]: Command (['/home/nutanix/cluster/bin/lcm/lcm_ops_by_host', '102', '302', 'ca1d77af-e62d-48c5-b0a1-8250e029cdff']) returned 1
Or:
ERROR exception.py:47 LCM Exception [LcmUnRecoverableError]: Command (['/home/nutanix/cluster/bin/lcm/lcm_ops_by_host', '102', '302', 'ca1d77af-e62d-48c5-b0a1-8250e029cdff']) returned 1
If an error with such a signature shows up, check lcm_ops.out on the same CVM. It should have a corresponding exception raising 'LcmOpsError'. The lines preceding this message should specify what went wrong with the operation.
Inventory failed to identify underlying hardware (specific to SATA DOM update support): genesis.out has the same signature as mentioned above. lcm_ops.out has the following line:
ERROR cpdb_utils.py:915 Failed to get managed entity list from modules
Check the preceding line for something that looks like:
INFO cpdb_utils.py:828 Family: dell_gen_13, Class: Host Boot Devices, Model: SATADOM 3IE3
In this case, SATADOM 3IE3 was detected in the system but LCM's Host Boot Device module does not support this model. Open a case with Nutanix Support https://portal.nutanix.com if you encounter this.
LCM framework issues
If you see other Exception messages in genesis.out that do not match the signature above, you might be hitting an LCM framework issue. The LCM framework is responsible for pre-checks, downloads, and orchestrating the updates.
Pre-checks: These errors are benign and catch current situations that could lead to failures in the future. This should be visible through the UI as well.
ERROR exception.py:47 LCM Exception [LcmRecoverableError]: Pre-check 'test_cluster_status' failed: 'precheck test_cluster_status failed.
Refer to KB-6375 https://portal.nutanix.com/kb/6375 to determine whether a node or service may be down in the cluster, causing the pre-check failure.
Download phase errors: The logs preceding the Exception message mention some form of download issue. Check lcm_wget.out to see if there are issues connecting to your server.
Other framework issues: Check for other messages in genesis.out that are LCM related.
Triaging errors in lcm_ops.out
There are multiple reasons why something fails in XC LCM. This section is specific to issues with the modules and not the framework. For every LcmOpsError seen in lcm_ops.out, you need to look at the preceding logs to understand what is happening.
Updates An update operation involves the following phases (last log message for each phase is provided below): Start of operation INFO lcm_ops_by_host:172 Starting to run state machine for LCM ops on Host The script lcm_ops_by_host is eponymous and mentions where the operation is happening. Set up the operation INFO lcm_actions_helper.py:112 action_list: [('get_shutdown_token', 'getting shutdown token', 1), ('forward_storage_traffic', 'forwarding storage traffic', 2), ('enter_host_mm', 'entering host into maintenance mode', 8), ('enter_cvm_mm', 'entering CVM into maintenance mode', 4), ('wait_for_services_shutdown', 'waiting for all services to shutdown on CVM', 2), ('shutdown_cvm', 'shutdown service VM', 2)] Run pre actions and setup for update operation INFO ergon_utils.py:151 Updating task with state 2000, message Finished to execute pre-actions on host Stage the modules INFO staging_utils.py:168 Staging is done for node x.x.x.x Run the operation INFO lcm_ops_by_cvm:292 Updating LCM entity 28472bde-a591-428d-b3c0-4bc119fba92e after update to version 1.3-0.7 This is where you would see most failures. If something fails, you will see such an error: ERROR ergon_utils.py:176 Error: Failed to run cd "/scratch/tmp/lcm_staging" && python ./nutanix/tools/lcm_helper 102 release.dell.firmware-entities.esx '' 'Firmware Payload' 'PAYLOAD_13G-1.0-1.0.tar.gz' on 10.xx.xx.167 with ret: 1, out: Installed payload version: 1.0-1.24 The lines following this line should give you the stdout of the update operation. That will help understand what failed during the update. Run post operations including staging cleanup INFO lcm_ops_by_cvm:302 Performing clean up post operation Mark operation complete lcm_ops_by_cvm:361 LCM operation 102 for 301 is successful Inventory workflow does not involve pre-actions and post-actions since the state of the node/CVM does not change when running inventory. Known XC failure outputs (Expanding on the "Run the operation" point from above): This issue means PTAgent is not able to confirm that the update went through. Lastupdate call failed: Error: {u'lastupdateresult': u'FAILED', u'lastupdatecompleted': u'2017-07-27T20:10:34.286943', u'lastupdatereturncode': u'1', u'agentstatus': u'UNKNOWN', u'lastupdatestarted': u'2017-07-27T20:10:33.998239'}. Look at dellaum.log and PowerTools.log from the node/CVM where the operation was run. You can find the node IP by looking at the "Staging the modules" log line. Update status call failed: PTAgent is unable to respond to the call. PTAgent is unable to respond to an update call: This happens when you get the "Connection refused error". The inventory times out and gives an error. This might happen if the iDRAC is not responding and we may see the following error when we run the API call: nutanix@cvm$ sudo curl -k -H "Content-Type: application/json" -X POST -d {} https://192.168.5.1:8086/api/PT/v1/host/SMF/inventory In cases like this, the Dell representative will have to upgrade the iDRAC firmware if any. Resetting the iDRAC on the host firmware is up to date may resolve the issue. If PTAgent fails on one of the nodes, always collect the PowerTools.log and dellaum.log to triage. Scenario: AOS upgrade stuck In this scenario, an AOS upgrade performed using LCM becomes stuck after the CVM which just booted-up from the upgrade is unable to bring up its services. This occurs when the Hades disk manager service fails in the process of making an API calls seeking disk information from Dell PTagent on the local hypervisor. 
If PTagent fails to respond with this information in a timely manner, the Hades service fails to initialize and most CVM services will stay down until the issue is corrected. Beginning with AOS 6.5.3 and later releases, this particular dependency between the Hades service and PTagent is removed. Below are the symptoms that one may see when encountering this issue.
Cluster status:
nutanix@CVM:~$ cs
Hades.out log:
nutanix@CVM:~$ less data/logs/hades.out
Genesis.out log:
nutanix@CVM:~$ less data/logs/genesis.out
list_disks command:
nutanix@CVM:~$ list_disks
API calls querying information about the local PTagent fail on the impacted CVM, returning "idraccache error" and "idracConnection error":
nutanix@CVM:~$ curl -s -k https://192.168.5.1:8086/api/PT/v1/host/agentinfo; echo
The host takes longer than usual to return the API calls for drives:
nutanix@CVM:~$ hostssh "grep '/api/PT/v1/host/drives' /var/log/dell/pta_access.log | tail -n5"
Solution: Please engage Dell Support for assistance with diagnosing and remediating the issues seen with PTagent.
KB13350
VM performance impacted by intermittent latency and freezing of different applications due to high Cassandra GC pause time
VM performance impacted by intermittent latency and freezing of different applications due to high GC pause time
In situations with heavy IO patterns combined with other metadata operations, like snapshot operations or Curator-related tasks, as well as memory fragmentation, the Java heap can often reach between 75% and 90% usage. Depending on the memory configuration of the CVM, we see different Java heap sizes:
CVMs up to 32 GB with 3 GB for the Cassandra heap
CVMs with 40 GB or more with 5 GB or 6 GB for the Cassandra heap
CVMs with dense storage and 36 GB or more with 6 GB for the Cassandra heap
The rule of thumb is that heap usage needs to stay mostly under 75%, and capped under 80% at most, due to this setting: -XX:CMSInitiatingOccupancyFraction=75 (CMS kicks in when the heap reaches this fullness threshold). When the Java heap is in the 90% and above range, significant time is needed to garbage collect and free up memory. The process can lead to frozen IO and increased latency for some or all VMs on the cluster. Garbage Collection (GC) here refers to GC performed by the Java Virtual Machine (JVM). Objects without references in the heap memory are deleted during the JVM's GC. During GC, Cassandra will be halted because the system is busy performing the GC activity. This is what we refer to as the pause time.
Cassandra has two components, memtables (memory) and SSTables (on disk). Data streams of key-value pairs are buffered into sorted memtables that are periodically flushed to disk, forming a set of small, sorted files (sstables). The commit log resides on the disk and is used for crash recovery. Writes are simple and quick in LSM trees. The Cassandra write process persists a write to the commit log first, so it is possible to recover if a crash occurs before the write operation returns. After that, the operation gets applied to the memtable. At this point, the write is successful on this node and returns successful completion. To keep the memory footprint in check, memtables are flushed to sstables, and then the memtable and commit logs are cleared.
To check for GC pause intervals, you can grep for "GCInspector" in the system INFO log inside the "/home/nutanix/data/logs/cassandra" folder. For Cassandra heap usage, you can grep for "current heap size" in the system INFO log inside the "/home/nutanix/data/logs/cassandra" folder.
GCInspector logging in the system log: we can see that the GC process had to halt IO for 5.8 seconds and 6.2 seconds, respectively.
system.log.WARN:
WARN [Service Thread] 2022-06-20 08:35:38,567Z GCInspector.java (line 301) ConcurrentMarkSweep GC in 5837ms. CMS Old Gen: 3450903712 -> 3380248792; Par Eden Space: 1431699456 -> 0; Par Survivor Space: 20276032 -> 0
The below command filters ConcurrentMarkSweep times for values over 500 ms. Either remove the date filter or specify the day of interest:
allssh 'grep 'ConcurrentMarkSweep' ~/data/logs/cassandra/system.log.INFO* | grep 'YYYY-MM-DD' | sed 's/ms.//g' | awk "{if (\$13 > 500) {print a\" \"\$0}}" | sort -u -k4'
Cassandra heap usage logging can also be found in cassandra_monitor.INFO (/home/nutanix/data/logs/cassandra_monitor.INFO), showing in this example a heap usage of 93.5%, calculated by dividing the current heap size by the max heap size and multiplying by 100: (4853239504/5189795840)*100.
I20220620 08:39:35.832767Z 11965 cassandra_monitor.cc:5746] Cassandra current heap size: 4853239504, Max heap size: 5189795840
To see Cassandra heap usage over 75%, use the below example:
for i in $(svmips); do ssh $i cat ~/data/logs/cassandra_monitor.INFO | grep 'Cassandra current heap size' | grep 'YYYYMMDD' | sed 's/,//g' | awk '{d = $1; t = $2; a = $9; b = $13; print d,t,"Cassandra current heap size: "a,"Max heap size: "b,"Cassandra Heap Percentage: "(a/b*100)}' | awk '$15>75'; done
The cassandra_monitor.INFO log file also contains warnings when the Cassandra heap usage is high:
nutanix@CVM:~$ allssh "grep -C1 'over the alarm level' ~/data/logs/cassandra_monitor.INFO"
Panacea also shows the issue in the GC stats with the MaxPauseTime (showing the maximum during the last 60 seconds) and TotalPauseTime (sum for 60 seconds) metrics. These metrics are taken from /home/nutanix/data/logs/sysstats/cassandra_gc_stats.INFO and are plotted every minute. The more granular instances of high pause times can be taken from the Cassandra system.log.INFO while looking for ConcurrentMarkSweep. The below example is from the Generic Graph Plotter utility:
Figure 1: MaxPauseTime and TotalPauseTime
NOTE: Create a Tech-Help and have the findings/RCA validated with an STL before proceeding with the gflag changes. High GC pause time causes severe slowness in Cassandra, which has an intermittent effect on general cluster latency. If all the details mentioned in the description have been verified, an increase of the Cassandra heap as well as the Common Memory Pool (CMP) should be considered. Below is an example of a Java heap increased to 7 GB as well as the Common Memory Pool increased to 19 GB (the customer was running on 17 GB CMP before; for more information about the CMP, please refer to ISB-101 https://confluence.eng.nutanix.com:8443/display/STK/ISB-101-2019%3A+Increasing+the+CVM+Common+Memory+Pool):
nutanix@CVM:~$ /home/nutanix/serviceability/bin/edit-aos-gflags --service=cassandra --all_future_versions
A Panacea signature has been added in NCC-4.6.4 for "Cassandra GC Pause High". Look for the table name "HighCassandraGCTotalPause", table title "Cassandra GC Pause High". It shows the "Latest Occurrence" and "Max TotalPauseTime (ms)".
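Once an agreed-upon heap increase has been applied and has taken effect, a quick way to confirm the running heap size is to inspect the -Xmx argument of the Cassandra JVM on each CVM. This is only a hedged verification sketch; the process filter and argument layout may differ between AOS versions:
nutanix@CVM:~$ allssh "ps -ef | grep -i [c]assandra | grep -o -- '-Xmx[^ ]*'"
Each node should report the expected maximum heap value after the change.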
KB11251
High average queue size reported in iostats for mdadm raid in AOS 5.19.x and AOS 5.20.x
In AOS versions later than 5.19 and earlier than 5.20.2, there is a cosmetic kernel bug that makes mdadm raid stat values show unrealistic numbers, which may cause erroneous conclusions about the state of the RAID and also affects the SRE Insights graphs.
Clusters running AOS 5.19.x and AOS 5.20.x (before 5.20.2) are susceptible to a cosmetic kernel bug that causes mdadm raid values to show unrealistic numbers in iostat output. This may lead to erroneous conclusions about the state of the mdadm raid because no I/O appears to be statistically observed, and the SRE Insights graphs will also be incorrect. In this situation, mdadm does not show any issues in its configuration, which is as expected: nutanix@CVM:$ cat /proc/mdstat When looking at iostat output, the values reported for mdadm devices (mdX) are similar to the following output: extremely high aqu-sz and a continuous 100 in %util: nutanix@CVM:$ iostat -x 1
When observing these values, it does not mean that there is any issue with the state of the mdadm raid. They can safely be ignored and should have no ill effect on the node. The issue is due to a cosmetic CentOS kernel bug (related CentOS ticket: https://bugs.centos.org/view.php?id=18138) that is tracked with ENG-387883 https://jira.nutanix.com/browse/ENG-387883 and is fixed in AOS 5.20.2 (kernel version kernel-3.10.0-1160.31.1).
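If additional reassurance is needed that the RAID itself is healthy independently of the misleading iostat counters, the array state can be queried directly with mdadm. This is a hedged sketch; /dev/md0 is used only as an example device name, so substitute the md devices listed in /proc/mdstat on your CVM:
nutanix@CVM:$ cat /proc/mdstat
nutanix@CVM:$ sudo mdadm --detail /dev/md0 | egrep "State|Active|Failed"
A healthy array reports a clean state with no failed devices.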
KB16303
Prism Central Login fails immediately after upgrade due to incorrect timezone
When an admin sets a hardware clock timezone on the Prism Central VM to a timezone that is ahead of UTC, a login failure is seen immediately after a Prism Central upgrade. Because the Ikat certificates are signed with the wrong time, the certificate validity starts at a future time, causing certificate verification to fail at Mercury.
Customers may notice a login failure immediately after upgrading their Prism Central. Ensure the following conditions match before following the solution:
1. Prism Central was upgraded less than 12 hours ago.
2. Trying to log in to Prism Central throws the below error:
Persisting new session failed, upstream connect error or disconnect/reset before headers. reset reason: connection failure
3. Mercury logs (~/data/logs/mercury.INFO) report a certificate verification failed error like the one below:
I20240224 05:09:54.326764Z 77163 tcp_server.cc:532] Accepted a new connection (id 15709, fd 143) from xx.xx.xx.aa:49460 on port 9444
4. The Ikat certificate validity starts at a time later than the current time reported by Prism Central:
nutanix@NTNX-PCVM:~$ openssl x509 -in /home/certs/IkatProxyService/IkatProxyService.crt -noout -dates
In the above output, the certificate's notBefore time is later than the time reported by the Prism Central VM.
5. The Prism Central VM configuration shows that the hwclock timezone is set to a timezone that is ahead of UTC. The below command should be run over SSH on the Prism Element cluster where the Prism Central VM is hosted. Replace the name of the Prism Central VM with the name the customer has set:
nutanix@NTNX-CVM:~$ acli vm.get PrismCentral | grep hwclock_timezone
In the above, the Prism Central VM is set to the Asia/HongKong timezone, which is 8 hours ahead of UTC.
6. Checking genesis.out on the Prism Central VM shows that the certificates were signed at a future time:
nutanix@NTNX-PCVM:~$ grep "Creating certificates for Ikat" ~/data/logs/genesis.out
In the above genesis.out logs, the certificate was signed on 24th Feb at 11:30 AM UTC, but the current time is only 24th Feb 05:59 AM.
The issue occurs due to the hwclock_timezone set on the Prism Central VM in AHV. AHV provides the BIOS time to the Prism Central VM when it boots after the upgrade. Since hwclock_timezone is set to a timezone that is ahead of UTC, AHV converts the current time to the timezone set in hwclock_timezone. In the above example, the Prism Central VM booted at 03:30 UTC, but due to the hwclock_timezone, AHV adds 8 hours and provides the time to the Prism Central VM as 11:30 AM, which the Prism Central VM treats as UTC as opposed to the Asia/HongKong timezone. When genesis starts, it starts with the UTC time set to 11:30 AM, and during the startup process certificates are signed for the Ikat services, so a wrong start time is recorded in the certificates. Later during genesis bootup, the PC VM reaches the NTP server, finds that its time is ahead, and pulls the time back to the correct time, which is 3 AM UTC. But the Ikat certificates are already signed and will not be re-signed, leaving them invalid. To resolve the issue, follow the steps below:
1. Move the Ikat certs to a tmp directory:
nutanix@NTNX-PCVM:~$ mv /home/certs/Ikat*/*Service.crt ~/tmp/
2. Stop ikat_proxy and ikat_control_plane and then restart genesis:
nutanix@NTNX-PCVM:~$ genesis stop ikat_control_plane ikat_proxy; genesis restart
3. Wait until /home/certs/IkatControlPlaneService/IkatControlPlaneService.crt and /home/certs/IkatProxyService/IkatProxyService.crt are re-created.
4. Wait a few minutes and try to log in to the Prism Central UI; it should work now.
To fix this issue permanently, the hwclock_timezone on the Prism Central VM should be set back to UTC from the Prism Element cluster using the below command:
nutanix@NTNX-CVM:~$ acli vm.update <vm_name> hwclock_timezone=UTC
After updating the hwclock timezone, the Prism Central VM needs to be shut down and started again. Do not perform a reboot; it must be a shutdown followed by a fresh start.
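After step 3, you can optionally confirm that the regenerated certificates now have a notBefore timestamp that is no longer in the future relative to the PCVM clock. This is a hedged verification sketch using the same certificate path referenced above:
nutanix@NTNX-PCVM:~$ date -u
nutanix@NTNX-PCVM:~$ openssl x509 -in /home/certs/IkatProxyService/IkatProxyService.crt -noout -dates
The notBefore value should be at or before the current UTC time reported by date -u.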
KB6443
Pre-Upgrade Check: test_ixgben_driver_esx_host
This pre-upgrade check runs during hypervisor upgrades on NX platforms. It checks whether the ixgben driver is present on the ESXi host.
test_ixgben_driver_esx_host is a pre-upgrade check run during the upgrade which checks whether the ixgben driver is present on the ESXi host when the ESXi version is earlier than 6.5.0U1 or 6.7.0 (not including these versions), since the ixgben driver is supported from ESXi 6.5.0U1 and 6.7.0. Therefore, this check ignores ESXi hosts whose version is greater than or equal to 6.5.0U1 or 6.7.0.
Error messages generated on the UI by this check:
[host_ip] hypervisor has 10G links with ixgben driver, please migrate to ixgbe driver before AOS upgrade. Please refer to Field Advisory #53 for more details
Cannot get model name for node for ixgben check
Nutanix has identified an issue where the ixgben (native) driver can cause a loss of network connectivity to a vmkernel interface if it has a VLAN configured on it. See Nutanix Field Advisory #53, ESXi network connectivity loss when using ixgben (native) 10GbE NIC drivers https://download.nutanix.com/alerts/Field_Advisory_0053.pdf. Note: ESXi hosts using the Intel 10GE NIC Driver (ixgbe) are NOT affected. To determine if any node in the cluster is using the ixgben driver:
nutanix@cvm$ hostssh 'hostname && esxcli network nic list | grep ixgben || true'
In the example output (omitted here), only node "ESX1-2" uses the ixgben driver. For clusters on an ESXi version before ESXi 6.5 Update 1, Nutanix recommends reverting to the ixgbe driver. KB-4940 http://portal.nutanix.com/kb/4940 provides the following information:
the procedure to revert the driver to ixgbe
how to restore management access if management access is lost
how to upgrade CVM Foundation to avoid the ixgben driver getting introduced during cluster expansions, ESXi upgrades, or other break-fix activities
Once this issue is resolved, you can verify it by running the NCC health check:
nutanix@cvm$ ncc health_checks hypervisor_checks esx_driver_compatibility_check
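To check which driver an individual uplink is bound to on a specific host, the per-NIC details can also be queried directly from ESXi. This is a hedged sketch; vmnic0 is only an example interface name, so substitute the 10G uplinks actually in use on your hosts:
nutanix@cvm$ hostssh "esxcli network nic get -n vmnic0 | grep -i driver"
The Driver field should report ixgbe after the revert has been completed.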
KB8038
Alert - A1064 - ProtectionDomainSnapshotFailure
Investigating ProtectionDomainSnapshotFailure issues on a Nutanix cluster
This Nutanix article provides the information required for troubleshooting the alert A1064 - ProtectionDomainSnapshotFailure for your Nutanix cluster. Alert overview The alert A1064 - ProtectionDomainSnapshotFailure is generated for multiple reasons. Some examples include: The Metro protection domain exceeds the supported limit for the number of entities.Cluster services are failing, and snapshot tasks are timing out.Guest VMs or files are not available. If cluster services are down or not working as expected, those should be resolved first before re-attempting to take any snapshots. Sample alert Block Serial Number: 16SMXXXXXXXX Output messaging [ { "Check ID": "Protection Domain Snapshot Failure." }, { "Check ID": "Protection domain has VMs being protected by other vstore(s) or in case of a cluster where Metro Availability is configured, Metro protection domain has more entities than supported." }, { "Check ID": "Unprotect VMs from other vstore(s) before snapshotting the concerned protection domain. In case of a cluster where Metro Availability is configured, make sure number of entities are within the supported limit." }, { "Check ID": "Stargate service may have been down or restarted during snapshot operation." }, { "Check ID": "Check if the Stargate service has restarted or is unavailable. Refer to KB 3784 to resolve Stargate service related failures. Once Stargate service is stable, subsequent snapshot creation tasks will succeed." }, { "Check ID": "Protection domain have vTPM enabled VMs present in it. Protection domain based snapshot operation is not supported for vTPM enabled VMs." }, { "Check ID": "Protect vTPM enabled VMs using a Nutanix DR Protection Policy to create Recovery Points." }, { "Check ID": "A requested snapshot of guest VMs and files in the protection domain did not succeed." }, { "Check ID": "A1064" }, { "Check ID": "Protection Domain Snapshot Failure" }, { "Check ID": "Protection domain {protection_domain_name} snapshot '{snapshot_id}' failed. {reason}." } ]
TroubleshootingAssess the error displayed. The last portion of the alert message will determine which steps must be taken to resolve the issue. It may include the following reasons: If taking an application consistent snapshot (a quiesce required snapshot), a virtual machine under heavy load may reach a timeout value before completing the snapshot.If using Metro Availability, ensure that all data protection guidelines https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide:sto-metro-availability-c.html are met.Stargate service on the cluster may not be stable.Run a full NCC health check and ensure that everything passes.Prism Central VM (PCVM) is protected by the protection domain. Protecting PCVM by AsyncDR is not supported. To ensure that a virtual machine is not under heavy load that may lead to timeouts for the snapshot process, you can either confirm in the guest OS using a command such as top or through the Prism UI in the VM table by selecting the virtual machines in the failing protection domain. Resolving the issueThe error message displayed at the end of the alert provides the reason for the alert. Often, using the information provided in this message will be enough to make the required change. For the issues that are commonly seen, some solutions may include: Re-running the snapshot and seeing if it completes on a second attempt.Removing entities from a Metro protection domain to be under the recommended maximum limit.Check if the Stargate service has restarted or is unavailable. Refer to KB-3784 http://portal.nutanix.com/kb/3784 to resolve Stargate service-related failures. Once the Stargate service is stable, subsequent snapshot creation tasks will succeed.Resolving other issues on the cluster that may be impacting performance or operation.Unprotect the VTPM-enabled VMs from the Protection Domain and protect it in protection policy via Nutanix DR https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:Disaster-Recovery-DRaaS-Guide.Unprotect PCVM, and use Prism Central Backup and Restore (Prism Central Disaster Recovery) https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc_2023_3:mul-cluster-pcdr-introduction-pc-c.html feature for protecting Prism Central. If you need further assistance or if the steps mentioned above do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Collect additional information and attach it to the support case. Collecting additional information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 http://portal.nutanix.com/kb/2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 http://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 http://portal.nutanix.com/kb/6691. nutanix@cvm$ logbay collect --aggregate=true Attaching files to the caseTo attach files to the case, follow KB-1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
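As referenced in the troubleshooting steps above, a full NCC health check can be run from any CVM to confirm overall cluster health before re-attempting the snapshot. The example below simply runs the complete set of checks:
nutanix@cvm$ ncc health_checks run_all
Review any FAIL or WARN results before retrying the protection domain snapshot.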
KB7407
How to reset nutanix or admin user password in Move
Password reset for "nutanix" and "admin" users in Move Appliance.
This article describes the procedure to recover the Move appliance user passwords.
To reset the admin password, refer to Move User Guide: Resetting Admin Password https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v4_7:top-reset-admin-password.html
To reset the nutanix password, follow the steps below:
1. Log in to the Move appliance VM as admin (username: admin). Note: the admin user on Move does not have all the required rights, so switch to the root user. On the Move VM, type:
[admin@nutanix-move:~]rs
2. Run the command /opt/xtract-vm/bin/cli-linux-amd64 and provide the username nutanix when prompted. You will enter the Nutanix-Move shell and see the output below:
root@move on ~ $ /opt/xtract-vm/bin/cli-linux-amd64
3. Once you are in the Nutanix-Move shell, run the password reset command as shown below and set a new password for the nutanix user:
localhost (Nutanix-Move) » password reset
You will now be able to access the Move GUI web page with the new password.
KB5435
Citrix Cloud Connect Integration with Nutanix AHV
This KB describes how to integrate Citrix Cloud with Nutanix AHV.
The GA bits for the Nutanix AHV MCS plugin for Citrix Cloud Connect have been posted on the Nutanix Portal. This KB describes how to integrate Citrix Cloud with Nutanix AHV.
Please refer to below mentioned presentation link which describes how to integrate Citrix Cloud with Nutanix AHV:https://drive.google.com/open?id=1hmnzNN22hM9D6yLgk9bSN3vi0BGsO5l7
KB2019
Active Working Set Size - SSD Tier
How to respond to customers looking for information around the current active working set size
Customers may raise questions about the SSD tier capacity and its ability to handle the existing active working set (i.e.: the ability for the SSD tier to handle data up-migration events and the ingesting of new writes, without becoming saturated).This KB addresses how to perform such an investigation and discusses current limitations.
The SSD tier is designed to remain at around 75% utilization to provide warm hits for frequently used data. Curator's ILM functions will clear out the disk periodically as the SSDs drift above this mark so that we retain some buffer space to accommodate sudden bursts of fresh writes from VMs, or a sudden up-migration of data. Note that up-migration is governed by a few Stargate parameters such as having accessed the same data 3 times within 10 minutes, it is usually unlikely that there is a sudden burst of up-migration.For more information on ILM, see KB 3569 https://portal.nutanix.com/kb/3569.An exception could be where the system is undersized in the first place. For example, an SQL server could be running batch jobs overnight and filling the SSD tier with its database's working set, followed by end-users powering on desktop VMs on Monday morning and forcing a swap-out of the working set with OS boot data.To investigate the capacity of the SSDs under the cluster's current workload, you can create an analysis graph to view the transformed usage or the "Physical Usage" of the SSD. First grab the disk IDs of the SSD tier disks, then input some or all of them into the graph. An example is shown in the following two screenshots. The interesting element from this graph is that we see that the red line (disk 85) is often flat lining, which occurs at ~94% SSD utilization, and which we can see corresponds to ~233.87GB on this node. The next disk (disk 90), currently at 214.92GB, mimics disk 85's graph but at a lower level - it is actually the other SSD on the same node. The other disks that have been graphed are all looking much more normal/acceptable.The explanation for the above is that there is one node (with disk 85 and 90) that hosts server VMs, while the other VMs host a VDI implementation. It turns out that in this instance, the server node is slightly overloaded from an active working set perspective, which may lead to noticeable latency while accessing those VMs during these SSD-usage peaks. This type of analysis is typically enough for customers to get a feel for how their environment is behaving with respect to working set size. It may lead to further questions and potentially to attempting to tune Curator or Stargate gflags - please consult with STL or Engineering in this instance. Attempting to determine the best size for additional nodes' SSDs This is a pre-sales activity that Nutanix Support is not currently able to add much value to. The collect_perf statistics and Weather reports https://confluence.eng.nutanix.com:8443/display/SO/Using+Illuminati+-+init.SRE#UsingIlluminatiinit.SRE-AccessingareportonIlluminati currently available do not expose this information, hence the reliance on the graph technique above. For more information on collect_perf, see KB 1993 https://portal.nutanix.com/kb/1993.We should let the customer know that it may be possible to get information as to the current working set size on a legacy SAN, but that this is something their current provider and/or Nutanix Sales/Partner Sales team should be investigating/calculating. Once this working set size is determined for the VMs that are going to be migrated to Nutanix, it should be possible to find an optimum SSD size and thus correct Nutanix node to purchase.
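As a starting point for the graph described above, the disk IDs of the SSD-tier disks can be pulled from the command line before they are entered into the Analysis page. This is a hedged sketch; the exact field labels in the ncli output can vary slightly between AOS versions, so adjust the filter if needed:
nutanix@cvm$ ncli disk ls | egrep "Id |Storage Tier"
Note the Id values of the disks whose Storage Tier is reported as SSD and use them when building the Analysis chart.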
KB14374
NDB | Unable to register a Standalone SQL Server Database in to separate time machine, when a Database Group with time machine already exists
Cannot register a standalone SQL Database with its own time machine due to "Auto Register Database" enabled
If a Standalone SQL Server Database VM is registered in NDB along with several databases via a Database Group sharing a single Time Machine, it is not possible to add another database to the same SQL Server VM with its own time machine. When registering the database, the Database Server is not listed under "Registered Database Server VMs". This occurs when the Database Server VM is configured with "Auto Register Databases":
As specified in the NDB SQL Server Management Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-NDB-SQL-Server-Database-Management-Guide-v2_5:top-sql-server-database-register-t.html: You cannot register a SQL Server database on a registered database server VM containing a database group with Auto-Register Databases option enabled.In order to register another database with its own time machine, first set the Auto Register Database to false for the specific Database Group in Era CLI: (replace dbgroupname with specific Database Group Name) database_group update database_group_name="dbgroupname" auto_register_databases=false When registering the database, The Database Server will now be listed under "Registered Database Server VMs" and allows the registration of a database with its own time machineNotes: Make sure that the Standalone Database does not share the same disks as the Databases in the Database Group. If they share the same disks, two different time machines will create snapshots of the same disk and thus result in twice the amount of storage used in the Nutanix Cluster.Any new Databases created on this DB Server will not be automatically added to the Database Group and will need to be manually registered in NDB.
KB15105
HPE DX-380 Plus FSC with WD 14TB/16TB/18TB/20TB may experience disks offline/resets events
HPE DX-380 Plus FSC with WD 14TB/16TB/18TB/20TB may experience disks offline/resets events
[]
For further troubleshooting, log collection and an HPE case will be needed. The HPE case needs to be opened and escalated to L3. The following data must be collected right away (immediately after a disk failure) and shared in the HPE case:
- AHS logs from iLO
- the CVM dmesg/kernel logs from the time of the issue
- disk models and firmware versions
- node serial numbers
- HBA MR216 FW version
- the CVM hades.out
- the CVM recent IO stat file: /home/nutanix/data/logs/sysstats/iostat.INFO
- storcli outputs, files, and dumps
You can find storcli under /usr/local/nutanix/cluster/lib/ on a CVM.
storcli /c0 show events file=events.log
storcli /c0 show alilog
storcli64 /c0 show events type=sincereboot file=SBevents.log
storcli64 /c0 show all > show_all.log
storcli64 /c0 show termlog > show_termlog.log
sudo ./storcli64.exe /c0 get snapdump
The last command will generate a dump zip file, which needs to be shared with HPE.
KB13369
Nutanix Database Service | Patching failed at "Applied Data Patch Step" and timed out
Patching fails at "Applied Data Patch Step" and timed out.
Patching, after running for 20 minutes, gets stuck at "Applying Data Patch" and fails after a 30-minute timeout.
Symptoms:
Patch 31771877 apply: WITH ERRORS
Cause: Invalid objects are not compiled before patching, or the execution permission has been revoked from PUBLIC.
Applies to:
All NDB (Era) versions
All DB server VMs with a database
NOTE: Nutanix Database Service (NDB) is formerly known as Era.
Logs to collect:
Log in to the DB server VM using the era user that was used during provisioning/registering, or the root user, and perform the following:
cd /tmp
Check the directory whose name matches the operation ID and change into it:
cd <Operation>
Collect the <Operation>_SCRIPT.log file and proceed with the following solution:
1. Log in to the database as sysdba:
sqlplus / as sysdba
2. Check whether any objects are still invalid:
Select owner, object_name, object_type from dba_objects where status = 'INVALID';
3. If invalid objects are present, run the below command:
grant EXECUTE on DBMS_BACKUP_RESTORE to PUBLIC;
4. Recompile the invalid objects and check again; there should now be no invalid objects:
@?/rdbms/admin/utlrp.sql
5. Re-submit the patching operation.
KB12877
Foundation - PSOD on Nutanix G8 nodes running ESXi 7.0u2a or ESXi 7.0u2d
PSOD on Nutanix G8 nodes running ESXi 7.0u2a or 7.0 u2d installed during foundation.
Purple Screen of Death (PSOD) is detected on nodes matching all of the following parameters:
Nutanix G8 nodes
Intel X550 NIC cards installed
CPU of the Intel(R) Xeon(R) Gold 6354 family with 18 cores or more (less than 32 cores)
OR
Nutanix G8 nodes
Intel X710 NIC cards installed
CPU of the Intel Xeon(R) Gold 5320T family
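To check whether a host matches these parameters, the ESXi build and the installed NICs can be listed from the host shell once it is reachable. This is a hedged sketch of standard ESXi commands; it assumes shell or SSH access to the host, and the CPU model itself can be confirmed from the host hardware summary in vCenter or the BMC:
vmware -vl
esxcli network nic list
Compare the reported ESXi build and NIC models against the conditions listed above.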
This is a known issue with ESXi 7.0 Update 3 and earlier. Refer to the VMware ESXi 7.0 Update 3c Release Notes https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html, which document that ESXi hosts might fail with a purple diagnostic screen due to issues with ACPI Component Architecture (ACPICA) semaphores. To fix the problem, image your nodes with VMware ESXi 7.0 Update 3c or later using Foundation.
KB14812
NDB - Time machine operations failed to get picked up by the DB server VMs
Troubleshooting flow for the issue where time machine operations failed to get picked up by the DB server VMs. Applies to NDB after v2.2 in the AMEX environment.
This article applies to any version of NDB after v2.2 in the AMEX environment. Whenever a new operation is triggered to be deployed to a DB server, the NDB server prepares the payload and waits for the DB server to request work. The DB server requests work periodically (every 10 seconds) and pulls all the tasks created for it. This periodic polling is performed by a process called "perform_work". This is a cron job registered in the DB server. In addition to polling for work, this process also works as a heartbeat so that the NDB server does not mark the DB server as UNREACHABLE. The cron job perform_work is a simple bash script which invokes a Python module, driver-cli, to pull the work from the NDB server. The driver-cli then invokes an asynchronous process, async-driver-cli, to execute the work provided. Perform_work sometimes fails to fetch the work in the AMEX environment where Pbrun is installed, resulting in the following symptoms:
The DB server is marked as ERA_DAEMON_UNREACHABLE by the NDB server.
DB server operations are not being dispatched.
Additional details about the DB server components discussed above:
Perform_work script location: /opt/era_base/era_engine/deploy/setup/perform_work.sh
Log file generated by perform_work: /opt/era_base/logs/perform_work.log
Log file generated by the driver-cli Python module: /opt/era_base/logs/drivers/cli/driver_cli.log
Log file generated by the async-driver-cli module: /opt/era_base/logs/drivers/cli/async_driver_cli.log
When a DB server is marked as UNREACHABLE, it essentially means that the NDB server has not received a heartbeat from the DB server. The primary reasons are either that the cron job perform_work is not running on the DB server, or that the cron job is running but failing repeatedly with errors and is thus unable to connect to the NDB server. The following steps help in debugging both reasons. The tags [DB Server], [NDB-UI], [NDB Server], etc. indicate where the step must be executed.
1. Identify the IP address registered with the NDB server. Go to the Databases page of the NDB UI to identify the DB server VM IP.
2. Ping the DB server IP address from the NDB server:
ssh era@NDB-Server-IP
3. If the DB server is not pingable, then ensure the following:
The DB server VM is up and running via Prism Element where the DB server is hosted.
The NIC for the DB server has a valid IP address.
Resubmit the NDB operation after ensuring the NDB server can ping the DB server. Exit troubleshooting here.
4. If the DB server is pingable, connect to the DB server VM using SSH and check whether the perform_work cron job is running. AMEX uses erauser as the username for their DB servers. Henceforth, we will assume the username to be erauser wherever needed.
ssh erauser@DB-server-IP
[erauser@DB-server-IP ]ps -eaf | grep perform_work | grep -v grep
5. If there is no perform_work process reported, check the crontab using:
[erauser@DB-server-IP ] crontab -l
If the perform_work crontab is disabled or commented, uncomment it and retry the NDB operation. Exit troubleshooting here.
6. If there is a perform_work operation running, check the file stats of the following log files. Log files must be owned by erauser (see Step 4) and should have permissions 666.
[erauser@DB-server-IP ] cd /opt/era_base/logs/drivers/
[erauser@DB-server-IP ] ls -hl host_operations.log
[erauser@DB-server-IP ] ls -hl eracommon.log
7. Correct the log file ownership and permissions:
Alter the file ownership using:
[erauser@DB-server-IP ] chown erauser:erauser <log filename>
Alter the file permissions using:
[erauser@DB-server-IP ] chmod 666 <log filename>
Retry the NDB operation. Exit troubleshooting here.
8. [DB Server] If the log file ownership and permissions are correct, then check driver_cli.log and async_driver_cli.log (see description) for any permission errors such as "Permission Denied". This can be quickly identified using the following grep commands:
[erauser@DB-server-IP ] grep "Permission denied" /opt/era_base/logs/drivers/cli/driver_cli.log
[erauser@DB-server-IP ] grep "Permission denied" /opt/era_base/logs/drivers/cli/async_driver_cli.log
An example of the error when the log file for the operation has been created with incorrect ownership:
"PermissionError: [Errno 13] Permission denied: '/opt/era_base/logs/drivers/postgres_database/snapshot_database/c051b059-8b5f-4bfe-aff3-85f24cd9c37d-2023-03-22-14:00:07.log'"
9. [DB Server] If there are permission errors reported for any log file, identify the offending log file, alter its permissions to 666 and its ownership to erauser, and retry the NDB operation. Exit troubleshooting here.
10. If there are no errors reported, consult the NDB Engineering team for further assistance.
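If the cron job appears healthy but the DB server is still marked UNREACHABLE, the most recent perform_work activity can be inspected directly from its logs. This is a hedged sketch using the log locations listed in the description; adjust the number of lines as needed:
[erauser@DB-server-IP ] tail -n 50 /opt/era_base/logs/perform_work.log
[erauser@DB-server-IP ] tail -n 50 /opt/era_base/logs/drivers/cli/driver_cli.log
Look for recent timestamps and connection errors toward the NDB server in this output.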
KB2763
Access to release-api.nutanix.com Fails During a 1-Click Upgrade
To automatically download the upgrade file during a 1-Click Upgrade, release-api.nutanix.com should be reachable. This KB article describes troubleshooting procedures to enable access to release-api.nutanix.com.
During a 1-Click upgrade, the Prism web console fails to automatically download the upgrade file with the following error:
release-api.nutanix.com could not be reached. Please check Name Server and Proxy settings.
To automatically download the upgrade file during a 1-Click upgrade, release-api.nutanix.com should be reachable. Note: If the customer is using a proxy, authentication to the proxy must be configured using one of the following methods: if using a multi-protocol authentication schema on the proxy, basic authentication must be the first authentication type in the schema, OR, if using only one authentication protocol, that protocol must be basic authentication. The Prism web console returns an error if release-api.nutanix.com is not reachable. To resolve this issue, check the following:
Name Server (DNS Server)
Proxy Settings
Refer to the following troubleshooting points for a solution.
1. In the Prism web console, check if a name server is configured. Navigate to Settings > Name Servers. If a name server is not configured, add a valid name server. If a name server is configured, ping the name server to verify that it is configured correctly.
2. Verify that you can ping an external website (for example, google.com). If you can ping an external website, verify that you can ping release-api.nutanix.com. Run the nslookup command to query the name server. If you cannot ping release-api.nutanix.com, verify whether the name server can resolve release-api.nutanix.com by using the following command:
nutanix@cvm$ dig release-api.nutanix.com
3. If you still get an error, check the firewall in your network. To verify the connection, try the following:
nutanix@cvm$ nc -z -w 1 -v release-api.nutanix.com 80
If the above command returns "Connected to <IP address>:80", the connection was successful. If it returns "Connection timed out" or something similar, the connection failed to get through. In this case, look at the firewall. Ensure that firewall port 80 is open. Port 80 is the default port which the cluster uses to connect to release-api.nutanix.com, and Nutanix recommends that you open it. Ensure that the Controller VM IP addresses and the virtual IP address of the cluster are in the whitelist of your firewall to allow traffic.
4. If ping works and you still get an error, check the proxy settings in the Prism web console. Navigate to Settings > HTTP Proxy. If the proxy settings are causing an issue, you might see the following in prism_gateway.log:
WARN 2016-04-13 17:17:28,525 http-nio-127.0.0.1-9081-exec-39 upgrade.retreivers.RetrieverImpl.findStagingServerSoftwareVersions:180 Failed reading upgrade software list from 'release-api.nutanix.com' java.io.IOException: Unable to tunnel through proxy. Proxy returns "HTTP/1.1 407 Proxy Authentication Required"
You might also have a transparent proxy configured in your network. If a transparent proxy is configured, you might see the following in prism_gateway.log:
WARN 2016-03-17 12:29:58,111 http-nio-127.0.0.1-9081-exec-6 upgrade.retreivers.RetrieverImpl.findStagingServerSoftwareVersions:180 Failed reading upgrade software list from 'release-api.nutanix.com' javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
In your proxy settings, allow all the traffic from the Controller VMs to the necessary destinations to fix the issues caused by proxy settings.
5. This error might also occur because of problems with the Prism service. If the Prism service is the cause of the error, you might see the following in the logs:
WARN 2016-03-03 05:38:46,192 http-nio-127.0.0.1-9081-exec-10 web.selectors.NutanixProxySelector.select:84 Failed get InetAddress for 'release-api.nutanix.com': release-api.nutanix.com: unknown error
Restart the Prism service:
nutanix@cvm$ allssh genesis stop prism
Review the output and confirm the Prism service is stopped.
nutanix@cvm$ cluster start
If the Prism web console does not show the upgrade as available, or the failed connectivity message does not disappear within a few minutes, and you have confirmed through the log files that connectivity and download were successful, then restart the Prism services.
Summary
Check the name server (DNS server).
Check the proxy settings.
Check for a transparent proxy in your network.
Check prism_gateway.log and restart the Prism service if the error is seen.
Logs
You can check the following logs: prism_gateway.log and automatic_download_support.log. Successful downloads appear as follows:
2015-02-05 19:23:44 INFO automatic_download_support:226 Finding upgradable versions for 4.0.1
While looking at automatic_download_support.log, test using the AOS version, because NCC, firmware, and BIOS logs are not recorded in this file.
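The DNS and firewall checks from this article can be combined into one quick test. Below is a minimal sketch, run from any CVM; it assumes the default port 80 mentioned above, so adjust it if your firewall policy differs.
# Quick reachability check for release-api.nutanix.com (illustrative only).
TARGET=release-api.nutanix.com

echo "== DNS resolution =="
RESULT=$(dig +short "${TARGET}")
if [ -n "${RESULT}" ]; then
  echo "Resolved to: ${RESULT}"
else
  echo "DNS lookup returned nothing - check the configured name servers"
fi

echo "== TCP reachability on port 80 =="
if nc -z -w 1 -v "${TARGET}" 80; then
  echo "Connection succeeded"
else
  echo "Connection failed - check firewall and proxy settings"
fi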
KB15343
Alert - A400123 - DepreciationOfLegacyQuota
Investigating DepreciationOfLegacyQuota issues on a Nutanix cluster
This Nutanix article provides the information required for troubleshooting the alert DepreciationOfLegacyQuota for your Nutanix cluster or Prism Central (PC) cluster.Alert OverviewThe DepreciationOfLegacyQuota alert is generated when Policy Engine is not enabled and the legacy quotas are still configured to be enabled.Sample Alert Block Serial Number: 16SMXXXXXXXX Output messaging [ { "Check ID": "Deprecation Of Legacy Quota Check" }, { "Check ID": "Quota policy help you define and enforce quota limits on infrastructure resources usage in a project. With this release of Prism Central 2024.1, legacy quotas have been deprecated. Any existing/legacy quotas defined in projects shall not be enforced until you enable Policy Engine" }, { "Check ID": "Please refer to KB-15343" }, { "Check ID": "Legacy Quota will not work as expected." }, { "Check ID": "A400123" }, { "Check ID": "Legacy quotas are deprecated." }, { "Check ID": "Legacy quotas are deprecated. Any existing/legacy quotas defined in projects shall not be enforced until you enable Policy Engine" } ]
Quota policies enforce a usage limit on an infrastructure resource for projects and restrict project members from using more than the specified quota limits. Quotas ensure that a single project or a few projects do not overrun the infrastructure. If the cluster runs out of a resource, project members cannot use the resource even if the project has not reached its specified limit. With the Prism Central 2024.1 release, legacy quotas have been deprecated. Any existing/legacy quotas defined in projects shall not be enforced until you enable Policy Engine.
Resolution:
Enable Policy Engine under Admin Center > Settings > Governance Policy. On enabling Policy Engine, the existing quotas for projects shall be enforced automatically.
If you need assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/. Collect additional information and attach it to the support case.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 http://portal.nutanix.com/kb/2871.
Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 http://portal.nutanix.com/kb/2871.
Collect a Logbay bundle using the following command. For more information on Logbay, see KB 6691 http://portal.nutanix.com/kb/6691.
nutanix@CVM$ logbay collect --aggregate=true
If the logbay command is not available (NCC versions prior to 3.7.1, AOS 5.6, 5.8), collect the NCC log bundle instead using the following command:
nutanix@CVM$ ncc log_collector run_all
Attaching Files to the Case
When viewing the support case on the support portal, use the Reply option and upload the files from there. If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 http://portal.nutanix.com/kb/1294.
""Firmware Link\t\t\t\tMD5 Checksum"": ""Link\t\t\t\tMD5=f7b2b5c44b243d690f1b30233e329651""
KB11208
Nutanix Move | Hyper-V Failover Cluster inventory returns no VMs for Move-4.0.0
Nutanix Move-4.0.0 inventory shows 0 VMs if the source Hyper-V environment is added with Failover Cluster IP/FQDN.
Nutanix Move-4.0.0 inventory shows 0 VMs if the source Hyper-V environment is added with Failover Cluster IP/FQDN. Sample screenshot: Manually refreshing the inventory gives the following error on Move UI. Sample screenshot: Sample error message: Failed to get inventory for source 'XYZ'. [DisplayMessage="Failed to read VM's Hard Drive info", Location="/hermes/go/src/hypervisor/hyperv/utils.go:188"] Move HyperV agent internal error. (error=0x8000)
Nutanix Engineering is aware of the issue and is working on a fix in a future release. There are currently two workarounds:
Add each Hyper-V host as a standalone environment instead of using Failover Cluster IP or FQDN.
Deploy Move-3.7.2 if Failover Cluster IP or FQDN must be used.
KB13554
[NKE] Pods fail to mount on OpenShift or NKE with error: No such file or directory
NKE Pods fail to mount on Openshift or NKE when using Files
Pods fail to mount on OpenShift or NKE when using an NFS share for Nutanix Files, failing with the error "failed, reason given by server: No such file or directory". The following error is seen in the CSI logs; the mount fails because the directory cannot be found:
812 13:18:13.672304 1 mount_linux.go:175] Mount failed: exit status 32
When the share is mounted manually with a leading '/' in the pvc-XXX export path, it works:
[root@ocp1-dapps-win02 tmp]# mount -vvv -t nfs fs1-jab.nutanix.fedins.com:/pvc-c06c2686-d3bd-4044-900e-d77796dfa8b0 ./tmp/
The same error is reproduced when mounting manually without the leading "/":
[root@ocp1-dapps-win02 tmp]# mount -vvv -t nfs fs1-jab.nutanix.X.com:pvc-c06c2686-d3bd-4044-900e-d77796dfa8b0 ./tmp/
In CSI 2.5, one of the prerequisites is enabling only NFSv4 on the Files share. Check the share settings with the customer in Prism Element. See the CSI Volume Driver v2.5 prerequisites: https://portal.nutanix.com/page/documents/details?targetId=CSI-Volume-Driver-v2_5:csi-csi-plugin-prerequisites-r.html
KB13877
Entering keys on the Objects Browser gives “The Credentials are incorrect. Please provide correct credentials” error
When a user tries to login via the Objects Browser to access a bucket in an object store, a “The Credentials are incorrect” error is returned.
Note: The error discussed in this article is produced by the Objects Browser, but it or similar errors may apply to other S3 clients as well. When a user tries to enter access and secret keys to access a bucket in an object store, it fails with the "The Credentials are incorrect" message. On a PCVM, /home/nutanix/data/logs/prism_gateway.log will show "ClientAuth is not enabled" with the below trace:
WARN 2022-10-19 16:08:17,474Z http-nio-127.0.0.1-9081-exec-9 [] auth.commands.CACAuthenticationProvider.isAutoLoginEnabled:370 ClientAuth is not enabled.
The below trace is observed in the IAM pod logs (Note: steps to access pod logs are shown below):
{"level":"info","msg":"requested headers: %vmap[Content-Type:[application/json] Accept:[application/json]]","time":"2022-10-19T16:54:10Z"}
How to check the IAM pod logs on Objects (which runs on an MSP cluster):
Step 1: SSH to the MSP cluster. From a PCVM, list MSP clusters:
nutanix@PCVM:~$ mspctl cluster list
SSH to the Objects cluster:
nutanix@PCVM:~$ mspctl cluster ssh <cluster_name>
Step 2: List the IAM pods and print the pod logs. List the IAM pod to get the pod name:
[nutanix@default-0 ~]$ kubectl get pods -n ntnx-base -l=app=iam
Print the pod logs:
[nutanix@default-0 ~]$ kubectl logs -n ntnx-base <iam pod name>
The user may also check the Active Directory logs for more details about the failed authentication.
This issue can occur if the password of the Active Directory service account, specified in the directory configuration of the object store, expires.
Step 1: To isolate whether this is an Active Directory-specific issue, try with a local user; that should work. To check the list of users: Prism Central GUI -> Services -> Objects -> Access Keys.
Step 2: Update the password used to connect to Active Directory in the Objects UI: Prism Central GUI -> Services -> Objects -> Access Keys -> Configure Directories. Select Edit next to the directory, update the password for the service account, and click Save.
]"
This article lists useful commands for AOS components and ESXi.
KB9065
Centos 7, Redhat7 VM can hang or panic when CPU hot add feature is used
CentOS 7.X and RedHat RHEL 7.X guest VMs, including Nutanix VMs such as PCVM and FSVM, can be impacted by a RHEL/CentOS bug and hang or panic when the CPU hot add feature is used in ESXi or AHV environments. This KB explains the symptoms and provides recommendations for customers.
It has been identified that using the CPU hot add feature on a guest VM running either CentOS 7.X or RedHat (RHEL) 7.X can cause the VM to hang or panic, especially when the feature is used under high memory pressure inside the guest VM. Nutanix VMs using CentOS 7.X may also be impacted when CPU hot add is used (this includes Prism Central VMs and Nutanix Files VMs). This symptom has been reproduced in Nutanix labs and occurs regardless of the hypervisor used (AHV or ESXi); it is not a hypervisor bug. The root cause has been determined to be a bug in the CentOS/RHEL kernel of the affected VM itself rather than in the Nutanix AHV or VMware ESXi hypervisors. This KB documents the fix plan tracked to address the issue.
If a VM hangs or panics after the CPU hot add feature was used, first make sure the affected VM is running CentOS 7.X or RedHat RHEL 7.X. If it is not a CentOS 7.X or RedHat 7.X VM, this KB likely does not apply. Nutanix internal testing has confirmed that the condition mostly happens when CPU hot plug is attempted with very low free memory available inside the guest VM. In particular, if the free memory in the UVM is below 120MB and CPU hot add is used, then regardless of how many vCPUs are being hot added, the operation may cause CentOS7/RHEL7 UVMs either to crash (kernel Oops panic) or to hang (in AHV); on ESXi, the below panic signature is seen in the vmcore-dmesg.txt file. Core files are generated if the VM panics and can be found in /var/crash in the guest VM. If the hypervisor is AHV and the affected VM is hung, core files can be obtained manually if the guest CentOS or RHEL VM was configured to allow this. Details on how to configure and generate this are available in KB 9066 (AHV | How to generate dump files for hung Linux guest VM), if the customer wishes to capture the signature on a future occurrence. (Customers can be advised to log a support ticket with Nutanix if KB 9066 needs to be followed.) The VM crashes because an invalid (NULL) pointer is dereferenced. /var/crash/vmcore-dmesg.txt shows the following panic signature:
[ 92.164060] CPU8 has been hot-added
Fix plan:
RedHat 7.X VM: The bug is officially fixed in RHEL 8.0. For RHEL 7.x, the official fix is planned for RHEL 7.9. See the RedHat solution https://access.redhat.com/solutions/4969471 for more information on the fix (this site requires a login account to be set up). Customers impacted by this issue can contact RedHat support for further assistance or to request a patch for their release.
CentOS 7.X VM: The same issue observed with RHEL 7.X has been reproduced and is now being tracked through the CentOS bug link. Customers using CentOS VMs can refer to CentOS bug ID 17324 and follow up on the fix status through the CentOS support link.
Nutanix PCVM/FSVM running CentOS 7.X: This issue is resolved in Prism Central 2020.11 or higher. Ensure Prism Central is upgraded to 2020.11 or higher before increasing CPU or Memory resources. This issue is resolved for Nutanix Files in 3.7.2 and 3.8. Ensure Nutanix Files is updated to 3.7.2 or 3.8 before attempting CPU or Memory resource changes.
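Before hot-adding vCPUs to a CentOS 7/RHEL 7 guest that cannot yet be upgraded, it is worth confirming the guest is not under the memory pressure described above. Below is a minimal pre-check sketch to run inside the guest; the 120 MB figure comes from the internal testing mentioned in this KB and should be treated as a guideline, not a hard limit.
# Check available memory before attempting CPU hot add (illustrative only).
THRESHOLD_MB=120
AVAILABLE_MB=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
AVAILABLE_MB=${AVAILABLE_MB:-0}

if [ "${AVAILABLE_MB}" -lt "${THRESHOLD_MB}" ]; then
  echo "Only ${AVAILABLE_MB} MB available - free up memory (or power the VM off) before hot adding vCPUs"
else
  echo "${AVAILABLE_MB} MB available - memory pressure is below the risk threshold noted in this KB"
fi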
KB4547
Supported driver version for Intel X520-based network adapters on Hyper-V on Windows Server 2012 R2
On Hyper-V clusters, installing the latest NIC driver for the Hyper-V host via Windows Update is generally not recommended, as this may result in a driver version that is not supported. This article provides guidance on the supported driver versions for Intel X520-based network adapters on Hyper-V clusters running Windows Server 2012 R2.
NOTE: Please see the Nutanix Compatibility Matrix https://support-portal.nutanix.com/page/documents/compatibility-matrix/ for hypervisor OS compatibility with your hardware. If your model is not listed, please reference your hardware vendor's documentation.
For Nutanix NX platforms: Encourage upgrading to Hyper-V 2016 due to the upcoming end of support for Hyper-V 2012 R2. Use the latest driver available from the Intel website, as most of this guidance was due to issues with Windows driver updates via Windows Update.
For Dell XC platforms: Use the Dell-recommended driver. Note that the Windows image that Dell flashes on the platform at the factory already contains Dell drivers, and Nutanix does not override them or install drivers on top of them.
KB13340
[NDB] Provisioning an Oracle RAC database using ASMLIB fails complaining about insufficient space available in the selected disks.
Provisioning an Oracle RAC database using ASMLIB fails complaining about insufficient space available in the selected disks.
This article applies to databases created with the ASMLIB driver. The issue can happen for the following reasons:
Incomplete or incorrect configuration of ASMLIB on the VM used as the source for creating the software profile.
Incorrect RPMs installed on the OS.
Wrong user permissions on the ASMLIB-related files.
Missing group for the grid and rdbms owner.
Logs to collect:
If the provision operation provisions the DB server and the database in a single operation, logs are collected on the ERA Server when multi-cluster is not enabled, and on the ERA Agent when multi-cluster is enabled: /home/era/era_base/logs/drivers/oracle_database/provision/<operation-id>
If the provision operation only provisions a database onto an existing DB server, the logs are stored on the DB server: /opt/era_base/logs/drivers/oracle_database/provision/<operation-id>
The provisioning operation also writes command execution logs into a temp directory: /tmp/<operation-id>. In this location, check the file <operation-id>_scripts.log, which has detailed output of each command run during grid and database installation.
To collect all these logs and check them offline, collect a diagnostic bundle from ERA: Administration → Diagnostics → Select Era Server or Era Agent (in case of point 1) → Click generate bundle. Administration → Diagnostics → Select DB Server (in case of point 2) → Click generate bundle.
1. Ensure that the correct versions of the oracleasmlib and oracleasm-support packages are installed on the VM used as the source for creating the software profile, using the command rpm -qa | grep <package_name>. For example, for Enterprise Linux 7, the ASMLIB RPMs should be:
oracleasm-support-2.1.11-2.el7.x86_64
2. Ensure the below file exists after the RPMs are installed. This file is created by the "oracleasmlib" RPM. If this file is not there, disk creation will fail.
ls -lrth /opt/oracle/extapi/64/asm/orcl/1/
3. Ensure that the grid and oracle users have the same primary group, and that the database owners also have the oracleasmlib driver group assigned as a secondary group in addition to their other groups. Issues have been seen later where the primary group is not the same and the database is not able to locate the ASM diskgroup even though it is mounted.
4. If the groups are different, modify the config files for both grid and oracle on the source host used for creating software profiles. The below file should have the same group as the Oracle ASM driver group:
ls -ld /u02/app/oracle/product/19.0.0/dbhome_1/bin/oracle
5. The below file from the DB software should show the correct group info:
/u01/app/19.0.0/grid/rdbms/lib/config.c
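The checks in steps 1-4 can be gathered in one pass on the source VM. Below is a minimal verification sketch; the paths, RPM names, and the example ORACLE_HOME are the ones used in this article, so substitute your own grid and database homes where they differ.
# Verify ASMLIB packages, library, and group ownership (illustrative only).
echo "== ASMLIB RPMs =="
rpm -qa | grep -E "oracleasmlib|oracleasm-support"

echo "== ASMLIB library shipped by the oracleasmlib RPM =="
ls -lrth /opt/oracle/extapi/64/asm/orcl/1/ || echo "Library missing - reinstall the oracleasmlib RPM"

echo "== Primary/secondary groups of the grid and oracle owners =="
id grid
id oracle

echo "== Group on the database binary (example path from this article) =="
ls -ld /u02/app/oracle/product/19.0.0/dbhome_1/bin/oracle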
KB14904
CVM in a boot loop on Lenovo HX series after firmware upgrade or hardware replacement
CVM fails to probe NVMe disks on Lenovo HX series and goes into a boot loop due to missing boot devices.
Problem After motherboard replacement or BIOS upgrade, the CVM goes to a boot loop because it cannot find a valid boot partition.This issue is specific to the following: Lenovo HX3331 hardware (potentially all HX series)Motherboard replacement or BIOS upgrades may sometimes cause VMD to stop respondingFrom XClarity GUI BIOS settings look normalNutanix platforms that support VMD - CVM booting from NVMe drives in passthru configuration and with VMD enabledNutanix AHV Symptoms The CVM will go to a boot loop. The CVM console (ServiceVM_Centos.0.out in the AHV host) shows the following behavior: .. To confirm the symptoms, we should see the following during the boot process: While booting, each NVMe device responds with a "-19" status during the probe. (Note: The host has 4 NVMe drives in the example, so the log excerpt above shows 4 probe results)The NVMe namespace is not visible or discovered. (svmboot: NVMe namespace are not discovered after 10 seconds)It cannot find a valid boot partition. (svmboot: error: no valid boot partition)Intel VMD and VMD for Direct Assign are enabled in BIOS Here is an example of a successful NVMe probe during CVM boot process with VMD-enabled devices: ...
Solution
Check and upgrade XCC to the latest version. As per KB-14811 https://portal.nutanix.com/kb/14811, NVMe drives may become invisible to the CVM after firmware upgrades.
Download and install the OneCLI tool from the Lenovo site. OneCLI is a console application that allows users to get inventory information about a node and also configure parameters (for example, power management or VMD passthrough). It is available for Windows and Linux.
Follow the official Lenovo article for your server model. Reset to defaults and configure all BIOS settings. Here is an example OneCLI configuration for ThinkAgile HX systems (3rd Gen):
OneCLI config loaddefault BootOrder
When applying the fix from BIOS, navigate to UEFI settings > System Settings > Devices and IO Ports > Intel VMD > Enable/Disable Intel VMD: this needs to be Enabled.
Additional VMD Information
KB-12360 https://portal.nutanix.com/kb/12360 - NVMe SSD General troubleshooting
KB-10053 https://portal.nutanix.com/kb/10053 - NCC Health Check: vmd_driver_disablement_check
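To confirm the NVMe probe symptoms described above from the AHV host, the CVM console log can be searched for the failed probes and svmboot messages. Below is a minimal sketch; the console log file name comes from this article, but its location can vary, so the sketch locates the file first instead of assuming a fixed path.
# Locate the CVM console log on the AHV host and check for NVMe/svmboot errors (illustrative only).
CONSOLE_LOG=$(find / -name "ServiceVM_Centos.0.out" 2>/dev/null | head -n 1)

if [ -n "${CONSOLE_LOG}" ]; then
  echo "Checking ${CONSOLE_LOG} for failed NVMe probes and svmboot errors"
  grep -E "nvme|svmboot" "${CONSOLE_LOG}" | tail -n 40
else
  echo "CVM console log not found - review the CVM console output via the IPMI/virsh console instead"
fi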
KB9806
Nutanix Kubernetes Engine - Docker service crashing on ETCD VM on k8s cluster
Docker service crashing in ETCD VM due to incorrect PE password encoded in /var/nutanix/docker/plugins/<id>/config.json
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services. The docker daemon on the etcd VM goes into a crash loop, generating core dump files and filling up the root partition, which prevents the etcd service from starting and the DVP plugin from being enabled:
[nutanix@karbon-xxxx-xxxx-etcd-0 ~]$ sudo journalctl -xe
[nutanix@karbon-xxxx-xxxx-etcd-0 ~]$ sudo journalctl -u docker.service -f
Core dumps can be noted below the "/" partition:
-rw------- 1 root root 121290752 Jul 24 11:17 core.55933
Clearing the core dump files and restarting the docker service may not help, as docker still goes into a crash loop. Check the PE cluster configuration parameters in /var/nutanix/docker/plugins/<id>/config.json. The config.json file is a long one-liner; find the instance of PRISM_PASSWORD with the base64-encoded value.
...
Decode the base64-encoded value with the following command and verify whether it matches the correct PE password:
echo "<ENCODED_PASSWORD>" | base64 --decode && echo
Note: Kubernetes clusters deployed on early releases of Karbon may contain the plain-text (i.e., non-encoded) password in the /var/nutanix/docker/plugins/<id>/config.json file.
Stop the ETCD and docker services:
[nutanix@karbon-xxxxx-etcd-0 ~]$ sudo systemctl stop etcd
Clean up core dump files under the root partition:
[nutanix@karbon-xxxxx-etcd-0 ~]$ ls -lh /
Set 'Enable: false' in /var/nutanix/docker/plugins/<id>/config.json
Start the docker service (the volume plugin will be put in a disabled state):
[nutanix@karbon-xxxxx-etcd-0 ~]$ sudo systemctl start docker
Encode the new password for the user using this command:
[nutanix@karbon-xxxxx-etcd-0 ~]$ echo '<new password>' | tr -d "\n"|base64 # Encode the new password
The output of the above command is the <encoded password value>. Now, with docker started and stable and the plugin in a disabled state, set the correct password parameter:
[nutanix@karbon-xxxxx-etcd-0 ~]$ docker plugin set nutanix:latest PRISM_PASSWORD=<encoded password value>
Trigger the script /home/nutanix/docker_plugin/upgrade_plugin.py to update and enable the volume plugin.
Start the ETCD service:
[nutanix@karbon-xxxxx-etcd-0 ~]$ sudo systemctl start etcd
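To double-check which encoded value is stored in the plugin configuration before and after the change, the PRISM_PASSWORD entry can be printed directly. A minimal sketch follows; the exact JSON layout around PRISM_PASSWORD is not shown in this article, so the grep simply prints the surrounding text for manual inspection and decoding.
# Print the PRISM_PASSWORD entry from the plugin config (illustrative only).
CONFIG=$(ls /var/nutanix/docker/plugins/*/config.json 2>/dev/null | head -n 1)

echo "Plugin config: ${CONFIG}"
grep -o 'PRISM_PASSWORD[^,}]*' "${CONFIG}"

# Decode the value extracted above with the same command shown earlier:
# echo "<ENCODED_PASSWORD>" | base64 --decode && echo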
KB10446
Nutanix Cloud Clusters (NC2) - Cannot provision node due to AWS Quota exceeded issue. Quota type CPU.
This error is encountered if the associated AWS account does not have sufficient vCPU limit to provision a NC2 on AWS.
Cannot provision node due to AWS Quota exceeded issue. Quota type CPU. The error displayed: You have requested more vCPU capacity than your current vCPU limit of 32 allows for the instance bucket that the specified instance type belongs to. The above error message indicates that the AWS Cloud account does not have a sufficient vCPU limit. By default, a new AWS account has a vCPU limit of 32. Visit the EC2 Bare-metal Instance Details https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Clusters-AWS:aws-clusters-aws-xi-supported-regions-metals.html to determine the vCPUs needed to be able to create the cluster. To view the current vCPU limit for your AWS account, go to https://console.aws.amazon.com/ec2/#Limits https://console.aws.amazon.com/ec2/#Limits and search for “Running On-Demand All Standard”.
Request a vCPU limit increase by either clicking "Request limit increase" in the AWS console or going to http://aws.amazon.com/contact-us/ec2-request. When submitting a request, state that the limit increase is necessary to deploy NC2.
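For customers who prefer the AWS CLI over the console, a hedged example follows. The quota code L-1216C47A is assumed to correspond to the "Running On-Demand Standard instances" limit; confirm it with the list command first, and set --desired-value according to the bare-metal instance types planned for the cluster.
# List, inspect, and request an increase for the EC2 vCPU quota (illustrative only).
aws service-quotas list-service-quotas --service-code ec2 \
  --query "Quotas[?contains(QuotaName,'Standard')].[QuotaName,QuotaCode,Value]" --output table

aws service-quotas get-service-quota --service-code ec2 --quota-code L-1216C47A

aws service-quotas request-service-quota-increase --service-code ec2 \
  --quota-code L-1216C47A --desired-value 128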
KB5216
X-Ray: Networking Configuration and Requirements
Networking requirements and configuration for X-Ray VM deployment and usage.
Zero-Configuration (zeroconf)
The X-Ray VM can use zeroconf to provide network connectivity to the user for its interface. Zeroconf is a standard for IP addressing where a host self-assigns an IP address. In the absence of DHCP, zeroconf will assign a link-local IP in the reserved 169.254.x.x range. This will allow the X-Ray VM to communicate with the user over eth0, but will not allow the X-Ray VM to communicate with worker VMs or storage. To use zero-configuration networking, ensure that the workload VMs and the second NIC (eth1) of the deployed X-Ray VM are all configured to use the same layer-2 network.
If DHCP is not enabled, static IPs must be manually configured on the VM. This is done by editing the config files for the corresponding interfaces:
/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/sysconfig/network-scripts/ifcfg-eth1
nutanix@xray$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT="yes"
NM_CONTROLLED="no"
NETMASK="subnet_mask"
IPADDR="xray_ip_address"
DEVICE="eth0"
TYPE="ethernet"
GATEWAY="gateway_ip_address"
BOOTPROTO="none"
Port Usage
443 to be opened to my.nutanix.com http://my.nutanix.com/ from the X-Ray VM for activation
80/443 inbound to X-Ray for UI
22 inbound for SSH to X-Ray VM
443 outbound (to AHV/vCenter) for management functions
5000 outbound to worker VMs for API
22 outbound to worker VMs for SSH
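To verify the port requirements listed above from the X-Ray VM, a quick reachability test can be run. Below is a minimal sketch assuming nc is available on the X-Ray VM; the management endpoint and worker VM addresses are placeholders, not values from this article.
# Outbound port checks from the X-Ray VM (illustrative only).
MGMT_ENDPOINT=<vcenter_or_prism_ip>
WORKER_VM=<worker_vm_ip>

nc -z -w 2 -v my.nutanix.com 443          # activation
nc -z -w 2 -v "${MGMT_ENDPOINT}" 443      # management functions (AHV/vCenter)
nc -z -w 2 -v "${WORKER_VM}" 5000         # worker VM API
nc -z -w 2 -v "${WORKER_VM}" 22           # worker VM SSH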
KB13941
Nutanix Files - 100TB shares size properties does not display bytes on Windows
Nutanix Files - the properties of a file share mapped to a Windows client do not display the size in bytes; the dialog keeps showing "working..." for Used space and Free space. This happens when the share size attributes Used space, Free space and Capacity cross 100T (trillion) in bytes.
The properties of a Nutanix Files share sized 100TB+ and mapped to a Windows client will not display the share size correctly. The properties page will only display "working...". This happens only when the share size attributes Used space, Free space and Capacity cross 100T (trillion) in bytes.
There are no known workarounds, and this limitation is on the Windows client. Some content migration tools will get stuck computing the size when attempting to move the data. The only possible way around this is to set the Max size limit to something under 100T; however, that may not be possible in all customer cases.
KB7831
Unable to configure authentication to Windows Active Directory in Prism
Unable to configure authentication to Windows Active Directory in Prism.
Adding a Windows Active Directory as a new Directory Service under Prism > Settings (gear icon) > Authentication fails with the message: Verification of LDAP service account failed.. null /home/nutanix/data/logs/prism_gateway.log shows: INFO 2019-07-17 22:38:53,097 http-nio-127.0.0.1-9081-exec-190 adapter.proxy.PrismAdapterImpl.loadConfiguration:121 Loading the Zeus configuration forcefully Also note the errors "LDAP: error code 49" and "AcceptSecurityContext error, data 52e, v3839" in the sample output above.
LDAP error 49 with security context error 52e indicates an Active Directory (AD) AcceptSecurityContext error usually associated with the username and/or password being incorrect, hence the "null" value returned as indicated in the log. See the LDAP Wiki page LDAP Result Codes https://ldap.com/ldap-result-code-reference/ for more information about the error code. Authentication via LDAP method in the form of ldap://<IP or FQDN>:389 usually requires using the userPrincipalName attribute value (user@domain_base), for example, [email protected]. Re-enter the value for the Service Account using "administrator@<Domain Name>" to allow the authentication to the Active Directory to be successful.
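If the bind still fails after correcting the username format, a quick test outside Prism can confirm whether AD accepts the credentials at all. Below is a hedged sketch using ldapsearch (from the openldap-clients package); the server, domain, and base DN are placeholders, not values from this article, and the same userPrincipalName format described above is used for the bind.
# Test the service-account bind against AD over LDAP port 389 (illustrative only).
ldapsearch -x -H ldap://<AD_server_FQDN_or_IP>:389 \
  -D "administrator@<Domain Name>" -W \
  -b "dc=<domain>,dc=<tld>" "(sAMAccountName=administrator)" dn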
KB1980
Attaching knowledge base article to a case in Salesforce
To attach a knowledge base article to a case in Salesforce, use the Find Articles button.
There is an editable field in Salesforce, shown in the screenshot below. If you link a KB using this field, it will not populate the attached article history on the case. As you can see in the screenshot below, the field which shows the attached KB article is empty.
The standard process for linking is as follows:
- Click on Find Articles from the tab in the screenshot above and enter the article number in the search field.
- Click on the drop-down arrow to link the KB to the case, as shown in the screenshot below.
You will now be able to view the KB linked to the SR.
KB7068
How to reset root account password on AHV host
This article describes steps to recover AHV root account password.
This article describes the steps to recover the root account password on the AHV host. A few instances where this needs to be done are below:
The root account password was lost.
The root account is locked and inaccessible.
The expiration date on the root password was set and it expired.
If you hit this issue, you may see an output similar to the below when trying to connect to the AHV host via SSH:
nutanix@cvm: ~$ ssh [email protected]
Starting from Foundation 4.6, it is possible to reset the root account using the Phoenix image.
Preparation
Use one of the following approaches to prepare the Phoenix ISO:
Download the Phoenix ISO (version 4.6 or newer) from the Nutanix Portal https://portal.nutanix.com/page/downloads?product=phoenix.
or
Upgrade Foundation on the Nutanix cluster to version 4.6 or newer. In the Prism UI, go to the Hardware page. Select any host and click on the "Repair Host Boot Device" button. Then select the "Continue without snapshot" option. On the "Download Phoenix Image" screen, click on the Download button.
Perform the password reset
Perform the steps described in the Shutting down a node in a cluster https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v5_15:ahv-node-shutdown-ahv-t.html chapter of the AHV Administration Guide to put the AHV host into maintenance mode and shut it down.
Mount the phoenix.iso downloaded earlier via the IPMI console and boot the node from it.
Once the Phoenix environment is up and running, execute the following command:
[root@phoenix ~]# phoenix/reset_passwd -a
If Foundation 4.6 cannot be used, the AHV host should be reimaged to recover the root password. In order to reinstall AHV, consider using the Host Boot Disk Repair https://portal.nutanix.com/page/documents/details?targetId=Hypervisor-Boot-Drive-Replacement-Platform-NX3175G5:Hypervisor-Boot-Drive-Replacement-Platform-NX3175G5 workflow (the section about hardware replacement is not needed in this case and should be skipped).
If you are trying to reset the password on a G8 node where the boot drive is an NVMe disk and you get the below error, consider engaging Nutanix Support https://portal.nutanix.com/.
[root@phoenix ~]# phoenix/reset_passwd -a
KB5753
Cutoff time exceptions for Next Business Day delivery
This article lists the exceptions to the current cutoff times for Next Business Day delivery
Parts ordered before the local cutoff time (3:00 PM local time for all regions) will be shipped to the customer for Next Business Day (NBD) delivery. This document lists the exceptions to the current cutoff times, which are described here: https://www.nutanix.com/support-services/product-support/faqs/
APAC [ { "Country": "Macau", "Time": "5PM local time" }, { "Country": "Taiwan\t\t\t(Only apply for Taipei area)", "Time": "5PM local time" }, { "Country": "Thailand\t\t\t(Only apply for Bangkok metro area)", "Time": "5PM local time" } ]
KB16958
[Objects] - Deployment is failing with error "Failed to setup IAM replica service"
During the Nutanix Objects deployment, the process might fail due to IAM user replication not finishing on time.
During the deployment of the new Objects store, there might be a scenario where the deployment fails in the IAM replica setup. This problem might be caused by different reasons. This document will explain the steps to validate why the IAM replica setup has failed. On the ~/data/logs/aoss_service_manager.out these errors might be found: nutanix@PCVM ~]$less ~/data/logs/aoss_service_manager.out time="2024-05-27 12:51:53.623358Z" level=error msg="Failed to unmarshal response from local IAM: invalid character 'u' looking for beginning of value" file="iam_client_impl.go:404" process=iam_replicator_svc time="2024-05-27 12:51:53.623519Z" level=error msg="{\"code\":504,\"message\":\"Failed to unmarshal response from local IAM\"}" file="iam_replicator_helpers.go:473" process=iam_replicator_svc time="2024-05-27 12:51:53.623621Z" level=error msg="Error when importing users. internal error" file="sync_orchestrator.go:877" process=iam_replicator_svc time="2024-05-27 12:51:53.625652Z" level=error msg="IAM replication failed: {\"ad_errors\":null,\"code\":500,\"endpoint_name\":\"s3-objects-01\",\"message\":\"Error when importing users. internal error\",\"user_errors\":null}" file="sync_orchestrator.go:518" Endpoint="iam-proxy.ntnx-base.s3-objects-01.prism-central.cluster.local:8445" EndpointName=s3-objects-01 EndpointType=kSmspIAMv2 process=iam_replicator_svc time="2024-05-27 12:51:53.62635Z" level=error msg="Updating ReplicationErrors: {\"target_replication_errors\":[{\"ad_errors\":null,\"code\":500,\"endpoint_name\":\"s3-objects-01\",\"message\":\"Error when importing users. internal error\",\"user_errors\":null}]}" file="iam_replicator_helpers.go:366" process=iam_replicator_svc time="2024-05-27 12:51:53.630466Z" level=info msg="IAM replication to target: s3-objects-01 took 2m4.454817221s" file="iam_replicator_helpers.go:480" process=iam_replicator_svc ....... time="2024-05-27 12:52:28.525008Z" level=error msg="Post deployment function failed" file="deployment.go:130" ComponentName:=msp error="Failed to setup IAM replica service:Replication of some users/configuration failed" process=service_manager_entities_api_svc DEPLOYER_TRACE time="2024-05-27 12:52:28.526054Z" level=error msg="Deployer operation failed" current_build=default current_version=4.2 error="Failed to setup IAM replica service:Replication of some users/configuration failed" instance_name=s3-objects-01 instance_uuid=d2ee5b3a-ee35-4279-5df9-b4c6a57e890f operation_mode=CREATE target_build=default target_version=4.2 task_uuid=e32db445-5056-444a-6db0-97533135c35f DEPLOYER_TRACE time="2024-05-27 12:52:28.532087Z" level=info msg="Updated Object store in IDF" attributes="[state error_message_list task_uuid]" current_build=default current_version=4.2 instance_name=s3-objects-01 instance_uuid=d2ee5b3a-ee35-4279-5df9-b4c6a57e890f operation_mode=CREATE target_build=default target_version=4.2 task_uuid=e32db445-5056-444a-6db0-97533135c35f time="2024-05-27 12:52:28.547542Z" level=info msg="Operation CREATE finished for objectstore d2ee5b3a-ee35-4279-5df9-b4c6a57e890f" file="ostoremsp.go:2256" process=service_manager_entities_api_svc On iam-user-auth pod side, the following errors might be noticed around the same timestamps: Note: The iam-user-auth logs can be found on the new objects VM that were deployed. 
{"log":"time=\"2024-05-27T12:56:35Z\" level=error msg=\"Failed to marshal response to json : http2: stream closed\" requestID=9f7d8303-5273-9de3-8bf7-81ca40a16c21\n","stream":"stderr","time":"2024-05-27T12:56:35.7565339Z"} In this example, more than 100 AD users needed to be imported, and each user took ~4s to be imported.
Identification:
Checking the ~/data/logs/aoss_service_manager.out log on the PCVM, we might notice the error "IAM replication failed":
nutanix@PCVM:~$ less ~/data/logs/aoss_service_manager.out
time="2024-05-29 12:48:23.900927Z" level=error msg="Failed to unmarshal response from local IAM: invalid character 's' looking for beginning of value" file="iam_client_impl.go:404" process=iam_replicator_svc
time="2024-05-29 12:48:23.901014Z" level=error msg="{\"code\":408,\"message\":\"Failed to unmarshal response from local IAM\"}" file="iam_replicator_helpers.go:473" process=iam_replicator_svc
time="2024-05-29 12:48:23.901038Z" level=error msg="Error when importing users. internal error" file="sync_orchestrator.go:877" process=iam_replicator_svc
time="2024-05-29 12:48:23.901077Z" level=error msg="IAM replication failed: {\"ad_errors\":null,\"code\":500,\"endpoint_name\":\"s3-objects-01\",\"message\":\"Error when importing users. internal error\",\"user_errors\":null}" file="sync_orchestrator.go:518" Endpoint="iam-proxy.ntnx-base.s3-objects-01.prism-central.cluster.local:8445" EndpointName=s3-objects-01 EndpointType=kSmspIAMv2 process=iam_replicator_svc
time="2024-05-29 12:48:23.901142Z" level=error msg="Updating ReplicationErrors: {\"target_replication_errors\":[{\"ad_errors\":null,\"code\":500,\"endpoint_name\":\"s3-objects-01\",\"message\":\"Error when importing users. internal error\",\"user_errors\":null}]}" file="iam_replicator_helpers.go:366" process=iam_replicator_svc
While looking at the iam-user-auth pods, we see this is an error for a delayed response to the /api/iam/authn/v1/buckets_access_keys API:
less PodLogs/iam-proxy-86669598c8-9t9j8_ntnx-base_iam-proxy-f39ca23a9bc857d4acca789b62bd7641e8c24b81709d59e18aac79e9e90cfc98.log
{"log":"[2024-05-27T12:49:53.621Z] \"POST /api/iam/authn/v1/buckets_access_keys HTTP/1.1\" 504 UT 24 120000 \"Go-http-client/1.1\" \"9f7d8303-5273-9de3-8bf7-81ca40a16c21\" xxx.xxx.xxx.xxx:37296 \"10.200.32.183:5556\" \n","stream":"stdout","time":"2024-05-27T12:51:58.176349675Z"}
Note: The request was sent on 2024-05-27T12:49:53.621Z and failed after 120000 msec, resulting in a 504 error.
Workaround:
To solve this issue, follow these steps:
1. Increase the timeout from 120s to 600s for the Objects iam-proxy.
2. Add the gflags -iam_sync_timeout and -iam_request_timeout on iam_replicator_svc, and lower the import user batch value with -iam_import_user_batch_size.
1 - Increase the timeout to 600s for iam-proxy.
Make a copy of the current IAMv2Objects/iam-proxy-control-plane.yaml configuration:
nutanix@PCVM:~$ allssh "cp /home/docker/msp_controller/bootstrap/services/IAMv2Objects/iam-proxy-control-plane.yaml /home/docker/msp_controller/bootstrap/services/IAMv2Objects/iam-proxy-control-plane.yaml_backup"
Update the timeout value from 120s to 600s:
nutanix@PCVM:~$ allssh "sed -i 's/120s/600s/g' /home/docker/msp_controller/bootstrap/services/IAMv2Objects/iam-proxy-control-plane.yaml"
2 - Add the additional flags on iam_replicator_svc inside the aoss_service_manager container.
Note: The steps below must be done on all PCVMs in a 3-node setup.
Exec into the aoss_service_manager container:
nutanix@PCVM:~$ docker exec -it aoss_service_manager bash
vi /etc/supervisord.conf
Add the -iam_sync_timeout=120m -iam_request_timeout=30m -iam_import_user_batch_size=10 flags in the iam_replicator_svc section like below.
The line should look like:
command=/home/nutanix/bin/iam_replicator_svc --logtostderr --io_timeout_sec=120 --rbac_operations_file=/home/nutanix/config/iam_replicator_svc_rbac_operations.json -iam_sync_timeout=120m -iam_request_timeout=30m -iam_import_user_batch_size=10
Save the file. Reload supervisord:
supervisorctl reload
Exit from the container and proceed with the next PCVM.
After applying the above two steps, attempt a new Objects deployment. This time, the IAM replica setup should complete. If the issue persists after following these steps, please contact a Senior SRE or STL for further troubleshooting.
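Before retrying the deployment, both parts of the workaround can be verified quickly. Below is a minimal sketch, run from a PCVM; it only confirms that the edits above are in place.
# Verify the iam-proxy timeout change and the replicator flags (illustrative only).
allssh "grep -c 600s /home/docker/msp_controller/bootstrap/services/IAMv2Objects/iam-proxy-control-plane.yaml"
docker exec aoss_service_manager grep iam_import_user_batch_size /etc/supervisord.conf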
KB4308
Dell Hardware Support matrix
This document provides information about the supported software, firmware, and hardware versions and technical specifications for the Dell EMC XC Web-Scale Hyper-converged Appliances. NOTE: This Support Matrix contains the latest compatibility and interoperability information. If you observe inconsistencies between this information and other documentation or references, this document supersedes all other documentation.
This KB lists the product links for the Dell XC series appliances, providing easy access to the Overview, Drivers & Downloads, and all Documentation. Very useful per-appliance documentation examples:
LCM Reference Guides
System Support Matrix (Supported HW, FW, and Driver versions)
Solutions Guides
Deployment Guides
Service Manuals
Release Notes
Dell XC All Current Models Compatibility Matrix: https://i.dell.com/sites/csdocuments/Product_Docs/en/fy19q4_1065-ss-xc-appliance-spec-sheet-120718.pdf
Generic Hyper-converged Landing Page: https://www.dell.com/support/home/en-au/products/converged_infrastructure/hyperconverged_systems?lwp=rt
Appliance Product Pages
Generation 16
XC7625: https://www.dell.com/support/home/en-us/product-support/product/dell-xc7625-core/overview
XC760: https://www.dell.com/support/home/en-us/product-support/product/dell-xc760-core/overview
XC660: https://www.dell.com/support/home/en-us/product-support/product/dell-xc660-core/overview
XC760xa: https://www.dell.com/support/home/en-us/product-support/product/dell-xc760xa-core/overview
XC660xs: https://www.dell.com/support/home/en-us/product-support/product/dell-xc660xs-core/overview
Generation 15
XC7525: https://www.dell.com/support/home/en-us/product-support/product/xc7525-core/overview
XC6520: https://www.dell.com/support/home/en-us/product-support/product/dell-emc-xc6520/overview
XC750: https://www.dell.com/support/home/en-us/product-support/product/dell-emc-xc750/overview
XC4510C: https://www.dell.com/support/home/en-us/product-support/product/dell-xc4510c-core/overview
XC450: https://www.dell.com/support/home/en-us/product-support/product/dell-emc-xc450/overview
XC650: https://www.dell.com/support/home/en-us/product-support/product/dell-emc-xc650/overview
XC4520: https://www.dell.com/support/home/en-us/product-support/product/dell-xc4510c-core/overview https://www.dell.com/support/home/en-us/product-support/product/dell-emc-xc650/overview
Generation 14
XC640: https://www.dell.com/support/home/us/en/04/product-support/product/dell-xc640-ent/overview
XC740-xd: https://www.dell.com/support/home/us/en/04/product-support/product/dell-xc740xd-ent/overview
XC940: https://www.dell.com/support/home/us/en/04/product-support/product/dell-xc940-ent/overview
XC6420: https://www.dell.com/support/home/us/en/04/product-support/product/dell-xc6420/overview
XCXR2 (Core): https://www.dell.com/support/home/us/en/04/product-support/product/dell-emc-xc-core-xcxr2/overview
Generation 13
XC430: https://www.dell.com/support/home/us/en/04/product-support/product/dell-xc430/overview
XC630: https://www.dell.com/support/home/us/en/04/product-support/product/dell-xc630/overview
XC730: https://www.dell.com/support/home/us/en/04/product-support/product/dell-xc730/overview
XC730-XD: https://www.dell.com/support/home/us/en/04/product-support/product/dell-xc730xd/overview
XC6320: https://www.dell.com/support/home/us/en/04/product-support/product/dell-xc6320/overview
KB13405
Enable Remote Tunnel for Prism Central
The article describes how to open remote tunnel to Prism Central.
Tunnels can be opened for Prism Central (PC) clusters as well. Functionality is built into the CLI and not the GUI. You can gather cluster ID by using the command below: ncli cluster info | grep 'Cluster Id' The cluster id begins after the double colon "::". Sample ncli cluster info command output below: nutanix@PCVM:~$ ncli cluster info
SSH into a PCVM and run the command below:
nutanix@PCVM:~$ ncli cluster start-remote-support duration=1440
Note: Starting in pc.2022.9, ncli allows a default duration of up to 4320 minutes (72 hours). To open the tunnel for longer, use the force=true flag. Nutanix Support should then be able to connect to the PC via the COT server.
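Optionally, the tunnel state can be confirmed (and the tunnel closed later) from the same PCVM. The subcommand names below mirror the start command above but are assumptions; verify them with ncli help on your Prism Central version before relying on them.
# Check and stop the remote support tunnel (subcommand names assumed, verify with ncli help).
ncli cluster get-remote-support-status
ncli cluster stop-remote-support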
KB12601
Acropolis leader does not proceed HA when an AHV host is temporarily disconnected
AHV HA may stall if the Acropolis leader loses its connection to an unstable AHV host, in a very rare situation.
AHV HA may stall if the Acropolis leader makes a connection to the AHV host and then loses the connection while it is in the process of reconciliation.
The symptom
The Acropolis leader detected that AHV host (xxx.xxx.xxx.1) was disconnected and created the corresponding DisconnectedHost object. Then, the _monitor_host_state thread within the Acropolis leader started to wait on _host_state_change_event.wait with a 40-second timeout.
2021-12-15 15:07:33 INFO manager.py:904 Connection state for host <the AHV host UUID>, it is not connected
While the _monitor_host_state thread was waiting for 40 seconds (until the disconnect_expiration "1639548493.78 == Wed Dec 15 15:08:13 2021" in this example), the _reconnect_loop thread was able to connect to the AHV host and changed the connection state from DISCONNECTED to RECONCILIATION. This thread then proceeded with the reconciliation workflow, i.e., "Reconfiguring bridge changes"...
2021-12-15 15:07:33 INFO connection.py:438 Connecting to xxx.xxx.xxx.1
The 40 seconds passed. The _monitor_host_state thread tried to process the DisconnectedHost object for the AHV host again. This thread started waiting on the _host_state_change_event without any timeout because the connection state was already in RECONCILIATION (not DISCONNECTED) at that time.
2021-12-15 15:08:13 INFO manager.py:549 Processing DisconnectedHost(uuid=<the AHV host UUID>, disconnect_expiration=1639548493.78, reconnecting=None, failover_task_uuid=None, failover_task_complete=None, host_agent_restarted=None)
It was expected that this DisconnectedHost object would be processed and HA failover for this AHV host would be initiated, but the Acropolis leader did not initiate it because the thread was waiting on the thread synchronization event without any timeout. The reconciliation workflow then timed out in this case. The _reconnect_loop thread set the connection state to DISCONNECTED again at this point. But, because the _monitor_host_state thread had been waiting on the thread synchronization event without any timeout, HA was not triggered.
2021-12-15 15:08:46 ERROR ovs_br_manager.py:161 Timed out running {
To confirm the state of the _monitor_host_state thread by taking a thread dump
If you send a SIGUSR1 signal to the python process running the Acropolis leader, a thread dump will be recorded in data/logs/acropolis.out. If you send a SIGUSR1 signal several times to the python process, you will be able to see that the _monitor_host_state thread is waiting in the thread dumps, for example:
How to send a SIGUSR1 signal to the python process running the Acropolis service:
nutanix@cvm$ ps -ef | grep '/usr/bin/python2.7 -B /home/nutanix/bin/acropolis' | grep -v grep
An example of the thread dump of the _monitor_host_state thread:
<Greenlet at 0x7f59794ae5f0: <bound method AcropolisHostHAManager._monitor_host_state of <acropolis.ha.manager.AcropolisHostHAManager object at 0x7f5980081610>>>
Please note that you can see a similar stack for the _monitor_host_state thread even when the Acropolis leader is in a normal state. Make sure that the AHV host was disconnected, was connected again shortly before the disconnect_expiration time, and that no HA was initiated after that.
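As a convenience, the two steps above (finding the Acropolis PID and sending SIGUSR1) can be combined. Below is a minimal sketch, run on the CVM hosting the Acropolis leader; the three iterations and the 5-second pause are arbitrary choices, not values from this KB.
# Send SIGUSR1 to the Acropolis process a few times to capture thread dumps in data/logs/acropolis.out (illustrative only).
ACRO_PID=$(ps -ef | grep '/usr/bin/python2.7 -B /home/nutanix/bin/acropolis' | grep -v grep | awk '{print $2}')

for i in 1 2 3; do
  kill -s SIGUSR1 "${ACRO_PID}"
  sleep 5
done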
This issue is resolved in:
AOS 5.20.X family (LTS): AOS 5.20.4
AOS 6.x family (STS): AOS 6.1
Please upgrade AOS to the versions specified above or newer.
To recover from this situation, it is necessary to restart the Acropolis leader:
Log in to the CVM where the Acropolis leader is running as the "nutanix" user.
Perform the checks described in KB 12365 http://portal.nutanix.com/kb/12365 to make sure it is safe to stop Acropolis.
Stop the Acropolis service on the CVM with the "genesis stop acropolis" command.
Start the Acropolis service with the "cluster start" command.
Please refer to KB 2305 https://portal.nutanix.com/kb/2305 to determine the CVM running the Acropolis leader.
KB3720
During CLI AOS Upgrade Genesis Crash: ImportError: cannot import name symbol_database
When upgrading via the CLI you may encounter a problem where genesis will crash on any or all CVMs with the following error: ImportError: cannot import name symbol_database
Note: it is always recommended to use Prism and one-click to upgrade a cluster. Even a dark site should be able to download the code locally and perform a manual upload via prism. It should be extremely rare that someone is triggering an update via the CLI.Customers who opt to use the CLI to upgrade AOS rather than one-click could potentially run into a bug that is caused by not cleaning up the protobuf-2.5.0-py2.6.egg from a prior upgrade. Issue is being tracked in https://jira.nutanix.com/browse/ENG-50353. It is stated to have been fixed in 4.6.1. If your customer is upgrading via the CLI from a version prior to 4.6.1, it is possible that they are exposed to the problem.If you suspect you are hitting this issue, there will be random CVMs in the cluster where genesis will be crashing after running the upgrade command via the CLI. You will need to confirm the problem by looking for the signature below in the /home/nutanix/data/logs/genesis.out file: Traceback (most recent call last):
WARNING: Support, SEs and Partners should not use CLI AOS upgrade methods without guidance from Engineering or a Senior/Staff SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before proposing or considering these options.
Assuming you are seeing the above issue, you will need to manually move (remove) the older protobuf-2.5.0-py2.6.egg file on each affected CVM from /home/nutanix/cluster/lib/py to /home/nutanix/tmp. These files can later be deleted once the upgrade is complete. As an additional measure, it is recommended to remove the older file from the /home/nutanix/data/installer/[code_path]/lib/py directory as well. The steps below will work around this problem and allow genesis to start and the upgrade to be re-attempted. The following steps should be completed on each affected CVM (verify impacted CVMs via cluster status and/or logging into each CVM and running genesis status).
Verify which CVMs have two protobuf files (one older and one newer). For instances of this problem, the older file is named protobuf-2.5.0-py2.6.egg and the newer file will be named protobuf-2.6.1-py2.6-linux-x86_64.egg:
allssh 'ls -la /home/nutanix/cluster/lib/py/protobuf*'
Move the duplicate, older egg file to /home/nutanix/tmp on affected CVMs (name the file whatever you would like):
mv /home/nutanix/cluster/lib/py/protobuf-2.5.0-py2.6.egg /home/nutanix/tmp/protobuf-2.5.0-py2.6.egg.bkp
Verify which CVMs have two protobuf files in the install path (your command may differ depending on the CVM):
allssh 'ls -la /home/nutanix/data/installer/el6-release-danube-4*/lib/py/protobuf*'
Move/remove the older egg file from previous install paths (examples below will be unique to each cluster - again, choose a name to rename to that makes sense for you):
allssh 'mv /home/nutanix/data/installer/el6-release-danube-4.7*/lib/py/protobuf-2.5.0-py2.6.egg /home/nutanix/tmp/protobuf-2.5.0-py2.6.egg.47.bkp'
SSH to each 'bad' CVM to restart genesis (at this point genesis should work or may already be working, as it should have auto-restarted on its own):
genesis restart
Again, genesis should now be up on all CVMs. You may notice a few new services due to the partial upgrade that are not up, which will prevent the cluster from coming fully online. To work around this, you need to remove the /home/nutanix/install directory on any CVMs that have it so that we can attempt a fresh install in the next step with the -p option (skips pre-upgrade checks):
allssh 'rm -rf /home/nutanix/install'
Re-do the cluster install (the sample commands below assume 4.7):
cluster enable_auto_install
KB13825
Zookeeper in a crash loop due to CRC corruption in snapshot in “/home/nutanix/data/zookeeper/version-2/”
Zookeeper goes into a crash loop because it fails to read the Zookeeper configuration snapshot file, due to a checksum mismatch or the file not being found.
The Zookeeper service can be in a crash loop due to corrupted files on the CVM /home partition. Possible messages that will be printed in the Zookeeper logs are:
problem reading snap file /home/nutanix/data/zookeeper/version-2/snapshot.
CRC corruption in snapshot : /home/nutanix/data/zookeeper/version-2/snapshot.
Not able to find valid snapshots in /home/nutanix/data/zookeeper/version-2.
The Zookeeper service will be in a loop trying to read the corrupted files and will not be able to fully initialize on this CVM. Example from the Zookeeper logs:
2022-10-06 11:27:56,492Z - INFO [main:FileSnap@83] - Reading snapshot /home/nutanix/data/zookeeper/version-2/snapshot.500081589
When a Zookeeper service is unstable in such a way, the active Zookeeper role (Leader or Follower) should migrate to another CVM in the cluster. But in 3-node clusters (or 5-node clusters with FT2 enabled), there is no other node that is not already holding some Zookeeper role. As a result, the Zookeeper service will stay in a crash loop due to the corrupted snapshot file. It is expected to see NCC health checks failing for this node.
Note: the node IP x.y.z.218 here is just the IP of the CVM where the check was initiated; the problematic CVM has the zk1 alias.
Detailed information for zkinfo_check_plugin:
Check the /etc/hosts file on a CVM to see which IPs the zk aliases are mapped to. In this example, zk1 corresponds to the CVM with IP .217:
x.y.z.218 zk3 # DON'T TOUCH THIS LINE
Check the Zookeeper roles:
nutanix@cvm:~$ for i in $(sed -ne "s/#.*//; s/zk. //p" /etc/hosts) ; do echo -n "$i: ZK " ; ssh $i "source /etc/profile ; zkServer.sh status" 2>&1 | grep -viE "nut|config|fips|jmx" ; done
It is expected that for the problematic CVM it will print "ZK Error contacting service. It is probably not running.":
x.y.z.218: ZK Mode: leader
A Zookeeper Follower staying in this state (crash loop) is expected. An improvement ENG-507558 https://jira.nutanix.com/browse/ENG-507558 was raised to discuss whether Zookeeper should automatically heal itself in such cases. The root cause of the file corruption should be investigated separately. In most cases, this is due to HW errors (like a bad DIMM with uncorrectable errors, or issues with disk access). Workaround: Please involve an STL to verify the findings and confirm that it is safe to apply this workaround. Note: removing zookeeper files when this is not appropriate can lead to data loss. Resolution steps require stopping the Zookeeper service on the unstable Follower node and then moving the files away from the zookeeper folder. When the service starts again, it should retrieve the fresh config from the Leader node, as sketched below.
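A minimal sketch of the workaround described above, run only on the unstable Follower CVM and only after STL approval. The exact way the service is stopped and whether all files or only the corrupted snapshot should be moved must be confirmed with the STL; the commands below are illustrative assumptions, not a validated procedure:
# On the affected Follower CVM only, after STL sign-off
nutanix@cvm$ genesis stop zookeeper                                        # assumption: Zookeeper is stopped via genesis here
nutanix@cvm$ mkdir -p ~/tmp/zk_version2_backup
nutanix@cvm$ mv ~/data/zookeeper/version-2/* ~/tmp/zk_version2_backup/     # move aside, do not delete
nutanix@cvm$ cluster start                                                 # let the service come back and re-sync from the Leader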
KB12737
Nutanix Files: CVE-2021-44142 does not impact Nutanix Files
The Nutanix Files Samba implementation does not rely on "VFS_fruit"; the module is not configured in our implementation, hence we are not exposed to the vulnerability in CVE-2021-44142 (Out-of-bounds heap read/write vulnerability in VFS module vfs_fruit).
CVE-2021-44142 (Out-of-bounds heap read/write vulnerability in VFS module vfs_fruit) was released as a Samba vulnerability. https://www.samba.org/samba/security/CVE-2021-44142.html
The Nutanix Files Samba implementation does not rely on "VFS_fruit". The module is not configured in our implementation, hence we are not exposed to the vulnerability in CVE-2021-44142 (Out-of-bounds heap read/write vulnerability in VFS module vfs_fruit).
KB9117
Commvault Backup failing with "Oops - Server error"
Customer’s 3rd party (Commvault) backups are intermittently failing with “Oops - Server error”.
Commvault backups will intermittently fail with “Oops - Server error”. Both the Snap copy and Backup Copy will fail. There won't be any specific pattern to these failures. VMs that fail to back up on one day may back up successfully the next day. The following error can be found in vsbkp.log on the Commvault Agent VM. The backup job fails with the message "Virtual Machine was not found" 02/25 06:05:55 1893797 Scheduler Reset pending cause received for RCID [0], ReservationId [0] from client [VMname] and application [vsbkp]. Level [0] flags [0] id [0] overwrite [0] append [0] CustId[0]. VMname\vsbkp.log ------------------------ 4604 14f4 The below traceback will be seen in the aplos.out log on the Aplos master node. The error message clearly says "Could not connect to the directory service". 2020-02-25 11:07:58 WARNING auth_util.py:75 Error during user group validation/ user entity update. Inaccurate data will be resolved in next request.
Configure Commvault with a local Prism admin account instead of an AD account in order to rule out an AD authentication issue. If the backups complete successfully after changing to the local admin account, we can confirm it is an issue with AD authentication. Fixing the issue on the AD server or changing the Nutanix LDAP configuration to point to a working AD server should resolve the problem.
KB10729
How to delete containers that are marked for removal in zeus_config
This KB explains the situation of a stale container due to existing files in the container
It has been observed in some customer environments that deleted containers stay in zeus_config and are never removed. This KB article is a guide on identifying these containers and assisting customers in completely removing them from their clusters. The containers marked for removal were presumably deleted from the cluster some time ago, sometimes years ago. IMPORTANT MANDATORY STEP: This KB contains a script that performs metadata deletion. Although the script has been deemed safe to use by engineering, its use must be tracked via a TH engagement and Support Tech Lead involvement. Please make sure to open a TH and engage an STL before performing any action. Identification: The following NCC check flags the existence of these lingering datastores; note the -del suffix in the datastore names: Detailed information for check_storage_access: The following NCC check can also be seen and provides the same information, with the deleted datastore/container and the "-del-timestamp" suffix. Detailed information for container_on_removed_storage_pool: The same datastore names (with the -del suffix) will be present in zeus_config_printer output. The following alert will be periodically raised in Prism (See KB Alert - A20032 - Containers are marked for removal /articles/Knowledge_Base/Alert-A20032-Containers-are-marked-for-removal for more details): alert_time: Tue Dec 06 2022 15:05:10 Root cause: As per ENG-356294, a race condition between deleting a key from Pithos and failing WAL Cassandra writes caused a key to be deleted and the op to then be forgotten. This caused further writes to the VDiskConfig to fail, as it was now in an incorrect state with one key missing. The ENG ticket fixes this issue by ensuring deletion does not happen before the WAL failure. Potential impact: VM-level snapshots failing, metadata bloat, and in some sporadic cases, outages like those seen in ONCALL-7558.
IMPORTANT: Engineering has an automated script to clean up the stale NFS entries, which can be used to clean up the deleted container. Please make sure to open a Tech-Help and engage an STL to use the script safely. Improper usage of the script can lead to irrecoverable data loss. Identify the list of containers marked for removal in zeus_config. nutanix@CVM:~$ zeus_config_printer | grep -i del Containers marked for removal have a del appended to the container name as seen above. Verify that the containers do not exist on the cluster by inspecting the output of the following commands. nutanix@CVM:~$ ncli ctr ls Get the container ID for containers marked for removal from zeus_config nutanix@CVM:~$ zeus_config_printer | grep -A32 -i del In the above output, 61599 is the container ID, and 1518016503472, which is in the container name, is the timestamp when the container was marked for deletion. Use the first 10 digits of this timestamp in the command listed below. $ date -d@1518016503 Search for the container ID (61599) in curator.INFO on the curator leader, and you will observe that the container is not being deleted as the counters are non-zero. $ grep 61599 ~/data/logs/curator.INFO Identify the information for the deleted container and follow the steps below to remove it: Identify the number of inodes to be deleted for the container by running the following command. Replace 61599 with the appropriate container ID. $ allssh 'grep "ContainerMapNFSInodeMap\[61599\]" ~/data/logs/curator.*INFO*' Download the helper script from: https://download.nutanix.com/kbattachments/10729/container_inodes_v2.py https://download.nutanix.com/kbattachments/10729/container_inodes_v2.py and copy it into /home/nutanix of any CVM in the cluster $ cd /home/nutanix Run the script with the following parameters. It executes in dry-run by default (see the hedged example after this procedure). Take note of the number of keys reported by the script (highlighted in bold) $ sudo chmod +x container_inodes_v2.py Ensure the number of keys reported by the script is identical to the number of counters reported by Curator: I0131 17:20:46.998004 22879 mapreduce_job.cc:522] ContainerMapNFSInodeMap[61599] = 351 IMPORTANT: If the values are not the same, DO NOT RUN the script and engage engineering immediately via ONCALL. Run the script with the --delete_keys_for_container parameter to clean up the stale inodes: $ python container_inodes_v2.py --ctr_id 61599 --delete_keys_for_container Once the inodes are removed, the container will not disappear immediately from Zeus. It will need some curator scans to fully disappear; until then, it will still be visible in Zeus. Repeat the steps for any other containers that are pending removal.
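The excerpt above does not show the exact dry-run invocation, only the chmod step. Based on the delete command shown later, a plausible (hypothetical) dry-run call would simply omit the --delete_keys_for_container flag; confirm the exact flags against the script's own help output and with the STL before running anything:
# Hypothetical dry-run sketch - verify against the script's help first
nutanix@CVM:~$ python container_inodes_v2.py --help
nutanix@CVM:~$ python container_inodes_v2.py --ctr_id 61599    # assumption: without the delete flag the script only reports keys, as stated above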
KB11662
Cleanup process for a failed RF1 disk
This article describes the cleanup process for a failed RF1 disk.
A disk with RF1 egroups fails, or you get one of the following alert messages: This drive potentially contains RF1 data and therefore, there is potential data loss. AOS ensures that data on the failed disk is replicated before removing the disk from the system. For RF1 data, since there are no extra replicas, the data cannot be recovered. Therefore, when removing a bad disk with RF1 data, the disk removal may get stuck. To unblock the disk removal, the RF1 data on the bad disk must be deleted.
Since most likely all the data in the RF1 container is impacted, delete the RF1 container for a thorough cleanup. In AOS 6.0.1 and later, Nutanix provides a script cleanup_and_recreate_vdisks.py to help with this process. The script is located in the following path in the CVM: /usr/local/nutanix/cluster/bin/cleanup_and_recreate_vdisks/cleanup_and_recreate_vdisks.py Typical usage of the script: python cleanup_and_recreate_vdisks.py -C cleanup_and_recreate -i <cluster_ip> -u admin -p <password> -c <rf1_container_name> --skip_confirmation The script provides the following command options for the -C / --command argument: The script takes the following parameters: Mandatory: -C <command>: See the above table. -u <username>: username to log into Prism UI -p <password>: password to log into Prism UI -c <container_name>: RF1 container name Other optional parameters: -i <cluster_ip>: cluster IP address. If not provided, 127.0.0.1 will be used for the local node Prism connection. --skip_confirmation: if not provided, the script will ask for user confirmation before any data deletion. --non_rf1_cleanup: if a non-RF1 container name is provided, this option should be set to True for the script to proceed. Notes on cleaning up data: For ESXi, if a VMX file is in the RF1 container, the script deletes the entire VM. If the VM has RF2 vDisks on other containers, they will also be deleted because the Prism API vDisk detach deletes the vDisk from the container (datastore). There is no API to only detach but not delete. For AHV snapshots, as long as a snapshot includes one RF1 vDisk, the whole snapshot will be deleted. If you would like to preserve any data of the snapshots, follow this workaround: clone a VM from the snapshot, edit the VM to remove the RF1 vDisks, and migrate the VM to another container. All data must be deleted as the container must be empty. For files that are not attached to VMs and VGs in the container, the script may not be able to clean these files. The files will be displayed in the output and have to be deleted manually. Typically, such files may include: vDisks belonging to unregistered VMs in an RF1 datastore in ESXi, VM templates with RF1 vDisks in ESXi, and any standalone files in the RF1 container. [ { "Command": "cleanup", "Description": "Deletes the RF1 disks from VM, VG and snapshots. If any other files remain, the script will list the file paths and the customer will need to manually delete them.\nSee Notes below for details." }, { "Command": "cleanup_and_recreate", "Description": "This option first cleans up the old RF1 data and container, and then it recreates the VM disks with the same size and attach them to the VM and VG.\nFor ESX, if whole VMs are deleted because the VM configuration vmx files are in an RF1 container, they will not be recovered by this option." } ]
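If only the deletion of the RF1 data is needed (without recreating the vDisks), the cleanup command from the table above can be invoked with the same parameter pattern as the typical usage shown earlier. This is a hedged sketch derived from the documented options, not an additional documented example; verify the flags against the script's help on your AOS version:
nutanix@cvm$ python /usr/local/nutanix/cluster/bin/cleanup_and_recreate_vdisks/cleanup_and_recreate_vdisks.py -C cleanup -i <cluster_ip> -u admin -p <password> -c <rf1_container_name>
Leaving out --skip_confirmation means the script will prompt for confirmation before deleting anything, which is usually preferable for a destructive operation.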
KB12778
Newly added cluster node marked as degraded (RDMA and Network Segmentation)
KB describing an issue where newly added nodes get marked as degraded if Network Segmentation for RDMA is in use
Clusters with RDMA and Backplane Network Segmentation enabled are susceptible to a rare issue where a node could be marked as degraded immediately after it is added to a cluster.No further degraded node alerts are seen after the initial alert is resolved.To verify there is not another cause of the degraded node, and to help check the status, refer to KB 3361 https://portal.nutanix.com/kb/3361.
Identification To check whether the cluster is using RDMA and Network Segmentation, run the following command on any CVM: network_segment_status An example where the cluster/nodes do not possess RDMA hardware: nutanix@cvm:~$ network_segment_status An example where the cluster/nodes possess RDMA hardware, which is enabled along with network segmentation: nutanix@cvm:~$ network_segment_status Clusters with RDMA and Network Segmentation enabled are susceptible to the false degraded node alerts mentioned in the description. If you see any degraded node alerts, be sure to check KB 3361 https://portal.nutanix.com/kb/3361 before considering this false positive scenario. Workaround No workaround is available for this issue. After confirming the false positive, mark the degraded node as fixed in PE. Alternatively, wait for DND auto-resolve to mark the node as fixed automatically - after 24 hours in AOS 6.1.1 and above. Solution This issue ( ENG-448290 https://jira.nutanix.com/browse/ENG-448290) is fixed in AOS releases 6.5.1 and 6.6 as follows: Added an 8-minute buffer time before a new node can be considered for DND. Corresponding gflag: --zkmonitor_new_node_score_time_threshold_msecs. Added a check to consider a node for DND only if it has at least 1 score from a peer which is more than --zkmonitor_stale_score_time_interval_msecs old.
KB2722
NCC Health Check: vmknics_subnet_check
The NCC health check vmknics_subnet_check verifies if vmknics on ESXi host have IP addresses in the same IP subnet.
The NCC vmknics_subnet_check test verifies if vmknics IP addresses are configured on the same subnet on any given host. Note: vmknics in the same subnet on the same ESXi host is an unsupported configuration. Running the NCC Check Run this check as a part of the complete NCC checks. nutanix@cvm$ ncc health_checks run_all Or run the vmknics_subnet_check check individually. nutanix@cvm$ ncc health_checks hypervisor_checks vmknics_subnet_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is not scheduled to run on an interval. This check does not generate an alert. Sample output If multiple vmknics are found with IP addresses in the same subnet, the check reports a warning, and an output similar to the following is displayed. Running /health_checks/hypervisor_checks/vmknics_subnet_check on the node. Output messaging [ { "Check ID": "Check if vmknics have different subnets" }, { "Check ID": "vmknics have ip address configured in the same subnet" }, { "Check ID": "Correct the IP addressing in the network.\t\t\tReview KB 2722 for more details." }, { "Check ID": "vmknics in the same ip subnet on the same esxi host is unsupported." } ]
If vmknics_subnet_check reports a WARN status, correct the IP address on the ESXi host that is reported by NCC. Note: Nutanix does not recommend using two vmknics in the same subnet on ESXi. When two vmknics reside on the same subnet, ESXi is unable to split vMotion traffic and other management traffic apart according to the GUI configuration. vMotion is a burst-type workload that uses no bandwidth until DRS or a vSphere administrator starts a vMotion (or puts a host into maintenance mode). But when a vMotion starts, the network interface gets saturated, as the same interface is also used for the Nutanix cluster. Nutanix recommends using two different subnets. Verify the vmknic configuration and rectify it so that the vmknics belong to different subnets. After the problem is resolved and the vmknics are in different subnets, or one of the vmknics is removed, restart the CVM (Controller VM) to read the network configuration changes.
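To see at a glance which vmknics share a subnet on a given host, the vmkernel interfaces and their IP/netmask assignments can be listed directly on the ESXi host. These are standard ESXi commands, shown here only as a convenience sketch:
# List all vmkernel NICs with their IP addresses and netmasks
[root@ESXi:~]# esxcfg-vmknic -l
# Alternative view of the same information
[root@ESXi:~]# esxcli network ip interface ipv4 get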
KB8021
Alert - A1060 - ProtectionDomainChangeModeFailure
Investigating Protection Domain Activation or Migration Failure alerts on a Nutanix cluster
Alert Overview The A1060 - Protection Domain Activation or Migration Failure occurs when a protection domain cannot be activated or migrated. This could be due to the following: Protection domain with same name might be active on remote site.Remote sites might not be configured correctly. Protection domain has one or more missing VMs/VGs. Sample Alert Block Serial Number: 16SMXXXXXXXX Output messaging [ { "Check ID": "Protection Domain Activation or Migration Failure" }, { "Check ID": "Protection domain cannot be activated or migrated.\t\t\tProtection domain with same name might be active on remote site.\t\t\tRemote sites might not be configured correctly, please check the remote sites on source and target clusters.\t\t\tProtection Domain has one or more missing VMs/VGs." }, { "Check ID": "Resolve the stated reason for the failure. If you cannot resolve the error, contact Nutanix support." }, { "Check ID": "Protected VMs could not be started during failover to a remote site." }, { "Check ID": "A1060" }, { "Check ID": "Protection Domain Activation or Migration Failure" }, { "Check ID": "Protection domain protection_domain_name activate/deactivate failed with the error : reason" } ]
Troubleshooting Log in to the target clusters and verify that a protection domain with the same name is not active there. Verify the remote site configuration on the source and target cluster and ensure that the container mapping and network mapping are configured correctly. Refer to the Data Protection and Recovery with Prism Element Guide on the Portal. Verify that the protection domain is not missing any VMs or VGs. Resolving the Issue Address any inconsistencies with the Remote Site configuration. Refer to the Remote Site configuration section of the "Data Protection and Recovery with Prism Element" guide on the Portal for the configuration steps. If you need assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com https://portal.nutanix.com./. Collect additional information and attach them to the support case. Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871. Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871. Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691. nutanix@CVM ~$ logbay collect --aggregate=true Attaching Files to the Case To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
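The verification steps above can also be performed from the CVM command line. A hedged sketch using standard ncli commands (entity and action names may differ slightly between AOS versions, so check ncli help if they do not match):
# On the source and the target cluster, review the protection domain state (replace PD-NAME)
nutanix@cvm$ ncli pd ls name=PD-NAME
# Review the remote site configuration on both sides (container and network mappings)
nutanix@cvm$ ncli remote-site ls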
KB4763
NCC Health Check: azure_disk_size_check
The NCC health check azure_disk_size_check determines if the Azure Controller VM has the recommended disk configuration.
The NCC health check azure_disk_size_check determines if the Azure Controller VM has the recommended disk configuration. This check only runs on Azure Controller VMs. This check fails if there are less than 2 disks or if the disks are less than 256 GB. Running the NCC checkThe check is part of the full NCC health check that you can run by using the following command: nutanix@cvm$ ncc health_checks run_all You can also run this check separately: nutanix@cvm$ ncc health_checks cloud_checks azure_disk_size_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. Output messaging [ { "Description": "Azure cloud Controller VM does not have minimum of two 256GB+ disks." }, { "Description": "Contact Nutanix support so that the Azure cloud Controller VM has at least two 256GB+ disks." }, { "Description": "Azure cloud Controller VM has smaller disks." }, { "Description": "Azure cloud Controller VM has smaller than the two 256GB+ recommended disks. Contact Nutanix support for further assistance." }, { "Description": "This check is scheduled to run every day, by default." }, { "Description": "This check will generate an alert after 1 failure." } ]
To configure the Azure Controller VM, contact Nutanix Support https://portal.nutanix.com for assistance.
KB10678
Nutanix Move - Windows guest VM on Hyper-V displays Ethernet device name as Ethernet0_bak after migration
After a successful migration of a Windows VM, the Ethernet device name restore task may fail, leaving the name as Ethernet0_bak.
Versions Affected: Move 3.7.0 and earlier. The migration is successful and marked as completed in the Move GUI. No errors are reported in the Move logs. The Ethernet device is configured with the name Ethernet0_bak: In wmi-net-util-restore-log.txt, located in C:\Nutanix\Temp, it can be seen that restoring the name did not complete: 2021/01/14 11:17:45 Execute wmic nicconfig where index=4 call SetDNSServerSearchOrder (x.x.x.x,y.y.y.y) This is a cosmetic display name issue.
Rename the adapter from Network Connections if required. Upgrade Nutanix Move to the latest release to avoid recurrence.
KB15453
NIC troubleshooting
NIC connectivity and stability issues are not necessarily due to faulty NIC hardware. This KB guides you through troubleshooting NIC issues, exploring more likely causes before settling on a hardware cause.
There are a variety of cases affecting network connectivity. The underlying root cause is seldom a NIC hardware failure. The purpose of this KB is to assist in troubleshooting to the extent that the proper troubleshooting paths are followed and to prevent an unneeded NIC dispatch. Some examples of networking case symptoms include: Missed or Dropped Packets; CRC Errors / No link on connection; NIC not seen by host; Firmware/Driver on matching supported releases resulting in connectivity issues; NIC unseen after FW update. Identifying the NIC hardware There are various categories of NIC cards and some are present on various models of NX server hardware. It is important to understand what types of NIC are expected to be seen. 1. LOM - LAN On Motherboard - These are the NIC ports that are soldered onto the system board of various Nutanix branded servers up to the G8/G8N family. If a LOM NIC is determined to be defective after troubleshooting, it will result in a node replacement, as a LOM NIC is not available as a field replaceable unit. 2. AIOM AOC - Advanced I/O Module (Add on card) - AIOM NICs were introduced with the G8 Multi-node (aka “Big Twin”) hardware and serve the same purpose as the LOM. They are, however, available as FRUs, and if determined to be defective after troubleshooting, they can be replaced without requiring a node replacement. Server models with AIOM include NX-1065-G8, NX3060-G8, NX8036-G8. Due to supply chain issues during the pandemic, Nutanix introduced G8N variants of these “Big Twin” multi-node servers in three different phases, which have an impact on how the primary NIC (including support for shared ILOM access) is supported: NX G8N Phase1: NO AIOM support; NX G8N Phase2: Available AIOM support; NX G8N Phase3: Mandatory AIOM support (2x 10GBaseT AIOM or 2x 10GBaseT + 2x 10GbE SFP+ AIOM). 3. PCIE AOC - These are the PCIE Add On Cards that are typically associated with expansion NICs. Each server model follows its own set of configuration rules when it comes to PCIE Add on cards. Notable exceptions to consider are listed here. Due to supply chain issues during the pandemic, Nutanix introduced G8N variants of the “Ultra” single node servers, which replace the onboard NICs with specific add-in NICs in specific slots per server model, as shown below. NX-8150N-G8 Required NIC in PCIe slot NIC4 - 1x 10GBaseT 2P Missed or Dropped Packets Missed or dropped packets are not indicative of a NIC hardware failure. Dropped packets are often due to multicast packets flooding the NIC because VLANs are not properly trimmed on the upstream switch to only the set of VLANs needed on the Nutanix hosts. In other cases, dropped packets may be due to not enough bandwidth to accommodate the load, in which case additional NICs/ports should be configured/aggregated.
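When investigating missed or dropped packets, the per-interface statistics on the host usually show quickly whether drops, misses, or CRC errors are actually incrementing. A minimal sketch for an AHV host (eth0 is just an example interface name):
# Show only the error/drop/CRC related counters for a given interface
[root@AHV ~]# ethtool -S eth0 | egrep -i "drop|miss|crc|err"
# Compare against the kernel-level view of RX/TX drops
[root@AHV ~]# ip -s link show eth0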
Reference the KBs listed below for further troubleshooting: KB-1381 https://portal.nutanix.com/kb/1381 NCC Health Check: host_nic_error_check KB-4540 https://portal.nutanix.com/kb/4540 Handling packet drops from vNICs in AHV KB-4444 https://portal.nutanix.com/kb/4444 Increasing RX buffer on Intel 10GbE adapters KB-13310 https://portal.nutanix.com/kb/13310 AHV VMs on Cisco UCS platform may experience network packet loss when NIC driver is enic KB-2883 https://portal.nutanix.com/kb/2883 NCC Health Check: host_rx_packets_drop KB-3706 https://portal.nutanix.com/kb/3706 NCC Health Check: host_tx_packets_drop KB-12040 https://portal.nutanix.com/kb/12040 UDP packet drops on AHV Linux VMs with UDP Fragmentation Offload (UFO) enabled KB-11990 https://portal.nutanix.com/kb/11990 Large packet loss in the Windows VMs using VMXNET3 CRC errors CRC errors are due to errors that have been induced elsewhere in the network before reaching the NIC. The sending host computes a cyclic redundancy check (CRC) of the entire Ethernet frame and puts this value in the Ethernet frame's FCS (frame check sequence) section after the user payload. The intermediate switch and the destination host check this computed value against the value they compute to determine if the frame has been corrupted in transit. The issue may be between the attached upstream physical switch and the NIC, or it may be originating further up in the network. One requisite step is to test the media connecting the NIC to the switch. See the Media Related Issues section for more information. KB-1381 https://portal.nutanix.com/kb/1381 NCC Health Check: host_nic_error_check KB-15350 https://portal.nutanix.com/kb/15350 Mellanox NIC slow to change state and/or CRC errors KB-13287 https://portal.nutanix.com/kb/13287 [host_nic_error_check] RX_missed/RX_CRC errors against unused interface eth0 after AOS upgrade to 5.20.x and NCC version to 4.5.0.2 KB-1088 https://portal.nutanix.com/kb/1088 How to troubleshoot Network Issues on ESXi in a Nutanix Block KB-8581 https://portal.nutanix.com/kb/8581 Firmware issue on Mellanox ConnectX-4 Lx version 14.21.1000 causing very high checksum errors
NIC Not Seen by Host Steps for AHV host 1. Log in to the AHV host and determine if the NIC is seen - this will provide the BUS ID and NIC model for all interfaces seen by the hypervisor. In this example we see 2 NICs - a dual port NIC at Bus address 18:00 and a dual port NIC at 3b:00. [root@AHV ~]# lspci |grep -i net 2. Get the list of “ethX” devices that the system sees - this command provides the ethX | Link State (look for “UP”) | MAC address (you will need this) | MTU [root@AHV ~]# ip link |grep -i " eth" -A1 3. If you do not see the expected number of ethX devices, check for the existence of enp* devices, which may indicate that there was a previous NIC replacement wherein the NIC replacement script was not correctly executed. You may need to run the AHV NIC replacement script in the hardware replacement workflow that pertains to your hardware. 4. Look at each ethX device to see which devices map to which bus addresses. In this example we will look up eth0 to see that it maps to 0000:18:00.0 [root@AHV ~]# ethtool -i eth0 5. You can continue this for each ethX NIC in the system in order to complete a mapping of NICs to bus addresses (a helper loop that automates this mapping is sketched at the end of this section). In this manner you can determine which device may be missing from the lspci output when you find an ethX device without a matching PCI bus address. eth0 >> 0000:18:00.0 6. If you do not see the correct number of ports listed - if your NIC is missing from this output, you may have a failed hardware component. Two steps need to be taken to confirm: Shut down the node in question and power it down for 2 minutes. This is especially applicable if the NIC has recently gone unresponsive after a failed NIC FW update. It is critical that the node be actually powered down, not simply rebooted. Power the system up to see if it sees the NIC. Boot the system to a "vanilla" phoenix image and run the "lspci |grep -i net" command again from phoenix. If phoenix does not see the NIC at all, you should proceed with a NIC replacement. If phoenix does see the NIC, but AHV does not see the NIC, consider reimaging the hypervisor on the node. Steps for ESXi Host 1. Log in to the ESXi host and determine if the NIC is seen - this will provide the BUS ID and NIC model for all interfaces seen by the hypervisor. It also usefully provides the vmnic / BUS location where ESXi sees the card/port. [root@ESXi:~]# esxcfg-nics -l 2. If you do not see the correct number of ports listed - if your NIC is missing from this output, you may have a failed hardware component. Two steps need to be taken to confirm: Shut down the node in question and power it down for 2 minutes. This is especially applicable if the NIC has recently gone unresponsive after a failed NIC FW update. It is critical that the node be actually powered down, not simply rebooted. Power the system up to see if it sees the NIC. Boot the system to a "vanilla" phoenix image and run the "lspci |grep -i net" command again from phoenix. If phoenix does not see the NIC at all, you should proceed with a NIC replacement. If phoenix does see the NIC, but ESXi does not see the NIC, consider re-imaging the hypervisor on the node. Issues with Hypervisor NIC drivers and/or Firmware Compatibility You will need to run a number of commands to gather details on the current system to determine if the drivers and firmware are at supported levels and have been qualified to run with each other. The process will vary based on the hypervisor. If you are experiencing an issue with a system not seeing the NIC after a failed NIC FW update, you will need to power cycle the server. 
Shutdown the node in question and power it down for 2 minutes. It is critical that the node be actually powered down, not simply rebooted in order for the system to reload the actual Firmware for the NIC. AHV commands 1. Once you have identified the NIC in question from the output of lspci |grep -i net command above, run the following including Bus ID to get more specifics on the physical NIC. The goal here is to get more specific information on the NIC beyond the generic information provided by the generic "lspci" command. In this example we will again use BUS ID 18:00.0. [root@AHV ~]# lspci -vvnn -s 18:00.018:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10G X550T [8086:1563] (rev 01) Subsystem: Super Micro Computer Inc Device [15d9:0920] Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Interrupt: pin B routed to IRQ 57 NUMA node: 0 ... In this example the highlighted values map to the Vendor and Device information shown below. This information is critical when referencing currently supported FW/Driver combinations. Using this information look up the specifics of the NIC including Part number and Card Name from the LCM for NIC (NX)- Qualified FW & Driver dependency matrix. https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=103609055 Vendor ID: 0x8086Device ID: 0x1563SubVendor ID: 0x15d9SubDevice ID: 0x0920 2. Using this information on the confluence page LCM for NIC (NX)- Qualified FW & Driver dependency matrix. https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=103609055 we get the following specifics for this NIC: 3. Using this specific Card name string, look up the qualified firmware/driver combinations. In this example, we show an abbreviated list. 4. Check driver and firmware version from the host to see how it matches up with the qualified list obtained above. [root@AHV ~]# ethtool -i eth0 On Confluence, we list AHV versions with their internal name. You can use This Chart http://dashboard.ahv.nutanix.com/?tab=releases to find the Internal version of AHV. 20201105.30281 = AHV v7.30.3 5. In this case we see that the FW could be upgraded to 0x80001743 (3.50) (Latest) to match the latest driver already in use - 5.1.0-k, for AHV version 7.x. Note that NIC drivers should be upgraded before NIC FW. With AHV, NIC drivers are upgraded by upgrading AHV NIC drivers are not upgraded manually. ESXi commands 1. Confirm the version of ESXi that is running on the server with the command below. Take the build number and look it up on the VMware build number/ versions KB https://kb.vmware.com/s/article/2143832 to get the version number. In this case, build-14320388 relates to ESXi 6.7 Update 3 [root@ESXi:~]# uname -a 2. Run the following to get the detailed information on Vendor/Subvendor and Device/Subdevice ID which is necessary to determine proper NIC firmware / driver alignment.This command will return output from each vmnic found in the system. 
Focus in on the PCI BUS / vmnicX to identify the card of interest and then use the [root@ESXi:~]# esxcli hardware pci list | grep -i net -A18 -B8 3 Use the LCM for NIC (NX)- Qualified FW & Driver dependency matrix https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=103609055 to determine the supported NIC driver/ Firmware combination supported by LCM.Find the line(s) that match values for Device/Subdevice and Vendor/Subvendor. If you have more than one match, use the Card name that matches the number of ports on your card. In this case we match on Silicom 82599 2P 10G PE210G2SPI9AE-XR-NU. 4. Search further down into the next matrix based on this Card name to find the supported firmware based on ESXi version, ESXi driver version, and card name. 5. Determine the ESXi driver and Firmware currently installed on a specific vmnicX. [root@ESXi:~]# esxcli network nic get -n vmnic0 6. Comparing the current driver, ixgben, version 1.8.7, we see that the qualified FW version is 0x800006d1 (4.40) (Latest). In the case of this configuration, there are two more versions of the ixgben drivers to which this server could be upgraded. The recommended method of upgrading drivers/firmware on ESXi servers is to use LCM to upgrade drivers before upgrading Firmware.There are situations with ESXi where if the NIC driver has been previously upgraded with VMware Update manager or other non-LCM methods to a version not supported by LCM, then LCM will not provide available upgrades to driver or firmware. In this circumstance, it may be necessary to upgrade the driver manually to an LCM supported version, after which LCM should be able to handle the upgrades going forward. In the event LCM cannot be used to do the upgrade, please reference KB-10634 Manual upgrade procedure for Nutanix NIC firmware https://portal.nutanix.com/page/documents/kbs/10634[ { "Vendor": "Supermicro", "Model": "Intel X550", "Num Ports": "2", "Max. Speed": "10Gbps", "Part No": "AOC-MTG-I2TM-NI22 Rev 1.01A", "Card name": "Supermicro X550T 2P 10G AOC-MTG-I2TM-NI22", "Class": "0x020000", "Vendor ID": "0x8086", "Device ID": "0x1563", "Sub device ID": "0x0920", "Sub vendor ID": "0x15d9" }, { "Vendor": "NIC Model", "Model": "Hypervisor/ CVM", "Num Ports": "Qualified Driver List", "Max. Speed": "Qualified FW versions range" }, { "Vendor": "Supermicro X550T 2P 10G AOC-MTG-I2TM-NI22", "Model": "AHV EL6 (ixgbe)", "Num Ports": "5.6.3 (Latest)", "Max. Speed": "0x80001112 (2.20)" }, { "Vendor": "0x80000aee" }, { "Vendor": "5.0.4 (Min)", "Model": "0x80001112 (2.20)" }, { "Vendor": "0x80000aee" }, { "Vendor": "0x80000a73" }, { "Vendor": "0x800007f6 (Min)" }, { "Vendor": "AHV 7.x (ixgbe)", "Model": "5.1.0-k", "Num Ports": "0x80001743 (3.50) (Latest)" }, { "Vendor": "5.6.3", "Model": "0x80001112 (2.20)" }, { "Vendor": "0x80000aee (1.93)" }, { "Vendor": "5.6.1 (Min)", "Model": "0x80001112 (2.20) (Min)" }, { "Vendor": "0x80000aee" }, { "Vendor": "0x80000a73" }, { "Vendor": "0x800007f6" }, { "Vendor": "AHV 8.x (ixgbe)", "Model": "5.16.5 (Latest)", "Num Ports": "0x80001743 (3.50) (Latest)" }, { "Vendor": "5.12.5 (Min)", "Model": "0x80001743 (3.50)" }, { "Vendor": "Vendor", "Model": "Model", "Num Ports": "Num Ports", "Max. Speed": "Max. Speed", "Part No": "Part No", "Card name": "Card name", "Class": "Class", "Vendor ID": "Vendor ID", "Device ID": "Device ID", "Sub device ID": "Sub device ID", "Sub vendor ID": "Sub vendor ID" }, { "Vendor": "Silicom", "Model": "Intel 82599", "Num Ports": "2", "Max. 
Speed": "10Gbps", "Part No": "PE210G2SPI9AE-XR-NU", "Card name": "Silicom 82599 2P 10G PE210G2SPI9AE-XR-NU", "Class": "0x020000", "Vendor ID": "0x8086", "Device ID": "0x10fb", "Sub device ID": "0x000c", "Sub vendor ID": "0x8086" }, { "Vendor": "Silicom", "Model": "Intel 82599", "Num Ports": "4", "Max. Speed": "10Gbps", "Part No": "PE310G4SPi9LB-XR", "Card name": "Silicom 82599 4P 10G PE310G4SPi9LB-XR", "Class": "0x020000", "Vendor ID": "0x8086", "Device ID": "0x10fb", "Sub device ID": "0x000c", "Sub vendor ID": "0x8086" }, { "Vendor": "NIC Model", "Model": "Hypervisor/CVM", "Num Ports": "Qualified Driver List", "Max. Speed": "Qualified FW Versions range" }, { "Vendor": "Silicom 82599 2P 10G PE210G2SPI9AE-XR-NU", "Model": "ESXi 6.5 (ixgbe)", "Num Ports": "4.5.1-iov (Latest)", "Max. Speed": "0x800006d1 (4.40) (Latest)" }, { "Vendor": "ESXi 6.7 (ixgben)", "Model": "1.10.3.0 (Latest)" }, { "Vendor": "1.8.7" }, { "Vendor": "1.7.17" }, { "Vendor": "1.7.1 (Min)" }, { "Vendor": "ESXi 7.0 (ixgben)", "Model": "1.15.1.0 (Latest)" }, { "Vendor": "1.10.3.0" }, { "Vendor": "1.8.7" } ]
KB9829
Alert - Nutanix Cloud Clusters (NC2) - Cluster Node Joining Timeout
This article explains the possible reasons for receiving the Cluster Node Joining Timeout alert for Nutanix nodes running in the cloud.
When a user initiates a Capacity Increase operation from the NC2 Console, or a Nutanix node is being added as part of the node condemn workflow, these operations go through 5 main stages: Provisioning, Booting, Installing, Joining, Running. During the Joining stage, agents installed on Nutanix nodes in an existing cluster send a request to the cluster service (Genesis) to add a new node to the existing AOS cluster. If the agents cannot add the node successfully within two hours, the Nutanix orchestrator will flag this issue in the Notification Center of the NC2 Console by triggering the below alert: Node being added in the cluster is in Joining stage for longer than expected.
Usually, joining a new node to the cluster takes under 40 minutes. A process that takes two hours means that there is some unexpected issue that requires investigation. This could be due to agents installed on AHV nodes or unexpected issues in the Nutanix cluster services. Both categories of issues require logs inspection to find the root cause of the problem. Engage Nutanix Support https://portal.nutanix.com to assist with resolving this issue. You can also contact the Support Team by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers.
KB13638
Prism Central Web page is not accessible with invalid client_id error message code 400
When a customer tries to open Prism Central from any browser, the below error message is shown: "Invalid client_id (\"b098123e-5198-5fd7-99d3-e07c3b381bd6\")." with code 400, due to Mercury's client ID missing from the IAM DB.
In HAR logs the web request looks like the following: Request URL: https://10.115.34.54:9440/api/iam/authn/v1/oidc/auth?client_id=b098123e-5198-5fd7-99d3-e07c3b381bd6&redirect_uri=https:%2F%2F10.115.34.54:9440%2Fauth_code%3Foriginal_request=https:%252F%252F10.115.34.54:9440%252F&response_type=code&scope=offline_access+openid+profile+email+groups Confirm the error message by looking at the iam-user-authn-xxxx pod logs, either from the live cluster or from the CMSP log bundle. {"log":"time=\"2022-09-07T08:00:38Z\" level=error msg=\"Failed to parse authorization request: Invalid client_id (\\\"b098123e-5198-5fd7-99d3-e07c3b381bd6\\\").\" requestID=f0bf54dc-0437-9d02-9a90-5926d913a193\n","stream":"stderr","time":"2022-09-07T08:00:38.408351491Z"} The above Invalid client_id error means Mercury's client ID b098123e-5198-5fd7-99d3-e07c3b381bd6 is not available in the IAM DB, in the table service_client. The service_client table can be checked by querying the Postgres DB; in this case, it is empty. nutanix@NTNX-10-115-34-54-A-PCVM:~/data/logs$ sudo kubectl get pods -A However, the Mercury client ID is present in the oidc_client zknode: nutanix@NTNX-10-115-34-54-A-PCVM:~$ zkcat /appliance/logical/oidc_client
IMPORTANT: If you run into this issue on newer PC versions (i.e. > pc.2022.9), do due diligence to collect a full logbay bundle BEFORE proceeding to the below workaround. Get approval from an ONCALL Screener before performing zk edits. WORKAROUND: The Invalid client_id error is due to Mercury's client ID missing from the IAM DB. To fix the issue, remove the old Mercury client ID and restart mercury and ikat_control_plane: nutanix@PCVM~: zkrm /appliance/logical/MercurySharedInfo Confirm that Prism IAMv2 authentication is working.
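The restart of mercury and ikat_control_plane mentioned above is not spelled out in this excerpt. A hedged sketch using the standard genesis service-restart pattern on a PCVM (confirm the exact service names on your PC version before running):
# Stop the two services on the PCVM and let cluster start bring them back up
nutanix@PCVM:~$ genesis stop mercury ikat_control_plane
nutanix@PCVM:~$ cluster start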
""ISB-100-2019-05-30"": ""Description""
null
null
null
null
KB4575
AOS Upgrade stuck due to Failed to sync time with peer
AOS Upgrade stuck due to Failed to sync time with peer
During an AOS upgrade, one CVM reboots with the new release but does not release the token because it is waiting on a time sync with another CVM: 2017-06-26 16:37:56 INFO node_manager.py:5110 Name server list without duplicates: 10.XX.XX.70 10.XX.XX.71
Collect the necessary logs to debug the issue. (Please do this for every case regarding this issue, so that Engineering can permanently resolve this problem.) Perform the following on each of the CVMs to collect the logs: `ps auxf | less` and then find the PID of `/usr/bin/python2.7 -B /usr/local/nutanix/cluster/bin/genesis --foreground=true --genesis_self_monitoring=false --logtostderr`. kill -SIGUSR1 GENESIS_PID (This doesn’t impact functioning but dumps the state of each thread to genesis.out.) We expect something like the following in genesis.out. This, along with the logs from all nodes, should help us identify what went wrong. <Greenlet at 0x67512d0: <bound method Thread.__bootstrap of <Thread(Thread-40, started 108335824)>>> Resolution of the Issue Since the CVM is not able to sync with the CVM 10.26.225.53, restarting genesis on CVM 10.26.225.53 will allow the services to start and the upgrade to continue. In some instances it may be necessary to restart genesis on all CVMs that have yet to be upgraded until the NTP/Genesis leader has been moved to a CVM which has been upgraded to the later version of AOS. NOTE: Please tag your cases to ENG-92573 http://jira.nutanix.com/browse/ENG-92573 and share an NCC log bundle for root cause.
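The PID lookup and SIGUSR1 step above can be combined into a single convenience one-liner. This is only a sketch of the manual steps already described; double-check that the PID found by pgrep is the main genesis process before sending the signal:
# Find the main genesis process and dump its thread state to genesis.out
nutanix@cvm$ GENESIS_PID=$(pgrep -f "bin/genesis --foreground" | head -1)
nutanix@cvm$ echo "genesis PID: $GENESIS_PID"   # verify against ps auxf before signalling
nutanix@cvm$ kill -SIGUSR1 $GENESIS_PID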
KB13384
Node removals in On-Prem clusters or Hibernations in NC2 clusters can get stuck on clusters with EC containers due to missing parity egroups
When an egroup still exists with no extents in it, node removals in On-Prem clusters or hibernation in AWS can fail due to the inability to handle evacuating that egroup.
This problem is observed in both NC2 on AWS and On-Prem clusters. Steps for On-Prem clusters Disk removal was stuck due to one egroup nutanix@CVM~$: allssh "grep ExtentGroupsToMigrateFromDisk ~/data/logs/curator.INFO" nutanix@CVM~$:~$ allssh "grep 'Egroups for removable disk' ~/data/logs/curator.INFO" Stargate logs reported metadata lookup failures for fixer ops when trying to rebuild this egroup ~/data/logs/stargate.INFO Identify the failed egroup nutanix@CVM~$: grep -i "Erasure fixer op failed for group" ~/data/logs/stargate.INFO There was no egid metadata for this parity egroup and this egroup was not found on any physical disks nutanix@CVM~$: medusa_printer -lookup egid --egroup_id 405008462 Steps for NC2 on AWS Starting with AOS 6.0.1 onwards, NC2 on AWS offers the capability to hibernate/resume the cluster to/from an AWS S3 bucket. One of the requirements for this functionality is that the cluster must be in a healthy state to allow hibernation. Starting with AOS >=6.1.1, there is an issue where hibernate can get stuck on rare occasions on EC-enabled clusters/containers. Looking at progress_monitor_cli --fetchall: progress_task_list { in curator.INFO would have tasks stuck evacuating extent groups: 50458:I20220504 15:25:30.713479Z 21960 mapreduce_job.cc:634] NumExtentGroupsToEvacuate[2305843009213693957] = 1 In stargate.INFO we would see something like this for an egroup: I20220504 17:24:40.536121Z 23339 vdisk_micro_egroup_fixer_op.cc:7333] vdisk_id=5546 operation_id=10629858 egroup_id=17986406 EC op 10629860 on erasure egroup 17986406 started, The medusa egid lookup for this egroup would look like: nutanix@NTNX-CVM:10.x.x.x:~$ medusa_printer -lookup egid -egroup_id 17986406 1. This egroup would not have extents. 2. The parity egroup for this egroup may not exist: nutanix@NTNX-CVM:10.x.x.x:~$ medusa_printer -lookup egid -egroup_id 20032422
This workaround applies to both hibernation in NC2 clusters and node removals in on-prem clusters. For the operation to proceed, as a workaround it is necessary to delete the medusa egid entry; after this, the hibernation (or node removal) should continue. It is necessary to consult with Devex/Engineering/Support Tech Leads in order to proceed. Therefore, it is advisable to open an ONCALL/TH for the engagement. Please refer to the following page for the ONCALL process https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=92021962
KB16477
Lenovo - Prism displays incorrect disk slot numbers (4-7) on Lenovo HX650 V3 nodes
This KB describes an issue where Prism displays incorrect disk slot numbers on Lenovo HX650 V3 nodes.
Prism may display disks in incorrect slots on the Prism Hardware diagram. For example, in the below diagram the disk in physical slot 6 is shown as slot 4 in the Prism UI. Similarly, in the below diagram the disk in physical slot 4 is shown as slot 6. Note: The swapped slot numbers occur only between slots 4-7. However, the LED serviceability of the disks in these scenarios works correctly. Thus, the disk location can be verified by turning on the LED for any drive between slots 4 to 7 in Prism. For a drive which has gone bad, if you turn the LED on from the Prism UI, the LED will light up correctly for the impacted drive.
This is a cosmetic issue and it should not affect any operations. This issue occurs due to cabling changes on Lenovo nodes, which cause a layout mismatch for slots 4 to 7. This issue is fixed in Foundation 5.6.1 and Foundation Platforms 2.15.1.
KB16639
Customizing logging-operator-logging application deployment (fluentBit, fluentD)
Customizing logging-operator-logging application deployment (fluentBit, fluentD)
Customizing the logging-operator-logging components is not possible through the usual Kommander --installer-config YAML file. For cases where components of logging-operator-logging need to be customized, such as FluentD's resource limits or number of replicas, see the following examples.
Create a configMap named logging-operator-logging-overrides that contains the desired customized values. For example, fluentd's resource request/limit:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logging-operator-logging-overrides
  namespace: kommander
data:
  values.yaml: |
    fluentd:
      resources:
        limits:
          cpu: 1
          memory: 2000Mi
        requests:
          cpu: 1
          memory: 1500Mi

or replicas:

data:
  values.yaml: |
    fluentd:
      replicas: 3

This would be picked up by the logging-operator and would restart fluentd with the new configuration. You can also describe the default logging-operator-logging configMap to see other configurations applied to fluentd and fluentbit.
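To apply and then verify the override, standard kubectl commands can be used from wherever this Kommander cluster is normally managed. The file name below is only an example for the manifest shown above:
# Save the ConfigMap manifest as logging-overrides.yaml (example name), then apply it
kubectl apply -f logging-overrides.yaml
# Confirm the ConfigMap exists in the kommander namespace and inspect its contents
kubectl -n kommander get configmap logging-operator-logging-overrides -o yaml
# Watch the fluentd pods restart with the new configuration (namespace may differ in your deployment)
kubectl -n kommander get pods -w | grep fluentd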
KB14234
Alert - A160054 - File Server Partner Server Connectivity Down
Troubleshooting and resolving alert FileServerPartnerServerNotReachable.
This Nutanix article provides the information required for troubleshooting the alert FileServerPartnerServerNotReachable for your Nutanix Files cluster. Alert overview If connectivity between a Nutanix Files cluster and its partner server is lost, this alert will trigger. Sample alert Block Serial Number: 23SMXXXXXXXX Output messaging [ { "Check ID": "Partner server is not responding to file notifications." }, { "Check ID": "Failed to reach partner server from File Server VM." }, { "Check ID": "Ensure that the partner server is functioning and that there is connectivity between File Server VMs and the partner server." }, { "Check ID": "File server stopped notifying file operation events" }, { "Check ID": "A160054" }, { "Check ID": "File Server Partner Server Connectivity Down" }, { "Check ID": "Partner server {partner_server_host} is not responding from file server {file_server_name}." } ]
Troubleshooting Ensure Partner Server is up and accessible.Log in to FSVM and ensure connectivity to the Partner Server via network tools. Use ping from an FSVM to the remote/partner serverUse nc to test connectivity to a specific port nutanix@FSVM:~$ nc -v ##.##.##.## <port> Resolving the issueIf you feel this alert is incorrect, need assistance, or if the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Collect additional information and attach it to the support case. Collecting additional informationRefer to “Logging Onto A File ServerVM” https://portal.nutanix.com/#/page/docs/details?targetId=Files-v3_6:fil-file-server-fsvm-login-t.html section of the Setup Guide for instructions on how to SSH into a file server VM (FSVM). Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Collect the Logbay bundle from Minerva leader using the following command (for more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691): Note: Execute "<afs> info.get_leader" command from one of the CVMs (Controller VMs) to get the minerva leader IP. Using File Server VM Name: nutanix@cvm$ logbay collect -t file_server_logs -O file_server_name_list= <FSVM name> Using File Server VM IP: nutanix@cvm$ logbay collect -t file_server_logs -O file_server_vm_list=<FSVM IP> Attaching files to the case​​​​​Attach the files at the bottom of the support case on the support portal. If the size of the logs being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB-1294 https://portal.nutanix.com/kb/1294 for instructions.
""Verify all the services in CVM (Controller VM)
start and stop cluster services\t\t\tNote: \""cluster stop\"" will stop all cluster. It is a disruptive command not to be used on production cluster"": ""Command to check if port is open on remote host from ESXi""
null
null
null
KB14925
Blueprint Launch or Runbook Execution fail
Launching a Blueprint fails, or executing a runbook gets stuck or results in an error.
Symptoms In Calm, Blueprint or Runbook launch fails at different stages with various error messages, such as "Script execution failed with status 255!" or "Script execution has failed with system error Worker failed during script execution which is an irrecoverable sate. Things are not good" Sometimes, it might take very long time for Runbook execution to complete, or it might be stuck in a pending state.Jove logs may show "removing worker from active worker" nutanix@PCVM:/home/docker/nucalm/log$ grep "removing worker from active workers hercules-" jove* jove.log.10:2024-04-13 18:59:28.33703Z INFO jove 207361 manager.go:404 ces/jove/worker.(*Manager).deactivateWorker [component:WorkerManager][interface:0.0.0.0][port:4115][workerClass:hercules] removing worker from active workers hercules-1-283b4aaa-9ccf-4744-aa5f-2a48f3181723 jove.log.10:2024-04-13 18:59:28.36937Z INFO jove 207361 manager.go:404 ces/jove/worker.(*Manager).deactivateWorker [component:WorkerManager][interface:0.0.0.0][port:4115][workerClass:hercules] removing worker from active workers hercules-0-283b4aaa-9ccf-4744-aa5f-2a48f3181723 jove.log.10:2024-04-13 19:56:35.54659Z INFO jove 207361 manager.go:404 ces/jove/worker.(*Manager).deactivateWorker [component:WorkerManager][interface:0.0.0.0][port:4115][workerClass:hercules] removing worker from active workers hercules-0-283b4aaa-9ccf-4744-aa5f-2a48f3181723 Tracebacks may be present in /home/docker/nucalm/log/hercules.log Traceback (most recent call last): File "/home/calm/venv/lib/python2.7/site-packages/gevent/greenlet.py", line 534, in run result = self._run(*self.args, **self.kwargs) File "/tmp/pip-install-7gbMNd/calm/calm/server/hercules/greenlets/blueprint_launch.py", line 646, in _run File "/tmp/pip-install-7gbMNd/calm/calm/server/hercules/greenlets/blueprint_launch.py", line 293, in bp_launch File "/tmp/pip-install-7gbMNd/calm/calm/server/hercules/greenlets/blueprint_launch.py", line 515, in _patch_runtime_editables File "/tmp/pip-install-7gbMNd/calm/calm/common/api_helpers/common_helper.py", line 153, in validate_objects File "/home/calm/venv/lib/python2.7/site-packages/schematics/models.py", line 259, in validate strict=strict) File "/home/calm/venv/lib/python2.7/site-packages/schematics/validate.py", line 41, in validate context=context, partial=partial, strict=strict) File "/home/calm/venv/lib/python2.7/site-packages/schematics/transforms.py", line 109, in import_loop raw_value = field_converter(field, raw_value) File "/home/calm/venv/lib/python2.7/site-packages/schematics/validate.py", line 35, in field_converter field.validate(value) File "/home/calm/venv/lib/python2.7/site-packages/schematics/types/compound.py", line 27, in validate validator(value) File "/home/calm/venv/lib/python2.7/site-packages/schematics/types/compound.py", line 68, in validate_model model_instance.validate() File "/home/calm/venv/lib/python2.7/site-packages/schematics/models.py", line 259, in validate strict=strict) File "/home/calm/venv/lib/python2.7/site-packages/schematics/validate.py", line 41, in validate context=context, partial=partial, strict=strict) File "/home/calm/venv/lib/python2.7/site-packages/schematics/transforms.py", line 109, in import_loop raw_value = field_converter(field, raw_value) File "/home/calm/venv/lib/python2.7/site-packages/schematics/validate.py", line 35, in field_converter field.validate(value) File "/home/calm/venv/lib/python2.7/site-packages/schematics/types/compound.py", line 27, in validate validator(value) File 
"/home/calm/venv/lib/python2.7/site-packages/schematics/types/compound.py", line 68, in validate_model model_instance.validate() File "/home/calm/venv/lib/python2.7/site-packages/schematics/models.py", line 259, in validate strict=strict) File "/home/calm/venv/lib/python2.7/site-packages/schematics/validate.py", line 41, in validate context=context, partial=partial, strict=strict) File "/home/calm/venv/lib/python2.7/site-packages/schematics/transforms.py", line 109, in import_loop raw_value = field_converter(field, raw_value) File "/home/calm/venv/lib/python2.7/site-packages/schematics/validate.py", line 35, in field_converter field.validate(value) File "/home/calm/venv/lib/python2.7/site-packages/schematics/types/compound.py", line 27, in validate validator(value) File "/home/calm/venv/lib/python2.7/site-packages/schematics/types/compound.py", line 181, in validate_items self.field.validate(item) File "/home/calm/venv/lib/python2.7/site-packages/schematics/types/compound.py", line 27, in validate validator(value) File "/home/calm/venv/lib/python2.7/site-packages/schematics/types/compound.py", line 68, in validate_model model_instance.validate() AttributeError: 'NoneType' object has no attribute 'validate' <BPLaunchWorker at 0x7f14db6c0eb0> failed with AttributeError Identification This may happen due to duplicate workers registered with Jove service in nucalm container. To identify this run the following steps:1. SSH to PCVM and exec into nucalm container nutanix@PCVM$ docker exec -it nucalm bash Run the activate command and to to cshell.py [root@ntnx-a-pcvm /]# activate Run the following script in cshell.py ws_entries = model.WorkerState.query() Note: The above script can be copy-pasted into the cshell session.If the output of the script returns an output similar to the below containing some UUIDs, then proceed to the solution steps. If no output is seen engage an STL or open an ONCALL to further investigate the issue. In [1]: ws_entries = model.WorkerState.query() An empty output looks like below: In [1]: ws_entries = model.WorkerState.query()
To fix the duplicate workers, we must stop the Jove service, clean the existing worker state for Jove, and then start Jove again. Step 1: Stop the Jove service (on all 3 nodes if it is a scale-out deployment). nutanix@PCVM:~$ docker exec -it epsilon bash Step 2: Clean the existing worker state for Jove (on any one node, regardless of scale-out). nutanix@PCVM:~$ docker exec -it nucalm bash Step 3: Start Jove (on all three nodes if it is a scale-out deployment). nutanix@PCVM:~$ docker exec -it epsilon bash Verification: Run the identification step again in the nucalm container to confirm there are no duplicate workers present.
KB11317
There Are Feature Violations, No Licenses Found. License Feature Violations or Expired Licenses on Xi-Beam
This article describes an issue where a feature violation occurs on Xi Beam for on-premises clusters and public clouds.
On the Xi Beam console, there might be a license feature violation alert or a license expired alert while accessing the cost management features for Nutanix on-premises clusters and public cloud. This can be seen when navigating to the Beam app from Cost Management in Prism Central (PC). Feature violations occur in the following scenarios: There is an active public cloud subscription but you are using the Nutanix features in Beam without a Prism Ultimate license. There is an active Prism Ultimate license but you are using the public cloud features in Beam without an active subscription.
Ensure a valid Prism Ultimate license is applied and the subscription to use the Beam features for your Nutanix on-premises and public clouds is active. If either a license or subscription is missing, a banner is displayed at the top of the Beam page indicating that there are feature violations.
KB13792
NGT Installation fails with "This installation package is not supported by this processor type. Contact your product vendor"
NGT Installation fails with "This installation package is not supported by this processor type. Contact your product vendor"
When the installation of NGT is invoked, the following error may be presented: 0x80070661 - This installation package is not supported by this processor type. Contact your product vendor
This error is shown because NGT is supported only on 64-bit operating systems. To resolve this, migrate the workload to a 64-bit guest VM and install NGT there.
KB3448
Nutanix Disaster Recovery and Application Consistent VM snapshot issues on ESXi
Nutanix Disaster Recovery and Application Consistent snapshot issues on ESXi
Note: Nutanix Disaster Recovery is formerly known as Leap. Issues: An application-consistent snapshot fails using Nutanix Guest Tools (NGT) with an error indicating that a volume is not supported by the VSS provider. You are able to take an application-consistent VSS Snapshot using VMware Tools for this same logical volume. Note: The issue is resolved in AOS 5.0. Error in Nutanix_Guest_Tools log (C:\Program Files\Nutanix\Logs\): The instance of IVSSBackupComponent failed to Add volume [\\?\Volume {f3eaa844-727c-11e3-86f8-806e6f6e6963}] to snapshot set. This error occurs because the disk serial number is reported as NULL, which you see when you enable VSS tracing as described in Microsoft KB 887013 https://support.microsoft.com/en-ca/help/887013/how-to-enable-the-volume-shadow-copy-service-s-debug-tracing-features. [ 8:54:49.056 P:39C0 T:1718 CORHWUTC(0460) HWDIAG] Page 80 info for drive \\.\PHYSICALDRIVE0 There is also a DR scenario where the disk.EnableUUID option must be set to true. The symptoms include: Some VMs fail to power on after a cluster "Test Failover" to the DR site. VMs have NGT installed. The hypervisor is ESXi. The DR virtual network is added to the VM successfully. Error in Uhura log: 2020-07-10 16:29:10 ERROR base_task.py:1021 Internal error while executing state CHANGE_POWER_STATE for task VmChangePowerState with uuid c884cb6d-790e-5ebf-b899-26c340930439; Error: Error creating host specific VM change power state task. Error: CannotAccessNetwork: Network interface 'Network adapter 1' uses network 'DVSwitch: b5 69 0b 50 5e 29 06 80-bd 1a 59 b7 10 3e b3 c8', which is not accessible. The Hyperint log for a failed VM shows "Reverting to snapshot". INFO [hyperint-worker-1324] 2020-07-09 16:09:59,742 VsphereVmRegistrationOp.java (line 388) VM WebTest2 is registered successfully The Hyperint log for a successful VM has only one entry. INFO [hyperint-worker-1393] 2020-07-09 18:21:38,451 VsphereVmRegistrationOp.java (line 424) Id of the newly registered VM GlobalTS08 is 66f3bb04-293e-406e-8bda-11bab16cc4c9 The Cerebro h-traces show that the snapshot workflow took a hypervisor snapshot despite NGT being installed. Error in Cerebro h-traces: "NGT VSS capability is not enabled for VM <uuid>"
Check the disk.EnableUUID option for the VM. This must be set to True so that ESXi reports a SCSI serial number. To change the disk.EnableUUID option for the VM, do the following: 1. Power off the VM. 2. Right-click the VM and select Edit Settings. 3. Select the Options tab. 4. Under Advanced Options, click General. 5. Click the Configure Parameters button. 6. Find disk.EnableUUID in the table and change the value to TRUE. 7. Click OK twice to save the changes. 8. Power on the VM. Once the VM has booted, confirm that you are able to take an application-consistent snapshot using Nutanix Guest Tools.
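If you prefer to verify the setting without opening the vSphere client, the VM's .vmx file can be checked directly from the ESXi shell. This is a minimal sketch; the datastore and VM folder names are placeholders, and the exact capitalization of the key in the .vmx file may vary:
[root@esxi:~] grep -i 'disk.enableuuid' /vmfs/volumes/<datastore_name>/<vm_name>/<vm_name>.vmx
disk.EnableUUID = "TRUE"     # expected line once the option has been set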
KB2781
NX Hardware [Memory] – MCE error to DIMM Channel Mapping
This article lists the mapping between the MCE error and DIMM channel.
In instances where the node hangs on a Haswell system (G4), ESXi MCE errors may appear in the vmkernel logs or on a PSOD. When decoded, the MCE error relates to memory DIMM issues. Engineering has done some testing and has provided info for an NX-3175-G4 (X10DRU-i MB). Below is a summary of the DIMMs corresponding to Channel 0 and Channel 1. Channel 0: DIMMA1, DIMMA2, DIMMA3, DIMMC1, DIMMC2, DIMMC3, DIMME1, DIMME2, DIMME3, DIMMG1, DIMMG2, DIMMG3. Channel 1: DIMMB1, DIMMB2, DIMMB3, DIMMD1, DIMMD2, DIMMD3, DIMMF1, DIMMF2, DIMMF3, DIMMH1, DIMMH2, DIMMH3. Other G4 models based on the X10DRT-P also have the same result as above. The difference is that this motherboard has 16 DIMM slots (2 DIMMs/channel). For an NX-3060-G5 using an Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz, you can identify the DIMM that caused an MCA Memory Controller Scrubbing Error using the following logic. Example of the error in vmkernel.log: 2020-09-01T06:18:48.225Z cpu10:66184)MCA: 207: CE Intr G0 Bb S8c000048000800c2 A827247f80 M90008a308a3108c Memory Controller Scrubbing Error on Channel 2. Notice that the message originates on "cpu10." The E5-2620 CPU is an 8-core model with hyperthreading, so cpu0-15 belong to physical CPU1 and cpu16-31 belong to physical CPU2. Then, channels 0-3 map to DIMM positions A-D on physical CPU1 or DIMM positions E-H on physical CPU2. Hence, the culprit for the MCA seen in the vmkernel logs is a DIMM in position C (Channel 2) on physical CPU1.
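The NX-3060-G5 decoding logic above can be expressed as a small shell sketch. The core layout (8 cores with hyperthreading, so cpu0-15 map to physical CPU1 and cpu16-31 to physical CPU2) and the channel-to-bank mapping are taken from the description; the cpu and channel values below are the ones from the example vmkernel line:
cpu=10; channel=2                                   # from "cpu10" and "Channel 2" in the MCA line
if [ "$cpu" -lt 16 ]; then socket=1; banks="A B C D"; else socket=2; banks="E F G H"; fi
bank=$(echo $banks | cut -d' ' -f$((channel + 1)))  # channel 0 -> first bank letter, channel 2 -> third, etc.
echo "Suspect DIMM bank ${bank} on physical CPU${socket}"   # prints: Suspect DIMM bank C on physical CPU1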
Some MCA or MCE errors can show as FATAL while no fatal event is actually generated. VMware has acknowledged this is a bug with the MCE code, fixed in ESXi 6.0 Update 2 Build 3620759 and ESXi 5.5 patch ESXi550-201602001 Build 3568722. Below is the test data generated by Engineering per DIMM slot under an Intel CPU E5-2998 V3. CPU1 DIMMs always report on CPU1, while CPU2 DIMMs report on varying CPUs from 34 to 62. The corresponding vmkernel log entries are listed as well: Test data: Generated CECC error on DIMM; vmkernel reports as below:
KB5902
[SRM] SRA 2.4.1 - Virtual Machine file [datastore_name]test_vm/test_vm.vmx cannot be found on recovered datastore.
If a VM that is protected by SRM on a protected vStore has a .hlog file in the VM folder, the SRM test failover fails.
Test Failover Executed from SRM using SRA 2.4.1 fails for some VMs with the error: Virtual Machine file [Datastore_name]<vm_name>/vm_name.vmx cannot be found on the recovered datastore. Notes: This KB is related to an issue where the "Replicate recent changes" option was selected when running the Test Failover from SRM.This applies only to SRA 2.4.1. Observations: When a VM protected by SRM is vMotioned (Change hosts only. Not storage migrated), the VM will fail to recover during the SRM test failover with the above error.On vMotion of a VM, a .hlog file is created in the VM folder.This file is usually created on a storage vMotion as per VMware Documentation. (We are not sure why this file is seen on a vMotion.)Running a test failover from SRM fails with "VMX file cannot be found on the recovered datastore" for those VMs that have a .hlog file in the VM folder.We have observed that if the .hlog file is removed from the VM folder, and a test failover is performed, the VM is successfully restored and registered. Why is this issue seen on SRA 2.4.1 only: With SRA 2.4.1, we have a slightly different implementation. We create a test_failover folder and copy the latest snapshot from the .snapshot folder to this new test_failover folder.VMs are registered from this test_failover folder when an SRM Test failover is run.From the ESX side, it looks like the SRM implementation treats files in a folder that are not on a .snapshot folder differently.In 2.4.1, Nutanix creates a Test_failover folder and copies the latest snapshot from the .snapshot folder to the Test_failover folder. If the folder contains the .hlog file, SRM treats this VM differently and fails the recovery and registration of the VM with the error seen in the screenshot.The same configuration and workflow with SRA 2.4 are not affected. The difference between SRA 2.4 and SRA 2.4.1 is that only in SRA 2.4.1 do we create the Test_failover folder. This is done so that, while a test is in progress, the underlying replication is not affected.It seems like SRM workflow is affected if the .hlog file is found within a subfolder on the datastore. To determine if you are hitting this issue: Check to see if there is a .hlog file in the VM folder on the customer's setup.Check with VMware if it is okay to delete or move the file. (In our test, deleting the file has not affected the VMs. But since this file is created by VMware, it is best to consult with them before deleting these files.)Running the Test failover on the VMs after the .hlog file is moved/deleted successfully restores the VM and registers it on the host. Some of the events and logs that can be checked to correlate: Check the events on vCenter to determine if any that are on the protected vstore were vMotioned. Relocate virtual machine,windows_NuTest_UVM_5,DRTENGSRM.LOCAL\\Administrator,"7/24/18, 10:30:09 AM GMT","7/24/18, 10:30:27 AM GMT" Run test recovery. Notice that these VMs that had been migrated failed. Check the screenshot above. Observe the <vm_name>.hlog file created following the migration. [root@esxi:/vmfs/volumes/e259214e-5f79da26] ls -lth */* | grep windows_sql_madhu-67176ee5.hlog Alternatively, you can also grep for .hlog file on the datastore using the command below. Log in to the ESX host using SSH and root.cd to the datastore that is protected. [root@esxi:/] cd /vmfs/volumes/protected_vstore grep for the .hlog files in all the VM folders on the datastore. 
[root@esxi:/vmfs/volumes/e259214e-5f79da26] ls -lth */* | grep .hlog The VMX configuration file has a reference to the .hlog file. A file with the name seen in the VMX file is only created by a migration task. migrate.hostlog = "./windows_sql_madhu-67176ee5.hlog"
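As an alternative to the per-folder ls shown above, a single find command run from the ESXi shell lists every .hlog file on the protected datastore. A sketch (the datastore name is a placeholder):
[root@esxi:~] find /vmfs/volumes/<protected_vstore> -name '*.hlog'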
Workaround: AOS version below 5.5: If you have DRS on the ESX cluster set to automated, for the period of the test, set DRS to Manual. You can move the <vm_name>.hlog files out of the VM folder and re-run the SRM Test_failover. After the test is completed, set the DRS rules back to the initial setting. AOS 5.5 and above: The following workaround can be applied to the cluster on both the source and destination sites. Configure a Cerebro gflag to skip the .hlog file on restore. Restart the Cerebro service across all the nodes in the cluster. Configure a Cerebro gflag to skip the .hlog file on restore: Add the below line in the "~/config/cerebro.gflag" file on each node of the source and destination clusters. If the file is not there, create one and add the below line. It will inherit the default permissions for the Nutanix user. (Note: The file should be there if this is a working SRM setup.) --cerebro_regex_list_to_exclude_files_on_restore=^\/[^\/]+\/.*\.iorm.*;^\/[^\/]+\/.*\.lck.*;^\/[^\/]+\/.*\.hlog If the file already exists, append the below line to the existing line. ;^\/[^\/]+\/.*\.hlog Restart cerebro service across the cluster: Note: Restarting the cerebro service will abort all replications or migrations that are in progress. If there are any migrations in progress, wait for the migrations to complete before restarting this service. nutanix@cvm$ allssh genesis stop cerebro; cluster start Note: It is possible the system will not recognize the gflag if created in the cerebro.gflag file as mentioned above. To validate if the gflag is set, run the below NCC command. nutanix@cvm$ ncc health_checks system_checks gflags_diff_check The output should list the gflag if recognized and set. If the gflag is not modified by the steps above, the NCC check will pass. This would indicate that the gflag was not set. Follow KB 1071 http://portal.nutanix.com/kb/1071 to update the gflag using the "edit-aos-gflags" function. This still requires the restart of the cerebro service across the cluster.
KB16704
Version fetch task failed due to RIM V2 verification failed with LCM Local Web Server
Performing LCM inventory fails with the error "Version fetch task failed," while logs indicate the SHA256 does not match and RIM V2 verification failed.
This article applies if you are using a Local Web Server. After extracting LCM bundles into the local web server, performing LCM inventory fails at the version fetch task with "LCM was not able to fetch the available version for modules: [u'NVMe Drives (Power Cycle)', u'NICs', u'M.2 Drives', u'SMC Redpool BMC', u'NICs', u'SMC Redpool BIOS']": 2024-04-02 14:09:08,761Z ERROR 17742704 exception.py:86 LCM Exception [LcmExceptionHandler]: Version fetch task failed. LCM was not able to fetch the available version for modules: [u'NVMe Drives (Power Cycle)', u'NICs', u'M.2 Drives', u'SMC Redpool BMC', u'NICs', u'SMC Redpool BIOS'] In /home/nutanix/data/logs/lcm_ops.out on the LCM leader, "RIM V2 verification failed" was logged because the SHA256 of the metadata does not match the signature: 2024-04-02 14:07:23,545Z INFO 80952752 zookeeper_session.py:625 [2, 508b2b56-db34-4549-7cce-caf4c327bf26] ZK session establishment complete, sessionId=0x28dd22af020bc85, negotiated timeout=20 secs For an earlier daily successful inventory, before extracting the latest firmware bundle, the signature for the same NIC version was verified: 2024-04-02 07:03:23,070Z INFO 80952752 zookeeper_session.py:625 [2, 8f7dfdc6-2fe0-43ba-6eee-a30e1bef6776] ZK session establishment complete, sessionId=0x28dd22af020ab99, negotiated timeout=20 secs
Validate the SHA256 for the metadata file reported by the RIM V2 verification failure on the local web server. If the SHA256 of the file on the local web server matches the incorrect value in the RIM V2 verification failure log, the metadata.json file might be corrupted. Run the Get-FileHash command on the Windows IIS local web server to verify the SHA256 checksum of the metadata file. The output for metadata.json under location release\builds\nx-builds\nic\12.27.1016\ shows the same hash value reported by the LCM inventory and the failed RIM V2 verification. For example, we get the SHA256 of the metadata file, a687ffc95b9f64004e84b5e62a9facc4e2a08fa45292dadf7e9b06e2d987dd30, as shown in the screenshot. This is the same SHA256 value that is reported by the lcm_ops.out log in the example above. Replace the corrupt upgrade bundle files with the correct upgrade files and retry the LCM inventory. Note: Do not use the 7-Zip tool to extract the TAR file. Use WinZip version 22 or later (disable the TAR file smart CR/LF option), or extract the TAR file manually with the tar -xvf command. If you have extraction issues, see Troubleshooting File Extraction for a Web Server https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide:top-lcm-darksite-trouble-t.html. For instructions on verifying the sha256sum value, refer to KB-11939 https://portal.nutanix.com/kb/11939. If the SHA256 for metadata.json on the local web server is not the same as the value in the log, the data might have been changed between the local web server and LCM. Contact Nutanix Support https://portal.nutanix.com/page/home to resolve the issue. KB-6861 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000LMTiCAO describes how LCM Inventory may fail with a checksum mismatch error when a proxy is enabled.
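If the dark-site web server runs Linux, or you have a Linux client that can reach it, the checksum can also be verified from a shell. This is a sketch: the local path and URL are placeholders, and the value to compare against is the hash recorded in the RIM metadata/signature, not the example hash above:
$ sha256sum /<webserver_document_root>/release/builds/nx-builds/nic/<version>/metadata.json      # hash of the file on disk
$ curl -s http://<webserver>/release/builds/nx-builds/nic/<version>/metadata.json | sha256sum    # hash of the file exactly as served to LCM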
KB5961
Prism CPU Disparity with Guest Virtual Machines
CPU and/or Memory usage in Prism display differently than what the virtual machine reports.
One may notice that the CPU and/or memory usage that is displayed in Prism is different from the CPU and/or memory usage that is reported from within a guest virtual machine.
Prism (by way of the Arithmos daemon) polls the hypervisor for the guest virtual machine CPU and memory usage statistics. As such, what Prism is displaying is what the hypervisor is reporting. The following is VMware's API reference for CPU usage. CPU usage as a percentage during the interval https://www.vmware.com/support/developer/converter-sdk/conv60_apireference/vim.HistoricalInterval.html. VM - Amount of actively used virtual CPU, as a percentage of total available CPU. This is the host's view of the CPU usage, not the guest operating system view. It is the average CPU utilization over all available virtual CPUs in the virtual machine. For example, if a virtual machine with one virtual CPU is running on a host that has four physical CPUs and the CPU usage is 100%, the virtual machine is using one physical CPU completely. virtual CPU usage = usagemhz / (# of virtual CPUs x core frequency) Host - Actively used CPU of the host, as a percentage of the total available CPU. Active CPU is approximately equal to the ratio of the used CPU to the available CPU. available CPU = # of physical CPUs x clock rate 100% represents all CPUs on the host. For example, if a four-CPU host is running a virtual machine with two CPUs, and the usage is 50%, the host is using two CPUs completely. Cluster - Sum of actively used CPU of all virtual machines in the cluster, as a percentage of the total available CPU. CPU Usage = CPU usagemhz / effectivecpu Prism reports the host's view of the CPU usage, not the guest operating system view. It is the average CPU utilization over all available virtual CPUs in the virtual machine. For example, if a virtual machine with one virtual CPU is running on a host that has four physical CPUs and the CPU usage is 100%, the virtual machine is using one physical CPU completely. VMware Docs for CPU: CPU Counters https://vdc-download.vmware.com/vmwb-repository/dcr-public/b50dcbbf-051d-4204-a3e7-e1b618c1e384/538cf2ec-b34f-4bae-a332-3820ef9e7773/cpu_counters.html VMware Docs for Memory: Memory Counters https://vdc-repo.vmware.com/vmwb-repository/dcr-public/5bc36046-6569-42b8-a60d-4b175d91fa9d/56a2807a-c2a0-4971-8cd8-ee5440e17b19/doc/memory_counters.html
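As a worked example of the VM formula above (illustrative numbers only, not taken from the article): a VM with 2 vCPUs on 2.5 GHz physical cores that is actively consuming 2500 MHz reports 50% CPU usage. A quick sketch:
$ awk 'BEGIN { usagemhz=2500; vcpus=2; core_mhz=2500; printf "virtual CPU usage = %.0f%%\n", 100 * usagemhz / (vcpus * core_mhz) }'
virtual CPU usage = 50%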
KB10089
NCC Health Check: ovs_bond_config
The NCC health check ovs_bond_config verifies if all host NICs that are added to the br0-up bond in the br0 bridge still exist.
The NCC health check ovs_bond_config verifies if all host NICs that are added to the br0-up bond in the br0 bridge still exist. This check was introduced in NCC 4.1.0 and applies only to the AHV hypervisor. This check is scheduled to run once every 24 hours on the CVM and does not generate an alert. Running the NCC check The check can be run as part of a complete NCC by running: nutanix@CVM$ ncc health_checks run_all It can also be run individually as follows: nutanix@CVM$ ncc health_checks network_checks switch_checks ovs_bond_config You can also run the check from the Prism web console Health page. Select Actions > Run Checks > All Checks > Run. Interpreting the check results If the check results in a PASS, the NICs that are part of the br0-up bond in the br0 bridge have a valid configuration: Running : health_checks network_checks switch_checks ovs_bond_config If one or more AHV hosts have unknown devices added to br0, the check will result in a FAIL. Running : health_checks network_checks switch_checks ovs_bond_config Output messaging: [ { "Check ID": "Health check to track bonds with no existing network devices." }, { "Check ID": "There are some interface(s) in the OVS bond for which no network device(s) exist." }, { "Check ID": "Remove the interfaces having no network devices using OVS commands." }, { "Check ID": "If some interfaces don't have network devices configured, OVS will try to reconfigure the bonds which can cause race condition during deletion." } ]
If the check fails, then most likely a NIC or a motherboard with a built-in NIC was replaced, but the nic_replace script was not run. A stale NIC can still be present in the bond configuration (see the inspection sketch below). Check the Configuring a NIC in AHV chapter of the Network Interface Card Replacement (AHV) https://portal.nutanix.com/page/documents/list?type=hardware&filterKey=Component&filterVal=NIC guide for your server model to find what steps should be performed post-NIC replacement. Example: NX 8150 G7 https://portal.nutanix.com/page/documents/details/?targetId=NX8150G7-NIC-Replacement-AHV:ahv-nic-add-ahv-t.html . If the steps or solution described in this article do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Collect and attach the following information to the support case: A complete NCC report: nutanix@cvm:~$ ncc health_checks run_all A log bundle generated from the cluster. This can be collected through Prism Web Console's health page. Select Actions > Collect Logs. Logs can also be collected through the command line using logbay ( KB 6691 https://portal.nutanix.com/kb/6691 - NCC - Logbay Quickstart Guide): nutanix@cvm:~$ logbay collect
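To inspect the current bond membership and spot a stale entry left behind by a replaced NIC, the following commands can help. This is a sketch: it assumes the default br0/br0-up names referenced by this check and the standard 192.168.5.1 internal CVM-to-host address on AHV; clusters managed through Prism virtual switches may present this differently:
nutanix@cvm$ manage_ovs show_uplinks                                    # bond membership as seen from the CVM
nutanix@cvm$ ssh [email protected] "ovs-appctl bond/show br0-up"          # per-member state reported by OVS on the local AHV host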
KB14982
NCC - Frequently Asked Questions (FAQ)
This is a generic troubleshooting and FAQ KB for NCC, focusing on SREs.
This document serves as a comprehensive resource for commonly asked questions and a troubleshooting guide on NCC for SREs.
What is NCC? Nutanix Cluster Check (NCC) is cluster-resident software that can help diagnose cluster health and identify configurations qualified and recommended by Nutanix. NCC continuously and proactively runs hundreds of checks and takes action to resolve issues. NCC can be run on Prism Element (PE) or Prism Central (PC) if the individual nodes are up, regardless of the cluster state. When run from the Controller VM (CVM) command line or web console, NCC generates a log file with the output of the checks the user selects. What is the difference between NCC Checks and NCC Plugins? An NCC Check is a purpose-specific or component-specific code block within a module. It serves the function of carrying out specific tasks or evaluating specific components, e.g., Fan Speed Low Check. An NCC Plugin can encompass a single check or a collection of related checks that are designed to work together, e.g., ipmi_sensor_threshold_check. What are external checks in NCC? In the context of NCC health monitoring, an alert can be triggered based on the result of a health check, or it can be generated independently by a specific component such as Stargate, Cerebro, Curator, and others. When a component directly generates an alert, it is called an "external check." For more information, refer to the below documents: What actually *is* an NCC "check"? https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=14475292 How do External Checks and Health Checks Work? https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=92026343 How to check the NCC version installed in the cluster To check the NCC version installed in a cluster, run the following command from any CVM: nutanix@cvm:~$ ncc --version Which Hypervisors are supported by NCC? NCC is supported by: AHV, ESXi, and Hyper-V. How to run NCC on Prism Element and Prism Central Note: Select the desired NCC version from the drop-down menu on the right side. You can run a subset of health checks instead of running all the health checks. A full list of individual health check plugins is shown below: nutanix@cvm:~$ ncc health_checks To run the NCC Health check on Prism Element, click here https://portal.nutanix.com/page/documents/details?targetId=NCC-Guide-NCC-v4_6:ncc-ncc-checks-running-t.html. To run the NCC Health check on Prism Central, click here https://portal.nutanix.com/page/documents/details?targetId=NCC-Guide-NCC-v4_6:ncc-ncc-checks-running-pc-t.html. How to trigger NCC health checks remotely on the customer cluster To trigger an NCC Health check on the customer cluster if Pulse is enabled, follow the below steps: Open the case in Salesforce. Click on the Insights Log Collection button: The Nutanix portal for the target cluster will be opened. Click on New Collection on the screen. Check the NCC Health Checks option. Click on Start Collection to initiate the log collection: How to check the NCC version bundled with an AOS version Refer to the NCC Releases page https://confluence.eng.nutanix.com:8443/display/~shekar.nagesh/Upcoming+NCC+Releases for bundling information of NCC with AOS or Prism Central.
How to find the health-server leader in a cluster You can get the Cluster health leader using below command: For NCC version > 4.4: nutanix@cvm:~$ panacea_cli show_leaders | grep health_scheduler_master With NCC 3.10 or later and older than NCC 4.4.0: nutanix@cvm:~$ panacea_cli zeus_config_printer.py -t leaders_list If panacea_cli is not available: nutanix@cvm:~$ python ~/ncc/bin/health_client.py | awk '{print $5}' What is the impact of an NCC upgrade? Can I upgrade NCC without any downtime? Upgrading NCC restarts the cluster health service. It does not impact other services or User VMs, so it will not cause downtime. How to upgrade NCC on a cluster Use the following methods to upgrade NCC: Note: You can select the desired NCC version from the drop-down menu on the right to get the latest available documentation. To upgrade NCC using GUI in Prism Element, you can use LCM https://portal.nutanix.com/page/documents/details?targetId=NCC-Guide-NCC-v4_6:ncc-cluster-ncc-upgrade-lcm-t.html. For NCC Dark site upgrade, click here https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_6:top-lcm-darksite-ncc-direct-upload-t.html.To upgrade NCC using GUI in Prism Central, click here https://portal.nutanix.com/page/documents/details?targetId=NCC-Guide-NCC-v4_6:ncc-ncc-install-pc-t.html.To upgrade/install NCC using the installer file, click here https://portal.nutanix.com/page/documents/details?targetId=NCC-Guide-NCC-v4_6:ncc-ncc-install-t.html. Can NCC be auto-upgraded? Yes, when you click on “Perform Inventory” on the LCM page, you will get the option to enable “Auto Inventory”. Once you select the LCM Auto inventory option, you need to select the Auto Upgrade NCC checkbox as well in the LCM Inventory Menu. Now, whenever LCM performs an Auto Inventory, it will also check for the latest available NCC version and upgrade it automatically. What is the command to retrieve NCC install/upgrade history? You can view the NCC upgrade history stored in the below file on each CVM, including the timestamp: nutanix@cvm:~$ cat /home/nutanix/config/ncc_upgrade.history Is there a place where I can get details about all the health checks in an NCC version? You can get details about any particular version's NCC health checks in the NCC Plugin and Check Tracker Sheet https://docs.google.com/spreadsheets/d/1p9nREYjWU1kSc3sDHk6d9gz0z3Xqkttku-2qZstj5yo/edit?usp=sharing. How do I check the tentative release date for a specific NCC version? You can find the tentative release date on the Release Management page https://confluence.eng.nutanix.com:8443/display/RM/Release+Management+Home. When will NCC be available in LCM/1-click? NCC will be available in LCM/1-click after the adoption of the release by about 1000 clusters without any significant issues. Historical data indicates that this milestone is hit in about 2-3 weeks post the Initial Release posting on the Support Portal. More details about the NCC release process can be found on NCC Releases https://confluence.eng.nutanix.com:8443/display/~shekar.nagesh/NCC+Releases#9890bc94-734d-40fa-ae12-4faa718c2be5-143760835. What happens when 1-click/LCM is enabled for NCC? Once NCC is enabled for 1-click/LCM, the latest version of NCC will be available for upgrade on the cluster automatically under Upgrade Software and LCM Inventory. Is there a way to enable/disable any health checks on the cluster? 
Warning: Disabling an NCC Health Check is not advisable in a production cluster as it would result in the loss of crucial notifications about cluster issues related to the NCC check. Via GUI: 1. Go to the Health Tab in Prism. 2. Click on Actions. 3. Click on Manage Checks. 4. Select the check you want to disable. 5. Click on Turn Check Off. Via CLI: Find the NCC Check ID using the below command: nutanix@cvm:~$ ncli health-check list Disable the NCC Check using the below command: nutanix@cvm:~$ ncli health-check update id=14008 enable=false If Prism is not available or if you are getting the below error: Error: null Then use the below Python script to disable the NCC Check: nutanix@cvm:~/ncc/bin$ python plugin_config_cli.py --disable_check=True --check_id=<ENTER_CHECK_ID> What is the purpose of the NCC Debug mode, and when should it be used? The Debug mode enhances the logging capability of NCC, offering more comprehensive information to identify and troubleshoot underlying issues. While the standard configuration enables INFO level logs, Debug mode activates DEBUG level logs, encompassing payload details in the RPC calls and other vital data. How to run NCC in debug mode To enable logs under the debug flag for the 'health-server' process, add the following line in the 'config/cluster_health.gflags' file: '--debug=True' For this change to take effect, 'cluster_health' will have to be restarted on all nodes in the cluster. Can the debug logs be enabled for a single plugin run? It is possible to enable the debug logs for a single plugin run. For example, if the plugin that is being run is available at 'health_checks system_checks cvm_memory_usage_check' as per the check schema definition, the command that needs to be executed will be: nutanix@cvm:~$ ncc health_checks system_checks cvm_memory_usage_check --debug=True How to collect Hardware specific info using NCC You can use the below command to see the hardware information of the cluster: nutanix@cvm:~$ ncc hardware_info show_hardware_info Hardware information will be shown on the screen and will also be stored in the below path: /home/nutanix/data/hardware_logs/ How can the hardware changes, such as the Serial Number of the newly replaced DIMM on a Node, be identified? Run the below command to update hardware information in the cluster: nutanix@cvm:~$ ncc hardware_info update_hardware_info You can see the hardware changes in the below file for each CVM: nutanix@cvm:~$ cat ~/data/logs/sysstats/hardware_differences.INFO Is there a Visual guide of NCC check schema to understand how each thing appears on Prism Health Page UI? Refer to the following link to find the Visual Guide for Health UI Schema: Visual guide to check schema https://confluence.eng.nutanix.com:8443/display/SW/Visual+guide+to+check+schema How to check the NCC source code Cluster health framework code is available in Sourcegraph https://sourcegraph.ntnxdpro.com/cluster-health-master for the master repository. Click here https://sourcegraph.ntnxdpro.com/cluster-health-master/-/tree/cluster_health_framework/ncc/py/ncc/plugins/health_checks to view the code for individual NCC Health Checks. For a specific NCC version, check here https://sourcegraph.ntnxdpro.com/gerrit/[email protected]/-/tree/cluster_health_framework/ncc/py/ncc/plugins/health_checks . What are the logs to start troubleshooting a health check failure?
The relevant log files are: health_server.log – Logs related to the 'health_server' processncc-output.log – Logs related to the NCC command output run on cluster What relevant logs will be collected for debugging 'health-server' issues? The 'health-server' logs are in the '~/data/logs/' directory. The relevant log files are: health_server.log – Logs related to the 'health_server' processncc.log – Logs related to the 'NCC' processncc-output.log – Logs related to the NCC command output runs on the clustercluster_health.out – Logs related to the health server's service monitorcheck_cluster_health_service.log – Logs related to 'health_server' service monitorlogbayd.log – Logs related to logbay executionlogbay_service_monitor.out – Logs related to logbay service monitoratlc.out – Logs related to Alert Triggered Log Collectionatlc_service_monitor.out – Logs related to the ATLC service monitor I wanted to create a Knowledge base article for NCC Health Checks. Where can I find templates to write NCC KB or Alert KB? Find templates below: NCC KB Template https://confluence.eng.nutanix.com:8443/display/SW/NCC+KB+Template Alert KB Template https://confluence.eng.nutanix.com:8443/display/SW/Alert+KB+Template What is Logbay? Nutanix Cluster Check package includes Logbay as the go-to log collection tool. Logbay collects the logs from Controller VMs and hosts. It can mask sensitive information like IP addresses, container names, etc. After the task finishes, the log bundle is available for download from the Tasks dashboard. How to collect logs using Logbay Logs can be collected using the following methods: To collect the log bundle using Prism, click here https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_0:wc-logs-collection-ncc-web-console-wc-t.html.To collect the log bundle using CLI, click here https://portal.nutanix.com/kb/6691. How to collect Anonymised log bundle. What information gets masked when we use the anonymize option in Logbay? To Collect an anonymized log bundle, you can add the below parameter in the logbay command: --anonymize=1 Example: nutanix@cvm:~$ logbay collect --anonymize=1 Details that are anonymized: Is there a mechanism to collect sub-items inside a tag using Logbay to collect logs more granularly to fetch just the required logs and keep the aggregate bundle small in size? You can collect the granular logs using tag parameters only, but instead of a tag, you pass the item name as a parameter. nutanix@cvm:~$ logbay collect -t zookeeper_out_logs You can get all the items available using the below command: nutanix@cvm:~$ logbay list_items Is there any Cheat Sheet available for NCC and Logbay? Refer to the following links: NCC Cheatsheet https://confluence.eng.nutanix.com:8443/display/STK/NCC+Cheatsheet Logbay Cheatsheet https://confluence.eng.nutanix.com:8443/display/STK/Logbay+Cheatsheet [ { "Cluster Information": "CVM IPs", "Masked Characters": "cc.cc.cc." }, { "Cluster Information": "Hypervisor IPs", "Masked Characters": "hh.hh.hh." }, { "Cluster Information": "All other IPs", "Masked Characters": "xx.yy.zz." }, { "Cluster Information": "Cluster Name", "Masked Characters": "Cluster1" }, { "Cluster Information": "Protection Domain Name", "Masked Characters": "PD0, PD1, and so on" }, { "Cluster Information": "Container Name", "Masked Characters": "Container0, Container1, and so on." }, { "Cluster Information": "Hypervisor Hostname", "Masked Characters": "Hypervisor.hostname0, Hypervisor.hostname1, and so on." 
}, { "Cluster Information": "VM Names", "Masked Characters": "VM1, VM2 and so on" }, { "Cluster Information": "* Emails and Smart Cards", "Masked Characters": "####@####" }, { "Cluster Information": "* LDAP Server", "Masked Characters": "ldapURL####" }, { "Cluster Information": "* LDAP Server display name", "Masked Characters": "display_name: ####" }, { "Cluster Information": "* sAMAccountName", "Masked Characters": "####" }, { "Cluster Information": "* CN, OU, DC", "Masked Characters": "CN=####, OU=####, DC=####" } ]
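The logbay options shown in this FAQ can be combined in a single run. For example, to collect only one item, consolidate the bundle on a single CVM with the commonly used --aggregate=true option, and anonymize the output (a usage sketch):
nutanix@cvm:~$ logbay collect -t zookeeper_out_logs --aggregate=true --anonymize=1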
KB9583
UVM's hang during One-Click CVM Memory Upgrade on Hyper-V due to ungraceful shutdown of CVMs.
During a One-Click CVM Memory Upgrade on Hyper-V, CVMs are not shut down gracefully. As a result, the HA route is not injected on the Hyper-V host in a timely manner before the CVM shuts down, causing the UVMs on that host to hang during the One-Click CVM Memory Upgrade.
Symptoms: Customers may report UVMs hanging or crashing during a One-Click CVM Memory Upgrade on Hyper-V. Signatures: 1. The hyperv_log (hyperv_log.csv in the log bundle) shows loss of connectivity to the SMB share before the injection of the HA route is logged on the genesis master, as per below: hyperv_log ("host1") "4/21/2020 11:33:03 PM","102314","Error","30804","host1.customer.com",,"Microsoft-Windows-SmbClient/Connectivity","Microsoft-Windows-SMBClient","The network connection failed. genesis.out (on local CVM) 2020-04-30 07:10:22 INFO cluster_manager.py:4492 Successfully granted token to 10.63.30.34 reason rolling_restart It looks like a normal rolling restart started and the shutdown token was granted. However, the genesis.out log stopped after verifying the route. A rolling restart should have the following "Stopping service" logs, which are missing in this case: 2020-04-27 08:19:36 INFO cluster_manager.py:4559 Shutdown token details ip 10.63.18.93 time 1587975539.26 reason nos_upgrade 2. The Windows UVM Application Event log may show ESENT Event ID 508 write timeouts during the time delay between the CVM being shut down and the route being injected on the host, as per below: (Note: this VM was running on host3...see time-frame above)
Upgrade AOS to 5.10.11+, 5.18+, 5.17.1+ or 5.15.2 prior to performing a One-Click Memory upgrade on Hyper-V, or perform a manual CVM memory upgrade. Reason for this: 2020-04-30 07:10:23 INFO hypervisor_utils.py:94 Ran memory update cmd:/home/nutanix/cluster/bin/update_cvm_hyperv -m 24576 The update_cvm_hyperv script is called to upgrade the CVM memory size. However, this script does not include a CVM status monitoring method and forcefully shuts down the CVM using a PowerShell script from Hyper-V. In the fixed version, the CVM status check is included in the PowerShell script, and the CVM memory update is performed only after the CVM shutdown is confirmed.
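After upgrading to a fixed AOS version and completing the memory update, you can confirm that each CVM actually booted with the new memory allocation. A minimal sketch using standard CVM tooling (the ~24 GB value corresponds to the '-m 24576' example above):
nutanix@cvm$ allssh "grep MemTotal /proc/meminfo"    # each CVM should report roughly the configured size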