IS 8.0.2 Update 3 on RHEL9 Platform (2024)

Update ID: UPD350425

Version: 8.0.2.2200

Platform: Linux

Release date: 2024-08-13

Summary

IS 8.0.2 Update 3 on RHEL9 Platform

Description

* * * READ ME * * *
* * * InfoScale 8.0.2 * * *
* * * Patch 2200 * * *
Patch Date: 2024-08-12

This document provides the following information:

* PATCH NAME
* OPERATING SYSTEMS SUPPORTED BY THE PATCH
* PACKAGES AFFECTED BY THE PATCH
* BASE PRODUCT VERSIONS FOR THE PATCH
* SUMMARY OF INCIDENTS FIXED BY THE PATCH
* DETAILS OF INCIDENTS FIXED BY THE PATCH
* INSTALLATION PRE-REQUISITES
* INSTALLING THE PATCH
* REMOVING THE PATCH
* KNOWN ISSUES

PATCH NAME
----------
InfoScale 8.0.2 Patch 2200

OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL9 x86-64

PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTScavf
VRTScps
VRTSdbac
VRTSdbed
VRTSfsadv
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSpython
VRTSrest
VRTSsfcpi
VRTSsfmh
VRTSspt
VRTSvbs
VRTSvcs
VRTSvcsag
VRTSvcsea
VRTSveki
VRTSvlic
VRTSvxfen
VRTSvxfs
VRTSvxvm

BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
* InfoScale Availability 8.0.2
* InfoScale Enterprise 8.0.2
* InfoScale Foundation 8.0.2
* InfoScale Storage 8.0.2

SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxfs-8.0.2.2100
* 4144078 (4142349) Using sendfile() on a VxFS file system might result in a hang.
* 4162063 (4136858) Added a basic sanity check for directory inodes in the ncheck codepath.
* 4162064 (4121580) WORM flag can get set on a checkpoint that is mounted in RW mode.
* 4162065 (4158238) vxfsrecover command exits with an error if the previous invocation terminated abnormally.
* 4162066 (4156650) Older checkpoints remain if SecureFS is recovered from a newer checkpoint.
* 4162220 (4099775) System might panic if ownership change operations are done on a quota-enabled file system.
* 4163183 (4158381) Server panicked with "Kernel panic - not syncing: Fatal exception".
* 4164090 (4163498) Veritas File System df command logging doesn't have sufficient permission while validating the tunable configuration.
* 4164270 (4156384) File system metadata can get corrupted due to a missing transaction in the intent log.
* 4165966 (4165967) mount and fsck commands face a few SELinux permission denials.
* 4166501 (4163862) Mutex lock contention is observed in a cluster file system under a massive file creation workload.
* 4166502 (4163127) Spinlock contention observed during inode allocation for a massive file creation operation on a cluster file system.
* 4166503 (4162810) Spikes in CPU usage of glm threads were observed in the output of the "top" command during a massive file creation workload on a cluster file system.
* 4168357 (4076646) Unprivileged memory can get corrupted by VxFS in case the inode size is 512 bytes and the inode's attribute resides in its immediate area.
* 4171307 (4171308) VxFS support for RHEL-9.4.
* 4172054 (4162316) FS migration to VxFS might hit a kernel panic if the CrowdStrike Falcon sensor is running.
* 4172753 (4173685) fsck command faces a few SELinux permission denials.
* 4173064 (4163337) Intermittent df slowness seen across the cluster.
* 4174242 (4174538) mount and fsck commands face a few SELinux permission denials.
* 4174244 (4174539) fsck command faces a few SELinux permission denials.

Patch ID: VRTSvxfs-8.0.2.1700
* 4159284 (4145203) Invoking veki through systemctl inside the vxfs-startup script.
* 4159938 (4155961) Panic in vx_rwlock during force unmount.
* 4160325 (4160740) Command fsck faces a few SELinux permission denials.
* 4160326 (4160742) mount and fsck commands face a few SELinux permission denials.
* 4161120 (4161121) Non-root user is unable to access log files under the /var/log/vx directory.
Patch ID: VRTSvxfs-8.0.2.1600
* 4157410 (4157409) Security vulnerabilities exist in the current versions of third-party components, sqlite and expat, used by VxFS.

Patch ID: VRTSvxfs-8.0.2.1500
* 4119626 (4119627) Command fsck faces a few SELinux permission denials.
* 4138668 (4138669) VxFS support for RHEL9.3.
* 4146580 (4141876) Parallel invocation of the vxschadm command might delete the previous SecureFS configuration.
* 4148734 (4148732) get_dg_vol_names is leaking memory.
* 4150065 (4149581) VxFS Secure clock is running behind expected time by a huge margin.
* 4152206 (4152205) VxFS support for RHEL 9.3 minor kernel 5.14.0-362.18.1.

Patch ID: VRTSvxfs-8.0.2.1400
* 4141666 (4141665) Security vulnerabilities exist in the Zlib third-party components used by VxFS.

Patch ID: VRTSvxfs-8.0.2.1200
* 4121230 (4119990) Recovery stuck while flushing and invalidating the buffers.
* 4124924 (4118556) VxFS support for RHEL9.2.
* 4125870 (4120729) Incorrect file replication (VFR) job status at the VFR target site while replication is in running state at the source.
* 4125871 (4114176) After failover, job sync fails with error "Device or resource busy".
* 4125873 (4108955) VFR job hangs on the source if thread creation fails on the target.
* 4125875 (4112931) vxfsrepld consumes a lot of virtual memory when it has been running for a long time.
* 4125878 (4096267) Veritas File Replication jobs might fail when a large number of jobs run in parallel.
* 4126104 (4122331) Enhancement in VxFS error messages which are logged while marking a bitmap or inode as "BAD".
* 4127509 (4107015) When finding a VxFS module with the same version as the kernel version, the kernel-build number needs to be considered.
* 4127510 (4107777) If a VxFS module with the same version as the kernel version is not present, the kernel build number needs to be considered to calculate the best-fit module.
* 4127594 (4126957) System crashes with VxFS stack.
* 4127621 (4127623) VxFS support for RHEL9.0 minor kernel 5.14.0-70.36.1.
* 4127720 (4127719) Added fallback logic in the fsdb binary and made changes to the fstyp binary such that it now dumps the uuid.
* 4127785 (4127784) Earlier, the fsppadm binary was just giving a warning in case of an invalid UID or GID number.
After this change providing invalid UID / GID e.g."1ABC" (UID/GID are always numbers) will result into error and parsing will stop.* 4128249 (4119965) VxFS mount binary failed to mount VxFS with SELinux context.* 4129494 (4129495) Kernel panic observed in internal VxFS LM conformance testing.* 4129681 (4129680) Generate and add changelog in VxFS rpm* 4131312 (4128895) On servers with SELinux enabled, VxFS mount command may throw error.Patch ID: VRTSspt-8.0.2.1300* 4139975 (4149462) New script is provided list_missing_incidents.py which compares changelogs of rpm and lists missing incidents in new version.* 4146957 (4149448) New script is provided check_incident_inchangelog.py which will check if incident abstract is present in changelog.Patch ID: VRTSrest-3.0.10* 4124960 (4130028) GET apis of vm and filesystem were failing because of datatype mismatch in spec and original output, if the client generates the client code from specs* 4124963 (4127170) While modifying the system list for service group when dependency is there, the api would fail* 4124964 (4127167) -force option is used by default in delete of rvg and a new -online option is used in patch of rvg* 4124966 (4127171) While getting excluded disks on Systems API we were getting nodelist instead of nodename in href* 4124968 (4127168) In GET request on rvgs all datavolumes in RVGs not listed correctly* 4125162 (4127169) Get disks api failing when cvm is down on any nodePatch ID: VRTSfsadv-8.0.2.1800* 4162373 (4130255) Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.Patch ID: VRTSfsadv-8.0.2.1500* 4153164 (4088024) Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.Patch ID: VRTSpython-3.9.16 P05* 4169026 (4169025) VRTSpython package version upgrade from 3.9.16.4 to 3.9.16.5.Patch ID: VRTSpython-3.9.16.4* 4161479 (4161477) Upgrading various vulnerable third-party modules in VRTSpython to fix exploitable security issues.Patch ID: VRTSsfcpi-8.0.2.1300* 4006619 (4015976) On a Solaris system, patch upgrade of InfoScale fails with an error in the alternate boot environment.* 4008502 (4008744) Rolling upgrade using response file fails if one or more operating system packages are missing on the cluster nodes.* 4010025 (4010024) While upgrading from InfoScale 7.4.2 to 7.4.2.xxx, CPI installs the packages from the 7.4.2.xxx patch only and not the base packages of 7.4.2 GA.* 4012032 (4012031) Installer does not upgrade VRTSvxfs and VRTSodm insidenon-global zones* 4013446 (4008578) Even though a cluster node may have a fully qualified hostname, the product installer trims this value and uses the shorter hostname for the cluster configuration.* 4014920 (4015139) Product installer fails to install InfoScale on RHEL 8 systems if IPv6 addresses are provided for the system list.* 4014985 (4014983) The product installer does not display a warning at the time of the pre-upgrade check to suggest that you will need to provide telemetry details later on if the cluster nodes are not registered with TES or VCR.* 4016078 (4007633) The product installer fails to synchronize the system clocks with the NTP server.* 4020090 (4022920) The product installer fails to install InfoScale 7.4.2 on SLES 15 SP2.* 4021517 (4021515) On SLES 12 SP4 and later systems, the installer fails to fetch the media speed of the network interfaces.* 4022492 (4022640) Installer fails to complete installation after it automatically downloads a required support patch from SORT that contains a VRTSvlic package.* 4027741 
(4027759) The product installer installs lower versions packages if multiple patch bundles are specified using the patch path options in the incorrect order.* 4033243 (4033242) When a responsefile is used, the product installer fails to add the required VCS users.* 4033688 (4033687) The InfoScale product installer deletes any existing cluster configuration files during uninstallation.* 4034357 (4033988) The product installer does not allow the installation of an Infoscale patch bundle if a more recent version of any package in the bundle is already installed on the system.* 4038945 (4033957) The VRTSveki and the VRTSvxfs RPMs fail to upgrade when using yum.* 4040836 (4040833) After an InfoScale upgrade to version 7.4.2 Update 2 on Solaris, the latest vxfs module is not loaded.* 4041770 (4041816) On RHEL 8.4, the system panics after the InfoScale stack starts.* 4042590 (4042591) On RHEL 8.4, installer disable IMF for the CFSMount and the Mount agents.* 4042890 (4043075) After performing a phased upgrade of InfoScale, the product installer fails to update the types.cf file.* 4043366 (4042674) The product installer does not honor the single-node mode of a cluster and restarts it in the multi-mode if 'vcs_allowcomms = 1'.* 4043372 (4043371) If SecureBoot is enabled on the system, the product installer fails to install some InfoScale RPMs (VRTSvxvm, VRTSaslapm, VRTScavf).* 4043892 (4043890) The product installer incorrectly prompts users to install deprecated OS RPMs for LLT over RDMA configurations.* 4045881 (4043751) The VRTScps RPM installation may fail on SLES systems.* 4046196 (4067426) Package uninstallation during a rolling upgrade fails if non-global zones are under VCS service group control* 4050467 (4050465) The InfoScale product installer fails to create VCS users for non-secure clusters.* 4052860 (4052859) The InfoScale product installer needs to install the VRTSpython package on AIX and Solaris.* 4052867 (4052866) The InfoScale product installer needs to run the CollectorService process during a fresh configuration of VCS.* 4053752 (4053753) The licensing service is upgraded to allow an InfoScale server to be registered with a Veritas Usage Insights server.* 4053875 (4053635) The vxconfigd service fails to start during the add node operation when the InfoScale product installer is used to perform a fresh configuration on Linux.* 4053876 (4053638) The installer prompt to mount shared volumes during an add node operation does not advise that the corresponding CFSMount entries will be updated in main.cf.* 4054322 (4054460) InfoScale installer fails to start the GAB service with the -start option on Solaris.* 4054913 (4054912) During upgrade, the product installer fails to stop the vxfs service.* 4055055 (4055242) InfoScale installer fails to install a patch on Solaris.* 4066237 (4057908) The product installer fails to configure passwordless SSH communication for remote Solaris systems.* 4067433 (4067432) While upgrading to 7.4.2 Update 2 , VRTSvlic patch package installation fails.* 4070908 (4071690) The installer does perform an Infoscale configuration without registering the Infoscale server to an edge server.* 4079500 (4079853) Patch installer flashes a false error message with -precheck option.* 4079916 (4079922) Installer fails to complete installation after it automatically downloads a required support patch from SORT that contains a VRTSperl package.* 4080100 (4080098) Installer fails to complete the CP server configuration.* 4081964 (4081963) VRTSvxfs patch fails to install on 
Linux platforms.* 4084977 (4084975) Installer fails to complete the CP server configuration.* 4085612 (4087319) On RHEL 7.4.2, Installer fails to uninstall VxVM while upgrading from 7.4.2 to 8.0U1.* 4086047 (4086045) When Infoscale cluster is reconfigured, LLT, GAB, VXFEN services fail to start after reboot.* 4086570 (4076583) On a Solaris system, the InfoScale installer runs set/unset publisher several times but does not disable the publisher. The deployment process slows down as a result.* 4086624 (4086623) Installer fails to complete the CP server configuration.* 4087148 (4088698) CPI installer tries to download a must-have patch whose version is lower than the version specified in media path.* 4087809 (4086533) VRTSfsadv pkg fails to upgrade from 7.4.2 U4 to 8.0 U1 while using yum upgrade.* 4089657 (4089934) Installer does not update the '/opt/VRTSvcs/conf/config/types.cf' file after a VRTSvcs patch upgrade.* 4089815 (4089867) On Linux, Installer fails to start fsdedupschd service.* 4092407 (4092408) On a Linux platform, CPI installer fails to correctly identify status of vxfs_replication service.Patch ID: VRTSsfmh-8.0.2.500* 4160665 (4160661) sfmh for IS 7.4.2 U7Patch ID: -4.01.802.002* 4173483 (4173483) Providing Patch Release for VRTSvlicPatch ID: VRTSvcsea-8.0.2.1600* 4088599 (4088595) hapdbmigrate utility fails to online the oracle service groupPatch ID: VRTSvcsea-8.0.2.1400* 4058775 (4073508) Oracle virtual fire-drill is failing.Patch ID: VRTSvcsag-8.0.2.1600* 4149272 (4164374) VCS DNS Agent monitor gets timeout if multiple DNS servers are added as Stealth Masters and if few of them get hung.* 4156630 (4156628) Getting message "Uninitialized value $version in string eq at /opt/VRTSvcs/bin/NIC/monitor line 317"constantly.* 4162102 (4163518) Apache (httpd) agent hangs on reboot due to ordering dependency deadlock between vcs and httpd.* 4162659 (4162658) LVMVolumeGroup resource fails to offline/clean in cloud environment after path failure.* 4162753 (4142040) While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs.conf/config/types.cf' file on Veritas Cluster Server(VCS) might be incorrectly updated.Patch ID: VRTSvcsag-8.0.2.1500* 4157581 (4157580) There are security vulnerabilities present in the current version of third-party component, OpenSSL, that is utilized by VCS.Patch ID: VRTSvcsag-8.0.2.1400* 4114880 (4152700) When Private DNS Zone resource ID is passed, the AzureDNSZone Agent returns an error saying that the resource cannot be found.* 4135534 (4152812) AWS EBSVol agent takes long time to perform online and offline operations on resources.* 4137215 (4094539) Agent resource monitor not parsing process name correctly.* 4137376 (4122001) NIC resource remain online after unplug network cable on ESXi server.* 4137377 (4113151) VMwareDisksAgent reports resource online before VMware disk to be online is present into vxvm/dmp database.* 4137602 (4121270) EBSvol agent error in attach disk : RHEL 7.9 + Infoscale 8.0 on AWS instance type c6i.large with NVME devices.* 4137618 (4152886) AWSIP agent fails to bring OverlayIP resources online and offline on the instances in a shared VPC.* 4143918 (4152815) AWS EBS Volume in-use with other AWS instance is getting used by cluster nodes through AWS EBSVol agent.Patch ID: VRTSvcsag-8.0.2.1200* 4130206 (4127320) The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.Patch ID: VRTScps-8.0.2.1600* 4152885 (4152882) vxcpserv process received SIGABRT signal due to invalid pointer access in 
acvsc_lib while writing logs.Patch ID: VRTScps-8.0.2.1500* 4161971 (4161970) Security vulnerabilities exists in Sqlite third-party components used by VCS.Patch ID: VRTSdbed-8.0.2.1200* 4163136 (4136146) Update security component libraries.Patch ID: VRTSdbed-8.0.2.1100* 4153061 (4092588) SFAE failed to start with systemd.Patch ID: VRTSvbs-8.0.2.1100* 4163135 (4136146) Update security component libraries.Patch ID: VRTSvcs-8.0.2.1600* 4162755 (4136359) When upgrading InfoScale with latest Public Patch Bundle or VRTSvcsag package, types.cf is updated.Patch ID: VRTSvcs-8.0.2.1500* 4157581 (4157580) There are security vulnerabilities present in the current version of third-party component, OpenSSL, that is utilized by VCS.Patch ID: VRTSvcs-8.0.2.1400* 4133677 (4129493) Tenable security scan kills the Notifier resource.Patch ID: VRTSvcs-8.0.2.1200* 4113391 (4124956) GCO configuration with hostname is not working.Patch ID: VRTSdbac-8.0.2.1400* 4161967 (4157901) vcsmmconfig.log file permission is hardcoded, but permission should be set as per EO-tunable VCS_ENABLE_PUBSEC_LOG_PERM.* 4164415 (4164328) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).Patch ID: VRTSdbac-8.0.2.1300* 4137328 (4137325) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).Patch ID: VRTSdbac-8.0.2.1100* 4124670 (4122405) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).* 4125119 (4125118) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).Patch ID: VRTSamf-8.0.2.1600* 4161436 (4161644) System panics when AMF enabled and there are Process/Application resources.* 4162305 (4168084) AMF caused kernel BUG: scheduling while atomic when umount file system.* 4164504 (4164328) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).Patch ID: VRTSamf-8.0.2.1400* 4137600 (4136003) A cluster node panics when the AMF module overruns internal buffer to analyze arguments of an executable binary.* 4153059 (4137325) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).Patch ID: VRTSamf-8.0.2.1200* 4132379 (4122405) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).* 4132620 (4125118) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).Patch ID: VRTSveki-8.0.2.1600* 4164290 (4164328) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).Patch ID: VRTSveki-8.0.2.1400* 4135795 (4135683) Enhancing debugging capability of VRTSveki package installation* 4140468 (4152368) Some incidents do not appear in changelog because their cross-references are not properly processedPatch ID: VRTSveki-8.0.2.1200* 4120300 (4110457) Veki packaging were failing due to dependency* 4124135 (4122405) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).* 4130816 (4130815) Generate and add changelog in VEKI rpm* 4132626 (4125118) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).Patch ID: VRTSveki-8.0.2.1100* 4118568 (4110457) Veki packaging were failing due to dependencyPatch ID: VRTSvxfen-8.0.2.1100* 4156076 (4156075) EO changes file permission tunable* 4164329 (4164328) Veritas Infoscale Availability does not support Red 
Hat Enterprise Linux 9 Update 4 (RHEL9.4).* 4169032 (4166666) Failed to configure Disk based fencing on rdm mapped devices from KVM host to kvm guestPatch ID: VRTSvxfen-8.0.2.1400* 4137326 (4137325) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).Patch ID: VRTSvxfen-8.0.2.1200* 4124086 (4124084) Security vulnerabilities exist in the Curl third-party components used by VCS.* 4124644 (4122405) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).* 4125891 (4113847) Support for even number of coordination disks for CVM-based disk-based fencing* 4125895 (4108561) Reading vxfen reservation not working* 4132625 (4125118) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).Patch ID: VRTSllt-8.0.2.1600* 4162744 (4139781) Unexpected or corrupted skb, memory type missing in buffer header.* 4173093 (4164328) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).Patch ID: VRTSllt-8.0.2.1400* 4132209 (4124759) Panic happened with llt_ioship_recv on a server running in AWS.* 4137611 (4135825) Once root file system is full during llt start, llt module failing to load forever.* 4153057 (4137325) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).Patch ID: VRTSllt-8.0.2.1200* 4124138 (4122405) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).* 4128886 (4128887) During rmmod of llt package, warning trace is observed on kernel versions higher than 5.14 on RHEL9 and SLES15.* 4132621 (4125118) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).Patch ID: VRTSgab-8.0.2.1600* 4173084 (4164328) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).Patch ID: VRTSgab-8.0.2.1400* 4153058 (4137325) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).Patch ID: VRTSgab-8.0.2.1200* 4132378 (4122405) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).* 4132623 (4125118) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).Patch ID: VRTSvxvm-8.0.2.1700* 4153377 (4152445) Replication failed to start due to vxnetd threads not running* 4153874 (4010288) [Cosmote][NBFS]ECV:DR:Replace Node on Primary failed with error"Rebuild data for faulted node failed"* 4155091 (4118510) Volume manager tunable to control log file permissions* 4157012 (4145715) Secondary SRL log error while reading data from log* 4157643 (4159198) vxfmrmap coredump.* 4158662 (4159200) AIX 7.3 - Script error while installing VXVM Patch -"VRTSvxvm.post_u[289]: -q: not found"* 4158920 (4159680) set_proc_oom_score: not found while /usr/lib/vxvm/bin/vxconfigbackupd gets executed* 4164944 (4165970) LOG_FILE_PERM & perm_change command not found* 4164947 (4165971) /var/tmp/rpm-tmp.Kl3ycu: line 657: [: missing `]'* 4166882 (4161852) [BankOfAmerica][Infoscale][Upgrade] Post InfoScale upgrade, command "vxdg upgrade" succeeds but generates apparent error "RLINK is not encypted"* 4172377 (4172033) Data corruption due to stale DRL agenodes* 4173722 (4158303) Panic at dmpsvc_da_analyze_error+417* 4174239 (4171979) Panic occurs with message "kernel BUG at fs/inode.c:1578!"Patch ID: VRTSaslapm 8.0.2.1700* 4155091 (4118510) Volume 
manager tunable to control log file permissionsPatch ID: VRTSvxvm-8.0.2.1600* 4128883 (4112687) DLE (Dynamic Lun Expansion) of single path GPT disk may corrupt disk public region.* 4137508 (4066310) Added BLK-MQ feature for DMP driver* 4137995 (4117350) Import operation on disk group created on Hitachi ShadowImage (SI) disks is failing .* 4143558 (4141890) TUTIL0 field may not get cleared sometimes after cluster reboot.* 4153566 (4090410) VVR secondary node panics during replication.* 4153570 (4134305) Collecting ilock stats for admin SIO causes buffer overrun.* 4153597 (4146424) CVM Failed to join after power off and Power on from ILO* 4154104 (4142772) Error mask NM_ERR_DCM_ACTIVE on rlink may not be cleared resulting in the rlink being unable to get into DCM again.* 4154107 (3995831) System hung: A large number of SIOs got queued in FMR.* 4155719 (4154921) system is stuck in zio_wait() in FC-IOV environment after reboot the primary control domain when dmp_native_support is on.* 4158517 (4159199) AIX 7.3 TL2 - Memory fault(coredump) while running "./scripts/admin/vxtune/vxdefault.tc"* 4161646 (4149528) Cluster wide hang after faulting nodes one by one* 4162053 (4132221) Supportability requirement for easier path link to dmpdr utility* 4162055 (4116024) machine panic due to access illegal address.* 4162058 (4046560) vxconfigd aborts on Solaris if device's hardware path is too long.* 4162665 (4162664) VxVM support on Rocky Linux 8 and 9.* 4162917 (4139166) Enable VVR Bunker feature for shared diskgroups.* 4162966 (4146885) Restarting syncrvg after termination will start sync from start* 4164114 (4162873) disk reclaim is slow.* 4164250 (4154121) add a new tunable use_hw_replicatedev to enable Volume Manager to import the hardware replicated disk group.* 4164252 (4159403) add clearclone option automatically when import the hardware replicated disk group.* 4164254 (4160883) clone_flag was set on srdf-r1 disks after reboot.* 4165431 (4160809) [Cosmote][NBFS][media-only]CVM Failed to join after reboot operation from GUI* 4165889 (4165158) Disk associated with CATLOG showing in RECOVER State after rebooting nodes.* 4166559 (4168846) Support VxVM on RHEL9.4* 4166881 (4164734) Disable support for TLS1.1Patch ID: VRTSaslapm 8.0.2.1600* 4169012 (4169016) Support ASL-APM on RHEL9.4Patch ID: VRTSvxvm-8.0.2.1400* 4124889 (4090828) Enhancement to track plex att/recovery data synced in past to debug corruption* 4129765 (4111978) Replication failed to start due to vxnetd threads not running on secondary site.* 4130858 (4128351) System hung observed when switching log owner.* 4130861 (4122061) Observing hung after resync operation, vxconfigd was waiting for slaves' response* 4132775 (4132774) VxVM support on SLES15 SP5* 4133930 (4100646) Recoveries of dcl objects not happening due to ATT, RELOCATE flags are set on DCL subdisks* 4133946 (3972344) vxrecover returns an error - 'ERROR V-5-1-11150' Volume <vol_name> not found'* 4135127 (4134023) vxconfigrestore(Diskgroup configuration restoration) for H/W Replicated diskgroup failed.* 4135388 (4131202) In VVR environment, changeip command may fail.* 4136419 (4089696) In FSS environment, with DCO log attached to VVR SRL volume, reboot of the cluster may result into panic on the CVM master node.* 4136428 (4131449) In CVR environment, the restriction of four RVGs per diskgroup has been removed.* 4136429 (4077944) In VVR environment, application I/O operation may get hung.* 4136802 (4136751) Added selinux permissions for fcontext: aide_t, support_t, mdadm_t* 
4136859 (4117568) vradmind dumps core due to invalid memory access.* 4136866 (4090476) SRL is not draining to secondary.* 4136868 (4120068) A standard disk was added to a cloned diskgroup successfully which is not expected.* 4136870 (4117957) During a phased reboot of a two node Veritas Access cluster, mounts would hang.* 4137174 (4081740) vxdg flush command slow due to too many luns needlessly access /proc/partitions.* 4137175 (4124223) Core dump is generated for vxconfigd in TC execution.* 4137508 (4066310) Added BLK-MQ feature for DMP driver* 4137615 (4087628) CVM goes into faulted state when slave node of primary is rebooted .* 4137630 (4139701) VxVM support on RHEL 9.3* 4137753 (4128271) In CVR environment, a node is not able to join the CVM cluster if RVG recovery is taking place.* 4137757 (4136458) In CVR environment, the DCM resync may hang with 0% sync remaining.* 4137986 (4133793) vxsnap restore failed with DCO IO errors during the operation when run in loop for multiple VxVM volumes.* 4138051 (4090943) VVR Primary RLink cannot connect as secondary reports SRL log is full.* 4138069 (4139703) Panic due to wrong use of OS API (HUNZA issue)* 4138075 (4129873) In CVR environment, if CVM slave node is acting as logowner, then I/Os may hang when data volume is grown.* 4138101 (4114867) systemd-udevd[2224]: invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 103 ('D')* 4138107 (4065490) VxVM udev rules consumes more CPU and appears in "top" output when system has thousands of storage devices attached.* 4138224 (4129489) With VxVM installed in AWS cloud environment, disk devices may intermittently disappear from 'vxdisk list' output.* 4138236 (4134069) VVR replication was not using VxFS SmartMove feature if filesystem was not mounted on RVG Logowner node.* 4138237 (4113240) In CVR environment, with hostname binding configured, Rlink on VVR secondary may have incorrect VVR primary IP.* 4138251 (4132799) No detailed error messages while joining CVM fail.* 4138348 (4121564) Memory leak for volcred_t could be observed in vxio.* 4138537 (4098144) vxtask list shows the parent process without any sub-tasks which never progresses for SRL volume* 4138538 (4085404) Huge perf drop after Veritas Volume Replicator (VVR) entered Data Change Map (DCM) mode, when a large size of Storage Replicator Log (SRL) is configured.* 4140598 (4141590) Some incidents do not appear in changelog because their cross-references are not properly processed* 4143580 (4142054) primary master got panicked with ted assert during the run.* 4143857 (4130393) vxencryptd crashed repeatedly due to segfault.* 4145064 (4145063) unknown symbol message logged in syslogs while inserting vxio module.* 4146550 (4108235) System wide hang due to memory leak in VVR vxio kernel module* 4149499 (4149498) Getting unsupported .ko files not found warning while upgrading VM packages.* 4150099 (4150098) vxconfigd goes down after few VxVM operations and System file system becomes read-only .* 4150459 (4150160) Panic due to less memory allocation than requiredPatch ID: VRTSaslapm 8.0.2.1400* 4137995 (4117350) Import operation on disk group created on Hitachi ShadowImage (SI) disks is failing .* 4153119 (4153120) ASLAPM rpm Support on RHEL9.3Patch ID: VRTSvxvm-8.0.2.1200* 4119267 (4113582) In VVR environments, reboot on VVR primary nodes results in RVG going into passthru mode.* 4123065 (4113138) 'vradmin repstatus' invoked on the secondary site shows stale information* 4123069 (4116609) VVR 
Secondary logowner change is not reflected with virtual hostnames.* 4123080 (4111789) VVR does not utilize the network provisioned for it.* 4124291 (4111254) vradmind dumps core while associating a rlink to rvg because of NULL pointer reference.* 4124794 (4114952) With virtual hostnames, pause replication operation fails.* 4124796 (4108913) Vradmind dumps core because of memory corruption.* 4125003 (4118478) VxVM Support on RHEL9.2* 4125392 (4114193) 'vradmin repstatus' incorrectly shows replication status as inconsistent.* 4125811 (4090772) vxconfigd/vx commands hung if fdisk opened secondary volume and secondary logowner panic'd* 4128127 (4132265) Machine attached with NVMe devices may panic.* 4128835 (4127555) Unable to configure replication using diskgroup id.* 4129664 (4129663) Generate and add changelog in vxvm rpm* 4129766 (4128380) With virtual hostnames, 'vradmin resync' command may fail if invoked from DR site.* 4130402 (4107801) /dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.* 4130827 (4098391) Continuous system crash is observed during VxVM installation.* 4130947 (4124725) With virtual hostnames, 'vradmin delpri' command may hang.Patch ID: VRTSaslapm 8.0.2.1200* 4132969 (4122583) ASLAPM rpm Support on RHEL9.2* 4133009 (4133010) Generate and add changelog in aslapm rpmPatch ID: VRTSvxvm-8.0.2.1100* 4125322 (4119950) Security vulnerabilities exists in third party components [curl and libxml].Patch ID: VRTSaslapm 8.0.2.1100* 4125322 (4119950) Security vulnerabilities exists in third party components [curl and libxml].Patch ID: VRTScavf-8.0.2.2100* 4162683 (4153873) Deport decision was being dependent on local system only not on all systems in the clusterPatch ID: VRTScavf-8.0.2.1500* 4133969 (4074274) DR test and failover activity might not succeed for hardware-replicated disk groups and EMC SRDF hardware-replicated disk groups are failing with PR operation failed message.* 4137640 (4088479) The EMC SRDF managed diskgroup import failed with below error. 
This failure is specific to EMC storage only on AIX with Fencing.

Patch ID: VRTSgms-8.0.2.1900
* 4166491 (4166490) GMS support for RHEL-9.4.

Patch ID: VRTSgms-8.0.2.1500
* 4138412 (4138416) GMS support for RHEL9.3.
* 4152214 (4152213) GMS support for RHEL 9.3 minor kernel 5.14.0-362.18.1.

Patch ID: VRTSgms-8.0.2.1200
* 4124915 (4118303) GMS support for RHEL9.2.
* 4126266 (4125932) No symbol version warning for ki_get_boot in dmesg after SFCFSHA configuration.
* 4127527 (4107112) When finding a GMS module with the same version as the kernel version, the kernel-build number needs to be considered.
* 4127528 (4107753) If a GMS module with the same version as the kernel version is not present, the kernel build number needs to be considered to calculate the best-fit module.
* 4127628 (4127629) GMS support for RHEL9.0 minor kernel 5.14.0-70.36.1.
* 4129708 (4129707) Generate and add changelog in GMS rpm.

Patch ID: VRTSglm-8.0.2.2100
* 4166489 (4166488) GLM support for RHEL-9.4.
* 4174551 (4171246) vxglm status shows active even if it fails to load the module.

Patch ID: VRTSglm-8.0.2.1500
* 4138274 (4126298) System may panic due to "unable to handle kernel paging request" and memory corruption could happen.
* 4138407 (4138408) GLM support for RHEL9.3.
* 4152212 (4152211) GLM support for RHEL 9.3 minor kernel 5.14.0-362.18.1.

Patch ID: VRTSglm-8.0.2.1200
* 4124912 (4118297) GLM support for RHEL9.2.
* 4127524 (4107114) When finding a GLM module with the same version as the kernel version, the kernel-build number needs to be considered.
* 4127525 (4107754) If a GLM module with the same version as the kernel version is not present, the kernel build number needs to be considered to calculate the best-fit module.
* 4127626 (4127627) GLM support for RHEL9.0 minor kernel 5.14.0-70.36.1.
* 4129715 (4129714) Generate and add changelog in GLM rpm.

Patch ID: VRTSodm-8.0.2.1900
* 4166495 (4166494) ODM support for RHEL-9.4.

Patch ID: VRTSodm-8.0.2.1700
* 4154116 (4118154) System may panic in simple_unlock_mem() when errcheckdetail is enabled.
* 4159290 (4159291) ODM module is not getting loaded with newly rebuilt VxFS.

Patch ID: VRTSodm-8.0.2.1500
* 4138419 (4138477) ODM support for RHEL9.3.
* 4152210 (4152208) ODM support for RHEL 9.3 minor kernel 5.14.0-362.18.1.

Patch ID: VRTSodm-8.0.2.1400
* 4144274 (4144269) After installing VRTSvxfs-8.0.2.1400, ODM fails to start.

Patch ID: VRTSodm-8.0.2.1200
* 4124928 (4118466) ODM support for RHEL9.2.
* 4126262 (4126256) No symbol version warning for VEKI's symbol in dmesg after SFCFSHA configuration.
* 4127518 (4107017) When finding an ODM module with the same version as the kernel version, the kernel-build number needs to be considered.
* 4127519 (4107778) If an ODM module with the same version as the kernel version is not present, the kernel build number needs to be considered to calculate the best-fit module.
* 4127624 (4127625) ODM support for RHEL9.0 minor kernel 5.14.0-70.36.1.
* 4129838 (4129837) Generate and add changelog in ODM rpm.

DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxfs-8.0.2.2100

* 4144078 (Tracking ID: 4142349)

SYMPTOM:
Using sendfile() on a VxFS file system might result in a hang with the following stack trace.
schedule()
mutex_lock()
vx_splice_to_pipe()
vx_splice_read()
splice_file_to_pipe()
do_sendfile()
do_syscall()

DESCRIPTION:
VxFS code erroneously tries to take the pipe lock twice in the splice read code path, which might result in a hang when the sendfile() system call is used.

RESOLUTION:
VxFS now uses generic_file_splice_read() instead of its own implementation for splice read.
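For reference, the following minimal C sketch shows the kind of user-space workload described above: a plain sendfile(2) copy whose input file lives on a VxFS mount. The file paths are placeholders and the sketch is illustrative only (not VxFS source); on a patched system the call simply completes.

/* Minimal sendfile() workload sketch; /vxfs/src.dat is a placeholder path
 * assumed to be on a VxFS mount point. Compile: cc -o sfcopy sfcopy.c */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int in_fd = open("/vxfs/src.dat", O_RDONLY);            /* file on VxFS */
    int out_fd = open("/tmp/dst.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in_fd < 0 || out_fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    struct stat st;
    if (fstat(in_fd, &st) != 0) {
        perror("fstat");
        return EXIT_FAILURE;
    }

    /* sendfile() drives the in-kernel splice read path (vx_splice_read()
     * in the stack above) rather than a user-space read/write loop. */
    off_t offset = 0;
    while (offset < st.st_size) {
        ssize_t n = sendfile(out_fd, in_fd, &offset, st.st_size - offset);
        if (n <= 0) {
            perror("sendfile");
            return EXIT_FAILURE;
        }
    }

    close(in_fd);
    close(out_fd);
    return EXIT_SUCCESS;
}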
* 4162063 (Tracking ID: 4136858)

SYMPTOM:
The ncheck utility was generating a coredump while running on a corrupted file system due to the absence of a sanity check for directory inodes.

DESCRIPTION:
The ncheck utility was generating a coredump while running on a corrupted file system due to the absence of a sanity check for directory inodes.

RESOLUTION:
Added a basic sanity check for directory inodes. Now the utility does not generate a coredump on a corrupted FS and instead exits gracefully in case of any error.

* 4162064 (Tracking ID: 4121580)

SYMPTOM:
Modification operations will be allowed on a checkpoint despite the WORM flag being set.

DESCRIPTION:
If a checkpoint is mounted on one node in RW (READ-WRITE) mode and the WORM flag is set from another node, the modification will be allowed.

RESOLUTION:
Issue is fixed with the code change.

* 4162065 (Tracking ID: 4158238)

SYMPTOM:
The vxfsrecover command exits with an error if the previous invocation terminated abnormally.

DESCRIPTION:
The vxfsrecover command exits with an error if the previous invocation terminated abnormally, due to missing cleanup in the binary.

RESOLUTION:
Code changes have been done to perform cleanup properly in case of abnormal termination of the "vxfsrecover" process.

* 4162066 (Tracking ID: 4156650)

SYMPTOM:
Stale checkpoint entries will remain.

DESCRIPTION:
For example, if we have checkpoints T1 (NEWEST), T2, T3 ... TN (OLDEST) and recovery happened from T3, then T1 and T2 will not be deleted in future, as we have lost their information.

RESOLUTION:
Code changes have been done in the vxfstaskd binary to avoid the above-mentioned issue.

* 4162220 (Tracking ID: 4099775)

SYMPTOM:
System panic.

DESCRIPTION:
The reason for the panic is a race between two or more threads trying to extend the per-node quota file.

RESOLUTION:
Code is modified to handle this race condition.

* 4163183 (Tracking ID: 4158381)

SYMPTOM:
Server panicked with "Kernel panic - not syncing: Fatal exception".

DESCRIPTION:
Server panicked due to accessing a freed dentry; the dentry's hlist has also been corrupted. There is a difference between VxFS's dentry implementation and the kernel equivalent. The VxFS implementation of find_alias and splice_alias is based on some old kernel versions of d_find_alias and d_splice_alias.
We need to keep them in sync with the newer kernel code to avoid landing into any such issue.RESOLUTION:Addressing the difference between our dentry related function like splice_alias, find_alias and the kernel equivalent of these functions.Made kernel equivalent code changes in our dentry's find_alias and splice_alias functions.* 4164090 (Tracking ID: 4163498)SYMPTOM:Veritas File System df command logging doesn't have sufficient permission while validating tunable configurationDESCRIPTION:While updating Veritas File system df command logging, it does check the mode of permission with tunable configuration to update the log with respective permission.The permission for checking tunable configuration is not correct.RESOLUTION:Updated code to set the permission correctly..EO df_vxfs shows error with flex metrics-storage serviceWhen logging to vxfs_cmdlog with df_vxfs utility, it will check tunable configuration of eo_perm in read write mode.And Tunable configuration file doesn't have sufficient permission for this mode.As the root is mounted as ro (readonly)The vxtunefs command try to update tunable value to config file located in /etc/vx/vxfssystemdirectory based on eo_perm tunable.[root@nbapp862 vx]# mount | grep //dev/dm-8 on / type ext4 (ro,relatime,seclabel,stripe=16)/dev/dm-8 on /etc/opt/veritas type ext4 (rw,relatime,seclabel,stripe=16)* 4164270 (Tracking ID: 4156384)SYMPTOM:Filesystem's metadata can get corrupted due to missing transaction in the intent log. This ends up resulting mount failure in some scenario.DESCRIPTION:Filesystem's metadata can get corrupted due to missing transaction in the intent log. This ends up resulting mount failure in some scenario. Also, it is not limited to mount failure. It may result in some more corruption in the filesystem metadata.RESOLUTION:Code changes have been done in reconfig code path to add a missing transaction which will prevent redundant replay of already done transactions which was causing the issue.* 4165966 (Tracking ID: 4165967)SYMPTOM:mount and fsck commands are facing few SELinux permission denials issue.DESCRIPTION:mount and fsck commands are facing few SELinux permission denials issue to read files with the default file type and to manage files from /etc directory.RESOLUTION:Required SELinux permissions are added for mount and fsck commands to be able to read files with the default file type and to manage files from /etc directory.* 4166501 (Tracking ID: 4163862)SYMPTOM:Mutex lock contention is observed in cluster file system under massive file creation workloadDESCRIPTION:Mutex lock contention is observed in cluster file system under massive file creation workload. This mutex lock is used to access/modify delegated inode allocation unit list on cluster file system nodes. As multiple processes creating new files need to read this list, they contend of this mutex lock. Stack trace as following is observed in file creation code path.__mutex_lockvx_dalist_getauvx_cfs_inofindauvx_iallocvx_dircreate_tranvx_int_createvx_do_createvx_create1vx_create_vpvx_createvfs_createRESOLUTION:Mutex lock for delegated inode allocation unit list is converted to read write fast sleep lock. 
It will help to access delegated inode allocation unit list parallelly for file creation processes.* 4166502 (Tracking ID: 4163127)SYMPTOM:Spinlock contention observed during inode allocation for massive file creation operation on cluster file system.DESCRIPTION:While file creation operation happens on cluster node, flag for inode allocation unit is accessed under protection of inode allocation spinlock. Hence, we see the contention with following code path: vx_ismapdelfullvx_get_map_dele vx_mdele_tryholdvx_cfs_inofindauvx_iallocvx_dircreate_tranvx_int_createvx_do_createvx_create1vx_create_vpvx_createvfs_createRESOLUTION:Code changes have been done to reduce the contention on inode allocation spinlock.* 4166503 (Tracking ID: 4162810)SYMPTOM:Spikes in CPU usage of glm threads were observed in output of "top" command during massive file creation workload on cluster file system.DESCRIPTION:When running massive file creation workload from a cluster node(s), there was observation that a huge number of GLM threads are shown in top command output. These glm threads are contending for a global spinlock and consuming CPUs cycles with following stack trace. native_queued_spin_lock_slowpath()_raw_spin_lock_irqsave()vx_glmlist_thread()vx_kthread_init()kthread()RESOLUTION:Code is modified to split the heavy global spinlock at different priority level.* 4168357 (Tracking ID: 4076646)SYMPTOM:Unprivileged memory can get corrupted by VxFS incase inode size is 512 Byte and inode's attribute resides in its immediate area.DESCRIPTION:Unprivileged memory can get corrupted by VxFS incase inode size is 512 Byte and inode's attribute resides in its immediate area.RESOLUTION:Code changes have been done in attribute code path to make sure the free space in attribute area should never exceed length of that area.* 4171307 (Tracking ID: 4171308)SYMPTOM:VxFS module failed to load on RHEL-9.4 kernelDESCRIPTION:This issue occurs due to changes in the RHEL-9.4 kernelRESOLUTION:VxFS module is updated to accommodate the changes in the kernel and load as expected on RHEL-9.4 kernel* 4172054 (Tracking ID: 4162316)SYMPTOM:System PANICDESCRIPTION:CrowdStrike falcon might generate Kernel PANIC, during migration from native FS to VxFS.RESOLUTION:Required fix is added to vxfs code.* 4172753 (Tracking ID: 4173685)SYMPTOM:fsck command facing few SELinux permission denials issue.DESCRIPTION:fsck command facing few SELinux permission denials issue to write to user temporary files.RESOLUTION:Required SELinux permissions are added for fsck command to be able to write to user temporary files.* 4173064 (Tracking ID: 4163337)SYMPTOM:Intermittent df slowness seen across cluster due to slow cluster-wide file system freeze.DESCRIPTION:For certain workload, intent log reset can happen relatively frequently and whenever it happens it will trigger cluster-wide freeze. If there are a lot of dirty buffers that need flushing and invalidation, then the freeze might take long time to finish. The slowest part in the invalidation of cluster buffers is the de-initialisation of its glm lock which requires lots of lock release messages to be sent to the master lock node. 
This can cause flowcontrol to be set at LLT layer and slow down the cluster-wide freeze and block commands like df, ls for that entire duration.RESOLUTION:Code is modified to avoid buffer flushing and invalidation in case freeze is triggered by intent log reset.* 4174242 (Tracking ID: 4174538)SYMPTOM:mount and fsck commands are facing few SELinux permission denials issue.DESCRIPTION:mount and fsck commands are facing few SELinux permission denials issue to write to user temporary files.RESOLUTION:Required SELinux permissions are added for mount and fsck commands to be able to write to user temporary files.* 4174244 (Tracking ID: 4174539)SYMPTOM:fsck command facing few SELinux permission denials issue.DESCRIPTION:fsck command facing few SELinux permission denials issue to manage files with the default file type.RESOLUTION:Required SELinux permissions are added for fsck command to be able to manage files with the default file type.Patch ID: VRTSvxfs-8.0.2.1700* 4159284 (Tracking ID: 4145203)SYMPTOM:vxfs startup scripts fails to invoke veki for kernel version higher than 3.DESCRIPTION:vxfs startup script failed to start Veki, as it was calling system V init script to start veki instead of the systemctl interface.RESOLUTION:Current code changes checks if kernel version is greater than 3.x and if systemd is present then use systemctl interface otherwise use system V interface* 4159938 (Tracking ID: 4155961)SYMPTOM:System panic due to null i_fset in vx_rwlock().DESCRIPTION:Panic in vx_rwlock due to race between vx_rwlock() and vx_inode_deinit() function.Panic stack[exception RIP: vx_rwlock+174]..#10 __schedule#11 vx_write#12 vfs_write#13 sys_pwrite64#14 system_call_fastpathRESOLUTION:Code changes have been done to fix this issue.* 4160325 (Tracking ID: 4160740)SYMPTOM:Command fsck is facing few SELinux permission denials issue.DESCRIPTION:Command fsck is facing few SELinux permission denials issue to manage generic files in /etc.RESOLUTION:Required SELinux permissions are added for command fsck to be able to manage generic files in /etc.* 4160326 (Tracking ID: 4160742)SYMPTOM:mount and fsck commands are facing few SELinux permission denials issue.DESCRIPTION:mount and fsck commands are facing few SELinux permission denials issue to manage and set the attributes of /var directories.RESOLUTION:Required SELinux permissions are added for mount and fsck commands to be able to manage and set the attributes /var directories.* 4161120 (Tracking ID: 4161121)SYMPTOM:Non root user is unable to access log files under /var/log/vx directoryDESCRIPTION:In vxfs post install script we create /var/log/vx directory with 0600 permission, hence non root user is unable to read logfiles fromthis location As part of eo log file permission tunable changes , the log file permissions are getting change as expected but due to this directory permission 0600 non root users are unable to access these log files.RESOLUTION:Change this /var/log/vx directory permission to 0755.Patch ID: VRTSvxfs-8.0.2.1600* 4157410 (Tracking ID: 4157409)SYMPTOM:Security Vulnerabilities were observed in the current versions of third party components [sqlite and expat] used by VxFS .DESCRIPTION:In an internal security scan, security vulnerabilities in [sqlite and expat] were observed.RESOLUTION:Upgrading the third party components [sqlite and expat] to address these vulnerabilities.Patch ID: VRTSvxfs-8.0.2.1500* 4119626 (Tracking ID: 4119627)SYMPTOM:Command fsck is facing few SELinux permission denials issue.DESCRIPTION:Command fsck is facing few 
SELinux permission denials issue to manage var_log_t files and search init_var_run_t directories.RESOLUTION:Required SELinux permissions are added for command fsck to be able to manage var_log_t files and search init_var_run_t directories.* 4138668 (Tracking ID: 4138669)SYMPTOM:VxFS module failed to load on RHEL9.3 kernel.DESCRIPTION:This issue occurs due to changes in RHEL9.3 kernel.RESOLUTION:VxFS module is updated to accommodate the changes in the kernel and load as expected on RHEL9.3 kernel.* 4146580 (Tracking ID: 4141876)SYMPTOM:Old SecureFS configuration is getting deleted.DESCRIPTION:It is possible that multiple instances of vxschadm binary are getting executed to update the config file, however there are high chances that the last updater will nullify the previous binary changes.RESOLUTION:Added synchronisation mechanism between two processes of vxschadm command running across the Infoscale cluster.* 4148734 (Tracking ID: 4148732)SYMPTOM:Memory by binaries / daemons who are calling this API, e.g. vxfstaskd daemonDESCRIPTION:At every call get_dg_vol_names() is not freeing the 8192 bytes of memory, which will result to increase in the total consumption of virtual memory by vxfstaskd.RESOLUTION:Free the unused memory.* 4150065 (Tracking ID: 4149581)SYMPTOM:WORM checkpoints and files will not be deleted despite their retention period is expired.DESCRIPTION:Frequent FS freeze operations, like creation of checkpoint, may cause SecureClock to get drifted from its regular update cycle.RESOLUTION:Fixed this bug.* 4152206 (Tracking ID: 4152205)SYMPTOM:The VxFS module fails to load on RHEL9.3 minor kernel 5.14.0-362.18.1.DESCRIPTION:This issue occurs due to changes in the RHEL9.3 minor kernel.RESOLUTION:Updated VXFS to support RHEL 9.3 minor kernel 5.14.0-362.18.1.Patch ID: VRTSvxfs-8.0.2.1400* 4141666 (Tracking ID: 4141665)SYMPTOM:Security vulnerabilities exist in the Zlib third-party components used by VxFS.DESCRIPTION:VxFS uses Zlib third-party components with some security vulnerabilities.RESOLUTION:VxFS is updated to use a newer version of Zlib third-party components in which the security vulnerabilities have been addressed.Patch ID: VRTSvxfs-8.0.2.1200* 4121230 (Tracking ID: 4119990)SYMPTOM:Some nodes in cluster are in hang state and recovery is stuck.DESCRIPTION:There is a deadlock where one thread locks the buffer and wait for the recovery to complete. Recovery on the other hand may get stuck while flushing andinvalidating the buffers from buffer cache as it cannot lock the buffer.RESOLUTION:If recovery is in progress, release buffer and return VX_ERETRY callers will retry the operation. There are some cases where lock is taken on 2 buffers. Forthose cases pass the flag VX_NORECWAIT which will retry the operation after releasing both the buffers.* 4124924 (Tracking ID: 4118556)SYMPTOM:VxFS module failed to load on RHEL9.2 kernel.DESCRIPTION:This issue occurs due to changes in RHEL9.2 kernel.RESOLUTION:VxFS module is updated to accommodate the changes in the kernel and load as expected on RHEL9.2 kernel.* 4125870 (Tracking ID: 4120729)SYMPTOM:Incorrect file replication(VFR) job status at VFR target site, while replication is in running state at source.DESCRIPTION:If full sync is started in recovery mode, we don't update state on target during start of replication (from failed to full-sync running). 
This state change is missed and is causing issues with states for the next incremental syncs.

RESOLUTION:
Updated the code to address the correct state at the target when a VFR full sync is started in recovery mode.

* 4125871 (Tracking ID: 4114176)

SYMPTOM:
After failover, job sync fails with error "Device or resource busy".

DESCRIPTION:
If a job is in the failed state on the target because of a job failure from the source side, repld was not updating its state when it was restarted in recovery mode. Because of this, the job state remained in the running state even after successful replication on the target. With this state on the target, if the job is promoted, the replication process was not creating a new ckpt for the first sync after failover, which resulted in corrupting the state file on the new source. Because of this incorrect/corrupt state file, job sync from the new source was failing with error "Device or resource busy".

RESOLUTION:
Code is modified to correct the state on the target when the job was started in recovery mode.

* 4125873 (Tracking ID: 4108955)

SYMPTOM:
VFR job hangs on the source if thread creation fails on the target.

DESCRIPTION:
On the target, if thread creation for pass completion fails because of high memory usage, the repld daemon doesn't send that failure reply to the source. This can lead to the vxfsreplicate process remaining in a waiting state indefinitely for the pass-completion reply from the target. This will lead to a job hang on the source and will need manual intervention to kill the job.

RESOLUTION:
Code is modified to retry thread creation on the target and, if it fails after 5 retries, the target will reply to the source with an appropriate error.

* 4125875 (Tracking ID: 4112931)

SYMPTOM:
vxfsrepld consumes a lot of virtual memory when it has been running for a long time.

DESCRIPTION:
The current VxFS thread pool is not efficient when it is used for a daemon process like vxfsrepld. It did not release the underlying resources used by newly created threads, which in turn increased the virtual memory consumption of that process. The underlying resources of threads are released either when we call pthread_join() on them or when threads are created with the detached attribute. With the current implementation, pthread_join() is called only when the thread pool is destroyed as a part of cleanup, but with vxfsrepld it is not expected to call pool_destroy() every time a job succeeds; pool_destroy is called only when repld is stopped. This was leading to accumulating thread resources and increasing the VM usage of the process.

RESOLUTION:
Code is modified to detach threads when they exit.
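For reference, the resolution relies on the standard POSIX behaviour that a detached thread's resources are reclaimed automatically when it exits, with no pthread_join() needed. The following generic C sketch illustrates that pattern; the worker function and job loop are hypothetical and are not the vxfsrepld source.

/* Generic POSIX sketch of detached worker threads; thread storage is
 * released on exit without pthread_join(). Compile: cc -pthread demo.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *job_worker(void *arg)
{
    long job_id = (long)arg;
    printf("processing job %ld\n", job_id);
    /* Resources are reclaimed automatically at return because the
     * thread was created with the detached attribute. */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    for (long i = 0; i < 4; i++) {
        pthread_t tid;
        if (pthread_create(&tid, &attr, job_worker, (void *)i) != 0) {
            perror("pthread_create");
            return EXIT_FAILURE;
        }
        /* Alternative to the detached attribute: pthread_detach(tid); */
    }

    pthread_attr_destroy(&attr);
    pthread_exit(NULL);   /* let detached workers finish before process exit */
}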
* 4125878 (Tracking ID: 4096267)

SYMPTOM:
Veritas File Replication jobs might fail when a large number of jobs run in parallel.

DESCRIPTION:
File Replication jobs might fail when a large number of jobs are configured and running in parallel with Veritas File Replication. With a large number of jobs there is a chance of referring to a job which is already freed, due to which a core is generated by the replication service and the job might fail.

RESOLUTION:
Updated the code to take a hold while checking for invalid job configuration.

* 4126104 (Tracking ID: 4122331)

SYMPTOM:
Block number, device id information and in-core inode state are missing from the error messages logged in syslog while marking a bitmap/inode as "BAD".

DESCRIPTION:
Block number, device id information and in-core inode state are missing from the error messages logged in syslog upon encountering bitmap corruption or while marking an inode "BAD".

RESOLUTION:
Code changes have been done to include the required missing information in the corresponding error messages.

* 4127509 (Tracking ID: 4107015)

SYMPTOM:
The VxFS module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified the existing modinst-vxfs script to consider the kernel-build version in the exact-version-module version calculation.

* 4127510 (Tracking ID: 4107777)

SYMPTOM:
The VxFS module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified the existing modinst-vxfs script to consider the kernel-build version in the best-fit-module-version calculation if the exact-version module is not present.

* 4127594 (Tracking ID: 4126957)

SYMPTOM:
If "fsadm -o mntunlock=<string> <mountpoint>" and "umount -f <mountpoint>" operations are run in parallel, the system may crash with the following stack:
vx_aioctl_unsetmntlock+0xd3/0x2a0 [vxfs]
vx_aioctl_vfs+0x256/0x2d0 [vxfs]
vx_admin_ioctl+0x156/0x2f0 [vxfs]
vxportalunlockedkioctl+0x529/0x660 [vxportal]
do_vfs_ioctl+0xa4/0x690
ksys_ioctl+0x64/0xa0
__x64_sys_ioctl+0x16/0x20
do_syscall_64+0x5b/0x1b0

DESCRIPTION:
There is a race condition between these two operations, due to which, by the time the fsadm thread tries to access the FS data structure, it is possible that the umount operation has already freed the structures, which leads to a panic.

RESOLUTION:
As a fix, the fsadm thread first checks if the umount operation is in progress. If so, it fails rather than continuing.

* 4127621 (Tracking ID: 4127623)

SYMPTOM:
The VxFS module fails to load on RHEL9.0 minor kernel 5.14.0-70.36.1.

DESCRIPTION:
This issue occurs due to changes in the RHEL9.0 minor kernel.

RESOLUTION:
Updated VxFS to support RHEL9.0 minor kernel 5.14.0-70.36.1.

* 4127720 (Tracking ID: 4127719)

SYMPTOM:
The fsdb binary fails to open the device on a VVR secondary volume in RW mode although it has write permissions. The fstyp binary could not dump the fs_uuid value.

DESCRIPTION:
We have observed that fsdb, when run on a VVR secondary volume, bails out. At the FS level the volume has write permission, but since it is a secondary from the VVR perspective, it is not allowed to be opened in write mode at the block layer. The fstyp binary could not dump the fs_uuid value along with other superblock fields.

RESOLUTION:
Added fallback logic wherein, if fs_open fails to open the device in Read-Write mode, fsdb will try to open it in Read-Only mode. Fixed the fstyp binary to dump the fs_uuid value along with other superblock fields. Code changes have been done to reflect these changes.
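For reference, the fallback described above follows the common "open read-write, fall back to read-only" pattern with POSIX open(2). The C sketch below is illustrative only; the device path is a placeholder borrowed from the mount example later in this document, and this is not the fsdb source.

/* Generic read-write/read-only open fallback, as described for fsdb above. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Try to open a block device read-write; fall back to read-only if the
 * lower layer (for example, a VVR secondary) refuses write access. */
static int open_with_fallback(const char *path, int *read_only)
{
    int fd = open(path, O_RDWR);
    if (fd < 0 && (errno == EACCES || errno == EROFS || errno == EPERM)) {
        fd = open(path, O_RDONLY);
        if (fd >= 0)
            *read_only = 1;
    }
    return fd;
}

int main(void)
{
    int read_only = 0;
    int fd = open_with_fallback("/dev/vx/dsk/testdg/vol1", &read_only);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    printf("device opened %s\n", read_only ? "read-only" : "read-write");
    close(fd);
    return 0;
}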
* 4127785 (Tracking ID: 4127784)SYMPTOM:/opt/VRTS/bin/fsppadm validate /mnt4 invalid_uid.xmlUX:vxfs fsppadm: WARNING: V-3-26537: Invalid USER id 1xx specified at or near line 10DESCRIPTION:Before this fix, the fsppadm command did not stop parsing and treated an invalid uid/gid as a warning only. Here an invalid uid/gid means one that is not an integer number; if the given uid/gid is a valid number but does not exist, it is still only a warning.RESOLUTION:Code added to give the user a proper error when invalid user/group IDs are provided.* 4128249 (Tracking ID: 4119965)SYMPTOM:The VxFS mount binary failed to mount VxFS with an SELinux context.DESCRIPTION:Mounting the file system using the VxFS binary with a specific SELinux context shows the error below:/FSQA/fsqa/vxfsbin/mount -t vxfs /dev/vx/dsk/testdg/vol1 /mnt1 -ocontext="system_u:object_r:httpd_sys_content_t:s0"UX:vxfs mount: ERROR: V-3-28681: Selinux context is invalid or option/operation is not supported. Please look into the syslog for more information.RESOLUTION:The VxFS mount command is modified to pass context options to the kernel only if SELinux is enabled.* 4129494 (Tracking ID: 4129495)SYMPTOM:Kernel panic observed in internal VxFS LM conformance testing.DESCRIPTION:A kernel panic has been observed in internal VxFS testing when the OS writeback thread marks an inode for writeback and then calls the filesystem hook vx_writepages. The OS writeback thread is not expected to get inside iput(), as it would self-deadlock while waiting on writeback. This deadlock caused the tsrapi command to hang, which further caused the kernel panic.RESOLUTION:Modified the code to avoid deallocation of the inode while inode writeback is in progress.* 4129681 (Tracking ID: 4129680)SYMPTOM:The VxFS rpm does not have a changelog.DESCRIPTION:A changelog in the rpm helps to find incidents that are missing with respect to another version.RESOLUTION:A changelog is generated and added to the VxFS rpm.* 4131312 (Tracking ID: 4128895)SYMPTOM:On servers with SELinux enabled, the VxFS mount command may throw an error such as the following. Error message: UX:vxfs mount: ERROR: V-3-21264: <volume> is already mounted, <mount_point> is busy, or the allowable number of mount points has been exceeded.DESCRIPTION:VxFS mount commands now run with the vxfs_mount_t SELinux context. This context was missing permissions to execute VxVM commands, so the mount command was not able to confirm whether the filesystem was already mounted elsewhere and could therefore report that the volume is already mounted.RESOLUTION:Permissions to run VxVM commands under the vxfs_mount_t SELinux context are added.Patch ID: VRTSspt-8.0.2.1300* 4139975 (Tracking ID: 4149462)SYMPTOM:A new script, list_missing_incidents.py, is provided, which compares rpm changelogs and lists incidents that are missing in the new version.DESCRIPTION:list_missing_incidents.py compares the changelog of the old-version rpm with that of the new-version rpm and lists incidents missing from the new-version rpm, if any. For details of the script, refer to README.list_missing_incidents in the VRTSspt package.RESOLUTION:list_missing_incidents.py compares the changelog of the old-version rpm with that of the new-version rpm and lists incidents missing from the new-version rpm, if any. For details of the script, refer to README.list_missing_incidents in the VRTSspt package.
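Incident 4127785 (4127784) above distinguishes two cases: an ID token that is not a number at all (now reported as an error) and a numeric ID that simply does not exist on the system (still only a warning). The following is a hedged C sketch of that distinction only; fsppadm itself parses the placement-policy XML, which is not shown, and check_uid_token is a hypothetical helper name.

    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>

    /* 0 = ok, 1 = warning (numeric but unknown uid), 2 = error (not numeric). */
    static int check_uid_token(const char *token)
    {
        char *end = NULL;
        long uid = strtol(token, &end, 10);

        if (end == token || *end != '\0' || uid < 0)
            return 2;                      /* e.g. "1xx": not a valid number  */
        if (getpwuid((uid_t)uid) == NULL)
            return 1;                      /* a number, but no such user      */
        return 0;
    }

    int main(int argc, char **argv)
    {
        for (int i = 1; i < argc; i++)
            printf("%s -> %d\n", argv[i], check_uid_token(argv[i]));
        return 0;
    }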
* 4146957 (Tracking ID: 4149448)SYMPTOM:A new script, check_incident_inchangelog.py, is provided, which checks whether an incident abstract is present in the changelog.DESCRIPTION:If a changelog is present in the rpm or the installed package, the script provided in VRTSspt can check whether an incident abstract is present in the changelog. For details of the script, refer to README.check_incident_inchangelog in the VRTSspt package.RESOLUTION:If a changelog is present in the rpm or the installed package, the script provided in VRTSspt can check whether an incident abstract is present in the changelog. For details of the script, refer to README.check_incident_inchangelog in the VRTSspt package.Patch ID: VRTSrest-3.0.10* 4124960 (Tracking ID: 4130028)SYMPTOM:The GET APIs of vm and filesystem were failing because of a datatype mismatch between the spec and the actual output when the client generates client code from the specs.DESCRIPTION:The GET API was returning a different response from what was mentioned in the specs.RESOLUTION:Changed the response of the GET vm and fs APIs to match the specs. After this change, client-generated code does not get an error.* 4124963 (Tracking ID: 4127170)SYMPTOM:While modifying the system list for a service group that has a dependency, the API would fail.DESCRIPTION:While modifying the system list for a service group that has a dependency, the API would fail, so the system list could not be modified if the service group had a dependency on another service group.RESOLUTION:The API code is modified so that the system list of a service group can be modified when a dependency exists.* 4124964 (Tracking ID: 4127167)SYMPTOM:DELETE rvg was failing when replication was in progress.DESCRIPTION:DELETE rvg was failing when replication was in progress, so the -force option is now used in deletion so that the rvg is deleted successfully. A new online option is also added in PATCH of rvg, so that users can explicitly request an online add volume.RESOLUTION:DELETE rvg now uses the -force option so that the rvg is deleted successfully even when replication is in progress, and a new online option is added in PATCH of rvg so that users can explicitly request an online add volume.* 4124966 (Tracking ID: 4127171)SYMPTOM:While getting excluded disks on the Systems API, the href contained the nodelist instead of the nodename. When the user would try to GET on that link, the request would fail.DESCRIPTION:The GET system list API was returning wrong reference links for excluded disks. When the user would try to GET on that link, the request would fail.RESOLUTION:Return the correct href for excluded disks from the GET system API.* 4124968 (Tracking ID: 4127168)SYMPTOM:In a GET request on rvgs, not all data volumes in the RVGs were listed correctly.DESCRIPTION:The command used to get the list of data volumes of an rvg was not returning all data volumes, because of which the API was not returning all the data volumes of the rvg.RESOLUTION:Changed the command used to get the data volumes of an rvg.
Now GET on rvg returns all the data volumes associated with that rvg.* 4125162 (Tracking ID: 4127169)SYMPTOM:The GET disks API fails when CVM is down on any node.DESCRIPTION:When a node is out of the cluster from the CVM perspective, the GET disks API fails and does not give proper output.RESOLUTION:Used the appropriate checks to get the proper list of disks from the GET disks API.Patch ID: VRTSfsadv-8.0.2.1800* 4162373 (Tracking ID: 4130255)SYMPTOM:Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.DESCRIPTION:VxFS uses the OpenSSL third-party components in which some security vulnerabilities exist.RESOLUTION:VxFS is updated to use a newer version (1.1.1v) of these third-party components in which the security vulnerabilities have been addressed.Patch ID: VRTSfsadv-8.0.2.1500* 4153164 (Tracking ID: 4088024)SYMPTOM:Security vulnerabilities exist in the OpenSSL third-party components used by VxFS.DESCRIPTION:VxFS uses the OpenSSL third-party components in which some security vulnerabilities exist.RESOLUTION:VxFS is updated to use a newer version (1.1.1q) of these third-party components in which the security vulnerabilities have been addressed.Patch ID: VRTSpython-3.9.16 P05* 4169026 (Tracking ID: 4169025)SYMPTOM:Version upgrade for the VRTSpython package from 3.9.16.4 to 3.9.16.5.DESCRIPTION:Version upgrade for the VRTSpython package from 3.9.16.4 to 3.9.16.5.RESOLUTION:Version upgrade for the VRTSpython package from 3.9.16.4 to 3.9.16.5.Patch ID: VRTSpython-3.9.16.4* 4161479 (Tracking ID: 4161477)SYMPTOM:There are open exploitable CVEs with High/Critical CVSS scores in the current PPL and other modules under VRTSpython.DESCRIPTION:There are open exploitable CVEs with High/Critical CVSS scores in the current Python Programming Language and other modules under VRTSpython.RESOLUTION:Updated several vulnerable third-party modules in VRTSpython to address open exploitable security vulnerabilities.Patch ID: VRTSsfcpi-8.0.2.1300* 4006619 (Tracking ID: 4015976)SYMPTOM:On a Solaris system, patch upgrade of InfoScale fails with an error in the alternate boot environment.DESCRIPTION:InfoScale does not support patch upgrade in alternate boot environments. Therefore, when you provide the "-rootpath" argument to the installer during a patch upgrade, the patch upgrade operation fails with the following error message: CPI ERROR V-9-0-0 The -rootpath option works only with upgrade tasks.RESOLUTION:The installer is enhanced to support patch upgrades in alternate boot environments by using the -rootpath option.* 4008502 (Tracking ID: 4008744)SYMPTOM:Rolling upgrade using a response file fails if one or more operating system packages are missing on the cluster nodes.DESCRIPTION:When a rolling upgrade is performed using a response file, the installer script checks for the missing operating system packages and installs them. After installing the missing packages on the cluster nodes, the installer script fails and exits. The following error message is logged:CPI ERROR V-9-40-1153 Response file error, no configuration for rolling upgrade phase 1.This issue occurs because the installer script fails to check the sub-cluster numbers in the response file.RESOLUTION:The installer script is enhanced to continue with the rolling upgrade after installing the missing OS packages on the cluster nodes.* 4010025 (Tracking ID: 4010024)SYMPTOM:CPI assumes that the third digit in an InfoScale 7.4.2 version indicates a patch version, and not a GA version. Therefore, it upgrades the packages from the patch only and does not upgrade the base packages.DESCRIPTION:To compare product versions and to set the type of installation, CPI compares the currently installed version with the target version to be installed. However, instead of comparing all the digits in a version, it incorrectly compares only the first two digits. In this case, CPI compares 7.4 with 7.4.2.xxx, and finds that the first two digits match exactly. Therefore, it assumes that the base version is already installed and then installs the patch packages only.RESOLUTION:This hotfix updates the CPI to recognize InfoScale 7.4.2 as a base version and 7.4.2.xxx (for example) as a patch version. After you apply this patch, CPI can properly upgrade the base packages first, and then proceed to upgrade the packages that are in the patch.
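Incident 4010025 (4010024) comes down to comparing dotted version strings field by field instead of stopping after the first two fields. The CPI installer itself is Perl; the short C sketch below only illustrates such a numeric, field-wise comparison and assumes purely numeric dotted versions.

    #include <stdio.h>
    #include <stdlib.h>

    /* Compare dotted version strings numerically, field by field.
     * Returns <0, 0, >0 like strcmp; missing fields count as 0, so
     * "7.4" equals "7.4.0" but is older than "7.4.2". */
    static int vercmp(const char *a, const char *b)
    {
        while (*a || *b) {
            char *ea, *eb;
            long x = strtol(a, &ea, 10);
            long y = strtol(b, &eb, 10);
            if (x != y)
                return (x > y) ? 1 : -1;
            if (ea == a && eb == b)
                break;                 /* nothing numeric left on either side */
            a = (*ea == '.') ? ea + 1 : ea;
            b = (*eb == '.') ? eb + 1 : eb;
        }
        return 0;
    }

    int main(void)
    {
        printf("%d\n", vercmp("7.4.2", "7.4"));        /* 1: base 7.4.2 is newer than 7.4 */
        printf("%d\n", vercmp("7.4.2", "7.4.2.2200")); /* -1: the patch version is newer  */
        return 0;
    }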
* 4012032 (Tracking ID: 4012031)SYMPTOM:If older versions of the VRTSvxfs and VRTSodm packages are installed in non-global zones, they are not upgraded when you upgrade to a newer version of the product.DESCRIPTION:If older versions of the VRTSvxfs and VRTSodm packages are installed in non-global zones, you must uninstall them before you perform a product upgrade. After you upgrade those packages in the global zones, you must then install the VRTSvxfs and VRTSodm packages manually in the non-global zones.RESOLUTION:The CPI now handles the VRTSodm and VRTSvxfs packages in non-global zones in the same manner as it does in global zones.* 4013446 (Tracking ID: 4008578)SYMPTOM:Even though a cluster node may have a fully qualified hostname, the product installer trims this value and uses the shorter hostname for the cluster configuration.DESCRIPTION:The name of a cluster node may be set to a fully qualified hostname, for example, somehost.example.com. However, by default, the product installer trims this value and uses the shorter hostname (for example, somehost) for the cluster configuration.RESOLUTION:This hotfix updates the installer to allow the use of the new "-fqdn" option. If this option is specified, the installer uses the fully qualified hostname for cluster configuration. Otherwise, the installer continues with the default behavior.* 4014920 (Tracking ID: 4015139)SYMPTOM:If IPv6 addresses are provided for the system list on a RHEL 8 system, the product installer fails to verify the network communication with the remote systems and cannot proceed with the installation. The following error is logged:CPI ERROR V-9-20-1104 Cannot ping <IPv6_address>. Please make sure that: - Provided hostname is correct - System <IPv6_address> is in same network and reachable - 'ping' command is available to use (provided by 'iputils' package)DESCRIPTION:This issue occurs because the installer uses the ping6 command to verify the communication with the remote systems if IPv6 addresses are provided for the system list. For RHEL 8 and its minor versions, the path for ping6 has changed from /bin/ping6 to /sbin/ping6, but the installer uses the old path.RESOLUTION:This hotfix updates the installer to use the correct path for the ping6 command.* 4014985 (Tracking ID: 4014983)SYMPTOM:The product installer does not display a warning at the time of the pre-upgrade check to suggest that you will need to provide telemetry details later on if the cluster nodes are not registered with TES or VCR.DESCRIPTION:The product installer prompts you to provide the telemetry details of cluster nodes after upgrading the InfoScale packages but before starting the services.
If you cancel the installation at this stage, the Cluster Server resources cannot be brought online. Therefore, a warning message is required during the pre-upgrade checks to remind you to keep these details ready.RESOLUTION:The product installer is updated to notify you at the time of the pre-upgrade check, that if the cluster nodes are not registered with TES or VCR, you will need to provide these telemetry details later on.* 4016078 (Tracking ID: 4007633)SYMPTOM:The product installer fails to synchronize the system clocks with the NTP server.DESCRIPTION:This issue occurs when the /usr/sbin/ntpdate file is missing on the systems where the clocks need to be synchronized.RESOLUTION:Updated the installer to include a dependency on the ntpdate package, which helps the system clocks to be synchronized with the NTP server.* 4020090 (Tracking ID: 4022920)SYMPTOM:The product installer fails to install InfoScale 7.4.2 on SLES 15 SP2 and displays the following error message: CPI ERROR V-9-0-00 No padv object defined for padv SLESx8664 for system <system_name>DESCRIPTION:This issue occurs because the format of the kernel version that is required by SLES 15 SP2 is not included.RESOLUTION:The installer is updated to include the format of the kernel version that is required to support installation on SLES 15 SP2.* 4021517 (Tracking ID: 4021515)SYMPTOM:On SLES 12 SP4 and later systems, the installer fails to fetch the media speed of the network interfaces.DESCRIPTION:The path of the 'ethtool' command is changed in SLES 12 SP4. Therefore, on SLES 12 SP4 and later systems, the installer does not recognize the changed path and uses an incorrect path '/sbin/ethtool' instead of '/usr/sbin/ethtool' for the 'ethtool' command. Consequently, while configuring the product, the installer fails to fetch the media speed of the network interfaces and displays its value as "Unknown".RESOLUTION:This hotfix updates the installer to use the correct path for the 'ethtool' command.* 4022492 (Tracking ID: 4022640)SYMPTOM:The product installer fails to complete the installation after it automatically downloads a required support patch from SORT that contains a VRTSvlic package. The following error is logged:CPI ERROR V-9-0-00 No pkg object defined for pkg VRTSvlic401742 and padv <<PADV>>DESCRIPTION:During installation, the product installer looks for any applicable platform support patches that are available on SORT and automatically downloads them. However, it fails to correctly identify the base version of the VRTSvlic package on the system to compare it with the downloaded version. Consequently, even though the appropriate patch is available, the installer fails to complete the installation.RESOLUTION:To address this issue, the product installer is updated to correctly identify the base version of the VRTSvlic package on a system.* 4027741 (Tracking ID: 4027759)SYMPTOM:The product installer installs lower versions packages if multiple patch bundles are specified using the patch path options in the incorrect order.DESCRIPTION:The product installer expects that the patch bundles, if any, are specified in the lower-to-higher order. Consequently, the installer always overrides the package version from the available package in the last patch bundle in which it exists. 
If patch bundles are not specified in the expected order, installer installs last available version for a component package.RESOLUTION:To address this issue, the product installer is updated to correctly identify the higher package version before installing the patch bundles. It does so regardless of the order in which the patch bundles are specified.* 4033243 (Tracking ID: 4033242)SYMPTOM:When a responsefile is used, the product installer fails to add the required VCS users.DESCRIPTION:The product installer fails to add the required VCS users during configuration, if a responsefile is used to provide the corresponding input.RESOLUTION:This patch updates the installer so that it adds the required VCS users while using a responsefile for configuration.* 4033688 (Tracking ID: 4033687)SYMPTOM:The InfoScale product installer deletes any existing cluster configuration files during uninstallation.DESCRIPTION:During uninstallation, the product installer deletes cluster configuration files like /etc/llthosts, /etc/llttab, and so on. Consequently, when you reinstall the InfoScale product, you need to perform all the cluster configuration procedures again.RESOLUTION:This patch updates the product installer so that it no longer deletes any existing cluster configuration files. Consequently, you can reconfigure the clusters quickly by using the existing configuration files.* 4034357 (Tracking ID: 4033988)SYMPTOM:The product installer displays the following error message after the precheck and does not allow you to proceed with the installation:The higher version of <package_name> is already installed on <system_name>DESCRIPTION:The product installer compares the versions of the packages in an Infoscale patch bundle with those of the packages that are installed on a system. If a more recent version of any of the packages in the bundle is found to be already installed on the system, the installer displays an error. It does not allow you to proceed further with the installation.RESOLUTION:The product installer is updated to allow the installation of an Infoscale patch bundle that may contain older versions of some packages. Instead of an error message, the installer now displays a warning message and lets you proceed with the installation.* 4038945 (Tracking ID: 4033957)SYMPTOM:The VRTSveki and the VRTSvxfs RPMs fail to upgrade when using yum.DESCRIPTION:The product installer assumes that services are stopped if the corresponding modules are unloaded. However, the veki and the vxfs services remain active even after the modules are unloaded, which causes the RPMs to fail during the upgrade.RESOLUTION:The installer is enhanced to stop the veki and the vxfs services when the modules are unloaded but the services remain active.* 4040836 (Tracking ID: 4040833)SYMPTOM:After an InfoScale upgrade to version 7.4.2 Update 2 on Solaris, the latest vxfs module is not loaded.DESCRIPTION:This issue occurs when the patch_path option of the product installer is used to perform the upgrade. When patch_path is used, the older versions of the packages in the patch are not uninstalled. Thus, the older version of the vxfs package gets loaded after the installation.RESOLUTION:This hotfix updates the product installer to address this issue with the patch_path option. 
When patch_path is used, the newer versions of the packages in the patch are installed only after the corresponding older packages are uninstalled.* 4041770 (Tracking ID: 4041816)SYMPTOM:On RHEL 8.4, the system panics after the InfoScale stack starts.DESCRIPTION:The system panics when the CFSMount agent or the Mount agent attempts to register for IMF. This issue occurs because these agents are IMF-enabled by default.RESOLUTION:This hotfix updates the product installer to disable IMF for the CFSMount and the Mount agents on RHEL 8.4 systems.* 4042590 (Tracking ID: 4042591)SYMPTOM:On RHEL 8.4, the installer disables IMF for the CFSMount and the Mount agents.DESCRIPTION:By default, IMF is enabled for the CFSMount and the Mount agents. On RHEL 8.4, the installer used to disable IMF for the CFSMount and the Mount agents before starting the agents.RESOLUTION:This hotfix updates the product installer not to disable IMF for the CFSMount and the Mount agents on RHEL 8.4 systems.* 4042890 (Tracking ID: 4043075)SYMPTOM:After performing a phased upgrade of InfoScale, the product installer fails to update the types.cf file.DESCRIPTION:You can rename the /etc/llttab file before an OS upgrade and revert to the original configuration after the OS upgrade and before you start the InfoScale stack upgrade. However, if you do not revert the renamed /etc/llttab file, the product installer fails to identify that VCS is configured on the mentioned systems and proceeds with the upgrade. Consequently, the installer does not update the .../config/types.cf file.RESOLUTION:This hotfix updates the product installer to avoid such a situation. It displays an error and exits after the precheck tasks if the /etc/llttab file is missing and the other VCS configuration files are available on the mentioned systems.* 4043366 (Tracking ID: 4042674)SYMPTOM:The product installer does not honor the single-node mode of a cluster and restarts it in multi-node mode if 'vcs_allowcomms = 1'.DESCRIPTION:On a system where a single-node cluster is running, if you upgrade InfoScale using a response file with 'vcs_allowcomms = 1', the existing cluster configuration is not restored. The product installer restarts the cluster in the multi-node mode. When 'vcs_allowcomms = 1', the installer does not consider the value of the ONENODE parameter in the /etc/sysconfig/vcs file. It fails to identify that VCS is configured on the systems mentioned in the response file and proceeds with the upgrade. Consequently, the installer neither updates the .../config/types.cf file nor restores the /etc/sysconfig/vcs file.RESOLUTION:This hotfix updates the product installer to honor the single-node mode of an existing cluster configuration on a system.* 4043372 (Tracking ID: 4043371)SYMPTOM:If SecureBoot is enabled on the system, the product installer fails to install some InfoScale RPMs (VRTSvxvm, VRTSaslapm, VRTScavf).DESCRIPTION:InfoScale installations are not supported on systems where SecureBoot is enabled. However, the product installer does not check whether SecureBoot is enabled on the system. Consequently, it fails to install some InfoScale RPMs even though it proceeds with the installation.RESOLUTION:This hotfix updates the product installer to check whether SecureBoot is enabled, and if so, display an appropriate error message and exit.
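A common way to detect SecureBoot on an EFI Linux system, as incident 4043372 (4043371) requires, is to read the SecureBoot variable through efivarfs: the file starts with a 4-byte attribute header followed by one data byte that is 1 when SecureBoot is enabled. The C sketch below is only an illustration of that check under those assumptions; it is not the installer's code (the installer is Perl).

    #include <stdio.h>

    /* Returns 1 if SecureBoot appears enabled, 0 if disabled or not an EFI system. */
    static int secureboot_enabled(void)
    {
        const char *var =
            "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c";
        unsigned char buf[5];
        FILE *f = fopen(var, "rb");
        if (!f)
            return 0;
        size_t n = fread(buf, 1, sizeof(buf), f);
        fclose(f);
        return (n == sizeof(buf) && buf[4] == 1);   /* bytes 0-3: attributes, byte 4: value */
    }

    int main(void)
    {
        if (secureboot_enabled()) {
            fprintf(stderr, "SecureBoot is enabled; this configuration is not supported.\n");
            return 1;
        }
        return 0;
    }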
* 4043892 (Tracking ID: 4043890)SYMPTOM:The product installer incorrectly prompts users to install deprecated OS RPMs for LLT over RDMA configurations.DESCRIPTION:The following RPMs that are used in LLT RDMA configurations have been deprecated: (1) the libmthca and the libmlx4 OS RPMs have been replaced by libibverbs, and (2) the rdma RPM has been replaced by rdma-core. However, even when the libibverbs and the rdma-core RPMs are installed on a system, the InfoScale installer prompts users to install the corresponding deprecated RPMs. Furthermore, if these deprecated RPMs are not installed, the installer does not allow users to proceed with the InfoScale cluster configuration.RESOLUTION:This hotfix updates the product installer to use the correct OS RPM dependency list for LLT over RDMA configurations.* 4045881 (Tracking ID: 4043751)SYMPTOM:The VRTScps RPM installation may fail on SLES systems.DESCRIPTION:The VRTScps RPM has a dependency on the "/etc/SUSE-brand" file that is available in the "branding-SLE" package. If the brand file is not present on the system, the InfoScale product installer may fail to install the VRTScps RPM on SLES systems.RESOLUTION:This hotfix addresses the issue by updating the product installer to include the "branding-SLE" package in the OS dependency list during the installation pre-check.* 4046196 (Tracking ID: 4067426)SYMPTOM:Package uninstallation during a rolling upgrade fails if non-global zones are under VCS service group control.DESCRIPTION:During a rolling upgrade, VCS failover service groups are switched to another machine in the cluster before the upgrade starts. The installer checks whether any zone has Veritas packages installed on it and tries to uninstall the packages from the non-global zone before upgrading. The uninstall fails because the non-global zone is no longer on the current machine; it has failed over to a machine that is not being upgraded.RESOLUTION:The installer checks whether any zone has Veritas packages installed and is under VCS service group control. In such a case, the installer does not upgrade the zone on the current machine because the non-global zone switches to the other machine in the cluster.* 4050467 (Tracking ID: 4050465)SYMPTOM:The InfoScale product installer fails to create VCS users for non-secure clusters.DESCRIPTION:In case of non-secure clusters, the product installer fails to locate the binary that is required for password encryption. Consequently, the addition of VCS users fails.RESOLUTION:The product installer is updated to be able to successfully create VCS users in case of non-secure clusters.* 4052860 (Tracking ID: 4052859)SYMPTOM:The InfoScale licensing component on AIX and Solaris and the InfoScale agents for Pure Storage replication on AIX do not work if the VRTSpython package is not installed.DESCRIPTION:On AIX and Solaris, the VRTSpython package is required to support the InfoScale licensing component. On AIX, the VRTSpython package is required to support the InfoScale agents for Pure Storage replication.
These components do not work because the InfoScale product installer does not install the VRTSpython package on AIX and Solaris.RESOLUTION:The product installer is enhanced to install the VRTSpython package on AIX and Solaris.* 4052867 (Tracking ID: 4052866)SYMPTOM:After a fresh configuration of VCS, if the InfoScale cluster node is not registered with the Usage Insights service, the expected nagging messages do not get logged in the telemetry and the system logs.DESCRIPTION:The CollectorService process needs to be running during an installation or a configuration operation so that the InfoScale node can register itself with the Usage Insights service. This issue occurs because the product installer does not start the CollectorService process during a fresh configuration of VCS.RESOLUTION:The product installer is updated to start the CollectorService process during a fresh configuration of VCS.* 4053752 (Tracking ID: 4053753)SYMPTOM:The InfoScale product installer prompts you to enter the host name or the IP address and the port number of a Usage Insights server instance.DESCRIPTION:The licensing service is upgraded to help manage your InfoScale licenses more effectively. As part of this upgrade, you can register an InfoScale server with an on-premises Usage Insights server in your data center. The product installer prompts you to enter the host name or the IP address and the port number of the Usage Insights server instance. Using this information, the installer registers the InfoScale server with the Usage Insights server.RESOLUTION:The product installer is enhanced to register InfoScale servers with a Veritas Usage Insights server.* 4053875 (Tracking ID: 4053635)SYMPTOM:If a system is restarted immediately after the product installation, the vxconfigd service fails to start when an add node operation is initiated during a fresh configuration.DESCRIPTION:When a system is restarted immediately after product installation, the VxVM agent attempts to start the vxvm-boot service. The service does not start properly due to the presence of the install-db file on the system. During a product configuration (or the add node operation), the product installer removes the install-db file and attempts to start the vxvm-boot service. However, because the service was already in the active state, systemd does not execute the vxvm-startup scripts again. Consequently, both the 'vxdctl init' and the 'vxdctl enable' commands fail with the exit code '4'.RESOLUTION:The product installer is updated to first restart the vxvm-boot service and then start the vxconfigd service when a system is restarted.* 4053876 (Tracking ID: 4053638)SYMPTOM:The installer prompt to mount shared volumes during an add node operation does not advise that the corresponding CFSMount entries will be updated in main.cf.DESCRIPTION:If any shared volumes are mounted on the existing cluster nodes, during the add node operation the installer provides an option to mount them on the new nodes as well. If you choose to mount the shared volumes on the new nodes, the installer updates the corresponding CFSMount entries in the main.cf file.
However, the installer prompt does not advise that main.cf will also be updated.RESOLUTION:The installer prompt is updated to advise that the CFSMount entries in main.cf will be updated if you choose to mount shared volumes on new nodes.* 4054322 (Tracking ID: 4054460)SYMPTOM:InfoScale installer fails to start the GAB service with the -start option on Solaris.DESCRIPTION:The product installer removes the GAB driver when it stops the service with the -stop option. However, it does not add the GAB driver while starting the service with the -start option, so the GAB service fails to start.RESOLUTION:The product installer is updated to load the GAB driver before it attempts to start the GAB service.* 4054913 (Tracking ID: 4054912)SYMPTOM:During upgrade, the product installer fails to stop the vxfs service.DESCRIPTION:If upgrading from lower than InfoScale 7.3 version, the product installer assumes that services are stopped if the corresponding modules are unloaded. However, the vxfs services remain active even after the modules are unloaded, which causes the RPMs to fail during the upgrade.RESOLUTION:The installer is enhanced to stop the vxfs services when the modules are unloaded but the services remain active.* 4055055 (Tracking ID: 4055242)SYMPTOM:InfoScale installer fails to install a patch on Solaris and displays the following error:CPI ERROR V-9-0-00 Cannot find VRTSpython for padv Sol11sparc in <<media path>>DESCRIPTION:The product installer expects the VRTSpython patch to be available in the media path that is provided. When the patch is not available at the expected location, the installer fails to proceed and exits with the aforementioned error message.RESOLUTION:The product installer is updated to handle such a scenario and proceed to install the other available patches.* 4066237 (Tracking ID: 4057908)SYMPTOM:The InfoScale product installer fails to configure passwordless SSH communication for remote Solaris systems that have one of these SRUs installed: 11.4.36.101.2 and 11.4.37.101.1.DESCRIPTION:To establish passwordless SSH communication with a remote system, the installer tries to fetch the home directory details of the remote system. This task fails due to an issue with the function that fetches those details. The product installation fails because the passwordless SSH communication is not established.RESOLUTION:The product installer is updated to address this issue so that the installation does not fail when the aforementioned SRUs are installed on a system.* 4067433 (Tracking ID: 4067432)SYMPTOM:While upgrading to 7.4.2 Update 2 , VRTSvlic patch package fails to install.DESCRIPTION:VRTSvlic patch package has a package level dependency on VRTSpython. When upgrading the VRTSvlic patch package, only VRTSvlic publisher is set. VRTSvlic publisher fails to resolve the VRTSpython dependency and installation is unsuccessful.RESOLUTION:Installer checks if VRTSvlic and VRTSpython are in the patch list. If both are present, installer sets the publisher for VRTSvlic and VRTSpython before running the command to install the VRTSvlic patch. The package level dependency is thus resolved and VRTSvlic and VRTSpython patches get installed.* 4070908 (Tracking ID: 4071690)SYMPTOM:The InfoScale product installer prompts you to enter the host name or the IP address and the port number of an edge server.DESCRIPTION:If the edge server details not provided, the InfoScale server is not registered with the edge server. 
Consequently, the installer does not perform InfoScale configurations on the InfoScale server.RESOLUTION:The installer is updated to perform InfoScale configurations without registering the InfoScale server with an edge server. It no longer prompts you for the edge server details.* 4079500 (Tracking ID: 4079853)SYMPTOM:The patch installer flashes a false error message with the -precheck option.DESCRIPTION:While installing a patch with the -precheck option, the installer flashes a false error message - 'CPI ERROR V-9-0-0 A more recent version of InfoScale Enterprise, 7.4.2.1100, is already installed on server'.RESOLUTION:A check is added for both the hotfix-upgrade and the precheck cases before performing further tasks.* 4079916 (Tracking ID: 4079922)SYMPTOM:The product installer fails to complete the installation after it automatically downloads a required support patch from SORT that contains a VRTSperl package. The following error is logged:CPI ERROR V-9-0-00 No pkg object defined for pkg VRTSperl530 and padv <<PADV>>DESCRIPTION:During installation, the product installer looks for any applicable platform support patches that are available on SORT and automatically downloads them. However, it fails to correctly identify the base version of the VRTSperl package on the system to compare it with the downloaded version. Consequently, even though the appropriate patch is available, the installer fails to complete the installation.RESOLUTION:To address this issue, the product installer is updated to correctly identify the base version of the VRTSperl package on a system.* 4080100 (Tracking ID: 4080098)SYMPTOM:The installer fails to complete the CP server configuration. The following error is logged:CPI ERROR V-9-40-4422 Unable to create CA certificate /var/VRTScps/security/certs/ca.crt on <<system>>CPI ERROR V-9-40-4427 Unable to create csr file /var/VRTSvxfen/security/certs/client_{<<uuid>>}.csr for <<system>> on <<system>>DESCRIPTION:The installer uses the 'openssl req' command to create the CA certificate and csr file. From VRTSperl 5.34.0.2 onwards, the openssl version is updated. The updated version requires a config file to be passed to the 'openssl req' command by using the -config parameter. Consequently, the installer fails to create the CA certificate and csr file, causing the CP server configuration failure.RESOLUTION:The product installer is updated to pass the configuration file with the 'openssl req' command.* 4081964 (Tracking ID: 4081963)SYMPTOM:The VRTSvxfs patch fails to install on Linux platforms while applying the security patch for 742 SP1.DESCRIPTION:The VRTSvxfs patch needs the veki module to be loaded before it loads its own modules. The VRTSvxfs patch rpm verifies whether the veki module is loaded; if it is not loaded, the VRTSvxfs patch does not get installed and an error message appears.RESOLUTION:A new preinstall check is added to the CPI installer for VRTSvxfs, which verifies whether veki is loaded and loads it before the VRTSvxfs patch is applied.* 4084977 (Tracking ID: 4084975)SYMPTOM:The installer fails to complete the CP server configuration. The following error is logged:CPI ERROR V-9-40-4422 Unable to create CA certificate /var/VRTScps/security/certs/ca.crt on <<system>>CPI ERROR V-9-40-4427 Unable to create csr file /var/VRTSvxfen/security/certs/client_{<<uuid>>}.csr for <<system>> on <<system>>DESCRIPTION:To create the CA certificate and csr file, the installer uses the 'openssl req' command and passes the openssl configuration file '/opt/VRTSperl/non-perl-libs/bin/openssl.cnf' to it by using the -config parameter. OpenSSL version 1.0.2 does not have an openssl configuration file. Hence, the installer fails to create the CA certificate and csr file, and the CP server configuration fails.RESOLUTION:The installer is updated to check for the openssl configuration file and pass it only if the file is present on the system.
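The CP server certificate fixes in incidents 4080100, 4084977, and 4086624 reduce to one rule: pass -config to 'openssl req' only when the configuration file actually exists on the node that runs the command. The C sketch below only illustrates building such a command line; the installer itself is Perl, and the output file names and subject used here are placeholders rather than the CP server's real layout.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        const char *cnf = "/opt/VRTSperl/non-perl-libs/bin/openssl.cnf";
        char cmd[512];

        if (access(cnf, R_OK) == 0)
            /* Newer OpenSSL builds need the config file passed explicitly. */
            snprintf(cmd, sizeof(cmd),
                     "openssl req -config %s -new -x509 -nodes -days 365 "
                     "-subj /CN=cps-ca -keyout ca.key -out ca.crt", cnf);
        else
            /* e.g. OpenSSL 1.0.2: no config file is shipped, so omit -config. */
            snprintf(cmd, sizeof(cmd),
                     "openssl req -new -x509 -nodes -days 365 "
                     "-subj /CN=cps-ca -keyout ca.key -out ca.crt");

        return system(cmd);   /* a sketch only; real code would check the exit status */
    }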
* 4085612 (Tracking ID: 4087319)SYMPTOM:The installer fails to uninstall VxVM while upgrading from 7.4.2 to 8.0U1.DESCRIPTION:While upgrading, uninstallation of the previous rpms fails if the semodule is not loaded. The installer fails to uninstall VxVM if the vxvm semodule is not loaded before uninstallation.RESOLUTION:The installer is enhanced to check and take appropriate action if the vxvm semodule is not loaded before uninstallation.* 4086047 (Tracking ID: 4086045)SYMPTOM:When an InfoScale cluster is reconfigured, the LLT, GAB, and VXFEN services fail to start after reboot.DESCRIPTION:The installer updates the /etc/sysconfig/<<service>> file and incorrectly sets the START_<<service>> and STOP_<<service>> values to '0' in the pre_configure task even when VCS is not set for reconfiguration. These services thus fail to start after reboot.RESOLUTION:The installer is enhanced not to update the /etc/sysconfig/<<service>> files when VCS is not set for reconfiguration.* 4086570 (Tracking ID: 4076583)SYMPTOM:On a Solaris system, the InfoScale installer runs set/unset publisher several times but does not disable the publisher. The deployment process slows down as a result.DESCRIPTION:The CPI installer sets/unsets the Veritas publisher several times, which slows the deployment process.RESOLUTION:The installer now sets all the publishers together. Subsequently, the higher version of the package/patch available in the publishers is selected. Solaris and other publishers except Veritas are also disabled.* 4086624 (Tracking ID: 4086623)SYMPTOM:The installer fails to complete the CP server configuration. The following error is logged:CPI ERROR V-9-40-4422 Unable to create CA certificate /var/VRTScps/security/certs/ca.crt on <<system>>CPI ERROR V-9-40-4427 Unable to create csr file /var/VRTSvxfen/security/certs/client_{<<uuid>>}.csr for <<system>> on <<system>>DESCRIPTION:The installer checks for the openssl_conf file on the client nodes instead of on the CP server. Consequently, even if the openssl_conf file is not present on the CP server, the installer tries to use it and fails to generate the CA certificate and csr files.RESOLUTION:The product installer is updated to check for and pass the configuration file from the CP server.* 4087148 (Tracking ID: 4088698)SYMPTOM:The CPI installer tries to download a must-have patch whose version is lower than the version specified in the media path. If the installer is unable to download it, the following error message is displayed:CPI ERROR V-9-30-1114 Failed to connect to SORT (https://sort.veritas.com), the patch <<patchname>> is required to deploy this product.DESCRIPTION:The CPI installer tries to download a lower-version must-have patch even if patch bundles of equal or higher version than all the patches in the must-have patch are provided in the media path.RESOLUTION:The CPI installer does not download the required must-have patch if patches of equal or higher version are supplied in the media path.* 4087809 (Tracking ID: 4086533)SYMPTOM:The VRTSfsadv package fails to upgrade from 7.4.2 U4 to 8.0 U1 while using yum upgrade. The following error is observed:The fsdedupschd.service is running.
Please stop fsdedupschd.service before upgrading.error: %prein(VRTSfsadv-8.0.0.1700-RHEL8.x86_64) scriptlet failed, exit status 1DESCRIPTION:The fsdedupschd service is started as a post-installation task of the VRTSfsadv 7.4.2.2600 package. Before the yum upgrade to 8.0 U1, the installer does not handle the fsdedupschd service, and the VRTSfsadv package fails to upgrade.RESOLUTION:The installer is enhanced to handle the start and stop of the VRTSfsadv-related services, i.e., fsdedupschd and vxfs_replication.* 4089657 (Tracking ID: 4089934)SYMPTOM:The installer does not update the '/opt/VRTSvcs/conf/config/types.cf' file after a VRTSvcs patch upgrade.DESCRIPTION:If the '../conf/types.cf' file is changed by a VRTSvcs patch, the installer does not update the '../conf/config/types.cf' file during the patch upgrade. It needs to be updated manually to avoid unexpected issues.RESOLUTION:The product installer is enhanced to correctly populate the '../conf/config/types.cf' file if the '../conf/types.cf' file is changed by a VRTSvcs patch.* 4089815 (Tracking ID: 4089867)SYMPTOM:On Linux, the installer fails to start the fsdedupschd service if VRTSfsadv 7.4.2.0000 is installed on the system.DESCRIPTION:VRTSfsadv 7.4.2.0000 does not have a systemctl wrapper for the fsdedupschd start script, and the shebang line is missing from the bash script. Because of these two issues, the installer fails to start the fsdedupschd service.RESOLUTION:The installer is enhanced to handle the start and stop of the VRTSfsadv-related services, i.e., fsdedupschd and vxfs_replication, only if VRTSfsadv 7.4.2.2600 or higher is installed.* 4092407 (Tracking ID: 4092408)SYMPTOM:The CPI installer fails to correctly identify the status of the vxfs_replication service.DESCRIPTION:The installer parses 'ps -ef' output to determine whether the vxfs_replication service has started. Because of an intermediate 'pidof' process, the installer incorrectly identifies the vxfs_replication service as started, and the post-start check then fails because vxfs_replication has not actually started.RESOLUTION:The product installer is updated to skip the 'pidof' process while determining the vxfs_replication service status.Patch ID: VRTSsfmh-8.0.2.500* 4160665 (Tracking ID: 4160661)SYMPTOM:NADESCRIPTION:NARESOLUTION:NAPatch ID: -4.01.802.002* 4173483 (Tracking ID: 4173483)SYMPTOM:Security vulnerability in the SLIC component.DESCRIPTION:Security vulnerability in SLIC component version 3.5.RESOLUTION:Upgraded the SLIC component to 3.7.Patch ID: VRTSvcsea-8.0.2.1600* 4088599 (Tracking ID: 4088595)SYMPTOM:The hapdbmigrate utility fails to online the oracle service group due to a timing issue.DESCRIPTION:The hapdbmigrate utility fails to online the oracle service group due to a timing issue.example:./hapdbmigrate -pdbres pdb1_res -cdbres cdb2_res -XMLdirectory /oracle_xmlCluster prechecks and validation DoneTaking PDB resource [pdb1_res] offline DoneModification of cluster configuration DoneVCS ERROR V-16-41-39 Group [CDB2_grp] is not ONLINE after 300 seconds on %vcs_node%VCS ERROR V-16-41-41 Group [CDB2_grp] is not ONLINE on some nodes in the clusterBringing PDB resource [pdb1_res] online on CDB resource [cdb2_res]DoneFor further details, see '/var/VRTSvcs/log/hapdbmigrate.log'RESOLUTION:The hapdbmigrate utility is modified to ensure that enough time elapses between the probe of the PDB resource and the online of the CDB group.Patch ID: VRTSvcsea-8.0.2.1400* 4058775 (Tracking ID: 4073508)SYMPTOM:Oracle virtual fire-drill fails due to the Oracle password file location change in Oracle version 21c and later.DESCRIPTION:The Oracle password file has been moved to $ORACLE_BASE/dbs from Oracle version 21c.RESOLUTION:Environment variables
are used to point to the updated path of the password file. From Oracle 21c and later versions, it is mandatory for a client to configure the .env file path in the EnvFile attribute. This file must have the ORACLE_BASE path added for the Oracle virtual fire-drill feature to work. Sample EnvFile content with the ORACLE_BASE path for Oracle 21c: [root@inaqalnx013 Oracle]# cat /opt/VRTSagents/ha/bin/Oracle/envfile ORACLE_BASE="/u02/app/oracle/product/21.0.0/dbhome_1/"; export ORACLE_BASE; Sample attribute value: EnvFile = "/opt/VRTSagents/ha/bin/Oracle/envfile"Patch ID: VRTSvcsag-8.0.2.1600* 4149272 (Tracking ID: 4164374)SYMPTOM:The VCS DNS agent monitor times out if multiple DNS servers are added as Stealth Masters and a few of them hang.DESCRIPTION:This is because the VCS monitor calls the nsupdate and dig commands sequentially for each DNS server. The default timeout of the monitor routine (60s) is not enough to complete the nsupdate and dig calls for all servers, as nsupdate can have a minimum timeout of 20s. So, if more than 3 DNS servers are configured in the environment and 3 of them are in a hung state, the monitor routine times out and fails over the resource even though the 4th DNS server might be working fine.RESOLUTION:The changes are made to call nsupdate for all DNS servers in parallel, with similar changes for the dig command.* 4156630 (Tracking ID: 4156628)SYMPTOM:The message "Uninitialized value $version in string eq at /opt/VRTSvcs/bin/NIC/monitor line 317" is logged constantly.DESCRIPTION:The following message is constantly reported in NIC_A.log because $version is not initialized.2024/02/05 15:32:00 VCS INFO V-16-2-13716 Thread(1312) Resource(csgnic): Output of the completed operation (monitor)==============================================Use of uninitialized value $version in string eq at /opt/VRTSvcs/bin/NIC/monitor line 317, <IFCONFIG> line 1.Use of uninitialized value $version in string eq at /opt/VRTSvcs/bin/NIC/monitor line 317, <IFCONFIG> line 2.Use of uninitialized value $version in string eq at /opt/VRTSvcs/bin/NIC/monitor line 317, <IFCONFIG> line 3.RESOLUTION:While performing the ping test, $version was not initialized, so the code is updated to handle this problem.* 4162102 (Tracking ID: 4163518)SYMPTOM:The Apache (httpd) agent hangs on reboot for over 10 minutes.DESCRIPTION:Apache hangs because VCS waits for httpd to stop, and httpd waits for VCS to stop. On a node that is booting up, the Apache resource comes online due to this dependency even though it has failed over to the other node. This causes a concurrency violation and an attempt to bring down httpd.RESOLUTION:Removed the dependency between VCS and httpd.* 4162659 (Tracking ID: 4162658)SYMPTOM:The LVMVolumeGroup resource fails to offline/clean in a cloud environment after a path failure.DESCRIPTION:If the disk is detached, the LVMVolumeGroup resource cannot fail over.RESOLUTION:Implemented the PanicSystemOnVGLoss attribute. 0 - Default value and behavior; does not fail over (does not halt the system). 1 - Halts the system if deactivation of the volume group fails. 2 - Does not halt the system; allows failover (note the risk of data corruption).
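The DNS agent change in incident 4149272 (4164374) replaces sequential per-server nsupdate/dig calls with parallel ones, so one hung server can no longer consume the whole 60-second monitor window. The agent itself is a Perl script; the C sketch below only illustrates the launch-all-then-wait pattern using 'dig', and the server addresses and record name are placeholders.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        const char *servers[] = { "@192.0.2.1", "@192.0.2.2", "@192.0.2.3" };
        const int nsrv = 3;
        pid_t pids[3];

        for (int i = 0; i < nsrv; i++) {       /* start all queries at once */
            pids[i] = fork();
            if (pids[i] == 0) {
                execlp("dig", "dig", "+time=5", "+tries=1",
                       servers[i], "www.example.com", "A", (char *)NULL);
                _exit(127);                    /* exec failed */
            }
        }
        for (int i = 0; i < nsrv; i++) {       /* then collect all results */
            int status = 0;
            if (pids[i] > 0)
                waitpid(pids[i], &status, 0);
        }
        return 0;
    }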
* 4162753 (Tracking ID: 4142040)SYMPTOM:While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs/conf/config/types.cf' file on Veritas Cluster Server (VCS) might be incorrectly updated.DESCRIPTION:While upgrading the VRTSvcsag rpm package, the '/etc/VRTSvcs/conf/config/types.cf' file on Veritas Cluster Server (VCS) might be incorrectly updated. In some instances, the user might be informed to manually copy '/etc/VRTSvcs/conf/types.cf' to the existing '/etc/VRTSvcs/conf/config/types.cf' file. The message "Implement /etc/VRTSvcs/conf/types.cf to utilize resource type updates" needs to be removed when updating the VRTSvcsag rpm.RESOLUTION:To ensure that the '/etc/VRTSvcs/conf/config/types.cf' file is updated correctly following VRTSvcsag updates, the script user_trigger_update_types can be manually triggered by the user. The following message displays:Leaving existing /etc/VRTSvcs/conf/config/types.cf configuration file unmodifiedCopy /opt/VRTSvcs/bin/sample_triggers/VRTSvcs/user_trigger_update_types to /opt/VRTSvcs/bin/triggersTo manually update the types.cf, execute the command "hatrigger -user_trigger_update_types 0".Patch ID: VRTSvcsag-8.0.2.1500* 4157581 (Tracking ID: 4157580)SYMPTOM:Security vulnerabilities have been identified in the current version of the third-party component OpenSSL, which is utilized by VCS.DESCRIPTION:There are security vulnerabilities present in the current version of the third-party component OpenSSL that is utilized by VCS.RESOLUTION:VCS is updated to use newer versions of OpenSSL in which the security vulnerabilities have been addressed.Patch ID: VRTSvcsag-8.0.2.1400* 4114880 (Tracking ID: 4152700)SYMPTOM:When a Private DNS Zone resource ID is passed, the AzureDNSZone agent returns an error saying that the resource cannot be found.DESCRIPTION:Azure Private DNS Zone is not supported by the AzureDNSZone agent.RESOLUTION:The Azure Private DNS Zone is supported by the AzureDNSZone agent by installing the Azure library for Private DNS Zone (azure-mgmt-privatedns). This library has functions that can be utilized for Private DNS zone operations. For DNS zones, the resource ID differs between Public and Private DNS zones; the resource ID is parsed and the resource type is checked to determine whether it is a Public or a Private DNS zone, and the corrective actions are taken accordingly.* 4135534 (Tracking ID: 4152812)SYMPTOM:The AWS EBSVol agent takes a long time to perform online and offline operations on resources.DESCRIPTION:When a large number of AWS EBSVol resources are configured, it takes a long time to perform online and offline operations on these resources. EBSVol is a single-threaded agent, which prevents parallel execution of the attach and detach EBS volume commands.RESOLUTION:To resolve the issue, the default value of the 'NumThreads' attribute of the EBSVol agent is modified from 1 to 10, and the agent is enhanced to use a locking mechanism to avoid conflicting resource configuration. This results in an enhanced response time for parallel execution of attach and detach commands. Also, the default value of the MonitorTimeout attribute is modified from 60 to 120.
This avoids a timeout of the monitor entry point when the response of the AWS CLI/server is unexpectedly slow.* 4137215 (Tracking ID: 4094539)SYMPTOM:The MonitorProcesses argument in the resource ArgListValues passed to the agent (the bundled Application agent) incorrectly removes a needed extra space from the monitored process, as found via the recommended CLI process test.DESCRIPTION:In the ArgListValues under MonitorProcesses, the extra space even shows up when displaying the resource.RESOLUTION:For the monitored process (not program), only leading and trailing spaces are removed; extra spaces between words are no longer removed.* 4137376 (Tracking ID: 4122001)SYMPTOM:A NIC resource remains online after the network cable is unplugged on an ESXi server.DESCRIPTION:Previously MII used network statistics/ping tests, but now the agent directly marks the NIC state ONLINE by checking the NIC status in operstate, with no ping check before that; only if it fails to detect the operstate file does it fall back to the ping test. In an ESXi server environment the NIC is already marked ONLINE because the operstate file is available with state UP and the carrier bit set. So even if the "NetworkHosts" are not reachable, the NIC resource is marked ONLINE.RESOLUTION:The NIC agent already has the "PingOptimize" attribute. A new value (2) is introduced for the "PingOptimize" attribute to decide whether to perform the ping test. Only if "PingOptimize = 2" is the ping test performed; otherwise the agent works as per the previous design.* 4137377 (Tracking ID: 4113151)SYMPTOM:A dependent DiskGroup resource fails to come online due to a disk group import failure.DESCRIPTION:The VMwareDisks agent reports its resource online just after the VMware disk is attached to the virtual machine; if the dependent DiskGroup resource starts to come online at that moment, it fails because the VMware disk is not yet present in the vxdmp database due to VxVM transaction latency. Customers used to add retry times to work around this problem, but that cannot be applied to every environment.RESOLUTION:Added a finite wait until the VMware disk is present in the vxdmp database before the online operation completes.* 4137602 (Tracking ID: 4121270)SYMPTOM:EBSVol agent error in attach disk: RHEL 7.9 + InfoScale 8.0 on AWS instance type c6i.large with NVMe devices.DESCRIPTION:After attaching a volume to the instance, it takes some time for its device mapping to be updated in the system. Because of this, if 'lsblk -d -o +SERIAL' is run immediately after attaching the volume, the volume details do not show up in the output, and $native_device is left blank/uninitialized. So we need to wait for some time for the device mapping to be updated in the system.RESOLUTION:Logic is added to retry the same command once after an interval if the expected volume device mapping is not found in the first run. The NativeDevice attribute is now updated properly.* 4137618 (Tracking ID: 4152886)SYMPTOM:The AWSIP agent fails to bring OverlayIP resources online and offline on instances in a VPC that is shared across multiple AWS accounts.DESCRIPTION:When a VPC is shared across multiple AWS accounts, the route table associated with the subnets is exclusively owned by the owner account. AWS restricts modification of the route table from any other account. When the AWSIP agent tries to bring an OverlayIP resource online on an instance owned by a different account, it may not have privileges to update the route table. In such cases, the AWSIP agent fails to edit the route table, and fails to bring the OverlayIP resource online and offline.RESOLUTION:To support cross-account deployment, assign appropriate privileges on shared resources. Create an AWS profile to grant permissions to update the route table of the VPC through different nodes belonging to different AWS accounts. This profile is used to update route tables accordingly. A new attribute "Profile" is introduced in the AWSIP agent. Use this attribute to configure the above-created profile.
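The EBSVol change in incident 4137602 (4121270) is a bounded retry: if the freshly attached volume's serial is not yet visible in the 'lsblk -d -o +SERIAL' output, wait briefly and run the command once more. The agent is a Perl script; this C sketch only illustrates that retry, and the serial string and 5-second wait are placeholders.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Look for an EBS volume serial in lsblk output, retrying once if absent. */
    static int find_native_device(const char *serial)
    {
        for (int attempt = 0; attempt < 2; attempt++) {
            FILE *p = popen("lsblk -d -o +SERIAL", "r");
            if (p == NULL)
                return -1;
            char line[256];
            int found = 0;
            while (fgets(line, sizeof(line), p)) {
                if (strstr(line, serial)) {
                    printf("NativeDevice mapping: %s", line);
                    found = 1;
                }
            }
            pclose(p);
            if (found)
                return 0;
            sleep(5);      /* give the OS time to create the device mapping */
        }
        return -1;
    }

    int main(void)
    {
        return (find_native_device("vol0123456789abcdef") == 0) ? 0 : 1;
    }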
* 4143918 (Tracking ID: 4152815)SYMPTOM:An AWS EBS volume that is in use by another AWS instance is getting used by cluster nodes through the AWS EBSVol agent.DESCRIPTION:An AWS EBS volume that is attached to an AWS instance that is not part of the cluster gets attached to a cluster node during the online event. Instead, an 'Unable to detach volume' message should be logged in the AWS EBSVol agent log file, because the volume is already in use by another AWS instance.RESOLUTION:The AWS EBSVol agent is enhanced to avoid attachment of in-use EBS volumes whose instances are not part of the cluster.Patch ID: VRTSvcsag-8.0.2.1200* 4130206 (Tracking ID: 4127320)SYMPTOM:The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.DESCRIPTION:The agent fails to bring online a resource when the shell for the user is set to /sbin/nologin.RESOLUTION:The ProcessOnOnly agent is enhanced to support the /sbin/nologin shell. If the shell is set to /sbin/nologin, the agent uses /bin/bash as the shell to start the process.
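The ProcessOnOnly change in incident 4130206 (4127320) substitutes a usable shell when the configured user's login shell is /sbin/nologin. The bundled agent is a Perl script; this C sketch only illustrates the decision itself, with shell_for_user as a hypothetical helper name.

    #include <pwd.h>
    #include <stdio.h>
    #include <string.h>

    /* Return the shell to use when starting a process as the given user. */
    static const char *shell_for_user(const char *user)
    {
        struct passwd *pw = getpwnam(user);
        if (pw == NULL)
            return NULL;
        if (strcmp(pw->pw_shell, "/sbin/nologin") == 0)
            return "/bin/bash";    /* the user cannot log in; start via bash */
        return pw->pw_shell;
    }

    int main(int argc, char **argv)
    {
        const char *sh = shell_for_user(argc > 1 ? argv[1] : "root");
        printf("%s\n", sh ? sh : "unknown user");
        return 0;
    }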
Patch ID: VRTScps-8.0.2.1600* 4152885 (Tracking ID: 4152882)SYMPTOM:Access to the CP servers is intermittently lost.DESCRIPTION:Logs are written into each of the log files (vxcpserve_[A|B|C].log) up to maxlen; when a log grows beyond that length, a new file is opened and the old one is closed. In this code path, fptr still uses the old pointer, resulting in an fwrite() to a closed FILE pointer.RESOLUTION:A new file is opened before fptr is assigned, so that it points to a valid FILE pointer.
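The CP server fix in incident 4152885 (4152882) is an ordering change around log rotation: the replacement log file is opened and assigned to the shared FILE pointer before the old stream is closed, so writers never see a pointer to an already-closed stream. The single-threaded C sketch below only illustrates that before/after ordering; vxcpserve's real rotation logic, maxlen handling, and locking are not shown.

    #include <stdio.h>

    static FILE *fptr;               /* shared log stream used by the writers */

    /* Rotate to the next log file (for example vxcpserve_A.log -> vxcpserve_B.log). */
    static int rotate_log(const char *next_path)
    {
        FILE *newf = fopen(next_path, "a");
        if (newf == NULL)
            return -1;               /* keep writing to the current file */
        FILE *oldf = fptr;
        fptr = newf;                 /* switch first, so fwrite()/fprintf() never */
        if (oldf)                    /* see a FILE pointer that is already closed */
            fclose(oldf);
        return 0;
    }

    int main(void)
    {
        fptr = fopen("vxcpserve_A.log", "a");
        if (fptr == NULL)
            return 1;
        fprintf(fptr, "message before rotation\n");
        rotate_log("vxcpserve_B.log");
        fprintf(fptr, "message after rotation\n");
        fclose(fptr);
        return 0;
    }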
Hence, these are not within the scope of the gcoconfig utility.Patch ID: VRTSdbac-8.0.2.1400* 4161967 (Tracking ID: 4157901)SYMPTOM:vcsmmconfig.log does not show file permissions 600 if EO-tunable VCS_ENABLE_PUBSEC_LOG_PERM is set to 0.DESCRIPTION:vcsmmconfig.log does not show file permissions 600 if EO-tunable VCS_ENABLE_PUBSEC_LOG_PERM is set to 0.RESOLUTION:Changes have been done to set the file permission of vcsmmconfig.log as per the EO-tunable VCS_ENABLE_PUBSEC_LOG_PERM.* 4164415 (Tracking ID: 4164328)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 4 (RHEL9.4) is now introduced.Patch ID: VRTSdbac-8.0.2.1300* 4137328 (Tracking ID: 4137325)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 2.RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 3(RHEL9.3) is now introduced.Patch ID: VRTSdbac-8.0.2.1100* 4124670 (Tracking ID: 4122405)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 0.RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 2(RHEL9.2) is now introduced.* 4125119 (Tracking ID: 4125118)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions for RHEL9 Update 0 for EUS kernel(released later on 5.14.0-70.30.1.el9).RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 0 for EUS kernel(5.14.0-70.36.1.el9) is now introduced.Patch ID: VRTSamf-8.0.2.1600* 4161436 (Tracking ID: 4161644)SYMPTOM:The system panics when VCS has enabled the AMF module.DESCRIPTION:The panic indicates that after amf_prexec_hook extracted an argument longer than 4k, which spans over two pages, it read a third page; that is illegal because all arguments are loaded in two pages.RESOLUTION:AMF now continues to extract arguments from its internal buffer before moving to the next page.* 4162305 (Tracking ID: 4168084)SYMPTOM:The system panics when VCS has enabled the AMF module to monitor a mount point.DESCRIPTION:AMF calls a sleepable function while it holds a spin lock for a mount point event, resulting in a system panic.RESOLUTION:A busy flag is used to synchronize the threads so that the spin lock can be released.* 4164504 (Tracking ID: 4164328)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 4 (RHEL9.4) is now introduced.Patch ID: VRTSamf-8.0.2.1400* 4137600 (Tracking ID: 4136003)SYMPTOM:A cluster node panics when VCS has enabled the AMF module that monitors process on/off.DESCRIPTION:A cluster node panics, indicating that the AMF module overran into a user-space buffer while analyzing an argument of 8K size.
The AMF module cannot load that length of data into internal buffer, eventually misleading access into user buffer which is not allowed when kernel SMAP in effect.RESOLUTION:The AMF module is constrained to ignore an argument of 8K or bigger size to avoid internal buffer overrun.* 4153059 (Tracking ID: 4137325)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 2.RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 3(RHEL9.3) is now introduced.Patch ID: VRTSamf-8.0.2.1200* 4132379 (Tracking ID: 4122405)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 0.RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 2(RHEL9.2) is now introduced.* 4132620 (Tracking ID: 4125118)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions for RHEL9 Update 0 for EUS kernel(released later on 5.14.0-70.30.1.el9).RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 0 for EUS kernel(5.14.0-70.36.1.el9) is now introduced.Patch ID: VRTSveki-8.0.2.1600* 4164290 (Tracking ID: 4164328)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 4 (RHEL9.4) is now introduced.Patch ID: VRTSveki-8.0.2.1400* 4135795 (Tracking ID: 4135683)SYMPTOM:Enhancing debugging capability of VRTSveki package installationDESCRIPTION:Enhancing debugging capability of VRTSveki package installation using temporary debug logs for SELinux policy file installation.RESOLUTION:Code is changed to store output of VRTSveki SELinux policy file installation in temporary debug logs.* 4140468 (Tracking ID: 4152368)SYMPTOM:Some incidents do not appear in changelog because their cross-references are not properly processedDESCRIPTION:Every cross-references is not parent-child. 
In such case 'top' will not be present and changelog script ends execution.RESOLUTION:All cross-references are traversed to find parent-child only if it present and then find top.Patch ID: VRTSveki-8.0.2.1200* 4120300 (Tracking ID: 4110457)SYMPTOM:Veki packaging failure due to missing of storageapi specific filesDESCRIPTION:While creating the build area for different components like GLM, GMS, ORAODM, unixvm, VxFS veki build area creation were failing because of storageapi changes were not taken care in the Veki mk-symlink and build scripts.RESOLUTION:Added support for creation of storageapi build area, storageapi packaging changes via veki, and storageapi build via veki from Veki makefiles.This is helping to package the storageapi along with veki and resolving all interdependencies* 4124135 (Tracking ID: 4122405)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 0.RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 2(RHEL9.2) is now introduced.* 4130816 (Tracking ID: 4130815)SYMPTOM:VEKI rpm does not have changelogDESCRIPTION:Changelog in rpm will help to find missing incidents with respect to other version.RESOLUTION:Changelog is generated and added to VEKI rpm.* 4132626 (Tracking ID: 4125118)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions for RHEL9 Update 0 for EUS kernel(released later on 5.14.0-70.30.1.el9).RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 0 for EUS kernel(5.14.0-70.36.1.el9) is now introduced.Patch ID: VRTSveki-8.0.2.1100* 4118568 (Tracking ID: 4110457)SYMPTOM:Veki packaging failure due to missing of storageapi specific filesDESCRIPTION:While creating the build area for different components like GLM, GMS, ORAODM, unixvm, VxFS veki build area creation were failing because of storageapi changes were not taken care in the Veki mk-symlink and build scripts.RESOLUTION:Added support for creation of storageapi build area, storageapi packaging changes via veki, and storageapi build via veki from Veki makefiles.This is helping to package the storageapi along with veki and resolving all interdependenciesPatch ID: VRTSvxfen-8.0.2.1100* 4156076 (Tracking ID: 4156075)SYMPTOM:EO changes file permission tunableDESCRIPTION:EO changes file permission tunableRESOLUTION:EO changes file permission tunable* 4164329 (Tracking ID: 4164328)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 4 (RHEL9.4) is now introduced.* 4169032 (Tracking ID: 4166666)SYMPTOM:Failed to configure Disk based fencing on rdm mapped devices from KVM host to kvm guestDESCRIPTION:While configuring read key buffer exceeding the maximum buffer size in KVM hypervisor.RESOLUTION:Reduced maximum number of keys to 1022 to support read key in KVM hypervisor.Patch ID: VRTSvxfen-8.0.2.1400* 4137326 (Tracking ID: 4137325)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).DESCRIPTION:Veritas Infoscale Availability does not 
support Red Hat Enterprise Linux versions later than RHEL9 Update 2.RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 3(RHEL9.3) is now introduced.Patch ID: VRTSvxfen-8.0.2.1200* 4124086 (Tracking ID: 4124084)SYMPTOM:Security vulnerabilities exist in the Curl third-party components used by VCS.DESCRIPTION:Security vulnerabilities exist in the Curl third-party components used by VCS.RESOLUTION:Curl is upgraded to a version in which the security vulnerabilities have been addressed.* 4124644 (Tracking ID: 4122405)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 0.RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 2(RHEL9.2) is now introduced.* 4125891 (Tracking ID: 4113847)SYMPTOM:An even number of coordination point (CP) disks is not supported by design. This enhancement is part of AFA, wherein a faulted disk needs to be replaced as soon as the number of coordination disks becomes even while fencing is up and running.DESCRIPTION:Regular split / network partitioning requires an odd number of disks. Even-number CP support is provided with cp_count. With cp_count/2+1, fencing is not allowed to come up. Also, if cp_count is not defined in the vxfenmode file, then by default a minimum of 3 CP disks are needed, otherwise vxfen does not start.RESOLUTION:In case of an even number of CP disks, another disk is added so that the number of CP disks is odd and fencing keeps running.* 4125895 (Tracking ID: 4108561)SYMPTOM:The vxfen print-keys internal utility was not working because of an internal array overrun.DESCRIPTION:The vxfen print-keys internal utility does not work if the number of keys exceeds 8; it then returns garbage values, overrunning array keylist[i].key of 8 bytes at byte offset 8 using index y (which evaluates to 8).RESOLUTION:The internal loop is restricted to VXFEN_KEYLEN. Reading reservations now works correctly.* 4132625 (Tracking ID: 4125118)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions for RHEL9 Update 0 for EUS kernel(released later on 5.14.0-70.30.1.el9).RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 0 for EUS kernel(5.14.0-70.36.1.el9) is now introduced.Patch ID: VRTSllt-8.0.2.1600* 4162744 (Tracking ID: 4139781)SYMPTOM:The system panics occasionally in the LLT stack where LLT over Ethernet is enabled.DESCRIPTION:LLT allocates skb memory from its own cache for messages larger than 4k and sets a field of skb_shared_info pointing to an LLT function; it later uses that field to determine whether an skb was allocated from its own cache. When receiving a packet, the OS also allocates an skb from the system cache, does not reset the field, and then passes the skb to LLT. Occasionally the stale pointer in memory can mislead LLT into thinking an skb is from its own cache, and the LLT free API is used by mistake.RESOLUTION:LLT now uses a hash table to record skbs allocated from its own cache and no longer sets the field of skb_shared_info.* 4173093 (Tracking ID: 4164328)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 4 (RHEL9.4) is now introduced.Patch ID: VRTSllt-8.0.2.1400* 4132209 (Tracking ID: 4124759)SYMPTOM:Panic happened with llt_ioship_recv on a server running in AWS.DESCRIPTION:In an AWS environment, packets can be duplicated even when LLT is configured over UDP, which is not expected for UDP.RESOLUTION:To avoid the panic, the packet is checked against the send queue of the bucket; if it is already present there, it is treated as an invalid/duplicate packet.* 4137611 (Tracking ID: 4135825)SYMPTOM:Once the root file system is full during LLT start, the llt module keeps failing to load.DESCRIPTION:The disk was full and the user rebooted the system or restarted the product. In this case, while LLT is loading, it deletes the llt links and tries to create new ones using the link names and "/bin/ln -f -s". As the disk is full, it is unable to create the links. Even after making space, it fails to create the links because they were already deleted, so the LLT module fails to load.RESOLUTION:Logic is added to derive the file names needed to create new links when the existing links are not present.* 4153057 (Tracking ID: 4137325)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 2.RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 3(RHEL9.3) is now introduced.Patch ID: VRTSllt-8.0.2.1200* 4124138 (Tracking ID: 4122405)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 0.RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 2(RHEL9.2) is now introduced.* 4128886 (Tracking ID: 4128887)SYMPTOM:Below warning trace is observed while unloading llt module:[171531.684503] Call Trace:[171531.684505] <TASK>[171531.684509] remove_proc_entry+0x45/0x1a0[171531.684512] llt_mod_exit+0xad/0x930 [llt][171531.684533] ? find_module_all+0x78/0xb0[171531.684536] __do_sys_delete_module.constprop.0+0x178/0x280[171531.684538] ?
exit_to_user_mode_loop+0xd0/0x130DESCRIPTION:While unloading llt module, vxnet/llt dir is not removed properly due to which warning trace is observed .RESOLUTION:Proc_remove api is used which cleans up the whole subtree.* 4132621 (Tracking ID: 4125118)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions for RHEL9 Update 0 for EUS kernel(released later on 5.14.0-70.30.1.el9).RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 0 for EUS kernel(5.14.0-70.36.1.el9) is now introduced.Patch ID: VRTSgab-8.0.2.1600* 4173084 (Tracking ID: 4164328)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 4 (RHEL9.4).RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 4 (RHEL9.4) is now introduced.Patch ID: VRTSgab-8.0.2.1400* 4153058 (Tracking ID: 4137325)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 3(RHEL9.3).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 2.RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 3(RHEL9.3) is now introduced.Patch ID: VRTSgab-8.0.2.1200* 4132378 (Tracking ID: 4122405)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 2(RHEL9.2).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions later than RHEL9 Update 0.RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 2(RHEL9.2) is now introduced.* 4132623 (Tracking ID: 4125118)SYMPTOM:Veritas Infoscale Availability does not support Red Hat Enterprise Linux 9 Update 0 for minor EUS kernel(5.14.0-70.36.1.el9).DESCRIPTION:Veritas Infoscale Availability does not support Red Hat Enterprise Linux versions for RHEL9 Update 0 for EUS kernel(released later on 5.14.0-70.30.1.el9).RESOLUTION:Veritas Infoscale Availability support for Red Hat Enterprise Linux 9 Update 0 for EUS kernel(5.14.0-70.36.1.el9) is now introduced.Patch ID: VRTSvxvm-8.0.2.1700* 4153377 (Tracking ID: 4152445)SYMPTOM:Replication failed to start due to vxnetd threads not running on secondary site.DESCRIPTION:Vxnetd was waiting to start "nmcomudpsrv" and "nmcomlistenserver" threads. Due to a race condition of some resource between those two thread, vxnetd was stuck in a dead loop till max retry reached.RESOLUTION:Code changes have been made to add lock protection to avoid the race condition.* 4153874 (Tracking ID: 4010288)SYMPTOM:On setup Replace node fails due to DCM log plex not getting recovered.DESCRIPTION:This is happening because dcm log plex kstate is going enabled with state RECOVER and stale flag set on it. Plex attach expect plex kstate to be not enabled to allow attach operation which fails in this case. Due to some race, plex state of log dcm plex is getting set to enabled.RESOLUTION:Changes done to detect such problematic dcm plex state and correct it and then normal plex attach transactions are triggered.* 4155091 (Tracking ID: 4118510)SYMPTOM:Volume manager tunable to control log file permissionsDESCRIPTION:With US President Executive Order 14028 compliance changes, all product log file permissions changed to 600. 
Introduced tunable "log_file_permissions" to control the log file permissions to 600 (default), 640 or 644. The tunable can be changed at install time or any time with reboot.RESOLUTION:Added the log_file_permissions tunable.* 4157012 (Tracking ID: 4145715)SYMPTOM:Replication disconnectDESCRIPTION:There was issue with dummy update handling on secondary side when temp logging is enabled.It was observed that update next to dummy update is not found on secondary site. Dummy updatewas getting written with incorrect metadata about size of VVR update.RESOLUTION:Fixed dummy update size metadata getting written on disk.* 4157643 (Tracking ID: 4159198)SYMPTOM:vxfmrmap utility generated coredump in solaris due to missing id in pfmtDESCRIPTION:The coredump was seen due to missing id in pfmt.RESOLUTION:Added id in pfmt() statement.* 4158662 (Tracking ID: 4159200)SYMPTOM:. . . . . << Copyright notice for VRTSvxvm >> . . . . . . .Copyright (c) 2023 Veritas Technologies LLC. All rights reserved. Veritas and the Veritas Logo are trademarks or registered trademarks of Veritas Technologies LLC or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.The Licensed Software and Documentation are deemed to be commercial computer software as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19 "Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq. "Commercial Computer Software and Commercial Computer Software Documentation," as applicable, and any successor regulations, whether delivered by Veritas as on premises or hosted services. Any use, modification, reproduction release, performance, display or disclosure of the Licensed Software and Documentation by the U.S. Government shall be solely in accordance with the terms of this Agreement.. . . . . << End of copyright notice for VRTSvxvm >>. . . .sysck: 3001-024 The file /sbin/vxconfigd is the wrong file type.VRTSvxvm.post_u[289]: -q: not found <<<<<<<<<<<<<< Script errorFinished processing all filesets. (Total time: 26 secs)DESCRIPTION:Script error is shown due to usage of undeclared variable.RESOLUTION:Define the variable used.* 4158920 (Tracking ID: 4159680)SYMPTOM:0 Fri Apr 5 20:32:30 IST 2024 + read bd_dg bd_dgid 0 Fri Apr 5 20:32:30 IST 2024 + 0 Fri Apr 5 20:32:30 IST 2024 first_time=1+ clean_tempdir 0 Fri Apr 5 20:32:30 IST 2024 + whence -v set_proc_oom_score 0 Fri Apr 5 20:32:30 IST 2024 set_proc_oom_score not found 0 Fri Apr 5 20:32:30 IST 2024 + 0 Fri Apr 5 20:32:30 IST 2024 1> /dev/null+ set_proc_oom_score 17695012 0 Fri Apr 5 20:32:30 IST 2024 /usr/lib/vxvm/bin/vxconfigbackupd[295]: set_proc_oom_score: not found 0 Fri Apr 5 20:32:30 IST 2024 + vxnotifyDESCRIPTION:type set_proc_oom_score &>/dev/null && set_proc_oom_score $$Here the stdout and stderr stream is not getting redirected to /dev/null. This is because "&>" is incompatible with POSIX.>out 2>&1 is a POSIX-compliant way to redirect both standard output and standard error to out. 
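For illustration, the same check written with each redirection style (a minimal sketch based on the fragment quoted above; only the redirection changes):
# Non-portable: "&>" is a bashism that a POSIX sh does not honor, so the output is not redirected
type set_proc_oom_score &>/dev/null && set_proc_oom_score $$
# Portable: redirects stdout and stderr in POSIX sh and pre-POSIX Bourne shells
type set_proc_oom_score >/dev/null 2>&1 && set_proc_oom_score $$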
It also works in pre-POSIX Bourne shells.RESOLUTION:The code changes have been done to fix the problem.* 4164944 (Tracking ID: 4165970)SYMPTOM:Warnings can be seen while installing VxVM packages.DESCRIPTION:There was a syntax error because of which both of the variables involved were not being read properly.RESOLUTION:The issue is fixed with the code change.* 4164947 (Tracking ID: 4165971)SYMPTOM:Messages are seen while installing VxVM packages.DESCRIPTION:An unexpected message appears on the console while installing the VxVM package on RHEL9 machines. Below is the unexpected message generated while installing /root/patch/VRTSvxvm* pkg: /var/tmp/rpm-tmp.Kl3ycu: line 657: [: missing `]'RESOLUTION:The issue is fixed with the code change.* 4166882 (Tracking ID: 4161852)SYMPTOM:Post InfoScale upgrade, the command "vxdg upgrade" succeeds but throws the error "RLINK is not encrypted".DESCRIPTION:In the "vxdg upgrade" codepath, the encryption keys need to be regenerated if encrypted Rlinks are present in the VxVM configuration. But the key regeneration code was getting called even if Rlinks are not encrypted, and so further code was throwing the error "VxVM vxencrypt ERROR V-5-1-20484 Rlink is not encrypted!"RESOLUTION:Necessary code changes have been made to invoke encryption key regeneration only for RLinks that are encrypted.* 4172377 (Tracking ID: 4172033)SYMPTOM:Data corruption after recovery of a volume.DESCRIPTION:When a disabled/detached volume is started after the storage comes back, stale agenodes were left in memory, which prevented detach tracking for subsequent IOs on the same region as the stale agenode.RESOLUTION:Stale agenodes are cleaned up at the appropriate stage.* 4173722 (Tracking ID: 4158303)SYMPTOM:The vxvm-boot service fails to start in the allotted time after applying patch 8.0.2.1500.DESCRIPTION:ESCALATION JIRA: https://jira.community.veritas.com/browse/STESC-8721 Panic:PID: 155809 TASK: ffff9abc40e08000 CPU: 11 COMMAND: "dmpdaemon"#0 [ffffc13ca29a7b30] machine_kexec at ffffffffb4e6da33#1 [ffffc13ca29a7b88] __crash_kexec at ffffffffb4fb757a#2 [ffffc13ca29a7c48] crash_kexec at ffffffffb4fb84b1#3 [ffffc13ca29a7c60] oops_end at ffffffffb4e2be31#4 [ffffc13ca29a7c80] no_context at ffffffffb4e7f923#5 [ffffc13ca29a7cd8] __bad_area_nosemaphore at ffffffffb4e7fc9c#6 [ffffc13ca29a7d20] do_page_fault at ffffffffb4e808b7#7 [ffffc13ca29a7d50] page_fault at ffffffffb5a011ae [exception RIP: dmpsvc_da_analyze_error+417] RIP: ffffffffc0ecc411 RSP: ffffc13ca29a7e08 RFLAGS: 00010246 RAX: 0000000000000000 RBX: ffff9abd96d1f800 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 68d38a89bb1d8221 RDI: ffffc13ca29a7e48 RBP: ffff9abf201b1400 R8: ffffc13ca29a7e08 R9: ffffc13ca29a7e4e R10: 0000000000000000 R11: 0000000000000000 R12: ffff9abd96d1f100 R13: 0000000000000000 R14: ffff9abd588c3938 R15: 00000000085000b0 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018#8 [ffffc13ca29a7e90] dmp_error_analysis_callback at ffffffffc10397fa [vxdmp]#9 [ffffc13ca29a7ed0] dmp_daemons_loop at ffffffffc104b3a4 [vxdmp]#10 [ffffc13ca29a7f10] kthread at ffffffffb4f1e974#11 [ffffc13ca29a7f50] ret_from_fork at ffffffffb5a0028fRESOLUTION:BLK-MQ code processes IO in request form and does not deal with the bio directly. The bio seen here appears to be a dummy bio that may have been added only for compatibility. A check is needed to determine whether an IO is request based or bio based and, if it is request based, to handle it accordingly.
We are doing same thing for all other places need to handle it here as well.* 4174239 (Tracking ID: 4171979)SYMPTOM:System panics with following message "kernel BUG at fs/inode.c:1578!"DESCRIPTION:Panic Stack:PID: 68713 TASK: ff1dea3fc85c4000 CPU: 17 COMMAND: "blkid" #0 [ff6a453d122e3958] machine_kexec at ffffffff8a06da63 #1 [ff6a453d122e39b0] __crash_kexec at ffffffff8a1b86ca #2 [ff6a453d122e3a70] crash_kexec at ffffffff8a1b9601 #3 [ff6a453d122e3a88] oops_end at ffffffff8a02be31 #4 [ff6a453d122e3aa8] do_trap at ffffffff8a028047 #5 [ff6a453d122e3af0] do_invalid_op at ffffffff8a028d86 #6 [ff6a453d122e3b10] invalid_op at ffffffff8ac00da4 [exception RIP: iput+436] RIP: ffffffff8a389804 RSP: ff6a453d122e3bc8 RFLAGS: 00010202 RAX: ff1deac36eec3b88 RBX: ff1dea3ed45e4ec0 RCX: 000000000800005d RDX: ff1dea3ed45e4ec0 RSI: ff1dea649674d300 RDI: ff1dea3ed45e4fb8 RBP: ffffffff8ba096c0 R8: 0000000000000000 R9: 0000000000000000 R10: ff6a453d122e3c28 R11: 0000000000000007 R12: ff1deac36eec3a10 R13: ffffffff8a3b32e0 R14: 0000000000000000 R15: 0000000000000000 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018 #7 [ff6a453d122e3be8] bd_acquire at ffffffff8a3b3261 #8 [ff6a453d122e3c08] blkdev_open at ffffffff8a3b332e #9 [ff6a453d122e3c20] do_dentry_open at ffffffff8a364fd3#10 [ff6a453d122e3c50] path_openat at ffffffff8a37acab#11 [ff6a453d122e3d28] do_filp_open at ffffffff8a37d153RESOLUTION:There is a inode reference leaks in our code due to which the inode reference count increases till its reaches its maximum permissible value(i.e. max size of a 32-bit unsigned int, or 4294967295). Once it hits this then it wraps back to 0 which is an invalid value which causes the system panic.Code change to fix inode reference count leaks have been done.Patch ID: VRTSaslapm 8.0.2.1700* 4155091 (Tracking ID: 4118510)SYMPTOM:Volume manager tunable to control log file permissionsDESCRIPTION:With US President Executive Order 14028 compliance changes, all product log file permissions changed to 600. Introduced tunable "log_file_permissions" to control the log file permissions to 600 (default), 640 or 644. The tunable can be changed at install time or any time with reboot.RESOLUTION:Added the log_file_permissions tunable.Patch ID: VRTSvxvm-8.0.2.1600* 4128883 (Tracking ID: 4112687)SYMPTOM:vxdisk resize corrupts disk public region and causes file system mount fail.DESCRIPTION:For single path disk, during two transactions of resize operation, the private region IOs could be incorrectly sent to partition 3 of the GPT disk, which would cause 48 more sectors shift. This may make the private region data written to public region and cause corruption.RESOLUTION:Code changes have been made to fix the problem.* 4137508 (Tracking ID: 4066310)SYMPTOM:New feature for performance improvementDESCRIPTION:Linux subsystem has two types of block driver 1) block multiqueue driver 2) bio based block driver. since day -1 DMP was a bio based driver now added new feature that is block multiqueue for DMP.RESOLUTION:resolved* 4137995 (Tracking ID: 4117350)SYMPTOM:Below error is observed when trying to import # vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdgVxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:Replicated dg record is found.Did you want to import hardware replicated LUNs?Try vxdg [-o usereplicatedev=only] import option with -c[s]Please refer to system log for details.DESCRIPTION:REPLICATED flag is used to identify a hardware replicated device so to import dg on the REPLICATED disks , usereplicatedev option must be used . 
As that option was not provided, the issue was observed.RESOLUTION:The REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.* 4143558 (Tracking ID: 4141890)SYMPTOM:TUTIL0 field may not get cleared sometimes after cluster reboot.DESCRIPTION:The TUTIL field may not get cleared sometimes after cluster reboot due to a cleanup issue for the volume start operation.RESOLUTION:Autofix can clean this up and trigger recovery. A fix is also checked in for this.* 4153566 (Tracking ID: 4090410)SYMPTOM:PID: 19769 TASK: ffff8fd2f619b180 CPU: 31 COMMAND: "vxiod" #0 [ffff8fcef196bbf0] machine_kexec at ffffffffbb2662f4 #1 [ffff8fcef196bc50] __crash_kexec at ffffffffbb322a32 #2 [ffff8fcef196bd20] panic at ffffffffbb9802cc #3 [ffff8fcef196bda0] volrv_seclog_bulk_cleanup_verification at ffffffffc09f099a [vxio] #4 [ffff8fcef196be18] volrv_seclog_write1_done at ffffffffc09f0a41 [vxio] #5 [ffff8fcef196be48] voliod_iohandle at ffffffffc0827688 [vxio] #6 [ffff8fcef196be88] voliod_loop at ffffffffc082787c [vxio] #7 [ffff8fcef196bec8] kthread at ffffffffbb2c5e61DESCRIPTION:This panic on the secondary node is explicitly triggered when unexpected data is detected during the data verification process. This is due to incorrect data sent by the primary for a specific network failure scenario.RESOLUTION:The source has been changed to fix this problem on the primary.* 4153570 (Tracking ID: 4134305)SYMPTOM:Illegal memory access is detected when an admin SIO is trying to lock a volume.DESCRIPTION:While locking a volume, an admin SIO is converted to an incompatible SIO, on which collecting ilock stats causes a memory overrun.RESOLUTION:The code changes have been made to fix the problem.* 4153597 (Tracking ID: 4146424)SYMPTOM:CVM node join operation may hang with vxconfigd on master node stuck in following code path. ioctl () kernel_ioctl () kernel_get_cvminfo_all () send_slaves () master_send_dg_diskids () dg_balance_copies () client_abort_records () client_abort () dg_trans_abort () dg_check_kernel () vold_check_signal () request_loop () main ()DESCRIPTION:During vxconfigd level communication between master and slave nodes, if GAB returns EAGAIN, vxconfigd code does a poll on the GAB fd. In normal circumstances, GAB will return the poll call with an appropriate return value. If however the poll timeout occurs (poll returning 0), it was erroneously treated as success and the caller assumes that the message was sent, when in fact it had failed. This resulted in the hang in the message exchange between the master and slave vxconfigd.RESOLUTION:The fix is to retry the send operation on the GAB fd after some delay if the poll times out in the context of an EAGAIN or ENOMEM error. The fix is applicable to both master and slave side functions.* 4154104 (Tracking ID: 4142772)SYMPTOM:In case SRL overflow happens frequently, the SRL reaches 99% filled but the rlink is unable to get into DCM mode.DESCRIPTION:When starting DCM mode, the error mask NM_ERR_DCM_ACTIVE needs to be checked to prevent duplicated triggers. This flag should have been reset after DCM mode was activated by reconnecting the rlink. As there is a race condition, the rlink reconnect may be completed before DCM is activated, hence the flag cannot be cleared.RESOLUTION:The code changes have been made to fix the issue.* 4154107 (Tracking ID: 3995831)SYMPTOM:System hung: A large number of SIOs got queued in FMR.DESCRIPTION:When the IO load is high, there may not be enough chunks available. In that case, DRL flushsio needs to drive the fwait queue, which may get some available chunks.
Due a race condition and a bug inside DRL, DRL may queue the flushsio and fail to trigger flushsio again, then DRL ends in a permanent hung situation, not able to flush the dirty regions. The queued SIOs fails to be driven further hence system hung.RESOLUTION:Code changes have been made to drive SIOs which got queued in FMR.* 4155719 (Tracking ID: 4154921)SYMPTOM:system is stuck in zio_wait() in FC-IOV environment after reboot the primary control domain when dmp_native_support is on.DESCRIPTION:Due to the different reasons, DMP might disable its subpaths. In a particular scenario, DMP might fail to reset IO QUIESCES flag on its subpaths, which caused IOs got queued in DMP defer queue. In case the upper layer, like zfs, kept waiting for IOs to complete, this bug might cause whole system hang.RESOLUTION:Code changes have been made to reset IO quiesce flag properly after disabled dmp path.* 4158517 (Tracking ID: 4159199)SYMPTOM:coredump was being generated while running the TC "./scripts/admin/vxtune/vxdefault.tc" on AIX 7.3 TL2gettimeofday(??, ??) at 0xd02a7dfcget_exttime(), line 532 in "vm_utils.c"cbr_cmdlog(argc = 2, argv = 0x2ff224e0, a_client_id = 0), line 275 in "cbr_cmdlog.c"main(argc = 2, argv = 0x2ff224e0), line 296 in "vxtune.c"DESCRIPTION:Passing NULL parameter to gettimeofday function was causing coredump creationRESOLUTION:Code changes have been made to pass timeval parameter instead of NULL to gettimeofday function.* 4161646 (Tracking ID: 4149528)SYMPTOM:Vxconfigd and vx commands hang. The vxconfigd stack is seen as follows. volsync_wait volsiowait voldco_read_dco_toc voldco_await_shared_tocflush volcvm_ktrans_fmr_cleanup vol_ktrans_commit volconfig_ioctl volsioctl_real vols_ioctl vols_unlocked_ioctl do_vfs_ioctl ksys_ioctl __x64_sys_ioctl do_syscall_64 entry_SYSCALL_64_after_hwframeDESCRIPTION:There is a hang in CVM reconfig and DCO-TOC protocol. This results in vxconfigd and vxvm commands to hang. In case overlapping reconfigs, it is possible that rebuild seqno on master and slave end up having different values.At this point if some DCO-TOC protocol is also in progress, the protocol gets hung due to difference in the rebuildseqno (messages are dropped).One can find messages similar to following in the /etc/vx/log/logger.txt on master node. We can see the mismatch in the rebuild seqno in the two messages. Look at the strings - "rbld_seq: 1" "fsio-rbld_seqno: 0". The seqno receivedfrom slave is 1 and the one present on master is 0.Jan 16 11:57:56:329170 1705386476329170 38ee FMR dco_toc_req: mv: masterfsvol1-1 rcvd req withold_seq: 0 rbld_seq: 1Jan 16 11:57:56:329171 1705386476329171 38ee FMR dco_toc_req: mv: masterfsvol1-1 pend rbld, retry rbld_seq: 1 fsio-rbld_seqno: 0 old: 0 cur: 3 new: 3 flag: 0xc10d stRESOLUTION:Instead of using rebuild seqno to determine whether the DCO TOC protocol is running the same reconfig, using reconfig seqno as a rebuild seqno. Since the reconfig seqno on all nodes in the cluster is same, the DCO TCOprotocol will find consistent rebuild seqno during CVM reconfig and will not result in some node droppingthe DCO TOC protocol messages.Added CVM protocol version check while using reconfig seqno as rebuild seqno. Thus new functionality will come into effect only if CVM protocol version is >= 300.* 4162053 (Tracking ID: 4132221)SYMPTOM:Supportability requirement for easier path link to dmpdr utilityDESCRIPTION:The current paths of DMPDR utility are so long and hard to remember for the customers. 
So it was requested to create a symbolic link to this utility for easier access.RESOLUTION:Code changes are made to create a symlink to this utility for easier access* 4162055 (Tracking ID: 4116024)SYMPTOM:kernel panicked at gab_ifreemsg with following stack:gab_ifreemsggab_freemsgkmsg_gab_sendvol_kmsg_sendmsgvol_kmsg_senderDESCRIPTION:In a CVR environment there is a RVG of > 600 data volumes, enabling vxvvrstatd daemon through service vxvm-recover. vxvvrstatd calls into ioctl(VOL_RV_APPSTATS) , the latter will generate a kmsg whose length is longer than 64k and trigger a kernel panic due to GAB/LLT no support any message longer than 64k.RESOLUTION:Code changes have been done to add a limitation to the maximum number of data volumes for which that ioctl(VOL_RV_APPSTATS) can request the VVR statistics.* 4162058 (Tracking ID: 4046560)SYMPTOM:vxconfigd aborts on Solaris if device's hardware path is more than 128 characters.DESCRIPTION:When vxconfigd started, it claims the devices exist on the node and updates VxVM devicedatabase. During this process, devices which are excluded from vxvm gets excluded from VxVM device database.To check if device to be excluded, we consider device's hardware full path. If hardware path length ismore than 128 characters, vxconfigd gets aborted. This issue occurred as code is unable to handle hardwarepath string beyond 128 characters.RESOLUTION:Required code changes has been done to handle long hardware path string.* 4162665 (Tracking ID: 4162664)SYMPTOM:VxVM fails to install on Rocky linux 8 and 9DESCRIPTION:VxVM fails to install on Rocky linux and throws below error :This release of VxVM is for Red Hat Enterprise Linux 8and CentOS Linux 8.Please install the appropriate OSand then restart this installation of VxVM.error: %prein(VRTSvxvm-9.0.0.0000-0802_RHEL8.x86_64) scriptlet failed, exit status 1error: VRTSvxvm-9.0.0.0000-0802_RHEL8.x86_64: install failedRESOLUTION:Required code changes have been done to make the package compatible with RL8/9.* 4162917 (Tracking ID: 4139166)SYMPTOM:Enable VVR Bunker feature for shared diskgroups.DESCRIPTION:VVR Bunker feature was not supported for shared diskgroup configurations.RESOLUTION:Enable VVR Bunker feature for shared diskgroups.* 4162966 (Tracking ID: 4146885)SYMPTOM:Restarting syncrvg after termination will start sync from startDESCRIPTION:vradmin syncrvg would terminate after 2 minutes of inactivity like network error. If run again, it would restart from scratchRESOLUTION:Continue vradmin syncrvg operation from where it was terminated* 4164114 (Tracking ID: 4162873)SYMPTOM:disk reclaim is slow.DESCRIPTION:Disk reclaim length should be decided by storage's max reclaim length. But Volume Manager split the reclaim request into smaller segments than the maximum reclaim length, which led to a performance regression.RESOLUTION:Code change has been made to avoid splitting the reclaim request in volume manager level.* 4164250 (Tracking ID: 4154121)SYMPTOM:When the replicated disks are in SPLIT mode, importing its disk group on target node failed with "Device is a hardware mirror".DESCRIPTION:When the replicated disks are in SPLIT mode, which are readable and writable, importing its disk group on target node failed with "Device is a hardware mirror". Third party doesn't expose disk attribute to show when it's in SPLIT mode. 
With this new enhancement, the replicated disk group can be imported when enable use_hw_replicatedev.RESOLUTION:The code is enhanced to import the replicated disk group on target node when enable use_hw_replicatedev.* 4164252 (Tracking ID: 4159403)SYMPTOM:When the replicated disks are in SPLIT mode and use_hw_replicatedev is on, disks are marked as cloned disks after the hardware replicated disk group gets imported.DESCRIPTION:add clearclone option automatically when import the hardware replicated disk group to clear the cloned flag on disks.RESOLUTION:The code is enhanced to import the replicated disk group with clearclone option.* 4164254 (Tracking ID: 4160883)SYMPTOM:clone_flag was set on srdf-r1 disks after reboot.DESCRIPTION:Clean clone got reset in case of AUTOIMPORT, which misled the clone_flag got set on the disk in the end.RESOLUTION:Code change has been made to correct the behavior of setting clone_flag on a disk.* 4165431 (Tracking ID: 4160809)SYMPTOM:vxconfigd hang during VxVM transaction causing cluster hang situationDESCRIPTION:VxVM Volume with Data Change Object (DCO) configured with volume pre-allocates memory to perform bitmap read/write operations. This memory is pre-allocated during volume create/start times using KMEM cache (kmem_cache_alloc() call) . If system is under memory pressure, this memory allocation with KMEM cache gets stuck for long time waiting for memory to be grabbed. This leads to VxVM transaction hang like situation and eventually leads to IO slowness or clusterwide status for long time causing application IO timeouts.RESOLUTION:Changes done in FMR memory buffer allocation logic to use _get_free_pages() and vmalloc() based allocation instead of going through kmem_cache_alloc() calls to avoid hang situations. The code has been added to ensure allocation code quickly falls back to vmalloc() if _get_free_pages() is unable to allocate memory and thus avoiding hang like situation.* 4165889 (Tracking ID: 4165158)SYMPTOM:Plexes of layered volumes in VVR environment, remain in STALE state even after manual or vxattachd driven vxrecover operation.DESCRIPTION:The issue is that the stale TUTILs are not getting detected and cleared by vxattachd under some specific conditions.- The specific conditions are: 1. volume under rvg (VVR) + 2. volume should be layered.- This is because, it relies on "vxprint -a" o/p.- Looks like vxprint -a does not capture the layered volume in its o/p. - When vxprint -a is given the name of the layered volume (vxprint -a vol-L01), then it prints the layered volume correctlyRESOLUTION:- Found another option -h, which when used with -a, does show layered volume, without giving the object name.- After modifying the vxattachd to use "-ah" instead of "-a", it was able to recover the volumes.- Also extending the logic to clear stale tutils to private disk groups.* 4166559 (Tracking ID: 4168846)SYMPTOM:Support VxVM on RHEL9.4DESCRIPTION:VxVM encountered breakages with RHEL9.4.RESOLUTION:Changes have been done to support VxVM on RHEL9.4* 4166881 (Tracking ID: 4164734)SYMPTOM:Support for TLS1.1 is not disabled.DESCRIPTION:In VxVM product we have disabled support for TLS 1.0, SSLv2 and SSLv3 already. 
Support TLS1.1 is not disabled.TLSv1.1 has security vulnerabilitiesRESOLUTION:Make required code change to disable support for TLS1.1.Patch ID: VRTSaslapm 8.0.2.1600* 4169012 (Tracking ID: 4169016)SYMPTOM:Support ASL-APM on RHEL9.4DESCRIPTION:ASL-APM compiled RHEL9.4.RESOLUTION:Changes have been done to support ASL-APM on RHEL9.4Patch ID: VRTSvxvm-8.0.2.1400* 4124889 (Tracking ID: 4090828)SYMPTOM:Dumped fmrmap data for better debuggability for corruption issuesDESCRIPTION:vxplex att/vxvol recover cli will internally fetch fmrmaps from kernel using existing ioctl before starting attach operation and get data in binary format and dump to file and store it with specific format like volname_taskid_date.RESOLUTION:Changes done now dumps the fmrmap data into a binary file.* 4129765 (Tracking ID: 4111978)SYMPTOM:Replication failed to start due to vxnetd threads not running on secondary site.DESCRIPTION:Vxnetd was waiting to start "nmcomudpsrv" and "nmcomlistenserver" threads. Due to a race condition of some resource between those two thread, vxnetd was stuck in a dead loop till max retry reached.RESOLUTION:Code changes have been made to add lock protection to avoid the race condition.* 4130858 (Tracking ID: 4128351)SYMPTOM:System hung observed when switching log owner.DESCRIPTION:VVR mdship SIOs might be throttled due to reaching max allocation count,etc. These SIOs are holding io count. When log owner change kicked in and quiesced RVG. VVR log owner change SIO is waiting for iocount to drop to zero to proceed further. VVR mdship requests from the log client are returned with EAGAIN as RVG quiesced. The throttled mdship SIOs need to be driven by the upcoming mdship requests, hence the deadlock, which caused system hung.RESOLUTION:Code changes have been made to flush the mdship queue before VVR log owner change SIO waiting for IO drain.* 4130861 (Tracking ID: 4122061)SYMPTOM:Observing hung after resync operation, vxconfigd was waiting for slaves' response.DESCRIPTION:VVR logowner was in a transaction and returned VOLKMSG_EAGAIN to CVM_MSG_GET_METADATA which is expected. Once the client received VOLKMSG_EAGAIN, it would sleep 10 jiffies and retry the kmsg . In a busy cluster, it might happen the retried kmsgs plus the new kmsgs got built up and hit the kmsg flowcontrol before the vvr logowner transaction completed. Once the client refused any kmsgs due to the flowcontrol. 
The transaction on vvr logowner might get stuck because it required kmsg response from all the slave node.RESOLUTION:Code changes have been made to increase the kmsg flowcontrol and don't let kmsg receiver fall asleep but handle the kmsg in a restart function.* 4132775 (Tracking ID: 4132774)SYMPTOM:Existing VxVM package fails to load on SLES15SP5DESCRIPTION:There are multiple changes done in this kernel related to handling of SCSI passthrough requests ,initialization of bio routines , ways of obtaining blk requests .Hence existing code is not compatible with SLES15SP5.RESOLUTION:Required changes have been done to make VxVM compatible with SLES15SP5.* 4133930 (Tracking ID: 4100646)SYMPTOM:Recoveries of dcl objects not happening due to ATT, RELOCATE flags are set on DCL subdisksDESCRIPTION:Due to multiple reason stale tutil may remain stamped on dcl subdisks which may cause next vxrecover instancesnot able to recover dcl plex.RESOLUTION:Issue is resolved by vxattachd daemon intelligently detecting these stale tutils and clearing+triggering recoveries after 10 min interval.* 4133946 (Tracking ID: 3972344)SYMPTOM:After reboot of a node on a setup where multiple diskgroups / Volumes within diskgroups are present, sometimes in /var/log/messages an error 'vxrecover ERROR V-5-1-11150 Volume <volume_name> does not exist' is logged.DESCRIPTION:In volume_startable function (volrecover.c), dgsetup is called to set the current default diskgroup. This does not update the current_group variable leading to inappropriate mappings. Volumes are searched in an incorrect diskgroup which is logged in the error message.The vxrecover command works fine if the diskgroup name associated with volume is specified. [vxrecover -g <dg_name> -s]RESOLUTION:Changed the code to use switch_diskgroup() instead of dgsetup. Current_group is updated and the current_dg is set. Thus vxrecover finds the Volume correctly.* 4135127 (Tracking ID: 4134023)SYMPTOM:vxconfigrestore(Diskgroup configuration restoration) for H/W Replicated diskgroup failed with below error:# vxconfigrestore -p LINUXSRDFVxVM vxconfigrestore INFO V-5-2-6198 Diskgroup LINUXSRDF configuration restoration started ......VxVM vxdg ERROR V-5-1-0 Disk group LINUXSRDF: import failed:Replicated dg record is found.Did you want to import hardware replicated LUNs?Try vxdg [-o usereplicatedev=only] import option with -c[s]Please refer to system log for details.... ...VxVM vxconfigrestore ERROR V-5-2-3706 Diskgroup configuration restoration for LINUXSRDF failed.DESCRIPTION:H/W Replicated diskgroup can be imported only with option "-o usereplicatedev=only". 
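For reference, such a disk group can also be imported manually by passing the option explicitly (illustrative only; LINUXSRDF is the disk group name from the output above, and further options such as -c may be needed, as the error text above suggests):
# vxdg -o usereplicatedev=only import LINUXSRDF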
vxconfigrestore didn't do H/W Replicated diskgroup check, without giving the proper import option diskgroup import failed.RESOLUTION:The code changes have been made to do H/W Replicated diskgroup check in vxconfigrestore .* 4135388 (Tracking ID: 4131202)SYMPTOM:In VVR environment, 'vradmin changeip' would fail with following error message:VxVM VVR vradmin ERROR V-5-52-479 Host <host> not reachable.DESCRIPTION:Existing heartbeat to new secondary host is assumed, whereas it starts after the changeip operation.RESOLUTION:Heartbeat assumption is fixed.* 4136419 (Tracking ID: 4089696)SYMPTOM:In FSS environment, with DCO log attached to VVR SRL volume, the reboot of the cluster may result into panic on the CVM master node as follows: voldco_get_mapidvoldco_get_detach_mapidvoldco_get_detmap_offsetvoldco_recover_detach_mapvolmv_recover_dcovolvol_mv_fmr_precommitvol_mv_precommitvol_ktrans_precommit_parallelvolobj_ktrans_sio_startvoliod_iohandlevoliod_loopDESCRIPTION:If DCO is configured with SRL volume, and both SRL volume plexes and DCO plexes get I/O error, this panic occurs in the recovery path.RESOLUTION:Recovery path is fixed to manage this condition.* 4136428 (Tracking ID: 4131449)SYMPTOM:In CVR environments, there was a restriction to configure up to four RVGs per diskgroup as more RVGs resulted in degradation of I/O performance in case of VxVM transactions.DESCRIPTION:In CVR environments, VxVM transactions on an RVG also impacted I/O operations on other RVGs in the same diskgroup resulting in I/O performance degradation in case of higher number of RVGs configured in a diskgroup.RESOLUTION:VxVM transaction impact has been isolated to each RVG resulting in the ability to scale beyond four RVGs in a diskgroup.* 4136429 (Tracking ID: 4077944)SYMPTOM:In VVR environment, when I/O throttling gets activated and deactivated by VVR, it may result in an application I/O hang.DESCRIPTION:In case VVR throttles and unthrottles I/O, the diving of throttled I/O is not done in one of the cases.RESOLUTION:Resolved the issue by making sure the application throttled I/Os get driven in all the cases.* 4136802 (Tracking ID: 4136751)SYMPTOM:Selinux denies access to such files where support_t permissions are requiredDESCRIPTION:Selinux denies access to such files where support_t permissions are required to fix such denials added this fixRESOLUTION:code changes have been done for this issue, hence resolved* 4136859 (Tracking ID: 4117568)SYMPTOM:Vradmind dumps core with the following stack:#1 std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string (this=0x7ffdc380d810, __str=<error reading variable: Cannot access memory at address 0x3736656436303563>)#2 0x000000000040e02b in ClientMgr::closeStatsSession#3 0x000000000040d0d7 in ClientMgr::client_ipm_close#4 0x000000000058328e in IpmHandle::~IpmHandle#5 0x000000000057c509 in IpmHandle::events#6 0x0000000000409f5d in mainDESCRIPTION:After terminating vrstat, the StatSession in vradmind was closed and the corresponding Client object was deleted. When closing the IPM object of vrstat, try to access the removed Client, hence the core dump.RESOLUTION:Core changes have been made to fix the issue.* 4136866 (Tracking ID: 4090476)SYMPTOM:Storage Replicator Log (SRL) is not draining to secondary. 
rlink status shows the outstanding writes never got reduced in several hours.VxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRLVxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRLVxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRLVxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRLVxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRLVxVM VVR vxrlink INFO V-5-1-4640 Rlink xxx has 239346 outstanding writes, occupying 2210892 Kbytes (0%) on the SRLDESCRIPTION:In poor network environment, VVR seems not syncing. Another reconfigure happened before VVR state became clean, VVR atomic window got set to a large size. VVR couldnt complete all the atomic updates before the next reconfigure. VVR ended kept sending atomic updates from VVR pending position. Hence VVR appears to be stuck.RESOLUTION:Code changes have been made to update VVR pending position accordingly.* 4136868 (Tracking ID: 4120068)SYMPTOM:A standard disk was added to a cloned diskgroup successfully which is not expected.DESCRIPTION:When add a disk to a disk group, a pre-check will be made to avoid ending up with a mixed diskgroup. In a cluster, the local node might fail to use the latest record to do the pre-check, which caused a mixed diskgroup in the cluster, further caused node join failure.RESOLUTION:Code changes have been made to use latest record to do a mixed diskgroup pre-check.* 4136870 (Tracking ID: 4117957)SYMPTOM:During a phased reboot of a two node Veritas Access cluster, mounts would hang. Transaction aborted waiting for io drain.VxVM vxio V-5-3-1576 commit: Timedout waiting for Cache XXXX to quiesce, iocount XX msg 0DESCRIPTION:Transaction on Cache object getting failed since there are IOs waiting on the cache object. Those queued IOs couldn't be proceeded due to the missing flag VOLOBJ_CACHE_RECOVERED on the cache object. A transact might kicked in when the old cache was doingrecovery, therefore the new cache object might fail to inherit VOLOBJ_CACHE_RECOVERED, further caused IO hung.RESOLUTION:Code changes have been to fail the new cache creation if the old cache is doing recovery.* 4137174 (Tracking ID: 4081740)SYMPTOM:vxdg flush command slow due to too many luns needlessly access /proc/partitions.DESCRIPTION:Linux BLOCK_EXT_MAJOR(block major 259) is used as extended devt for block devices. When partition number of one device is more than 15, the partition device gets assigned under major 259 to solve the sd limitations (16 minors per device), by which more partitions are allowed for one sd device. During "vxdg flush", for each lun in the disk group, vxconfigd reads file /proc/partitions line by line through fgets() to find all the partition devices with major number 259, which would cause vxconfigd to respond sluggishly if there are large amount of luns in the disk group.RESOLUTION:Code has been changed to remove the needless access on /proc/partitions for the luns without using extended devt.* 4137175 (Tracking ID: 4124223)SYMPTOM:Core dump is generated for vxconfigd in TC execution.DESCRIPTION:TC creates a scenario where 0s are written in first block of disk. In such case, Null check is necessary in code before some variable is accessed. 
This Null check is missing which causes vxconfigd core dump in TC execution.RESOLUTION:Necessary Null checks is added in code to avoid vxconfigd core dump.* 4137508 (Tracking ID: 4066310)SYMPTOM:New feature for performance improvementDESCRIPTION:Linux subsystem has two types of block driver 1) block multiqueue driver 2) bio based block driver. since day -1 DMP was a bio based driver now added new feature that is block multiqueue for DMP.RESOLUTION:resolved* 4137615 (Tracking ID: 4087628)SYMPTOM:When DCM is in replication mode with volumes mounted having large regions for DCM to sync and if slave node reboot is triggered, this might cause CVM to go into faulted state .DESCRIPTION:During Resiliency tests, performed sequence of operations as following. 1. On an AWS FSS-CVR setup, replication is started across the sites for 2 RVGs.2. The low owner service groups for both the RVGs are online on a Slave node. 3. Rebooted another Slave node where logowner is not online. 4. After Slave node come back from reboot, it is unable to join CVM Cluster. 5. Also vx commands are also hung/stuck on the CVM Master and Logowner slave node.RESOLUTION:In RU SIO before requesting vxfs_free_region(), drop IO count and hold it again after. Because the transaction has been locked (vol_ktrans_locked = 1) right before calling vxfs_free_region(), we don't need the iocount to hold rvg from being removed.* 4137630 (Tracking ID: 4139701)SYMPTOM:Existing VxVM package fails to load on RHEL 9.3DESCRIPTION:There are multiple changes done in this kernel related to handling of kobj ,bio_set_op_attrs.Hence existing code is not compatible with RHEL 9.3.RESOLUTION:Required changes have been done to make VxVM compatible with RHEL 9.3.* 4137753 (Tracking ID: 4128271)SYMPTOM:In CVR environment, a node is not able to join the CVM cluster if RVG recovery is taking place.DESCRIPTION:If there has been an SRL overflow, then RVG recovery takes more time as it was loaded with more work than required because the recovery related metadata was not updated.RESOLUTION:Updated the metadata correctly to reduce the RVG recovery time.* 4137757 (Tracking ID: 4136458)SYMPTOM:In CVR environment, if CVM slave node is acting as logowner, the DCM resync issues after snapshot restore may hang showing 0% sync is remaining.DESCRIPTION:The DCM resync completion is not correctly communicated to CVM master resulting into hang.RESOLUTION:The DCM resync operation is enhanced to correctly communicate resync completion to CVM master.* 4137986 (Tracking ID: 4133793)SYMPTOM:DCO experience IO Errors while doing a vxsnap restore on vxvm volumes.DESCRIPTION:Dirty flag was getting set in context of an SIO with flag VOLSIO_AUXFLAG_NO_FWKLOG being set. This led to transaction errors while doing a vxsnap restore command in loop for vxvm volumes causing transaction abort. As a result, VxVM tries to cleanup by removing newly added BMs. Now, VxVM tries to access the deleted BMs. however it is not able to since they were deleted previously. 
This ultimately leads to DCO IO errors.RESOLUTION:First-write klogging is now skipped in the context of an IO that has the VOLSIO_AUXFLAG_NO_FWKLOG flag set.* 4138051 (Tracking ID: 4090943)SYMPTOM:On the Primary, the RLink is continuously getting connected/disconnected, with the following message seen in the secondary syslog: VxVM VVR vxio V-5-3-0 Disconnecting replica <rlink_name> since log is full on secondary.DESCRIPTION:When the RVG logowner node panics, RVG recovery happens in 3 phases. At the end of the 2nd phase of recovery the in-memory and on-disk SRL positions remain incorrect, and if a logowner change happens during this time the Rlink will not get connected.RESOLUTION:The in-memory and on-disk SRL positions are now handled correctly.* 4138069 (Tracking ID: 4139703)SYMPTOM:The system panics in a RHEL 9.2 AWS environment while registering the PGR key.DESCRIPTION:On RHEL 9.2 (kernel 5.14.0-284.11.1.el9_2.x86_64), a panic is observed while reading PGR keys on an AWS VM with NVMe devices, for example when running "/etc/vx/diag.d/vxdmppr read /dev/vx/dmp/ip-10-20-2-49_nvme4_0". Failure signature: PID: 8250 TASK: ffffa0e882ca1c80 CPU: 1 COMMAND: "vxdmppr" #0 [ffffbf3c4039f8e0] machine_kexec at ffffffffb626c237 #1 [ffffbf3c4039f938] __crash_kexec at ffffffffb63c3c9a #2 [ffffbf3c4039f9f8] crash_kexec at ffffffffb63c4e58 #3 [ffffbf3c4039fa00] oops_end at ffffffffb62291db #4 [ffffbf3c4039fa20] do_trap at ffffffffb622596e #5 [ffffbf3c4039fa70] do_error_trap at ffffffffb6225a25 #6 [ffffbf3c4039fab0] exc_invalid_op at ffffffffb6d256be #7 [ffffbf3c4039fad0] asm_exc_invalid_op at ffffffffb6e00af6 [exception RIP: kfree+1074] RIP: ffffffffb6578e32 RSP: ffffbf3c4039fb88 RFLAGS: 00010246 RAX: ffffa0e7984e9c00 RBX: ffffa0e7984e9c00 RCX: ffffa0e7984e9c60 RDX: 000000001bc22001 RSI: ffffffffb6729dfd RDI: ffffa0e7984e9c00 RBP: ffffa0e880042800 R8: ffffa0e8b572b678 R9: ffffa0e8b572b678 R10: 0000000000005aca R11: 00000000000000e0 R12: fffff20e00613a40 R13: fffff20e00613a40 R14: ffffffffb6729dfd R15: 0000000000000000 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018 #8 [ffffbf3c4039fbc0] blk_update_request at ffffffffb6729dfd #9 [ffffbf3c4039fc18] blk_mq_end_request at ffffffffb672a11a #10 [ffffbf3c4039fc30] dmp_kernel_nvme_ioctl at ffffffffc09f2647 [vxdmp] #11 [ffffbf3c4039fd00] dmp_dev_ioctl at ffffffffc09a3b93 [vxdmp] #12 [ffffbf3c4039fd10] dmp_send_nvme_passthru_cmd_over_node at ffffffffc09f1497 [vxdmp] #13 [ffffbf3c4039fd60] dmp_pr_do_nvme_read.constprop.0 at ffffffffc09b78e1 [vxdmp] #14 [ffffbf3c4039fe00] dmp_pr_read at ffffffffc09e40be [vxdmp] #15 [ffffbf3c4039fe78] dmpioctl at ffffffffc09b09c3 [vxdmp] #16 [ffffbf3c4039fe88] dmp_ioctl at ffffffffc09d7a1c [vxdmp] #17 [ffffbf3c4039fea0] blkdev_ioctl at ffffffffb6732b81 #18 [ffffbf3c4039fef0] __x64_sys_ioctl at ffffffffb65df1ba #19 [ffffbf3c4039ff20] do_syscall_64 at ffffffffb6d2515c #20 [ffffbf3c4039ff50] entry_SYSCALL_64_after_hwframe at ffffffffb6e0009b RIP: 00007fef03c3ec6b RSP: 00007ffd1acad8a8 RFLAGS: 00000202 RAX: ffffffffffffffda RBX: 00000000444d5061 RCX: 00007fef03c3ec6b RDX: 00007ffd1acad990 RSI: 00000000444d5061 RDI: 0000000000000003 RBP: 0000000000000003 R8: 0000000001cbba20 R9: 0000000000000000 R10: 00007fef03c11d78 R11: 0000000000000202 R12: 00007ffd1acad990 R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000002 ORIG_RAX: 0000000000000010 CS: 0033 SS: 002bRESOLUTION:The panic seen while reading PGR keys on NVMe devices has been resolved.
* 4138075 (Tracking ID: 4129873)SYMPTOM:In a CVR environment, the application I/O may hang if the CVM slave node is acting as RVG logowner and a data volume grow operation is triggered, followed by a logclient node leaving the cluster.DESCRIPTION:When the logowner is not the CVM master and a data volume grow operation is taking place, the CVM master controls the region locking for IO operations. If a logclient node leaves the cluster, the I/O operations initiated by it are not cleaned up correctly due to a lack of coordination between the CVM master and the RVG logowner node.RESOLUTION:Coordination between the CVM master and the RVG logowner node is fixed to manage the I/O cleanup correctly.* 4138101 (Tracking ID: 4114867)SYMPTOM:The following error messages appear while adding new disks:[root@server101 ~]# cat /etc/udev/rules.d/41-VxVM-selinux.rules | tail -1KERNEL=="VxVM*", SUBSYSTEM=="block", ACTION=="add", RUN+="/bin/sh -c 'if [ `/usr/sbin/getenforce` != "Disabled" -a `/usr/sbin/[root@server101 ~]#[root@server101 ~]# systemctl restart systemd-udevd.service[root@server101 ~]# udevadm test /block/sdb 2>&1 | grep "invalid"invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 104 ('D')DESCRIPTION:In /etc/udev/rules.d/41-VxVM-selinux.rules, the nested double quotation marks around "Disabled" and "disable" are the cause of the issue.RESOLUTION:Code changes have been made to correct the problem.
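The following is a minimal illustrative sketch of the kind of quoting correction described in the entry above; it is not the exact rule shipped with the patch, and the trailing shell actions are placeholders. The point is that a literal double quote inside a RUN+="..." value terminates the value early (the udevadm test output above flags the 'D' of "Disabled"), so the inner comparison must not use nested double quotes:
# /etc/udev/rules.d/41-VxVM-selinux.rules (illustrative only)
# Broken: the inner "Disabled" ends the RUN value prematurely
KERNEL=="VxVM*", SUBSYSTEM=="block", ACTION=="add", RUN+="/bin/sh -c 'if [ `/usr/sbin/getenforce` != "Disabled" ]; then <SELinux actions>; fi'"
# Fixed: getenforce output contains no spaces, so the shell test works without the inner quotes
KERNEL=="VxVM*", SUBSYSTEM=="block", ACTION=="add", RUN+="/bin/sh -c 'if [ `/usr/sbin/getenforce` != Disabled ]; then <SELinux actions>; fi'"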
* 4138107 (Tracking ID: 4065490)SYMPTOM:systemd-udev threads consume more CPU during system bootup or device discovery.DESCRIPTION:During disk discovery, when new storage devices are discovered, VxVM udev rules are invoked to create the hardware path symbolic link and to set the SELinux security context on Veritas device files. To create the hardware path symbolic link for each storage device, the "find" command is used internally, which is a CPU-intensive operation. If too many storage devices are attached to the system, the usage of the "find" command causes high CPU consumption. Also, to set the appropriate SELinux security context on VxVM device files, restorecon is run irrespective of whether SELinux is enabled or disabled.RESOLUTION:Usage of the "find" command is replaced with the "udevadm" command. The SELinux security context on VxVM device files is now set only when SELinux is enabled on the system.* 4138224 (Tracking ID: 4129489)SYMPTOM:With VxVM installed in an AWS cloud environment, disk devices may intermittently disappear from 'vxdisk list' output.DESCRIPTION:There was an issue with disk discovery at the OS and DDL layers.RESOLUTION:The integration issue with disk discovery was resolved.* 4138236 (Tracking ID: 4134069)SYMPTOM:VVR replication was not using the VxFS SmartMove feature if the filesystem was not mounted on the RVG Logowner node.DESCRIPTION:Initial synchronization and DCM replay of VVR required the filesystem to be mounted locally on the logowner node, as VVR did not have the capability to fetch the required information from a remotely mounted filesystem mount point.RESOLUTION:VVR is updated to fetch the required SmartMove-related information from a remotely mounted filesystem mount point.* 4138237 (Tracking ID: 4113240)SYMPTOM:In a CVR environment with hostname binding configured, the Rlink on the VVR secondary may have an incorrect VVR primary IP.DESCRIPTION:The VVR Secondary Rlink picks up a wrong IP randomly, since the replication is configured using a virtual host which maps to multiple IPs.RESOLUTION:The VVR Primary IP is corrected on the VVR Secondary Rlink.* 4138251 (Tracking ID: 4132799)SYMPTOM:If GLM is not loaded, starting CVM fails with the following errors:# vxclustadm -m gab startnodeVxVM vxclustadm INFO V-5-2-9687 vxclustadm: Fencing driver is in disabled mode - VxVM vxclustadm ERROR V-5-1-9743 errno 3DESCRIPTION:Only the error number, and not the error message, is printed when joining CVM fails.RESOLUTION:The code changes have been made to fix the issue.* 4138348 (Tracking ID: 4121564)SYMPTOM:A memory leak for volcred_t could be observed in vxio.DESCRIPTION:A memory leak could occur if some private region IOs hang on a disk and there are duplicate entries for the disk in vxio.RESOLUTION:Code has been changed to avoid the memory leak.* 4138537 (Tracking ID: 4098144)SYMPTOM:vxtask list shows the parent process without any sub-tasks, and it never progresses for the SRL volume.DESCRIPTION:vxtask remains stuck because the parent process does not exit. It was seen that all children have completed, but the parent is not able to exit ((gdb) p active_jobs shows $1 = 1). Active jobs are reduced as and when children complete; somehow one count remains pending, and it is not known which child exited without decrementing the count. Instrumentation messages have been added to capture the issue.RESOLUTION:Code has been added that creates a log file in /etc/vx/log/. This file is deleted when vxrecover exits successfully, and it will still be present when the vxtask parent hang issue is seen.* 4138538 (Tracking ID: 4085404)SYMPTOM:Huge performance drop after Veritas Volume Replicator (VVR) entered Data Change Map (DCM) mode, when a large Storage Replicator Log (SRL) is configured.DESCRIPTION:The active map flush caused RVG serialization. Once the RVG gets serialized, all IOs are queued in the restart queue until the active map flush is finished. The overly frequent active map flush caused the huge IO drop while flushing the SRL to the DCM.RESOLUTION:The code is modified to adjust the frequency of the active map flush and balance the application IO and the SRL flush.* 4140598 (Tracking ID: 4141590)SYMPTOM:Some incidents do not appear in the changelog because their cross-references are not properly processed.DESCRIPTION:Not every cross-reference is parent-child.
In such case 'top' will not be present and changelog script ends execution.RESOLUTION:All cross-references are traversed to find parent-child only if it present and then find top.* 4143580 (Tracking ID: 4142054)SYMPTOM:System panicked in the following stack:[ 9543.195915] Call Trace:[ 9543.195938] dump_stack+0x41/0x60[ 9543.195954] panic+0xe7/0x2ac[ 9543.195974] vol_rv_inactive+0x59/0x790 [vxio][ 9543.196578] vol_rvdcm_flush_done+0x159/0x300 [vxio][ 9543.196955] voliod_iohandle+0x294/0xa40 [vxio][ 9543.197327] ? volted_getpinfo+0x15/0xe0 [vxio][ 9543.197694] voliod_loop+0x4b6/0x950 [vxio][ 9543.198003] ? voliod_kiohandle+0x70/0x70 [vxio][ 9543.198364] kthread+0x10a/0x120[ 9543.198385] ? set_kthread_struct+0x40/0x40[ 9543.198389] ret_from_fork+0x1f/0x40DESCRIPTION:- From the SIO stack, we can see that it is a case of done being called twice. - Looking at vol_rvdcm_flush_start(), we can see that when child sio is created, it is being directly added to the the global SIO queue. - This can cause child SIO to start while vol_rvdcm_flush_start() is still in process of generating other child SIOs. - It means that, say the first child SIO gets done, it can find the children count going to zero and calls done.- The next child SIO, also independently find children count to be zero and call done.RESOLUTION:The code changes have been done to fix the problem.* 4143857 (Tracking ID: 4130393)SYMPTOM:vxencryptd crashed repeatedly due to segfault.DESCRIPTION:Linux could pass large IOs with 2MB size to VxVM layer, however vxencryptd only expects IOs with maximum IO size 1MB from kernel and only pre-allocates 1MB buffer size for encryption/decryption. This would cause vxencryptd to crash when processing large IOs.RESOLUTION:Code changes have been made to allocate enough buffer.* 4145064 (Tracking ID: 4145063)SYMPTOM:vxio Module fails to load post VxVM package installation.DESCRIPTION:Following message is seen in dmesg:[root@dl360g10-115-v23 ~]# dmesg | grep symbol[ 2410.561682] vxio: no symbol version for storageapi_associate_blkgRESOLUTION:Because of incorrectly nested IF blocks in the "src/linux/kernel/vxvm/Makefile.target", the code for the RHEL 9 block was not getting executed, because of which certain symbols were not present in the vxio.mod.c file. This in turn caused the above mentioned symbol warning to be seen in dmesg.Fixed the improper nesting of the IF conditions.* 4146550 (Tracking ID: 4108235)SYMPTOM:System wide hang causing all application and config IOs hangDESCRIPTION:Memory pools are used in vxio driver for managing kernel memory for different purposes. One of the pool called 'NMCOM pool' used on VVR secondary was causing memory leak. Memory leak was not getting detected from pool stats as metadata referring to pool itself was getting freed.RESOLUTION:Bug causing memory leak is fixed. 
There was a race condition in the VxVM transaction code path on the secondary side of VVR where memory was not getting freed when certain conditions were hit.* 4149499 (Tracking ID: 4149498)SYMPTOM:While upgrading the VxVM package, a number of warnings are seen regarding .ko files not being found for various modules.DESCRIPTION:These warnings are seen because all the unwanted .ko files have been removed.RESOLUTION:Code changes have been made so that these warnings are no longer generated.* 4150099 (Tracking ID: 4150098)SYMPTOM:After a few VxVM operations, if a reboot is taken, the file system goes into read-only mode and vxconfigd does not come up.DESCRIPTION:The SELinux context of /etc/fstab was getting updated, which caused the issue.RESOLUTION:Fixed the SELinux context of /etc/fstab.* 4150459 (Tracking ID: 4150160)SYMPTOM:The system panics in the DMP code path.DESCRIPTION:The CMDS-fsmigadm test hits "Oops: 0003 [#1] PREEMPT SMP PTI" with VRTSvxvm 8.0.3.0000-0716_RHEL9 installed.RESOLUTION:The buggy code has been removed and the issue is fixed.Patch ID: VRTSaslapm 8.0.2.1400* 4137995 (Tracking ID: 4117350)SYMPTOM:The following error is observed when trying to import: # vxdg -n SVOL_SIdg -o useclonedev=on -o updateid import SIdgVxVM vxdg ERROR V-5-1-0 Disk group SIdg: import failed:Replicated dg record is found.Did you want to import hardware replicated LUNs?Try vxdg [-o usereplicatedev=only] import option with -c[s]Please refer to system log for details.DESCRIPTION:The REPLICATED flag is used to identify a hardware-replicated device, so to import a dg on REPLICATED disks the usereplicatedev option must be used. As that option was not provided, the issue was observed.RESOLUTION:The REPLICATED flag has been removed for Hitachi ShadowImage (SI) disks.* 4153119 (Tracking ID: 4153120)SYMPTOM:Support for ASLAPM on RHEL9.3DESCRIPTION:RHEL9.3 is a new release and hence the APM module should be recompiled with the new kernel.RESOLUTION:Compiled APM with the new kernel.Patch ID: VRTSvxvm-8.0.2.1200* 4119267 (Tracking ID: 4113582)SYMPTOM:In VVR environments, a reboot of VVR primary nodes results in the RVG going into passthru mode.DESCRIPTION:A reboot of primary nodes resulted in missing write completions of updates on the primary SRL volume. After the node came up, the last update received by the VVR secondary was incorrectly compared with the missing updates.RESOLUTION:Fixed the check to correctly compare the last update received by the VVR secondary.* 4123065 (Tracking ID: 4113138)SYMPTOM:In CVR environments configured with virtual hostname, after node reboots on VVR Primary and Secondary, 'vradmin repstatus' invoked on the secondary site shows stale information with the following warning message:VxVM VVR vradmin INFO V-5-52-1205 Primary is unreachable or RDS has configuration error.
Displayed status information is from Secondary and can be out-of-date.DESCRIPTION:This issue occurs when there is a explicit RVG logowner set on the CVM master due to which the old connection of vradmind with its remote peer disconnects and new connection is not formed.RESOLUTION:Fixed the issue with the vradmind connection with its remote peer.* 4123069 (Tracking ID: 4116609)SYMPTOM:In CVR environments where replication is configured using virtual hostnames, vradmind on VVR primary loses connection with its remote peer after a planned RVG logowner change on the VVR secondary site.DESCRIPTION:vradmind on VVR primary was unable to detect a RVG logowner change on the VVR secondary site.RESOLUTION:Enabled primary vradmind to detect RVG logowner change on the VVR secondary site.* 4123080 (Tracking ID: 4111789)SYMPTOM:In VVR/CVR environments, VVR would use any IP/NIC/network to replicate the data and may not utilize the high performance NIC/network configured for VVR.DESCRIPTION:The default value of tunable was set to 'any_ip'.RESOLUTION:The default value of tunable is set to 'replication_ip'.* 4124291 (Tracking ID: 4111254)SYMPTOM:vradmind dumps core with the following stack:#3 0x00007f3e6e0ab3f6 in __assert_fail () from /root/cores/lib64/libc.so.6#4 0x000000000045922c in RDS::getHandle ()#5 0x000000000056ec04 in StatsSession::addHost ()#6 0x000000000045d9ef in RDS::addRVG ()#7 0x000000000046ef3d in RDS::createDummyRVG ()#8 0x000000000044aed7 in PriRunningState::update ()#9 0x00000000004b3410 in RVG::update ()#10 0x000000000045cb94 in RDS::update ()#11 0x000000000042f480 in DBMgr::update ()#12 0x000000000040a755 in main ()DESCRIPTION:vradmind was trying to access a NULL pointer (Remote Host Name) in a rlink object, as the Remote Host attribute of the rlink hasn't been set.RESOLUTION:The issue has been fixed by making code changes.* 4124794 (Tracking ID: 4114952)SYMPTOM:With VVR configured with a virtual hostname, after node reboots on DR site, 'vradmin pauserep' command failed with following error:VxVM VVR vradmin ERROR V-5-52-421 vradmind server on host <host> not responding or hostname cannot be resolved.DESCRIPTION:The virtual host mapped to multiple IP addresses, and vradmind was using incorrectly mapped IP address.RESOLUTION:Fixed by using the correct mapping of IP address from the virtual host.* 4124796 (Tracking ID: 4108913)SYMPTOM:Vradmind dumps core with the following stacks:#3 0x00007f2c171be3f6 in __assert_fail () from /root/coredump/lib64/libc.so.6#4 0x00000000005d7a90 in VList::concat () at VList.C:1017#5 0x000000000059ae86 in OpMsg::List2Msg () at Msg.C:1280#6 0x0000000000441bf6 in OpMsg::VList2Msg () at ../../include/Msg.h:389#7 0x000000000043ec33 in DBMgr::processStatsOpMsg () at DBMgr.C:2764#8 0x00000000004093e9 in process_message () at srvmd.C:418#9 0x000000000040a66d in main () at srvmd.C:733#0 0x00007f4d23470a9f in raise () from /root/core.Jan18/lib64/libc.so.6#1 0x00007f4d23443e05 in abort () from /root/core.Jan18/lib64/libc.so.6#2 0x00007f4d234b3037 in __libc_message () from /root/core.Jan18/lib64/libc.so.6#3 0x00007f4d234ba19c in malloc_printerr () from /root/core.Jan18/lib64/libc.so.6#4 0x00007f4d234bba9c in _int_free () from /root/core.Jan18/lib64/libc.so.6#5 0x00000000005d5a0a in ValueElem::_delete_val () at Value.C:491#6 0x00000000005d5990 in ValueElem::~ValueElem () at Value.C:480#7 0x00000000005d7244 in VElem::~VElem () at VList.C:480#8 0x00000000005d8ad9 in VList::~VList () at VList.C:1167#9 0x000000000040a71a in main () at srvmd.C:743#0 0x000000000040b826 in 
DList::head () at ../include/DList.h:82#1 0x00000000005884c1 in IpmHandle::send () at Ipm.C:1318#2 0x000000000056e101 in StatsSession::sendUCastStatsMsgToPrimary () at StatsSession.C:1157#3 0x000000000056dea1 in StatsSession::sendStats () at StatsSession.C:1117#4 0x000000000046f610 in RDS::collectStats () at RDS.C:6011#5 0x000000000043f2ef in DBMgr::collectStats () at DBMgr.C:2799#6 0x00007f98ed9131cf in start_thread () from /root/core.Jan26/lib64/libpthread.so.0#7 0x00007f98eca4cdd3 in clone () from /root/core.Jan26/lib64/libc.so.6DESCRIPTION:There is a race condition in vradmind that may cause memory corruption and unpredictable result. Vradmind periodically forks a child thread to collect VVR statistic data and send them to the remote site. While the main thread may also be sending data using the same handler object, thus member variables in the handler object are accessed in parallel from multiple threads and may become corrupted.RESOLUTION:The code changes have been made to fix the issue.* 4125003 (Tracking ID: 4118478)SYMPTOM:VxVM installation fails on RHEL9.2DESCRIPTION:There have been multiple changes done regarding blkcg_gq, blk_put_request, bio_clone_fast, bio_init, blk_cleanup_queue, blk_cleanup_disk, blk_execute_rq, blk_get_request, etc hence VxVM code is not compatible with these new code changes done in kernel .RESOLUTION:Required changes has been done to make VxVM compatible with RHEL9.2.* 4125392 (Tracking ID: 4114193)SYMPTOM:'vradmin repstatus' command showed replication data status incorrectly as 'inconsistent'.DESCRIPTION:vradmind was relying on replication data status from both primary as well as DR site.RESOLUTION:Fixed replication data status to rely on the primary data status.* 4125811 (Tracking ID: 4090772)SYMPTOM:vxconfigd/vx commands hang on secondary site in a CVR environment.DESCRIPTION:Due to a window with unmatched SRL positions, if any application (e.g. fdisk) tryingto open the secondary RVG volume will acquire a lock and wait for SRL positions to match.During this if any vxvm transaction kicked in will also have to wait for same lock.Further logowner node panic'd which triggered logownership change protocol which hungas earlier transaction was stuck. As logowner change protocol could not complete,in absence of valid logowner SRL position could not match and caused deadlock. That leadto vxconfigd and vx command hang.RESOLUTION:Added changes to allow read operation on volume even if SRL positions areunmatched. 
We are still blocking write IOs and just allowing open() call for read-onlyoperations, and hence there will not be any data consistency or integrity issues.* 4128127 (Tracking ID: 4132265)SYMPTOM:Machine with NVMe disks panics with following stack: blk_update_requestblk_mq_end_requestdmp_kernel_nvme_ioctldmp_dev_ioctldmp_send_nvme_passthru_cmd_over_nodedmp_pr_do_nvme_readdmp_pgr_readdmpioctldmp_ioctlblkdev_ioctl__x64_sys_ioctldo_syscall_64DESCRIPTION:Issue was applicable to setups with NVMe devices which do not support SCSI3-PR as an ioctl was called without checking correctly if SCSI3-PR was supported.RESOLUTION:Fixed the check to avoid calling the ioctl on devices which do not support SCSI3-PR.* 4128835 (Tracking ID: 4127555)SYMPTOM:While adding secondary site using the 'vradmin addsec' command, the command fails with following error if diskgroup id is used in place of diskgroup name:VxVM vxmake ERROR V-5-1-627 Error in field remote_dg=<dgid>: name is too longDESCRIPTION:Diskgroup names can be 32 characters long where as diskgroup ids can be 64 characters long. This was not handled by vradmin commands.RESOLUTION:Fix vradmin commands to handle the case where longer diskgroup ids can be used in place of diskgroup names.* 4129664 (Tracking ID: 4129663)SYMPTOM:vxvm rpm does not have changelogDESCRIPTION:Changelog in rpm will help to find missing incidents with respect to other version.RESOLUTION:Changelog is generated and added to vxvm rpm.* 4129766 (Tracking ID: 4128380)SYMPTOM:If VVR is configured using virtual hostname and 'vradmin resync' command is invoked from a DR site node, it fails with following error:VxVM VVR vradmin ERROR V-5-52-405 Primary vradmind server disconnected.DESCRIPTION:In case of virtual hostname maps to multiple IPs, vradmind service on the DR site was not able to reach the VVR logowner node on the primary site due to incorrect IP address mapping used.RESOLUTION:Fixed vradmind to use correct mapped IP address of the primary vradmind.* 4130402 (Tracking ID: 4107801)SYMPTOM:/dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.DESCRIPTION:vxpath-links is responsible for creating the the hardware paths under /dev/vx/.dmp .This script get invokes from: /lib/udev/vxpath_links. 
The "/lib/udev" folder is not present in SLES15SP3.This folder is explicitly removed from SLES15SP3 onwards and it is expected to create Veritas specific scripts/libraries from vendor specific folder.RESOLUTION:Code changes have been made to invoke "/etc/vx/vxpath-links" instead of "/lib/udev/vxpath-links".* 4130827 (Tracking ID: 4098391)SYMPTOM:Kernel panic is observed with following stack:#6 [ffffa479c21cf6f0] page_fault at ffffffffb240130e [exception RIP: bfq_bio_bfqg+37] RIP: ffffffffb1e78135 RSP: ffffa479c21cf7a0 RFLAGS: 00010002 RAX: 000000000000001f RBX: 0000000000000000 RCX: ffffa479c21cf860 RDX: ffff8bd779775000 RSI: ffff8bd795b2fa00 RDI: ffff8bd795b2fa00 RBP: ffff8bd78f136000 R8: 0000000000000000 R9: ffff8bd793a5b800 R10: ffffa479c21cf828 R11: 0000000000001000 R12: ffff8bd7796b6e60 R13: ffff8bd78f136000 R14: ffff8bd795b2fa00 R15: ffff8bd7946ad0bc ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018#7 [ffffa479c21cf7b0] bfq_bic_update_cgroup at ffffffffb1e78458#8 [ffffa479c21cf7e8] bfq_bio_merge at ffffffffb1e6f47f#9 [ffffa479c21cf840] blk_mq_submit_bio at ffffffffb1e48c09#10 [ffffa479c21cf8c8] submit_bio_noacct at ffffffffb1e3c7e3#11 [ffffa479c21cf958] submit_bio at ffffffffb1e3c87b#12 [ffffa479c21cf9a8] submit_bh_wbc at ffffffffb1d2536a#13 [ffffa479c21cf9e0] block_read_full_page at ffffffffb1d27ac1#14 [ffffa479c21cfa90] do_read_cache_page at ffffffffb1c2f7e5#15 [ffffa479c21cfb48] read_part_sector at ffffffffb1e546b5#16 [ffffa479c21cfb60] read_lba at ffffffffb1e595d2#17 [ffffa479c21cfba8] efi_partition at ffffffffb1e59f4d#18 [ffffa479c21cfcb8] blk_add_partitions at ffffffffb1e54377#19 [ffffa479c21cfcf8] bdev_disk_changed at ffffffffb1d2a8fa#20 [ffffa479c21cfd30] __blkdev_get at ffffffffb1d2c16c#21 [ffffa479c21cfda0] blkdev_get at ffffffffb1d2c2b4#22 [ffffa479c21cfdb8] __device_add_disk at ffffffffb1e5107e#23 [ffffa479c21cfe20] dmp_register_disk at ffffffffc0e68ae7 [vxdmp]#24 [ffffa479c21cfe50] dmp_reconfigure_db at ffffffffc0e8d8bd [vxdmp]#25 [ffffa479c21cfe80] dmpioctl at ffffffffc0e75cd5 [vxdmp]#26 [ffffa479c21cfe90] dmp_ioctl at ffffffffc0e9d469 [vxdmp]#27 [ffffa479c21cfea8] blkdev_ioctl at ffffffffb1e4ed19#28 [ffffa479c21cfef0] block_ioctl at ffffffffb1d2a719#29 [ffffa479c21cfef8] ksys_ioctl at ffffffffb1cfb262#30 [ffffa479c21cff30] __x64_sys_ioctl at ffffffffb1cfb296#31 [ffffa479c21cff38] do_syscall_64 at ffffffffb1a0538b#32 [ffffa479c21cff50] entry_SYSCALL_64_after_hwframe at ffffffffb240008cDESCRIPTION:VxVM causes kernel panic because of null pointer dereference in kernel code when BFQ disk io scheduler is used. 
This is observed on SLES15 SP3 minor kernels >= 5.3.18-150300.59.68.1 and SLES15 SP4 minor kernels >= 5.14.21-150400.24.11.1.RESOLUTION:Code changes have been done to fix this issue in IS-8.0 and IS-8.0.2.* 4130947 (Tracking ID: 4124725)SYMPTOM:With VVR configured using virtual hostnames, the 'vradmin delpri' command could hang after doing the RVG cleanup.DESCRIPTION:The 'vradmin delsec' command used prior to the 'vradmin delpri' command had left the cleanup in an incomplete state, resulting in the next cleanup command hanging.RESOLUTION:Fixed to make sure that the 'vradmin delsec' command executes its workflow correctly.Patch ID: VRTSaslapm 8.0.2.1200* 4132969 (Tracking ID: 4122583)SYMPTOM:Support for ASLAPM on RHEL9.2DESCRIPTION:RHEL9.2 is a new release and hence the APM module should be recompiled with the new kernel.RESOLUTION:Compiled APM with the new kernel.* 4133009 (Tracking ID: 4133010)SYMPTOM:The aslapm rpm does not have a changelog.DESCRIPTION:A changelog in the rpm helps to find missing incidents with respect to other versions.RESOLUTION:A changelog is generated and added to the aslapm rpm.Patch ID: VRTSvxvm-8.0.2.1100* 4125322 (Tracking ID: 4119950)SYMPTOM:Vulnerabilities have been reported in third-party components [curl and libxml] that are used by VxVM.DESCRIPTION:The third-party components [curl and libxml] in their current versions, as used by VxVM, have been reported with security vulnerabilities which need to be addressed.RESOLUTION:[curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.Patch ID: VRTSaslapm 8.0.2.1100* 4125322 (Tracking ID: 4119950)SYMPTOM:Vulnerabilities have been reported in third-party components [curl and libxml] that are used by VxVM.DESCRIPTION:The third-party components [curl and libxml] in their current versions, as used by VxVM, have been reported with security vulnerabilities which need to be addressed.RESOLUTION:[curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.Patch ID: VRTScavf-8.0.2.2100* 4162683 (Tracking ID: 4153873)SYMPTOM:A CVM master reboot resulted in volumes being disabled on the slave node.DESCRIPTION:The InfoScale stack exhibits unpredictable behaviour during reboots: sometimes the node hangs while coming online, the working node goes into the faulted state, and sometimes CVM won't start on the rebooted node.RESOLUTION:A mechanism has been added for making decisions about deport, and the code has been integrated with an offline routine.Patch ID: VRTScavf-8.0.2.1500* 4133969 (Tracking ID: 4074274)SYMPTOM:DR test and failover activity might not succeed for hardware-replicated disk groups, and EMC SRDF hardware-replicated disk groups fail with a "PR operation failed" message.DESCRIPTION:In case of hardware-replicated disks like EMC SRDF, failover of disk groups might not succeed automatically and a manual intervention might be needed. After failover, disks at the new primary site have the 'udid_mismatch' flag, which needs to be updated manually for a successful failover. The SCSI-3 error message also needed to be changed to "PR operation failed".RESOLUTION:For DMP environments, the VxVM & DMP extended attributes need to be refreshed by using 'vxdisk scandisks' prior to import. VxVM has also provided a new vxdg import option '-o usereplicatedev=only' with DMP. This option selects only the hardware-replicated disks during the LUN selection process. Prior to VxVM 8.0.x, the import failed with "SCSI-3 PR operation failed" as shown in the sample syntax below, and the error message format has been changed accordingly.Sample syntax (pre-8.0.x):# /usr/sbin/vxdg -s -o groupreserve=VCS -o clearreserve -cC -t import AIXSRDFVxVM vxdg ERROR V-5-1-19179 Disk group AIXSRDF: import failed:SCSI-3 PR operation failedThe VRTScavf (CVM) 7.4.2.2201 agent was enhanced on AIX to handle these EMC SRDF "VxVM vxdg ERROR V-5-1-19179 Disk group AIXSRDF: import failed: SCSI-3 PR operation failed" failures.New 8.0.x VxVM error message format:2023/09/27 12:44:02 VCS INFO V-16-20007-1001 CVMVolDg:<RESOURCE-NAME>:online:VxVM vxdg ERROR V-5-1-19179 Disk group <DISKGROUP-NAME>: import failed:PR operation failed
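The commands named in the resolution above can be combined into the following sequence for a DR failover of a hardware-replicated disk group. This is only a minimal sketch: the diskgroup name srdfdg is a placeholder, and additional options (clustered import, fencing reservations, or '-o updateid' to clear the udid_mismatch state) may be required in a given environment:
# vxdisk scandisks
# /usr/sbin/vxdg -o usereplicatedev=only -t import srdfdg
The first command refreshes the VxVM and DMP extended attributes after the hardware failover; the second imports the disk group while selecting only the hardware-replicated LUNs.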
* 4137640 (Tracking ID: 4088479)SYMPTOM:The EMC SRDF managed diskgroup import failed with the error below. This failure is specific to EMC storage on AIX with fencing.DESCRIPTION:The import fails as follows:#/usr/sbin/vxdg -o groupreserve=VCS -o clearreserve -c -tC import srdfdgVxVM vxdg ERROR V-5-1-19179 Disk group srdfdg: import failed:SCSI-3 PR operation failedRESOLUTION:The vxconfigd log shows the underlying failure:06/16 14:31:49: VxVM vxconfigd DEBUG V-5-1-7765 /dev/vx/rdmp/emc1_0c93: pgr_register: setting pgrkey: AVCS 06/16 14:31:49: VxVM vxconfigd DEBUG V-5-1-5762 prdev_open(/dev/vx/rdmp/emc1_0c93): open failure: 47 //#define EWRPROTECT 47 /* Write-protected media */ 06/16 14:31:49: VxVM vxconfigd ERROR V-5-1-18444 vold_pgr_register: /dev/vx/rdmp/emc1_0c93: register failed:errno:47 Make sure the disk supports SCSI-3 PR. AIX differentiates between RW and RD-only opens; when the underlying device state changed, the device open failed because of the pending open count (dmp_cache_open feature). This failure is now handled.Patch ID: VRTSgms-8.0.2.1900* 4166491 (Tracking ID: 4166490)SYMPTOM:The GMS module failed to load on the RHEL-9.4 kernel.DESCRIPTION:This issue occurs due to changes in the RHEL-9.4 kernel.RESOLUTION:The GMS module is updated to accommodate the changes in the kernel and load as expected on the RHEL-9.4 kernel.Patch ID: VRTSgms-8.0.2.1500* 4138412 (Tracking ID: 4138416)SYMPTOM:The GMS module fails to load on the RHEL9.3 kernel.DESCRIPTION:This issue occurs due to changes in the RHEL9.3 kernel.RESOLUTION:The GMS module is updated to accommodate the changes in the kernel and load as expected on the RHEL9.3 kernel.* 4152214 (Tracking ID: 4152213)SYMPTOM:The GMS module fails to load on RHEL9.3 minor kernel 5.14.0-362.18.1.DESCRIPTION:This issue occurs due to changes in the RHEL9.3 minor kernel.RESOLUTION:Updated GMS to support RHEL 9.3 minor kernel 5.14.0-362.18.1.Patch ID: VRTSgms-8.0.2.1200* 4124915 (Tracking ID: 4118303)SYMPTOM:The GMS module fails to load on RHEL9.2.DESCRIPTION:This issue occurs due to changes in the RHEL9.2 kernel.RESOLUTION:The GMS module is updated to accommodate the changes in the kernel and load as expected on RHEL9.2.* 4126266 (Tracking ID: 4125932)SYMPTOM:A "no symbol version" warning for ki_get_boot appears in dmesg after SFCFSHA configuration.DESCRIPTION:A "no symbol version" warning for ki_get_boot appears in dmesg after SFCFSHA configuration.RESOLUTION:Updated the code to build GMS with the correct kbuild symbols.* 4127527 (Tracking ID: 4107112)SYMPTOM:The GMS module fails to load on a linux minor kernel.DESCRIPTION:This issue occurs due to changes in the minor kernel.RESOLUTION:Modified the existing modinst-gms script to consider the kernel-build version in the exact-version-module version calculation.* 4127528 (Tracking ID: 4107753)SYMPTOM:The GMS module fails to load on linux
minor kernel.DESCRIPTION:This issue occurs due to changes in the minor kernel.RESOLUTION:Modified existing modinst-gms script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not present.* 4127628 (Tracking ID: 4127629)SYMPTOM:The GMS module fails to load on RHEL9.0 minor kernel 5.14.0-70.36.1DESCRIPTION:This issue occurs due to changes in the RHEL9.0 minor kernel.RESOLUTION:Updated GMS to support RHEL9.0 minor kernel 5.14.0-70.36.1* 4129708 (Tracking ID: 4129707)SYMPTOM:GMS rpm does not have changelogDESCRIPTION:Changelog in rpm will help to find missing incidents with respect to other version.RESOLUTION:Changelog is generated and added to GMS rpm.Patch ID: VRTSglm-8.0.2.2100* 4166489 (Tracking ID: 4166488)SYMPTOM:GLM module failed to load on RHEL-9.4 kernelDESCRIPTION:This issue occurs due to changes in the RHEL-9.4 kernelRESOLUTION:GLM module is updated to accommodate the changes in the kernel and load as expected on RHEL-9.4 kernel* 4174551 (Tracking ID: 4171246)SYMPTOM:vxglm status shows active even if it fails to load module.DESCRIPTION:systemctl status vxglm command shows the vxglm service as active even after it failed to load the module.RESOLUTION:Code changes have been done to fix this issue.Patch ID: VRTSglm-8.0.2.1500* 4138274 (Tracking ID: 4126298)SYMPTOM:System may panic due to unable to handle kernel paging request and memory corruption could happen.DESCRIPTION:Panic may occur due to a race between a spurious wakeup and normal wakeup of thread waiting for glm lock grant. Due to the race, the spurious wakeup would have already freed a memory and then normal wakeup thread might be passing that freed and reused memory to wake_up function causing memory corruption and panic.RESOLUTION:Fixed the race between a spurious wakeup and normal wakeup threadsby making wake_up lock protected.* 4138407 (Tracking ID: 4138408)SYMPTOM:The GLM module fails to load on RHEL9.3 kernel.DESCRIPTION:This issue occurs due to changes in the RHEL9.3 kernel.RESOLUTION:GLM module is updated to accommodate the changes in the kernel and load as expected on RHEL9.3 kernel.* 4152212 (Tracking ID: 4152211)SYMPTOM:The GLM module fails to load on RHEL9.3 minor kernel 5.14.0-362.18.1.DESCRIPTION:This issue occurs due to changes in the RHEL9.3 minor kernel.RESOLUTION:Updated GLM to support RHEL 9.3 minor kernel 5.14.0-362.18.1.Patch ID: VRTSglm-8.0.2.1200* 4124912 (Tracking ID: 4118297)SYMPTOM:The GLM module fails to load on RHEL9.2.DESCRIPTION:This issue occurs due to changes in the RHEL9.2 kernel.RESOLUTION:GLM module is updated to accommodate the changes in the kernel and load as expected on RHEL9.2.* 4127524 (Tracking ID: 4107114)SYMPTOM:The GLM module fails to load on linux minor kernel.DESCRIPTION:This issue occurs due to changes in the minor kernel.RESOLUTION:Modified existing modinst-glm script to consider kernel-build version in exact-version-module version calculation.* 4127525 (Tracking ID: 4107754)SYMPTOM:The GLM module fails to load on linux minor kernel.DESCRIPTION:This issue occurs due to changes in the minor kernel.RESOLUTION:Modified existing modinst-glm script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not present.* 4127626 (Tracking ID: 4127627)SYMPTOM:The GLM module fails to load on RHEL9.0 minor kernel 5.14.0-70.36.1DESCRIPTION:This issue occurs due to changes in the RHEL9.0 minor kernel.RESOLUTION:Updated GLM to support RHEL9.0 minor kernel 5.14.0-70.36.1* 4129715 (Tracking ID: 
4129714)SYMPTOM:GLM rpm does not have changelogDESCRIPTION:Changelog in rpm will help to find missing incidents with respect to other version.RESOLUTION:Changelog is generated and added to GLM rpm.Patch ID: VRTSodm-8.0.2.1900* 4166495 (Tracking ID: 4166494)SYMPTOM:ODM module failed to load on RHEL-9.4 kernelDESCRIPTION:This issue occurs due to changes in the RHEL-9.4 kernelRESOLUTION:ODM module is updated to accommodate the changes in the kernel and load as expected on RHEL-9.4 kernelPatch ID: VRTSodm-8.0.2.1700* 4154116 (Tracking ID: 4118154)SYMPTOM:System may panic in simple_unlock_mem() when errcheckdetail enabled with stack trace as follows.simple_unlock_mem()odm_io_waitreq()odm_io_waitreqs()odm_request_wait()odm_io()odm_io_stat()vxodmioctl()DESCRIPTION:odm_io_waitreq() has taken a lock and waiting to complete the IO request but it is interrupted by odm_iodone() to perform IO and unlocked a lock taken by odm_io_waitreq(). So when odm_io_waitreq() tries to unlock the lock it leads to panic as lock was unlocked already.RESOLUTION:Code has been modified to resolve this issue.* 4159290 (Tracking ID: 4159291)SYMPTOM:ODM module is not getting loaded with newly rebuilt VxFS.DESCRIPTION:ODM module is not getting loaded with newly rebuilt VxFS, need recompilation of ODM with newly rebuilt VxFS.RESOLUTION:Recompiled the ODM with newly rebuilt VxFS.Patch ID: VRTSodm-8.0.2.1500* 4138419 (Tracking ID: 4138477)SYMPTOM:ODM module failed to load on RHEL9.3 kernel.DESCRIPTION:This issue occurs due to changes in the RHEL9.3 kernel.RESOLUTION:ODM module is updated to accommodate the changes in the kernel and load as expected on RHEL9.3 kernel.* 4152210 (Tracking ID: 4152208)SYMPTOM:The ODM module fails to load on RHEL9.3 minor kernel 5.14.0-362.18.1.DESCRIPTION:This issue occurs due to changes in the RHEL9.3 minor kernel.RESOLUTION:Updated ODM to support RHEL 9.3 minor kernel 5.14.0-362.18.1.Patch ID: VRTSodm-8.0.2.1400* 4144274 (Tracking ID: 4144269)SYMPTOM:After installing VRTSvxfs-8.0.2.1400, ODM fails to start.DESCRIPTION:Because of the VxFS version update, the ODM module needs to be repackaged due to aninternal dependency on VxFS version.RESOLUTION:As part of this fix, the ODM module has been repackaged to support the updatedVxFS version.Patch ID: VRTSodm-8.0.2.1200* 4124928 (Tracking ID: 4118466)SYMPTOM:ODM module failed to load on RHEL9.2 kernel.DESCRIPTION:This issue occurs due to changes in the RHEL9.2 kernel.RESOLUTION:ODM module is updated to accommodate the changes in the kernel and load as expected on RHEL9.2 kernel.* 4126262 (Tracking ID: 4126256)SYMPTOM:no symbol version warning for "ki_get_boot" in dmesg after SFCFSHA configurationDESCRIPTION:modpost is unable to read VEKI's Module.symvers while building ODM module, which results in no symbol version warning for "ki_get_boot" symbol of VEKI.RESOLUTION:Modified the code to make sure that modpost picks all the dependent symbols while building ODM module.* 4127518 (Tracking ID: 4107017)SYMPTOM:The ODM module fails to load on linux minor kernel.DESCRIPTION:This issue occurs due to changes in the minor kernel.RESOLUTION:Modified existing modinst-odm script to consider kernel-build version in exact-version-module version calculation.* 4127519 (Tracking ID: 4107778)SYMPTOM:The ODM module fails to load on linux minor kernel.DESCRIPTION:This issue occurs due to changes in the minor kernel.RESOLUTION:Modified existing modinst-odm script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not present.* 
4127624 (Tracking ID: 4127625)SYMPTOM:The ODM module fails to load on RHEL9.0 minor kernel 5.14.0-70.36.1.DESCRIPTION:This issue occurs due to changes in the RHEL9.0 minor kernel.RESOLUTION:Updated ODM to support RHEL9.0 minor kernel 5.14.0-70.36.1.* 4129838 (Tracking ID: 4129837)SYMPTOM:The ODM rpm does not have a changelog.DESCRIPTION:A changelog in the rpm helps to find missing incidents with respect to other versions.RESOLUTION:A changelog is generated and added to the ODM rpm.

INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.
To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel9_x86_64-Patch-8.0.2.2200.tar.gz to /tmp
2. Untar infoscale-rhel9_x86_64-Patch-8.0.2.2200.tar.gz to /tmp/hf
 # mkdir /tmp/hf
 # cd /tmp/hf
 # gunzip /tmp/infoscale-rhel9_x86_64-Patch-8.0.2.2200.tar.gz
 # tar xf /tmp/infoscale-rhel9_x86_64-Patch-8.0.2.2200.tar
3. Install the hotfix (note again that the installation of this P-Patch will cause downtime.)
 # pwd
 /tmp/hf
 # ./installVRTSinfoscale802P2200 [<host1> <host2>...]
You can also install this patch together with the 8.0.2 base release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 8.0.2 directory and invoke the installer script with the -patch_path option, where -patch_path points to the patch directory:
 # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]
(A consolidated example of these steps is shown after the OTHERS section below.)
Install the patch manually:
--------------------------
Manual installation is not recommended.

REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.

KNOWN ISSUES
------------
* Tracking ID: 4132637
SYMPTOM: No Symptom Found
WORKAROUND: No WorkAround Found

SPECIAL INSTRUCTIONS
--------------------
NONE

OTHERS
------
NONE
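For convenience, the automated installation described above can be run end to end as in the following example session. This is only a sketch of the documented steps, with node1 and node2 as placeholder host names and /tmp/hf as the patch directory:
 # cp infoscale-rhel9_x86_64-Patch-8.0.2.2200.tar.gz /tmp
 # mkdir /tmp/hf
 # cd /tmp/hf
 # gunzip /tmp/infoscale-rhel9_x86_64-Patch-8.0.2.2200.tar.gz
 # tar xf /tmp/infoscale-rhel9_x86_64-Patch-8.0.2.2200.tar
 # ./installVRTSinfoscale802P2200 node1 node2
When installing together with the 8.0.2 base release using Install Bundles, the equivalent invocation from the InfoScale 8.0.2 media directory would be (assuming the patch was extracted to /tmp/hf):
 # ./installer -patch_path /tmp/hf node1 node2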

Applies to the following product releases

InfoScale Availability 8.0.2

Release date: 2023-06-05

Extended support start date: TBD

Sustaining support start date: TBD

End of support life (EOSL): TBD

InfoScale Storage 8.0.2

Release date: 2023-06-05

Extended support start date: TBD

Sustaining support start date: TBD

End of support life (EOSL): TBD

InfoScale Foundation 8.0.2

Release date: 2023-06-05

Extended support start date: TBD

Sustaining support start date: TBD

End of support life (EOSL): TBD

InfoScale Enterprise 8.0.2

Release date: 2023-06-05

Extended support start date: TBD

Sustaining support start date: TBD

End of support life (EOSL): TBD

Update files

IS 8.0.2 Update 3 on RHEL9 Platform (2024)