First post of 2025, and I like to start with a good one for me 🙂.
Very, very happy with it!!

On October 4th, VMware Skyline reached end of life.
VMware Skyline was great:
• Proactive Issue Identification
• Automated Insights
• Health Scans and Remediation
• Integration with support
VMware by Broadcom is building critical findings and self-help recommendations directly into the products, starting with VCF (from 5.2) and Aria Operations (from v8.18, July 2024).
Many of the other Skyline features are planned for inclusion in future releases of Cloud Foundation and Aria Operations. We will see what the future brings.
But for now, how do you get this working?
First step:
Update Aria Operations to 8.18.2 (latest)
Second steps:
1. Configure integrations for vCenter (don’t forget to enable vSAN), NSX, VCF, and Aria Automation (vRA)
2. Configure log collection in Aria Operations for Logs for the following components:
• Configure the vCenter Server integration in Aria Operations for Logs
• Configure log forwarding on vCenter Server, ESXi hosts (done automatically by Aria Operations for Logs), and SDDC Manager
3. Integrate VMware Aria Operations for Logs with VMware Aria Operations
4. Connect Skyline Health Diagnostics (SHD)
5. In Aria Operations for Logs, check the vROps integration checkboxes
By default, Enable launch in context can be disabled if it was configured earlier.
After upgrading and checking the settings, it’s finally working 😊 (it can take some time).
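For the log-forwarding part on the ESXi hosts, the syslog target can also be set from the ESXi shell. A minimal sketch; the target hostname here is a placeholder for your Aria Operations for Logs FQDN:

```shell
# Point the ESXi syslog daemon at Aria Operations for Logs
# (loghost.lab.local is a placeholder; udp://, tcp:// and ssl:// are valid schemes):
esxcli system syslog config set --loghost='udp://loghost.lab.local:514'

# Reload the syslog daemon so the new target takes effect:
esxcli system syslog reload

# Make sure the outbound syslog firewall ruleset is enabled:
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
```

This is a per-host setting; Aria Operations for Logs can also push it automatically as described in step 2 above.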
Essential Insights on Windows Server 2025
I made my own vSAN health report based on Get-vSANinfo.
You can find the script on my Github: Link
That script did not gather all the info that I wanted. I use it for all my different home labs.
Fields: Cluster, Hosts, VMs, vSANVersion, vSanUpgrade, HealthCheckEnabled, TimeOfHclUpdate, StoragePolicy, vSanDiskClaimMode, faultdomaincount, ObjectOutOfcompliance, vSanOverallHealth, vSanOverallHealthDescription, vSanHealthScore, ComponentLimitHealth, OpenIssue, vSanFreeSpaceTB, vSanCapacityTB
Addons:
PerformanceServiceEnabled
PerformanceStatsStoragePolicy
faultdomaincount
StretchedClusterEnabled
vSanFailureToTolerate (works only on the second run, work in progress)
You can schedule the script and have it e-mail you the report.
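Scheduling can be done with the Windows Task Scheduler. A sketch; the task name, script path, and run time are assumptions, and the e-mail step is handled inside the script itself:

```shell
:: Create a daily scheduled task that runs the report script at 06:00
:: (adjust the path to wherever you saved the script):
schtasks /Create /TN "vSAN-Health-Report" ^
  /TR "powershell.exe -NoProfile -File C:\Scripts\Get-vSANinfo.ps1" ^
  /SC DAILY /ST 06:00
```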
VCF MGMT domain
Homelab
How to do an in-place upgrade from a Windows Server 2022 Active Directory domain controller to Windows Server 2025
Finish
You can check the vCenter Server Management interface at <vCenter URL>:5480
Here are some commands you can run to see what vCenter is doing behind the scenes!
/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator
/usr/lib/vmware-vmdir/bin/vdcadmintool
--> option 6

This post is about how to remove such an inaccessible object within vSAN.
Open an SSH session to the vCenter and enter the command rvc localhost in the command line.
Navigate to the vSAN cluster where you want to remove the inaccessible objects using cd, and use ls to list the contents at each step, like this:
Verify the state of vSAN objects using the command vsan.check_state -r . This check involves three steps:
During this check, as you can see in the next screenshot, there are four inaccessible objects with the same UUID as those listed in Virtual Objects within the vSphere Client.
To remove them, open an SSH session to any ESXi in the cluster and use the following command /usr/lib/vmware/osfs/bin/objtool delete -u <UUID> -f replacing UUID with the one you want to remove.
After you remove all inaccessible objects and run vsan.check_state -r . once again, you should no longer see any inaccessible objects.
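The whole procedure can be sketched as one session; the datacenter name, cluster name, and UUID below are placeholders:

```shell
# On the vCenter appliance, start the Ruby vSphere Console:
rvc localhost

# Inside rvc, navigate to the cluster and check object state
# (inaccessible objects are listed with their UUIDs):
#   cd /localhost/<Datacenter>/computers/<vSAN-Cluster>
#   ls
#   vsan.check_state -r .

# Then, on any ESXi host in the cluster, delete each inaccessible object:
#   /usr/lib/vmware/osfs/bin/objtool delete -u <UUID> -f
```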
I had the opportunity to test a Dell vSAN node, and I had an older unattended-install ESXi ISO.
This installed the ESXi OS on the wrong disk. After a correct install, vSAN did not see this disk as ready for use. Combining the following articles, Dell VxRail vSAN Drives ineligible and identify-and-solve-ineligible-disk-problems-in-virtual-san/,
I solved this problem with the following steps:
Step 1: Identify the Disk with vdq -qH
Step 2: Use partedUtil get "/dev/disks/<DISK>" to list all partitions:
partedUtil get "/dev/disks/t10.NVMe____Dell_Ent_NVMe_CM6_MU_3.2TB______________017D7D23E28EE38C"
Step 3: This disk has 2 partitions. Use the partedUtil delete "/dev/disks/<DISK>" <PARTITION> command to delete all partitions:
Step 4:
When all partitions are removed, do a rescan:
~ # esxcli storage core adapter rescan --all
Step 5: Claim Unused Disks
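Steps 2 and 3 can be combined: the partition number is the first field on every line after the geometry line in the partedUtil get output, so the delete commands can be generated with awk. A sketch with the sample output hard-coded for illustration; in practice, pipe the real partedUtil get output in and remove the echo indirection:

```shell
#!/bin/sh
# Placeholder device name; take the real one from 'vdq -qH' (step 1).
DISK="t10.NVMe____Dell_Ent_NVMe_CM6_MU_3.2TB______________017D7D23E28EE38C"

# Stand-in for the output of: partedUtil get "/dev/disks/$DISK"
# First line = disk geometry; each following line starts with a partition number.
SAMPLE='48462 255 63 778573824
1 64 8191 0 0
2 8224 778573823 0 0'

# Emit one delete command per partition; pipe the result to 'sh' on the
# ESXi host to actually run them.
echo "$SAMPLE" | awk -v d="$DISK" \
  'NR > 1 { print "partedUtil delete \"/dev/disks/" d "\" " $1 }'
```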
I had the opportunity to test a Dell vSAN node with an older unattended-install ESXi ISO, which installed the ESXi OS on the wrong disk.
I hate to type a very complex password twice.
So automation is the key.
I love the ks.cfg install option.
So, following this guide did not do the trick:
https://www.dell.com/support/kbdoc/en-us/000177584/automating-operating-system-deployment-to-dell-boss-techniques-for-different-operating-systems
This did not work:
install --overwritevmfs --firstdisk="DELLBOSS VD"
After doing a manual install:
What works:
# For Dell BOSS controller "Dell BOSS-N1"
install --overwritevmfs --firstdisk="Dell BOSS-N1"
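Putting it together, a minimal ks.cfg for the BOSS-N1 could look like this. Only the install line comes from the tested setup; the root password and the reboot line are placeholders for your own values:

```shell
# Minimal ESXi kickstart (ks.cfg) sketch
vmaccepteula
# Placeholder password; replace with your own complex one:
rootpw VMware1!VMware1!
# Target the BOSS device by its exact model string:
install --overwritevmfs --firstdisk="Dell BOSS-N1"
reboot
```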
My home lab is mainly used for testing new stuff released by VMware (70%), Microsoft (20%), and other things (10%).
As the base I use my home PC:
Intel i5 12600k
128 GB Memory
2 x 2TB Samsung 980 and 990 Pro SSD.
Windows 11 Pro
VMware Workstation Pro
Running on my home PC:
Server 2022 (Eval for DC)
ESXi801 (16 GB) (NSX Demo Cluster)
ESXi802 (16 GB) (NSX Demo Cluster)
ESXi803 (64 GB) (General Cluster)
ESXi804 (64 GB) (General Cluster)
ESXi805 (24 GB) (Single Node vSAN Cluster)
ESXi806 (16 GB) (4 Node vSAN Cluster)
ESXi807 (16 GB) (4 Node vSAN Cluster)
ESXi808 (16 GB) (4 Node vSAN Cluster)
ESXi809 (16 GB) (4 Node vSAN Cluster)
ESXi701 (24GB) (General Cluster)
ESXi702 (24GB) (General Cluster)
The general cluster runs most of the VMs. This is also where I test Packer and Terraform.
For a while I used a 2 TB Samsung SSD as storage for the ESXi servers through TrueNAS,
but I wanted larger storage for all my VMs.
After reading Synology DS723+ in Homelab and Synology NFS VAAI Plug-in support for vSphere 8.0 on William Lam's blog,
I did a nice upgrade.
I did not use the original Synology parts; the following parts work fine:
Kingston 16 GB DDR4-3200 notebook memory
WD Red SN700, 500 GB SSD
WD Red Pro, 8 TB
* For Read-Write caching you need 2 SSD devices.
For mounting the NFS shares I created a little PowerCLI script.
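As a sketch, the same mount can also be done per host from the ESXi shell with a single esxcli call; the NAS hostname, export path, and datastore name below are placeholders:

```shell
# Mount the Synology NFS export as a datastore on this host:
esxcli storage nfs add \
  --host=synology.lab.local \
  --share=/volume1/vmware \
  --volume-name=SynologyNFS

# Verify the mount:
esxcli storage nfs list
```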