There’s a lot more to cover for vSphere 7, so in this post I’ll talk about the important bits regarding vSphere 7 with Kubernetes, VMware Cloud Foundation 4 and vSAN 7.
vSphere 7 with Kubernetes
If you’re involved with VMware products in your day job, you must have heard a lot about Project Pacific over the past couple of years. It is no secret that the project was all about integrating Kubernetes into the vSphere platform so that it looks and feels the same to IT admins from their regular infrastructure point of view, while developers don’t feel any different either, i.e. they can keep using the same Kubernetes constructs and commands that they’re used to.
vSphere 7 with Kubernetes is the result of all that behind-the-scenes work. Powered by VMware Cloud Foundation 4, it combines the best of both worlds by introducing the “Tanzu Kubernetes Grid Service”, which abstracts the translation between traditional vSphere infrastructure constructs and Kubernetes. It also provides the API access developers use to achieve the same agility they’re used to in any other implementation of Kubernetes.
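To give a feel for the developer side of that API access, here is a minimal sketch of the kind of TanzuKubernetesCluster manifest the Tanzu Kubernetes Grid Service consumes. The cluster name, namespace, VM class and storage class below are illustrative placeholders, not values from this post:

```yaml
# Hypothetical manifest - name, namespace, VM class and storage class
# are placeholders for whatever the vSphere admin has set up.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: dev-cluster            # placeholder cluster name
  namespace: dev-namespace     # a vSphere Namespace created by the admin
spec:
  distribution:
    version: v1.16             # Kubernetes version to roll out
  topology:
    controlPlane:
      count: 1                 # single control-plane node for a dev setup
      class: best-effort-small # VM class controlling CPU/memory sizing
      storageClass: vsan-default-storage-policy
    workers:
      count: 3                 # worker nodes backing the cluster
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```

A developer applies this with a plain kubectl apply against the Supervisor Cluster, and the service takes care of turning it into VMs, networks and storage underneath.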
As you know by now, vSphere 7 was announced on March 10 and will be generally available in April 2020. As VMware Cloud Foundation (VCF) forms the basis of all VMware-based cloud environments, whether private, public or service-provider clouds, it will provide the same user experience throughout, helping deliver a unified cloud regardless of where the workload runs.
VMware Cloud Foundation 4
That takes us nicely into what’s new in VMware Cloud Foundation 4. Well, for starters, it is completely based on NSX-T, which is therefore the only networking option you get when installing or upgrading to VCF 4.
In addition, there’s a lot more flexibility in how the management domain is deployed and which components are deployed on Day 0 vs Day X. The footprint of the management domain has also been reduced significantly, mainly due to the elimination of the external PSC. As you can also see from the picture above, you can now use the same NSX Manager for the entire deployment or deploy new ones for different workload domains.
As mentioned previously, vSphere with Kubernetes is the big focus of this release. All of these changes come together to ensure there isn’t a steep learning curve for administrators when it comes to managing Kubernetes. The Tanzu Kubernetes Grid Service provides the interface that handles all the namespace management on behalf of the administrator and also takes care of the namespaces’ lifecycle management.
Honourable mentions also include multi-instance management (think central management of multiple VCF deployments), Workspace ONE Access (previously known as Identity Manager) and some security enhancements.
vSAN 7
vSAN is now a foundational component of vSphere, so every release brings a ton of improvements, and this one is no different. What’s new in vSAN 7.0 can be broadly categorised into Lifecycle Management, Native File Services and Cloud-Native Services.
As I mentioned in my first post of this series, keeping a consistent deployment across thousands of servers is hard! It has been a challenge for VMware admins for a very long time and I am glad about the enhancements being made in this area.
Having a desired-state model for a particular deployment, with the ability to detect and remediate configuration drift, is exactly what admins of large environments (especially service providers) were asking for. On the firmware side of things, it’s only Dell and HPE to start with, but I am sure more vendors will join in later.
Next is Native File Services, which is now fully integrated. NFS v4.1 and v3 are supported, making it suitable for both traditional and cloud-native workloads. In keeping with the theme of providing a consistent experience to both kinds of workloads, access behaves just as it would with any traditional NFS server.
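To a consumer, a vSAN file share looks like any other NFS export. Here is a minimal sketch of a pod mounting one directly; the server address and export path are made-up placeholders for whatever the file service presents in your environment:

```yaml
# Hypothetical example - server IP and export path are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
    - name: app
      image: busybox:1.31
      command: ["sh", "-c", "sleep 3600"]   # keep the pod alive for testing
      volumeMounts:
        - name: share
          mountPath: /mnt/share             # where the export appears in the pod
  volumes:
    - name: share
      nfs:
        server: 192.168.10.20               # file service endpoint (placeholder)
        path: /vsanfs/share01               # export path (placeholder)
```

A traditional workload would consume the same share with a regular NFS mount, which is exactly the point of the consistent-experience story.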
A point to remember is that the file service capabilities are not aimed at replacing typical large-scale filer storage. The idea is to cater to the immediate needs of the cluster, e.g. file shares, rather than large workload storage demands.
That capability naturally flows into file-based persistent volumes, which work well with Kubernetes pods and are governed by the same storage policy mechanisms at both the block and file level.
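As an illustration, a file-based volume is what makes a read-write-many claim like the sketch below possible; the storage class name is an assumption standing in for an SPBM-backed class defined by the administrator:

```yaml
# Hypothetical claim - "vsan-file-policy" stands in for a storage class
# mapped to a vSAN storage policy in your environment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany            # multiple pods can mount the volume read-write
  resources:
    requests:
      storage: 10Gi
  storageClassName: vsan-file-policy
```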
Volume encryption at rest and snapshots are supported in the usual fashion, and application observability is also available in the form of Wavefront and Prometheus integrations. That is, of course, in addition to the usual vRealize Operations management provided at the infrastructure level.
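On the Prometheus side, consumption follows the standard pull model. As a rough sketch, a scrape job discovering annotated pods might look like this; the job name and annotation filter are common Prometheus conventions, not something specific to vSAN:

```yaml
# prometheus.yml fragment - the job name is arbitrary; the annotation-based
# filter is a common convention for opting pods in to scraping.
scrape_configs:
  - job_name: app-pods
    kubernetes_sd_configs:
      - role: pod              # discover pods via the Kubernetes API
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep           # scrape only pods annotated prometheus.io/scrape=true
        regex: "true"
```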
There is improved integration with DRS when it comes to Stretched Clusters, especially during failover. In the last post, I discussed the emphasis of DRS on being “workload-centric”.
When applied to a stretched cluster scenario with a failover, it becomes clearer what that means. Once the failed site returns, the improved DRS will not migrate VMs back to the “desired site” until it is satisfied that recovery is done and the resynchronisation of data is complete. That reduces the load on the inter-site link during recovery and improves VM read performance in that state.
There are significant improvements in the operations and management of vSAN too. Skyline Health is integrated for all customers, with higher tiers of visibility available on top.
You will see improvements in VM capacity reporting as it becomes more consistent and accurate across the different views, a problem many have noticed in the past. In addition, there’s a new “Memory Consumption” metric that shows how much memory vSAN is consuming per host, another popular customer ask.
That said, I am also particularly happy about the visibility of vSphere Replication data, as it has been an unknown for a while. vSphere 7.0 is changing that.
vSphere Replication is heavily used in disaster recovery, migration and various hybrid cloud scenarios, and given its impact, it is important to report accurately on vSphere Replication objects. Now there’s improved visibility of those objects, with consumption shown in both the virtual-object and cluster-level views.
In addition, there is support for larger physical devices, and serviceability is improved with hot-plug support for NVMe devices, minimising host restarts (though only for a limited number of OEMs for now).
The removal of the Eager Zero Thick requirement for shared disks in vSAN is also a significant one for me.
Some I/O-intensive workloads, e.g. Oracle RAC, have strict disk requirements when placed on vSAN, and the best practice so far has been to create their disks with “Object Space Reservation” set to 100, making them Eager Zero Thick. vSAN 7.0 removes that requirement due to improvements in vSAN I/O performance.
This post has surpassed 1,000 words, so I’ll put a stop here for this one. See you with further vSphere 7.0 goodness in the next post!