Tuesday, December 18, 2018

vSAN Erasure Coding Failure Handling - VMware Certifications


I had a very interesting question recently about how vSAN handles a failure in an object that is running with an erasure coding configuration. In the case of vSAN, this is either a RAID-5 or a RAID-6. On vSAN, RAID-5 is implemented with 3 data segments and 1 parity segment (3+1), with parity striped across all four components. RAID-6 is implemented as 4 data segments and 2 parity segments (4+2), again with the parity striped across all six components. So what happens when we need to continue writing to one of these objects after a component/segment has failed?


After discussing this with one of our vSAN engineering leads, the answer is that it depends on which offset you are writing to. Let's take RAID-5 as an example. The RAID-5 VMDK object address space is split into 1 MB stripes. If we take 3 of the 4 RAID-5 components together, this makes up one contiguous 3 MB range. We refer to this as a row, which is distributed over the three components. The fourth component is used for parity, and the component used for parity rotates with each row.
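To make the layout concrete, here is a small Python sketch (an illustration of the addressing scheme just described, not vSAN code) that maps a byte offset in the object address space to its row, data component, and parity component. The 1 MB stripe size and 3+1 layout come from the post; the exact parity rotation pattern is an assumption, chosen so that the first row has parity on Comp4 and the second row on Comp3, matching the examples that follow.

    STRIPE_MB = 1             # each stripe is 1 MB
    DATA = 3                  # data segments per row (RAID-5, 3+1)
    COMPONENTS = DATA + 1     # four components in total; parity rotates among them

    def locate(offset_mb):
        """Map an offset (in MB) in the object address space to its row,
        data component, and parity component. Rows are numbered from 0 here,
        so the post's 'first row' is row 0 and its 'row 2' is row 1."""
        stripe_no = offset_mb // STRIPE_MB
        row = stripe_no // DATA                       # each row covers 3 MB
        parity = (COMPONENTS - 1 - row) % COMPONENTS  # assumed rotation pattern
        data_comps = [c for c in range(COMPONENTS) if c != parity]
        return row, data_comps[stripe_no % DATA], parity

    for off in (0, 1, 2, 3, 5):
        row, d, p = locate(off)
        print(f"offset {off} MB -> row {row}, data on Comp{d + 1}, parity on Comp{p + 1}")

With this mapping, offsets in the 0-3 MB range land on Comp1, Comp2 and Comp3 with parity on Comp4, and offsets in the 3-6 MB range land on Comp1, Comp2 and Comp4 with parity on Comp3, which is exactly the arrangement discussed below.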

First, we will look at a row that has lost a data component. Assume Comp3 has failed, and take the first row. Future writes to the 0-2 MB range in the object address space will be unaffected: they will still go to their respective data component (either Comp1 or Comp2). Writes to the 2-3 MB range will read the data from Comp1 and Comp2, calculate the new parity based on all 3 data components (including the new data that cannot be written), and then write the parity to Comp4. But of course there cannot be a write to Comp3, as it is now failed/missing. The same procedure applies to every other row that is missing data due to the failure of Comp3.
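Here is a small, self-contained Python sketch of this degraded write path (an illustration, not vSAN internals). RAID-5 single parity is a bytewise XOR, and the data destined for the failed Comp3 is folded into the parity on Comp4 even though the data write itself is dropped; the toy 8-byte blocks stand in for 1 MB stripes.

    def xor_blocks(*blocks):
        """Bytewise XOR of equal-length blocks; RAID-5 parity is an XOR."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    # First row, with Comp3 failed:
    comp1 = b"\x11" * 8        # surviving data, 0-1 MB range
    comp2 = b"\x22" * 8        # surviving data, 1-2 MB range
    new_data = b"\x33" * 8     # write destined for the 2-3 MB range on failed Comp3

    # Read Comp1 and Comp2, compute parity over all three data stripes,
    # and write only the parity (to Comp4); the data write to Comp3 is dropped.
    parity = xor_blocks(comp1, comp2, new_data)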

Let's now look at a row that has lost its parity component, for example, row 2. Writes to the 3-6 MB range will just write the data to Comp1, Comp2 and Comp4 as normal, with no parity. Hence there are no parity reads associated with this write operation, so there is a reduction in the amount of IO amplification involved. For a RAID-5 partial-stripe write, we would typically have to read the existing data and parity, write back the new data, calculate the new parity and write it back. Now, for rows whose parity resided on the failed component, the reads and writes are not amplified at all: the two parity-related reads are eliminated entirely, and the writes drop from two to one.
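The savings are easy to tally. The short Python sketch below is my own accounting of the classic RAID-5 read-modify-write just described, not a vSAN trace, counting device IOs for a single sub-stripe write.

    def io_cost(parity_on_failed_component):
        """IO count for one sub-stripe write under classic RAID-5 read-modify-write."""
        if parity_on_failed_component:
            # Parity is gone: just write the new data, no parity maintenance.
            return {"reads": 0, "writes": 1}
        # Healthy row: read old data and old parity, write new data and new parity.
        return {"reads": 2, "writes": 2}

    print("healthy row:            ", io_cost(False))  # {'reads': 2, 'writes': 2}
    print("parity component failed:", io_cost(True))   # {'reads': 0, 'writes': 1}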

So, to recap, we still maintain a 3+1 RAID-5 arrangement for data placement, but there is a “functional repair” whereby we include the data that cannot be written in the parity calculation. We can then use that parity (together with the other two data components, Comp1 and Comp2) to reconstruct the original data if we need to service a guest read, or of course to resync to Comp3 when it recovers.
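The reconstruction itself is the same XOR, as this short self-contained sketch shows (again an illustration with toy block sizes): the parity maintained during the functional repair, combined with the two surviving data components, yields the data Comp3 should hold.

    def xor_blocks(*blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    comp1, comp2 = b"\x11" * 8, b"\x22" * 8   # surviving data components
    lost = b"\x33" * 8                        # data Comp3 holds (or should hold)
    parity = xor_blocks(comp1, comp2, lost)   # parity maintained on Comp4

    # A guest read of the missing range, or the resync once Comp3 recovers,
    # rebuilds the lost data from the parity plus the two survivors.
    assert xor_blocks(parity, comp1, comp2) == lost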




Tuesday, December 11, 2018

Datrium + Kubernetes with VMware vSphere - VMware Certifications


You may configure any Kubernetes distribution to access VMware vSphere VMDK volumes. This includes using VMware vSphere VMDK volumes as persistent volumes for application data with Datrium. The vSphere Cloud Provider allows vSphere-managed storage to be used for volumes, persistent volumes, storage classes, and dynamic volume provisioning. Datrium presents a single-namespace datastore (or multiple datastores) to vSphere, which Kubernetes then abstracts.

Dynamic Volume Provisioning


Dynamic volume provisioning allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users.
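From the user's side, all that dynamic provisioning requires is a PersistentVolumeClaim that names a StorageClass. Here is a minimal sketch using the official Kubernetes Python client; the claim name, namespace, size, and the datrium-vsphere class name are illustrative assumptions (the class itself is created in the admin-side sketch after the next paragraph).

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in a pod

    # A claim naming a StorageClass; creating it triggers dynamic provisioning.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="datrium-vsphere",  # assumed class, created below
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)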

To enable dynamic provisioning, a Kubernetes cluster administrator needs to pre-create one or more StorageClass objects for users. StorageClass objects define which provisioner should be used and what parameters should be passed to that provisioner when dynamic provisioning is invoked.
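On the admin side, a StorageClass for the vSphere Cloud Provider names the kubernetes.io/vsphere-volume provisioner and passes vSphere-specific parameters to it. A minimal sketch, again with the Python client; the class name, diskformat, and datastore values are assumptions to adapt to your Datrium environment.

    from kubernetes import client, config

    config.load_kube_config()

    # StorageClass: which provisioner to use and the parameters passed to it.
    sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="datrium-vsphere"),
        provisioner="kubernetes.io/vsphere-volume",       # vSphere Cloud Provider
        parameters={"diskformat": "thin",                 # thin-provisioned VMDK
                    "datastore": "DatriumDatastore"},     # assumed datastore name
    )
    client.StorageV1Api().create_storage_class(sc)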

Backing Up Persistent Volumes


Kubernetes provisions new volumes as independent persistent disks, so that a volume can be freely attached to and detached from any node in the cluster; with Datrium DVX, each persistent volume is an independent VMDK.

As a consequence, it is not possible to back up these volumes using VMware snapshots (independent disks are excluded from VM snapshots), and VMware recommends stopping the application utilizing the PV, cloning the PV, and then restarting the application.

Datrium can uniquely address backup and replication of Kubernetes persistent volumes with native, non-disruptive snapshots that have zero impact on volume and application usability or performance.




Tuesday, December 4, 2018

VMware Recognized in CRN 2018 Products of the Year Awards


CRN, a brand of The Channel Company, has recognized three VMware products as part of their 2018 Products of the Year Awards:


  • VMware Cloud on AWS – overall winner in the Hybrid Cloud category
  • VMware NSX – named a finalist in the Software-Defined Networking category
  • VMware vSAN 6.7 – overall winner in the Software-Defined Storage category

Products and services named on this list represent best-in-breed technological innovation, financial opportunity for partners, and customer demand. For the third year in a row, the winners were determined through a combination of editorial selection and a survey of solution providers currently selling both the technology and the specific vendor's product, designed to accurately capture real-world satisfaction among partners and their customers.