VM Storage Policies on VSAN
Posted by Thang Le Toan on 23 August 2018 05:45 AM

VMware architects explain the concept of storage policy-based management.

In vSphere 5.0, VMware introduced a feature called profile-driven storage. Profile-driven storage is a feature that allows vSphere administrators to easily select the correct datastore on which to deploy virtual machines (VMs). The selection of the datastore is based on the capabilities of that datastore, or to be more specific, the underlying capabilities of the storage array that have been assigned to this datastore. Examples of the capabilities are RAID level, thin provisioning, deduplication, encryption, replication, etc. The capabilities are completely dependent on the storage array.

Throughout the life cycle of the VM, profile-driven storage allows the administrator to check whether its underlying storage is still compatible. In other words, does the datastore on which the VM resides still have the correct capabilities for this VM? This is useful because if the VM is migrated to a different datastore for whatever reason, the administrator can ensure that it has moved to a datastore that continues to meet its requirements. If the VM is migrated to a datastore without regard for the capabilities of the destination storage, the administrator can still check the compliance of the VM storage from the vSphere client at any time and take corrective action if the VM no longer resides on a datastore that meets its storage requirements (in other words, move it back to a compliant datastore).

However, VM storage policies and storage policy-based management (SPBM) have taken this a step further. In the previous paragraph, we described a sort of storage quality of service driven by the storage. All VMs residing on the same datastore would inherit the capabilities of the datastore. With VSAN, the storage quality of service no longer resides with the datastore; instead, it resides with the VM and is enforced by the VM storage policy associated with the VM and the VM disks (VMDKs). Once the policy is pushed down to the storage layer, in this case VSAN, the underlying storage is then responsible for creating storage for the VM that meets the requirements placed in the policy.

Introducing Storage Policy-Based Management in a VSAN Environment

VSAN leverages this approach to VM deployment, using an updated method called storage policy-based management (SPBM). All VMs deployed to a VSAN datastore must use a VM storage policy, although if one is not specifically created, a default one that is associated with the datastore is assigned to the VM. The VM storage policy contains one or more VSAN capabilities. This chapter will describe the VSAN capabilities. After the VSAN cluster has been configured and the VSAN datastore has been created, VSAN surfaces up a set of capabilities to the vCenter Server. These capabilities, which are surfaced by the vSphere APIs for Storage Awareness (VASA) storage provider (more on this shortly) when the cluster is configured successfully, are used to set the availability, capacity, and performance policies on a per-VM (and per-VMDK) basis when that VM is deployed on the VSAN datastore.

As previously mentioned, this differs significantly from the previous VM storage profile mechanism that we had in vSphere in the past. With the VM storage profile feature, the capabilities were associated with datastores, and were used for VM placement decisions. Now, through SPBM, administrators create a policy defining the storage requirements for the VM, and this policy is pushed out to the storage, which in turn instantiates per-VM (and per-VMDK) storage for virtual machines. In vSphere 6.0, VMware introduced Virtual Volumes (VVols). Storage policy-based management for VMs using VVols is very similar to storage policy-based management for VMs deployed on VSAN. In other words, administrators no longer need to carve up logical unit numbers (LUNs) or volumes for virtual machine storage. Instead, the underlying storage infrastructure instantiates the virtual machine storage based on the contents of the policy. What we have now with SPBM is a mechanism whereby we can specify the requirements of the VM, and the VMDKs. These requirements are then used to create a policy. This policy is then sent to the storage layer [in the case of VVols, this is a SAN or network-attached storage (NAS) storage array] asking it to build a storage object for this VM that meets these policy requirements. In fact, a VM can have multiple policies associated with it, different policies for different VMDKs.

By way of explaining capabilities, policies, and profiles, capabilities are what the underlying storage is capable of providing by way of availability, performance, and reliability. These capabilities are visible in vCenter Server. The capabilities are then used to create a VM storage policy (or just policy for short). A policy may contain one or more capabilities, and these capabilities reflect the requirements of your VM or application running in a VM. Previous versions of vSphere used the term profiles, but these are now known as policies.

Deploying VMs on a VSAN datastore is very different from previous approaches in vSphere. In the past, an administrator would present a LUN or volume to a group of ESXi hosts and in the case of block storage partition, format, and build a VMFS file system to create a datastore for storing VM files. In the case of network-attached storage (NAS), a network file system (NFS) volume is mounted to the ESXi host, and once again a VM is created on the datastore. There is no way to specify a RAID-0 stripe width for these VMDKs, nor is there any way to specify a RAID-1 replica for the VMDK.

In the case of VSAN (and now VVols), the approach to deploying VMs is quite different. Consideration must be given to the availability, performance, and reliability factors of the application running in the VM. Based on these requirements, an appropriate VM storage policy must be created and associated with the VM during deployment.

There were five capabilities in the initial release of VSAN, as illustrated in Figure 4.1.

Figure 4.1

Figure 4.1 VSAN capabilities that can be used for VM storage policies

In VSAN 6.2, the number of capabilities increased to support a number of new features. These include the ability to implement RAID-5 and RAID-6 configurations for virtual machine objects deployed on an all-flash VSAN configuration, alongside the existing RAID-0 and RAID-1 configurations. RAID-5 and RAID-6 still allow VMs to tolerate one or two failures, respectively, but consume considerably less space than a RAID-1 configuration tolerating the same number of failures. There is also a new policy for software checksum. Checksum is enabled by default, but it can be disabled through policies if an administrator wishes. The last capability relates to quality of service and provides the ability to limit the number of input/output operations per second (IOPS) for a particular object.

You can select the capabilities when a VM storage policy is created. Note that certain capabilities are applicable only to hybrid VSAN configurations (e.g., flash read cache reservation), while others are applicable only to all-flash VSAN configurations (e.g., failure tolerance method set to capacity).

VM storage policies are essential in VSAN deployments because they define how a VM is deployed on a VSAN datastore. Using VM storage policies, you can define capabilities such as the number of RAID-0 stripe components for a VMDK or the number of RAID-1 mirror copies of a VMDK. If an administrator wants a VM to tolerate one failure but does not want to consume as much capacity as a RAID-1 mirror, a RAID-5 configuration can be used. This requires a minimum of four hosts in the cluster and implements a distributed parity mechanism across the storage of all four hosts. If this were implemented with RAID-1, the amount of capacity consumed would be 200% of the size of the VMDK. Implemented with RAID-5, the amount of capacity consumed is 133% of the size of the VMDK.

Similarly, if an administrator desires a VM to tolerate two failures using a RAID-1 mirroring configuration, there would need to be three copies of the VMDK, meaning the amount of capacity consumed would be 300% the size of the VMDK. With a RAID-6 implementation, a double parity is implemented, which is also distributed across all the hosts. For RAID-6, there must be a minimum of six hosts in the cluster. RAID-6 also allows a VM to tolerate two failures, but only consumes capacity equivalent to 150% the size of the VMDK.
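The capacity figures from the last two paragraphs can be summarized in a small helper; the function is my own illustration, with the percentages taken directly from the text.

```python
# Hypothetical helper summarizing the capacity overhead figures quoted above;
# the ratios come from the chapter, the function name is my own.

def capacity_consumed_pct(ftt, method="RAID-1"):
    """Capacity consumed as a percentage of the VMDK's logical size.

    ftt    -- number of failures to tolerate (1 or 2 for erasure coding)
    method -- "RAID-1" (mirroring) or "RAID-5/6" (erasure coding)
    """
    if method == "RAID-1":
        # Mirroring keeps ftt + 1 full copies of the data.
        return (ftt + 1) * 100
    if method == "RAID-5/6":
        if ftt == 1:
            return 133   # RAID-5: distributed single parity, 4 hosts minimum
        if ftt == 2:
            return 150   # RAID-6: distributed double parity, 6 hosts minimum
    raise ValueError("unsupported combination")

for method, ftt in [("RAID-1", 1), ("RAID-5/6", 1), ("RAID-1", 2), ("RAID-5/6", 2)]:
    print(method, "FTT =", ftt, "->", capacity_consumed_pct(ftt, method), "% of VMDK size")
```

For the same number of tolerated failures, the erasure-coded layouts clearly trade host count for capacity savings.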

Figure 4.2 shows the new policies introduced in VSAN 6.2.

Figure 4.2

Figure 4.2 New VSAN capabilities

The sections that follow highlight where you should use these capabilities when creating a VM storage policy and when to tune these values to something other than the default. Remember that a VM storage policy will contain one or more capabilities.

In the initial release of VSAN, five capabilities were available for selection to be part of the VM storage policy. In VSAN 6.2, as previously highlighted, additional policies were introduced. As an administrator, you can decide which of these capabilities can be added to the policy, but this is, of course, dependent on the requirements of your VM. For example, what performance and availability requirements does the VM have? The capabilities are as follows:

  • Number of failures to tolerate

  • Number of disk stripes per object

  • Failure tolerance method

  • IOPS limit for object

  • Disable object checksum

  • Flash read cache reservation (hybrid configurations only)

  • Object space reservation

  • Force provisioning

The sections that follow describe the VSAN capabilities in detail.

Number of Failures to Tolerate

This section describes number of failures to tolerate with failure tolerance method set to its default value, performance. Later on, we will describe the different behavior when failure tolerance method is set to capacity.

This capability sets a requirement on the storage object to tolerate at least n number of failures in the cluster. This is the number of concurrent host, network, or disk failures that may occur in the cluster and still ensure the availability of the object. When the failure tolerance method is set to its default value of RAID-1, the VM’s storage objects are mirrored; however, the mirroring is done across ESXi hosts, as shown in Figure 4.3.

Figure 4.3

Figure 4.3 Number of failures to tolerate results in a RAID-1 configuration

When this capability is set to a value of n, it specifies that the VSAN configuration must contain at least n + 1 replicas (copies of the data); this also implies that there are 2n + 1 hosts in the cluster.

Note that this requirement will create a configuration for the VM objects that may also contain an additional number of witness components being instantiated to ensure that the VM remains available even in the presence of up to number of failures to tolerate concurrent failures (see Table 4.1). Witnesses provide a quorum when failures occur in the cluster or a decision has to be made when a split-brain situation arises. These witnesses will be discussed in much greater detail later in the book, but suffice it to say that witness components play an integral part in maintaining VM availability during failures and maintenance tasks.
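The replica and host-count rule above can be sketched as a small helper; the function name is my own, and it covers only the default RAID-1 (mirroring) case described here.

```python
# Sketch of the rule from the text: tolerating n failures with RAID-1
# mirroring requires n + 1 replicas and at least 2n + 1 hosts, where the
# extra hosts hold the witness components that provide quorum.

def raid1_requirements(failures_to_tolerate):
    """Return the replica and minimum host counts for a given FTT value."""
    n = failures_to_tolerate
    return {"replicas": n + 1, "minimum_hosts": 2 * n + 1}

print(raid1_requirements(1))  # a 1-failure policy needs 2 replicas across at least 3 hosts
```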

Figure 4.4

Figure 4.4 RAID-5 configuration, a result of failure tolerance method RAID5/6 and number of failures to tolerate set to 1

The RAID-5 and RAID-6 configurations also work with number of disk stripes per object. If a stripe width is specified as part of the policy along with a failure tolerance method of RAID5/6, each of the components on each host is striped in a RAID-0 configuration, and these are in turn placed in either a RAID-5 or RAID-6 configuration.

One final note is in relation to having a number of failures to tolerate setting of zero or three. If you deploy a VM with this policy setting, which includes a failure tolerance method RAID5/6 setting, the VM provisioning wizard will display a warning stating that this policy setting is only effective when the number of failures to tolerate is set to either one or two. You can still proceed with the deployment, but the object is deployed as a single RAID-0 object.

Number of Disk Stripes Per Object

This capability defines the number of physical disks across which each replica of a storage object (e.g., VMDK) is striped. When failure tolerance method is set to performance, this policy setting can be considered in the context of a RAID-0 configuration on each RAID-1 mirror/replica where I/O traverses a number of physical disk spindles. When failure tolerance method is set to capacity, each component of the RAID-5 or RAID-6 stripe may also be configured as a RAID-0 stripe. Typically, when the number of disk stripes per object is defined, the number of failures to tolerate is also defined. Figure 4.5 shows what a combination of these two capabilities could result in, once again assuming that the new VSAN 6.2 policy setting of failure tolerance method is set to its default value RAID-1.

Figure 4.5

Figure 4.5 Storage object configuration when stripe width is set to 2, failures to tolerate is set to 1, and replication method optimizes for is not set

To understand the impact of stripe width, let’s examine it first in the context of write operations and then in the context of read operations.

Because all writes go to the cache device write buffer, the value of an increased stripe width may or may not improve performance. This is because there is no guarantee that the new stripe will use a different cache device; the new stripe may be placed on a capacity device in the same disk group, and thus the new stripe will use the same cache device. If the new stripe is placed in a different disk group, either on the same host or on a different host, and thus leverages a different cache device, performance might improve. However, you as the vSphere administrator have no control over this behavior. The only occasion where an increased stripe width could definitely add value is when there is a large amount of data to destage from the cache tier to the capacity tier. In this case, having a stripe could improve destage performance.

From a read perspective, an increased stripe width will help when you are experiencing many read cache misses, but note that this is a consideration in hybrid configurations only; all-flash VSAN configurations do not have a read cache. Consider the example of a VM deployed on a hybrid VSAN consuming 2,000 read operations per second and experiencing a hit rate of 90%. In this case, there are still 200 read operations that need to be serviced from magnetic disk in the capacity tier. If we assume that a single magnetic disk can provide 150 input/output operations per second (IOPS), it is obvious that one disk cannot service all of those read operations, so an increase in stripe width would help on this occasion to meet the VM I/O requirements. In an all-flash VSAN, if the workload is extremely read intensive, striping across multiple capacity flash devices can also improve performance.
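The arithmetic in this example can be sketched as follows; the helper names are my own, and the 150 IOPS per magnetic disk is the assumption used in the text.

```python
import math

def reads_from_capacity_tier(read_iops, cache_hit_pct):
    """Read IOPS that miss the cache and must be serviced by the capacity tier."""
    return read_iops * (100 - cache_hit_pct) // 100

def min_stripe_width(read_iops, cache_hit_pct, disk_iops=150):
    """Smallest number of magnetic disks needed to absorb those cache misses."""
    return math.ceil(reads_from_capacity_tier(read_iops, cache_hit_pct) / disk_iops)

# The example from the text: 2,000 reads/s at a 90% hit rate leaves 200
# misses, which one 150-IOPS disk cannot absorb, so two stripes are needed.
print(reads_from_capacity_tier(2000, 90))  # 200
print(min_stripe_width(2000, 90))          # 2
```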

In general, the default stripe width of 1 should meet most, if not all, VM workloads. Stripe width is a capability that should be changed only when write destaging or read cache misses are identified as a performance constraint.

IOPS Limit for Object

IOPS limit for object is a new Quality of Service (QoS) capability introduced with VSAN 6.2. This allows administrators to ensure that an object, such as a VMDK, does not generate more than a predefined number of I/O operations per second. This is a great way of ensuring that a “noisy neighbor” virtual machine does not impact other virtual machine components in the same disk group by consuming more than its fair share of resources. By default, VSAN uses an I/O size of 32 KB as a base. This means that a 64 KB I/O will therefore represent two I/O operations in the limits calculation. I/Os that are less than or equal to 32 KB will be considered single I/O operations. For example, 2 × 4 KB I/Os are considered as two distinct I/Os. It should also be noted that both read and write IOPS are regarded as equivalent. Neither cache hit rate nor sequential I/O are taken into account. If the IOPS limit threshold is passed, the I/O is throttled back to bring the IOPS value back under the threshold. The default value for this capability is 0, meaning that there is no IOPS limit threshold and VMs can consume as many IOPS as they want, subject to available resources.
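The normalization rule above can be sketched as a small helper of my own; it assumes each I/O counts as its size divided by the 32 KB base, rounded up, which matches the 64 KB and 4 KB examples in the text.

```python
import math

def normalized_iops(io_sizes_kb, base_kb=32):
    """Weigh a list of I/O sizes against VSAN's 32 KB base for the limits calculation.

    An I/O of 32 KB or less counts as one operation; a 64 KB I/O counts as
    two. Reads and writes are weighted identically, and neither cache hit
    rate nor sequentiality is considered.
    """
    return sum(max(1, math.ceil(size / base_kb)) for size in io_sizes_kb)

print(normalized_iops([64]))    # one 64 KB I/O counts as 2 operations
print(normalized_iops([4, 4]))  # two 4 KB I/Os count as 2 distinct operations
```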

Flash Read Cache Reservation

This capability is applicable to hybrid VSAN configurations only. It is the amount of flash capacity reserved on the cache tier device as read cache for the storage object. It is specified as a percentage of the logical size of the storage object (i.e., VMDK). This is specified as a percentage value (%), with up to four decimal places. This fine granular unit size is needed so that administrators can express sub 1% units. Take the example of a 1 TB VMDK. If you limited the read cache reservation to 1% increments, this would mean cache reservations in increments of 10 GB, which in most cases is far too much for a single VM.
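The granularity argument can be made concrete with a quick calculation; the helper name is my own.

```python
def reserved_cache_gb(vmdk_size_gb, reservation_pct):
    """Flash read cache reserved for an object, given a percentage that the
    policy allows to be specified with up to four decimal places."""
    return vmdk_size_gb * reservation_pct / 100

# The example from the text: on a 1 TB (1,000 GB) VMDK, a full 1% increment
# would reserve 10 GB of cache, which is far too much for a single VM; the
# four-decimal granularity allows much smaller reservations.
print(reserved_cache_gb(1000, 1))    # 10.0 GB
print(reserved_cache_gb(1000, 0.1))  # 1.0 GB
```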

Note that you do not have to set a reservation to allow a storage object to use cache; all VMs equally share the read cache of the cache devices. The reservation should be left unset (the default) unless you are trying to solve a real performance problem and believe dedicating read cache is the solution. If you add this capability to the VM storage policy and set it to a value of 0 (zero), however, you will not have any read cache reserved for the VM that uses this policy. In the current version of VSAN, there is no proportional share mechanism for this resource when multiple VMs are consuming read cache, so every VM consuming read cache shares it equally.

Object Space Reservation

All objects deployed on VSAN are thinly provisioned. This means that no space is reserved at VM deployment time; rather, space is consumed as the VM uses storage. The object space reservation capability defines the percentage of the logical size of the VM storage object (i.e., of the total object address space) that is reserved at initialization, and is the property used to specify a thick-provisioned storage object. If object space reservation is set to 100%, all of the storage capacity requirements of the VM are reserved up front (thick). This will be lazy zeroed thick (LZT) format, not eager zeroed thick (EZT). The difference between the two is that EZT virtual disks are zeroed out at creation time, whereas LZT virtual disks are zeroed out gradually at first write time.

One thing to bring to the readers’ attention is the special case of using object space reservation when deduplication and compression are enabled on the VSAN cluster. When deduplication and compression are enabled, any objects that wish to use object space reservation in a policy must have it set to either 0% (no space reservation) or 100% (fully reserved). Values between 1% and 99% are not allowed. Any existing objects that have object space reservation between 1% and 99% will need to be reconfigured with 0% or 100% prior to enabling deduplication and compression on the cluster.

Force Provisioning

If the force provisioning parameter is set to a nonzero value, the object that has this setting in its policy will be provisioned even if the requirements specified in the VM storage policy cannot be satisfied by the VSAN datastore. The VM will be shown as noncompliant in the VM summary tab and relevant VM storage policy views in the vSphere client. If there is not enough space in the cluster to satisfy the reservation requirements of at least one replica, however, the provisioning will fail even if force provisioning is turned on. When additional resources become available in the cluster, VSAN will bring this object to a compliant state.

One thing that might not be well understood regarding force provisioning is that if a policy cannot be met, VSAN attempts a much simpler placement, reducing the requirements to number of failures to tolerate = 0, number of disk stripes per object = 1, and flash read cache reservation = 0 (on hybrid configurations). This means Virtual SAN will attempt to create an object with just a single copy of data. Any object space reservation (OSR) policy setting is still honored. There is therefore no gradual reduction in capabilities as VSAN tries to find a placement for an object. For example, if the policy contains number of failures to tolerate = 2, VSAN won't attempt an object placement using number of failures to tolerate = 1; instead, it immediately looks to implement number of failures to tolerate = 0.

Similarly, if the requirement was number of failures to tolerate = 1, number of disk stripes per object = 4, but Virtual SAN doesn’t have enough capacity devices to accommodate number of disk stripes per object = 4, then it will fall back to number of failures to tolerate = 0, number of disk stripes per object = 1, even though a policy of number of failures to tolerate = 1, number of disk stripes per object = 2 or number of failures to tolerate = 1, number of disk stripes per object = 3 may have succeeded.
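The fallback behavior described in the last two paragraphs can be sketched as follows; this is my own illustrative pseudologic, not VSAN's actual placement code.

```python
# Illustrative sketch (my own, not VSAN's implementation) of the
# force-provisioning fallback: if the requested policy cannot be satisfied,
# placement drops straight to the simplest configuration rather than
# degrading gradually through intermediate policies.

def effective_policy(requested, can_satisfy):
    """requested: dict of policy settings; can_satisfy: predicate over a policy."""
    if can_satisfy(requested):
        return requested
    fallback = dict(requested)
    fallback.update({
        "failures_to_tolerate": 0,        # a single copy of the data
        "stripes_per_object": 1,
        "flash_read_cache_reservation": 0,
    })
    # object space reservation is still honored, so it is left untouched
    return fallback
```

Note how intermediate combinations (e.g., FTT = 1 with a smaller stripe width) are never tried, matching the "no gradual reduction" behavior described above.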

Caution should be exercised if this policy setting is implemented. Since this allows VMs to be provisioned with no protection, it can lead to scenarios where VMs and data are at risk.

Administrators who use this option to force provision virtual machines need to be aware that although virtual machine objects may be provisioned with only one replica copy (perhaps due to lack of space), once additional resources become available in the cluster, VSAN may immediately consume these resources to try to satisfy the policy settings of virtual machines.

Some commonly used cases where force provisioning is used are (a) when boot-strapping a VSAN management cluster, starting with a single node that will host the vCenter Server, which is then used to configure a larger VSAN cluster, and (b) when allowing the provisioning of virtual machine/desktops when a cluster is under maintenance, such as a virtual desktop infrastructure (VDI) running on VSAN.

Remember that this parameter should be used only when absolutely needed and as an exception. Used by default, it could easily lead to scenarios where VMs, and all the data associated with them, are at risk. Use it with caution!

Disable Object Checksum

VSAN 6.2 introduced this new capability. The feature, which is enabled by default, looks for data corruption (bit rot) and, if any is found, automatically corrects it. Checksums are validated along the complete I/O path: when data is written, the checksum is calculated and stored automatically; upon a read, the checksum of the data is validated, and if there is a mismatch, the data is repaired. VSAN 6.2 also includes a scrubber mechanism, which is configured to run once a year (by default) to check all data on the VSAN datastore; this value can be changed via an advanced host setting, but we recommend leaving it at the default of once a year. In some cases, you may want to disable checksums completely. One reason could be performance, although the overhead is negligible and most customers prefer data integrity over a 1% to 3% performance increase. Another reason is when the application already provides its own checksum mechanism, or the workload simply does not require checksums. In those cases, checksums can be disabled through the disable object checksum capability, which should be set to "Yes" to disable them.

That completes the capabilities overview. Let’s now look at some other aspects of the storage policy-based management mechanism.

VASA Vendor Provider

As part of the VSAN cluster creation step, each ESXi host has a VSAN storage provider registered with vCenter. This uses the vSphere APIs for Storage Awareness (VASA) to surface up the VSAN capabilities to the vCenter Server. The capabilities can then be used to create VM storage policies for the VMs deployed on the VSAN datastore. If you are familiar with VASA and have used it with traditional storage environments, you’ll find this functionality familiar; however, with traditional storage environments that leverage VASA, some configuration work needs to be done to add the storage provider for that particular storage. In the context of VSAN, a vSphere administrator does not need to worry about registering these; these are automatically registered when a VSAN cluster is created.

An Introduction to VASA

VASA allows storage vendors to publish the capabilities of their storage to vCenter Server, which in turn can display these capabilities in the vSphere Web Client. VASA may also provide information about storage health status, configuration info, capacity and thin provisioning info, and so on. VASA gives VMware an end-to-end storage story: storage arrays inform the VASA storage provider of their capabilities, the storage provider informs vCenter Server, and users can then see the storage array capabilities from the vSphere Web Client. Through VM storage policies, these storage capabilities are used in the vSphere Web Client to assist administrators in choosing the right storage in terms of space, performance, and service-level agreement (SLA) requirements. This was true for traditional storage arrays, and it is now true for VSAN as well.

Prior to the release of virtual volumes (VVols), there was a notable difference in workflow between using VASA and VM storage policies with traditional storage versus VSAN. With traditional storage, VASA historically surfaced information about the datastore capabilities, and a vSphere administrator had to choose the appropriate storage on which to place the VM. With VSAN, and now VVols, you define the capabilities you want for your VM storage in a VM storage policy. This policy information is then pushed down to the storage layer, informing it of the requirements you have for storage. VASA will then tell you whether the underlying storage (e.g., VSAN) can meet these requirements, effectively communicating compliance information on a per-storage-object basis. The major difference is that this functionality now works in a bidirectional mode: previously, VASA would just surface up capabilities; now it not only surfaces up capabilities but also verifies whether a VM's storage requirements are being met based on the contents of the policy.

Storage Providers

Figure 4.6 illustrates an example of what the storage provider looks like. When a VSAN cluster is created, the VASA storage provider from every ESXi host in the cluster is registered to the vCenter Server. In a four-node VSAN cluster, the VASA VSAN storage provider configuration would look similar to this.

Figure 4.6

Figure 4.6 VSAN storage providers, added when the VSAN cluster is created

You can always check the status of the storage providers by navigating in the Web Client to the vCenter Server inventory item, selecting the Manage tab and then the Storage Providers view. One VSAN provider should always be online. The other storage providers should be in standby mode. This is all done automatically by VSAN. There is typically no management of the VASA providers required by administrators.

In VSAN clusters that have more than eight ESXi hosts, and thus more than eight VASA storage providers, the list of storage providers is shortened to eight in the user interface (UI) for display purposes. The number of standby storage providers is still displayed correctly; you simply won’t be able to interrogate them.

VSAN Storage Providers: Highly Available

You might ask why every ESXi host registers this storage provider. The reason is high availability. Should one ESXi host fail, another ESXi host in the cluster can take over the presentation of these VSAN capabilities. If you examine the storage providers shown in Figure 4.6, you will see that only one of the VSAN providers is online; the storage providers from the remaining ESXi hosts in the cluster are in a standby state. Should the currently active storage provider go offline or fail for whatever reason (most likely because of a host failure), one of the standby providers will be promoted to active.

There is very little work that a vSphere administrator needs to do with storage providers to create a VSAN cluster. This is simply for your own reference. However, if you do run into a situation where the VSAN capabilities are not surfacing up in the VM storage policies section, it is worth visiting this part of the configuration and verifying that at least one of the storage providers is active. If you have no active storage providers, you will not discover any VSAN capabilities when trying to build a VM storage policy. At this point, as a troubleshooting step, you could consider doing a refresh of the storage providers by clicking on the refresh icon (orange circular arrows) in the storage provider screen.

What should be noted is that the VASA storage providers do not play any role in the data path for VSAN. If storage providers fail, this has no impact on VMs running on the VSAN datastore. The impact of not having a storage provider is lack of visibility into the underlying capabilities, so you will not be able to create new storage policies. However, already running VMs and policies are unaffected.

Changing VM Storage Policy On-the-Fly

Being able to change a VM storage policy on-the-fly is a unique aspect of VSAN. We will use an example to explain how you can change a VM storage policy on-the-fly and how doing so changes the layout of a VM without impacting the application or the guest operating system running in the VM.

Consider the following scenario, briefly mentioned earlier in the context of stripe width. A vSphere administrator has deployed a VM on a hybrid VSAN configuration with the default VM storage policy, which is that the VM storage objects should have no disk striping and should tolerate one failure. The layout of the VM disk file would look something like Figure 4.7.

Figure 4.7

Figure 4.7 VSAN policy with the capability number of failures to tolerate = 1

The VM and its associated applications initially appeared to perform satisfactorily with a 100% cache hit rate; however, over time, an increasing number of VMs were added to the VSAN cluster. The vSphere administrator starts to notice that the VM deployed on VSAN is getting a 90% read cache hit rate. This implies that 10% of reads need to be serviced from magnetic disk/capacity tier. At peak time, this VM is doing 2,000 read operations per second. Therefore, there are 200 reads that need to be serviced from magnetic disk (the 10% of reads that are cache misses). The specifications on the magnetic disks imply that each disk can do 150 IOPS, meaning that a single disk cannot service these additional 200 IOPS. To meet the I/O requirements of the VM, the vSphere administrator correctly decides to create a RAID-0 stripe across two disks.
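The sizing arithmetic in this scenario can be sketched in a few lines (plain Python; the numbers are the ones from the example above):

```python
import math

# Figures from the scenario above
peak_read_iops = 2000     # reads/sec this VM issues at peak
cache_hit_rate = 0.90     # 90% of reads are served from flash cache
disk_iops = 150           # IOPS a single magnetic disk can sustain

# Reads that miss the cache and fall through to the capacity tier
miss_iops = peak_read_iops * (1 - cache_hit_rate)   # ~200 IOPS

# Magnetic disks needed to absorb the misses -> stripe width
stripe_width = math.ceil(miss_iops / disk_iops)     # 2 disks
```

One disk at 150 IOPS cannot service roughly 200 IOPS of cache misses, so a stripe width of 2 is the smallest configuration that meets the requirement.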

On VSAN, the vSphere administrator has two options to address this.

The first option is to simply modify the VM storage policy currently associated with the VM and add a stripe width requirement to the policy; however, this would change the storage layout of all the other VMs using this same policy.

Another approach is to create a brand-new policy that is identical to the previous policy but has an additional capability for stripe width. This new policy can then be attached to the VM (and VMDKs) suffering from cache misses. Once the new policy is associated with the VM, the administrator can synchronize the new/updated policy with the VM. This can be done immediately, or can be deferred to a maintenance window if necessary. If it is deferred, the VM is shown as noncompliant with its new policy. When the policy change is implemented, VSAN takes care of changing the underlying VM storage layout required to meet the new policy, while the VM is still running without the loss of any failure protection. It does this by mirroring the new storage objects with the additional components (in this case additional RAID-0 stripe width) to the original storage objects.

As shown, the VM storage policy can be changed in two ways: either the current VM storage policy can be edited to include the new capability of stripe width = 2, or a new VM storage policy can be created that contains failures to tolerate = 1 and stripe width = 2. The latter is usually preferable because other VMs may be using the original policy, and editing that policy would affect all of them. Once created, the new policy can be associated with the VM and its storage objects in a number of places in the vSphere Web Client. In fact, policies can be changed at the granularity of individual VM storage objects (e.g., a VMDK) if necessary.
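The trade-off between the two workflows can be illustrated with a toy model (plain Python dataclasses, not the real SPBM API; the policy names are made up):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class StoragePolicy:
    name: str
    failures_to_tolerate: int = 1
    stripe_width: int = 1

# The shared policy every VM was deployed with
default_policy = StoragePolicy("VSAN Default")

# Option 1: edit default_policy in place -- every VM using it is affected.
# Option 2 (preferred): clone it with the extra capability and attach the
# clone only to the VM/VMDKs suffering cache misses.
striped_policy = replace(default_policy, name="FTT1-SW2", stripe_width=2)

assert default_policy.stripe_width == 1          # other VMs untouched
assert striped_policy.failures_to_tolerate == 1  # protection preserved
```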

After making the change, the new components reflecting the new configuration (e.g., a RAID-0 stripe) will enter a state of reconfiguring. This will temporarily build out additional replicas or components, in addition to keeping the original replicas/components, so additional space will be needed on the VSAN datastore to accommodate this on-the-fly change. When the new replicas or components are ready and the configuration is completed, the original replicas/components are discarded.
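As a rough, worst-case sketch of the transient space this implies (illustrative numbers, assuming a full set of new components is built before the old ones are discarded):

```python
vmdk_gb = 100                       # size of the VMDK object (example)
ftt = 1                             # failures to tolerate
replicas = ftt + 1                  # RAID-1 keeps ftt + 1 full copies
committed_gb = vmdk_gb * replicas   # space consumed before the change

# During reconfiguration, old and new components coexist, so budget up to
# another full set of components as temporary headroom on the datastore.
transient_peak_gb = committed_gb * 2
```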

Note that not all policy changes require the creation of new replicas or components. For example, adding an IOPS limit, or reducing the number of failures to tolerate, or reducing space reservation does not require this. However, in many cases, policy changes will trigger the creation of new replicas or components.

Your VM storage objects may now reflect the changes in the Web Client, for example, a RAID-0 stripe as well as a RAID-1 replica configuration, as shown in Figure 4.8.

Figure 4.8

Figure 4.8 VSAN RAID-0 and RAID-1 configuration

Compare this to the tasks you may have to perform on many traditional storage arrays to achieve this. It would involve, at the very least, the following:

  • The migration of VMs from the original datastore.

  • The decommissioning of the said LUN/volume.

  • The creation of a new LUN with the new storage requirements (different RAID level).

  • Possibly the reformatting of the LUN with VMFS in the case of block storage.

  • Finally, you have to migrate your VMs back to the new datastore.

In the case of VSAN, after the new storage replicas or components have been created and synchronized, the older storage replicas and/or components will be automatically removed. Note that VSAN is capable of striping across disks, disk groups, and hosts when required, as depicted in Figure 4.8, where stripes S1a and S1b are located on the same host but stripes S2a and S2b are located on different hosts. It should also be noted that VSAN can create the new replicas or components without the need to move any data between hosts; in many cases the new components can be instantiated on the same storage on the same host.

We have not shown that there are, of course, additional witness components that could be created with such a change to the configuration. For a VM to continue to access all its components, a full replica copy of the data must be available and more than 50% of the components (votes) of that object must also be available in the cluster. Therefore, changes to the VM storage policy could result in additional witness components being created, or indeed, in the case of introducing a policy with less requirements, there could be fewer witnesses.
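The availability rule just described can be expressed as a small predicate (a sketch of the rule as stated in the text, not VSAN's actual implementation):

```python
def object_accessible(full_replicas_available: int,
                      votes_available: int,
                      votes_total: int) -> bool:
    """An object stays accessible only if at least one full replica
    survives AND more than 50% of its votes are still present."""
    return full_replicas_available >= 1 and 2 * votes_available > votes_total

# FTT=1 mirror: two replicas plus one witness, one vote each (3 votes total)
assert object_accessible(1, 2, 3)        # replica + witness -> accessible
assert not object_accessible(1, 1, 3)    # lone replica lacks vote majority
assert not object_accessible(0, 2, 3)    # vote majority but no full replica
```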

You can actually see the configuration changes taking place in the vSphere UI during this process. Select the VM that is being changed, click its manage tab, and then choose the VM storage policies view, as shown in Figure 4.9. Although this view does not show all the VM storage objects, it does display the VM home namespace, and the VMDKs are visible.

Figure 4.9

Figure 4.9 VM Storage Policy view in the vSphere client showing component reconfiguration

In VSAN 6.0, there is also a way to examine all resyncing components. Select the VSAN cluster object in the vCenter inventory, then select monitor, Virtual SAN, and finally “resyncing components” in the menu. This will display all components that are currently resyncing/rebuilding. Figure 4.10 shows the resyncing dashboard view, albeit without any resyncing activity taking place.

Figure 4.10

Figure 4.10 Resyncing activity as seen from the vSphere Web Client

Objects, Components, and Witnesses

A number of new concepts have been introduced in this chapter so far, including some new terminology. Chapter 5, “Architectural Details,” covers in greater detail objects, components, and indeed witness disks, as well as which VM storage objects are impacted by a particular capability in the VM storage policy. For the moment, it is enough to understand that on VSAN, a VM is no longer represented by a set of files but rather a set of storage objects. There are five types of storage objects:

  • VM home namespace

  • VMDKs

  • VM swap

  • Snapshot delta disks

  • Snapshot memory

Although the vSphere Web Client displays only the VM home namespace and the VMDKs (hard disks) in the VM > monitor > policies > physical disk placement, snapshot deltas and VM swap can be viewed in the cluster > monitor > Virtual SAN > virtual disks view. We will also show ways of looking at detailed views of all the storage objects, namely delta and VM swap, in Chapter 10, “Troubleshooting, Monitoring, and Performance,” when we look at various monitoring tools available to VSAN.

VM Storage Policies

VM storage policies work in a similar fashion to the storage profiles introduced in vSphere 5.0, insofar as you simply build a policy containing your VM provisioning requirements. There is, however, a major difference in how storage policies work when compared to the original storage profiles feature. With storage profiles, the requirements in the policy were used only to select an appropriate datastore when provisioning the VM. Storage policies not only select the appropriate datastore, but also inform the underlying storage layer that there are certain availability and performance requirements associated with the VM. So while the VSAN datastore may be the destination datastore when a VM is provisioned with a VM storage policy, settings within the policy stipulate additional requirements. For example, it may state that this VM requires a number of replica copies of the VM files for availability, a stripe width and read cache requirement for high performance, and a thin provisioning requirement.

VM storage policies are held inside VSAN, as well as being stored in the vCenter inventory database. Every object stores its policy inside its own metadata. This means that vCenter is not required for VM storage policy enforcement. So if for some reason the vCenter Server is unavailable, policies can continue to be enforced.

Enabling VM Storage Policies

In the initial release of VSAN, VM storage policies could be enabled or disabled via the UI. This option is not available in later releases. However, VM storage policies are automatically enabled on a cluster when VSAN is enabled on the cluster. Although VM storage policies are normally only available with certain vSphere editions, a VSAN license will also provide this feature.

Creating VM Storage Policies

vSphere administrators have the ability to create multiple policies. As already mentioned, a number of VSAN capabilities related to availability and performance are surfaced by VASA, and it is at this point that the administrator must decide what the requirements are for the applications running inside the VMs from a performance and availability perspective. For example, how many component failures (hosts, network, and disk drives) should this VM tolerate while continuing to function? Is the application running in this VM demanding from an IOPS perspective? If so, an adequate read cache should be provided as a possible requirement so that the performance requirement is met. Other considerations include whether the VM should be thinly or thickly provisioned, whether RAID-5 or RAID-6 configurations are desired to save storage space, whether checksum should be disabled, and whether an IOPS limit is required for a particular VM to avoid a "noisy neighbor" situation.

Another point to note is that since vSphere 5.5, policies also support the use of tags for provisioning. Therefore, instead of using VSAN datastore capabilities for the creation of requirements within a VM storage policy, tag-based policies may also be created. The use of tag-based policies is outside the scope of this book, but further information may be found in the generic vSphere storage documentation.

Assigning a VM Storage Policy During VM Provisioning

The assignment of a VM storage policy is done during the VM provisioning. At the point where the vSphere administrator must select a destination datastore, the appropriate policy is selected from the drop-down menu of the available VM storage policies. The datastores are then separated into compatible and incompatible datastores, allowing the vSphere administrator to make the appropriate and correct choice for VM placement.

This matching of datastores does not necessarily mean that the datastore will meet the requirements in the VM storage policy. What it means is that the datastore understands the set of requirements placed in the policy. It may still fail to provision this VM if there are not enough resources available to meet the requirements placed in the policy. However, if a policy cannot be met, the compatibility section in the lower part of the screen displays a warning that states why a policy may not be met.

This three-node cluster example shows a policy that contains a number of failures to tolerate = 2. A three-node cluster cannot meet this policy, but when the policy was originally created, the VSAN datastore shows up as a matching resource as it understood the contents of the policy. However, on trying to use this policy when deploying a VM, the VSAN datastore shows up as noncompliant, as Figure 4.11 demonstrates.
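The reason a three-node cluster cannot satisfy this policy follows from the standard RAID-1 placement rule (ftt + 1 replicas plus ftt witnesses, each on a separate host), sketched here:

```python
def min_hosts_raid1(ftt: int) -> int:
    # ftt + 1 full replicas plus ftt witnesses, each on its own host
    return 2 * ftt + 1

assert min_hosts_raid1(1) == 3   # the default policy fits a 3-node cluster
assert min_hosts_raid1(2) == 5   # failures to tolerate = 2 needs 5 hosts
```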

Figure 4.11

Figure 4.11 The VSAN datastore is shown as noncompliant when a policy cannot be met

This is an important point to keep in mind: Just because VSAN tells you it is compatible with a particular policy when that policy is originally created, this in no way implies that it can deploy a VM that uses the policy.

Summary

You may have used VM storage profiles in the past. VM storage policies differ significantly. Although we continue to use VASA, the vSphere APIs for Storage Awareness, VM storage policies have allowed us to switch the storage characteristics away from datastores and to the VMs. VMs, or more specifically the applications running in VMs, can now specify their own requirements using a policy that contains underlying storage capabilities around performance, reliability, and availability.

 

VMware vSAN datastore not listed as a matching datastore when creating a storage based policy (2065479)

Symptoms

When creating a virtual machine storage based policy, you may experience one or more of these symptoms:

  • No matching vSAN (formerly known as Virtual SAN) datastores are listed in the matching resources page of the Storage Based Policy creation wizard.
  • You are unable to create the storage based policy.
Cause

This issue occurs due to inconsistent data between the Storage Management Service (SMS) and Storage Policy-Based Management (SPBM).

Resolution

This is a known issue affecting VMware vSAN 5.5.x. Currently, there is no resolution.

To work around the issue:
  1. In the vSphere Web Client navigate to vCenter Server > Manage > Storage Providers.
  2. Perform these actions on each of the vSAN vSphere Storage APIs (VASA) providers:

    1. Synchronize all vSAN providers by clicking on the red circular arrows.
    2. Rescan the storage providers for new storage systems and capabilities by clicking on the icon to the immediate right of the circular red arrows.

  3. Create your new virtual machine storage based policy and select vSAN as the matching datastore.
Related Information
For more information on storage providers and policy management, see the Virtual SAN and Storage Policy-Based Management section in the VMware vSphere 5.5 Storage Guide.
 




Aug
25

The Veeam Backup Virtual Labs model:

Virtual Lab

The virtual lab is an isolated virtual environment in which Veeam Backup & Replication verifies VMs. In the virtual lab, Veeam Backup & Replication starts VMs from the application group and the verified VM. The virtual lab is used not only for the SureBackup verification procedure, but also for U-AIR and On-Demand Sandbox.

The virtual lab does not require that you provision extra resources for it. You can deploy the virtual lab on any ESX(i) host in your virtual environment.

The virtual lab is fully fenced off from the production environment. The network configuration of the virtual lab mirrors the network configuration of the production environment. For example, if verified VMs and VMs from the application group are located in two logical networks in the production environment, the virtual lab will also have two networks. The networks in the virtual lab will be mapped to corresponding production networks.

VMs in isolated networks have the same IP addresses as in the production network. This lets VMs in the virtual lab function just as if they function in the production environment.

Virtual Lab

 

Building Virtual Labs on vCloud and vSphere (older version)

Protecting vSphere with Veeam Hands on Lab

Lab Duration: Approximately 1 hour

Lab Skill level: Intermediate

Overview

In this lab you take the role of an administrator at a small company that has just deployed VMware vSphere.
However, due to a growing backup window and expensive software renewal costs, a decision was made to deploy Veeam Backup and Replication in your test environment as a pilot.
The goal of this lab is for you to configure a newly installed Veeam server to protect a test virtual machine. In addition, you will also need to do a test restore of a Word document from a Windows VM.

Prerequisites

It is recommended that lab users have some familiarity with VMware vCloud Director, therefore Lab 0 – Intro to vCloud Director is recommended before taking any other labs.

Tasks Include:

  • Creating a Veeam Backup Job
  • Monitoring a Backup Job
  • Restoring a Virtual Machine
  • Restoring a File
  • Replicating a Virtual Machine to a DR ESXi Host
  • Failover from our production host to our DR host

For the best experience use a device with two monitors. Ideally you would use one screen for displaying this lab manual, and the other for the VMware View Desktop/vCloud Lab environment. If two monitors are not available, other suggestions include using an iPad or other tablet for the lab manual or working with another person and using their device for one thing and yours for the other. Printed lab manuals may be available if none of the above are possible.

Step 1 – Login and Lab Deployment

This lab will leverage VMware vCloud, in order to access the lab you will need to navigate to this URL:
https://www.vcloudlab.net/cloud/org/<username>/
(your username and password was in your welcome email)
Login with the username and password assigned to you or your group.

Once logged in, you will need to deploy the "Lab 2 – Protecting vSphere with Veeam" vApp. To do this, click the "Add vApp from Catalog" button. Next, select "Public Catalogs" from the "Look in:" drop-down menu. You should now see a list of the available labs; select "Lab 2 – Protecting vSphere with Veeam" and click "Next". You can now name the vApp if you would like, or leave it at the default. The lease settings can be left alone. Click "Finish" to deploy your lab.

vCloud Director will now deploy all of the virtual machines and networking components necessary for your lab. This process should not take more than 10 seconds.
Please proceed to the next step.

Step 2 – Power Up your Lab

Once the lab has been provisioned it will be in a “Stopped” state, to power it up simply click the green play button on the vApp.

Because vCenter has many services to start, it will take about five minutes to fully boot up. We can review some lab info while we wait.

Step 3 – Lab Information and Setup

This lab is built on top of VMware vCloud Director. vCloud Director leverages VMware vSphere and adds a layer of abstraction so that resources can be offered to consumers through a self service interface, and without them needing knowledge of how those resources are configured on the backend.

This technology is the same thing you would get from a vCloud Powered VMware Service Provider, and can be leveraged for anything from test/dev environments to mission critical business applications.

This lab consists of several pieces:
1.) A Veeam Server – 192.168.2.10 – Credentials: administrator / vmware1
2.) 2 ESXi 5.1 Servers – 192.168.2.20 & 192.168.2.25 – Credentials: root / vmware1
3.) A test VM — this is a virtual machine running inside of one of the ESXi hosts

How the lab works

At this point your lab should be getting close to finishing its boot process, before we start let’s take a minute and explain how you will interact with your lab.
The ‘VeeamServer’ virtual machine will be where we spend all of our time in this lab. Because the lab is 100% isolated, the only way to access anything in your lab is through the VeeamServer console.

VMware vCloud Director can be accessed from a VMware View desktop, or directly from your personal machine, as the www.vcloudlab.net URL is publicly accessible. However, for the best video performance, using the VMware View desktop is preferred. Check your welcome email for more information on how to access the View desktops.

Step 4 – Opening the VeeamServer’s Console

Let’s first open up the vApp so that we can see our individual virtual machines. Click open on the vApp.

Next, click once on the VeeamServer VM, then click again on the console thumbnail. This will open the console of the VM. Click continue or accept on any SSL warnings. You may also need to allow popups in the browser.

NOTE: You will need to click “Allow” on the Remote Console plugin the first time that you open a console. It will appear at the bottom of the Internet explorer window at the same time that a popup box telling you to install it appears. YOU DO NOT need to install it, just click allow at the bottom as seen in the following screenshot.


Note: it can take up to 3 attempts before the console will open, due to popups and allowing the add-on to run; this will largely depend on whether you are using a View desktop or your local machine.

Step 5 – Login to the Veeam Server

At this point you have the VeeamServer console open and probably see the Windows 2008 R2 "Press CTRL ALT Delete to login" screen. The best way to do this is to use the CTRL+ALT+DEL button in the top right corner of the console window. Click that button, then log in with administrator / vmware1.

Step 6 – Launch Veeam Backup and Replication

On the desktop you will find a Veeam icon; double-click it and let the interface load. Now find the "Infrastructure" section in the left menu. Click on it and look in the main body to the right. You will see that both ESXi servers have been added to Veeam. 192.168.2.20 will serve as our main server, and 192.168.2.25 will serve as our DR server that we will replicate to.

The default “c:\backups” backup repository is what we will use for backups. Normally you would need to create a backup repository before doing a backup. For Lab purposes we don’t need to.

Step 7 – Create a Backup Job

Click on the “Backup & Replication” section in the left menu. Right now you will see nothing in the right box because we have not created any jobs yet.
Click the “Backup Job” icon at the top of the Veeam Interface.

The job wizard will appear. On the first page name your job “File Server Backup”, then click next.

On page two we need to select which VMs we want to back up. Click the "Add" button on the right side of the wizard. Now click the + beside 192.168.2.20; it will take a moment to expand. Once it does, select the "FileServer" VM, then click Add.

After you click on the FileServer VM, click OK and you should see it added to the list, like below.

On the “Storage” page of the wizard we can select which backup repository the data will go to along with how long to retain a restore point. There are also advanced settings that we will not cover in this lab that can be adjusted here. You can leave all of the settings at their default and just click Next to proceed.

The Guest Processing screen has options that need to be set up if we are backing up a VM that has Microsoft VSS capabilities. Check the box next to "Enable application-aware image processing" and then fill in the username and password below with administrator / vmware1.

Next we need to setup our backup schedule. Check the “Run this job automatically” option and then review the different types of schedules. For Lab purposes we can just leave these settings at the defaults.

On this final screen we can select the "Run the job when I click Finish" option and then select Finish. Veeam will now create your backup job and run the first backup. Back in the main window you will see your new job, and its status will start at 0% complete.

Step 8 – Monitoring a job

Right Click on the job and select the “Statistics” option. Next click the “Show Details” button.

This screen will show you the overall details of the job, if you wish to see more details about a specific VM in that job click on it in the left pane. If you click on the “FileServer” VM you will see which part of the job is currently processing and how fast.

The job will take about 10 minutes to complete (this is very dependent on the number of lab users and may take longer if the lab load is heavy). Exit the status window and proceed to the next slide.

Step 9 – Creating a Replication Job

While our backup job is running we can go ahead and create our replication job. The goal is to replicate the FileServer VM from Prod-ESXi1 to DR-ESXi. Click on the "Home" tab at the top of the Veeam interface, and then select "Replication Job".

On the first page of the wizard we can name the replication job. I used “Replication Job 1” but you can use whatever you want. The check boxes at the bottom are more advanced settings and will not be used in this lab. Proceed to the next step of the wizard.

The next step is to add a virtual machine to the replication job. This is the same process as adding a VM to a backup job. Click add, then select the FileServer VM. Then click Next.

The next step is much different from a backup job. Here we select the destination host, instead of a destination backup repository. Click “Choose” on the “Host or Cluster” box and select the 192.168.2.25 ESXi host. All of the other options will auto populate. Then click Next.

The job settings section would normally be where you change your local and remote Veeam Proxy servers, but for lab purposes we can leave everything at its default settings. Click Next to continue.

You should now be on the Guest Processing page. Here you should click "Enable application-aware image processing" and enter administrator / vmware1 as the credentials. Then click Next to set the schedule.

Here on the schedule screen we can setup a replication schedule just like we can do for a backup job. I selected daily at 10pm, but for lab purposes it really doesn’t matter.

After setting the replication schedule you will be on the summary page of the wizard. Click Finish so that Veeam creates the job. You should then be back at the jobs page and be able to see the backup and the replication job. If the backup job has completed, right click on the replication job and select “Run Now”. If the backup job is not done yet wait until it is before proceeding.

Step 10 – Delete a Known File

Open up Windows Explorer by clicking the "folder" icon in the task bar to the right of the Start menu. In the address bar, type \\192.168.2.30\files. If asked to log in, use administrator / vmware1.

You should see one file called “RestoreMe” open that file and verify that it has text in it, then close it.

Right click on that file and select delete, the folder should now be empty.

Step 11 – Restore the File

Head back over to the Veeam interface. In the top icon bar you should see an icon called "Restore"; click it. We will be restoring the file from the local backup, but notice that we can even restore files from the replicas on our DR server. Click "Guest files (Windows)" in the left menu, then click Next.

Next, expand the "FileServer" backup job and select the "FileServer" VM so Veeam knows which VM we want to restore files for. Then click Next.

If you had multiple restore points this next screen would allow you to drill down into exactly which one you wanted to use. Click next.

The next screen allows you to enter a reason for the restore. This is not required and is strictly for logging/documentation purposes if you ever needed to know why a restore was done. Click Next.

After clicking Next you will be on the "Completing the Restore Wizard" page. Don't worry, we are not restoring the entire VM. After clicking "Finish", Veeam does work behind the scenes and will then present an explorer-like interface for us to find the files we want to restore. Click Finish.

Selecting your files from the Backup

Once the explorer interface opens, navigate to the “C” drive on the left and then click the “files” folder. Inside you should see the “RestoreMe” file, click on it and you will see the buttons change in the ribbon at the top. Click the “Restore” icon, which will restore the file back to its original location.

The file will be restored to its original location and you will see a status window popup.

Step 12 – Check the restored file

Let’s now check to make sure the file has been restored and that the data is in place. Open up Windows Explorer and in the address bar type in: \\192.168.2.30\files
You should see the RestoreMe file is back, open it and see if it has the same data inside of it. If it does then your restore worked successfully.

Step 13 – Failing over to your DR Server

In the next few steps we are going to leverage the Veeam Replication job that you created to start up the FileServer VM on the DR-ESXi host. Before we can do that make sure that Veeam has completed the replication job that we created, and that its status is success. Once it has completed we can proceed.

Before we can initiate a failover to our DR site server we need to power off “FileServer” from the ESXi1 host.
Use the vSphere client on the desktop to login to: 192.168.2.20
Credentials: root / vmware1

Then right-click on FileServer and select "Power > Power Off".

In the top icon bar click “Restore” again, just like we did before.

This time select “Failover to replica” on the right side and then click next.

On the next screen, press the "Add VM" button on the right, then select "From Replica". A box will pop up where we can select "FileServer" after expanding the replication job we created.

Click OK after selecting “FileServer”

On the next screen we can again fill in a reason of why we are failing over. This is not required, Click Next.
And then click finish. You will now see a progress box appear with the process status. If you leave this box open you will eventually see “Failover completed Successfully”. You can now close the box.

Step 14 – Verify Failover

At this point we can login to both of our ESXi servers and see that FileServer is now powered off on the 192.168.2.20 host and “FileServer_replica” is now powered on at the DR host 192.168.2.25

A second way to verify is to go back to \\192.168.2.30\data and look at the time stamp of the RestoreMe file.

Congratulations! You have completed Lab 2

If you still have time remaining feel free to explore the lab environment and vCloud Director.

More highlights about Veeam include:

  • Ability to failback from DR
  • Ability to do an instant VM recovery
  • Multiple backup proxies for parallel job execution
  • SureBackup can test your backups after every backup

Housekeeping / Lab Cleanup

When you are done with the lab, you can delete it from the HOL Cloud. To do this click on “Home” in the top left area of the vCloud interface. Next find your Lab vApp and press the Red Stop Button, after the vApp has stopped you can right click on it and select “Delete”.

Please note that your HOL Cloud account can only provision a small number of VMs at the same time, so you will need to delete one lab before starting another or provisioning will fail.


Read more »



Aug
25
Default Passwords for VMware and EMC
Posted by Thang Le Toan on 25 August 2017 10:25 PM

Default Passwords

Here is a collection of default password to save you time googling for them:

EMC Secure Remote Support (ESRS) Axeda Policy Manager Server:
•Username: admin
•Password: EMCPMAdm7n

EMC VNXe Unisphere (EMC VNXe Series Quick Start Guide, step 4):
•Username: admin
•Password: Password123#

EMC vVNX Unisphere:
•Username: admin
•Password: Password123#
NB You must change the administrator password during this first login.

EMC CloudArray Appliance:
•Username: admin
•Password: password
NB Upon first login you are prompted to change the password.

EMC CloudBoost Virtual Appliance:
https://:4444
•Username: localadmin
•Password: password
NB You must immediately change the admin password.
$ password

EMC Ionix Unified Infrastructure Manager/Provisioning (UIM/P):
•Username: sysadmin
•Password: sysadmin

EMC VNX Monitoring and Reporting:
•Username: admin
•Password: changeme

EMC RecoverPoint:
•Username: admin
Password: admin
•Username: boxmgmt
Password: boxmgmt
•Username: security-admin
Password: security-admin

EMC XtremIO:

XtremIO Management Server (XMS)
•Username: xmsadmin
password: 123456 (prior to v2.4)
password: Xtrem10 (v2.4+)

XtremIO Management Secure Upload
•Username: xmsupload
Password: xmsupload

XtremIO Management Command Line Interface (XMCLI)
•Username: tech
password: 123456 (prior to v2.4)
password: X10Tech! (v2.4+)

XtremIO Management Command Line Interface (XMCLI)
•Username: admin
password: 123456 (prior to v2.4)
password: Xtrem10 (v2.4+)

XtremIO Graphical User Interface (XtremIO GUI)
•Username: tech
password: 123456 (prior to v2.4)
password: X10Tech! (v2.4+)

XtremIO Graphical User Interface (XtremIO GUI)
•Username: admin
password: 123456 (prior to v2.4)
password: Xtrem10 (v2.4+)

XtremIO Easy Installation Wizard (on storage controllers / nodes)
•Username: xinstall
Password: xiofast1

XtremIO Easy Installation Wizard (on XMS)
•Username: xinstall
Password: xiofast1

Basic Input/Output System (BIOS) for storage controllers / nodes
•Password: emcbios

Basic Input/Output System (BIOS) for XMS
•Password: emcbios

EMC ViPR Controller :
http://ViPR_virtual_ip (the ViPR public virtual IP address, also known as the network.vip)

•Username: root
Password: ChangeMe

EMC ViPR Controller Reporting vApp:
http://:58080/APG/

•Username: admin
Password: changeme

EMC Solutions Integration Service:
https://:5480

•Username: root
Password: emc

EMC VSI for VMware vSphere Web Client:
https://:8443/vsi_usm/
•Username: admin
•Password: ChangeMe

Note:
After the Solutions Integration Service password is changed, it cannot be modified.
If the password is lost, you must redeploy the Solutions Integration Service and use the default login ID and password to log in.

Cisco Integrated Management Controller (IMC) / CIMC / BMC:
•Username: admin
•Password: password

Cisco UCS Director:
•Username: admin
•Password: admin
•Username: shelladmin
•Password: changeme

Hewlett Packard P2000 StorageWorks MSA Array Systems:
•Username: admin
•Password: !admin (exclamation mark ! before admin)
•Username: manage
•Password: !manage (exclamation mark ! before manage)

IBM Security Access Manager Virtual Appliance:

•Username: admin
•Password: admin

VCE Vision:
•Username: admin
•Password: 7j@m4Qd+1L
•Username: root
•Password: V1rtu@1c3!

VMware vSphere Management Assistant (vMA):
•Username: vi-admin
•Password: vmware

VMware Data Recovery (VDR):
•Username: root
•Password: vmw@re (make sure you enter @ as Shift-2 as in US keyboard layout)

VMware vCenter Hyperic Server:
https://Server_Name_or_IP:5480/
•Username: root
•Password: hqadmin

https://Server_Name_or_IP:7080/
•Username: hqadmin
•Password: hqadmin

VMware vCenter Chargeback:
https://Server_Name_or_IP:8080/cbmui
•Username: root
•Password: vmware

VMware vCenter Server Appliance (VCSA) 5.5:
https://Server_Name_or_IP:5480
•Username: root
•Password: vmware

VMware vCenter Operations Manager (vCOPS):

Console access:
•Username: root
•Password: vmware

Manager:
https://Server_Name_or_IP
•Username: admin
•Password: admin

Administrator Panel:
https://Server_Name_or_IP/admin
•Username: admin
•Password: admin

Custom UI User Interface:
https://Server_Name_or_IP/vcops-custom
•Username: admin
•Password: admin

VMware vCenter Support Assistant:
http://Server_Name_or_IP
•Username: root
•Password: vmware

VMware vCenter / vRealize Infrastructure Navigator:
https://Server_Name_or_IP:5480
•Username: root
•Password: specified during OVA deployment

VMware ThinApp Factory:
•Username: admin
•Password: blank (no password)

VMware vSphere vCloud Director Appliance:
•Username: root
•Password: vmware

VMware vCenter Orchestrator :
https://Server_Name_or_IP:8281/vco – VMware vCenter Orchestrator
https://Server_Name_or_IP:8283 – VMware vCenter Orchestrator Configuration
•Username: vmware
•Password: vmware

VMware vCloud Connector Server (VCC) / Node (VCN):
https://Server_Name_or_IP:5480
•Username: admin
•Password: vmware
•Username: root
•Password: vmware

VMware vSphere Data Protection Appliance:
•Username: root
•Password: changeme

VMware HealthAnalyzer:
•Username: root
•Password: vmware

VMware vShield Manager:
https://Server_Name_or_IP
•Username: admin
•Password: default (type “enable” to enter Privileged Mode; the password there is ‘default’ as well)

Teradici PCoIP Management Console:
•The default password is blank

Trend Micro Deep Security Virtual Appliance (DS VA):
•Login: dsva
•password: dsva

Citrix Merchandising Server Administrator Console:
•User name: root
•password: C1trix321

TP-Link ADSL modem / router, Wi-Fi :
•User name: admin
•password: admin

VMTurbo Operations Manager:
•User name: administrator
•password: administrator
If DHCP is not enabled, configure a static address by logging in with these credentials:
•User name: ipsetup
•password: ipsetup
Console access:
•User name: root
•password: vmturbo


Read more »



Jul
21
VMware vSphere Integrated Containers Hands-on Lab
Posted by Thang Le Toan on 21 July 2017 03:26 AM

Hands-on Labs are the fastest and easiest way to test-drive the full technical capabilities of VMware products. These evaluations are free, up and running on your browser in minutes, and require no installation.

vSphere Integrated Containers

This lab explores vSphere Integrated Containers with Docker Linux containers and VMware vSphere.  HOL-1730-USE-1-MYVMW-HOL . 2:15 hrs (link: https://my.vmware.com/en/web/vmware/evalcenter?p=vic-17-hol&src=em_59512367d1671&cid=70134000001K4s5 )

 


Read more »



Jul
12

Micro segmentation is probably the number one reason for companies with vSphere to purchase NSX. This feature inserts a packet filter in between your VMs. A filter you can configure centrally, link to single VMs, groups of VMs, kinds of VMs and specify according to your needs. As your data security on the internet is, involuntarily, tested on a daily basis, it’s not a question of IF but rather WHEN you will face a breach of your security. Micro segmentation can be your saviour at that moment, as it restrains the attacker to the compromised host. Compare it to your house. With a generic firewall, you just lock the front door. If a burglar gets in, he can walk right into your living room and nick your TV. With micro segmentation, every door in your house is closed and properly locked. If your front door is broken down, the unwanted guest is still limited to the hallway.

Recently a lot of rumors went around the interwebs about NSX. It was said that for the deployment of micro segmentation, you need a full NSX deployment. That is, you need to deploy the appliance, including all the planes, maybe even an edge router, the works. And it would take you years to install, let alone configure. Well, that is not correct. To deploy NSX just for micro segmentation, you only need to deploy the NSX Manager appliance and connect it to your vCenter. That’s all! No need for distributed switches, no need for a full-blown NSX deployment. So, to be clear, for micro segmentation:

  • You only need to deploy the NSX Manager VM
  • You don’t need virtual distributed switches (and if you are on vSphere 6, NSX lets you use VDS without Enterprise Plus)
  • You don’t have to have vSphere 6, it also works with vSphere 5.1 and 5.5
  • You don’t need expensive core switches or any other physical network gear; it’s all virtual
  • You do need to have the vCenter Server Appliance and use the web client (it really got incredibly better in v6!)
  • You need about 12 gigs of RAM, 4 vCPUs, 60 gigs of storage and 1 vNIC for deployment

But then you’re good to go! I will not be going into configuration of policies in this post. That is food for another post. In this post we just go through the installation motions to get micro segmentation available to you in vSphere 5.x or 6.

So, to be clear on how to do it, I went along and just did it myself in my lab environment. Now, my lab currently consists of 2 hosts, but for demonstration purposes it is enough. Both run vSphere 6, vCenter Server Appliance v6 on iSCSI storage. I do use distributed switches but as mentioned before, this is not a requirement. When you first log into the vCenter Server Appliance, it looks like this.

00 vcenter start

No mention of firewalls anywhere.

NSX comes as an OVA file. I’m assuming you all know how to download and deploy an OVA. One remark on this. When you deploy OVAs with the vCenter Appliance, you need to install the Client Integration Plugin (CIP) on your management desktop first. And there is a little trick with that if you do. It took me a while to figure out what went wrong, which is why I am telling you in advance: the CIP modifies the HOSTS file in your Windows directory with an entry pointing to localhost, to which it later connects. If your install does not modify the HOSTS file correctly, CIP will not install or will not start properly, and you will not be able to deploy OVA files with the vCenter Appliance. The easiest way to work around the problem is to take an account with sufficient rights, browse to the HOSTS file in your Windows folder and remove the READ ONLY flag before you install CIP. After the installation, you can set the READ ONLY flag back.
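If you want to script that workaround instead of clicking through Explorer, here is a minimal Python sketch. It toggles the write bit, which Windows maps to the read-only attribute; the hosts-file path is the standard Windows location, and this is my own helper, not part of any VMware tooling.

```python
import os
import stat

HOSTS = r"C:\Windows\System32\drivers\etc\hosts"  # standard Windows location

def set_read_only(path: str, read_only: bool) -> None:
    """Set or clear the write bit; Windows exposes this as the READ ONLY flag."""
    mode = os.stat(path).st_mode
    new_mode = (mode & ~stat.S_IWRITE) if read_only else (mode | stat.S_IWRITE)
    os.chmod(path, new_mode)

# Clear the flag before installing CIP, restore it afterwards:
# set_read_only(HOSTS, False)
# ... install the Client Integration Plugin ...
# set_read_only(HOSTS, True)
```

Note that this only changes the read-only attribute; it deliberately leaves the file's other access rights alone, for the reason mentioned below.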

01 hostsfile readonly

The hosts file can be found in your Windows folder, usually at C:\Windows\System32\drivers\etc\hosts. Take care that you do not modify the access rights of the file, otherwise Windows will refuse to read it and work with its entries. After installing CIP, you can go ahead and deploy the NSX OVA file. The version I downloaded is 6.1.4, the most recent version at the time of writing.

02b nsx ova deployment

Mark the checkbox on top for the extra configuration options to proceed. At this point it might be a good idea to talk about the network settings. The NSX Manager VM has just one IP address and it connects to your vCenter Server Appliance. Although I did not see it mentioned in the documentation, it’s probably a good idea to create your DNS entries for the appliance before you deploy. As my lab environment has a somewhat flaky DNS implementation, it’s less essential there, but as you will probably move this into production in time, this is a good moment to do so. The next screens zoom in on where in your cluster you want the appliance to be created, where you want it to be stored and, last but not least, the network settings.

03 nsx ova deployment network

After the IP information, you are asked (but not required) to enter the NTP server and whether you would like to enable SSH. I left the last checkbox blank for now. After this deployment, we are ready to power it up and go and have a cup of coffee, because the NSX appliance, like the vCSA, takes a couple of ages to boot for the first time. When it’s done doing so, you can start your browser and open the secure NSX page on the new VM: https://<your_nsx_ip> or https://<dns_hostname>. If all went well, you will get a login page where you need to log in with username “admin” and the password you defined earlier during the OVA deployment. After the obligatory browser-moaning about the self-signed certificate (commercial ones are not supported, yet) you are presented with the NSX appliance console, which is pretty compact.

04 nsx controller console

To move on, select “Manage vCenter Registration”. Now, I have Single Sign On configured in my lab and it actually works quite well, although I break my lab on a regular basis. You might consider creating and using a specific service account to register the NSX appliance with the vCenter Appliance. If you did so, you will need those credentials in the next screen, otherwise just use the vCenter admin account.

05 nsx controller vcenter connect

If you have entered the information correctly, you should get a little popup stating the vCenter Appliance certificate fingerprint. If you select OK, the NSX manager will go on and register with vCenter.

06 nsx controller vcenter connect success

If this process was finished correctly, you should see a nice green dot appear in the screen next to the status message that your NSX management appliance is now connected to vCenter. Go back to your vCenter appliance and log off. When you log in again, you should see a new entry in your options list!

07 vcenter NSX options

When you go into “Networking and Security”, you will be presented with the NSX management screen. From this screen, you can configure NSX completely. Now there is one thing left to do. You need to install the VIBs in every host of your NSX cluster. This sounds like a lot of work and incredibly complicated, but in fact it’s easy as pie. When you click on the install menu entry in the NSX console on the left side, you are presented with a couple of tabs.

installing vibs

Go to host preparation, you should see your datacenter cluster there. Just click “INSTALL” and NSX will automatically go and install the VIBs one host after another. That is all you need to do. No VXLAN install, no data planes, this is it.

08 vcenter NSX DFW console

For the micro segmentation, you go to distributed firewall to see your ruleset and you can go into SpoofGuard, Service Definitions and Service Composer to create the policies you want and need to keep that door shut to anyone without a proper ticket.

The complete installation took me about an hour, including DNS modification and preparing my client system with CIP. When you are at this point, you might want to go and involve the security guy(s) to help you create the correct policies for your VMs. But as you can see, the NSX installation for micro segmentation is straightforward and does not require a full deployment of NSX to make it work for you.


Read more »



Jul
12
vCenter Server appliance 6.0 URL-based patching
Posted by Thang Le Toan on 12 July 2017 12:08 AM

With the recent release of vCenter Server appliance 6.0 Update 1b, support was added for patching your vCenter Server appliance using a URL within your company network. Before this update, your vCenter appliance had to make a direct connection to the internet and download the patches from the VMware repository. Now you can download the patches on your workstation, place them on a web server within your company network and then apply them to your vCenter appliance.

To start off, you will need your own web server; this can be either Windows or Linux based. Next, download the zipped update bundle from the VMware website. Once downloaded, extract the files into your repository directory on the web server; this should result in two subdirectories called “manifest” and “package-pool”.
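Before pointing the appliance at the repository, it can be worth sanity-checking the extracted layout. A small sketch of my own (the two directory names come straight from the bundle structure described above):

```python
import os

def valid_update_repo(root: str) -> bool:
    """An extracted VCSA update bundle must contain 'manifest' and 'package-pool'."""
    return all(os.path.isdir(os.path.join(root, d))
               for d in ("manifest", "package-pool"))

# e.g. valid_update_repo("/var/www/html/vc_update_repo")
```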

After that you have two options in updating the vCenter appliance, either by the vCenter server appliance Management interface (VAMI) or through the use of the command line interface.

vCenter server appliance management interface:

  1. Open the management vCenter Virtual Appliance web interface on port 5480.
  2. In the Update tab, click Settings.
  3. Select Use Specified Repository.
  4. For the Repository URL, enter the URL of the repository you created. For example, if the repository directory is vc_update_repo, the URL should be similar to the following URL: http://web_server_name.your_company.com/vc_update_repo
  5. Click OK.
  6. Click Check Updates.
  7. Click Check URL.
  8. Under Available Updates, click Install Updates.
  9. Based on your requirement, select from the following options:
    • Install all updates.
    • Install third-party updates.

Command line interface:

  1. Access the appliance shell and log in as a user who has a super administrator (default root) role.
  2. To stage the patches included in a repository URL: software-packages stage --url http://web_server_name.your_company.com/vc_update_repo
  3. Install the staged patches: software-packages install --staged
  4. To reboot after patching if needed: reboot -r "patch reboot"

This update should keep your IT environment up to date and secure, even if your Update server doesn’t have internet access.


Read more »



Jul
12
VMworld 2016 – What’s New in vSphere 6.5
Posted by Thang Le Toan on 12 July 2017 12:05 AM

Almost every year VMware announces a new version of its core product: vSphere. vSphere, or ESX and vCenter, has been around for quite some time and it is the core product for your Software Defined Datacenter. After so many years of revolution and innovation, what things can be improved? VMware thought of some and the new version shines with a couple of features some have been longing for and a couple of features that will set it even further apart from any competitive hypervisor. Curious? Let’s run through some of the cool new stuff! This is a cherry-pick from all the new features.

vCenter and PSC

vCenter has long been the core management server of ESX. It makes managing ESX so easy. But there were a few drawbacks. The first versions ran on Windows. Since vSphere 6.0 the Appliance version is the better way to go, but migrating to it was complicated. With the latest update of the migration tool, that limit has been removed. So vCenter Server Appliance is the way to go. It will be the new center of your virtual infrastructure. But Heartbeat has been decommissioned and there was no proper way to make vCenter highly available (other than FT, but that’s not really HA). With version 6.5, vCenter finally has native HA.

High Availability

You can now install vCenter with HA built right into the appliance. Yes, that’s right, the appliance version. vCenter HA works with an active/passive architecture where it uses a witness to prevent split-brain situations. The Platform Services Controller or PSC can be installed in an Active/Active setup. With these technologies, vCenter finally is no longer the single point of failure. RTO can be 5 minutes and has no dependencies on shared storage or any external databases or relations.

Deployment

Remember that moment where you installed vCenter Server Appliance for the first time? You opened the ISO on your Mac, you started the webpage, clicked the link.. erm, waitaminute.. That’s an EXE file! To Windows it is, then. So you open up the ISO on your Windows machine, click the link, install the plugin and the installation fails! It does something strange with your system’s hosts file, which is a protected system file in Windows. So you fix that and jump through all the pre-stage hoops and after a while your deployment starts. And after some more waiting, it fails with an error message telling you to go find a logfile and see what went wrong. Who has not at least seen a few of these hoops you had to jump through before you got it up and running?

Well, no more. Not only does VCSA look better, but it works better.. on Windows as well as on MacOS and Linux, and without the plugin. The install procedure has been split in two: you first deploy the VM with basic settings and then set up roles, single sign-on and more. So if your deployment falls on its nose during the initial stage, you haven’t entered a world of configuration information you now have to do again. And another feature is, you can create a template from it after stage 1 has finished, so you always set up vCenter Server Appliance the same way without losing more of your precious time, and you make sure they are all identical in the process.

Update Manager

Update Manager always felt a bit left behind, but it is so important to all of us out there who need to maintain those precious vSphere installs. You always needed a Windows server to install and run it, and you still needed the VI Client to really set it up, scan and remediate hosts and clusters. With v6 you could scan and remediate clusters from the web client, but you still needed the Windows backend to make it run.. until now. With version 6.5, Update Manager is finally baked into the vCenter Server Appliance. You can scan and remediate your hosts and clusters right from within vCenter Server Appliance without any external dependencies.

Backup and Restore

At least once in every IT guy’s lifetime it happens: your infra crashes and burns. You have to revert to your backup solution to get up and running again. But will it work? You never really know until it’s done, no matter how many test runs you do. This is especially true for vCenter. vCenter Server so often causes the chicken-and-egg dilemma when it comes to backup/restore solutions. It took some time, but VMware has added an out-of-the-box native backup and restore functionality into vCenter Server Appliance 6.5, and you can use it next to your current backup solution of vCenter if you like. The new B/R can however remove the dependency on third-party backup solutions. It just writes a bunch of files over a protocol of your choice (SCP, FTP or HTTP), from which you can redeploy your own VCSA with the same server UUID you already had, from the standard vSphere ISO, no matter if you had a VCSA with integrated PSC or an external PSC. And it has a plain and simple user interface for protecting vCenter Server Appliances and PSCs. You can even encrypt your backups so all your secrets stay safe.

Management Interfaces

Okay, so we’re heading into territory I personally do not like so much. In the past VMware made a lot of changes to the management interfaces. First with the VI Client, then with the Web Client that felt like a slowed-down version of it, finally with a full-blown redesign where speed picked up quite well but it still required Adobe Flash. Why not HTML5, was the callout from almost all of you out there? Get rid of Flash. VMware heard you, but is not quite ready. So, basically there can be five main management interfaces.

  1. The currently most used is the vSphere Web Client. It’s based on the Adobe Flex platform and needs Flash to run. And it’s still here.
  2. Next is the HTML5-based vSphere Client. This tool has had accelerated development mostly because all of you out there downloaded the HTML5 Fling so much. That Fling will continue to be updated more often and can be used by all of you who are looking for that cutting edge functionality. However, the Fling will remain unsupported.
  3. Then there is the revamped Appliance Management UI. This is also an HTML5 interface. And there is also a similar interface that is especially for the Platform Services Controller, where the SSO configuration can be managed.
  4. Finally, and staying with the HTML5 theme, we have the Host Client. The Host Client also started out as a VMware Fling but made it into the product as of vSphere 6.0 Update 2.

That makes a total of 5 (counting nr 3 twice, as mentioned). Not the best story, but we hope that VMware will in the end roll it up into one. Now, as a reminder, these new features are only available in the vCenter Server Appliance.

Client Plugin

Did you hate that Client Integration Plugin or what? It would not run on just any client, then there were security issues, and then, when you thought you were in the green, you tried installing an OVA and found that the CIP had some kind of issue and refused to install it. Well, in version 6.5, the plugin is gone. It’s all native browser functionality. That should make for a lot of happy faces.

vSphere Security

Security keeps getting more focus in IT. And VMware is no exception. Data integrity, privacy, know who has access, know who changed something. This has been on the wish-list of many for quite some time.

vSphere Logging

In the old days, vSphere would not tell you who changed what. It just stated that a change was made, period. Who changed it? What was changed, and when? Log collectors could not help, as the information simply was not transported to them. Since v6, the information about which user changed what and when is logged, but it is not reflected in the logs that are shipped to external log collectors, not even to Log Insight. You need third-party tools or scripts from various knowledgeable people to make vCenter show that information.

Now, with v6.5, vSphere shows you what happened, who made it happen and when it happened. Logs become more actionable. When an admin changes the number of vCPUs or adds memory to a VM, logs will clearly show:

  • The account that made the changes
  • The VM that was changed
  • A list of changes that were made to the VM in the format “old setting” -> “new setting”

This way, you always know what the old setting was and what the new setting is. If you are troubleshooting a server, you can now easily revert it back to its original state when changes were not documented.
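To picture what such an entry looks like, here is a purely hypothetical formatter of my own; the field names and layout are invented for illustration, not vCenter's actual log format:

```python
def format_change(user: str, vm: str, changes: dict) -> str:
    """Render who changed what: one "old setting" -> "new setting" line per field."""
    lines = [f"{user} reconfigured {vm}:"]
    for setting, (old, new) in sorted(changes.items()):
        lines.append(f'  {setting}: "{old}" -> "{new}"')
    return "\n".join(lines)

entry = format_change("admin@vsphere.local", "web01",
                      {"numCPU": (2, 4), "memoryMB": (4096, 8192)})
```

With a record like this, reverting an undocumented change is just a matter of reading the left-hand side of each arrow.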

VM Encryption and vMotion Encryption

With vSphere 6.5, you can now apply an encryption policy to a VM. What does that even mean? Once a VM is encrypted, the VMDKs and the VM files are encrypted. This is done with symmetric keys: a key from the key manager unlocks the wrapped key stored in the VMX/VM settings, and that unwrapped key is then used to encrypt and decrypt. It does not require any changes to the VM, the OS within the VM, the datastore or the VM’s hardware version. The VM itself has no access to the keys used to encrypt it, and when you vMotion an encrypted VM, the vMotion is also encrypted (otherwise you might still be able to read the VM contents). Obviously, to make encryption valuable, not everybody should have access to the keys. So a new role is introduced: the “No Cryptography Administrator”. This admin can do almost anything a “normal” admin can do, except encrypt or decrypt VMs, access consoles of encrypted VMs and download encrypted VMs. They can still manage encrypted VMs in terms of power on and off, boot and shutdown, and vMotion.

VM encryption depends on an external key management server (KMS), a component traditionally managed by the security team. The symmetric keys come from the KMS: the KMS key encrypts the VM key, and that wrapped VM key is what vCenter requests and sends to the hosts. The host keeps the key in memory and uses it to decrypt the key that actually encrypts the VM. Obviously not everyone can have access to encryption keys; that would defy the purpose of the encryption. This will stir things up a bit with your current admin roles, as you may need to re-evaluate who needs access to what.
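That key-wrapping pattern (a KMS master key protecting a per-VM key, with only the wrapped form stored alongside the VM) can be sketched in a few lines. This is a toy model: the XOR "cipher" stands in purely for illustration, while the real feature uses AES and a KMIP-compliant KMS.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # toy stand-in for a real cipher, illustration only
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

kek = secrets.token_bytes(16)      # key-encryption key, held by the KMS
dek = secrets.token_bytes(16)      # per-VM data-encryption key
wrapped_dek = xor(dek, kek)        # only the wrapped form is stored with the VM

vmdk = b"guest disk contents"
ciphertext = xor(vmdk, dek)        # what actually lands on the datastore

# a host that obtains the KEK via vCenter/KMS can unwrap the DEK and decrypt:
plaintext = xor(ciphertext, xor(wrapped_dek, kek))
assert plaintext == vmdk
```

The point of the indirection: the datastore never sees the plain DEK, and rotating the KMS key only means re-wrapping the DEK, not re-encrypting the whole disk.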

In the wake of VM encryption comes vMotion encryption. vMotion encryption does not encrypt the whole vMotion network; it encrypts the vMotion data. As mentioned, it is required when you vMotion your encrypted VMs, but you can also enable it to encrypt all vMotion traffic. vMotion encryption has 3 settings:

  • Disabled: (obviously) do not use encryption
  • Opportunistic: Use encryption when source and destination host both support encryption
  • Required: Only allow encrypted vMotion. This will mean vMotion will fail if one of the hosts does not support it.
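The three settings boil down to a small decision rule. A sketch under my own naming (this is not a VMware API, just the logic of the list above):

```python
def vmotion_decision(policy: str, src_ok: bool, dst_ok: bool):
    """Return (migration_allowed, traffic_encrypted) for a given policy.
    src_ok/dst_ok: whether source and destination host support encryption."""
    both = src_ok and dst_ok
    if policy == "Disabled":
        return True, False          # never encrypt
    if policy == "Opportunistic":
        return True, both           # encrypt when both ends can, else fall back
    if policy == "Required":
        return both, both           # no capable pair means the vMotion fails
    raise ValueError(f"unknown policy: {policy}")
```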

Secure Boot

UEFI secure boot has been around for some time, and with vSphere 6.5 we can now also leverage it in the datacenter, both for the host and the VM. If Secure Boot is enabled, you can’t install unsigned code. With Secure Boot enabled, ESXi will ONLY boot and use signed code, for ESXi as well as additional VIBs. This ensures that the hypervisor has a cryptographic chain of trust to the certificate stored in the firmware. UEFI ensures the kernel boots clean, after which the secure boot verifier launches and validates each VIB against the certificate stored in the UEFI firmware. Secure Boot checks this every time the host boots. If the check fails anywhere in the chain, the host will fail to boot. Secure boot inside the VM follows the same chain principle. It can be enabled in the UI as well as with PowerCLI.
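The chain-of-trust check can be modelled very simply: each component must carry a signature that validates against the certificate baked into firmware, and one bad link halts the boot. A toy model, with a hash standing in for real asymmetric signatures:

```python
import hashlib

def sign(cert: bytes, blob: bytes) -> str:
    # stand-in for a real signature; actual secure boot uses public-key crypto
    return hashlib.sha256(cert + blob).hexdigest()

def verify_boot_chain(firmware_cert: bytes, components) -> str:
    """components: list of (name, blob, signature); halt on the first bad link."""
    for name, blob, signature in components:
        if signature != sign(firmware_cert, blob):
            return f"boot halted: unsigned component {name}"
    return "boot ok"

cert = b"uefi-firmware-cert"
chain = [("kernel", b"esxi-kernel", sign(cert, b"esxi-kernel")),
         ("vib:esx-base", b"vib-data", sign(cert, b"vib-data"))]
```

Append a component whose signature doesn't validate and `verify_boot_chain` stops right there, which is exactly the behaviour described above: fail anywhere in the chain, and the host does not boot.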

HA and DRS Enhancements

High Availability and Distributed Resource Scheduling are two major components of vSphere that have made a big difference over the years. Where HA keeps your VMs alive and available, DRS keeps your hosts balanced and well utilized. In vSphere 6.5, there are a couple of enhancements that certainly are worth mentioning.

HA Orchestrated Restarts

One of the things we’re all familiar with is boot order. You want the AD servers booted before the DB servers. You want the App servers booted once the DB servers are booted, and so on. In vSphere 6.5 you now have HA Orchestrated Restarts, where you can define in what order a specific multi-tier app needs to boot, like first the DB server, then the App server and last the Web tier. Every time HA needs to restart this tier, it will do so according to your rules.
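The ordering rule is easy to picture in code. A sketch with invented tier names and priorities (real HA also waits for each tier to come up before starting the next, which a one-liner can't show):

```python
def restart_order(vms):
    """vms: list of (name, tier). Restart lower-priority-number tiers first:
    DB before App before Web, per the orchestrated-restart rules."""
    priority = {"db": 0, "app": 1, "web": 2}
    return [name for name, tier in sorted(vms, key=lambda v: priority[v[1]])]
```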

ProActive HA and Quarantine Mode

How can HA be proactive? It’s not like you see a failure coming. Or is it? As it turns out, and you all probably know this, almost all big server vendors have extra hardware checks and monitoring built into their servers. This is monitored by their hardware management solution, like Dell OpenManage or HP’s Insight Manager. Now, HA can vacate a host once an alert is raised. As soon as a notification comes in of a host being in a degraded mode, HA will vMotion the VMs on that host to another host in the cluster.

Once a host is in degraded mode, HA will put it in Quarantine Mode. Any host that is either moderately or severely degraded will be put in Quarantine. This means that HA will not move VM’s to it until you fix the server and get it out of quarantine.
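The health-to-action mapping is essentially a two-state rule; a simplified sketch (state names are mine):

```python
def proactive_ha_action(health: str) -> str:
    """Map a vendor hardware alert level to the Proactive HA response."""
    if health in ("moderate", "severe"):
        return "quarantine"   # vacate VMs, avoid new placements until fixed
    return "available"
```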

DRS Policies

Tuning your DRS used to be pretty basic. With vSphere 6.5 you can tune it more to your situation and use case. With DRS policies, the distribution of VMs over your hosts becomes more even. DRS now also looks at consumed memory versus active memory for load balancing, and it looks at CPU overcommitment to prevent a single host from overcommitting on CPU load. This is especially useful when you have a lot of smaller VMs in your infrastructure, as with VDI.

Network-Aware DRS

DRS used to ignore the network load of a host when it moved VMs around. On occasion that could get you into trouble: a network-intensive VM was sitting on a host, another VM with high network load was moved onto it, and things started slowing down. DRS now also looks at the saturation of a host’s network links and avoids moving VMs onto hosts where that could cause a slowdown or worse. Network still has a lower priority than CPU and memory, so no guarantees on performance here.
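A toy placement heuristic captures the idea: prefer CPU/memory headroom, but skip hosts whose network links are saturated, falling back to all hosts so that network stays a lower priority than CPU and memory. The threshold and field names here are invented, not DRS internals:

```python
def pick_target_host(hosts, net_threshold=0.8):
    """hosts: dicts with utilization fractions 'cpu', 'mem', 'net'.
    Exclude network-saturated hosts when possible, then pick the one
    with the most combined CPU+memory headroom."""
    candidates = [h for h in hosts if h["net"] < net_threshold] or list(hosts)
    return min(candidates, key=lambda h: h["cpu"] + h["mem"])["name"]

hosts = [{"name": "esx1", "cpu": 0.30, "mem": 0.40, "net": 0.95},
         {"name": "esx2", "cpu": 0.50, "mem": 0.50, "net": 0.20}]
```

Here esx1 has more raw headroom, but its saturated uplink pushes the placement to esx2, mirroring the behaviour described above.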

Wrap-Up

So that wraps up our cherry-picking of the new features. There is more to hear and see, like vSAN 6.5, Virtual Volumes updates and Storage Policies, but we’ll save that for a more storage-intensive post. No exact release date has been communicated yet; VMware states it will release vSphere 6.5 in the fourth quarter of 2016.

Update: Many thanks to Mike Foley for the corrections on VM encryption and secure boot.


Read more »



Jan
13
List of VMware Default Usernames and Passwords
Posted by Thang Le Toan on 13 January 2016 01:37 AM

Below are the local web addresses (https/http), ports, and the default usernames and passwords for VMware products.

They are hard to remember and easy to forget, so I am writing them down here for convenient reference when first booting and configuring these products:

 

Horizon Application Manager

http://IPorDNS/SAAS/login/0

http://IPorDNS

 

Horizon Connector

https://IPorDNS:8443/

 

vCenter Appliance Configuration

https://IPorDNS_of_Server:5480

username: root

password: vmware

 

vCenter Application Discovery Manager

http://IPorDNS

username: root

password: 123456

default ADM management console password is 123456 and the CLI password is ChangeMe

 

vCenter Chargeback

http://IPorDNS:8080/cbmui/

username: root

password: vmware

 

vCenter Infrastructure Navigator:

https://IPorDNS_of_Server:5480

username: root

password: Supplied during OVA deployment

 

vCenter Log Insight

https://log_insight-host/

username: admin

password: password specified during initial configuration

 

vCenter MOB

https://vcenterIP/mob

 

vCenter Web Client Configuration

https://IPorDNS_of_Server:9443/admin-app

username: root

password: vmware

 

vCenter vSphere Web Client Access

https://IPorDNS_of_Server:9443/vsphere-client/

username: root

password: vmware

For vSphere 5.1  = Windows default username: admin@System-Domain

For vSphere 5.1 = Linux (Virtual Appliance) default username: root@System-Domain

For vSphere 5.5 = default username: administrator@vsphere.local

 

vCenter Single Sign On (SSO)

https://IPorDNS_of_Server:7444/lookupservice/sdk

For vSphere 5.1 = Windows default username: admin@System-Domain

For vSphere 5.1 = Linux (Virtual Appliance) default username: root@System-Domain

password: specified during installation

Adding AD authentication to VMware SSO 5.1

For vSphere 5.5 = default username: administrator@vsphere.local

 

vCenter Orchestrator Appliance

http://orchestrator_appliance_ip

Appliance Configuration:

Change the root password of the appliance Linux user during deployment. Otherwise, the first time you try to log in to the appliance web console, you will be prompted to change the password.

Orchestrator Configuration:

username: vmware

password: vmware

Orchestrator Client:

username: vcoadmin

password: vcoadmin

Web Operator

username: vcoadmin

password: vcoadmin

 

vCenter Orchestrator for Windows:

https://IPorDNS:8283 or http://IPorDNS:8282

username: vmware

password: vmware

WebViews: http://orchestrator_server:8280.

 

vCenter Orchestrator for vCloud Automation Center (built-in):

https://vcloud_automation_center_appliance_ip:8283

username: vmware

password: vmware (after initial logon, this password is changed)

vCO Client is accessible from http://vcloud_automation_center_appliance_ip

username: administrator@vsphere.local (or the SSO admin username)

password: specified password for the SSO admin during vCAC-Identity deployment

 

vCenter Operations

Manager: https://IPorDNS_of_UI_Server

username: admin

password: admin

Admin: https://IPorDNS_of_UI_Server/admin

username: admin

password: admin

CustomUI: https://IPorDNS_of_UI_Server/vcops-custom/

username: admin

password: admin

 

vCloud Automation Center Identity Appliance

https://identity-hostname.domain.name:5480/

username: root

password: password supplied during appliance deployment

 

vCloud Automation Center vCAC Appliance

https://identity-hostname.domain.name:5480/

username: root

password: password supplied during appliance deployment

 

vCloud Automation Center

https://vcac-appliance-hostname.domain.name/shell-ui-app

username: administrator@vsphere.local

password: SSO password configured during deployment

 

vCloud Automation Center built-in vCenter Orchestrator:

https://vcloud_automation_center_appliance_ip:8283

username: vmware

password: vmware (after initial logon, this password is changed)

vCO Client is accessible from http://vcloud_automation_center_appliance_ip

username: administrator@vsphere.local (or the SSO admin username)

password: specified password for the SSO admin during vCAC-Identity deployment

 

vCloud Connector Node

https://IPorDNS:5480

username: admin

password: vmware

 

vCloud Connector Server

https://IPorDNS:5480

username: admin

password: vmware

 

vCloud Director

https://IPorDNS/cloud/

username: administrator

password: specified during wizard setup

 

vCloud Director Appliance

username: root

password: Default0

OracleXEDatabase

username: vcloud

password: VCloud

 

vCloud Networking and Security

console to VM

username: admin

password: default

type "enable"

password: default

type "setup" then configure IP settings

http://IPorDNS

 

VMware Site Recovery Manager:

username: vCenter admin username

password: vCenter admin password

 

vShield Manager

console to VM

username: admin

password: default

type "enable"

password: default

type "setup" then configure IP settings

http://IPorDNS

 

vFabric Application Director

https://IP_or_DNS:8443/darwin/

root: specified during deployment

password: specified during deployment

darwin_user password: specified during deployment

admin: specified during deployment

 

vFabric AppInsight

http://IP_or_DNS

username: admin

password: specified during OVA deployment

 

vFabric Data Director

https://IPorDNS/datadirector

username: created during wizard

password: created during wizard

 

vFabric Hyperic vApp

username: root

password: hqadmin

 

vFabric Suite License

https://IPorDNS:8443/vfabric-license-server/report/create

 

View Admin

https://IPorDNS/admin

username: windows credentials

password: windows credentials

 

vSphere Data Protection Appliance

https://<IP_address_VDP_Appliance>:8543/vdp-configure/

username: root

password: changeme

 

vSphere Replication Appliance

https://vr-appliance-address:5480

username: root

password: the root password you configured during OVF deployment of the vSphere Replication appliance

 

Zimbra Appliance Administration Console

https://IPorDNS:5480

username: vmware

password: configured during wizard setup
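
For scripted audits of freshly deployed appliances, a few of the defaults above can be kept in a small lookup table. This is just a convenience sketch; the entries are copied from the list above:

```python
# A handful of the default credentials listed above, as a lookup table.
# Extend with further entries from the list as needed.

DEFAULT_CREDS = {
    "vCenter Appliance Configuration": ("root", "vmware"),
    "vCenter Chargeback": ("root", "vmware"),
    "vCloud Connector Node": ("admin", "vmware"),
    "vSphere Data Protection Appliance": ("root", "changeme"),
}

def default_login(product):
    """Return (username, password), or None if the product is not listed."""
    return DEFAULT_CREDS.get(product)

print(default_login("vCenter Chargeback"))  # ('root', 'vmware')
```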


Read more »



Sep
20
Installing Corporate CA Certificates on iPhone or iPad for Use with VMware View
Posted by Thang Le Toan on 20 September 2015 01:02 PM
Installing Corporate CA Certificates on iPhone or iPad for Use with VMware View

I was upgrading my VMware View environment recently from 5.0 to 5.1 and wrote about some initial problems in my article Trouble Recomposing View 5.x Desktops After Upgrade to vSphere 5.0 U2. After I had resolved those initial problems I needed to load my internal Root CA certificate onto all my company’s iPhones and iPads. This is because one of the big changes and improvements in View 5.1 is security: you now need trusted certificates in order to connect to any of the desktops. Fortunately there is no need to purchase expensive public certificates if you have an internal corporate PKI/CA already configured, unless you want to. This article will show you how you can easily get your iPhones or iPads to trust your corporate CA certificates for use with VMware View.

 

I’ve included images here to explain the process as I think it’s easier to follow. I used one of my iPhones to keep the images reasonably small. To be honest, you’re much more likely to be doing this on an iPad, but iPhones are perfectly usable in my opinion, provided you have the iPhone-to-VGA adapter and a Bluetooth keyboard.

Trying to Connect Without Trusting the Certificate

If you try to connect to a VMware View 5.1 environment using the iOS View Client without first trusting the CA certificate you will receive a message as per the image below:

View Connection Denied

If you click on View Certificate you will see some details about the untrusted certificate:

View Client Untrusted Cert View

There is no way to set your device to trust your CA certificate from this screen. In order for you to get your iPhone or iPad to trust the certificate you will need to follow the process below.

Getting Your iPhone or iPad to Trust Your CA Certificate

1. Obtain a copy of the CA Certs (Root CA and Intermediate CA if used) and email them to your device, such as in the following image:

View Cert Emailed

You’ll notice the attachment in the image above shows a certificate type icon.

2. You now need to tap on the attachment. You will be presented with the following screen:

View Cert Install p1

At this point, before continuing to the next step, you should tap More Details. Verify that it is indeed the certificate you were expecting, that it’s from your corporate CA, and that it is valid and should be trusted. Once you are satisfied this is indeed a legitimate certificate that you should trust, you continue.

3. Tap Install. You will see the following warning image displayed on the screen:

View Cert Install p2

Because your corporate CA is not a trusted public CA it is not automatically in the trusted list for your devices. This is the reason this warning is being displayed. Provided you are happy with the checks you’ve done in the previous step, after reading this warning you can continue to the next step.

4. Tap Install. You will see the following image displayed on screen:

View Cert Install p3

At this point you need to enter your passcode so that the certificate can be loaded into your device’s trust store and be trusted. Once you have entered your passcode successfully, you will automatically be taken to the next step.

5. You have successfully loaded your corporate CA certificate into your device’s trust store. You will see the following image displayed on the screen:

View Cert Install P4

Now when you connect using the VMware View Client, your Connection Servers’ certificates, which were signed by your corporate CA, will be trusted and your connections will succeed. If you have more than one CA that needs to be trusted, you need to complete these steps for each of the certificates. You can now tap Done, go back to the VMware View Client, and test the connections.

6. Now when connecting to your VMware View Connection Servers or Security Servers an image similar to the following will be displayed on screen:

View Connection Allowed - Cert Verified

You can see by the tick on the padlock and the text https displayed in green that the certificate and connection are trusted. If the connection weren’t trusted, you wouldn’t have been able to connect. Enter your username and password and then tap Done or Go.

7. You will receive the list of entitled desktops, similar to the image below, and you can now proceed to use your desktops as normal. The process is complete!

View Connection Allowed - Displaying Desktops

Removing a Certificate From Your iPhone or iPad Trust Store 

If for some reason you find out that a certificate has become invalid or has been revoked, you will need to remove it from the trust store on your iDevice. Doing this is very simple.

1. Tap Settings.

2. Tap General. You will see on the screen something similar to the following:

Remove Cert Settings Screen p1

You can see the profile listed and the name of the CA in this example.

3. Tap Profile. You will see on the screen something similar to the following:

Remove Cert Settings Screen p2

4. Tap Remove. You will see a warning displayed similar to the following:

Remove Cert Settings Screen p3

5. Tap Remove. You will see the passcode dialog box displayed as per the image below.

View Cert Install p3

6. Enter your passcode. You will be returned to the settings screen and you’ll notice as per the image below that the profile has now gone.

Remove Cert Settings Screen p4

You have now completely removed the certificate from your device’s trust store. When the new certificates are issued, you can go back and follow the process to install them again.

Final Word

As you would expect, Apple has made it fairly painless to get this all working. However, when it comes to security and trusting certificates, great care needs to be taken. You must verify that the certificates being sent to you for use are genuine and can be trusted. If for some reason the certificates expire, are revoked, or are otherwise invalidated, then you need to follow the process to remove them from the trust store and install the new ones. I hope this has been helpful and that you get hours of productivity out of your VMware View 5.1 vDesktops from your favourite iDevices.
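One practical way to do that verification is to compare the certificate’s SHA-256 fingerprint against a fingerprint your CA administrator publishes out-of-band (over the phone or on an internal wiki). A minimal sketch using only the Python standard library; the dummy payload is for illustration only, and a real check would read the .cer/.pem file you received:

```python
import base64
import hashlib

def cert_fingerprint(pem: str) -> str:
    """SHA-256 fingerprint of a PEM certificate as colon-separated hex."""
    # Strip the BEGIN/END armour lines, keep the base64 body.
    body = "".join(line for line in pem.splitlines()
                   if line and not line.startswith("-----"))
    der = base64.b64decode(body)
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Dummy payload for illustration only (not a real certificate):
dummy_pem = ("-----BEGIN CERTIFICATE-----\n"
             + base64.b64encode(b"not a real certificate").decode()
             + "\n-----END CERTIFICATE-----\n")
print(cert_fingerprint(dummy_pem))
```

If the fingerprint this prints for your real certificate file matches the one your CA administrator published, you can be confident the attachment is genuine before trusting it on the device.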


Read more »



Sep
20

Today brings another update to VMware Horizon: version 6.1 is being announced. With this update come several new features and a peek at a few others expected in a future release. The NVIDIA GPU support is the worst-kept secret, since it was announced that vSphere 6 would have vGPU support. It was only a matter of time until Horizon was updated to take advantage of the new vGPU feature.

Note: Some of the tech preview items will only be available via the public VMware demo site or via private requests. Not all tech preview items will be included in the GA code like many have been in the past.

The summer of 2014 saw the release of Horizon 6.0 and the ability to present RDS-based applications. It was missing a number of features, and VMware quickly closed the printing gap in 6.0.1. Today in 6.1 we are seeing several new features, which I will cover in more detail. A few other features will enter tech preview mode and are likely to be released in an upcoming version.

new features

 

USB Redirection

In 6.1, the ability to redirect USB storage devices for Horizon applications and hosted desktops is now available. This helps close another gap that existed. It will only be available on Windows Server 2012/2012 R2 OS versions.

usb redirect

 

Client Drive redirection

This is something that has been available in Citrix XenApp since the stone ages. It will only be available as a tech preview for now, but I’m sure we will see this some time this year. Initial support is for Windows clients only, with other OSes coming later.

client drive

Horizon Client for Chromebooks

The current option if you want to use a Chromebook as your endpoint is to access Horizon via the HTML5 web client. This limits you to connecting to a desktop only, because Horizon apps are not supported over HTML5. Without a proper client, pass-through of items such as USB devices is not possible either.

The Horizon client for Chromebooks will be based on the Android version that has already been around. There has been growing demand for this client. It will be available as a tech preview sometime in Q1/Q2 of 2015.

Cloud Pod updates

The Cloud Pod architecture was released last year to provide an architecture for building a multi-site Horizon installation. The initial version was not that attractive in my eyes. The updated version in 6.1 brings the configuration and management parts of Cloud Pod into the Horizon manager. Previously, configuration had to be done via the command line, and global entitlements were not shown in the Horizon manager.

Other Items

We also see a number of other check-the-box items that are expected due to the vSphere 6 updates.

  • VVOL support for Horizon 6 desktops
  • VSAN 6 support
  • Large cluster size support for VSAN6 and higher densities
  • Support for Windows 2012R2 as a desktop OS
  • Linux VDI will be a private tech preview option

Read more »



