To deliver VDI performance, it is key to understand I/O behaviour when creating your VDI architecture. Iometer is treated as the industry-standard tool for testing load on a storage subsystem. While there are many tools available, Iometer's balance between usability and function sets it apart. However, Iometer has its quirks, and I'll attempt to show exactly how you should use it to get the best results, especially when testing for VDI environments. I'll also show you how to stop people using Iometer to fool you.
As Iometer requires almost no infrastructure, you can use it to very quickly determine storage subsystem performance. In steady state, a desktop (VDI or RDS) I/O profile will be approximately 80/20 write/read, 80/20 random/sequential, with reads and writes in 4K blocks. The block size in a real Windows workload does vary between 512B and 1MB, but the vast majority of I/O will be at 4K; as Iometer does not allow a mixed block size during testing, we will use a constant 4K.
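Iometer itself is configured through its GUI or saved .icf files rather than code, but as a rough mental model the access pattern above can be sketched in a few lines of Python (everything here is illustrative, not part of Iometer):

```python
import random

def generate_ops(n_ops, write_pct=80, random_pct=80, block=4096,
                 file_size=20 * 1024**3, seed=42):
    """Sketch of an 80/20 write/read, 80/20 random/sequential, 4K
    access pattern (illustrative only; Iometer issues real I/O)."""
    rng = random.Random(seed)
    ops, offset = [], 0
    for _ in range(n_ops):
        kind = "write" if rng.random() < write_pct / 100 else "read"
        if rng.random() < random_pct / 100:
            # random: any 4K-aligned offset within the test file
            offset = rng.randrange(0, file_size // block) * block
        else:
            # sequential: the next block after the previous offset
            offset = (offset + block) % file_size
        ops.append((kind, offset))
    return ops

ops = generate_ops(10_000)
writes = sum(1 for kind, _ in ops if kind == "write")
print(f"write fraction: {writes / len(ops):.2f}")
```

With a large enough sample the write fraction settles near 0.80, which is exactly what the Iometer access specification later in this article encodes.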
That said, while Iometer is great for analysing storage subsystem performance, if you need to simulate a real-world workload for your VDI environment I would recommend using tools from the likes of Login VSI or DeNamik.
Bottlenecks for Performance in VDI
Iometer is usually run from within a Windows guest that sits on top of the storage subsystem. This means that there are many layers between it and the storage, as we see below:
If we are to test the performance of the storage, the storage must be the bottleneck. This means there must be sufficient resource in all the other layers to handle the traffic.
Impact of Provisioning Technologies on Storage Performance
If your VM is provisioned using Citrix Provisioning Services (PVS), Citrix Machine Creation Services (MCS) or VMware View Linked Clones, you will be bottlenecked by the provisioning technology. If you test with Iometer against the C: drive of a provisioned VM, you will not get full insight into the storage performance, as these three technologies fundamentally change the way I/O is treated.
You cannot drive maximum IOPS from a single VM; it is therefore not recommended to run Iometer against these VMs when attempting to stress-test storage.
I would always add a second drive to the VM and run Iometer against that second hard drive, as this bypasses the issue with PVS/MCS/Linked Clones.
In 99% of cases I would actually rather test against a 'vanilla' Windows 7 VM. By this I mean a new VM installed from scratch, not joined to the domain and with only the appropriate hypervisor tools installed. Remember, Iometer is designed to test storage. By testing with a 'vanilla' VM you baseline core performance delivery. From there you can go on to test a fully configured VM, and now you can understand the impact that AV filter drivers, provisioning via linked clones, or other software/agents have on storage performance.
Using Iometer for VDI testing: advantages and disadvantages
Before we move on to the actual configuration settings within Iometer, I want to talk a little bit about the test file that Iometer creates to throw I/O against. This file is called iobw.tst, and it is why I both love and hate Iometer. It's the source of Iometer's biggest bugs and also its biggest advantage.
First, the advantage: Iometer can create any size of test file you like in order to represent the test scenario that you need. When we talk about a single host with 100 Win 7 VMs, or 8 RDS VMs, the size of the I/O 'working set' must be, at a minimum, the aggregate size of the pagefiles, as this will be a set of unique data that is consistently in use. So for the 100 Win 7 VMs with 1GB RAM each, this test file will be at least 100GB, and for the 8 RDS VMs with 10GB RAM each, it would be at least 80GB. The actual working set will probably be much larger, but I'm happy to recommend this as a minimum. This means that it would be very hard for a storage array or RAID card to hold the working set in cache. Iometer allows us to set the test file to a size that mimics such a working set. In practice, I've found that a 20GB test file is sufficient to accurately mimic a single-host VDI load. If you are still getting unexpected results from your storage, try increasing the size of this test file.
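The sizing arithmetic above is simple enough to capture in a throwaway helper (a sketch of the article's rule of thumb: one pagefile roughly equal to RAM per VM, as a minimum; real working sets will be larger):

```python
def min_test_file_gb(vm_count, ram_gb_per_vm):
    """Minimum Iometer test-file size in GB: the aggregate pagefile
    size, approximated as one RAM-sized pagefile per VM."""
    return vm_count * ram_gb_per_vm

print(min_test_file_gb(100, 1))  # 100 Win 7 VMs with 1GB RAM -> 100 GB
print(min_test_file_gb(8, 10))   # 8 RDS VMs with 10GB RAM -> 80 GB
```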
Second, the disadvantage: iobw.tst is buggy. If you resize the file without deleting it first, it fails to resize (without error), and if you delete the file without closing Iometer, Iometer crashes. In addition, if you do not run Iometer as administrator, Windows 7 will put the iobw.tst file in the profile instead of the root of C:. OK, that's not technically Iometer's fault, but it's still annoying.
Recommended Configuration of Iometer for VDI workloads
First tab (Disk Targets)
The number of workers is essentially the number of threads used to create the I/O requests. Adding workers will add latency; it will also add a small amount of total I/O. I consider 4 workers to be the best balance between latency and IOPS.
Highlighting the computer icon means that all workers are configured simultaneously; you can check that the workers are configured correctly by highlighting the individual workers.
The second drive should be used to avoid issues with filter drivers/provisioning etc on C: (although Iometer should always be run in a ‘vanilla’ installation).
The number of sectors gives you the size of the test file; this is extremely important, as mentioned above. You can use the following website to determine the sectors/GB:
The size used in the example to get 20GB is 41943040 sectors.
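That figure follows from Iometer's 512-byte sectors; if you want to double-check it, or size a different test file, the conversion is:

```python
SECTOR_BYTES = 512  # Iometer sizes its test file in 512-byte sectors

def sectors_for_gb(gb):
    """Number of 512-byte sectors in a test file of `gb` gigabytes."""
    return gb * 1024**3 // SECTOR_BYTES

print(sectors_for_gb(20))   # 41943040, the value used in the example
print(sectors_for_gb(100))  # for a 100GB working set
```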
The reasoning for configuring 16 outstanding I/Os is similar to that for the number of workers: increasing outstanding I/Os will increase latency while slightly increasing IOPS. As with workers, I think 16 is a good compromise. You can also refer to the following article regarding outstanding I/Os: http://communities.vmware.com/docs/DOC-3961
Second tab (Network Targets)
No changes are needed on the Network Targets tab.
Third tab (Access Specifications)
To configure a workload that mimics a desktop, we need to create a new specification.
The new Access specification should have the following settings. This is to ensure that the tests model as closely as possible a VDI workload. The settings are:
The reasons for choosing these values are too detailed to go into here, but you can refer to the following document on Windows 7 I/O:
You should then add the access specification to the manager.
Fifth tab (Test Setup)
I'd advise configuring the test to run for only 30 seconds; the results should be representative after that amount of time. More importantly, if you are testing your production SAN, Iometer, once configured correctly, will eat all of your SAN performance. Therefore, if you have other workloads on your SAN, running Iometer for a long time will severely impact them.
Fourth tab (Results Display)
Set the Update Frequency (seconds) slider to the left so you can see the results as they happen.
Set the ‘Results Since’ to ‘Start of Test’ which will give you a reliable average.
Both Read and Write avg. response times (Latency) are essential.
It should be noted that the csv file Iometer creates will capture all metrics while the GUI will only show six.
It is recommended that you save the configuration for later use by clicking the disk icon. This will save you having to re-configure Iometer each test run you do. The file is saved as *.icf in a location of your choosing. Or to save some time, download a preconfigured Iometer Desktop Virtualization Configuration file and load it into Iometer.
Start the test using the green flag.
Generally, the higher the IOPS the better, as indicated by the 'Total I/Os per Second' counter above, but this must be delivered at a reasonable latency; anything under 5ms will provide a good user experience.
Given that the maximum possible IOPS for a single spindle is around 200, you should sanity-check your results against predicted values. For an SSD you can get anywhere from 3,000 to 15,000 IOPS depending on how empty it is and how expensive it is, so again you can sanity-check your results.
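That sanity check is just multiplication, but it is worth making explicit (the 200 IOPS per-spindle figure is from the text; the spindle count and measured value below are made-up examples):

```python
def expected_raid_iops(spindles, per_spindle=200):
    """Rough IOPS ceiling for a spindle-based array, ignoring RAID
    write penalties and controller cache (per-spindle figure from
    the text above)."""
    return spindles * per_spindle

ceiling = expected_raid_iops(12)  # e.g. a 12-spindle array -> ~2400 IOPS
measured = 5000                   # hypothetical Iometer result

if measured > ceiling * 1.5:
    # Far above the spindle ceiling usually means the working set
    # fit in cache -- exactly the situation a 20GB+ test file avoids.
    print("Suspicious: result likely served from cache, not disk")
```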
You don’t need to split IOPS or throughput out into read and write because we know Iometer will be working at 80/20, as we configured in the access specification.
How can I check someone isn’t using Iometer to trick me?
To the untrained eye Iometer can be used to show very unrepresentative results. Here is a list of things to check when someone is showing you an Iometer result.
- What size is the test file in Explorer? It needs to be very large (minimum 20GB); don't check in the Iometer GUI.
- How sequential is the workload? The more sequential, the easier it is to show better IOPS and throughput. (It should be set to a minimum of 75% random)
- What’s the block size? Windows has a block size of 4K, anything else is not a relevant test and probably helps out the vendor.
For a maker of design software whose customers have to manage and collaborate on giant files, the solution for client networking also applies in-house.
When discussion in IT circles turns to the topic of desktop virtualization, there is a bit of conventional wisdom you might hear that goes something like this:
"Well, virtualized desktop infrastructure may be fine for relatively simple computer environments, like call centers or order-entry clerks. But you're never going to see them in high-performance, graphically intensive situations. Like, say, Autodesk."
But you'll never hear that sentiment uttered at Autodesk.
Not only is the design software giant itself a heavy user of VDI technologies, but it's also seeing an ever-increasing amount of interest in virtualized desktops from even its biggest customers. And while the company doesn't advocate any particular system architecture, it supports whatever its clients end up choosing.
"There is absolutely more interest" in VDI from customers, says Anthony Hauck, director of product strategy for the architecture, engineering and construction division, one of the foundations of Autodesk's business.
Supporting Customers As They Transition to VDI
"Starting about four years ago, it became a steady topic of conversation whenever we were talking to our biggest customers," he said. "We discovered that, invariably, there was somewhere in their infrastructure where they were starting to virtualize desktops. It became very apparent to us that the industry was starting a transition to a different model of computing."
His company has supported that transition principally by continually expanding the software and hardware products on which it has tested its core products like AutoCAD and Revit.
Autodesk customers, Hauck said, are interested in virtualization in part for the same reasons that everyone else is, especially the ability to centralize the deployment and support of desktops. But another big driver is the increasingly global nature of the economy, especially for those customers focused around construction and engineering.
Collaboration Across Large Distances
"More and more of the time, we see our software being used by geographically far-flung teams," he said. "The actual project is located in one place, but people have to collaborate on it no matter where in the world they happen to be located. Projects might even be spread out among many firms. They will use VDI to bring the teams together using delivered desktops. The data for the project stays centralized, even while those working on it are widely dispersed."
Hauck said that when big customers talk with Autodesk about virtualization, they have usually already decided on some sort of VDI project, and are looking for technical advice on how to best implement it. Autodesk's main contribution, he said, is in providing the technical specs for the back-end infrastructure that will be necessary to ensure that performance stays as robust as it had been on individual desktops. Hauck said Autodesk stays vendor-neutral in making its recommendations.
When discussing Autodesk and VDI, Hauck stressed that the company has expertise on both sides of the coin: As someone selling software into virtualized environments, and as a major customer of the technology itself.
Testing in VDI Environments
Much of the company's R&D testing is done using VDI; one division runs 16,000 simulations on virtualized desktops every evening. And when Autodesk offers free online trials of its key products, it often has potential customers run them in virtualized desktop settings.
"We can have a high degree of confidence in the recommendations we make to customers about virtualization in part because we are doing so much work in virtualized environments ourselves. We have quite a bit of in-house experience," Hauck said.
Autodesk customers in highly technical fields often push the envelope of virtualization technology. Says Hauck, "We've seen people running some of our software on iPads, even though an iPad's specs are far below what are needed. Of course, it's not really running on an iPad, but in a virtualized desktop being delivered to the iPad. We've sometimes ourselves been surprised at how much our customers have been able to do."
And how about that old bugaboo, graphics? Not an issue, says Hauck.
In the very early days of VDI, latency might have been a problem; the cursor, for example, might have been jittery as an engineer tried to move it across the screen. But because of software and hardware innovations, he said, that’s no longer a problem. Many new Autodesk customers, said Hauck, are getting extra graphical horsepower from a new breed of graphics accelerator products designed specifically for virtualized environments.
"As far as graphics performance is concerned, it's been a long time since we've had to work on that as an issue with any of our customers," Hauck said. "This is definitely one of those cases where no news is good news."
Three ways VDI can consume your IP address range.
Virtual environments use at least twice as many IP addresses as physical ones, because each desktop and the endpoint used to access it need their own addresses. Luckily, IPAM tools can help you keep track of your addresses.
IP address consumption doubles when you deploy virtual desktops, so it's important that IP address management is on your radar.
When an organization begins working toward implementing VDI, it has a lot of things to consider: Is the storage connectivity fast enough? Do the host servers have enough memory? Will the end-user experience be acceptable?
How virtualization consumes IP addresses
There are three primary ways that desktop virtualization affects IP address consumption. The first has to do with changes that you may need to make to your DHCP configuration.
Depending on how many virtual desktops you want to support, you may need to create additional DHCP scopes. You might even need to deploy some extra DHCP servers. This certainly isn't necessary in every situation, but it happens often enough to make it worth mentioning.
The second way IP address consumption becomes a factor is that the organization may suddenly consume far more IP addresses than it did prior to the desktop virtualization implementation. The reason for this is quite simple.
Consider an environment without virtual desktops. Each PC consumes an IP address, as do any backend servers. Shops implementing virtual desktops or VDI sometimes overlook the fact that desktop virtualization does not eliminate desktop hardware needs. Regardless of whether users connect via tablets, thin client devices or repurposed PCs, the endpoint consumes an IP address, and so does each virtual desktop.
This means that desktop virtualization effectively doubles IP address consumption on the client side. Each user consumes at least two IP addresses: The physical hardware uses one address and the virtual desktop uses another. There is no way to get around this requirement, so you must ensure that an adequate number of IP addresses are available to support virtual desktops and endpoints.
The third reason IP address consumption increases in a virtual desktop environment has to do with the way workers use virtual desktops. Employees can use virtual desktops on a wide variety of devices, such as PCs, smartphones and tablets. This gives workers the freedom to use the device that makes the most sense in a given situation. But IP address consumption does not mirror device use in real time.
When a device connects to the network, a DHCP server issues the device an IP address lease, but the lease isn't revoked when the device disconnects from the network. The lease remains in effect for a predetermined length of time, regardless of whether the device is still being used. As such, the IP address is only available to the device that leased it; it's not available for other devices to access during the lease period.
Desktop virtualization by its very nature leads to increased IP address consumption. The actual degree to which the IP addresses are consumed varies depending on device usage, however. From a desktop standpoint, you can expect the IP address consumption to double, but in organizations where workers use multiple devices, consumption can be even higher.
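A back-of-the-envelope helper makes the doubling concrete (the function and figures are illustrative, not from the article):

```python
def client_ip_consumption(users, extra_devices_per_user=0):
    """Client-side addresses consumed: one lease for each user's
    endpoint device, one for the virtual desktop, plus any extra
    devices (phone, tablet) still holding unexpired leases."""
    return users * (2 + extra_devices_per_user)

print(client_ip_consumption(500))     # consumption doubles vs. 500 PCs
print(client_ip_consumption(500, 1))  # multi-device workers push it higher
```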
How to protect the network against increased IP consumption
The first thing I recommend doing is implementing session limits. Remember, every virtual desktop that is powered up consumes an IP address. You can establish some degree of control over the IP address consumption by limiting the number of concurrent sessions that users are allowed to establish. If each user is only allowed to have one or two concurrent sessions, then you will consume fewer IP addresses (not to mention fewer host resources) than you would if each user could launch an unlimited number of virtual desktops.
I also recommend adopting an automated IP address management tool. There are a number of third-party options on the market; Windows Server 2012 and 2012 R2 also include IP address management software in the form of the Microsoft IPAM feature.
Like any other form of resource consumption, IP address usage tends to evolve over time. To that end, it is extremely important to track IP address usage over the long term so you can project if or when your IP address pools are in danger of depletion.
An IP address management tool should also include an alerting mechanism that responds to situations where a DHCP pool runs low on addresses; the depletion of a DHCP scope can result in a service outage for some users. Using an automated software application to track scope usage is the best way to make sure that you are never caught off guard.
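The alerting an IPAM tool provides boils down to a utilization-threshold check on each scope, which you could prototype as follows (the 85% threshold is an assumption, not from the article):

```python
def scope_alert(leased, scope_size, threshold=0.85):
    """Return True when a DHCP scope's utilization crosses the alert
    threshold, i.e. before depletion causes a service outage."""
    return leased / scope_size >= threshold

print(scope_alert(230, 254))  # a /24 scope at ~90% leased -> alert
print(scope_alert(100, 254))  # ~39% leased -> no alert
```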
Reference: http://searchvirtualdesktop.techtarget.com/tip/Three-ways-virtualization-increases-IP-address-consumption
Author: Brien Posey - http://www.techtarget.com/contributor/Brien-Posey
The hidden costs of storage when building VDI.
Virtual Desktop Storage Basics: The hidden costs of VDI storage
Virtual desktops may make management easier, but proper planning is needed to reduce storage bottlenecks, ensure performance and accommodate growth. Storage subsystems can ease VDI deployments, but their costs can balloon if you don't follow best practices for managing them. In this final segment of a four-part e-book, we'll help you understand how VDI storage affects your bottom line.
Even though virtual desktops run as instances within a server's memory, the desktop image, applications and every single fragment of user data require storage—and that storage costs money. Any organization that contemplates a move to desktop virtualization must also understand the many costs associated with acquiring, installing, managing, maintaining and protecting their users' desktops. Below are the cost points and lesser-known implications of storage for VDI deployments.
The known costs of VDI storage
Several common storage costs demand immediate attention. There are the initial capital expenditures for the storage system, the disks and any software that supports the storage. The installation costs should also be considered -- particularly for a large storage chassis that might require specific power or cooling support from the data center. Storage also needs to be administered and managed, and tasks such as provisioning, monitoring, migration, troubleshooting and backup restoration require human involvement. These costs are present in any storage project, but large VDI deployments can take a real bite out of the capital budget.
Fortunately, VDI storage costs can also be mitigated using many of the same tactics and technologies that are brought to bear on other storage projects. For example, thin provisioning allows storage to be logically allocated without all of that disk space actually being in place. Since it may take considerable time to consume all of the allocated space, thin provisioning is a way to defer the expense of new disks until the space is really needed.
Data deduplication can dramatically reduce storage use by removing redundant blocks and files from the storage system. In effect, only one complete copy of data is actually stored on disk -- redundant data is simply removed and redirected to the one working copy. As long as the storage system can perform data deduplication on live data in real time without significantly affecting storage performance, it can be an enormous boost to storage efficiency.
Tiered storage can also be implemented to reduce VDI storage costs. For example, golden desktop and application images that form the foundation of virtual desktops can be stored on more-expensive but higher-performance disks, while user data can be relegated to larger and less-expensive disk types. In some situations, solid-state drives (SSDs) or hybrid drives can be employed in the top tier. "You can deploy SSDs as a significant aspect of your storage tier," said Ray Lucchesi, president and founder of Silverton Consulting. "With SSDs, you'll sustain better throughput and random I/O."
Finally, reducing the scope of your VDI deployment will shrink the overall storage requirements -- also reducing the cost of VDI storage in your environment. "The majority of organizations that we work with are clearly looking just for a segment of their end users to deploy desktops to," said Mark Bowker, an analyst at Enterprise Strategy Group. He notes that organizations pursuing VDI for specific tactical reasons may shoulder the expense even when the return on investment may not make sense. Organizations planning to adopt virtual desktops must consider the needs of the user base and understand their limitations. Users who require the versatility found in traditional desktop/laptop systems may not be good candidates for virtualization, particularly if they demand frequent changes or customization.
Hidden costs of VDI storage
But organizations enamored with the promise of VDI can easily incur a variety of lesser-known costs that, if left unmanaged, can balloon storage needs (and costs). A common mistake is settling for "fat" desktops where a virtual desktop includes all of the configuration files, operating systems, applications and user data. This approach works, but it is far more storage than a virtual desktop instance actually needs. When this oversized image is multiplied by the total number of "fat" desktops and then multiplied by the additional storage needed for backups and disaster recovery, the storage requirements can easily overwhelm the purported cost benefits of VDI.
Desktop provisioning administration can also be problematic and costly. VDI makes sense only when an administrator can provision and update a large number of desktops using automated techniques such as scripts. Desktop provisioning involves creating a virtual machine, installing the OS, creating a template, customizing the template and then cloning the boot image to the production desktop on the VDI server.
It might only take 20 minutes or so to tackle these tasks manually, but multiplied by dozens or even hundreds or thousands of desktops, the administrative problems become insurmountable. Patching also requires manual processes that can be equally problematic and time-consuming. "Most people might find that the biggest cost is administration," Lucchesi said.
These two challenges can often be mitigated by creating "thin" desktops. For example, administrators can thin-provision a volume, build a golden image with an operating system and applications in a template, and then create writable snapshots to be assigned to each end user. User data is stored apart from the desktop image -- perhaps on a different storage system. The interrelated components can be used and reused without creating entirely new images, and it eliminates the manual cloning process.
Patching can also be accomplished automatically while leaving user settings and data alone. Virtualization tools such as VMware's View Composer allow administrators to make images that share virtual disks with a master image, using less storage. In addition, VMware's View Manager streamlines and automates virtual desktop provisioning and management.
Poorly implemented storage architectures and inefficient desktop provisioning can result in poor load times and unresponsive applications, as well as costly lost productivity and angry users. Proper storage system implementation is critical for adequate disk performance under random I/O workloads and network resilience to prevent access disruptions. Storage systems with advanced caching can share desktop images from the cache, radically improving load times for users where desktop images are almost identical.
Source: http://searchvirtualdesktop.techtarget.com/tip/Virtual-Desktop-Storage-Basics-The-hidden-costs-of-VDI-storage
Author: Stephen J. Bigelow - Senior Technology Editor - http://www.techtarget.com/contributor/Stephen-J-Bigelow
A series on the costs of building VDI (VDI = Very Dumb Ideas???)
It’s been said before that implementing virtual desktop infrastructure isn’t about saving money. And while that’s true—generally VDI only saves money when you can have fewer total virtual desktops than users— we’d like to take this truth a step further. In fact, if you’re thinking about implementing VDI, it’s likely going to cost you money.
So what makes VDI such a cost suck? The following are five hidden costs associated with VDI implementation:
1. Unmet storage needs – Getting started with VDI involves significant storage costs, and when storage needs are underestimated, it can make matters worse. As companies try to scale VDI, not having enough storage can lead to performance problems and additional costs.
2. Windows licensing (VECD) – Every VDI environment needs a VECD license, which will cost upwards of $23 per device per year, depending on your current licensing agreement with Microsoft. VECD serves as a subscription license for access points that do not have a qualifying copy of Windows. There is no way to avoid this additional cost and still be contractually compliant.
3. Non-compatible apps – App virtualization products aren’t compatible with all apps, requiring these “non-compatible apps” to be dealt with in other ways. If VDI is only securing a portion of device applications, finding a secondary solution means spending more money on a secondary product. If the point of VDI implementation is to save money, this con does just the opposite.
4. Bandwidth Costs – To ensure a reasonable experience for users, bandwidth must be plentiful and always available. Many users hitting the system simultaneously can create significant performance drags and have productivity implications. Installing high speed routers and other network infrastructure attaches a substantial cost to any VDI project.
5. Energy Costs – With all the VDI infrastructure resident in the data center, energy costs increase significantly. As the ratio of users to servers is relatively low in VDI – in some instances fewer than 30 users per server – a large number of new servers are required to support them, and those servers need power to run. While high-density/high-performance servers can be used, they also have a large power footprint due to the cooling required to keep them operational.
If VDI doesn’t save your company money and add to your bottom line, why would you implement it? You wouldn’t! Fortunately, there are other more cost efficient solutions for implementing BYO in the enterprise space that will not only save you more money, but will provide your employees with a more productive solution. With a reliance on network connectivity for an effective user experience, an increased cost of shared storage infrastructure, and increased exposure to data centers that could result in a security breach, VDI turns into a VERY DUMB IDEA.
Source: http://datacenterpost.com/2013/09/dont-get-duped-5-hidden-costs-virtual-desktop-infrastructure-vdi/
Author: John Whaley, Founder and CTO of Moka5
A series on the costs of building VDI.
Brian Madden has an excellent post up today called The hidden costs of VDI. I’ve been working nearly full time the last two months helping to put together a Microsoft Services offering around desktop virtualization in general and VDI in particular so have spent a lot of time looking into both the technical and business considerations that must be taken into account. I’d summarize his post in three points:
- TCO models, like statistics, can be made to tell any story you or a vendor wants
- Cost models typically assume full replacement of legacy systems to show maximum benefit, but this rarely occurs due to technical, political, or other unforeseen reasons
- Since VDI is relatively new (compared to traditional desktops and Terminal Services/Citrix Server-based Computing), there are a lot of technical and compatibility issues and considerations that are not well understood outside a small group of experts
As a well-known fan of and expert on Server Based Computing (SBC), i.e. Terminal Services or Citrix Presentation Server/XenApp, Brian prefaced the article by saying that he likes VDI "where it makes sense". He correctly points out that nearly all vendors and TCO models show that Server Based Computing still provides the lowest TCO due to its high user density, but that there are limitations which make other approaches such as VDI relevant.
That is where I’ll jump in with my thoughts because I completely agree with those statements and it has been the foundation of the offering I have been working on. It starts with the notion of flexible desktop computing and desktop optimization that Microsoft has been talking about for some time now. An overview of this approach is presented in this whitepaper. To summarize, there are a variety of ways that a desktop computing environment can be delivered to users ranging from traditional desktops, to server based computing, to VDI, with a multitude of variations in between with the addition of virtualization at the layers illustrated below:
Rather than selecting a one-size-fits-all solution, virtualization provides architects a new, more flexible set of choices that can be combined to optimize the cost and user experience of the desktop infrastructure. The following four steps lead to an optimized solution:
Define User Types: Analyze your user base and define categories such as Mobile Workers, Information Workers, Task Workers, etc. and the percent distribution of users among them. The requirements of these user types will be utilized to select the appropriate mix of enabling technologies.
Define Desktop Architecture Patterns: Each architecture pattern should consist of a device type (thin client, PC, etc.) and a choice of:
- OS execution (Local, Desktop Virtualization, or Server Based Computing)
- App execution (Local, Application Virtualization, or Application Remoting)
- Display (Local or Presentation Virtualization)
For each pattern, determine which user types it can be applied to. For example, with mobile or potentially disconnected users, presentation virtualization alone would not be applicable as it requires a network connection. Power users may require a full workstation environment for resource intensive applications but may be able to leverage application virtualization for others. These are just a few examples where different user groups have different requirements.
Determine TCO for each Architecture Pattern: Use a recognized TCO model to determine the TCO for each pattern. Minor adjustments to these models can be made to account for specific technology differences, but most include TCO values for PCs, PCs with virtualized apps, VDI, and TS/Citrix thin client scenarios. Be wary of vendor-provided TCO models. To Brian’s points, be sure to gain a full and complete understanding of the chosen TCO model and what it does and does not include. Consistent application of the model across the different architecture patterns is critical for relevant comparisons.
Model Desktop Optimization Scenarios: With the above data, appropriate architecture patterns can be selected for each user type by choosing the lowest TCO architecture pattern that still meets user requirements. By varying the user distribution and selected architecture patterns, an optimized mix can be determined. It is tempting to simply choose the lowest TCO architecture pattern for all users but this can be very dangerous in that it will typically impact your high value, power users the most if their requirements are not accounted for.
A one-size-fits-all approach would result in either a large number of PCs if not using virtualization, a large number of servers if virtualizing everything, or failure to meet power user needs if using only server based computing. An optimized solution is one which utilizes the right mix of technologies to provide the required functionality for each user type at the lowest average TCO. Combined with a unified management system that handles physical and virtual resources across devices, operating systems, and applications, substantial cost savings can be realized.
As I mentioned at the top, a lot of the concepts in addition to very detailed architecture and implementation guidance are part of the Microsoft Services Core IO offerings. For the last two years, in addition to my customer work I have been deeply involved in the creation of the Server Virtualization with Advanced Management (SVAM) offering. The work I mentioned above around VDI architecture will complement that and be available later this summer. Finally, specific to desktop imaging, deployment, and optimization, there is also the Desktop Optimization using Windows Vista and 2007 Microsoft Office System (DOVO) offering. Taken together in concert with the underlying product suites, these illustrate Microsoft’s “desktop to datacenter” solutions and how to plan, design, and implement them.
Source: http://blogs.technet.com/b/davidzi/archive/2009/05/11/finding-the-hidden-costs-of-vdi.aspx
Author: DavidZie - http://social.technet.microsoft.com/profile/DavidZie
A series of posts on the costs involved in setting up a VDI system.
The Secret Bottleneck of VDI
by Mark Lockwood | October 10, 2014
For years, organizations deploying VDI for the first time have felt the sting of the most common issue that plagues virtual desktops: inadequate performance from shared storage. The storage infrastructure that has worked well for decades when used for servers is found to be dramatically lacking when virtual desktops are deployed to it; for that reason, VDI has received many black eyes (and in some CxO offices, a permanent bad reputation) and that has been very difficult to overcome.
Fortunately, the available options for shared storage are multiplying, with VDI driving a good portion of that change. An organization looking to deploy VDI today has all-flash array options, de-duplication offerings, hyper-converged (scale-out) systems, and even local storage aggregation products available – all of which can not only increase performance of VDI, but lower the cost as well. Although the price point of an individual virtual desktop can be endlessly debated and is largely dependent on the performance requirements of the end user, it is now a safe assumption that a virtual desktop can be made to perform as well as, or better than, a physical machine at a reasonably competitive price.
Obviously, that means VDI performance issues are behind us, right? Unfortunately, the answer is no – at least for organizations looking to run hundreds or thousands of virtual desktops. The hurdle of “slow storage” was so large and so overwhelming for VDI deployments that it was often functionally impossible to see if there were other bottlenecks in the system. The problem these organizations encounter, once storage has been “solved”, is that suddenly mass events, such as anti-virus scans and updates, inventory scans, and software distribution have all the IOPS they can handle, so they begin competing for the other parts of the shared infrastructure: network, CPU, and memory.
What happens when 150 persistent virtual desktops, all hosted on the same physical VDI host, are told to download and install a 70 MB patch, and there is no disk I/O contention? One hundred and fifty instances of a software delivery agent spring to life, churning against the vCPUs assigned to the virtual desktops. The patch is retrieved. The patch is expanded. The patch is installed. Post-installation cleanup occurs. And don’t forget the reboot. The performance problem is no longer the disk, but it still very much exists; one bottleneck has been removed only to reveal another.
As you can imagine, the great benefit of virtualization – oversubscription – absolutely cannot deliver even close to adequate performance to these virtual desktops all clamoring for cycles at the same time. Often, the result is an end user experience that ranges from slow to unusable. Desktops that should update in a minute or two are so starved for resources that they take 30 minutes or more, during which time they have nearly zero cycles to dedicate to the work of the user. Ultimately, this turns into hallway chatter: “VDI is awful.” “I want my desktop back.” “This is the worst technology I’ve ever used.”
How do you avoid this? In my latest research document, “Selecting the Right Application Delivery Model for Virtual Desktops” (available to Gartner for Technical Professionals clients), alternative ways to deliver applications (and updates, and patches) to persistent virtual desktops are reviewed in detail. By comparing options such as Server-Based Computing, Application Virtualization, and Application Layering to traditional, physical-focused application management tools, I hope that organizations will be able to see and avoid this issue before it happens, or remediate the problem if it has already been encountered.
Like most technology, none of these solutions are one-size-fits-all. However, among the options discussed, I believe most will find a solution that will reduce or eliminate this bottleneck. And good news – once you have solved storage and solved application delivery, the performance hurdles of VDI finally begin to get smaller. Don’t lose hope!
Which application delivery approach are you currently using for your VDI implementation, and how well is it working for you? Let us know in the comments.
Source: http://blogs.gartner.com/mark-lockwood/2014/10/10/the-secret-bottleneck-of-vdi/
Author: Mark Lockwood, Research Director, 1 year at Gartner, 20 years in the IT industry - full bio: http://www.gartner.com/AnalystBiography?authorId=49949
Windows Server 2012 introduces some significant improvements to RemoteFX™. One feature, RemoteFX vGPU, already present in Windows Server 2008 R2, is the ability to use a physical graphics adapter (GPU) in the Hyper-V host to accelerate the host-side rendering of display content. This guide describes the configuration steps to leverage RemoteFX vGPU in Windows Server 2012. For Windows Server 2012 R2 changes, please see the following RDS blog post: RemoteFX vGPU Improvements in Windows Server 2012 R2
What is RemoteFX?
RemoteFX in Windows Server 2012 is a suite of improvements to the Microsoft Remote Display Protocol (RDP). It optimizes the display experience for remote users, even on constrained networks. Additionally, RemoteFX improves access to peripheral devices attached to the client, e.g. via USB.
The vGPU feature of RemoteFX makes it possible for multiple virtual machines to share a physical graphics adapter. The virtual machines are able to offload the rendering of graphics information from the processor to the dedicated graphics adapter. This decreases the CPU load and improves scalability for graphics-intensive workloads that run in the VDI virtual machines.
Requirements for RemoteFX vGPU
To offload graphically intensive workloads from the CPU to a physical GPU in Windows Server 2012, the following hardware is required:
- A SLAT-capable processor (Second Level Address Translation). AMD calls this processor feature “NPT” (Nested Page Table), Intel calls it “EPT” (Extended Page Table).
- A DirectX 11-capable GPU with a WDDM 1.2 compatible driver.
Note: Windows 7 with SP1 virtual machines with RemoteFX vGPU enabled require a DirectX 11-capable GPU on Windows Server 2012; however, the Windows 7 with SP1 virtual machine will still only support DirectX 9 for RemoteFX vGPU in this configuration.
- Windows 7 Enterprise with SP1 or Windows 8 Enterprise as guest operating system in a virtual machine with the RemoteFX 3D Video Adapter enabled.
Checking the requirements
SLAT CPU Support
Before enabling the Hyper-V role on the server, the following tools can be used to find out whether the server CPU supports SLAT:
On an elevated command prompt, run the Systeminfo command. At the very end of the output, in the Hyper-V Requirements section, a system with a SLAT-capable CPU should report “Second Level Address Translation: Yes”.
The free Coreinfo tool can be downloaded from the Sysinternals pages on TechNet (http://technet.microsoft.com/en-us/sysinternals/cc835722.aspx). It has to be run on an elevated command prompt before installing Hyper-V.
C:\coreinfo.exe -accepteula -v
If the CPU meets the requirements, both lines (HYPERVISOR and EPT on an Intel system) are marked with a star symbol (*).
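Both checks can be run from an elevated command prompt; a minimal sketch (the path to coreinfo.exe is taken from the example above and may differ on your system):

```shell
:: Check SLAT support with the built-in Systeminfo tool. On a SLAT-capable
:: CPU the Hyper-V Requirements section at the end of the output reports
:: "Second Level Address Translation: Yes".
systeminfo | findstr /C:"Second Level Address Translation"

:: Check SLAT support with Sysinternals Coreinfo (run before installing Hyper-V).
:: Look for a "*" next to the EPT (Intel) or NPT (AMD) line.
C:\coreinfo.exe -accepteula -v
```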
DirectX 11 compatible graphic adapter
Hardware vendor web page
In order to verify whether the GPU is DirectX 11 ready you can use the web page of the manufacturer of the graphic adapter.
Microsoft Windows Server Catalog
To leverage RemoteFX vGPU in an enterprise environment it is recommended to use a graphics adapter listed on the Windows Server Catalog. Windows Server 2012 certified GPUs are listed here.
DirectX Diagnostic Tool (DXDiag.exe)
Run the DXDiag command at an elevated command prompt:
The DirectX Diagnostic Tool will be displayed. Check the line “DirectX Version” for DirectX 11.
In addition to that, please verify the Feature Level information at the "Display" tab of DXdiag. The GPU Feature Level has to be at least 11.0 for RemoteFX vGPU in Windows Server 2012.
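DXDiag can also write its report to a text file, which is convenient for checking both values at once; a sketch, assuming the output path is writable:

```shell
:: Launch the interactive DirectX Diagnostic Tool.
dxdiag

:: Or save the full report to a text file and check the DirectX version
:: and the GPU feature levels (at least 11.0 is required for RemoteFX vGPU
:: in Windows Server 2012).
dxdiag /t C:\Temp\dxdiag.txt
findstr /C:"DirectX Version" /C:"Feature Levels" C:\Temp\dxdiag.txt
```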
WDDM 1.2 compatible driver
Use the DXdiag tool to verify the driver for the graphics adapter on the Windows Server 2012 host is Windows Display Driver Model (WDDM) 1.2 compatible (NOTE: ensure you are logged on to the physical server; do not use a Remote Desktop connection since it will not display the graphics properties):
Installation and configuration of RemoteFX vGPU
- Install Windows Server 2012 on the server.
- Verify SLAT-support of the CPU using Systeminfo or Coreinfo.
- Install Windows Updates.
- Install the latest available display driver which at least supports WDDM 1.2 and DirectX 11 from the manufacturer of the graphic adapter.
- Verify DirectX 11 with Dxdiag.
- Verify the Windows SKU running inside the virtual machine is “Windows 8 Enterprise” or “Windows 7 Enterprise“, which is required for RemoteFX vGPU support.
- Install Hyper-V Role using Server Manager and reboot the server.
- Install the Remote Desktop Virtualization Host role service using Server Manager, or use the Remote Desktop Standard Deployment Wizard for a full setup of all Remote Desktop Services roles. For the isolated installation of the role, you have to select “Role-based or feature-based installation“. After the role setup is completed, the server must be restarted.
- Open Hyper-V Manager and select the physical GPU support in Hyper-V Settings.
- Add or import the Windows 8 Enterprise or Windows 7 Enterprise virtual machines onto the Windows Server 2012 host.
- Add the RemoteFX 3D Graphics adapter to the Windows 8 Enterprise or Windows 7 Enterprise virtual machine by navigating to the virtual machine settings and selecting “Add Hardware”.
- Select “RemoteFX 3D Video Adapter” and click “Add”
- Navigate to the properties of the added “RemoteFX 3D Video Adapter” and configure the maximum number of monitors and resolution that will be used by the Remote Desktop clients that will connect to this virtual machine.
- Commit the changes to the properties of the virtual machine and launch the virtual machine to configure the Windows 8 Enterprise or Windows 7 SP1 Enterprise client.
Note: certain steps must be applied to allow Remote Desktop connections for a Windows 7 with SP1 Enterprise client:
- Hyper-V Integrated Services must be updated
- Allow Remote Desktop to communicate through Windows Firewall
- Optional: Add the local Administrator account to the Remote Desktop Users Group. This enables the local Administrator remote access. Any Domain Administrators added to Administrators Group will have access by default.
- When using Remote Desktop Connection, change the Performance Experience to LAN. This enables the RemoteFX Adapter to function correctly in the virtual machine
- After the virtual machine restarts, you will see a black screen on the virtual machine console with the message “Video Remoting was disconnected. The virtual machine is using the 3D video adaptor, which is not supported by the Virtual Machine Connection console.” This is expected, and you will not be able to log on to the virtual machine from the Virtual Machine Connection. You will be able to remotely log on to the virtual machine by using an account that is a member of the Remote Desktop Users group on the virtual machine.
- After configuring the virtual machine, check the Device Manager in the virtual machine to verify that the “Microsoft RemoteFX Graphics Device – WDDM“ is recognized as a display adapter when using the Remote Desktop Client to connect
- DXDiag can also be used to verify the display adapter. Within the virtual machine using a Remote Desktop Session, launch the DXDiag tool and confirm on the display tab that in fact the “Microsoft RemoteFX Graphics Device – WDDM” has been enabled.
- RemoteFX vGPU is now configured and the virtual machines can be leveraged in a VDI Deployment.
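The host-side portion of the steps above can also be scripted; a hedged PowerShell sketch using the Server Manager and Hyper-V cmdlets (the VM name “Win8-VDI” is an example, and exact cmdlet parameters should be verified against your build):

```shell
# Install Hyper-V and the RD Virtualization Host role service (each requires a reboot).
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
Install-WindowsFeature -Name RDS-Virtualization -Restart

# Select the physical GPU for RemoteFX (equivalent to the Hyper-V Settings dialog).
Get-VMRemoteFXPhysicalVideoAdapter | Enable-VMRemoteFXPhysicalVideoAdapter

# Add the RemoteFX 3D Video Adapter to a VM and configure monitors and resolution.
Add-VMRemoteFx3dVideoAdapter -VMName "Win8-VDI"
Set-VMRemoteFx3dVideoAdapter -VMName "Win8-VDI" -MonitorCount 2 -MaximumResolution "1920x1200"
```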
Finally, it is important to use the scenario-based deployment option in Server Manager to set up and configure a Remote Desktop Services infrastructure. Only this option will install the other required roles, like RD Licensing, RD Web Access and RD Connection Broker, to enable end user access to the virtual machines.
Have fun with RemoteFX and the Remote Desktop Services in Windows Server 2012!
Frequently Asked Questions
What are the requirements to use RemoteFX graphics acceleration by using the vGPU?
- Hyper-V running on Full Installation of Windows Server 2008 R2 SP1 or Windows Server 2012
- DX11-capable GPU with WDDM 1.2 driver
- SLAT-Capable processor
- Remote Desktop Virtualization Host role service must be installed (to enable RemoteFX vGPU)
- Hyper-V must have Physical GPUs enabled for use with RemoteFX vGPU
- The virtual machine must have the “RemoteFX 3D Video Adapter” added
- The Windows SKU running inside the virtual machine must be “Windows 8 Enterprise” or “Windows 7 Enterprise”
How can I determine if my system has a SLAT supported processor?
To use RemoteFX/vGPU with Hyper-V a SLAT supported processor must be present.
1) The Coreinfo utility from Sysinternals can be used to verify the processor supports SLAT: http://technet.microsoft.com/en-us/sysinternals/cc835722
2) Open the command prompt as an administrator on the server host
3) Run the following commands:
- A temporary drive letter will appear
Coreinfo -accepteula -v
- The output should return results for the following: Hypervisor and EPT. The EPT parameter should have a “*”, indicating the processor supports Intel extended page tables (SLAT)
What types of vGPU are supported with RemoteFX?
When running Windows Server 2012 with the RemoteFX vGPU, the host must have a DX11.1 (WDDM 1.2) capable graphics card and driver. DX9 / DX 10 only capable GPUs are no longer supported for use with the RemoteFX vGPU on Windows Server 2012.
Note: The above statement applies to both Windows 7 with SP1 virtual machines and Windows 8 virtual machines that leverage the RemoteFX vGPU on Windows Server 2012. See the following blog article for more information including links to list of supported cards from Nvidia and AMD:http://blogs.msdn.com/b/rds/archive/2012/06/13/richvgpu.aspx
Can I use multiple types of GPUs?
No, if more than one GPU is installed, the GPUs need to be identical. The GPU must have sufficient dedicated memory that is separate from system memory.
What versions of Windows are supported inside a virtual machine to use the vGPU?
Not every version of Windows enables use of the vGPU, even if the vGPU is enabled in Hyper-V for the given virtual machine you are connecting to. Ensure that you are running Enterprise version of Windows client.
You can use one of these options to verify you are running enterprise version:
- Navigate to licensing options / system info
- Open a command prompt and run the “Systeminfo” command; one of the returned parameters (“OS Name”) will show the version of Windows you are running.
How can I determine the RemoteFX vGPU is utilized in a RemoteFX/RDP session?
- Confirm the device connecting supports RDP 7.1 (RemoteFX Codec) or RDP 8.
- Confirm the virtual machine is configured to use RemoteFX with the vGPU:
a. On the Hyper-V host system, launch the Hyper-V Manager and go to “Hyper-V Settings” (right-click on the host).
b. Confirm the Physical GPU is present and the “Use this GPU with RemoteFX” is enabled.
c. Navigate to the Windows 8 virtual machine. Then navigate to the properties of the virtual machine and confirm the “RemoteFX 3D Video Adapter” is in the hardware list. If it is not in the list, use the “Add Hardware” option to add the “RemoteFX 3D Video Adapter” to the virtual machine.
- In the Windows 8 virtual machine itself confirm the “Microsoft RemoteFX Graphics Device – WDDM” video adapter is there:
- To confirm RemoteFX/vGPU is enabled in the virtual machine you are connecting to, review the Windows event log on the Windows 8 virtual machine:
- Launch eventvwr.exe.
- In the eventviewer look for the following in the tree on the left:
“Applications and Services Logs” > “RemoteDesktopServices-RdpCoreTS” > “Operational”
- In the listview in the middle look for event ID 34:
- The detail of the event should show “Remote Desktop Protocol will use the RemoteFX host mode module to connect to the computer.”:
- If you see event ID 33 then it means the vGPU is not enabled, for example when connecting to a virtual machine running RDSH.
- For an RD Session Host on Windows Server 2008 R2 the RemoteFX RDP 7.1 Codec can be forced using the following online documentation: http://technet.microsoft.com/en-us/library/ff817595(v=ws.10).aspx
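The event-log check can also be done from PowerShell inside the virtual machine; a sketch using the log name and event IDs described above:

```shell
# Query the RdpCoreTS operational log for the RemoteFX negotiation result.
# Event ID 34: RemoteFX host mode (vGPU) in use; event ID 33: vGPU not in use.
Get-WinEvent -LogName "Microsoft-Windows-RemoteDesktopServices-RdpCoreTS/Operational" |
    Where-Object { $_.Id -eq 33 -or $_.Id -eq 34 } |
    Select-Object -First 5 TimeCreated, Id, Message
```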
Are there any performance tests I can run to show the benefit of vGPU?
From a performance point of view, you can use the following examples to compare the frame rate between a virtual machine without vGPU enabled in Hyper-V and a virtual machine with vGPU enabled:
- Fishbowl (for example, set 100 fish and compare side by side with a virtual machine that does not have vGPU enabled)
- Particle Acceleration
I don’t see a difference between the vGPU and non-vGPU virtual machine?
When you do not notice any difference between a virtual machine with vGPU enabled and a virtual machine without vGPU enabled then confirm the following:
- Which version (Enterprise or Pro) of Windows 8 is the vGPU-enabled virtual machine running? The RemoteFX vGPU hardware acceleration is only supported on the Enterprise version of Windows.
- RD Session Host does not support the vGPU, however with RD Session Host you will still have the benefits of software emulation of the GPU on the Windows Server 2012 RD Session Host server.
What performance counters are available to determine RemoteFX performance issues?
The following performance counters are available for RemoteFX Graphics on Windows 8 / Windows Server 2012:
- RemoteFX Graphics:
a) Average Encoding Time
b) Frames Skipped/Second – Insufficient Client Resources
- RemoteFX Network:
a) Insufficient Network Resources
b) Insufficient Server Resources
If the bottleneck is encoding speed, 1a and 2b will be high. If the bottleneck is bandwidth, 2a will be high. If the bottleneck is client speed, 1b will be high.
Note: frames per second measured inside a virtual machine may not in itself be a good measure, as other factors such as bandwidth and server resources may all play a role.
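These counters can be sampled with Get-Counter; a sketch, with the caveat that the exact counter paths and instance names vary by OS version, so wildcards are used and the “Frames Skipped” counter names should be verified in Performance Monitor first:

```shell
# Sample the RemoteFX Graphics counters a few times to spot the bottleneck.
Get-Counter -SampleInterval 2 -MaxSamples 5 -Counter @(
    "\RemoteFX Graphics(*)\Average Encoding Time",
    "\RemoteFX Graphics(*)\Frames Skipped/Second - Insufficient Client Resources",
    "\RemoteFX Graphics(*)\Frames Skipped/Second - Insufficient Network Resources",
    "\RemoteFX Graphics(*)\Frames Skipped/Second - Insufficient Server Resources"
)
```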
I am seeing a blank screen when connecting to a Windows 8 Enterprise virtual machine with vGPU enabled?
Verify the server running Hyper-V is running the same version/build as the version running in the virtual machine. For example, if the Windows 8 virtual machine is running build 9200 and the RD Virtualization Host server a prerelease build 8400, then the Hyper-V drivers are not compatible and a black screen will be displayed. Ensure the same build is used to resolve the problem. Also check that you are running an Enterprise SKU that has support for RemoteFX vGPU (i.e. either Windows 8 Enterprise or Windows 7 Enterprise with SP1).
What monitor configurations are supported when connecting to a Windows 8 Enterprise or Windows 7 with Service Pack 1 virtual machine with RemoteFX vGPU enabled?
Maximum monitor resolutions supported per RemoteFX vGPU enabled virtual machine (Windows 7 Enterprise with SP1 on Windows Server 2008 R2 SP1 Hyper-V or Windows Server 2012, and Windows 8 Enterprise on Windows Server 2012 Hyper-V):
- 1024 x 768
- 1280 x 1024
- 1600 x 1200
- 1920 x 1200
- 2560 x 1600
Monitor resolutions that can be used in landscape and portrait modes:
- 640 x 480
- 800 x 600
- 1024 x 768
- 1280 x 720
- 1280 x 768
- 1280 x 800
- 1280 x 1024
- 1366 x 768
- 1400 x 1050
- 1440 x 900
- 1600 x 1050
- 1600 x 1200
- 1920 x 1080
- 1920 x 1200
- 2048 x 1080
- 2048 x 1536
- 2560 x 1440
- 2560 x 1600
Physical GPU in Windows Server 2012 Hyper-V settings is unavailable post domain join?
When a Windows Server 2012 Remote Desktop Virtualization Host is added to a domain and the default domain policy is applied, the option to select a physical GPU used for Remote FX (within Hyper-V settings) is unavailable. There is a known issue which has been addressed in Windows Server 2012 R2. For more information on the root cause and how to address on Windows Server 2012, please see KB2878821 .
In this guide, we will show how to set up and configure RemoteFX for a Hyper-V host running Windows Server 2012 R2. The host is used for testing and development purposes and is not a member of a domain. (In this guide, the name of the host machine we are configuring will be “Black”.)
In particular, we are interested in having a synthetic DirectX 11-capable graphics adapter in our virtual machines. This is known as the RemoteFX vGPU feature.
The host machine has a Core i7 processor (Haswell generation), and the graphics adapter is an NVIDIA Quadro K620. The hardware therefore meets the requirements for RemoteFX, so we will not discuss the hardware requirements in detail.
Let’s start with setting up the host machine.
Add the Hyper-V Role
- After the basic setup of Windows Server 2012 R2 Datacenter, ask Windows Update to install all available updates. Also be sure that the latest available display driver is installed.
- Then the Hyper-V role needs to be added. In Server Manager, choose Add Roles and Features from the Manage menu. Choose a Role-based or feature-based installation. Then select the local server to add the role to.
- Add the Hyper-V role. Confirm to also add all required features suggested by the wizard and let the wizard finalize the job.
- As usual, after adding the Hyper-V role the Hyper-V Manager will be available.
Add the Remote Desktop Virtualization Host Role Service and the Remote Desktop Licensing Role Service
For RemoteFX to work, the Remote Desktop Virtualization Host role service must be up and running on the Hyper-V host machine.
Also a Remote Desktop Licensing Server must be available in the network. The RD Virtualization Host needs the RD Licensing Server to confirm the availability of Remote Desktop Client Access Licenses.
Since our installation is for development and testing, we will set up the licensing server on the same machine as the Hyper-V host.
- In Server Manager, choose Add Roles and Features from the Manage menu. Choose a Role-based or feature-based installation. Then select the local server to add the role to.
- Add both the Remote Desktop Licensing role service and the Remote Desktop Virtualization Host role service. Confirm to also add all required features suggested by the wizard and let the wizard finalize the job.
- Now the Remote Desktop Licensing Manager and the RD Licensing Diagnoser will be available.
Install Remote Desktop Licenses
Now let us install some Remote Desktop CALs. (An MSDN subscription is a good way to get RD CALs.)
- Start the Remote Desktop Licensing Manager. Connect to the machine the Remote Desktop Licensing service is running on, i.e. to the local machine in our scenario.
- Open the Properties in the server node’s context menu. Connection method should be Automatic connection. Also type in the Required Information (your name, your company, and your country).
- Now choose to Install Licenses in the server node’s context menu. The Install Licenses Wizard opens up.
- Choose the appropriate License program. In our example, we choose License Pack (Retail Purchase), which is what you typically get from an MSDN subscription.
- Provide the required license information (depends on the chosen license program).
- Let the wizard finalize its job to install the licenses.
- The Remote Desktop Licensing Manager indicates that the task succeeded.
Configure the License Server for the Remote Desktop Virtualization Host
- Open up the RD Licensing Diagnoser. The Licensing Diagnoser tells us that our Remote Desktop Session Host Server cannot find an RD Licensing Server. (We set up an RD Virtualization Host Server rather than an RD Session Host Server, but the diagnosis still applies.)
- The RD Virtualization Host Server locates the RD Licensing Server via group policies. Since our server is not a member of any domain, we use a local group policy. So open up gpedit.msc and navigate to Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Licensing.
- Open up the Use the specified Remote Desktop license servers policy setting. Enable the policy setting and type in the name of the license server to use.
- Then open up the Remote Desktop licensing mode policy setting. Enable the policy setting and specify the licensing mode Per Device. (Remember that we installed Per Device licenses.)
- With these policy settings the RD Licensing Diagnoser is happy now.
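On a standalone host, the same two policy settings can also be applied from an elevated prompt by writing the policy registry values that gpedit.msc manages; a sketch, based on my understanding of the standard RDS licensing policy registry path (“Black” is the host name used in this guide):

```shell
:: Specify the RD license server to use (mirrors the
:: "Use the specified Remote Desktop license servers" policy setting).
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v LicenseServers /t REG_SZ /d "Black" /f

:: Set the licensing mode to Per Device (2); Per User would be 4
:: (mirrors the "Set the Remote Desktop licensing mode" policy setting).
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v LicensingMode /t REG_DWORD /d 2 /f
```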
Choose the GPU to be used with RemoteFX
Now that the Remote Desktop Virtualization Host role is installed, a RemoteFX 3D Video Adapter is available which can be added to virtual machines.
But first we need to associate the RemoteFX 3D Video Adapter with an appropriate physical GPU.
- In Hyper-V Manager, open up Hyper-V Settings.
- Choose the appropriate GPU to be used with RemoteFX.
Windows 8.1 as Guest
First we evaluate RemoteFX with Windows 8.1 as a guest in a virtual machine. The client from which we will connect to the VM is also running Windows 8.1.
Using the RemoteFX 3D Video Adapter in a virtual machine requires Windows 8.1 Enterprise as guest operating system. For Windows 8.1 Pro or Core the RemoteFX 3D Video Adapter will not be offered as available hardware to add.
Also be sure that you configure a Generation 1 VM. Generation 2 does not support RemoteFX.
- Set up a Generation 1 virtual machine with Windows 8.1 Enterprise and ask Windows Update to apply all available updates.
- In Hyper-V Manager, you should configure Integration Services to include Guest services (which are not included by default).
- Windows 8.1 with all available updates has the latest integration services installed to work with Windows Server 2012 R2 as a host. So you do not need to update the Integration Services in the VM.
- Add the RemoteFX 3D Video Adapter to your VM and configure the adapter according to your needs.
- Connect to the VM by a Remote Desktop Connection from a Windows 8.1 client. (Note that connecting to our RemoteFX enabled VM by a Virtual Machine Connection is also supported.)
- RemoteFX can be verified by checking the display adapter in the VM’s Device Manager.Microsoft RemoteFX Graphics Device – WDDM indicates that RemoteFX is available.
- Executing dxdiag.exe from the Run dialog shows the VM’s supported DirectX features levels, the WDDM driver model, and the amount of graphics memory available.
- The VM’s event log will also contain diagnostic information about the remote desktop connection and the feature support negotiated between the VM and the client. In Event Viewer, open up Applications and Services Logs\Microsoft\Windows\RemoteDesktopServices-RdpCoreTS\Operational. Event ID 34 Remote Desktop Protocol will use the RemoteFX host mode module to connect to the client computer indicates that RemoteFX vGPU is enabled.
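The per-VM steps above can be sketched with the Hyper-V cmdlets on the 2012 R2 host (the VM name “Win81-Test” and memory size are examples; cmdlet parameters should be verified against your build):

```shell
# RemoteFX requires a Generation 1 VM; Generation 2 is not supported.
New-VM -Name "Win81-Test" -Generation 1 -MemoryStartupBytes 2GB

# Enable the Guest Service Interface integration service (off by default).
Enable-VMIntegrationService -VMName "Win81-Test" -Name "Guest Service Interface"

# Add and configure the RemoteFX 3D Video Adapter.
Add-VMRemoteFx3dVideoAdapter -VMName "Win81-Test"
Set-VMRemoteFx3dVideoAdapter -VMName "Win81-Test" -MonitorCount 1 -MaximumResolution "1920x1200"
```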
Windows 7 with Service Pack 1 as Guest
Now we evaluate RemoteFX with Windows 7 with Service Pack 1 as a guest in a virtual machine. Again the client from which we connect to the VM is running Windows 8.1.
Using the RemoteFX 3D Video Adapter in a virtual machine is working with Windows 7 Enterprise or Windows 7 Ultimate as guest operating system. (I did not evaluate other Windows 7 SKUs.)
Also be sure that you configure a Generation 1 VM. Generation 2 does not support RemoteFX.
- Set up a virtual machine with Windows 7 with Service Pack 1 Enterprise. Before installing any further updates, check the version of the Remote Desktop Protocol by opening up the About dialog of the Remote Desktop Connection application. The supported RDP version is 7.1.
- Now ask Windows Update to install all available updates. Check the supported RDP version again; it is 8.1 now, so the RDP 8.1 update for Windows 7 SP1 has been installed by Windows Update. But note that as soon as RemoteFX is enabled, the effective RDP version in use when clients connect to this VM will fall back to 7.1.
- In Hyper-V Manager, you should configure Integration Services to include Guest services (which are not included by default). Do not yet add the RemoteFX 3D Video Adapter to the VM. We will do it later.
- We need to update the Integration Services installed in the VM. So insert the Integration Services Setup Disk into the VM and install the latest version.
- When trying to connect to a RemoteFX enabled Windows 7 VM with Virtual Machine Connection, i.e. from Hyper-V Manager without using RDP, you will find out that Virtual Machine Connection is not supported. You must use a Remote Desktop Connection. This is different from a Windows 8.1 VM with RemoteFX where you can connect with Virtual Machine Connection.
- Since a RDP connection is the only way to interact with a RemoteFX enabled Windows 7 VM be sure that the VM has a network adapter and that remote access is allowed in the VM.
- Add all users who should be granted Remote Desktop access explicitly to the Remote Desktop Users group. Do not rely on the dialog claiming that members of the Administrators group can connect even if they are not listed.
- In Windows 7 Service Pack 1, inbound RemoteFX connections to a VM are blocked by Windows Firewall by default. There is a group of firewall rules we need to enable to allow RemoteFX. Inside the Windows 7 VM, open Windows Firewall with Advanced Security and, under Inbound Rules, enable all rules belonging to the group Remote Desktop – RemoteFX.
- Now add the RemoteFX 3D Video Adapter to your VM and configure the adapter according to your needs.
- Start the VM, make the first connect with Virtual Machine Connection, wait for the driver installation to finish, then restart the VM.
- Remember that after the restart you can no longer connect to the VM with Virtual Machine Connection. You must use Remote Desktop Connection instead. When connecting with Remote Desktop Connection, be sure that the user name contains the machine name of the VM.
- When connected to the VM, RemoteFX can be verified by checking the display adapter in the VM’s Device Manager. Microsoft RemoteFX Graphics Device – WDDM indicates that RemoteFX is available.
- Running dxdiag.exe does not show the DirectX feature level clearly but in a Windows 7 VM with RemoteFX it is DX9 even if the physical graphics adapter supports DX11. (A Windows 8.1 VM with RemoteFX supports up to DirectX feature level 11.1 if it is supported by the physical graphics adapter, too.)
- In Event Viewer, open up Applications and Services Logs\Microsoft\Windows\RemoteDesktopServices-RdpCoreTS\Operational. You might not find the expected event ID 34. Instead event ID 33 is present.
- In addition, you will find a warning with event ID 5 in the Admin log claiming that the client does not support RemoteFX.
- When a Windows 8.1 client talks to a Windows 7 SP1 VM via RDP, Remote Desktop Protocol 7.1 is used, which is less optimized for WAN or WLAN connections. Close the connection, i.e. log off. To improve the RemoteFX experience, on the Experience tab of the Remote Desktop Connection client choose LAN (10 Mbps or higher), then connect again. Now at least the warning in the Admin log should be gone.
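The host-side steps above can also be sketched in PowerShell using the Hyper-V module on Windows Server 2012 R2. This is an outline rather than a verified script: the VM name Win7-RFX, the monitor count, and the resolution are assumptions to be replaced with your own values.

```powershell
# --- On the Hyper-V host (Windows Server 2012 R2) ---
$vm = 'Win7-RFX'   # assumed VM name; replace with your own

# Enable Guest services (not enabled by default).
Enable-VMIntegrationService -VMName $vm -Name 'Guest Service Interface'

# Add and configure the RemoteFX 3D Video Adapter.
Add-VMRemoteFx3dVideoAdapter -VMName $vm
Set-VMRemoteFx3dVideoAdapter -VMName $vm -MonitorCount 1 -MaximumResolution '1920x1200'

Start-VM -Name $vm
```

Inside the Windows 7 VM, which lacks the NetFirewallRule cmdlets of Windows 8.1, the RemoteFX firewall rule group can be enabled with netsh instead of the UI (the group name is taken from the Windows Firewall console as described above):

```powershell
netsh advfirewall firewall set rule group="Remote Desktop - RemoteFX" new enable=yes
```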
Comparison: Windows 8.1 vs Windows 7 SP1
The table summarizes the features supported by a Windows 8.1 VM and a Windows 7 SP1 VM. As always in this guide, it is assumed that the VM is hosted on Windows Server 2012 R2 and that RemoteFX vGPU is enabled.
When talking about the effective RD protocol version in use we assume that we connect to the VM from a client machine with Windows 8.1.
| VM Operating System | Max DirectX Feature Level Supported | RDP Version in Use | Virtual Machine Connection Supported | Enhanced Session Mode Supported | Generation 2 VM Supported |
|---|---|---|---|---|---|
| Windows 8.1 | 11.1 | 8.1 | Yes | Yes | No |
| Windows 7 SP1 | 9 | 7.1 | No | No | No |