News
Aug 25

The Veeam Backup Virtual Labs model:

Virtual Lab

The virtual lab is an isolated virtual environment in which Veeam Backup & Replication verifies VMs. In the virtual lab, Veeam Backup & Replication starts VMs from the application group and the verified VM. The virtual lab is used not only for the SureBackup verification procedure, but also for U-AIR and On-Demand Sandbox.

The virtual lab does not require that you provision extra resources for it. You can deploy the virtual lab on any ESX(i) host in your virtual environment.

The virtual lab is fully fenced off from the production environment. The network configuration of the virtual lab mirrors the network configuration of the production environment. For example, if verified VMs and VMs from the application group are located in two logical networks in the production environment, the virtual lab will also have two networks. The networks in the virtual lab will be mapped to corresponding production networks.

VMs in isolated networks have the same IP addresses as in the production network. This lets VMs in the virtual lab function just as they would in the production environment.


 

Building Virtual Labs on vCloud and vSphere (older versions)

Protecting vSphere with Veeam Hands on Lab

Lab Duration: Approximately 1 hour

Lab Skill level: Intermediate

Overview

In this lab you take the role of an administrator at a small company that has just deployed VMware vSphere.
However, due to a growing backup window and expensive software renewal costs, a decision was made to deploy Veeam Backup and Replication in your test environment as a pilot.
The goal of this lab is for you to configure a newly installed Veeam server to protect a test virtual machine. In addition, you will also do a test restore of a Word document from a Windows VM.

Prerequisites

It is recommended that lab users have some familiarity with VMware vCloud Director; therefore, Lab 0 – Intro to vCloud Director is recommended before taking any other labs.

Tasks Include:

  • Creating a Veeam Backup Job
  • Monitoring a Backup Job
  • Restoring a Virtual Machine
  • Restoring a File
  • Replicating a Virtual Machine to a DR ESXi Host
  • Failing over from our production host to our DR host

For the best experience use a device with two monitors. Ideally you would use one screen for displaying this lab manual, and the other for the VMware View Desktop/vCloud Lab environment. If two monitors are not available, other suggestions include using an iPad or other tablet for the lab manual, or working with another person and using one device for the manual and the other for the lab. Printed lab manuals may be available if none of the above are possible.

Step 1 – Login and Lab Deployment

This lab leverages VMware vCloud. To access the lab, navigate to this URL:
https://www.vcloudlab.net/cloud/org/<username>/
(your username and password were in your welcome email)
Login with the username and password assigned to you or your group.

Once logged in you will need to deploy the “Lab 2 – Protecting vSphere with Veeam” vApp. To do this, click on the “Add vApp from Catalog” button. Next, select “Public Catalogs” from the “Look in:” drop-down menu. You should now see a list of the available labs. Select the “Lab 2 – Protecting vSphere with Veeam” lab and click “Next”. You can now name the vApp if you would like, or leave it at the default. The lease settings can be left alone. Click “Finish” to deploy your lab.

vCloud Director will now deploy all of the virtual machines and networking components necessary for your lab. This process should not take more than 10 seconds.
Please proceed to the next step.

Step 2 – Power Up your Lab

Once the lab has been provisioned it will be in a “Stopped” state. To power it up, simply click the green play button on the vApp.

Because vCenter has many services to start, it will take about five minutes to fully boot up. We can review some lab info while we wait.

Step 3 – Lab Information and Setup

This lab is built on top of VMware vCloud Director. vCloud Director leverages VMware vSphere and adds a layer of abstraction so that resources can be offered to consumers through a self-service interface, without them needing knowledge of how those resources are configured on the backend.

This technology is the same thing you would get from a vCloud Powered VMware Service Provider, and can be leveraged for anything from test/dev environments to mission critical business applications.

This lab consists of several pieces:
1.) A Veeam Server – 192.168.2.10 – Credentials: administrator / vmware1
2.) 2 ESXi 5.1 Servers – 192.168.2.20 & 192.168.2.25 – Credentials: root / vmware1
3.) A test VM — this is a virtual machine running inside of one of the ESXi hosts

How the lab works

At this point your lab should be getting close to finishing its boot process. Before we start, let’s take a minute to explain how you will interact with your lab.
The ‘VeeamServer’ virtual machine will be where we spend all of our time in this lab. Because the lab is 100% isolated, the only way to access anything in your lab is through the VeeamServer console.

VMware vCloud Director can be accessed from a VMware View desktop, or directly from your personal machine, as the www.vcloudlab.net URL is publicly accessible. However, for best video performance, using the VMware View Desktop is preferred. Check your welcome email for more information on how to access the View Desktops.

Step 4 – Opening the VeeamServer’s Console

Let’s first open up the vApp so that we can see our individual virtual machines. Click Open on the vApp.

Next, click once on the VeeamServer VM, then click again on the console thumbnail. This will open the console of the VM. Click continue or accept on any SSL warnings. You may also need to allow popups in the browser.

NOTE: You will need to click “Allow” on the Remote Console plugin the first time that you open a console. It will appear at the bottom of the Internet Explorer window at the same time as a popup box telling you to install it. YOU DO NOT need to install it; just click Allow at the bottom, as seen in the following screenshot.


Note: it can take up to three attempts before the console opens, due to popups and allowing the add-on to run. This will largely depend on whether you are using a View desktop or your local machine.

Step 5 – Login to the Veeam Server

At this point you have the VeeamServer console open and probably see the Windows 2008 R2 “Press CTRL ALT Delete to login” screen. The best way to do that is the CTRL+ALT+DEL button in the top right corner of the console window. Find and use that button, then log in with administrator / vmware1.

Step 6 – Launch Veeam Backup and Replication

On the desktop you will find a Veeam icon. Double-click it and let the interface load. Now find the “Infrastructure” section in the left menu. Click on it and look in the main body to the right. You will see that both ESXi servers have been added to Veeam: 192.168.2.20 will serve as our main server, and 192.168.2.25 will serve as our DR server that we will replicate to.

The default “c:\backups” backup repository is what we will use for backups. Normally you would need to create a backup repository before creating a backup job; for lab purposes we don’t need to.
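
As an aside, repositories can also be created from Veeam’s PowerShell snap-in. A minimal sketch, assuming the snap-in cmdlets of this Veeam B&R era; the repository name, server name and folder path are examples, not required values:

```powershell
# Sketch: creating a backup repository via the Veeam PowerShell snap-in.
# "VeeamServer", "Local Backups" and the folder path are lab examples.
Add-PSSnapin VeeamPSSnapin

Add-VBRBackupRepository -Name "Local Backups" `
                        -Server (Get-VBRServer -Name "VeeamServer") `
                        -Folder "C:\Backups" `
                        -Type WinLocal
```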

Step 7 – Create a Backup Job

Click on the “Backup & Replication” section in the left menu. Right now you will see nothing in the right box because we have not created any jobs yet.
Click the “Backup Job” icon at the top of the Veeam Interface.

The job wizard will appear. On the first page name your job “File Server Backup”, then click next.

On page two we need to select which VMs we want to back up. Click the “Add” button on the right side of the wizard. Now click the + beside 192.168.2.20; it will take some time to expand. Once it does, select the “FileServer” VM and click “Add”.

You should then see the FileServer VM added to the list, like below. Click Next to continue.

On the “Storage” page of the wizard we can select which backup repository the data will go to along with how long to retain a restore point. There are also advanced settings that we will not cover in this lab that can be adjusted here. You can leave all of the settings at their default and just click Next to proceed.

The Guest Processing screen has options that need to be set up if we are backing up a VM that has Microsoft VSS capabilities. Check the box next to “Enable application-aware image processing” and then fill in the username and password below with administrator / vmware1.

Next we need to set up our backup schedule. Check the “Run this job automatically” option and then review the different types of schedules. For lab purposes we can just leave these settings at the defaults.

On this final screen we can select the “Run the job when I click Finish” option and then select Finish. Veeam will now create your backup job and run the first backup. Back in the main window you will see your new job, and its status will start at 0% complete.
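
If you prefer scripting, the same job can be created with Veeam’s PowerShell snap-in. A minimal sketch, assuming the cmdlet names of this Veeam B&R generation and the default repository name:

```powershell
# Sketch: create and start the backup job from PowerShell.
Add-PSSnapin VeeamPSSnapin

# Locate the VM on the production host and pick the default repository.
$vm   = Find-VBRViEntity -Name "FileServer"
$repo = Get-VBRBackupRepository -Name "Default Backup Repository"

# Create the job, then kick off its first run.
Add-VBRViBackupJob -Name "File Server Backup" -Entity $vm -BackupRepository $repo
Start-VBRJob -Job (Get-VBRJob -Name "File Server Backup")
```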

Step 8 – Monitoring a job

Right-click on the job and select the “Statistics” option. Next click the “Show Details” button.

This screen will show you the overall details of the job. If you wish to see more details about a specific VM in that job, click on it in the left pane. If you click on the “FileServer” VM you will see which part of the job is currently processing and how fast.

The job will take about 10 minutes to complete (this is very dependent on the number of lab users and may take longer if the lab load is heavy). Exit the status window and proceed to the next step.

Step 9 – Creating a Replication Job

While our backup job is running we can go ahead and create our replication job. The goal is to replicate the FileServer VM from Prod-ESXi1 to DR-ESXi. Click on the “Home” tab at the top of the Veeam interface, and then select “Replication Job”.

On the first page of the wizard we can name the replication job. I used “Replication Job 1” but you can use whatever you want. The check boxes at the bottom are more advanced settings and will not be used in this lab. Proceed to the next step of the wizard.

The next step is to add a virtual machine to the replication job. This is the same process as adding a VM to a backup job. Click add, then select the FileServer VM. Then click Next.

The next step is quite different from a backup job. Here we select a destination host instead of a destination backup repository. Click “Choose” on the “Host or Cluster” box and select the 192.168.2.25 ESXi host. All of the other options will auto-populate. Then click Next.

The job settings section would normally be where you change your local and remote Veeam Proxy servers, but for lab purposes we can leave everything at its default settings. Click Next to continue.

You should now be on the Guest Processing page. Here you should click “Enable application-aware image processing” and enter administrator / vmware1 as the credentials. Then click Next to set the schedule.

Here on the schedule screen we can setup a replication schedule just like we can do for a backup job. I selected daily at 10pm, but for lab purposes it really doesn’t matter.

After setting the replication schedule you will be on the summary page of the wizard. Click Finish so that Veeam creates the job. You should then be back at the jobs page and able to see both the backup and the replication job. If the backup job has completed, right-click on the replication job and select “Run Now”. If the backup job is not done yet, wait until it is before proceeding.
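
The replication job has a PowerShell equivalent too. A sketch under the same assumptions about cmdlet names (the -Server parameter naming the target host is how I recall this cmdlet; verify against your snap-in version):

```powershell
# Sketch: create and start the replication job from PowerShell.
Add-PSSnapin VeeamPSSnapin

$vm     = Find-VBRViEntity -Name "FileServer"
$target = Get-VBRServer -Name "192.168.2.25"   # the DR ESXi host added to Veeam

Add-VBRViReplicaJob -Name "Replication Job 1" -Entity $vm -Server $target
Start-VBRJob -Job (Get-VBRJob -Name "Replication Job 1")
```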

Step 10 – Delete a Known File

Open up Windows Explorer by clicking the “folder” icon in the task bar to the right of the start menu. In the address bar type \\192.168.2.30\files. If asked to log in, use administrator / vmware1.

You should see one file called “RestoreMe”. Open that file and verify that it has text in it, then close it.

Right-click on that file and select Delete; the folder should now be empty.

Step 11 – Restore the File

Head back over to the Veeam interface. In the top icon bar you should see an icon called “Restore”; click it. We will be restoring the file from the local backup, but notice that we can even restore files from the replicas on our DR server. Click “Guest files (Windows)” in the left menu, then click Next.

Next expand the “FileServer” backup job and select the “FileServer” VM so Veeam knows which VM we want to restore files for. Then click Next.

If you had multiple restore points, this next screen would allow you to drill down into exactly which one you wanted to use. Click Next.

The next screen allows you to enter a reason for the restore. This is not required and is strictly for logging/documentation purposes if you ever needed to know why a restore was done. Click Next.

After clicking Next you will be on the “Completing the Restore Wizard” page. Don’t worry, we are not restoring the entire VM. Click Finish; Veeam does some work behind the scenes and will then present an explorer-like interface for us to find the files we want to restore.

Selecting your files from the Backup

Once the explorer interface opens, navigate to the “C” drive on the left and then click the “files” folder. Inside you should see the “RestoreMe” file. Click on it and you will see the buttons change in the ribbon at the top. Click the “Restore” icon, which will restore the file back to its original location.

The file will be restored to its original location and you will see a status window popup.

Step 12 – Check the restored file

Let’s now check to make sure the file has been restored and that the data is in place. Open up Windows Explorer and in the address bar type in: \\192.168.2.30\files
You should see the RestoreMe file is back. Open it and check that it has the same data inside; if it does, your restore worked successfully.

Step 13 – Failing over to your DR Server

In the next few steps we are going to leverage the Veeam Replication job that you created to start up the FileServer VM on the DR-ESXi host. Before we can do that make sure that Veeam has completed the replication job that we created, and that its status is success. Once it has completed we can proceed.

Before we can initiate a failover to our DR site server we need to power off “FileServer” from the ESXi1 host.
Use the vSphere client on the desktop to login to: 192.168.2.20
Credentials: root / vmware1

Then right-click on FileServer and select “Power -> Power Off”.

In the top icon bar click “Restore” again, just like we did before.

This time select “Failover to replica” on the right side and then click next.

On the next screen press the “Add VM” button on the right. Then select “From Replica”; a box will pop up where we can select “FileServer” after expanding the replication job we created.

Click OK after selecting “FileServer”

On the next screen we can again fill in a reason why we are failing over. This is not required. Click Next.
Then click Finish. You will now see a progress box appear with the process status. If you leave this box open you will eventually see “Failover completed successfully”. You can now close the box.

Step 14 – Verify Failover

At this point we can log in to both of our ESXi servers and see that FileServer is now powered off on the 192.168.2.20 host and “FileServer_replica” is now powered on at the DR host 192.168.2.25.

A second way to verify is to go back to \\192.168.2.30\data and look at the time stamp of the RestoreMe file.

Congratulations! You have completed Lab 2

If you still have time remaining feel free to explore the lab environment and vCloud Director.

More highlights about Veeam include:

  • Ability to failback from DR
  • Ability to do an instant VM recovery
  • Multiple backup proxies for parallel job execution
  • SureBackup can test your backups after every backup

Housekeeping / Lab Cleanup

When you are done with the lab, you can delete it from the HOL Cloud. To do this click on “Home” in the top left area of the vCloud interface. Next find your Lab vApp and press the red Stop button. After the vApp has stopped you can right-click on it and select “Delete”.

Please note that your HOL Cloud account can only provision a small number of VMs at the same time, so you will need to delete one lab before starting another or provisioning will fail.





Jul 12

Micro segmentation is probably the number one reason for companies with vSphere to purchase NSX. This feature inserts a packet filter in between your VMs: a filter you can configure centrally, link to single VMs, groups of VMs or kinds of VMs, and specify according to your needs. As your data security on the internet is, involuntarily, tested on a daily basis, it’s not a question of IF but rather WHEN you will face a breach of your security. Micro segmentation can be your saviour at that moment, as it restricts the attacker to the compromised host. Compare it to your house. With a generic firewall, you just lock the front door. If a burglar gets in, he can walk right into your living room and nick your TV. With micro segmentation, every door in your house is closed and properly locked. If your front door is broken down, the unwanted guest is still limited to the hallway.

Recently a lot of rumors went around the interwebs about NSX. It was said that for the deployment of micro segmentation, you need a full NSX deployment. That is, you would need to deploy the appliance, including all the planes, maybe even an edge router, the works. And it would take you years to install, let alone configure. Well, that is not correct. To deploy NSX just for micro segmentation, you only need to deploy the NSX Manager appliance and connect it to your vCenter. That’s all! No need for a full-blown NSX deployment. So, to be clear, for micro segmentation:

  • You only need to deploy the NSX Manager VM
  • You do need virtual distributed switches (if you are on vSphere 6, you don’t need Enterprise Plus to use VDS when you have NSX)
  • You don’t have to have vSphere 6; it also works with vSphere 5.1 and 5.5
  • You don’t need expensive core switches or any other physical network gear; it’s all virtual
  • You do need the vCenter Server Appliance and the web client (it really got a lot better in v6!)
  • You need about 12 GB of RAM, 4 vCPUs, 60 GB of storage and 1 vNIC for deployment

But then you’re good to go! I will not be going into configuration of policies in this post. That is food for another post. In this post we just go through the installation motions to get micro segmentation available to you in vSphere 5.x or 6.

So, to be clear on how to do it, I went along and just did it myself in my lab environment. Now, my lab currently consists of 2 hosts, but for demonstration purposes it is enough. Both run vSphere 6, vCenter Server Appliance v6 on iSCSI storage. I do use distributed switches but as mentioned before, this is not a requirement. When you first log into the vCenter Server Appliance, it looks like this.

[Screenshot: vCenter Server Appliance start screen]

No mention of firewalls anywhere.

NSX comes as an OVA file. I’m assuming you all know how to download and deploy an OVA. One remark on this: when you deploy OVAs with the vCenter Appliance, you need to install the vCenter Integration Plugin (CIP) on your management desktop first. And there is a little trick with that if you do. It took me a while to figure out what went wrong, which is why I am telling you in advance: the CIP modifies the HOSTS file in your Windows directory with an entry pointing to localhost, to which it later connects. If your install does not modify the HOSTS file correctly, CIP will not install or will not start properly, and you will not be able to deploy OVA files with the vCenter Appliance. The easiest way to work around the problem is to take an account with sufficient rights, browse to the HOSTS file in your Windows folder and remove the READ ONLY flag before you install CIP. After the installation, you can set the READ ONLY flag back.

[Screenshot: hosts file READ ONLY attribute]

The hosts file can be found in your Windows folder, usually C:\Windows\System32\drivers\etc\hosts. Take care that you do not modify the access rights of the file, otherwise Windows will refuse to read it and work with the entries. After installing CIP, you can go forward and deploy the NSX OVA file. The version I downloaded is 6.1.4, which is the most recent version at the time of writing.
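
If the CIP keeps fighting you, there is a way around it: deploy the OVA from PowerCLI instead of the web client. A sketch, assuming NSX-v 6.1-era OVF property names (the vsm_* keys and the VSMgmt network key are assumptions; dump the object returned by Get-OvfConfiguration to see the real ones for your download) and lab object names:

```powershell
# Sketch: deploy the NSX Manager OVA with PowerCLI, bypassing the CIP.
Connect-VIServer vcenter.lab.local

$ova = 'C:\Downloads\VMware-NSX-Manager-6.1.4.ova'
$ovf = Get-OvfConfiguration -Ovf $ova

$ovf.Common.vsm_cli_passwd_0.Value    = 'VMware1!VMware1!'   # admin password (assumed key)
$ovf.Common.vsm_cli_en_passwd_0.Value = 'VMware1!VMware1!'   # enable password (assumed key)
$ovf.NetworkMapping.VSMgmt.Value      = 'VM Network'         # management portgroup (assumed key)

Import-VApp -Source $ova -OvfConfiguration $ovf -Name 'NSX-Manager' `
            -VMHost (Get-VMHost | Select-Object -First 1) `
            -Datastore (Get-Datastore | Select-Object -First 1) `
            -DiskStorageFormat Thin
```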

[Screenshot: NSX Manager OVA deployment wizard]

Mark the checkbox on top for the extra configuration options to proceed. At this time it might be a good idea to talk about the network settings. The NSX Manager VM has just one IP address and it connects to your vCenter Server Appliance. Although I did not see it mentioned in the documentation, it’s probably a good idea to create your DNS entries for the appliance before you deploy. As my lab environment has a somewhat flaky DNS implementation, it’s less essential there, but as you are probably going to move this into production in time, this is a good moment to do so. The next screens zoom in on where you want the appliance to be created in your cluster, where you want it to be stored and, last but not least, the network settings.

[Screenshot: NSX Manager OVA deployment – network settings]

After the IP information, you are asked (but not required) to enter the NTP server and whether you would like to enable SSH. I left the last checkbox blank for now. After this deployment, we are ready to power it up and go have a cup of coffee, because the NSX appliance, like the vCSA, takes a couple of ages to boot for the first time. When it’s done, you can start your browser and open the secure NSX page on the new VM: https://<your_nsx_ip> or https://<your_dns_hostname>. If all went well, you will get a login page where you need to log in with username “admin” and the password you defined earlier during the OVA deployment. After the obligatory browser moaning about the self-signed certificate (commercial ones are not supported, yet) you are presented with the NSX appliance console, which is pretty compact.

[Screenshot: NSX Manager appliance console]

To move on, select “Manage vCenter Registration”. Now, I have Single Sign-On configured in my lab and it actually works quite well, although I break my lab on a regular basis. You might consider creating and using a specific service account to register the NSX appliance with the vCenter Appliance. If you did so, you will need those credentials in the next screen; otherwise just use the vCenter admin account.

[Screenshot: NSX Manager vCenter registration]

If you have entered the information correctly, you should get a little popup stating the vCenter Appliance certificate fingerprint. If you select OK, the NSX manager will go on and register with vCenter.

[Screenshot: vCenter registration success]

If this process finished correctly, you should see a nice green dot appear on the screen next to the status message, indicating that your NSX management appliance is now connected to vCenter. Go back to your vCenter appliance and log off. When you log in again, you should see a new entry in your options list! (If you would rather script this step, a REST sketch follows below.)

[Screenshot: Networking & Security entry in the vCenter web client]
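
For those who would rather script the registration: NSX Manager exposes it over REST as well. A sketch, with the endpoint and body taken from the NSX-v API guide of this era (verify for your build); host names and passwords are lab examples:

```powershell
# Sketch: register NSX Manager with vCenter through the REST API.
# Lab-only: accept the appliance's self-signed certificate.
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

$pair  = 'admin:yourNsxAdminPassword'
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))

$body = @"
<vcInfo>
  <ipAddress>vcenter.lab.local</ipAddress>
  <userName>administrator@vsphere.local</userName>
  <password>yourVcPassword</password>
  <assignRoleToUser>true</assignRoleToUser>
</vcInfo>
"@

Invoke-RestMethod -Method Put -Uri 'https://nsxmanager.lab.local/api/2.0/services/vcconfig' `
                  -ContentType 'application/xml' -Body $body `
                  -Headers @{ Authorization = "Basic $basic" }
```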

When you go into “Networking and Security”, you will be presented with the NSX management screen. From this screen, you can configure NSX completely. Now there is one thing left to do: you need to install the VIBs on every host of your NSX cluster. This sounds like a lot of work and incredibly complicated, but in fact it’s easy as pie. When you click on the Install menu entry in the NSX console on the left side, you are presented with a couple of tabs.

[Screenshot: Installation tab – host preparation]

Go to Host Preparation; you should see your datacenter cluster there. Just click “Install” and NSX will automatically install the VIBs on one host after another. That is all you need to do. No VXLAN install, no data planes; this is it. (The same step can be automated over the API; see the sketch below.)
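
Host preparation can likewise be driven over the API. A sketch under the same assumptions as the registration sketch above, where 'domain-c7' is a placeholder for your cluster's managed object ID:

```powershell
# Sketch: trigger host preparation (VIB install) for one cluster via REST.
# Reuses $basic from the registration sketch; 'domain-c7' is a placeholder.
$body = @"
<nwFabricFeatureConfig>
  <resourceConfig>
    <resourceId>domain-c7</resourceId>
  </resourceConfig>
</nwFabricFeatureConfig>
"@

Invoke-RestMethod -Method Post -Uri 'https://nsxmanager.lab.local/api/2.0/nwfabric/configure' `
                  -ContentType 'application/xml' -Body $body `
                  -Headers @{ Authorization = "Basic $basic" }
```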

[Screenshot: NSX distributed firewall console]

For micro segmentation, you go to Distributed Firewall to see your ruleset, and you can go into SpoofGuard, Service Definitions and Service Composer to create the policies you want and need to keep that door shut to anyone without a proper ticket.

The complete installation took me about an hour, including DNS modification and preparing my client system with CIP. When you are at this point, you might want to go and involve the security guy(s) to help you create the correct policies for your VMs. But as you can see, the NSX installation for micro segmentation is straightforward and does not require a full deployment of NSX to make it work for you.





Jul 12
vCenter Server appliance 6.0 URL-based patching
Posted by Thang Le Toan on 12 July 2017 12:08 AM

With the recent release of vCenter Server Appliance 6.0 Update 1b, support was added for patching your vCenter Server appliance using a URL within your company network. Before this change, your vCenter appliance had to make a direct connection to the internet and download the patches from the VMware repository. Now you can download the patches on your workstation, place them on a web server within your company network, and then apply the patches to your vCenter appliance.

To start things off, you will need your own web server; this can be either a Windows or Linux based server. After that, download the zipped update bundle from the VMware website. Once downloaded, extract the files into your repository directory on the web server; this should result in two subdirectories called “manifest” and “package-pool”. (A sketch of this on a Windows/IIS server follows.)
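
On a Windows/IIS web server, the preparation could look like this sketch (the bundle file name and paths are examples; Expand-Archive needs PowerShell 5 or later):

```powershell
# Sketch: extract the VMware updaterepo bundle into an IIS-served directory.
$bundle  = 'C:\Downloads\VMware-vCenter-Server-Appliance-updaterepo.zip'  # example name
$repoDir = 'C:\inetpub\wwwroot\vc_update_repo'

New-Item -ItemType Directory -Path $repoDir -Force | Out-Null
Expand-Archive -Path $bundle -DestinationPath $repoDir

# Sanity check: the appliance expects exactly these two subdirectories.
Get-ChildItem $repoDir -Directory | Select-Object Name   # manifest, package-pool
```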

After that you have two options for updating the vCenter appliance: either through the vCenter Server Appliance Management Interface (VAMI) or through the command line interface.

vCenter server appliance management interface:

  1. Open the vCenter Server Appliance management web interface on port 5480.
  2. In the Update tab, click Settings.
  3. Select Use Specified Repository.
  4. For the Repository URL, enter the URL of the repository you created. For example, if the repository directory is vc_update_repo, the URL should be similar to the following URL: http://web_server_name.your_company.com/vc_update_repo
  5. Click OK.
  6. Click Check Updates.
  7. Click Check URL.
  8. Under Available Updates, click Install Updates.
  9. Based on your requirement, select from the following options:
    • Install all updates.
    • Install third-party updates.

Command line interface:

  1. Access the appliance shell and log in as a user who has a super administrator (default root) role.
  2. To stage the patches included in a repository URL: software-packages stage --url http://web_server_name.your_company.com/vc_update_repo
  3. Install staged patches: software-packages install --staged
  4. To reboot after patching if needed: reboot -r "patch reboot"

This update should keep your IT environment up to date and secure, even if your Update server doesn’t have internet access.





Jul 12
VMworld 2016 – What’s New in vSphere 6.5
Posted by Thang Le Toan on 12 July 2017 12:05 AM

Almost every year VMware announces a new version of its core product: vSphere. vSphere, or ESX and vCenter, has been around for quite some time and it is the core product for your Software Defined Datacenter. After so many years of revolution and innovation, what things can be improved? VMware thought of some and the new version shines with a couple of features some have been longing for and a couple of features that will set it even further apart from any competitive hypervisor. Curious? Let’s run through some of the cool new stuff! This is a cherry-pick from all the new features.

vCenter and PSC

vCenter has long been the core management server for ESX. It makes managing ESX so easy. But there were a few drawbacks. The first versions ran on Windows. Since vSphere 6.0 the appliance version is the better way to go, but migrating to it was complicated. With the latest update of the migration tool, that limit has been removed. So vCenter Server Appliance is the way to go. It will be the new center of your virtual infrastructure. But Heartbeat has been decommissioned and there was no proper way to make vCenter highly available (other than FT, but that’s not really HA). With version 6.5, vCenter finally has native HA.

High Availability

You can now install vCenter with HA built right into the appliance. Yes, that’s right, the appliance version. vCenter HA works with an active/passive architecture where it uses a witness to prevent split-brain situations. The Platform Services Controller, or PSC, can be installed in an active/active setup. With these technologies, vCenter finally is no longer a single point of failure. The RTO can be as low as five minutes, with no dependencies on shared storage or any external databases.

Deployment

Remember that moment where you installed vCenter Server Appliance for the first time? You opened the ISO on your Mac, you started the webpage, clicked the link.. erm, waitaminute.. That’s an EXE file! To Windows it is, then. So you open up the ISO on your Windows machine, click the link, install the plugin and the installation fails! It does something strange with your system’s hosts file, which is a protected system file in Windows. So you fix that and jump through all the pre-stage hoops and after a while your deployment starts. And after some more waiting, it fails with an error message telling you to go find a logfile and see what went wrong. Who has not at least seen a few of these hoops you had to jump through before you got it up and running?

Well, no more. Not only does the VCSA look better, it works better: on Windows as well as on macOS and Linux, and without the plugin. The install procedure has been split in two: first you deploy the VM with basic settings, then you set up roles, single sign-on and more. So if your deployment falls on its nose during the initial stage, you haven’t entered a world of configuration information you now have to enter again. And another feature: you can create a template from it after stage 1 has finished, so you always set up vCenter Server Appliance the same way, without losing more of your precious time, and making sure they are all identical in the process.

Update Manager

Update Manager always felt like it was left behind a bit, but it is so important to all of us out there who need to maintain those precious vSphere installs. You always needed a Windows server to install and run it, and you still needed the VI Client to really set it up, scan and remediate hosts and clusters. With v6 you could scan and remediate clusters from the web client, but you still needed the Windows backend to make it run... until now. With version 6.5, Update Manager is finally baked into the vCenter Server Appliance. You can scan and remediate your hosts and clusters right from within vCenter Server Appliance without any external dependencies.

Backup and Restore

At least once in every IT guy’s lifetime it happens: your infra crashes and burns. You have to revert to your backup solution to get up and running again. But will it work? You never really know until it’s done, no matter how many test runs you do. This is especially true for vCenter. vCenter Server so often causes the chicken-and-egg dilemma when it comes to backup/restore solutions. It took some time, but VMware has added out-of-the-box native backup and restore functionality to vCenter Server Appliance 6.5, and you can use it next to your current backup solution for vCenter if you like. The new B/R can however remove the dependency on third-party backup solutions. It just writes a bunch of files to a target of your choice (SCP, FTP or HTTP), from which you can redeploy your own VCSA with the same server UUID you already had, from the standard vSphere ISO, no matter if you had a VCSA with an integrated PSC or an external PSC. And it has a plain and simple user interface for protecting vCenter Server Appliances and PSCs. You can even encrypt your backups so all your secrets stay safe. (For the API-minded, a sketch of driving a backup through the new appliance REST interface follows.)
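
A sketch of what that could look like from PowerShell, assuming the 6.5 appliance REST endpoint and JSON field names as I recall them; verify them in your build's API explorer before relying on this:

```powershell
# Sketch: trigger a file-based VCSA backup via the appliance REST API.
# Endpoint and field names are assumptions for vSphere 6.5; verify first.
# Lab-only: accept the appliance's self-signed certificate.
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

$vcsa  = 'https://vcsa.lab.local'
$pair  = 'administrator@vsphere.local:yourSsoPassword'
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))

# Create an API session (the session endpoint accepts basic auth).
$session = Invoke-RestMethod -Method Post -Uri "$vcsa/rest/com/vmware/cis/session" `
                             -Headers @{ Authorization = "Basic $basic" }

$spec = @{
    piece = @{
        location_type     = 'SCP'
        location          = 'scp://backuphost.lab.local/vcsa-backups'
        location_user     = 'backup'
        location_password = 'backupPassword'
        parts             = @()   # add 'seat' to include stats, events and tasks
    }
} | ConvertTo-Json -Depth 4

Invoke-RestMethod -Method Post -Uri "$vcsa/rest/appliance/recovery/backup/job" `
                  -Headers @{ 'vmware-api-session-id' = $session.value } `
                  -ContentType 'application/json' -Body $spec
```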

Management Interfaces

Okay, so we’re heading into territory I personally do not like so much. In the past VMware made a lot of changes to the management interfaces: first with the VI Client, then with the Web Client that felt like a slowed-down version of its predecessor, and finally with a full-blown redesign where speed picked up quite well but which still required Adobe Flash. Why not HTML5, was the callout from almost all of you out there? Get rid of Flash. VMware heard you, but is not quite ready. So, basically, there can be five main management interfaces.

  1. The currently most used is the vSphere Web Client. It’s based on the Adobe Flex platform and needs Flash to run. And it’s still here.
  2. Next is the HTML5-based vSphere Client. This tool has had accelerated development mostly because all of you out there downloaded the HTML5 Fling so much. That Fling will continue to be updated more often and can be used by all of you who are looking for that cutting edge functionality. However, the Fling will remain unsupported.
  3. Then there is the revamped Appliance Management UI. This is also an HTML5 interface. And there is also a similar interface that is especially for the Platform Services Controller, where the SSO configuration can be managed.
  4. Finally, and staying with the HTML5 theme, we have the Host Client. The Host Client also started out as a VMware Fling but made it into the product as of vSphere 6.0 Update 2.

That makes a total of five (counting number 3 twice, as it covers two similar interfaces). Not the best story, but we hope that VMware will in the end roll it all up into one. Now, as a reminder, these new features are only available in the vCenter Server Appliance.

Client Plugin

Did you hate that Client Integration Plugin or what? It would not run on just any client; then there were security issues; and then, when you thought you were in the green, you tried running an installation of an OVA and came to the conclusion that the CIP had some kind of issue and refused to install the OVA, because you need the CIP for that. Well, in version 6.5 the plugin is gone. It’s all native browser functions. That should make for a lot of happy faces.

vSphere Security

Security keeps getting more focus in IT. And VMware is no exception. Data integrity, privacy, know who has access, know who changed something. This has been on the wish-list of many for quite some time.

vSphere Logging

In the old days, vSphere would not tell you who changed what. It just stated that a change was made, period. Who changed it? What was changed, and when? Log collectors could not help, as the information simply was not transported to them. Since v6, the information of which user changed what and when is logged, but it is not reflected in the logs that are transported to external log collectors, not even to Log Insight. You needed third-party tools or scripts by various knowledgeable people to make vCenter show that information.

Now, with v6.5, vSphere shows you what happened, who made it happen and when it happened. Logs become more actionable. When an admin changes the number of vCPUs or adds memory to a VM, logs will clearly show:

  • The account that made the changes
  • The VM that was changed
  • A list of changes that were made to the VM in the format “old setting” -> “new setting”

This way, you always know what the old setting was and what the new setting is. If you are troubleshooting a server, you can now easily revert it back to its original state when changes were not documented.

VM Encryption and vMotion Encryption

With vSphere 6.5, you can now apply an encryption policy to a VM. What does that even mean? Once a VM is encrypted, the VMDKs and the VM files are encrypted. This is done via symmetric keys. The key comes from the key manager and unlocks the key stored in the VMX/VM settings. The stored, unencrypted key is then used to encrypt/decrypt. It does not require any changes to the VM, the OS within the VM, the datastore or the VM’s hardware version. The VM itself has no access to the keys used to encrypt, and when you vMotion an encrypted VM, the vMotion is also encrypted (otherwise you might still be able to read the VM contents). Obviously, to make encryption valuable, not everybody should have access to the keys. So a new role is introduced, the “No Cryptography Administrator”. This admin can do almost anything a “normal” admin can do, except encrypt or decrypt VMs, access consoles of encrypted VMs and download encrypted VMs. They can manage encrypted VMs in terms of power on and off, boot and shutdown, and vMotion.

VM encryption depends on an external key management server, or KMS, which traditionally is managed by the security team. The symmetric keys come from the KMS. The KMS key encrypts the VM key; that is the key that vCenter requests and sends to the hosts, where it is stored in host memory and used to decrypt the keys used to encrypt. The KMS hands out keys that vSphere uses to encrypt and decrypt VMs. Obviously not everyone can have access to encryption keys; that would defeat the purpose of the encryption. This will stir things up a bit with your current admin roles, as you may need to re-evaluate who needs access to what.

In the wake of VM encryption comes vMotion encryption. vMotion encryption does not encrypt the whole vMotion network; it encrypts the vMotion data. As mentioned, it is required when you vMotion your encrypted VMs, but you can also enable it to encrypt all vMotion traffic. vMotion encryption has three settings (see the PowerCLI sketch after the list):

  • Disabled: (obviously) do not use encryption
  • Opportunistic: Use encryption when source and destination host both support encryption
  • Required: Only allow encrypted vMotion. This will mean vMotion will fail if one of the hosts does not support it.
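
Under the hood this appears as a per-VM setting in the vSphere 6.5 API (vim.vm.ConfigSpec.migrateEncryption). A minimal PowerCLI sketch; 'MyVM' is a placeholder:

```powershell
# Sketch: set the vMotion encryption policy on a single VM.
Connect-VIServer vcenter.lab.local

$vm   = Get-VM 'MyVM'
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.MigrateEncryption = 'opportunistic'   # disabled | opportunistic | required
$vm.ExtensionData.ReconfigVM($spec)
```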

Secure Boot

UEFI Secure Boot has been around for some time, and with vSphere 6.5 we can now also leverage it in the datacenter, both for the host and for the VM. With Secure Boot enabled, ESXi will only boot and run signed code, for ESXi itself as well as additional VIBs. This ensures that the hypervisor has a cryptographic chain of trust to the certificate stored in the firmware. UEFI ensures the kernel boots clean, after which the secure boot verifier launches and validates each VIB against the certificate stored in the UEFI firmware. Secure Boot checks this every time the host boots; if the check fails anywhere in the chain, the host will fail to boot. Secure boot inside the VM is likewise a chain. It can be enabled in the UI as well as with PowerCLI, as in the sketch below.
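
A minimal PowerCLI sketch of the VM side, using the efiSecureBootEnabled boot option introduced with the 6.5 API ('MyVM' is a placeholder; the VM must be powered off and its guest must support EFI):

```powershell
# Sketch: switch a VM to EFI firmware and enable secure boot.
$vm   = Get-VM 'MyVM'
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.Firmware    = 'efi'
$spec.BootOptions = New-Object VMware.Vim.VirtualMachineBootOptions
$spec.BootOptions.EfiSecureBootEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)
```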

HA and DRS Enhancements

High Availability and Distributed Resource Scheduling are two major components of vSphere that have made a big difference over the years. Where HA keeps your VMs alive and available, DRS keeps your hosts balanced and well utilized. In vSphere 6.5, there are a couple of enhancements that are certainly worth mentioning.

HA Orchestrated Restarts

One of the things we’re all familiar with is boot order. You want the AD servers booted before the DB servers. You want the App servers booted once the DB servers are up, and so on. In vSphere 6.5 you now have HA Orchestrated Restarts, where you can define the order in which a specific multi-tier app needs to boot: first the DB server, then the App server, and last the web tier. Every time HA needs to restart this tier, it will do so according to your rules.

ProActive HA and Quarantine Mode

How can HA be proactive? It’s not like you see a failure coming. Or is it? As it turns out, and you all probably know this, almost all big server vendors have extra hardware checks and monitoring built into their servers. This is monitored by their hardware management solution, like Dell OpenManage or HP Insight Manager. Now, HA can vacate a host once an alert is raised. As soon as a notification comes in that a host is in a degraded mode, HA will vMotion the VMs on that host to another host in the cluster.

Once a host is in degraded mode, HA will put it in Quarantine Mode. Any host that is either moderately or severely degraded will be put in quarantine. This means that HA will not move VMs to it until you fix the server and get it out of quarantine.

DRS Policies

Tuning DRS used to be pretty basic. With vSphere 6.5 you can tune it more to your situation and use case. With DRS policies the distribution of VMs over your hosts gets more even. DRS now also looks at consumed memory versus active memory for load balancing. And DRS now looks at CPU overcommitment to prevent a single host from overcommitting on CPU load. This is especially useful when you have a lot of smaller VMs in your infrastructure, as with VDI.

Network-Aware DRS

DRS used to not look at the network load of a host when it moved VMs around. On occasion that could run you into trouble, when a network-intensive VM was sitting on a host, another VM with a high network load was moved onto it, and things started slowing down. DRS now also looks at the saturation of a host’s network links and avoids moving VMs onto a host where that could cause a slowdown or worse. It is still a lower priority than CPU and memory, so no guarantees on performance here.

Wrap-Up

So that wraps up our cherry-picking of the new features. There is more to hear and see, like vSAN 6.5, Virtual Volumes updates and storage policies and control, but we’ll save that for a more storage-intensive post. No exact release date has been communicated yet; VMware states it will release vSphere 6.5 in the fourth quarter of 2016.

Update: Many thanks to Mike Foley for the corrections on VM encryption and secure boot.





Jan 13
List of VMware Default Usernames and Passwords
Posted by Thang Le Toan on 13 January 2016 01:37 AM

Below are the local web addresses (HTTPS/HTTP), ports, usernames and passwords for the default configurations of VMware products.

They are hard to remember and easy to forget, so I am noting them down here to make first boot, configuration and later changes easier:

 

Horizon Application Manager

http://IPorDNS/SAAS/login/0

http://IPorDNS

 

Horizon Connector

https://IPorDNS:8443/

 

vCenter Appliance Configuration

https://IPorDNS_of_Server:5480

username: root

password: vmware

 

vCenter Application Discovery Manager

http://IPorDNS

username: root

password: 123456

default ADM management console password is 123456 and the CLI password is ChangeMe

 

vCenter Chargeback

http://IPorDNS:8080/cbmui/

username: root

password: vmware

 

vCenter Infrastructure Navigator:

https://IPorDNS_of_Server:5480

username: root

password: Supplied during OVA deployment

 

vCenter Log Insight

https://log_insight-host/

username: admin

password: password specified during initial configuration

 

vCenter MOB

https://vcenterIP/mob

 

vCenter Web Client Configuration

https://IPorDNS_of_Server:9443/admin-app

username: root

password: vmware

 

vCenter vSphere Web Client Access

https://IPorDNS_of_Server:9443/vsphere-client/

username: root

password: vmware

For vSphere 5.1 = Windows default username: admin@System-Domain

For vSphere 5.1 = Linux (Virtual Appliance) default username: root@System-Domain

For vSphere 5.5 = default username: administrator@vsphere.local

 

vCenter Single Sign On (SSO)

https://IPorDNS_of_Server:7444/lookupservice/sdk

For vSphere 5.1 = Windows default username: admin@System-Domain

For vSphere 5.1 = Linux (Virtual Appliance) default username: root@System-Domain

password: specified during installation

Adding AD authentication to VMware SSO 5.1

For vSphere 5.5 = default username: administrator@vsphere.local

 

vCenter Orchestrator Appliance

http://orchestrator_appliance_ip

Appliance Configuration:

Change the root password of the appliance Linux user during deployment. Otherwise, the first time you try to log in to the appliance web console, you will be prompted to change the password.

Orchestrator Configuration:

username: vmware

password: vmware

Orchestrator Client:

username: vcoadmin

password: vcoadmin

Web Operator

username: vcoadmin

password: vcoadmin

 

vCenter Orchestrator for Windows:

https://IPorDNS:8283 or http://IPorDNS:8282

username: vmware

password: vmware

WebViews: http://orchestrator_server:8280.

 

vCenter Orchestrator for vCloud Automation Center (built-in):

https://vcloud_automation_center_appliance_ip:8283

username: vmware

password: vmware (after initial logon, this password is changed)

vCO Client is accessible from http://vcloud_automation_center_appliance_ip

username: administrator@vsphere.local (or the SSO admin username)

password: specified password for the SSO admin during vCAC-Identity deployment

 

vCenter Operations

Manager: https://IPorDNS_of_UI_Server

username: admin

password: admin

Admin: https://IPorDNS_of_UI_Server/admin

username: admin

password: admin

CustomUI: https://IPorDNS_of_UI_Server/vcops-custom/

username: admin

password: admin

 

vCloud Automation Center Identity Appliance

https://identity-hostname.domain.name:5480/

username: root

password: password supplied during appliance deployment

 

vCloud Automation Center vCAC Appliance

https://identity-hostname.domain.name:5480/

username: root

password: password supplied during appliance deployment

 

vCloud Automation Center

https://vcac-appliance-hostname.domain.name/shell-ui-app

username: administrator@vsphere.local

password: SSO password configured during deployment

 

vCloud Automation Center built-in vCenter Orchestrator:

https://vcloud_automation_center_appliance_ip:8283

username: vmware

password: vmware (after initial logon, this password is changed)

vCO Client is accessible from http://vcloud_automation_center_appliance_ip

username: administrator@vsphere.local (or the SSO admin username)

password: specified password for the SSO admin during vCAC-Identity deployment

 

vCloud Connector Node

https://IPorDNS:5480

username: admin

password: vmware

 

vCloud Connector Server

https://IPorDNS:5480

username: admin

password: vmware

 

vCloud Director

https://IPorDNS/cloud/

username: administrator

password: specified during wizard setup

 

vCloud Director Appliance

username: root

password: Default0

Oracle XE Database

username: vcloud

password: VCloud

 

vCloud Networking and Security

console to VM

username: admin

password: default

type "enable"

password: default

type "setup" then configure IP settings

http://IPorDNS

 

VMware Site Recovery Manager:

username: vCenter admin username

password: vCenter admin password

 

vShield Manager

console to VM

username: admin

password: default

type "enable"

password: default

type "setup" then configure IP settings

http://IPorDNS

 

vFabric Application Director

https://IP_or_DNS:8443/darwin/

root: specified during deployment

password: specified during deployment

darwin_user password: specified during deployment

admin: specified during deployment

 

vFabric AppInsight

http://IP_or_DNS

username: admin

password: specified during OVA deployment

 

vFabric Data Director

https://IPorDNS/datadirector

username: created during wizard

password: created during wizard

 

vFabric Hyperic vApp

username: root

password: hqadmin

 

vFabric Suite License

https://IPorDNS:8443/vfabric-license-server/report/create

 

View Admin

https://IPorDNS/admin

username: windows credentials

password: windows credentials

 

vSphere Data Protection Appliance

https://<IP_address_VDP_Appliance>:8543/vdp-configure/

username: root

password: changeme

 

vSphere Replication Appliance

https://vr-appliance-address:5480

username: root

password: You configured the root password during the OVF deployment of the vSphere Replication appliance

 

Zimbra Appliance Administration Console

https://IPorDNS:5480

username: vmware

password: configured during wizard setup





Sep 6
VMware Fling – VMware I/O Analyzer
Posted by Thang Le Toan on 06 September 2015 11:05 PM

VMware released a new version of their VMware I/O Analyzer Fling. VMware I/O Analyzer is an integrated framework designed to measure storage performance in a virtual environment and to help diagnose storage performance concerns. I/O Analyzer, supplied as an easy-to-deploy virtual appliance, automates storage performance analysis through a unified interface that can be used to configure and deploy storage tests and view graphical results for those tests.

I/O Analyzer can use IOmeter to generate synthetic I/O loads or a trace replay tool to deploy real application workloads. It uses the VMware VI SDK to remotely collect storage performance statistics from VMware ESX/ESXi hosts. Standardizing load generation and statistics collection allows users and VMware engineers to have a high level of confidence in the data collected.

VMware I/O Analyzer features

  • Integrated framework for storage performance testing
  • Readily deployable virtual appliance
  • Easy configuration and launch of storage I/O tests on one or more hosts
  • Integrated performance results at both guest and host levels
  • Storage I/O trace replay as an additional workload generator
  • Ability to upload storage I/O traces for automatic extraction of vital metrics
  • Graphical visualization of workload metrics and performance results
[Screenshot: VMware I/O Analyzer]

New in version 1.6.1

  • Changed the guest I/O scheduler to NOOP and disabled I/O coalescing at the I/O scheduler level.
  • Downgraded VM version to 7 to be compatible with ESX/ESXi 4.0.
  • Back-end improvements to workload generator synchronization to support 240+ workers.
  • Bug fixes.

System Requirements

I/O Analyzer has the following minimum system requirements:
  • VMware ESX/ESXi version 4.0 or later
  • 16.5 GB of storage space (additional space recommended)
  • 1 vCPU and 2048 MB VM memory
  • A supported Internet browser (Google Chrome or Mozilla Firefox)
You can download the VMware I/O Analyzer Fling here.




Aug 21
NVIDIA GRID vGPU on VMware Horizon
Posted by Thang Le Toan on 21 August 2015 01:51 AM

VMware, NVIDIA and Google are now working together to deliver graphics-rich applications to enterprise cloud desktops. The result of the collaboration to date is two key technology previews that were announced at VMworld 2014.

NVIDIA GRID vGPU on VMware Horizon

First, VMware announced the technology preview of NVIDIA GRID vGPU on the VMware platform which will bring rich 3D applications to VMware Horizon and Horizon DaaS. If you need to deliver high-end 3D graphics with secure remote desktops, you will want to try NVIDIA GRID vGPU with VMware Horizon.

Today VMware announced an early access program that will be available for select NVIDIA and VMware customers in Q4 2014. Sign up today to be considered for trying out NVIDIA GRID vGPU with VMware products at www.nvidia.com/grid-vmware-vgpu.

This vGPU technology was announced earlier this year at the GPU Technology Conference. VMware announced the intention to collaborate with NVIDIA to bring GRID vGPU to VMware products. NVIDIA GRID vGPU is exciting technology that allows multiple virtual machines to share the power of a single GPU to deliver rich 3D graphics and high performance video. Combined with VMware Horizon, together NVIDIA and VMware will be able to deliver the highest end 3D applications to the most demanding users in design, manufacturing, and engineering.

vGPU advantages

NVIDIA GRID vGPU brings the full benefit of NVIDIA hardware-accelerated graphics to virtualized solutions. This technology provides exceptional graphics performance for virtual desktops equivalent to local PCs when sharing a GPU among multiple users. GRID vGPU is the industry’s most advanced technology for sharing true GPU hardware acceleration between multiple virtual desktops—without compromising the graphics experience. Application features and compatibility are exactly the same as they would be at the desk. With GRID vGPU technology, the graphics commands of each virtual machine are passed directly to the GPU, without translation by the hypervisor. This allows the GPU hardware to be time-sliced to deliver the ultimate in shared virtualized graphics performance.

vGPU vs vSGA vs vDGA

This vGPU technology bridges the gap between the vSGA and vDGA graphics modes, which I described in this article. vGPU offers vDGA performance and DirectX and OpenGL support with the density of vSGA: the best of both worlds. So if you are running applications that require DirectX 11 but vDGA is too costly, vGPU is the ideal mix between graphics support and desktop density.

[Figure: vGPU vs vSGA vs vDGA comparison]

 

Deliver Rich Graphics Applications to Chromebooks

Second, VMware announced a technology preview with Google and NVIDIA to deliver this rich workstation level graphics and user experience to Chromebook users.

Building on the foundation of NVIDIA GRID vGPU in the datacenter to enable rich graphics applications to VMware Horizon desktops, VMware has partnered with Google and NVIDIA to deliver those rich graphics applications to Chromebook users.

Enterprise customers are using Chromebooks for an affordable, mobile device to access the applications they need. With the new generation of Chromebooks like the Tegra K1 powered Chromebooks, rich graphics are now available to more devices than ever.

Check out the next generation of VMware Blast Performance, which delivers high-performance virtual desktops with workstation-class graphics applications on the latest Chromebooks, in the video below.

Seeing is believing, so go to the VMware and NVIDIA booths at VMworld 2014 to see the future of rich graphics applications delivered from the cloud to the devices you want to use. I’ve done it and I must say “I’m really impressed!”





