Deep learning projects: Cloud-based AI or dedicated hardware?
Posted by Thang Le Toan on 08 March 2018 12:28 AM
Are deep learning projects part of your AI agenda this year? Here's how to evaluate the tradeoffs between using cloud-based AI infrastructure versus dedicated hardware.
Chip and system vendors are developing -- and rapidly innovating -- new AI processors designed for deep learning projects that use neural networks, the computing systems designed to approximate how human brains work.
At the same time, many cloud vendors have also been introducing these processing capabilities via dedicated GPUs and field programmable gate arrays (FPGAs), the integrated circuits designed to be customized after manufacturing. Google, which has stated that AI is strategic across all its businesses, is offering dedicated AI services built on its custom tensor processing unit (TPU), the company's application-specific integrated circuit developed specifically for neural network deep learning projects.
"Cloud providers are betting that, over time, all companies will use deep learning and want to get a head start," said Sid J. Reddy, chief scientist at Conversica, which develops AI software for marketing and sales.
As CIOs begin mapping out their AI strategies -- in particular, their need and ability to do deep learning projects -- they must consider a variety of tradeoffs between using faster, more efficient private AI infrastructure, the operational efficiencies of the cloud, and their anticipated AI development lifecycle.
In general, private AI infrastructure is cost-effective for companies doing multiple, highly customized AI projects. If those companies are using data from applications running in the cloud, however, the cost of moving data into an on-premises AI system could offset the value of having dedicated hardware, making cloud-based AI cheaper. But, for many deep learning projects in this incredibly fast-moving field, the economics could quickly change. Here's a breakdown.
Take small steps
Private AI infrastructure requires a large investment in fixed costs and ongoing maintenance costs. Because of the capital expense related to building and maintaining private AI infrastructure, cloud-based AI services -- even when they cost in aggregate more than private infrastructure -- can be the smart economic choice as enterprises flesh out their AI strategy before making a bigger commitment.
For small companies, fears about the high price of using this new AI infrastructure shouldn't be the reason to not try deep learning projects, Reddy said. As deep learning becomes more accepted as state-of-the-art for a wide range of tasks, he believes that more AI algorithms will transition to it. This is because deep learning promises to reduce some of the overhead in preparing data and optimizing new AI models.
Enterprises and small companies alike also need to determine whether they have enough data to train the models for their deep learning projects without "overfitting," or creating a model that does not make accurate predictions for new data. Reddy said this is easier for a startup like Conversica that has data from hundreds of millions of conversations to work with. "It might not be the case with other startups that have limited aggregated data to begin with," he said.
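The overfitting risk Reddy describes is often spotted by comparing training accuracy against validation accuracy. The sketch below is illustrative only; the `overfitting_gap` helper and its 0.05 tolerance are hypothetical choices, not part of any particular framework.

```python
def overfitting_gap(train_acc, val_acc, tolerance=0.05):
    """Flag a model whose training accuracy far exceeds its validation
    accuracy -- the classic signature of overfitting on limited data.
    The 0.05 tolerance is an arbitrary, illustrative choice."""
    return (train_acc - val_acc) > tolerance

# A model that memorized a small training set:
print(overfitting_gap(0.99, 0.71))  # True -- likely overfit
# A model trained on ample data generalizes:
print(overfitting_gap(0.93, 0.91))  # False
```

A large gap on held-out data is exactly the symptom a data-poor startup would see first.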
Going beyond the basics
Some cloud providers like Microsoft with its Cognitive Services in Azure use FPGA chips under the hood for improving specific AI services. This approach hides the complexity of the FPGA from the customer, while providing some of the cost savings that FPGA chips provide on the back end. AWS has taken a different approach, becoming the first provider to allow enterprises to directly access FPGAs for some applications. And enterprises are starting to experiment with these.
For example, Understory, a weather forecasting service, has started moving some of its heavier machine learning algorithms into the cloud using AWS' new FPGA service to help with the analysis.
"Given our expansion of stations and our plan for growth, we will need to become smarter about the types of processors and metal we run our analyses and algorithms on," said Eric Hewitt, vice president of technology at Understory. "We would not push this type of power to our edge computing layer, but for real-time algorithms running on a network of data, it's feasible that we would use them."
Private AI, good for specialized needs
Some IT executives believe significant cost savings and performance improvements can be reaped by customizing AI-related hardware.
"I use a private infrastructure because my very specific needs are sold at a premium in the cloud," said Rix Ryskamp, CEO of UseAIble, an AI algorithm vendor. "If I had more general needs (typically, not machine learning), I would use cloud-only solutions for simplicity."
CIOs also need to think about the different components in the AI development lifecycle when deciding how to architect their deep learning projects. In the early research and development stages of an AI lifecycle, enterprises analyze large data sets to optimize a production-ready set of AI models. That sustained, compute-intensive work is typically more economical on dedicated hardware than in cloud-based AI infrastructure, and the resulting production models need far less processing power than the R&D that produced them. Therefore, Ryskamp recommended companies use private infrastructure for R&D.
The cloud, on the other hand, is often a better fit for production apps as long as requirements -- like intensive processing power -- do not make cost a problem.
"CIOs who already prefer the cloud should use it so long as their AI/[machine learning] workloads do not require so much custom hardware that cloud vendors cannot be competitive," Ryskamp said.
Energy efficiency, a red herring in deep learning projects?
"In general, the economics of doing large-scale deep learning projects in the public cloud are not favorable," said Robert Lee, chief architect with FlashBlade at Pure Storage, a data storage provider.
On the flip side, Lee agreed that training is most cost-effective where data is collected or situated. So, if an enterprise is drawing on a large pool of SaaS data, or using a cloud-based data lake, then he said it does make more sense to implement the deep learning project in the cloud.
Indeed, the economic calculus of on-premises versus using cloud-based AI infrastructure will also vary according to a company's resources and timetable. The attraction of deploying private infrastructure, so that it can take advantage of the greater power efficiency of FPGAs and new AI-chips, is only one benefit, Lee argued.
"The bigger Opex lever is in making data science teams more productive by optimizing and streamlining the process of data collection, curation, transformation and training," he argued.
Tremendous time and effort is often spent in the extract, transform and load-like phases of deep learning projects; these phases, rather than running the AI algorithms themselves, are what delay data science teams.
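As a rough illustration of those extract, transform and load-like phases, the sketch below shows where that time typically goes. The record fields and cleaning rules are hypothetical stand-ins for a real pipeline.

```python
def extract(records):
    # Pull raw rows from a source (an in-memory list stands in here
    # for a database or object store).
    return list(records)

def transform(rows):
    # Normalize fields and drop incomplete rows -- usually the most
    # labor-intensive stage for data science teams.
    return [
        {"text": r["text"].strip().lower(), "label": r["label"]}
        for r in rows
        if r.get("text") and r.get("label") is not None
    ]

def load(rows, store):
    # Land the cleaned rows where training jobs can reach them.
    store.extend(rows)
    return len(rows)

raw = [{"text": "  Hello World ", "label": 1},
       {"text": "", "label": 0},           # incomplete: dropped
       {"text": "Deep Learning", "label": 0}]
training_store = []
loaded = load(transform(extract(raw)), training_store)
print(loaded)  # 2 usable examples reach the training store
```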
Continuous learning blurs choice between cloud-based AI and private
The other consideration is that as AI systems mature and evolve, continuous or active learning will become more important. Initial approaches to AI have centered around training models to do prediction/classification, then deploying them into production to analyze data as it's generated.
Which will you choose for your deep learning projects: cloud-based AI or on-premises hardware?
"We are starting to realize that in most use-cases, we are never actually done training and that there's no clear break between learning and practicing," Lee said.
In the long run, CIOs will need to see that AI models in deep learning projects are very much like humans who continuously learn. A good model is like the undergraduate with an engineering degree who was trained in basic concepts and has a good basic understanding about how to think about engineering. But expertise is developed over time and with experience, while learning on the job. Implementing these kinds of learning loops will blur the lines around distinctions such as doing the R&D component on private infrastructure versus in cloud-based AI infrastructure.
"Just like their human counterparts, AI systems need to continuously learn -- they need to be fed a constant pipeline of data collection/inference/evaluation/retraining wherever possible," Lee said.
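Lee's collection/inference/evaluation/retraining pipeline can be sketched as a loop that flags drift and queues retraining. This is a toy illustration with a made-up model and accuracy floor, not a production design.

```python
def continuous_learning_step(model, batch, accuracy_floor=0.8):
    """One pass of the collection -> inference -> evaluation ->
    retraining pipeline. The 0.8 floor is a hypothetical threshold."""
    predictions = [model["predict"](x) for x, _ in batch]          # inference
    correct = sum(p == y for p, (_, y) in zip(predictions, batch))
    accuracy = correct / len(batch)                                # evaluation
    if accuracy < accuracy_floor:                                  # drift: queue retraining
        model["retrain_count"] += 1
    return accuracy

# A toy model that always predicts class 0, facing data that has drifted:
model = {"predict": lambda x: 0, "retrain_count": 0}
batch = [(1, 0), (2, 0), (3, 1), (4, 1), (5, 1)]  # (features, label) pairs
acc = continuous_learning_step(model, batch)
print(acc, model["retrain_count"])  # 0.4 1 -- accuracy fell, retraining queued
```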
Desktop as a Service (DaaS)
Posted by Thang Le Toan on 14 January 2018 01:09 PM
Desktop as a service (DaaS) is a cloud computing offering in which a third party hosts the back end of a virtual desktop infrastructure (VDI) deployment.
With DaaS, desktop operating systems run inside virtual machines on servers in a cloud provider's data center. All the necessary support infrastructure, including storage and network resources, also lives in the cloud. As with on-premises VDI, a DaaS provider streams virtual desktops over a network to a customer's endpoint devices, where end users may access them through client software or a web browser.
How does desktop as a service work?
In the desktop-as-a-service delivery model, the cloud computing provider manages the back-end responsibilities of data storage, backup, security and upgrades. While the provider handles all the back-end infrastructure costs and maintenance, customers usually manage their own virtual desktop images, applications and security, unless those desktop management services are part of the subscription.
Typically, an end user's personal data is copied to and from their virtual desktop during logon and logoff, and access to the desktop is device-, location- and network-independent.
VDI vs. DaaS
Desktop as a service provides all the advantages of virtual desktop infrastructure, including remote worker support, improved security and ease of desktop management.
Further, DaaS aims to provide additional cost benefits. Deploying VDI in-house requires a significant upfront investment in compute, storage and network infrastructure. Those costs have decreased, however, thanks to the emergence of converged and hyper-converged infrastructure systems purpose-built for VDI.
With DaaS, on the other hand, organizations pay no upfront costs. They only pay for the virtual desktops they use each month. Over time, however, these subscription costs can add up and eventually be higher than the capital expenses of deploying on-premises VDI.
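That break-even point can be estimated with simple arithmetic. The sketch below uses entirely hypothetical cost figures; real DaaS pricing and VDI capital costs vary widely.

```python
def breakeven_months(vdi_capex, vdi_monthly_opex, daas_per_desktop, desktops):
    """Months after which cumulative DaaS subscription spend overtakes
    an on-premises VDI deployment. All figures are hypothetical."""
    daas_monthly = daas_per_desktop * desktops
    if daas_monthly <= vdi_monthly_opex:
        return None  # DaaS never costs more than running VDI
    month = 0
    daas_total, vdi_total = 0, vdi_capex
    while daas_total <= vdi_total:
        month += 1
        daas_total += daas_monthly
        vdi_total += vdi_monthly_opex
    return month

print(breakeven_months(200_000, 3_000, 35, 300))  # 27 months for 300 seats
print(breakeven_months(200_000, 3_000, 35, 50))   # None -- DaaS stays cheaper
```

At a few hundred desktops the subscription can overtake capex in a couple of years; at small scale it may never do so.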
Additionally, some advanced virtual desktop management capabilities may not be available for certain DaaS deployments, depending on the provider.
Major DaaS providers
Two leading virtual desktop infrastructure vendors, Citrix and VMware, also provide desktop-as-a-service offerings. Another major DaaS provider is Amazon Web Services, whose offering is called WorkSpaces.
There are also a variety of cloud computing providers that host and manage Citrix and VMware virtual desktops through those vendors' partner programs.
What roadblocks are preventing DaaS from becoming the most common delivery model?
IBM Cloud Private pulls from Big Blue's roots
Posted by Thang Le Toan on 10 November 2017 02:04 AM
IBM sticks close to its roots with IBM Cloud Private, which taps Big Blue's enterprise and middleware strengths to move customers from the data center to private cloud.
Despite continually working to reinvent itself, IBM never strays far from its roots, as evidenced by its move to bring cloud-native technology to the enterprise data center to accelerate digital transformation efforts.
Earlier last week, IBM launched IBM Cloud Private, which enables enterprises to bring modern development technologies such as containers, microservices and APIs -- all attributes of public cloud environments -- to private clouds in the data center, where IBM has tenure as a leading technology provider.
Big Blue dominant in the data center
IBM has long held a dominant position in the data center, with its mainframe, database and middleware technology. Now, the company is building from that base to help its enterprise customers in regulated industries or that have sensitive data -- such as healthcare, government and finance -- gain the benefits of cloud-native computing development tools and processes, portability and integration.
"As part of its private cloud offering, IBM's been enhancing its developer services in the form of an integrated DevOps tool chain via a service catalog featuring a range of runtimes, development frameworks, tools, middleware, OSS and other services," Charlotte Dunlap, an analyst with GlobalData, said. "This plays into IBM's intent to provide developers with the tools, languages and frameworks they're accustomed to using, e.g., extending services to Node.js or Swift developers."
Indeed, the new offering provides developers with access to a variety of management and DevOps tools, including application performance management, Netcool, UrbanCode and Cloud Brokerage. It also includes support for popular tools such as Jenkins, Prometheus, Grafana and Elasticsearch.
Kubernetes at its core
Steve Robinson, general manager of IBM Hybrid Cloud, said that after several entries into the private cloud space with offerings such as Bluemix Local and others, Big Blue "took a clean sheet of paper and took a look at modern development technologies" and decided to base IBM Cloud Private on Kubernetes. "Then, we decided to bring our DevOps stack and middleware stack forward," he said.
IBM introduced container-optimized versions of its core middleware -- IBM WebSphere Liberty, Db2 and MQ messaging middleware -- to complement the new product.
Positioning vs. competition
Meanwhile, some observers view IBM Cloud Private as IBM's answer to competing offerings such as Microsoft Azure Stack, which provides similar on-premises capabilities. However, IBM said that its strength in middleware and its foundation in enterprise systems set it apart.
"This better positions IBM against primary rivals which are Microsoft Azure Stack and VMware/Pivotal, with a cloud strategy that has evolved up the stack from [infrastructure as a service] to [platform as a service] and now to what they call 'enterprise transformation' -- meaning more personalized customer engagement capabilities fulfilled through technologies supporting multi-cloud, cognitive and API, and blockchain," Dunlap said of the new product. "IBM says 71% of its customers today use three or more clouds including public, private and departmental. Private remains their largest customer opportunity with complex requirements and latency issues."
Based on its own data, IBM estimated that customers will spend more than $50 billion annually on private cloud infrastructure beginning in 2017 and growing at 15% to 20% each year through 2020.
Microsoft's one big advantage in the segment is being able to do both public and private cloud almost seamlessly, said Rob Enderle, an industry expert and founder of the Enderle Group.
"Recently, Cisco and Google partnered to provide the same capability, and now IBM is moving at the same opportunity," he said. "IBM, like Cisco, should be particularly strong on the on-premises side of this and their execution with SoftLayer has been very strong of late resulting in what should be a very competitive offering. This should expand the available market for IBM's now hybrid solution significantly."
In a statement, Tyler Best, CTO of car rental giant Hertz, said, "Private cloud is a must for many enterprises, such as ours, working to reduce or eliminate their dependence on internal data centers." He added that a strategy of public, private and hybrid cloud is "essential" for large enterprises transitioning from legacy systems to the cloud.
With such a big opportunity at stake, every cloud vendor is positioning itself to capture as much of the wave of enterprise interest in Kubernetes as possible onto its own platform, said Rhett Dillingham, an analyst at Moor Insights & Strategy. And with IBM Cloud Private, IBM is providing its Kubernetes-based platform for use on private infrastructure with the integrated value of its investment in complementary management and developer tooling.
"As part of this, IBM is offering new containerized versions of its software and development frameworks, because it has a big opportunity to help its existing software customers transition to cloud by modernizing their management of IBM WebSphere Liberty-, Db2- and MQ-based applications using containers via Kubernetes," Dillingham said. "This is a key opportunity for IBM in bridging from leading provider for traditional enterprise applications to leading provider for cloud-modernized and cloud-native applications on its IBM Cloud Private and IBM Public Cloud offerings."
Sticking to its knitting
So, with IBM Cloud Private, IBM is sticking to its knitting while helping to advance its enterprise customers with modern development tools.
"IBM Cloud Private extends the value of customers' existing IBM investments rather than being a new, on-premises cloud platform, like Microsoft's Azure Stack," said Charles King, principal analyst at Pund-IT.
The primary benefit of this offering is it enables enterprises to take advantage of the investments they've already made in existing systems, applications and data by bringing them into an elastic cloud platform.
"This will help accelerate application development, more easily expose these applications to new public cloud services and even provide the option of moving applications to the public cloud," said Michael Elder, distinguished engineer for the IBM Cloud Private platform. "We also think it sets an enterprise up with a powerful new tool for workload portability from their datacenter to the public cloud."
The platform provides tools to help bootstrap new applications into containers and enable existing applications for the cloud, he noted.
"We also build IBM Microservice Builder into the platform, which offers preconfigured Jenkins CI service build container images and publishes them to the built-in image registry right out of the box," Elder said.
The system also includes other management and security features, such as multi-cloud management automation, a security vulnerability advisor, data encryption and privileged access, and more.
Moreover, IBM Cloud Private supports Intel-based hardware from Cisco, Dell EMC, Lenovo and NetApp, and it can be deployed via VMware, Canonical and other OpenStack distributions.
VeloCloud-VMware acquisition will battle Cisco in the branch
Posted by Thang Le Toan on 06 November 2017 11:36 PM
The VeloCloud-VMware acquisition will mark the first time VMware will compete directly with Cisco in networking. Cisco, however, remains the 800-pound gorilla.
VMware plans to acquire SD-WAN vendor VeloCloud Networks, a move that would turn the branch office into a battleground for the virtualization provider and Cisco.
The VeloCloud-VMware acquisition, announced this week, is expected to close in early February. With VeloCloud, VMware would go head-to-head against Cisco's Viptela, IWAN and Meraki brands. SD-WAN, in general, intelligently routes branch traffic across multiple links, such as broadband, MPLS and LTE.
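That link-selection idea can be illustrated with a toy policy: latency-sensitive traffic prefers the lowest-latency healthy link, while bulk traffic takes the cheapest. This is a hypothetical sketch, not how VeloCloud, Viptela or any vendor actually implements path selection.

```python
def pick_link(links, traffic_class):
    """Toy SD-WAN path selection. A link with packet loss over 2% is
    treated as unhealthy; both policy and thresholds are made up."""
    healthy = [l for l in links if l["loss_pct"] < 2]
    if traffic_class == "realtime":
        return min(healthy, key=lambda l: l["latency_ms"])["name"]
    return min(healthy, key=lambda l: l["cost"])["name"]

links = [
    {"name": "mpls", "latency_ms": 20, "loss_pct": 0.1, "cost": 3},
    {"name": "broadband", "latency_ms": 35, "loss_pct": 0.5, "cost": 1},
    {"name": "lte", "latency_ms": 60, "loss_pct": 3.0, "cost": 2},  # lossy
]
print(pick_link(links, "realtime"))  # mpls
print(pick_link(links, "bulk"))      # broadband
```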
"This is the first time that Cisco and VMware will directly compete in the networking world," said Shamus McGillicuddy, an analyst at Enterprise Management Associates, based in Boulder, Colo.
Before, the closest Cisco and VMware came to competing in networking was with their software-defined networking platforms ACI and NSX, respectively. The products, however, serve mostly different purposes in the data center. NSX provisions network services within VMware's virtualized computing environments while ACI distributes application-centric policies to Cisco switches.
Choosing the right virtualization monitoring and management tools
Posted by Thang Le Toan on 10 October 2016 12:15 PM
There are a bevy of virtualization monitoring and management tools on the market, and your IT shop’s success hinges on finding the right one.
Virtualization monitoring and management tools are critical to a successful server consolidation project. If you drive hardware utilization rates up too far, you can end up with poor virtualization performance for end users. Since the last thing you want is user complaints, you’ll have to closely monitor performance as you add more and more virtual machines (VMs).
All virtualization monitoring and management tools have varying capabilities. Some measure performance in real time, while others provide historical data. Certain tools provide statistical analysis of data to eliminate false positives and can help with monitoring and troubleshooting performance problems. Capacity planning and analysis and chargeback of virtual infrastructure resources are other abilities to look for in performance monitoring tools.
Virtualization monitoring and management options
Quest Software Inc.’s vFoglight (formerly Vizioncore vFoglight) is one of the few multiplatform performance measuring tools and can monitor both VMware Inc. vSphere and Microsoft Hyper-V. VFoglight has one of the prettiest interfaces, but it may require a significant amount of resources to run (see Figure 1).
The red/yellow/green indicators on the dashboard are useful, as are the visually appealing graphs and dials showing performance status. VFoglight records information on guest processes as well as “alerts with expert advice.” This product has detailed architectural representations, models workload migration, and it helps manage resource usage and multiple virtual data centers.
Veeam Monitor from Veeam Software is part of the larger Veeam One suite. Monitor installs as a traditional Windows application, and you can use the included SQL Express local database or connect it to your own database.
The monitoring and management tool integrates with the free Veeam Business View to present the virtual infrastructure according to how your company is organized. Monitor offers good views of “top talkers” in each of the different resource areas: CPU, memory, storage and network (see Figure 2).
The product has more than 125 performance alerts, and it can correlate vSphere performance information with vSphere events, which is beneficial for troubleshooting. The free version of Veeam Monitor has some limitations, such as providing only seven days of history.
Veeam Monitor Plus comes with capacity planning, change management and reporting/chargeback capabilities. The commercial version of Veeam Monitor is agentless and includes custom alarms. It also enables management of guest, host and vCenter processes.
Virtualization monitoring and management tools with actionable predictions
VKernel Corp. was the first company to feature virtualization capacity analysis. Its vOPS tool deploys as a virtual appliance, so there is no OS license or traditional app/database installs to perform.
The VKernel vOperations Suite (see Figure 3) includes the Performance Analyzer, Capacity Manager, Optimizer, Reporting and Chargeback components.
It is also available as a vCenter plug-in or a Web-based interface and integrates with Active Directory. The components are simply unlocked with license keys, and just one appliance is needed for all affected products.
Single appliance cuts the complexity of virtualization monitoring and management
A relatively new entry in the market is VMTurbo Inc.’s Virtualization Management Suite (VMS). VMS is a virtual appliance that provides basic virtual infrastructure monitoring at no cost. Additional modules (licensed individually) include Reporter, Planner and Optimizer, as well as the free Monitor piece. All of VMTurbo’s virtualization monitoring and management products are installed in a single virtual appliance.
The Reporter module focuses on performance reporting and capacity reporting, not necessarily inventory reporting like Veeam Reporter.
Planner deals with what-if scenarios and is a capacity planning tool rather than a physical-to-virtual (P2V) planning tool (see Figure 4). Optimizer aids in resource optimization and bottleneck identification.
Helpful netflow stats
Xangati’s virtualization monitoring and management tool also deploys as a virtual appliance. A free version of Xangati for VMware’s ESX hypervisor is limited to monitoring a single ESX host.
Xangati’s offering is unique because of several features.
In addition, Xangati recently released dashboards that give health scores to virtual infrastructure objects. These scores are based on historical statistics and provide guidance on normal and abnormal behavior in the infrastructure.
The tool has continuous, real-time visibility into more than 100 metrics on an ESX/ESXi host and its VM activity, including CPU, communications, memory, and disk and storage latency (see Figure 5). The commercial version has a management dashboard for multiple hosts and virtual desktop infrastructure (VDI) functionality.
Xangati is useful for troubleshooting the virtual network, and it would be beneficial if other performance tools integrated NetFlow statistics.
For Linux pros
Zenoss Inc.’s products are based on open source software. Zenoss Community Edition is free, but if you don’t have Linux experience, it may be difficult to deploy (see Figure 6). A downloadable virtual machine is designed for VMware Player, so you may have difficulty deploying it to vSphere.
Zenoss offers configuration management database (CMDB) support, inventory and change tracking, performance monitoring, log monitoring, and alerting.
The Community Edition isn’t as strong for capacity analysis and finding waste as some of the more dedicated tools—but then, it is free.
VMware Inc.’s two primary performance monitoring tools are vCenter Operations and Capacity IQ. In August 2010, the company acquired Integrien and absorbed Alive into its product line. That product later became vCenter Operations and takes a statistical anomaly approach to identifying performance problems.
It learns what is “normal” in your virtual infrastructure and provides numerical scores for how hosts and virtual machines deviate from that statistical norm.
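A minimal version of that deviation scoring is a z-score against the learned baseline. The sketch below is a simplified illustration of the statistical idea, not VMware's actual algorithm.

```python
import statistics

def deviation_score(history, current):
    """Score how far a current metric strays from its learned baseline.
    A z-score near 0 is "normal"; large values flag anomalies."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(current - mean) / stdev

cpu_history = [40, 42, 38, 41, 39, 40, 43, 41]  # percent; the learned norm
print(deviation_score(cpu_history, 41) < 1)     # True -- within the norm
print(deviation_score(cpu_history, 75) > 3)     # True -- strong anomaly
```

Scoring every host and VM this way yields the kind of numerical deviation ranking the article describes.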
VMware’s other primary performance monitoring and management application is VMware vCenter Capacity IQ, which focuses on—you guessed it—capacity of the virtual infrastructure.
It is deployed as a virtual machine and is used to identify overallocated virtual machines, automate capacity monitoring and management tasks and identify capacity bottlenecks. Capacity IQ can also perform what-if capacity analysis.
Depending on your enterprise needs, the VMware vSphere statistics already reported in VMware’s vCenter may be enough, or you may need insight into the virtual infrastructure’s network and storage through alternate sources.
Choosing between virtualization monitoring and management tools
Decide which functions are most important for your data center.
Not all virtualization monitoring and management tools are created equal. Some are intended for general performance monitoring and alerts, while others offer very specific capacity analysis or troubleshooting capabilities. Performance tools may be bundled with other products such as change management or reporting applications.
Veeam Monitor and Quest vFoglight are solid examples of traditional, virtual infrastructure performance tools. But lesser-known performance tools such as Xangati, VKernel, SolarWinds and VMTurbo offer unique features. Make sure you look at third-party vendors before you select a tool.
Some IT shops may have concerns about virtual appliances for security reasons, but you may love the ease of deploying virtual appliances over installing traditional Windows apps. Evaluate performance monitoring and management tools like these in your own environment to ensure that they give you not only immediate but also long-term value.
Why you need VM monitoring tools
Posted by Thang Le Toan on 10 October 2016 12:08 PM
Smaller environments may be able to make do with native hypervisor features, but for true scalability, you'll need dedicated virtualization management tools.
VM monitoring tools should be an important part of any virtual data center. In some cases, organizations may be reluctant to invest in such tools because they do not see the cost benefit. After all, any hypervisor includes native tools, and the organization may already have tools in place for monitoring physical servers. Even so, VM monitoring software provides capabilities that go beyond those of monitoring tools designed for physical servers or native tools included with a hypervisor.
Major hypervisor vendors, such as Microsoft and VMware, offer their own hypervisor management and monitoring tools. Microsoft's product is System Center Virtual Machine Manager (SCVMM). VMware offers vCenter, as well as a variety of options under its vRealize Suite. There are also third-party tools available, such as VMTurbo, and vCommander from Embotics. Each product has its own unique feature set and capabilities, but there are some relatively common capabilities which administrators typically look for in VM monitoring tools. These capabilities usually provide specific value for virtualization environments, and are beyond the scope of a product designed for monitoring physical servers.
Tools for physical servers come up short
This raises the question of what benefits virtualization-specific monitoring software provides that monitoring software intended for use in the physical world does not. One such example might be automated VM deployment. In all fairness, there are many products able to perform bare-metal provisioning for physical servers, and many can even deploy an operating system to a VM. However, a product designed for physical servers would presumably lack the ability to create a VM. As such, an administrator would have to manually create the VM and configure its hardware allocations before using a tool to deploy the operating system. Conversely, a virtualization specific tool such as vCenter or SCVMM can be used to automate the entire VM creation and operating system deployment process.
Another capability that a hypervisor-level monitoring tool might provide that tools designed for physical servers do not, is dynamic workload optimization. VM monitoring software can monitor each virtualization host's utilization, and dynamically move workloads among hosts on an as-needed basis in an effort to balance the workload. Although tools designed for the physical world might offer workload monitoring as a part of a capacity planning feature, there would be no feature that would allow workloads to be dynamically moved among physical machines because the physical world has no concept of VM portability.
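The dynamic workload optimization described above can be sketched as a greedy rebalancer that migrates the smallest VM off any overloaded host. This is a toy illustration; real tools weigh many more signals than total load, and the 80% threshold is an assumption.

```python
def rebalance(hosts, threshold=80):
    """Move the smallest VM off any host whose total load (percent)
    exceeds `threshold`, onto the currently least-loaded host.
    `hosts` maps host name -> list of per-VM load percentages."""
    moves = []
    for name, vms in hosts.items():
        while sum(vms) > threshold and len(vms) > 1:
            target = min(hosts, key=lambda h: sum(hosts[h]))
            if target == name:        # nowhere better to put the VM
                break
            vm = min(vms)
            vms.remove(vm)
            hosts[target].append(vm)
            moves.append((vm, name, target))
    return moves

cluster = {"host-a": [50, 30, 20], "host-b": [20]}
moves = rebalance(cluster)
print(moves)                                    # [(20, 'host-a', 'host-b')]
print({h: sum(v) for h, v in cluster.items()})  # {'host-a': 80, 'host-b': 40}
```

Tools built for physical servers have no equivalent move step, because a physical workload cannot simply be relocated.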
Native features vs. third-party VM monitoring tools
Although virtualization specific management and monitoring tools offer additional capabilities, it is also worth considering what functionality such tools might provide beyond what is available in the tools that are natively included with the hypervisor. Once again, the feature set varies from one vendor to the next, but one of the biggest advantages to adopting dedicated VM monitoring tools is that these tools generally provide better scalability than native tools.
Consider the tools Microsoft provides for Hyper-V. Hyper-V includes the Hyper-V Manager, which allows for basic VM creation, monitoring and management. However, the Hyper-V Manager deals with VMs on a per-host basis. This means administrators must make the tool aware of each Hyper-V host. Furthermore, the Hyper-V Manager does not provide a unified view of the organization's VMs. In contrast, Microsoft SCVMM allows administrators to define host servers, host groups, host clusters and even private clouds. The tool can also display all of the organization's VMs collectively, regardless of which server they reside on. In other words, the SCVMM provides much better management scalability than what can be achieved by using the Hyper-V Manager.
Of course, scalability isn't the only feature SCVMM offers beyond the Hyper-V Manager's capabilities. SCVMM includes template-based VM deployments, abstracted storage management and the ability to define maintenance schedules, among others. Keep in mind that I am only using SCVMM as an example. VMware's vCenter offers numerous capabilities that do not exist in VMware's native management interface.
Although it is possible to operate a virtualized environment without dedicated VM monitoring software, such tools allow you to use your virtualization infrastructure to its full potential. VM monitoring tools provide capabilities that simply cannot be achieved with native hypervisor tools or with management tools designed for the physical world.
VMware unveils Cross-Cloud Architecture at VMworld 2016
Posted by Thang Le Toan on 07 October 2016 01:53 AM
Cross-Cloud Architecture, VMware's latest attempt at a comprehensive multicloud offering, is largely focused on VMs, which could cause problems going forward.
The dust has settled from VMworld US for another year. What did we get in the way of new products and features? There was no new version of vSphere or any of the other core VMware products. Instead, we got bundles and architecture. The Cloud Foundation bundle is basically vSphere with Virtual SAN for storage and NSX for networking. This comes as no surprise, as these are the products VMware has said will drive revenue growth, so they need to be easier to sell. At least VMware didn't choose a horrible name for the bundle this time; the new architecture is called VMware Cross-Cloud Architecture. I hope VMware is starting down the long road to a real hybrid cloud approach, though I fear this is another attempt to make hybrid cloud an all-VMware offering.
What businesses want
Businesses that I talk to have wanted a workload broker for many years. They want to be able to offer business units the ability to deploy applications on a range of platforms, both on premises and in the cloud.
Businesses want to get the agility of a public cloud deployment for some applications. These same businesses want to have tight control over applications that contain critical data. The hardest part is that they want the full value of each platform, not the lowest common denominator service offering that works the same on every platform.
Most businesses that choose to put workloads in Amazon Web Services (AWS) are not doing so because they can get a virtual machine for 10 cents per hour. Most workloads in AWS are engineered and developed specifically for the AWS platform. The same is true for workloads on Google Cloud Platform (GCP). Deployed application architectures are different on these public clouds.
Azure is different: ironically, cloud deployments there are very similar to on-premises deployments. Those adopting Azure are more likely to migrate their existing on-premises workloads into Azure because they can do so without making major architectural changes.
VMware has seen the same behavior among the relatively small number of clients that moved to vCloud Air. Both Azure and vCloud Air are well-suited to lift-and-shift migrations, in which on-premises VMs are migrated into the cloud. Based on VMware's history, a hybrid cloud is all about where to place your VMs. But that is not how the public cloud has delivered value to AWS and GCP customers.
Qualms over Cross-Cloud Architecture
What I don't like about VMware's Cross-Cloud Architecture is that it seems to be focused solely on VMs. This is natural for VMware, as it has had great success with VMs in enterprise data centers. When all you have is a hypervisor, every problem is solved by more VMs.
The Cross-Cloud Architecture makes it easy to deploy some of those VMs on AWS, or other cloud platforms. That's great, but it's just the beginning. A complete cross-cloud architecture needs to allow VMware customers to get full value out of each cloud platform.
Making a broker expose the uniqueness of each cloud is no simple task. Cloud platforms are highly differentiated and services are not directly comparable. The simplest example is that you can run Microsoft SQL Server in many ways, including on premises, as a service on Azure and as a service on AWS. Each platform has a different way to manage database performance and colocation of multiple databases.
Each cloud platform might have options for other kinds of databases; for example, AWS has nearly a dozen database options in its own services. Then databases can be run inside VMs on any platform that lets you run a VM. A more complicated example is a network load balancer. For on-premises deployment, you can have physical or virtual appliances.
Alternatively, you might use network virtualization like VMware NSX. In the cloud, you might use the virtual options, or you might use the cloud platform's own load balancer. To get the most value from a hybrid cloud deployment, you need to choose the appropriate deployment for each platform.
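The point about platform-appropriate deployments can be made concrete with a toy broker. The service names below are real offerings, but the `Workload` class, the catalog and the selection logic are hypothetical, invented purely to illustrate why a VM-only broker flattens every platform to its lowest common denominator:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    kind: str         # e.g. "sql-database" or "load-balancer"
    managed_ok: bool  # may this workload run as a managed platform service?

# Each platform offers a different, non-comparable flavor of the "same" service.
CATALOG = {
    "sql-database": {
        "on-prem": "SQL Server in a VM",
        "azure":   "Azure SQL Database",
        "aws":     "Amazon RDS for SQL Server",
    },
    "load-balancer": {
        "on-prem": "physical appliance, virtual appliance or NSX",
        "azure":   "Azure Load Balancer",
        "aws":     "Elastic Load Balancing",
    },
}

def place(workload, platform):
    """Pick the platform-appropriate deployment for a workload.
    A VM-only broker would return the generic VM option every time."""
    if workload.managed_ok and workload.kind in CATALOG:
        return CATALOG[workload.kind].get(platform, "generic VM")
    return "generic VM"
```

A broker that exposes each platform's native services, rather than always answering "generic VM," is what makes the difference between a true cross-cloud architecture and mere VM placement.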
Planning for the future
So far, Cross-Cloud Architecture seems to treat non-vSphere platforms as if they were vSphere platforms. Basically, each becomes another place you might run your VMs. As a start, this is okay; being able to choose from a range of locations when I deploy a VM is good. It's what we expected when VMware acquired DynamicOps back in 2012, an acquisition that resulted in the vRealize Automation (vRA) product. vRA is the heart of the Cross-Cloud Architecture. But VMs are not the answer to every problem.
Businesses want to use the services of each cloud, and VMware needs to extend the Cross-Cloud Architecture to cover the services that each cloud platform offers. I expect that we will see this when VMware has a cloud platform that offers application services. For me, the real question is when VMware will start to offer these services. Maybe Cross-Cloud Architecture is really only marketing aimed at reducing the perceived value of competing cloud offerings. If every cloud is just a place to run your VMs, then the VMware cloud is a great option.
Why cloud ERP software is the best choice for manufacturing SMBs
Posted by Thang Le Toan on 13 May 2016 11:13 AM
Small and medium-sized manufacturers pursue cloud ERP software for lower costs, easier maintenance and greater flexibility. Learn how some SMBs have benefited by moving to the cloud.
When it comes to ERP systems today, it can be easy to forget that some companies -- even small and medium-sized businesses (SMBs) -- still install them on premises. But there is no doubt: cloud ERP software is taking the world by storm. SMB manufacturers, like their larger counterparts, are increasingly taken with cloud offerings.
Traditional enterprise software vendors have cloud-based ERP products targeted at the SMB space, while pure-play cloud vendors and even open source cloud providers are gaining traction. The whole world of ERP seems to have gone cloud.
"Certainly for new implementations, a lot more organizations are looking for cloud solutions," said Nick Castellina, research director of business planning and execution at Aberdeen Group. "SMBs don't have the money for upfront capital expenditure. And they don't want to focus on owning a large ERP implementation. They want to focus on what they do best: making goods and selling them."
Don't think cloud approach is 'one-size-fits-all'
Of course, there are still situations where a midsize manufacturer might prefer to install and run its ERP software on site. "Cloud is not a one-size-fits-all solution," said Katharine Rudd, managing director at Alsbridge, a global consulting company. "For more complex customized deployments, on-premises or hosted solutions will be a better fit."
On the other hand, many consultants, including Castellina, recommend that companies reduce the amount of ERP customization to a minimum to keep a handle on complexity.
Not so long ago, it was common to hear managers say they could not use cloud ERP software because of reliability and security concerns. Those concerns have largely been put to bed. The fact is, a cloud ERP vendor has far more resources to devote to uptime (backed by service-level agreements) and information security than virtually any SMB.
Manufacturers that have the very highest uptime requirements can use a system that fails over to a network connection, Castellina said. At least one cloud ERP vendor sells an appliance that is installed on premises to buffer transactions between the factory and the cloud.
Moving to the cloud a 'no-brainer' decision for SMBs
By the summer of 2014, electrical manufacturer Scott Fetzer Electrical Group's (SFEG) ERP system was on its last legs. A unit of giant conglomerate Berkshire Hathaway, SFEG produces electrical products, including motors used in everyday items such as blenders. SFEG needed to modernize its operations to support an effort to promote business automation through the use of robotics. In use for 25 years, the existing ERP system was a legacy application, to put it mildly.
Rather than seeking to update its on-premises system, SFEG opted for instant modernization in the form of cloud ERP software from Kenandy, based on the Force.com platform.
"We just didn't have good enough visibility into our data," said Matt Bush, director of operations for SFEG in Fairview, Tenn. "We had accounting-created reports that would come out once a day to give a pulse on what was happening. But we needed real-time insight."
In addition to gaining access to real-time financial, production and shipping metrics, now employees can access the system via smartphone wherever they are located -- a major improvement over the previous system.
"Our ultimate dream was to run the factory from the beach in Cancun," Bush said.
Cloud ERP costs 'an order of magnitude' less than on premises
Now, SFEG executives have immediate insight into what they can and cannot ship at any given moment. Orders are not released to production until all the parts are in place, resulting in fewer delayed orders and a much better experience for customers.
Bush appreciates the quarterly software updates that proceed without much fanfare. His team has customized the interface to emphasize fields that are important to them, such as enabling multiple pick operations for shipping. This customization was accomplished easily and without any need for programming -- an advantage of the embedded Force.com platform, Bush said.
SFEG invested $100,000 in the Kenandy cloud ERP software, which Bush believes is "an order of magnitude" less costly than a new on-premises installation would have been. Although a few advanced capabilities -- including personnel and machine capacity planning -- are missing, they are coming soon. And the cost and maintenance advantages have been superb.
"It's a no-brainer," Bush said. "This is ideal for a small to midsize company."