Latest Updates
Run legacy COBOL in a virtual infrastructure or cloud
Posted by Thang Le Toan on 07 June 2018 11:21 AM

COBOL forms the core of many legacy systems, and virtualizing it can be challenging. Organizations can use migration tools and Visual COBOL to modernize it.

The risks of maintaining an out-of-date COBOL system are significant, so organizations must either modernize a legacy COBOL system or transform it from the ground up. COBOL forms the core of many important systems. The organizations that use them are rightfully wary of modernization, because the costs and risks involved in transforming a central application are immense. Failure to act -- either by virtualizing COBOL or migrating it to the cloud -- could be detrimental to the business.

Many systems are rooted in legacy COBOL

COBOL is at the core of many banking and business systems; governments worldwide are heavy users. Much of COBOL -- there are some 200 billion lines of legacy COBOL code still in use -- runs on old-style mainframes, despite their high operational expenses and limited performance.

For COBOL's base of conservative users, those mainframes are a significant barrier to migration. They tend to see mainframe replacement with client-server or commercial off-the-shelf systems as too risky, at least until the new technology matures. In today's mobile-focused, graphical environment, however, the need for a bridge out of legacy COBOL has become critical, so code modules have been written to interface with the outside world using C, .NET and other modern languages and environments.

Risks are high for legacy systems

Current IT tasks demand agility, and businesses need to evolve to match the changing market and get ahead of competitors. In the past, this seemed to require a migration from legacy COBOL to a modern language. The problem with this approach is that the COBOL app is often the backbone of company operations, and the huge cost of proving rewritten code before it's released into production adds to the risk.

There are plenty of examples of huge failures in the rewrite process. A number of government software projects around the world have crashed and burned spectacularly. This possibility makes many CIOs grit their teeth and buy another mainframe every 10 years or so. These captive customers can expect to pay through the nose, first for capital equipment and then for support and operating costs.

New tools provide virtualization options

The maturation of the cloud, coupled with migration tools, offers solutions to the captive hardware dilemma. We now have tools that transition core legacy COBOL apps from the mainframe into VMs without any major rewrite. You'll need to recompile to target the VM, but that's not a big deal compared to mass code rewrites.

Almost all of the code can move without change, though anything written in assembler will need to be recoded. With the right compiler, I/O and comms will also be identical to the old system. Things might get a bit more complicated if you're going to use RESTful I/O instead of block I/O.

Taking advantage of the parallel nature of the cloud is also a potential challenge. Several companies, such as Micro Focus, offer consultation on this topic, as well as on the general issue of tuning legacy COBOL in the cloud.

Making migration happen

The trickiest part of a COBOL migration to the private cloud lies in building out a virtual infrastructure that properly supports the apps. Semantics and structure differ from one private cloud to another and, of course, all differ from the mainframe experience. A one-to-one match won't happen, so it's crucial to understand the differences between the original and what is possible in the private cloud. Don't expect to get it right the first time. It's critical to deploy test and QA processes to succeed with the migration.

Performance is another issue. Private clouds aren't mainframes, but they offer far more scalability. It's critical that your app is set up to run in a multicore environment so it can take advantage of the private cloud's inherent parallelism.

If the app isn't multithreaded, all might not be lost on the performance side. Some of the largest private cloud instances are powerful computers in their own right and might match or exceed a mainframe in compute power, especially if given a large dynamic RAM space and local solid-state drive instance storage. Of course, the most recent mainframes are faster too, but you should compare the 12-year-old mainframe you want to lose with the latest private cloud instances.
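If the app isn't fully threaded, Amdahl's law gives a rough feel for how much of that parallelism is actually reachable. A quick illustrative calculation (the fractions below are made up, not measurements of any real COBOL workload):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Estimated speedup from Amdahl's law for a workload in which
    parallel_fraction (0..1) of the work can run concurrently on `cores` cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A mostly serial batch job gains little from a 32-core cloud instance...
print(round(amdahl_speedup(0.30, 32), 2))  # 1.41
# ...while a well-threaded app gets much closer to the hardware's potential.
print(round(amdahl_speedup(0.95, 32), 2))  # 12.55
```

The serial fraction dominates quickly, which is why a single large instance with fast storage can sometimes beat a parallel deployment for an unthreaded app.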

Legacy COBOL can be rejuvenated

All of this sounds pretty easy, but we need to step back and look at some collateral issues involved in a cloud transition. The programming staff for COBOL typically consists of long-term employees nearing retirement. They can apply changes with pinpoint accuracy because they have enormous experience with the app in use and in the company's field of business. The problem is that the industry isn't generating replacements. There are no universities or trade schools teaching COBOL; it has become the Latin of the IT industry, a dead language only good for ancient IT tasks.

The answer is to rejuvenate COBOL itself. A decade ago, Visual Basic revived Basic in a similar way through a design that worked with current coding practices. As with Basic, this doesn't initially mean changing the code itself, but the update does make COBOL easier for new programmers to learn and speeds up development.

Visual COBOL isn't the complete answer. Organizations need to upgrade the IT processes that relate to changes, which is a considerable effort. Many legacy COBOL shops have change queues longer than a year, a timeline that doesn't meet the modern standards of agile operation.

Competitive pressure might force transformation

Even a cloud-based COBOL approach will face the competitive pressure of new technologies. Technology built around in-memory databases can easily outpace a legacy COBOL app, while GPU acceleration and massively parallel processing on the software side and much faster server engines on the cloud infrastructure side can increase the agility gap. In other words, the decision to move to cloud COBOL might only delay an inevitable move to a modern app.

The challenge for the CIO is to decide whether to modernize by moving legacy COBOL apps to VMs or the cloud, or to do a complete makeover. With today's Visual COBOL as a tool, that's at least a realistic choice.

Read more »

operational intelligence (OI)
Posted by Thang Le Toan on 07 June 2018 06:52 AM

Operational intelligence (OI) is an approach to data analysis that enables decisions and actions in business operations to be based on real-time data as it's generated or collected by companies. Typically, the data analysis process is automated, and the resulting information is integrated into operational systems for immediate use by business managers and workers.

OI applications are primarily targeted at front-line workers who, hopefully, can make better-informed business decisions or take faster action on issues if they have access to timely business intelligence (BI) and analytics data. Examples include call-center agents, sales representatives, online marketing teams, logistics planners, manufacturing managers and medical professionals. In addition, operational intelligence can be used to automatically trigger responses to specified events or conditions.

What is now known as OI evolved from operational business intelligence, an initial step focused more on applying traditional BI querying and reporting. OI takes the concept to a higher analytics level, but operational BI is sometimes still used interchangeably with operational intelligence as a term.

How operational intelligence works

In most OI initiatives, data analysis is done in tandem with data processing or shortly thereafter, so workers can quickly identify and act on problems and opportunities in business operations. Deployments often include real-time business intelligence systems set up to analyze incoming data, plus real-time data integration tools to pull together different sets of relevant data for analysis.

Stream processing systems and big data platforms, such as Hadoop and Spark, can also be part of the OI picture, particularly in applications that involve large amounts of data and require advanced analytics capabilities. In addition, various IT vendors have combined data streaming, real-time monitoring and data analytics tools to create specialized operational intelligence platforms.

As data is analyzed, organizations often present operational metrics, key performance indicators (KPIs) and business insights to managers and other workers in interactive dashboards that are embedded in the systems they use as part of their jobs; data visualizations are usually included to help make the information easy to understand. Alerts can also be sent to notify users of developments and data points that require their attention, and automated processes can be kicked off if predefined thresholds or other metrics are exceeded, such as stock trades being spurred by prices hitting particular levels.
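That last step -- automated responses to threshold breaches -- reduces to a simple rule table. A minimal sketch, with hypothetical metric names and limits not tied to any particular OI platform:

```python
# A minimal OI alerting sketch: the metric, threshold and action are
# hypothetical placeholders.
class Rule:
    def __init__(self, metric, threshold, action):
        self.metric = metric
        self.threshold = threshold
        self.action = action

def process_event(event, rules):
    """Check one incoming operational event against the alert rules and
    fire the associated automated action when a threshold is exceeded."""
    for rule in rules:
        value = event.get(rule.metric)
        if value is not None and value > rule.threshold:
            rule.action(rule.metric, value)

alerts = []
rules = [Rule("queue_wait_seconds", 120,
              lambda metric, value: alerts.append(f"ALERT {metric}={value}"))]

process_event({"queue_wait_seconds": 300}, rules)  # exceeds the threshold
process_event({"queue_wait_seconds": 45}, rules)   # within normal range
print(alerts)  # ['ALERT queue_wait_seconds=300']
```

A production system would evaluate rules against a data stream and route actions to dashboards, notification services or downstream systems, but the pattern is the same.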

Operational intelligence uses and examples

Stock trading and other types of investment management are prime candidates for operational intelligence initiatives because of the need to monitor huge volumes of data in real time and respond rapidly to events and market trends. Customer analytics is another area that's ripe for OI. For example, online marketers use real-time tools to analyze internet clickstream data, so they can better target marketing campaigns to consumers. And cable TV companies track data from set-top boxes in real time to analyze the viewing activities of customers and how the boxes are functioning. 

The growth of the internet of things has sparked operational intelligence applications for analyzing sensor data being captured from manufacturing machines, pipelines, elevators and other equipment; that enables predictive maintenance efforts designed to detect potential equipment failures before they occur. Other types of machine data also fuel OI applications, including server, network and website logs that are analyzed in real time to look for security threats and IT operations issues.

There are less grandiose operational intelligence use cases, as well. These include call-center applications that provide agents with up-to-date customer records and recommend promotional offers while they're on the phone with customers, as well as logistics applications that help calculate the most efficient driving routes for fleets of delivery vehicles.

OI benefits and challenges

The primary benefit of OI implementations is the ability to address operational issues and opportunities as they arise -- or even before they do, as in the case of predictive maintenance. Operational intelligence also empowers business managers and workers to make more informed -- and hopefully better -- decisions on a day-to-day basis. Ultimately, if managed successfully, the increased visibility and insight into business operations can lead to higher revenue and competitive advantages over rivals.

But there are challenges. Building an operational intelligence architecture typically involves piecing together different technologies, and there are numerous data processing platforms and analytics tools to choose from, some of which may require new skills in organizations. High performance and sufficient scalability are also needed to handle the real-time workloads and large volumes of data common in OI applications without choking the system.

Also, most business processes at a typical company don't require real-time data analysis. With that in mind, a key part of operational intelligence projects involves determining which end users need up-to-the-minute data and then training them to handle the information once it starts being delivered to them in that fashion.

Operational intelligence vs. business intelligence


Conventional BI systems support the analysis of historical data that has been cleansed and consolidated in a data warehouse or data mart before being made available for business analytics uses. BI applications generally aim to tell corporate executives and business managers what happened in the past with revenue, profits and other KPIs to aid in budgeting and strategic planning.

Early on, BI data was primarily distributed to users in static operational reports. That's still the case in some organizations, although many have shifted to dashboards with the ability to drill down into data for further analysis. In addition, self-service BI tools let users run their own queries and create data visualizations on their own, but the focus is still mostly on analyzing data from the past.

Operational intelligence systems let business managers and front-line workers see what's currently happening in operational processes and then immediately act upon the findings, either on their own or through automated means. The purpose is not to facilitate planning, but to drive operational decisions and actions in the moment.

Read more »

Virtual private cloud vs. private cloud differences explained
Posted by Thang Le Toan on 05 June 2018 12:55 AM

Virtual private clouds and private clouds differ in terms of architecture, the provider and tenants, and resource delivery. Decide between the two models based on these distinctions.

Organizations trying to decide between virtual private cloud vs. private cloud must first define what they want to accomplish. A private cloud gives individual business units more control over the IT resources allocated to them, whereas a virtual private cloud offers organizations a different level of isolation.

Read more »

robojournalism
Posted by Thang Le Toan on 24 May 2018 01:57 AM

Robojournalism is the use of software programs to generate articles, reports and other types of content. 

Sophisticated content generation programs rely upon a combination of artificial intelligence (AI), data analytics and machine learning to produce content that can be hard to differentiate from that written by a human.


When an earthquake struck Los Angeles in the early morning hours of March 17, 2014, a content generation algorithm created by programmer/journalist Ken Schwencke posted the story to the L.A. Times within eight minutes of the tremor, complete with a map pinpointing the epicenter.

Schwencke's software is designed to receive structured data from the US Geological Survey (USGS) and to determine, based on an earthquake's magnitude and proximity to California, whether it is news. The content generation program assembles the details in a vocabulary specific to the subject matter, including typical journalistic terms and turns of phrase. 
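A toy version of that screening-and-templating pipeline might look like the following. The field names, geographic bounds and magnitude threshold here are illustrative assumptions, not Schwencke's actual code:

```python
# Illustrative sketch of automated newsworthiness screening; the bounds
# and threshold are hypothetical.
CALIFORNIA_BOUNDS = {"lat": (32.5, 42.0), "lon": (-124.5, -114.1)}

def is_newsworthy(quake: dict, min_magnitude: float = 3.0) -> bool:
    """Decide whether a USGS-style quake record merits an auto-drafted story."""
    lat_lo, lat_hi = CALIFORNIA_BOUNDS["lat"]
    lon_lo, lon_hi = CALIFORNIA_BOUNDS["lon"]
    in_state = lat_lo <= quake["lat"] <= lat_hi and lon_lo <= quake["lon"] <= lon_hi
    return in_state and quake["magnitude"] >= min_magnitude

def draft_story(quake: dict) -> str:
    """Fill a domain-specific template with the structured data."""
    return (f"A magnitude {quake['magnitude']} earthquake struck "
            f"{quake['place']} at {quake['time']}, according to the "
            f"U.S. Geological Survey.")

quake = {"magnitude": 4.4, "lat": 34.1, "lon": -118.5,
         "place": "near Westwood, California", "time": "6:25 a.m."}
if is_newsworthy(quake):
    print(draft_story(quake))
```

The real value is in the data feed and the vocabulary templates; the decision logic itself can be this small.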

Read more »

CDN (content delivery network)
Posted by Thang Le Toan on 23 May 2018 02:25 AM

A CDN (content delivery network), also called a content distribution network, is a group of geographically distributed and interconnected servers that provide cached internet content from a network location closest to a user to accelerate its delivery. The primary goal of a CDN is to improve web performance by reducing the time needed to transmit content and rich media to users' internet-connected devices.

Content delivery network architecture is also designed to reduce network latency, which is often caused by hauling traffic over long distances and across multiple networks. Eliminating latency has become increasingly important, as more dynamic content, video and software as a service are delivered to a growing number of mobile devices.

CDN providers house cached content either in their own network points of presence (POPs) or in third-party data centers. When a user requests content from a website, if that content is cached on a content delivery network, the CDN redirects the request to the server nearest that user and delivers the cached content from its location at the network edge. This process is generally invisible to the user.

A wide variety of organizations and enterprises use CDNs to cache their website content to meet their businesses' performance and security needs. The need for CDN services is growing, as websites offer more streaming video, e-commerce applications and cloud-based applications where high performance is key. Few CDNs have POPs in every country, which means many organizations use multiple CDN providers to make sure they can meet the needs of their business or consumer customers wherever they are located.

In addition to content caching and web delivery, CDN providers are capitalizing on their presence at the network edge by offering services that complement their core functionalities.  These include security services that encompass distributed denial-of-service (DDoS) protection, web application firewalls (WAFs) and bot mitigation; web and application performance and acceleration services; streaming video and broadcast media optimization; and even digital rights management for video. Some CDN providers also make their APIs available to developers who want to customize the CDN platform to meet their business needs, particularly as webpages become more dynamic and complex.

How does a CDN work?

The process of accessing content cached on a CDN network edge location is almost always transparent to the user. CDN management software dynamically calculates which server is located nearest to the requesting user and delivers content based on those calculations. The CDN server at the network edge communicates with the content's origin server to make sure any content that has not been cached previously is also delivered to the user. This not only shortens the distance that content travels, but also reduces the number of hops a data packet must make. The result is less packet loss, optimized bandwidth and faster performance, which minimizes timeouts, latency and jitter and improves the overall user experience. In the event of an internet attack or outage, content hosted on a CDN server will remain available to at least some users.
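At its simplest, the "nearest server" calculation can be a great-circle distance comparison, as in this illustrative sketch. Real CDNs also weigh measured latency, server load and cost, and the POP coordinates below are approximate examples:

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_pop(user, pops):
    """Pick the edge location geographically closest to the requesting user."""
    return min(pops, key=lambda name: haversine_km(user, pops[name]))

pops = {"ashburn": (39.0, -77.5),
        "frankfurt": (50.1, 8.7),
        "singapore": (1.35, 103.8)}
print(nearest_pop((48.9, 2.35), pops))  # a user in Paris -> 'frankfurt'
```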

Organizations buy services from CDN providers to deliver their content to their users from the nearest location. CDN providers either host content themselves or pay network operators and internet service providers (ISPs) to host CDN servers. Beyond placing servers at the network edge, CDN providers use load balancing and solid-state drives to help data reach users faster. They also work to reduce file sizes using compression and special algorithms, and they are deploying machine learning and AI to enable quicker load and transmission times.
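The payoff of compression on repetitive web markup is easy to demonstrate with Python's standard zlib module; the sample page here is synthetic:

```python
import zlib

# A synthetic page with the kind of repetitive markup CDNs often carry.
page = b"<html>" + b"<div class='item'>repetitive markup</div>" * 200 + b"</html>"

compressed = zlib.compress(page, 9)  # 9 = maximum compression level
print(len(page), "->", len(compressed), "bytes")

# Decompression restores the original byte stream exactly.
assert zlib.decompress(compressed) == page
```

Highly repetitive markup compresses dramatically, which is why CDNs compress text assets before sending them over the long-haul links they are trying to optimize.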

History of CDNs

The first CDN was launched in 1998 by Akamai Technologies, in the early days of the commercial web. Akamai's original techniques serve as the foundation of today's content distribution networks. Content creators realized they needed a way to reduce the time it took to deliver information to users, and CDNs were seen as a way to improve network performance and use bandwidth efficiently. That basic premise remains important, as the amount of online content continues to grow.

So-called first-generation CDNs specialized in e-commerce transactions, software downloads, and audio and video streaming. As cloud and mobile computing gained traction, second-generation CDN services evolved to enable the efficient delivery of more complex multimedia and web content to a wider community of users via a more diverse mix of devices. As internet use grew, the number of CDN providers multiplied, as have the services CDN companies offer.

New CDN business models also include a variety of pricing methods that range from charges per usage and volume of content delivered to a flat rate or free for basic services, with add-on fees for additional performance and optimization services. A wide variety of organizations use CDN services to accelerate static and dynamic content, online gaming and mobile content delivery, streaming video and a number of other uses.

What are the main benefits of using a CDN?

The primary benefits of traditional CDN services include the following:

  • Improved webpage load times to prevent users from abandoning a slow-loading site or e-commerce application where purchases remain in the shopping cart;
  • Improved security from a growing number of services that include DDoS mitigation, WAFs and bot mitigation;
  • Increased content availability because CDNs can handle more traffic and avoid network failures better than the origin server that may be located several networks away from the end user; and
  • A diverse mix of performance and web content optimization services that complement cached site content.


Why you need to know about CDN technology

CDN technology is also an ideal method to distribute web content that experiences surges in traffic, because distributed CDN servers can handle sudden bursts of client requests at one time over the internet. For example, spikes in internet traffic due to a popular event, like online streaming video of a presidential inauguration or a live sports event, can be spread out across the CDN, making content delivery faster and less likely to fail due to server overload.

Because it duplicates content across servers, CDN technology inherently serves as extra storage space and remote data backup for disaster recovery plans.


AWS GPU instance type slashes cost of streaming apps

The cost of graphics acceleration can often make the technology prohibitive, but a new AWS GPU instance type for AppStream 2.0 makes that process more affordable.


Amazon AppStream 2.0, which enables enterprises to stream desktop apps from AWS to an HTML5-compatible web browser, delivers graphics-intensive applications for workloads such as creative design, gaming and engineering that rely on DirectX, OpenGL or OpenCL for hardware acceleration. The managed AppStream service eliminates the need for IT teams to recode applications to be browser-compatible.

The newest AWS GPU instance type for AppStream, Graphics Design, cuts the cost of streaming graphics applications by up to 50%, according to the company. AWS customers can launch Graphics Design GPU instances or create a new instance fleet with the Amazon AppStream 2.0 console or AWS software development kit. AWS' Graphics Design GPU instances come in four sizes that range from two to 16 virtual CPUs and from 7.5 to 61 gibibytes (GiB) of system memory, and they run on AMD FirePro S7150x2 Server GPUs with AMD Multiuser GPU technology.

Developers can now also select between two types of Amazon AppStream instance fleets in a streaming environment. Always-On fleets provide instant access to apps but charge fees for every instance in the fleet. On-Demand fleets charge fees for instances only while end users are connected, plus an hourly fee, but there is a delay when an end user accesses the first application.
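The trade-off between the two fleet types comes down to utilization. A back-of-the-envelope comparison, using hypothetical rates rather than actual AWS pricing:

```python
# Back-of-the-envelope fleet cost comparison. The rates are hypothetical
# placeholders, not actual AWS AppStream pricing.
def always_on_cost(instances, rate_per_hour, fleet_hours):
    """Always-On: every instance in the fleet accrues the full rate."""
    return round(instances * rate_per_hour * fleet_hours, 2)

def on_demand_cost(connected_hours, rate_per_hour, instance_hours, stopped_rate):
    """On-Demand: full rate only while users are connected, plus a
    smaller hourly fee across the fleet's instance-hours."""
    return round(connected_hours * rate_per_hour + instance_hours * stopped_rate, 2)

# A 10-instance fleet over a 720-hour month (7,200 instance-hours),
# with only 800 connected user-hours.
print(always_on_cost(10, 0.50, 720))           # 3600.0
print(on_demand_cost(800, 0.50, 7200, 0.025))  # 580.0
```

With low, bursty usage, On-Demand wins despite the per-session startup delay; a fleet that is busy around the clock erodes that advantage.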

New features and support

In addition to the new AWS GPU instance type, the cloud vendor rolled out several other features this month, including:

  • ELB adds network balancer. AWS Network Load Balancer helps maintain low latency during spikes on a single static IP address per Availability Zone. Network Load Balancer — the second offshoot of Elastic Load Balancing features, following Application Load Balancer — routes connections to Virtual Private Cloud-based Elastic Compute Cloud (EC2) instances and containers.
  • New edge locations on each coast. Additional Amazon CloudFront edge locations in Boston and Seattle improve end user speed and performance when they interact with content via CloudFront. AWS now has 95 edge locations across 50 cities in 23 countries.
  • X1 instance family welcomes new member. The AWS x1e.32xlarge instance joins the X1 family of memory-optimized instances, with the most memory of any EC2 instance — 3,904 GiB of DDR4 instance memory — to help businesses reduce latency for large databases, such as SAP HANA. The instance is also AWS’ most expensive at about $16-$32 per hour, depending on the environment and payment model.
  • AWS Config opens up support. The AWS Config service, which enables IT teams to manage service and resource configurations, now supports both DynamoDB tables and Auto Scaling groups. Administrators can integrate those resources to evaluate the health and scalability of their cloud deployments.
  • Start and stop on the Spot. IT teams can now stop Amazon EC2 Spot Instances when an interruption occurs and then start them back up as needed. Previously, Spot Instances were terminated when prices rose above the user-defined level. AWS saves the EBS root device, attached volumes and the data within those volumes; those resources restore when capacity returns, and instances maintain their ID numbers.
  • EC2 expands networking performance. The largest instances of the M4, X1, P2, R4, I3, F1 and G3 families now use Elastic Network Adapter (ENA) to reach a maximum bandwidth of 25 Gb per second. The ENA interface enables both existing and new instances to reach this capacity, which boosts workloads reliant on high-performance networking.
  • New Direct Connect locations. Three new global AWS Direct Connect locations allow businesses to establish dedicated connections to the AWS cloud from an on-premises environment. New locations include: Boston, at Markley, One Summer Data Center for US-East-1; Houston, at CyrusOne West I-III data center for US-East-2; and Canberra, Australia, at NEXTDC C1 Canberra data center for AP-Southeast-2.
  • Role and policy changes. Several changes to AWS Identity and Access Management (IAM) aim to better protect an enterprise’s resources in the cloud. A policy summaries feature lets admins identify errors and evaluate permissions in the IAM console to ensure each action properly matches to the resources and conditions it affects. Other updates include a wizard for admins to create the IAM roles, and the ability to delete service-linked roles through the IAM console, API or CLI — IAM ensures that no resources are attached to a role before deletion.
  • Six new data streams. Amazon Kinesis Analytics, which enables businesses to process and query streaming data in an SQL format, has six new types of stream processes to simplify data processing: STEP(), LAG(), TO_TIMESTAMP(), UNIX_TIMESTAMP(), REGEX_REPLACE() and SUBSTRING(). AWS also increased the service’s capacity to process higher data volume streams.
  • Get DevOps notifications. Additional notifications from AWS CodePipeline for stage or action status changes enable a DevOps team to track, manage and act on changes during continuous integration and continuous delivery. CodePipeline integrates with Amazon CloudWatch to enable Amazon Simple Notification Service messages, which can trigger an AWS Lambda function in response.
  • AWS boosts HIPAA eligibility. Amazon’s HIPAA Compliance Program now includes Amazon Connect, AWS Batch and two Amazon Relational Database Service (RDS) engines, RDS for SQL Server and RDS for MariaDB — all six RDS engines are HIPAA eligible. AWS customers that sign a Business Associate Agreement can use those services to build HIPAA-compliant applications.
  • RDS for Oracle adds features. The Amazon RDS for Oracle engine now supports Oracle Multimedia, Oracle Spatial and Oracle Locator features, with which businesses can store, manage and retrieve multimedia and multi-dimensional data as they migrate databases from Oracle to AWS. The RDS Oracle engine also added support for multiple Oracle Application Express versions, which enables developers to build applications within a web browser.
  • Assess RHEL security. Amazon Inspector expanded support for Red Hat Enterprise Linux (RHEL) 7.4 assessments, to run Vulnerabilities & Exposures, Amazon Security Best Practices and Runtime Behavior Analysis scans in that RHEL environment on EC2 instances.


BPM in cloud evolves to suit line of business, IoT

While on-premises BPM tools have caused a tug of war between lines of business and IT, the cloud helps appease both sides. Here's what to expect from this cloud BPM trend and more.

Business process management tools rise in importance as companies try to make better use -- and reuse -- of IT assets. And, when coupled with cloud, this type of software can benefit from a pay-as-you-go model for more efficient cost management, as well as increased scalability.


As a result, cloud-based BPM has become a key SaaS tool in the enterprise. Looking forward, the growth of BPM in cloud will drive three major trends that enterprise users should track.

Reduced bias

BPM is designed to encourage collaboration between line departments and IT, but the former group often complains that BPM tools hosted in the data center favor the IT point of view in both emphasis and design. To avoid this and promote equality between these two groups, many believe that BPM tools have to move to neutral territory: the cloud.

Today, BPM supports roughly a dozen different roles and is increasingly integrated with enterprise architecture practices and models. This expands the scope of BPM software, as well as the number of non-IT professionals who use it. Collaboration and project management, for example, account for most of the new features in cloud BPM software.

Collaboration features in cloud-based BPM include project tools and integration with social networks. While business people widely use platforms like LinkedIn for social networking, IT professionals use other wiki-based tools. Expect to see a closer merger between the two.

This push for a greater line department focus in BPM could also divide the BPM suites themselves. While nearly all the cloud BPM products are fairly broad in their application, those from vendors with a CIO-level sales emphasis, such as IBM's Business Process Manager on Cloud or Appian, focus more on IT. NetSuite, on the other hand, is an example of cloud BPM software with a broader organizational target.

Software practices influence BPM

Cloud, in general, affects application design and development, which puts pressure on BPM to accommodate changes in software practices. Cloud platforms, for example, have encouraged a more component-driven vision for applications, which maps more effectively to business processes. This will be another factor that expands line department participation in BPM software.

BPM in cloud encourages line organizations to take more control over applications. The adoption of third-party tools, rather than custom development, helps them target specific business problems. This, however, is a double-edged sword: It can improve automated support for business processes but also duplicate capabilities and hinder workflow integration among organizations. IT and line departments will have to define a new level of interaction.

IoT support

The third trend to watch around BPM in cloud involves internet of things (IoT) and machine-to-machine communications. These technologies presume that sensors will activate processes, either directly or through sensor-linked analytics. This poses a challenge for BPM, because it takes human judgment out of the loop and instead requires that business policies anticipate events and responses without human review. That shifts the emphasis of BPM toward automated policies, which, in the past, has led to the absorption of BPM into things like Business Process Modeling Language, and puts the focus back on IT.

What do you expect from cloud BPM in the future?

In theory, business policy automation has always been within the scope of BPM. But, in practice, BPM suites have offered only basic support for policy automation or even for the specific identification of business policies. It's clear that this will change and that policy controls to guide IoT deployments will be built into cloud-based BPM.
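As a rough illustration of what this kind of policy automation looks like, here is a minimal sketch in which sensor events trigger process actions directly, with no human in the loop. The sensor names, thresholds and workflow names are all hypothetical; real BPM suites express policies in far richer rule languages.

```python
# Toy policy table: each entry pairs a predicate over an incoming
# sensor event with the name of a process to launch automatically.
policies = [
    (lambda e: e["sensor"] == "temperature" and e["value"] > 75,
     "start_cooling_workflow"),
    (lambda e: e["sensor"] == "door" and e["value"] == "open",
     "start_security_workflow"),
]

def dispatch(event):
    """Return the processes a sensor event activates, with no human review."""
    return [action for predicate, action in policies if predicate(event)]
```

A reading of 80 degrees would launch only the cooling workflow; an event matching no policy launches nothing, which is exactly the gap human judgment used to fill.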

Read more »

Protection from ransomware requires layered backup, DR
Posted by Thang Le Toan on 17 May 2018 10:57 PM

A strategy for protection and successful recovery from ransomware includes everything from monitoring tools to offline storage. Organizations should use multiple methods.


CHICAGO -- The VeeamON session on protection from ransomware Wednesday started with a question for attendees: How many had experienced a ransomware attack at their organization?


Dozens of hands went up.

Ransomware attacks continue to make news. In just the last couple of months, high-profile victims included the city of Atlanta and a school district in Massachusetts. Many attacks, though, go unreported or unmentioned to the general public.

A layered defense is important for protecting against and recovering from ransomware, Rick Vanover, Veeam's director of product strategy, told the packed room of close to 200 people.

Backup, DR, education all play a role

Using offline storage to create an air gap is arguably the most technically efficient method of protection against ransomware. Tape is a good fit for air gapping, because you can take it off site, where it is not connected to the network or any other devices.

"The one reason I love tape is its resiliency in this situation," Vanover said.

Other offline or semioffline storage choices include replicated virtual machines, primary storage snapshots, Veeam Cloud Connect backups that aren't connected directly to the backup infrastructure and rotating hard drives.

Educating users is another major component of a comprehensive strategy for protection from ransomware.

"No matter how often you do it, you can't do it enough," said Joe Marton, senior systems engineer at Veeam.

Advice for users includes being overly careful about clicking links and attachments and telling IT immediately if there appears to be an issue.

IT should have visibility into suspicious behavior using monitoring capabilities. For example, Veeam ONE includes a predefined alarm that triggers if it detects possible ransomware activity.

Organizations as a whole should continue to follow the standard "3-2-1" backup plan of having three different copies of data on two different media types, one of which is off site or offline.

From a disaster recovery angle, DR isn't just for natural disasters.

"Ransomware can be a disaster," Marton said.

That means an organization's DR process applies to ransomware attacks.

The organization should also document its recovery plan, specifically one for ransomware incidents.

Matt Fonner, a severity-one engineer on the Veeam support team, said he deals with two or three restores from ransomware attacks every week.

Ransomware, protection continue to evolve


Vanover said later that he spent about 25 minutes following the presentation talking with people about attacks and protection from ransomware. One person told him that her SMB had been hit and decided to pay the ransom, rather than deal with an inferior restore program -- that wasn't Veeam.

Vanover said organizations should classify data to figure out which level of resiliency is needed. Not everything needs to be in that most expensive tier.

Vanover said the ransomware landscape has changed from a year ago, when he also gave a presentation on ransomware protection at VeeamON.

"The ransomware story does change every time you write it," he said.

One new twist in the story is ransomware attacking backups themselves. In a common scenario, ransomware will infiltrate a backup and stay dormant until the data is recovered back to the network following an attack on primary storage.

That's where offline storage comes in, Vanover said.

Data protection vendors are also starting to add specific features to protect backups from ransomware. For example, Asigra Cloud Backup has embedded malware engines in the backup and recovery stream, and CloudBerry Backup detects possible cases of ransomware in backups.

Vanover said if he drew up another presentation in a month or two, it would probably be different.

"We have to always evolve to the threatscape," he said.

Read more »

How to optimize Raspberry Pi code using its GPU
Posted by Thang Le Toan on 30 April 2018 03:55 AM

When I was at Apple, I spent five years trying to get source-code access to the Nvidia and ATI graphics drivers. My job was to accelerate image-processing operations using GPUs to do the heavy lifting, and a lot of my time went into debugging crashes or strange performance issues. I could have been a lot more effective if I’d had better insights into the underlying hardware, and been able to step through and instrument the code that controlled the graphics cards. Previously I’d written custom graphics drivers for game consoles, so I knew how useful having that level of control could be.

I never got the access I’d wanted, and it left me with an unscratched itch. I love CUDA/OpenCL and high-level shader interfaces, but the underlying hardware of graphics cards is so specialized, diverse, and quirky that you can’t treat them like black boxes and expect to get the best performance. Even with CUDA, you end up having to understand the characteristics of what’s under the hood if you want to really speed things up. I understand why most GPU manufacturers hate the idea, even just the developer support you’d need to offer for a bare-metal interface would take a lot of resources, but it still felt like a big missed opportunity to write more efficient software.

That all meant I was very excited when Broadcom released detailed documentation of the GPU used on the Raspberry Pi a few months ago. The Pi’s a great device to demonstrate the power of deep learning computer vision, and I’d ported my open-source library to run on it, but the CPU was woefully slow on the heavy math that neural networks require, taking almost twenty seconds even with optimized assembler, so I had a real problem I thought GPU acceleration might be able to help with.

Broadcom’s manual is a good description of the hardware interface to their GPU, but you’ll need more than that if you’re going to write code to run on it. In the end I was able to speed up object recognition from twenty seconds on the CPU to just three on the GPU, but it took a lot of head-scratching and help from others in the community to get there. In the spirit of leaving a trail of breadcrumbs through the forest, I’m going to run through some of what I learned along the way.

Getting started

Broadcom’s Videocore Reference Guide will be your bible and companion; I’m constantly referring to it to understand everything from assembly instructions to interface addresses.

The very first program you should try running is the hello_fft sample included in the latest Raspbian. If you can get this running, then at least you’re set up correctly to run GPU programs.

There’s a missing piece in that example though – the source assembler text isn’t included, only a compiled binary blob. [Thanks to Andrew Holmes and Eben for pointing me to a recent update adding the assembler code!] There isn’t an official program available to compile GPU assembler, so the next place to look is eman’s excellent blog series on writing an SHA-256 implementation. This includes a simple assembler, which I’ve forked and patched a bit to support instructions I needed for my algorithm. Once you’ve got his code running, and have the assembler installed, you should be ready to begin coding.


There’s no debugger for the GPU, at all. You can’t even log messages. In the past I’ve had to debug shaders by writing colors to the screen, but in this case there isn’t even a visible output surface to use. I’ve never regretted investing time up-front into writing debug tools, so I created a convention where a register was reserved for debug output: it would be written out to main memory at the end of the program, could be immediately invoked with a LOG_AND_EXIT() macro, and the contents would be printed out to the console after the code was done. It’s still painful, but this mechanism at least let me get glimpses of what was going on internally.

I also highly recommend using a regular laptop to ssh into your Pi, alongside something like sshfs so you can edit source files easily in your normal editor. You’ll be crashing the device a lot during development, so having a separate development machine makes life a lot easier.

Vertex Program Memory

One of the eternal problems of GPU optimization is getting data back and forth between the main processor and the graphics chip. GPUs are blazingly fast when they’re working with data in their local memory, but coordinating the transfers so they don’t stall either processor is a very hard problem. My biggest optimization wins on the Playstation 2 came from fiddling with the DMA controller to feed the GPU more effectively, and on modern desktop GPUs grouping data into larger batches to upload is one of the most effective ways to speed things up.

The Broadcom GPU doesn’t have very much dedicated memory at all. In fact, the only RAM that’s directly accessible is 4,096 bytes in an area known as Vertex Program Memory. This is designed to be used as a staging area for polygon coordinates so they can be transformed geometrically. My initial assumption was that this would have the fastest path into and out of the GPU, so I built my first implementation to rely on it for data transfer. Unfortunately, it has a few key flaws.

There are actually 12 cores inside the GPU, each one known as a QPU for Quad Processing Unit. The VPM memory is shared between them, so there wasn’t much available for each. I ended up using only 8 cores, and allocating 512 bytes of storage to each, which meant doing a lot of small and therefore inefficient transfers from main memory. The real killer was that a mutex lock was required before kicking off a transfer, so all of the other cores ground to a halt while one was handling an upload, which killed parallelism and overall performance.
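The arithmetic behind that allocation is simple enough to write out, using the numbers from the paragraph above:

```python
# Back-of-envelope math for the VPM split described above.
VPM_BYTES = 4096                            # total Vertex Program Memory
QPUS_USED = 8                               # cores actually used (of 12)
bytes_per_qpu = VPM_BYTES // QPUS_USED      # staging area per core
floats_per_qpu = bytes_per_qpu // 4         # 32-bit values per transfer
```

With only 128 32-bit values of staging per core, every transfer from main memory is small, which is why the VPM approach spent so much time on inefficient transfers.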

Texture Memory Unit

After I released the initial VPM-based version of the matrix-to-matrix multiply GEMM function that’s the most time-consuming part of the object recognition process, several people mentioned that the Texture Memory Unit or TMU was a lot more efficient. The documentation only briefly mentions that you can use the TMU for general memory access, and there wasn’t any detail on how to do it, so I ended up looking at the disassembly of the hello_fft sample to see how it was done. I also received some help over email from Eben Upton himself, which was a lovely surprise! Here’s a summary of what I learned:

 – There are two TMUs available to each core. You can manually choose how to use each if you have an algorithmic way to send the same work to both, by turning off ‘TMU swap’, or if you leave it enabled half the cores will be transparently rewired to use alternating TMUs for 0 and 1.

 – You write a vector of 16 addresses to registers ra56 and ra60 for TMU0 and 1 respectively, and that will start a fetch of the values held in those addresses.

 – Setting a ldtmu0/1 code in an instruction causes the next read in the pipeline to block until the memory values are returned, and then you can read from r4 to access those values in further instructions.

 – There’s a potentially long latency before those values are ready. To mitigate that, you can kick off up to four reads on each TMU before calling a ldtmu0/1. This means that memory reads can be pipelined while computation is happening on the GPU, helping performance a lot thanks to all the overlapping pipelining.

 – To reduce extra logic-checking instructions, I don’t try to prevent overshooting on speculative reads, which means there may be accesses beyond the end of arrays (though the values aren’t used). In practice this hasn’t caused problems.

 – I didn’t dive into this yet, but there’s a 4K direct-mapped L1 cache with 64-byte lines for the TMU. Avoiding aliasing on this will be crucial for maintaining speed, and in my case I bet it depends heavily on the matrix size and allocation of work to different QPUs. There are performance counters available to monitor cache hits and misses, and on past experience dividing up the data carefully so everything stays in-cache could be a big optimization.

 – A lot of my data is stored as 8 or 16-bit fixed point, and the VPM had a lot more support for converting them into float vectors than the TMU does. I discovered some funky problems, like the TMU ignoring the lower two bits of addresses and only loading from 32-bit aligned words, which was tricky when I was dealing with odd matrix widths and lower precision. There isn’t much support for ‘swizzling’ between components in the 16-float vectors that are held in each register either, beyond rotating, so I ended up doing lots of masking tricks.

 – Reading from nonsensical addresses can crash the system. During development I’d sometimes end up with wildly incorrect values for my read addresses, and that would cause a hang so severe I’d have to reboot.

 – This isn’t TMU specific, but I’ve noticed that having a display attached to your Pi taxes the GPU, and can result in slower performance by around 25%.
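To get a feel for why kicking off up to four reads before blocking helps, here is a toy latency model. This is not QPU code, just an illustrative simulation with made-up cycle counts.

```python
def cycles(n, latency, compute, depth):
    """Cycles to fetch and process n values when up to `depth` reads may
    be in flight at once: 1 cycle to issue a read, `latency` cycles until
    its data arrives, `compute` cycles of work per value."""
    ready = []                    # arrival cycle of each value's data
    clock = 0
    issued = done = 0
    while done < n:
        # Issue reads while free in-flight slots remain.
        while issued < n and issued - done < depth:
            clock += 1
            ready.append(clock + latency)
            issued += 1
        # Block until the next value has arrived, then compute on it.
        clock = max(clock, ready[done]) + compute
        done += 1
    return clock
```

With made-up numbers of a 20-cycle memory latency and 6 cycles of compute per value, processing eight values takes 216 cycles when every read blocks, but only 75 when four reads are kept in flight, because most of the latency hides behind computation.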
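The 32-bit-alignment quirk in the list above, where the lower two address bits are ignored, can be emulated in a few lines to show the masking and shifting that a lower-precision load ends up needing. This is a sketch, not actual GPU code.

```python
def load_u16(mem, byte_addr):
    """Fetch a 16-bit value through a unit that only loads 32-bit aligned
    words (lower two address bits ignored). Works for values that don't
    straddle a word boundary; a straddling value needs two fetches."""
    base = byte_addr & ~3                        # word actually loaded
    word = int.from_bytes(mem[base:base + 4], "little")
    shift = (byte_addr & 3) * 8                  # value's offset in bits
    return (word >> shift) & 0xFFFF
```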

In the end I was able to perform object recognition in just three seconds with the optimized TMU code, rather than six using the VPM, which opens up a lot more potential applications!

Going Further

Developing GPU code on the Raspberry Pi has come a long way in just the last few months, but it’s still in its early stages. I’m hitting mysterious system hangs when I try to run my deep learning TMU example with any kind of overclocking, for example, and there’s no obvious way to debug those kinds of problems, especially if they’re hard to reproduce in a simple example.

The community, including folks like eman, Eben, Andrew Holme, and Herman Hermitage, are constantly improving and extending the documentation, examples, and tools, so developing should continue to get easier. I recommend keeping an eye on the Raspberry Pi forums to see the latest news! 

Running the example

If you want to try out the deep learning object recognition code I developed yourself, you can follow these steps:

Install Raspbian.

Install the latest firmware by running `sudo rpi-update`.

From `raspi-config`, choose 256MB for GPU memory.

Clone qpu-asm from Github.

Run `make` inside the qpu-asm folder.

Create a symbolic link to the qpu-asm program, for example by running `sudo ln -s /home/pi/projects/qpu-asm/qpu-asm /usr/bin/`.

Clone DeepBeliefSDK from Github.

From the DeepBeliefSDK/source folder, run `make TARGET=pi GEMM=piqpu`.

Once it’s successfully completed the build, make sure the resulting library is in your path, for example by running `sudo ln -s /home/pi/projects/DeepBeliefSDK/source/ /usr/lib/`.

Run `sudo ./jpcnn -i data/dog.jpg -n ../networks/jetpac.ntwk -t -m s`

You should see output that looks like this: [screenshot omitted]


Read more »

virtual private cloud
Posted by Thang Le Toan on 28 April 2018 12:16 AM

A virtual private cloud (VPC) is the logical division of a service provider's public cloud multi-tenant architecture to support private cloud computing in a public cloud environment.

The terms private cloud and virtual private cloud are sometimes used incorrectly as synonyms. There is a distinct difference -- in a private cloud model, the IT department acts as a service provider and the individual business units act as tenants. In a virtual private cloud model, a public cloud provider acts as the service provider and the cloud's subscribers are the tenants.

The public cloud provider is responsible for ensuring that each private cloud customer's data remains isolated from every other customer's data both in transit and inside the cloud provider's network. This can be accomplished through the use of security policies requiring some -- or all -- of the following elements: encryption, tunneling, private IP addressing or allocating a unique VLAN to each customer.
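As a small illustration of the private-IP-addressing element, the sketch below carves a non-overlapping private subnet out of RFC 1918 space for each tenant. The tenant names are made up, and real providers layer encryption, tunneling and VLANs on top of this kind of address isolation.

```python
import ipaddress

# Allocate one /16 per tenant from RFC 1918 space; the subnets are
# disjoint by construction, so no tenant's addresses overlap another's.
block = ipaddress.ip_network("10.0.0.0/8")
tenants = ["acme", "globex", "initech"]          # hypothetical customers
subnets = dict(zip(tenants, block.subnets(new_prefix=16)))

def isolated(a, b):
    """True if the two tenants' address ranges don't overlap."""
    return not subnets[a].overlaps(subnets[b])
```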

One of the biggest benefits of virtual private clouds is that they allow an enterprise to take advantage of the benefits that hybrid clouds provide without violating compliance regulations. VPCs can become an extension of the enterprise's own data center without dealing with the complexities that building a private cloud on-premises would require.


Virtual private cloud vs. private cloud differences explained

Virtual private clouds and private clouds differ in terms of architecture, the provider and tenants, and resource delivery. Decide between the two models based on these distinctions.

Organizations trying to decide between virtual private cloud vs. private cloud must first define what they want to accomplish. A private cloud gives individual business units more control over the IT resources allocated to them, whereas a virtual private cloud offers organizations a different level of isolation.


Virtual private clouds are typically layers of isolation within public clouds, but they might lack the self-service portal that enables IT to provide individual business units with DIY IT environments. Private clouds are generally on-premises environments with self-service portals that designated employees can use to deploy resources without intervention from IT.

But interest in the private cloud is about much more than just technology; private clouds represent a fundamental shift in the way organizations deliver IT resources.

In the past, corporate IT acted as a gatekeeper for all things tech. If a business unit within an organization needed to deploy a new application or a new service, they went through IT.

This way of doing things was problematic for both the business units and for IT. Whenever a department had to seek IT approval for a tech project, it ran the risk of IT denying the project or modifying its scope beyond recognition. Even if it was approved, the business unit might have to wait weeks or even months for IT to implement it.

The old way of doing things was also problematic for the IT department because it often put IT in the awkward position of having to say no to someone else's ideas. On the other hand, if IT did approve the project, it meant an increased workload for the IT staff that had to deploy, maintain and support the new application.

Moving away from traditional virtual infrastructures

Private cloud environments represent a shift away from the rigid administrative model that organizations have used for so long. Rather than the IT department acting as the sole governing body for all the organization's tech resources, it instead takes on the role of a service provider.

In a private cloud, the IT infrastructure is carved up into a series of private areas, and each area is assigned to a specific business unit. One or more designated employees within the department take on the role of tenant administrators for the available resources. These administrators are free to use the resources as they see fit without first seeking IT approval.

Differences between virtual private clouds and private clouds

This doesn't mean that tenant administrators have total autonomy, nor does it mean that they require specialized IT skills. Every organization sets up its private cloud differently, but IT usually provides tenant administrators with a self-service portal that is designed to simplify tasks, such as deploying and managing VMs. Furthermore, IT usually creates VM templates that tenant administrators can use any time they create a new VM.

In other words, tenant administrators can create VMs on an as-needed basis, but must do so within the limits IT has put in place. These limits ensure that tenant administrators don't deplete the underlying infrastructure of hardware resources. Additionally, the use of templates guarantees that admins create VMs in accordance with the organization's security policies.
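The template-plus-quota arrangement described above can be sketched as a toy model. All names and limits here are invented; this is not any vendor's portal API.

```python
class TenantPortal:
    """Toy self-service portal: tenants deploy VMs only from IT-approved
    templates, and only within an IT-assigned quota (illustrative)."""

    def __init__(self, quota_vms, templates):
        self.quota = quota_vms
        self.templates = templates      # template name -> VM settings
        self.vms = []

    def create_vm(self, template_name):
        if template_name not in self.templates:
            raise ValueError("only IT-approved templates are allowed")
        if len(self.vms) >= self.quota:
            raise RuntimeError("quota exhausted; ask IT for more capacity")
        self.vms.append(dict(self.templates[template_name]))
        return len(self.vms) - 1        # index of the new VM
```

The portal enforces both guardrails: templates keep new VMs inside security policy, and the quota keeps tenant administrators from depleting the underlying hardware.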

Virtual private cloud vs. private cloud

When it comes to virtual private cloud vs. private cloud, the terms are sometimes used interchangeably. In most cases, however, a virtual private cloud is different from a private cloud.

In a private cloud model, the IT department acts as a service provider and the individual business units act as tenants. In a virtual private cloud model, a public cloud provider acts as the service provider and the cloud's subscribers are the tenants. Just as the tenant administrators in a private cloud are free to create resources within the limits that have been set up for them, a public cloud's subscribers are also free to create resources within the public cloud.


When public cloud subscribers create resources, such as VM instances, databases or gateways, those instances are created within a virtual private cloud. Think of the virtual private cloud as an isolation boundary that keeps subscribers from being able to access -- or interfere with -- each other's resources.

Each public cloud provider has its own way of doing things, but some providers allow tenants to define additional virtual private clouds. For example, Amazon allows AWS subscribers to create as many virtual private clouds as they need.

Each virtual private cloud acts as an isolated environment. Organizations sometimes use virtual private clouds to isolate web servers from other cloud-hosted resources, or to create an isolation boundary around the virtual servers that make up a multi-tier application.

The new norm: Organizations don't have to choose

In spite of virtual private cloud vs. private cloud distinctions, the lines between them are blurring more than ever. Rather than choosing between a private cloud and a public cloud, most organizations opt for a hybrid cloud.

Admins can construct hybrid clouds in many different ways, but one option is to create a self-service environment similar to that of a typical private cloud, but to configure it so some resources reside on premises, while others reside in the public cloud.

Startups will almost always benefit from operating entirely in the public cloud because doing so enables them to avoid a large upfront investment in IT infrastructure. For organizations that already have an on-premises IT infrastructure in place, however, a hybrid cloud usually offers the best of both worlds.

Read more »

Kubernetes container orchestration gets big data star turn
Posted by Thang Le Toan on 27 April 2018 10:18 AM

The foundations of big data continue to shift, driven in great part by AI and machine learning applications. The push to work in real time, and to quickly place AI tools in the hands of data scientists and business analysts, has created interest in software containers as a more flexible mechanism for deploying big data systems and applications.

Now, Kubernetes container orchestration is emerging to provide an underpinning for the new container-based workloads. It has stepped into the big data spotlight -- one formerly reserved for data frameworks like Hadoop and Spark.

These frameworks continue to play an important role in big data, but in more of a supporting role, as discussed in this podcast review of the 2018 Strata Data Conference in San Jose, Calif. That's particularly true in the case of Hadoop, the featured topic in only a couple of sessions at the conference, which until last year was called Strata + Hadoop World.

"It's not that people are turning their backs on Hadoop," said Craig Stedman, SearchDataManagement's senior executive editor and a podcast participant. "But it is becoming part of the woodwork."

The attention of IT teams is shifting more toward the actual applications and how they can get more immediate value out of data science, AI and machine learning, he indicated. Maximizing resources is a must, and this is where Kubernetes-based containers are seen as potential helpers for teams looking to swap workloads in and out and maximize the use of computing resources in fast-moving environments.

Kubernetes connections for Spark and for Flink, a rival stream processing engine, are being watched increasingly closely.

At Strata, Stedman said, the deployment of a prototype Kubernetes-Spark combination and several other machine learning frameworks in a Kubernetes-based architecture was seen partly as a way to nimbly shift workloads between CPUs and GPUs, the latter playing a growing role in training the neural networks underlying machine learning and deep learning applications.

The deployment was the work of JD.com Inc., a Beijing-based online retailer and Strata presenter. It is worth emphasizing the early adopter status of such implementations, however. While JD.com is running production applications in the container architecture, Stedman reported that it's still studying performance and reliability issues around the new coupling of Spark and Kubernetes that's included in Apache Spark 2.3 as an experimental technology.

Overall, in fact, there is much learning ahead for Kubernetes container orchestration when it comes to big data. That is because containers tend to be ephemeral or stateless, while big data is traditionally stateful, providing data persistence.

Bridging the two takes on state is the goal of a Kubernetes volume driver that MapR Technologies announced at Strata, which is integrated into the company's big data platform. As such, it addresses one of the obstacles Kubernetes container orchestration faces in big data applications.

Stedman said the march to stateful applications on Kubernetes continued to advance after the conference, as Data Artisans launched its dA Platform for what it described as stateful stream processing with Flink. The development and runtime environment is intended for use with real-time analytics, machine learning and other applications that can be deployed on Kubernetes in order to provide dynamic allocation of computing resources.

Listen to this podcast to learn more about the arrival of containers in the world of Hadoop and Spark and the overall evolution of big data as seen at the Strata event.

Jack Vaughan asks:

What challenges or opportunities do you see for your organization with Kubernetes with Spark and Hadoop?

Read more »

Kubernetes gains momentum in big data implementation process
Posted by Thang Le Toan on 27 April 2018 09:21 AM

Big data vendors and users are looking to Kubernetes-managed containers to help accelerate system and application deployments and enable more flexible use of computing resources.

It's still early going for containerizing the big data implementation process. However, users and vendors alike are increasingly eying software containers and Kubernetes, a technology for orchestrating and managing them, as tools to help ease deployments of big data systems and applications.

Early adopters expect big data containers running in Kubernetes clusters to accelerate development and deployment work by enabling the reuse of system builds and application code. The container approach should also make it easier to move systems and applications to new platforms, reallocate computing resources as workloads change and optimize the use of an organization's available IT infrastructure, advocates say.

The pace is picking up on big data technology vendors adding support for containers and Kubernetes to their product offerings. For example, at the Strata Data Conference in San Jose, Calif., this month, MapR Technologies Inc. said it has integrated a Kubernetes volume driver into its big data platform to provide persistent data storage for containerized applications tied to the orchestration technology.

MapR previously supported the use of specialized Docker containers with built-in connectivity to the MapR Converged Data Platform, but the Kubernetes extension is "much more transparent and native to the environment," said Jack Norris, the Santa Clara, Calif., company's senior vice president of data and applications. He added that the persistent storage capability lets containers be used for stateful applications, a requirement for a typical big data implementation with Hadoop and related technologies.

Also, the version 2.3 update of the open source Apache Spark processing engine released in late February includes a native Kubernetes scheduler. The Spark on Kubernetes technology, which is being developed by contributors from Bloomberg, Google, Intel and several other companies, is still described as experimental in nature, but it enables Spark 2.3 workloads to be run in Kubernetes clusters.

[Photo: the expo hall at the 2018 Strata Data Conference in San Jose, Calif. Credit: Craig Stedman/TechTarget]

Containerizing big data systems and applications was a big topic of discussion at the conference.

Not to be outdone, an upcoming 1.5 release of Apache Flink -- a stream processing rival to Spark -- will provide increased ties to both Kubernetes and the rival Apache Mesos technology, according to Fabian Hueske, a co-founder and software engineer at Flink vendor Data Artisans. Users can run the Berlin-based company's current Flink distribution on Kubernetes, "but it's not always straightforward to do that now," Hueske said at the Strata conference. "It will be much easier with the new release."

Big data containers achieve liftoff

JD.com Inc., an online retailer based in Beijing, is an early user of Spark on Kubernetes. The company has also containerized TensorFlow, Caffe and other machine learning and deep learning frameworks in a single Kubernetes-based architecture, which it calls Moonshot.

The use of containers is designed to streamline and simplify big data implementation efforts in support of machine learning and other AI analytics applications that are being run in the new architecture, said Zhen Fan, a software development engineer at JD.com. "A major consideration was that we should support all of the AI workloads in one cluster so we can maximize our resource usage," Fan said during a conference session.

However, he added that the containers also make it possible to quickly deploy analytics systems on the company's web servers to take advantage of overnight processing downtime.

"In e-commerce, the [web servers] are quite busy until midnight," Fan said. "But from 12 to 6 a.m., they can be used to run some offline jobs."

The retailer began work on the AI architecture in mid-2017; it currently has 300 nodes running production jobs in containers, and it plans to expand the node count to 1,000 in the near future, Fan said. The Spark on Kubernetes technology was installed in the third quarter of last year, initially to support applications run with Spark's stream processing module.
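That kind of overnight batch window maps naturally onto a Kubernetes CronJob. A minimal sketch using kubectl, with a hypothetical image and entry-point script (again composed and printed rather than run, since it needs a live cluster):

```shell
# Hypothetical image and job script -- substitute your own.
IMAGE="example.com/offline-analytics:latest"

# Schedule the batch job for midnight, when the web servers go idle;
# "0 0 * * *" is standard cron syntax for 00:00 every day.
CMD="kubectl create cronjob nightly-analytics \
  --image=${IMAGE} \
  --schedule='0 0 * * *' \
  -- /opt/app/run-offline-job.sh"

echo "$CMD"
```

Kubernetes then launches the job's containers on whatever nodes have spare capacity, which is what lets otherwise-idle web servers absorb offline analytics work.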

However, that part of the deployment is still a proof-of-concept project intended to test "if Spark on Kubernetes is ready for a production environment," said Wei Ting Chen, a senior software engineer at Intel, which is helping build the architecture. Chen noted that some pieces of Spark have yet to be tied to Kubernetes, and he cited several other issues that need to be assessed.

For example, the retailer and Intel are looking at whether using Kubernetes could cause performance bottlenecks when launching large numbers of containers, Chen said. Reliability is another concern as more and more processing workloads are run through Spark on Kubernetes, he added.

Out on the edge with Kubernetes

Spark on Kubernetes is a bleeding-edge technology that's currently best suited to big data implementations in organizations that have sufficient "technical muscle," said Vinod Nair, director of product management at Pepperdata Inc., a vendor of performance management tools for big data systems that is involved in the Spark on Kubernetes development effort.

The Kubernetes scheduler is a preview feature in Spark 2.3 and likely won't be ready for general availability for another six to 12 months, according to Nair. "It's a fairly large undertaking, so I expect it will be some time before it's out in production," he said. "It's at about an alpha test state at this point."

Pepperdata plans to support Kubernetes-based containers for Spark and the Hadoop Distributed File System in some of its products, starting with Application Spotlight, a performance management portal for big data application developers that the Cupertino, Calif., company announced this month. With the recent release of Hadoop 3.0, the YARN resource manager built into Hadoop can also control Docker containers, "but Kubernetes seems to have much bigger ambitions to what it wants to do," Nair said.
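Hadoop 3.0's Docker support, mentioned above, is driven through per-job environment variables rather than a separate orchestrator. A hedged sketch of a MapReduce submission, assuming a cluster whose yarn-site.xml already enables the Docker container runtime and a hypothetical image name:

```shell
# Hypothetical application image -- substitute your own. Assumes the
# cluster admin has enabled the Docker runtime in yarn-site.xml.
DOCKER_IMAGE="example.com/hadoop-app:latest"

# Tell YARN to run map and reduce tasks inside Docker containers
# instead of plain JVM processes on the node managers.
CMD="yarn jar hadoop-mapreduce-examples.jar pi \
  -Dmapreduce.map.env=YARN_CONTAINER_RUNTIME_TYPE=docker,YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=${DOCKER_IMAGE} \
  -Dmapreduce.reduce.env=YARN_CONTAINER_RUNTIME_TYPE=docker,YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=${DOCKER_IMAGE} \
  10 100"

echo "$CMD"
```

This illustrates the contrast Nair draws: YARN containerizes individual tasks within an existing Hadoop cluster, while Kubernetes aims to orchestrate the whole application stack.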

Not everyone is sold on Kubernetes -- or K8s, as it's informally known. BlueData Software Inc. uses a custom orchestrator to manage the Docker containers at the heart of its big-data-as-a-service platform. Tom Phelan, co-founder and chief architect at BlueData, said he still thinks the homegrown tool has a technical edge on Kubernetes, particularly for stateful applications. He added, though, that the Santa Clara, Calif., vendor is working with Kubernetes in the lab with an eye on possible future adoption.


Pinterest Inc. is doing the same thing. Under the covers, the San Francisco company is moving to Docker containers to speed up development and deployment of the various machine learning applications that help drive its image bookmarking and social networking site, said Kinnary Jangla, a senior software engineer at Pinterest.

Jangla, who built a container-based setup for debugging machine learning models as a test case, said in a presentation at Strata that Pinterest is also testing a Kubernetes cluster. "We're trying to see if that is going to be useful to us as we migrate to production," she said. "But we're not there yet."


