CDN (content delivery network)
Posted by Thang Le Toan on 23 May 2018 02:25 AM

A CDN (content delivery network), also called a content distribution network, is a group of geographically distributed and interconnected servers that provide cached internet content from a network location closest to a user to accelerate its delivery. The primary goal of a CDN is to improve web performance by reducing the time needed to transmit content and rich media to users' internet-connected devices.

Content delivery network architecture is also designed to reduce network latency, which is often caused by hauling traffic over long distances and across multiple networks. Minimizing latency has become increasingly important as more dynamic content, video and software as a service are delivered to a growing number of mobile devices.

CDN providers house cached content in either their own network points of presence (POPs) or in third-party data centers. When a user requests content from a website, if that content is cached on a content delivery network, the CDN redirects the request to the server nearest to that user and delivers the cached content from its location at the network edge. This process is generally invisible to the user.

A wide variety of organizations and enterprises use CDNs to cache their website content to meet their businesses' performance and security needs. The need for CDN services is growing, as websites offer more streaming video, e-commerce applications and cloud-based applications where high performance is key. Few CDNs have POPs in every country, which means many organizations use multiple CDN providers to make sure they can meet the needs of their business or consumer customers wherever they are located.

In addition to content caching and web delivery, CDN providers are capitalizing on their presence at the network edge by offering services that complement their core functionalities.  These include security services that encompass distributed denial-of-service (DDoS) protection, web application firewalls (WAFs) and bot mitigation; web and application performance and acceleration services; streaming video and broadcast media optimization; and even digital rights management for video. Some CDN providers also make their APIs available to developers who want to customize the CDN platform to meet their business needs, particularly as webpages become more dynamic and complex.

How does a CDN work?

The process of accessing content cached on a CDN network edge location is almost always transparent to the user. CDN management software dynamically calculates which server is located nearest to the requesting user and delivers content based on those calculations. The CDN server at the network edge communicates with the content's origin server to make sure any content that has not been cached previously is also delivered to the user. This not only shortens the distance that content travels, but also reduces the number of hops a data packet must make. The result is less packet loss, optimized bandwidth and faster performance, which minimizes timeouts, latency and jitter and improves the overall user experience. In the event of an internet attack or outage, content hosted on a CDN server will remain available to at least some users.
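The server-selection step described above can be pictured with a small sketch. This is illustrative only: real CDNs use DNS redirection or anycast routing and weigh server load and content availability alongside latency, and the POP names and latency figures below are invented.

```python
# Illustrative sketch: pick the edge POP with the lowest measured
# round-trip latency to the requesting user. Real CDNs combine
# latency with load, health and content-availability signals.

def pick_nearest_pop(latencies_ms):
    """Return the name of the POP with the lowest latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical measurements from one user to three edge POPs.
measured = {"fra-edge": 18.0, "iad-edge": 95.5, "sin-edge": 210.2}
print(pick_nearest_pop(measured))  # -> fra-edge
```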

Organizations buy services from CDN providers to deliver their content to their users from the nearest location. CDN providers either host content themselves or pay network operators and internet service providers (ISPs) to host CDN servers. Beyond placing servers at the network edge, CDN providers use load balancing and solid-state hard drives to help data reach users faster. They also work to reduce file sizes using compression and special algorithms, and they are deploying machine learning and AI to enable quicker load and transmission times.
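The file-size reduction mentioned above can be seen with ordinary lossless compression. Here is a minimal illustration using Python's standard gzip module; CDN providers layer proprietary algorithms and content-aware optimizations on top of techniques like this.

```python
import gzip

# Compress a repetitive payload, as a CDN might before transmission.
payload = b"<div class='item'>product listing</div>" * 500
compressed = gzip.compress(payload)

ratio = len(compressed) / len(payload)
print(f"{len(payload)} bytes -> {len(compressed)} bytes "
      f"({ratio:.1%} of original)")
```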

History of CDNs

The first CDN was launched in 1998 by Akamai Technologies, in the early years of the commercial web. Akamai's original techniques serve as the foundation of today's content distribution networks. Content creators needed a way to reduce the time it took to deliver information to users, and CDNs were seen as a way to improve network performance and use bandwidth efficiently. That basic premise remains important, as the amount of online content continues to grow.

So-called first-generation CDNs specialized in e-commerce transactions, software downloads, and audio and video streaming. As cloud and mobile computing gained traction, second-generation CDN services evolved to enable the efficient delivery of more complex multimedia and web content to a wider community of users via a more diverse mix of devices. As internet use grew, the number of CDN providers multiplied, as did the services CDN companies offer.

New CDN business models also include a variety of pricing methods that range from charges per usage and volume of content delivered to a flat rate or free for basic services, with add-on fees for additional performance and optimization services. A wide variety of organizations use CDN services to accelerate static and dynamic content, online gaming and mobile content delivery, streaming video and a number of other uses.

What are the main benefits of using a CDN?

The primary benefits of traditional CDN services include the following:

  • Improved webpage load times to prevent users from abandoning a slow-loading site or e-commerce application where purchases remain in the shopping cart;
  • Improved security from a growing number of services that include DDoS mitigation, WAFs and bot mitigation;
  • Increased content availability because CDNs can handle more traffic and avoid network failures better than the origin server that may be located several networks away from the end user; and
  • A diverse mix of performance and web content optimization services that complement cached site content.


Why you need to know about CDN technology

As noted above, demand for CDN services keeps growing as websites offer more streaming video, e-commerce applications and cloud-based applications, where high performance is essential.

CDN technology is also an ideal method to distribute web content that experiences surges in traffic, because distributed CDN servers can handle sudden bursts of client requests at one time over the internet. For example, spikes in internet traffic due to a popular event, like online streaming video of a presidential inauguration or a live sports event, can be spread out across the CDN, making content delivery faster and less likely to fail due to server overload.

Because it duplicates content across servers, CDN technology inherently serves as extra storage space and remote data backup for disaster recovery plans.

 

AWS GPU instance type slashes cost of streaming apps

The cost of graphics acceleration can often make the technology prohibitive, but a new AWS GPU instance type for AppStream 2.0 makes that process more affordable.


Amazon AppStream 2.0, which enables enterprises to stream desktop apps from AWS to an HTML5-compatible web browser, delivers graphics-intensive applications for workloads such as creative design, gaming and engineering that rely on DirectX, OpenGL or OpenCL for hardware acceleration. The managed AppStream service eliminates the need for IT teams to recode applications to be browser-compatible.

The newest AWS GPU instance type for AppStream, Graphics Design, cuts the cost of streaming graphics applications up to 50%, according to the company. AWS customers can launch Graphics Design GPU instances or create a new instance fleet with the Amazon AppStream 2.0 console or AWS software development kit. AWS’ Graphics Design GPU instances come in four sizes that range from 2-16 virtual CPUs and 7.5-61 gibibytes (GiB) of system memory, and run on AMD FirePro S7150x2 Server GPUs with AMD Multiuser GPU technology.
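Launching such a fleet through the SDK might look like the sketch below. This is hedged: the fleet name, image name and instance size are placeholder values, and the actual create_fleet call is left commented out because it requires AWS credentials; consult the AppStream 2.0 API reference for the parameters your account requires.

```python
# Sketch of creating an AppStream 2.0 fleet on a Graphics Design
# instance via boto3. All concrete names below are placeholders.
fleet_params = {
    "Name": "design-fleet",                    # hypothetical name
    "ImageName": "my-graphics-image",          # hypothetical image
    "InstanceType": "stream.graphics-design.xlarge",
    "FleetType": "ON_DEMAND",                  # or "ALWAYS_ON"
    "ComputeCapacity": {"DesiredInstances": 2},
}

# Requires AWS credentials; uncomment to actually create the fleet.
# import boto3
# appstream = boto3.client("appstream")
# appstream.create_fleet(**fleet_params)
print(fleet_params["InstanceType"])
```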

Developers can now also select between two types of Amazon AppStream instance fleets in a streaming environment. Always-On fleets provide instant access to apps but charge fees for every instance in the fleet. On-Demand fleets charge fees only while end users are connected, plus an hourly fee, but there is a delay when an end user accesses the first application.

New features and support

In addition to the new AWS GPU instance type, the cloud vendor rolled out several other features this month, including:

  • ELB adds network balancer. AWS Network Load Balancer helps maintain low latency during traffic spikes, using a single static IP address per Availability Zone. Network Load Balancer — the second offshoot of Elastic Load Balancing features, following Application Load Balancer — routes connections to Virtual Private Cloud-based Elastic Compute Cloud (EC2) instances and containers.
  • New edge locations on each coast. Additional Amazon CloudFront edge locations in Boston and Seattle improve end user speed and performance when they interact with content via CloudFront. AWS now has 95 edge locations across 50 cities in 23 countries.
  • X1 instance family welcomes new member. The AWS x1e.32xlarge instance joins the X1 family of memory-optimized instances, with the most memory of any EC2 instance — 3,904 GiB of DDR4 instance memory — to help businesses reduce latency for large databases, such as SAP HANA. The instance is also AWS’ most expensive at about $16-$32 per hour, depending on the environment and payment model.
  • AWS Config opens up support. The AWS Config service, which enables IT teams to manage service and resource configurations, now supports both DynamoDB tables and Auto Scaling groups. Administrators can integrate those resources to evaluate the health and scalability of their cloud deployments.
  • Start and stop on the Spot. IT teams can now stop Amazon EC2 Spot Instances when an interruption occurs and then start them back up as needed. Previously, Spot Instances were terminated when prices rose above the user-defined level. AWS saves the EBS root device, attached volumes and the data within those volumes; those resources restore when capacity returns, and instances maintain their ID numbers.
  • EC2 expands networking performance. The largest instances of the M4, X1, P2, R4, I3, F1 and G3 families now use Elastic Network Adapter (ENA) to reach a maximum bandwidth of 25 Gb per second. The ENA interface enables both existing and new instances to reach this capacity, which boosts workloads reliant on high-performance networking.
  • New Direct Connect locations. Three new global AWS Direct Connect locations allow businesses to establish dedicated connections to the AWS cloud from an on-premises environment. New locations include: Boston, at Markley, One Summer Data Center for US-East-1; Houston, at CyrusOne West I-III data center for US-East-2; and Canberra, Australia, at NEXTDC C1 Canberra data center for AP-Southeast-2.
  • Role and policy changes. Several changes to AWS Identity and Access Management (IAM) aim to better protect an enterprise’s resources in the cloud. A policy summaries feature lets admins identify errors and evaluate permissions in the IAM console to ensure each action properly matches to the resources and conditions it affects. Other updates include a wizard for admins to create the IAM roles, and the ability to delete service-linked roles through the IAM console, API or CLI — IAM ensures that no resources are attached to a role before deletion.
  • Six new data streams. Amazon Kinesis Analytics, which enables businesses to process and query streaming data in an SQL format, has six new types of stream processes to simplify data processing: STEP(), LAG(), TO_TIMESTAMP(), UNIX_TIMESTAMP(), REGEX_REPLACE() and SUBSTRING(). AWS also increased the service’s capacity to process higher data volume streams.
  • Get DevOps notifications. Additional notifications from AWS CodePipeline for stage or action status changes enable a DevOps team to track, manage and act on changes during continuous integration and continuous delivery. CodePipeline integrates with Amazon CloudWatch to enable Amazon Simple Notification Service messages, which can trigger an AWS Lambda function in response.
  • AWS boosts HIPAA eligibility. Amazon’s HIPAA Compliance Program now includes Amazon Connect, AWS Batch and two Amazon Relational Database Service (RDS) engines, RDS for SQL Server and RDS for MariaDB — all six RDS engines are HIPAA eligible. AWS customers that sign a Business Associate Agreement can use those services to build HIPAA-compliant applications.
  • RDS for Oracle adds features. The Amazon RDS for Oracle engine now supports Oracle Multimedia, Oracle Spatial and Oracle Locator features, with which businesses can store, manage and retrieve multimedia and multi-dimensional data as they migrate databases from Oracle to AWS. The RDS Oracle engine also added support for multiple Oracle Application Express versions, which enables developers to build applications within a web browser.
  • Assess RHEL security. Amazon Inspector expanded support for Red Hat Enterprise Linux (RHEL) 7.4 assessments, to run Vulnerabilities & Exposures, Amazon Security Best Practices and Runtime Behavior Analysis scans in that RHEL environment on EC2 instances.
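To make the Kinesis Analytics additions above concrete, the sketch below emulates in plain Python what two of the new SQL functions compute: STEP() buckets a value into fixed-size intervals, and LAG() looks back a fixed number of rows in a stream. This is an approximation of the SQL semantics for illustration, not the service itself.

```python
def step(value, interval):
    """Like SQL STEP(): round value down to the nearest interval."""
    return value - (value % interval)

def lag(rows, offset=1):
    """Like SQL LAG(): pair each row with the row `offset` earlier."""
    return [(row, rows[i - offset] if i >= offset else None)
            for i, row in enumerate(rows)]

print(step(125, 60))    # timestamp 125 falls in the [120, 180) bucket
print(lag([10, 20, 30]))
```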

 

BPM in cloud evolves to suit line of business, IoT

While on-premises BPM tools have caused a tug of war between lines of business and IT, the cloud helps appease both sides. Here's what to expect from this cloud BPM trend and more.

Business process management tools rise in importance as companies try to make better use -- and reuse -- of IT assets. And, when coupled with cloud, this type of software can benefit from a pay-as-you-go model for more efficient cost management, as well as increased scalability.

 

As a result, cloud-based BPM has become a key SaaS tool in the enterprise. Looking forward, the growth of BPM in cloud will drive three major trends that enterprise users should track.

Reduced bias

BPM is designed to encourage collaboration between line departments and IT, but the former group often complains that BPM tools hosted in the data center favor the IT point of view in both emphasis and design. To avoid this and promote equality between these two groups, many believe that BPM tools have to move to neutral territory: the cloud.

Today, BPM supports roughly a dozen different roles and is increasingly integrated with enterprise architecture practices and models. This expands the scope of BPM software, as well as the number of non-IT professionals who use it. Collaboration and project management, for example, account for most of the new features in cloud BPM software.

Collaboration features in cloud-based BPM include project tools and integration with social networks. While businesspeople widely use platforms like LinkedIn for social networking, IT professionals tend to use wiki-based tools. Expect to see a closer merger between the two.

This push for a greater line department focus in BPM could also divide the BPM suites themselves. While nearly all the cloud BPM products are fairly broad in their application, those from vendors with a CIO-level sales emphasis, such as IBM's Business Process Manager on Cloud or Appian, focus more on IT. NetSuite, on the other hand, is an example of cloud BPM software with a broader organizational target.

Software practices influence BPM

Cloud, in general, affects application design and development, which puts pressure on BPM to accommodate changes in software practices. Cloud platforms, for example, have encouraged a more component-driven vision for applications, which maps more effectively to business processes. This will be another factor that expands line department participation in BPM software.

BPM in cloud encourages line organizations to take more control over applications. The adoption of third-party tools, rather than custom development, helps them target specific business problems. This, however, is a double-edged sword: It can improve automated support for business processes but also duplicate capabilities and hinder workflow integration among organizations. IT and line departments will have to define a new level of interaction.

IoT support

The third trend to watch around BPM in cloud involves internet of things (IoT) and machine-to-machine communications. These technologies presume that sensors will activate processes, either directly or through sensor-linked analytics. This poses a challenge for BPM, because it takes human judgment out of the loop: business policies must anticipate the events and responses that a person would previously have reviewed. That shifts the emphasis of BPM toward automated policies, which, in the past, has led to the absorption of BPM into things like Business Process Modeling Language, and puts the focus back on IT.


In theory, business policy automation has always been within the scope of BPM. But, in practice, BPM suites have offered only basic support for policy automation or even for the specific identification of business policies. It's clear that this will change and that policy controls to guide IoT deployments will be built into cloud-based BPM.





Kubernetes container orchestration gets big data star turn
Posted by Thang Le Toan on 27 April 2018 10:18 AM

The foundations of big data continue to shift, driven in great part by AI and machine learning applications. The push to work in real time, and to quickly place AI tools in the hands of data scientists and business analysts, has created interest in software containers as a more flexible mechanism for deploying big data systems and applications.

Now, Kubernetes container orchestration is emerging to provide an underpinning for the new container-based workloads. It has stepped into the big data spotlight -- one formerly reserved for data frameworks like Hadoop and Spark.

These frameworks continue to play an important role in big data, but in more of a supporting role, as discussed in this podcast review of the 2018 Strata Data Conference in San Jose, Calif. That's particularly true in the case of Hadoop, the featured topic in only a couple of sessions at the conference, which until last year was called Strata + Hadoop World.

"It's not that people are turning their backs on Hadoop," said Craig Stedman, SearchDataManagement's senior executive editor and a podcast participant. "But it is becoming part of the woodwork."

The attention of IT teams is shifting more toward the actual applications and how they can get more immediate value out of data science, AI and machine learning, he indicated. Maximizing resources is a must, and this is where Kubernetes-based containers are seen as potential helpers for teams looking to swap workloads in and out and maximize the use of computing resources in fast-moving environments.

Kubernetes connections for Spark and for Flink, a rival stream processing engine, are being watched increasingly closely.

At Strata, Stedman said, the deployment of a prototype Kubernetes-Spark combination and several other machine learning frameworks in a Kubernetes-based architecture was seen partly as a way to nimbly shift workloads between CPUs and GPUs, the latter processor type playing a growing role in training the neural networks that underlie machine learning and deep learning applications.

The deployment was the work of JD.com Inc., a Beijing-based online retailer and Strata presenter. It is worth emphasizing the early adopter status of such implementations, however. While JD.com is running production applications in the container architecture, Stedman reported that it's still studying performance and reliability issues around the new coupling of Spark and Kubernetes that's included in Apache Spark 2.3 as an experimental technology.

Overall, in fact, there is much learning ahead for Kubernetes container orchestration when it comes to big data. That is because containers tend to be ephemeral or stateless, while big data is traditionally stateful, providing data persistence.

Bridging the two takes on state is the goal of a Kubernetes volume driver that MapR Technologies announced at Strata, which is integrated into the company's big data platform. As such, it addresses one of the obstacles Kubernetes container orchestration faces in big data applications.

Stedman said the march to stateful applications on Kubernetes continued to advance after the conference, as Data Artisans launched its dA Platform for what it described as stateful stream processing with Flink. The development and runtime environment is intended for use with real-time analytics, machine learning and other applications that can be deployed on Kubernetes in order to provide dynamic allocation of computing resources.

Listen to this podcast to learn more about the arrival of containers in the world of Hadoop and Spark and the overall evolution of big data as seen at the Strata event.






Kubernetes gains momentum in big data implementation process
Posted by Thang Le Toan on 27 April 2018 09:21 AM

Big data vendors and users are looking to Kubernetes-managed containers to help accelerate system and application deployments and enable more flexible use of computing resources.

It's still early going for containerizing the big data implementation process. However, users and vendors alike are increasingly eying software containers and Kubernetes, a technology for orchestrating and managing them, as tools to help ease deployments of big data systems and applications.

Early adopters expect big data containers running in Kubernetes clusters to accelerate development and deployment work by enabling the reuse of system builds and application code. The container approach should also make it easier to move systems and applications to new platforms, reallocate computing resources as workloads change and optimize the use of an organization's available IT infrastructure, advocates say.

The pace is picking up on big data technology vendors adding support for containers and Kubernetes to their product offerings. For example, at the Strata Data Conference in San Jose, Calif., this month, MapR Technologies Inc. said it has integrated a Kubernetes volume driver into its big data platform to provide persistent data storage for containerized applications tied to the orchestration technology.

MapR previously supported the use of specialized Docker containers with built-in connectivity to the MapR Converged Data Platform, but the Kubernetes extension is "much more transparent and native to the environment," said Jack Norris, the Santa Clara, Calif., company's senior vice president of data and applications. He added that the persistent storage capability lets containers be used for stateful applications, a requirement for a typical big data implementation with Hadoop and related technologies.

Also, the version 2.3 update of the open source Apache Spark processing engine released in late February includes a native Kubernetes scheduler. The Spark on Kubernetes technology, which is being developed by contributors from Bloomberg, Google, Intel and several other companies, is still described as experimental in nature, but it enables Spark 2.3 workloads to be run in Kubernetes clusters.
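Submitting a Spark 2.3 job to a Kubernetes cluster uses the familiar spark-submit entry point with a k8s:// master URL. The sketch below only assembles the command line; the API server address, container image and jar path are placeholders, and the full option list is in the Spark on Kubernetes documentation.

```python
# Assemble a spark-submit command for the Spark 2.3 native
# Kubernetes scheduler. All concrete values are placeholders.
spark_submit_cmd = [
    "spark-submit",
    "--master", "k8s://https://k8s-apiserver.example.com:6443",
    "--deploy-mode", "cluster",
    "--name", "spark-pi",
    "--class", "org.apache.spark.examples.SparkPi",
    "--conf", "spark.executor.instances=2",
    "--conf", "spark.kubernetes.container.image=example/spark:2.3.0",
    "local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar",
]

# To run it for real: subprocess.run(spark_submit_cmd, check=True)
print(" ".join(spark_submit_cmd))
```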

Containerizing big data systems and applications was a big topic of discussion at the 2018 Strata Data Conference in San Jose, Calif. (Photo: expo hall; credit Craig Stedman/TechTarget)

Not to be outdone, an upcoming 1.5 release of Apache Flink -- a stream processing rival to Spark -- will provide increased ties to both Kubernetes and the rival Apache Mesos technology, according to Fabian Hueske, a co-founder and software engineer at Flink vendor Data Artisans. Users can run the Berlin-based company's current Flink distribution on Kubernetes, "but it's not always straightforward to do that now," Hueske said at the Strata conference. "It will be much easier with the new release."

Big data containers achieve liftoff

JD.com Inc., an online retailer based in Beijing, is an early user of Spark on Kubernetes. The company has also containerized TensorFlow, Caffe and other machine learning and deep learning frameworks in a single Kubernetes-based architecture, which it calls Moonshot.

The use of containers is designed to streamline and simplify big data implementation efforts in support of machine learning and other AI analytics applications that are being run in the new architecture, said Zhen Fan, a software development engineer at JD.com. "A major consideration was that we should support all of the AI workloads in one cluster so we can maximize our resource usage," Fan said during a conference session.
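Running mixed AI workloads on one shared cluster comes down to Kubernetes resource requests: each containerized framework declares what it needs, and the scheduler packs jobs accordingly. A minimal pod-spec sketch follows, written as a plain Python dict; the image name is a placeholder, and the nvidia.com/gpu resource assumes the NVIDIA device plugin is installed on the cluster.

```python
# Sketch: a Kubernetes pod spec that requests one GPU, so training
# jobs and CPU-only jobs can share a single cluster. The image
# name is a placeholder.
gpu_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "tf-train"},
    "spec": {
        "containers": [{
            "name": "trainer",
            "image": "example/tensorflow-gpu:latest",
            "resources": {"limits": {"nvidia.com/gpu": 1}},
        }],
        "restartPolicy": "Never",
    },
}

# Apply with kubectl from a YAML rendering of this dict, or via the
# official Kubernetes Python client's create_namespaced_pod().
print(gpu_pod["spec"]["containers"][0]["resources"])
```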

However, he added that the containers also make it possible to quickly deploy analytics systems on the company's web servers to take advantage of overnight processing downtime.

"In e-commerce, the [web servers] are quite busy until midnight," Fan said. "But from 12 to 6 a.m., they can be used to run some offline jobs."

JD.com began work on the AI architecture in mid-2017; the retailer currently has 300 nodes running production jobs in containers, and it plans to expand the node count to 1,000 in the near future, Fan said. The Spark on Kubernetes technology was installed in the third quarter of last year, initially to support applications run with Spark's stream processing module.

However, that part of the deployment is still a proof-of-concept project intended to test "if Spark on Kubernetes is ready for a production environment," said Wei Ting Chen, a senior software engineer at Intel, which is helping JD.com build the architecture. Chen noted that some pieces of Spark have yet to be tied to Kubernetes, and he cited several other issues that need to be assessed.

For example, JD.com and Intel are looking at whether using Kubernetes could cause performance bottlenecks when launching large numbers of containers, Chen said. Reliability is another concern, as more and more processing workloads are run through Spark on Kubernetes, he added.

Out on the edge with Kubernetes

Spark on Kubernetes is a bleeding-edge technology that's currently best suited to big data implementations in organizations that have sufficient "technical muscle," said Vinod Nair, director of product management at Pepperdata Inc., a vendor of performance management tools for big data systems that is involved in the Spark on Kubernetes development effort.

The Kubernetes scheduler is a preview feature in Spark 2.3 and likely won't be ready for general availability for another six to 12 months, according to Nair. "It's a fairly large undertaking, so I expect it will be some time before it's out in production," he said. "It's at about an alpha test state at this point."

Pepperdata plans to support Kubernetes-based containers for Spark and the Hadoop Distributed File System in some of its products, starting with Application Spotlight, a performance management portal for big data application developers that the Cupertino, Calif., company announced this month. With the recent release of Hadoop 3.0, the YARN resource manager built into Hadoop can also control Docker containers, "but Kubernetes seems to have much bigger ambitions to what it wants to do," Nair said.

Not everyone is sold on Kubernetes -- or K8s, as it's informally known. BlueData Software Inc. uses a custom orchestrator to manage the Docker containers at the heart of its big-data-as-a-service platform. Tom Phelan, co-founder and chief architect at BlueData, said he still thinks the homegrown tool has a technical edge on Kubernetes, particularly for stateful applications. He added, though, that the Santa Clara, Calif., vendor is working with Kubernetes in the lab with an eye on possible future adoption.


Pinterest Inc. is doing the same thing. The San Francisco company is moving to use Docker containers to speed up development and deployment of the various machine learning applications that, behind the scenes, help drive its image bookmarking and social networking site, said Kinnary Jangla, a senior software engineer at Pinterest.

Jangla, who built a container-based setup for debugging machine learning models as a test case, said in a presentation at Strata that Pinterest is also testing a Kubernetes cluster. "We're trying to see if that is going to be useful to us as we migrate to production," she said. "But we're not there yet."






IT teams take big data security issues into their own hands
Posted by Thang Le Toan on 27 April 2018 08:07 AM

Data security needs to be addressed upfront in deployments of big data systems -- and users are likely to find they have to build some security capabilities themselves.

When TMW Systems Inc. began building a big data environment to run advanced analytics applications three years ago, the first step wasn't designing and implementing the Hadoop-based architecture -- rather, it involved putting together a framework to secure the data going into the new platform.

"I started with the security model," said Timothy Leonard, TMW's executive vice president of operations and CTO. "I wanted my customers to know that when it comes to the security of their data, it's like Fort Knox -- the data is protected. Then I built the rest of the environment on top of it."

Big data security issues shouldn't be an afterthought in deployments of Hadoop, Spark and related technologies, according to technology analysts and experienced IT managers. That's partly because of the importance of safeguarding data against theft or misuse -- and partly because of the work it typically takes to create effective defenses in data lakes and other big data systems.

Timothy Leonard, executive vice president of operations and technology, TMW

TMW, which develops transportation management software for trucking companies and collects operational data from them for analysis, has implemented three tiers of data protections. That starts with system-level security on the Mayfield Heights, Ohio, company's big data architecture, which is based on Hortonworks Inc.'s distribution of Hadoop. In addition, data security and governance functions specify who's authorized to access information and under what circumstances.

And, finally, a metadata layer built by Leonard's team provides end-to-end data lineage records on how individual data elements are being used and by whom. That enables TMW to track the use of sensitive data and run audits in search of suspicious activities, he said -- "to see if [a data element] moves 400 times today," for example.
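The audit scenario Leonard describes, counting how many times a data element moves in a day and flagging outliers, can be sketched in a few lines. This is a minimal, hypothetical illustration; the event schema and element names are invented, not TMW's actual lineage repository:

```python
from collections import Counter
from datetime import date

# Hypothetical lineage events: (element, user, action, day).
events = [
    ("driver_ssn", "jdoe", "copy", date(2018, 4, 27)),
    ("driver_ssn", "jdoe", "copy", date(2018, 4, 27)),
    ("route_miles", "asmith", "read", date(2018, 4, 27)),
]

def daily_movements(events, day):
    """Count how many times each data element moved on a given day."""
    return Counter(elem for elem, _user, _action, d in events if d == day)

def flag_suspicious(events, day, threshold=400):
    """Return elements whose movement count meets the audit threshold."""
    return [elem for elem, n in daily_movements(events, day).items() if n >= threshold]
```

An auditor could then drill into the full lineage records of any flagged element to see who moved it and where.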

Self-improvement security projects

Leonard said TMW uses Apache Ranger and Knox, two open source tools spearheaded by Hortonworks, to support role-based security in some data science applications and encrypt data while it's stored in the big data environment and when it's moving between different points.

But the metadata repository was a DIY technology, and TMW also created a custom data dictionary that maps data elements to different levels of security based on their sensitivity. "We discovered some areas where we had to improve on what was there," Leonard said, adding that, overall, "big data at the security level hasn't fully matured yet."
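A data dictionary of the kind TMW built can be as simple as a mapping from each element to a sensitivity tier, checked at access time. A minimal sketch, with made-up element names and tiers:

```python
# Hypothetical data dictionary: element name -> sensitivity tier.
data_dictionary = {
    "driver_ssn": "restricted",
    "route_miles": "internal",
    "press_release": "public",
}

# Ordered tiers: a higher clearance number may read more sensitive data.
CLEARANCE = {"public": 0, "internal": 1, "restricted": 2}

def can_access(user_clearance: str, element: str) -> bool:
    """True if the user's clearance meets the element's sensitivity tier."""
    return CLEARANCE[user_clearance] >= CLEARANCE[data_dictionary[element]]
```

In practice such a mapping would be enforced by the platform's security layer (e.g., policy tools like Apache Ranger) rather than in application code.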

Merv Adrian, analyst, Gartner

The lack of technology maturity is one of the biggest big data security issues facing users, Gartner analyst Merv Adrian said. That applies to the data security and governance tools currently available for use in big data environments and to big data technologies themselves, he noted.

Hadoop, NoSQL databases and other big data platforms don't provide the same level of built-in security features that mainstream relational databases do, Adrian said. Also, data lakes generally incorporate a variety of technologies that aren't configured consistently for security tasks such as activity logging and auditing. "There's a lot of complexity down at the surface to what people are trying to do," he explained.

Piece parts for big data security

Meanwhile, the commercial and open source security tools now on the market address some pieces of, but not the entire, big data puzzle, according to Adrian. "Very few, if any, vendors can cover the gamut," he said. "Ultimately, user organizations are going to have to get to a holistic view [of big data security] -- and today, they're going to have to build that themselves."

In a report published in March 2017, Forrester Research analysts Brian Hopkins and Mike Gualtieri pointed to a common framework for managing metadata, security and data governance as the top item needed to make technologies in the big data ecosystem work better together. But Hortonworks and rivals Cloudera and MapR Technologies are taking different paths. The tools they offer "do not work together, and none of them unifies everything [users] need," Hopkins and Gualtieri wrote. That also applies to Amazon Web Services, the other major big data platform vendor (see "Security menu").

Data security features in big data platforms
 

Other big data security issues that Adrian cited include the scale of the data volumes typically involved; the use of data from new sources, including external ones; a lack of upfront data classification as raw data is pulled into data lakes; and the movement of data between cloud and on-premises systems in hybrid environments. The analytics outputs generated by data scientists can also expose sensitive data in unforeseen ways, he said.

Gene Stevens, co-founder and CTO, ProtectWise

Network security startup ProtectWise Inc. designed its internal big data security strategy to address such issues across the spectrum of data acquisition, transport, processing, storage and usage, according to co-founder and CTO Gene Stevens. And, like TMW, ProtectWise had to do a lot of custom development to meet its needs for securing the network operations data it collects from customers to monitor and analyze.

To transmit data from corporate networks to its data lake in the AWS cloud, for example, the Denver-based company built software sensors that generate customer-specific encryption keys to prevent a compromise in one network from exposing the data of other customers to attackers. The keys are used just once and then disposed of; doing so "relegates any compromises to one moment in time, which makes them essentially useless," Stevens said.
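The single-use key lifecycle Stevens describes can be sketched as follows. This is an illustrative outline only: the cipher is a toy XOR stand-in for a real algorithm such as AES, and the wrap/unwrap callables stand in for encryption under a per-customer master key:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher; do not use for actual data protection."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def seal_record(record: bytes, wrap_key) -> tuple:
    """Encrypt one record under a fresh single-use key; return ciphertext and wrapped key."""
    key = secrets.token_bytes(32)       # generated for this record only
    ciphertext = xor_bytes(record, key)
    wrapped = wrap_key(key)             # e.g., encrypt under this customer's master key
    return ciphertext, wrapped          # the raw key is discarded as it leaves scope

def open_record(ciphertext: bytes, wrapped: bytes, unwrap_key) -> bytes:
    """Recover the one-time key only at the moment of use."""
    return xor_bytes(ciphertext, unwrap_key(wrapped))

# Identity wrap/unwrap for demonstration only.
ciphertext, wrapped = seal_record(b"netflow-record", wrap_key=lambda k: k)
```

Because each key protects exactly one record for one customer, compromising a key exposes only that one moment in time, which is the property Stevens describes.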

Security weaknesses not desired

ProtectWise, which collects more than 40 billion data records amounting to 600 TB daily, also set up its own key management system to oversee security processes on most of the data transfers into the Amazon Simple Storage Service (S3) instead of only relying on the one provided by AWS. "We have good faith in Amazon in general," Stevens said. "But any weaknesses they have in their key management system, we don't want to inherit that."

Furthermore, ProtectWise developed routines to encrypt data in the Apache Spark processing engine and the DataStax Enterprise edition of the Cassandra NoSQL database, which it uses in conjunction with the Amazon EMR platform to run analytics applications on both real-time and historical data. Stevens said Spark currently doesn't offer the kind of encryption support ProtectWise needs; Cassandra does "but at a tremendous performance hit" that the company can't afford to take.

All hands on deck for big data security

Security is an underappreciated topic among many data management professionals, according to Gartner's Adrian. But he believes that needs to change, particularly as organizations face up to big data security issues.

Data management teams should get more involved in the process of protecting big data systems, Adrian said. In data lakes built around Hadoop and other technologies that aren't as mature as relational databases are, "security is everybody's business," he noted.

And security initiatives can go hand in hand with efforts to improve data management and usage, TMW's Leonard said. In addition to supporting security audits, Leonard said a metadata repository lets his team see whether data scientists are correctly applying trucking operations data in the transportation management software vendor's big data environment as part of analytics applications.

"We've found things, not that they weren't authorized to access a certain data element, but when they do, they're using it in the wrong way," Leonard explained. As a result, he added, TMW's training program has been upgraded to give the data scientists better information on how to use the data at their disposal.

Stevens said he's open to using embedded functionality that's "more security-friendly" in technologies like Spark and Cassandra. "But we're happy to build some of this ourselves because it's business-critical," he noted. "Security is in our DNA. Not taking it seriously is not an option."

It's the same for TMW's Leonard when it comes to dealing with big data security issues. Protecting the data in the company's Hadoop environment "is the No. 1 thing on my mind," he said. "It's one thing to drive into big data, but boy, you better have security around it."





Mar
20
Sage adds Intacct financial management software to its ERP
Posted by Thang Le Toan on 20 March 2018 12:52 AM

Sage says the move will boost its cloud financial management software and U.S. presence. Analysts think it's a good technology move but are unsure about the market impact.

Sage Software intends to expand both its cloud offerings and its customer base in North America.

Sage, an ERP vendor based in Newcastle upon Tyne, U.K., is acquiring Intacct, a San Jose-based vendor of financial management software, for $850 million, according to the company.

Sage's core products include the Sage X3 ERP system, the Sage One accounting and invoicing application and Sage Live real-time accounting software. The company's products are aimed primarily at SMBs, and Sage claims that it has just over 6 million users worldwide, with the majority of these in Europe.

Intacct provides SaaS financial management software to SMBs, with most of its customer base in North America, according to the company.

The move to acquire Intacct demonstrates Sage's determination to "win the cloud" and expand its U.S. customer base, according to a Sage press release announcing the deal.

"Today we take another major step forward in delivering our strategy and we are thrilled to welcome Intacct into the Sage family," Stephen Kelly, Sage CEO, said in the press release. "The acquisition of Intacct supports our ambitions for accelerating growth by winning new customers at scale and builds on our other cloud-first acquisitions, strengthening the Sage Business Cloud. Intacct opens up huge opportunities in the North American market, representing over half of our total addressable market."

Combining forces makes sense for Intacct because the company shares the same goals as Sage, according to Intacct CEO Robert Reid.

"We are excited to become part of Sage because we are relentlessly focused on the same goal -- to deliver the most innovative cloud solutions for our customers," Reid said in the press release. "Intacct is growing rapidly in our market and we are proud to be a recognized customer satisfaction leader across midsize, large and global enterprise businesses. By combining our strengths with those of Sage, we can jointly accelerate success for our customers."

Intacct brings real cloud DNA to financial management software

Intacct's specialty in cloud financial management software should complement Sage's relatively weak financial functionality, according to Cindy Jutras, president of the ERP consulting firm Mint Jutras.

"[Intacct] certainly brings real cloud DNA, and a financial management solution that would be a lot harder to grow out of than the solutions they had under the Sage One brand," Jutras said. "It also has stronger accounting than would be embedded within Sage X3. I would expect X3 to still be the go-to solution for midsize manufacturers since that was never Intacct's target, but Intacct may very well become the go-to ERP for service companies, like professional services."

Jutras also mentioned that Intacct was one of the first applications to address the new ASC 606 revenue recognition rules, something Sage has not done yet. Sage's cloud strategy has been murky up to this point, and Jutras was unsure whether this move would clarify it.

"It doesn't seem any of its existing products -- except their new Sage Live developed on the Salesforce platform -- are multi-tenant SaaS and up until recently they seemed to be going the hybrid route by leaving ERP on premises and surrounding it with cloud services," she said.

The deal should strengthen Sage's position in the SMB market, according to Chris Devault, manager of software selection at Panorama Consulting Solutions.

"This is a very good move for Sage, as it will bring a different platform and much needed technology to help Sage round out their small to mid-market offerings," Devault said.

Getting into the U.S. market

Overall it appears to be a positive move for Sage, both from a technology and market perspective, according to Holger Mueller, vice president and principal analyst at Constellation Research Inc.

"It's a good move by Sage to finally tackle finance in the cloud and get more exposure to the largest software market in the world, the U.S.," Mueller said. "But we see more than finance moving to the cloud, as customers are starting to look for or demand a complete suite to be available on the same platform. Sage will have to move fast to integrate Intacct and get to a compelling cloud suite roadmap."

Time will also tell if this move will position Sage better in the SMB ERP landscape.

"It's early to say, but it puts them in the SMB category with Oracle NetSuite, FinancialForce, Epicor and Acumatica at the lower end," Mueller said.






Mar
20
The risk analytics software your company really needs
Posted by Thang Le Toan on 20 March 2018 12:49 AM

Risk analytics tools are more and more critical for CFOs seeking to improve operational efficiency. Just one problem: It can be hard to figure out just what those tools are.

As big data becomes a documented phenomenon across corporate America, risk analytics -- that is, using analytics to collect, analyze and measure real-time data in order to predict risk and make better business decisions -- is also becoming more popular.

That's according to Sanjaya Krishna, U.S. digital risk consulting leader at KPMG in Washington, D.C.

By using risk analytics software, CFOs can improve operational efficiency and keep their companies' exposures to acceptable risk. But where exactly does a CFO go to "get" risk analytics tools?

The search for risk analytics software

"Risk analytics is a fairly broad term, so there are a number of things that come to mind when we talk about risk analytics," Krishna said. "There are a number of specialized risk analytics products. There are also broader analytic packages that can … 'check the risk analytics box' to a certain extent, though the package isn't built to be a risk analytics solution."

There are products, such as KPMG Risk Front, that focus on providing customized risk analytics based on public internet commentary, Krishna said. And KPMG's Continuous Monitoring product provides for customized risk analytics based on internal transactional data.


There are also a number of established enterprise governance, risk and compliance packages that give companies a way of housing and analyzing all sorts of identified risks at the enterprise level or within certain business areas, he said.

Finally, there are highly specialized, industry-specific risk analytic tools, especially in the financial services industry, according to Krishna.

Risk analytics tools, regardless of the industry, have been around for a while, said Danny Baker, vice president of market strategy, financial and risk management solutions at Fiserv Inc., a provider of financial services technology based in Brookfield, Wis.

"They have historically been purposed for less strategic items -- they were seen as just a checkbox to please the regulators," he said.

Now, though, risk analytics software has transitioned and evolved from tactical, point solutions to helping organizations optimize their strategic futures.

"Especially for banks and credit unions, risk analytics tools are focused more on strategy and the need to integrate with other departments, like finance," Baker said. "The integration across departments is key."

But it's not just the tools that are important.

Sometimes a company may even use a database as a risk analytics tool, said Ken Krupa, enterprise CTO at MarkLogic Corp., an Enterprise NoSQL database provider in San Carlos, Calif.

Taking the broad approach to the data quality issue

"There are, indeed, specialized products, as well as packages that play a role in risk analytics," Krupa said. "These third-party suites of tools do a lot of the math on where there are risks, but if the math is based on bad or incomplete data, risk cannot be adequately addressed."

What's more, a company often doesn't have a clear picture of the quality of the data it's working with, because making that data available from upstream systems depends on complex extract, transform and load (ETL) processes supported by a large team of developers of varying skill sets, he said.

Therefore, there's actually an inherent risk in not having transparent access to a 360-degree view of the data -- mainly caused by data in silos. However, leveraging a database that can successfully integrate the many silos of data can go a long way toward minimizing data quality risks, according to Krupa.

"You may not initially think of a database as a risk analytics tool, but the right kind of database serves a critical role in organizing all of the inputs that risk analytics tools use," he said. "The right type of database -- one that minimizes ETL dependency and provides a clear view of all kinds of data, like that offered by MarkLogic -- can make risk analytics better, faster and with less cost."

Anand Venugopal, head of StreamAnalytix product management and go-to-market at Impetus Technologies Inc. in Los Gatos, Calif., concurred with Krupa that bringing all a company's data into one place is critical to enabling better risk-based business decisions.

Since many organizations are in the process of modernizing their infrastructures -- particularly around analytics platforms -- they are moving away from point solutions if they can, he said.

The new paradigm is bringing together all the relevant information -- if not in one place, then at least with mechanisms to bring it together on demand -- and then doing the analytics in one place, Venugopal said.

"So, what is beyond proven is that analytics and decision-making [are] more accurate not with more advanced algorithms, but with more data, i.e., diverse data, and more data sources, i.e., 25 different data sources as opposed to five different data sources," he said.

It all points to the fact that even with moderate algorithms, more data gives organizations better results than trying to use "rocket science algorithms" with limited data, Venugopal said.

"What that means to enterprise technology is that they are building risk platforms on top of the modern data warehouses, which combines a variety of internal and external data sets, and trying to combine real-time data feeds -- real-time triggers, real-time market factors, currency risk, etc. -- which was not part of the previous generation's capabilities," he said.

Single-point products can only address limited portions of this because that's how they're designed; enterprise risk can only be covered with a broader approach, according to Venugopal.

"I think the trend [in enterprises] is more toward building sophisticated risk strategies and applications, and they're building out those and they're using core big data technology components like the Hadoop stack, like the Spark stack and tools like Impetus' Extreme Analytics," he said.

Custom risk analytics software and other considerations

Organizations looking to implement technology to mitigate risk have to consider a few additional things, including the usability and feature set, according to Rajiv Shah, senior solutions architect for GigaSpaces Technologies Inc. in New York City.


"For instance, high-volume traders need a solution that won't interfere with the data sync that is critical to being up to the microsecond," he said.

A product that offers multilevel dashboarding is also key, according to Shah.

For example, the data a CFO needs to know is far different than, say, what a risk or compliance officer needs to know, he said.

"Enterprises should consider a solution that takes these differences into account, making sure that a dashboard can become detailed and granular, while also offering a 50,000-foot view," Shah said. "And a strong risk mitigation strategy and tool set should be able to identify and simulate a wide range of scenarios."

According to Fiserv's Baker, it's important that a risk mitigation technology doesn't hinder a company's regular operations.

"For larger organizations, it often becomes critical to build your own solution to meet the needs," he said.

Mike Juchno, partner at Ernst & Young Advisory Services, agreed that there is a custom tool component to risk analytics.

"Many of our clients already have these tools -- they're some sort of predictive analytics tool like SPSS, like SAS, like R, and some visualization on top of them, like Tableau or Power BI," he said. "So, we are able to build something custom to deal with a risk that may be unique to them or their industry or their particular situation. So, we typically find that it's a custom approach."

When it comes to looking for an off-the-shelf product, CFOs often hear about risk analytics tools from peer organizations, groups that come together to share information about tools.

"Of course, you're going to also look toward other companies or competitors that are doing risk management and performance management well and see what tools they have in place," Baker said. "The most high-performing clients I see embed their tools into not only solving current risk, but also expecting and forecasting future risk."

Although an organization can go to Fiserv and ask for a menu of risk analytics tools, it's more successful if both the company and Fiserv drill down into what the organization is trying to accomplish and customize the tools from there, according to Baker.

Most organizations want to make better strategic decisions, as the challenges of growth are greater now, and improve their forward-looking, strategic discipline and processing, he said.

The focus has shifted to agility and efficiency when implementing risk analytics tools, Baker said.

"The high-performing Fiserv clients I work with have integrated risk analytics tools into finance operations," he said. "These advanced solutions offer an integrative solution that also forecasts and plans for the strategic future."

Organizations are increasingly being thoughtful with their risk processes, he said. And in recent years questions to vendors have evolved from "what are your risk tools?" to "how do I get better information to make decisions for the future?"





Mar
20

Regulatory compliance, loan covenants and currency risk are common targets, as organizations sift through ERP and other data looking for patterns that might give early warning.

As CFO of TIBCO Software Inc., Tom Berquist spends a lot of time working on risks, such as the failure to live up to loan covenants. Berquist uses risk analytics software to stay on top of things.

"As a private equity-backed company -- we're owned by Vista Equity Partners -- we carry a large amount of debt," he said. "We have covenants associated with that and they're tied to a number of our financial metrics." Consequently, a major part of Berquist's risk-management process is to stay in front of what's going on with the business. If there's going to be softness in TIBCO's top-line revenue, he has to make sure to manage the company's cost structure so it doesn't violate any of the covenants. Berquist said he has a lot of risk analytics tied to that business problem.

The intent of risk analytics is to give CFOs and others in the C-suite a complete, up-to-date risk profile "as of now," said Thomas Frénéhard, director of solution management, governance, risk and compliance at software vendor SAP.

"There's no need to wait for people to compile information at the end of the quarter and send you [information] that's outdated," Frénéhard said. "What CFOs want now is their financial exposure today."

Looking for patterns in corporate data

Risk analytics involves the use of data analysis to obtain insights into various risks in financial, operational and business processes, as well as to monitor risks in ways that can't be achieved through more traditional approaches to risk management, financial controls and compliance management, said John Verver, a strategic advisor to ACL Services, a maker of governance, risk and compliance software based in Vancouver, B.C.

Some of the most common uses of risk analytics are in core financial processes and core ERP application areas, including the purchase-to-pay and order-to-cash cycles, revenue and payroll -- "analyzing and testing the detailed transactions, for example, to look for indications of fraud [and] indications of noncompliance with regulatory requirements and controls," Verver said.


Using advanced risk management -- i.e., risk analytics software -- will allow CFOs to access data from complex systems, including ERP environments, and easily identify key areas of risk, said Dan Zitting, chief product officer of ACL Services.

"The technology can be set up to pull data from the HR, sales and billing departments, for example, and cross-reference the information within the program's interface," Zitting said in an email. "Once the data is in one place, CFOs should be able to easily visualize the data in a risk dashboard that summarizes activity and flags changes in risk."

Berquist also uses risk analytics to manage foreign currency risk for TIBCO, which is an international company, as well as risks connected to managing cash.

"Every month I close the books, I get all my actuals and I export them all into my data warehouse and I load up my dashboards. I happen to use TIBCO Spotfire [business intelligence software], but you can load them up in any risk analytics tool," he said. "Then I review where we stand on everything that has happened so far. Are expenses in line? Where does our revenue stand? What happened with currency? What happened with cash? How does the balance sheet look? That's the first part of the problem."

The second part is forecasting what will happen with TIBCO's expenses, which helps Berquist ensure that the company is going to generate sufficient cash to avoid violating covenants and mitigate the effects of offshore currency fluctuations.
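As a simplified illustration of that kind of forecast check, consider a leverage covenant capping debt at four times EBITDA. The figures and the covenant itself are made up for the example, not TIBCO's actual terms:

```python
def covenant_check(debt, ebitda_forecast, max_leverage=4.0):
    """For each forecast period, compute debt/EBITDA and whether it stays within the covenant."""
    return [(period, round(debt / ebitda, 2), debt / ebitda <= max_leverage)
            for period, ebitda in enumerate(ebitda_forecast, start=1)]

# Debt of 400 against three quarters of forecast EBITDA: the third quarter breaches.
results = covenant_check(400, [120, 110, 95])
```

Running the same check against several revenue scenarios shows how much cost structure has to flex to keep every period in compliance.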

Berquist said there are general-purpose risk management technologies, some of which are tied to such things as identifying corporate fraud, but there is also company- or industry-specific risk analytics software.

"My big concern is financial risk, so most of my [use of risk analytics] is around those types of measures," he said.

Risk analytics software helps CFOs make better decisions for the future because without an approach that allows them to run different scenarios and determine potential outcomes, they end up making gut instinct-oriented or seat-of-the-pants decisions, according to Berquist.

Sharing a similar view is Tom Kimner, head of global product marketing and operations for risk management at SAS Institute Inc., a provider of analytics software, based in Cary, N.C.

"What makes risk analytics a little bit different, in some cases, is that risk generally deals with the future and uncertainty," Kimner said.

Cristina Silingardi, a former CFO and treasurer at HamaTech USA Inc., a manufacturer of equipment for the semiconductor industry, concurred with Berquist that risk assessment can no longer be done the way it used to be -- based on individuals' knowledge of their businesses, their instincts and a few key data points.

"There is so much data right now, and the biggest change I see is that now this data encompasses structured internal company data as well as unstructured external data," said Silingardi, now managing director of vcfo Holdings, a consulting firm based in Austin, Texas, that specializes in finance, recruiting and human resources.

CFOs started getting more involved with risk analytics when they needed better revenue metrics to understand predictability and trends, she said. Risk analytics software went beyond traditional risk-management tools by adding real-time reporting that puts key metrics right in front of CFOs and updates them all day long. Such data can help CFOs keep an eye on regulatory and contractual noncompliance from vendors, according to Silingardi.

"It helps them with pattern recognition, but only if [they] can translate that to really good visual dashboards that are looking at this data. [CFOs] used to focus only on a few things. Now, [they're] using all this data to get a much better picture," she said.

Forward-thinking mindset is key

Historically, risk analysis and assessment has tended to be a reactive and subjective process, according to Daniel Smith, director of data science and innovation at Syntelli Solutions Inc., a data analytics company based in Charlotte, N.C. After something bad happens, the tendency is for people to say, "'Let's investigate it, or, 'Let's all huddle up and think about what could happen and create a bunch of speculative scenarios,'" he said.

That's exactly the way many of SAP's customers still look at risk: through the rear-view mirror, said Bruce McCuaig, director of governance, risk and compliance solution marketing at SAP.

"Once or twice a year they report to the board and they look backwards, but what I think we're seeing now is the ability to look forward and report frequently online and in real time," McCuaig said.

In modern analytics and modern business, companies want to focus more on proactive, predictive and objective risk, Smith said. While focusing on risk in this manner gives CFOs visibility into the future, many don't have the pipeline of data and a single source of consolidated data to enable them to do that.

"They need a system, a way to collect that data and be able to analyze it," he said. "From a strategic point of view, it's more of a data initiative."

The goal is to give people the skills and applications to view highly interactive and multidimensional data as opposed to a traditional, two-dimensional tabular view in a spreadsheet, Smith said.

When it comes to risk analytics, CFOs should be thinking about techniques, not specific tools. Risk analysis is more about understanding ways to mine data better than about which platform can do it, according to Smith.

"Risk analytics is part of something larger. At SAP, we don't have a category of solutions called 'risk analytics,'" McCuaig said. "There are a variety of analytics tools that will serve the purpose."






Mar
20
Risk mapping key to security, business integration
Posted by Thang Le Toan on 20 March 2018 12:43 AM

It’s no secret that data protection has become integral to bottom line success for digital businesses. As a result, it’s time for InfoSec professionals to crawl out of their caves and start communicating with the rest of the business, Tom Kartanowicz, head of information security at Natixis, North America, told the audience at the recent CDM Media CISO Summit.

To facilitate this communication, the language these pros will use is the language of security risk, Kartanowicz said.

“As security professionals, if we want to be taken seriously we need to put what we do into the risk lens to talk to the business so they understand the impact and how we’re trying to reduce the impact of the types of threats we’re seeing,” Kartanowicz said.

For example, even though the chief information security officer and chief risk officer may appear to be two different islands in an organization, they are part of the same team, he reminded the audience.

 

Business is the bridge that links them together, so instead of working in silos, security professionals should build what Kartanowicz calls a “friends and family plan” that makes allies of other departments in their organization. The human resources department can help discipline somebody who might be an internal threat to the organization, corporate communications can help talk to the media and customers during incidents such as DDoS and malware attacks, and the legal department can be a valuable ally when it is time to take action against bad actors, he explained.

“As the CISO or as the head of InfoSec, you are missing out on a lot of valuable intelligence if you are not talking to all these different teams,” he stressed.

Risk mapping — a data visualization tool that outlines an organization’s specific risks — is an effective way to identify threats and vulnerabilities, then communicate them to the business, he said. Risk mapping helps an organization identify where it will spend its security budget and how to implement solutions and, most importantly, helps identify specific instances of risk reduction, he said.

Kartanowicz said there are two things to consider when evaluating and determining the likelihood of a risk: how easy it is to exploit and how often it occurs.

“If the vulnerabilities require technical skills held by 1% of the population, it’s going to be pretty difficult to exploit,” he said. “If, on the other hand, anybody on the street can exploit it, it’s going to be pretty easy.”
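The two likelihood factors Kartanowicz names, ease of exploitation and frequency of occurrence, can be sketched as a simple scoring rule. This is an illustrative assumption, not a formal framework: the 1-5 scales, the averaging, and the rating thresholds are all invented here for demonstration.

```python
def likelihood(ease_of_exploit: int, frequency: int) -> str:
    """Map two 1-5 factor scores to a coarse likelihood rating.

    The averaging rule and thresholds are hypothetical choices for
    illustration only.
    """
    score = (ease_of_exploit + frequency) / 2
    if score >= 4:
        return "high"
    if score >= 2.5:
        return "medium"
    return "low"

# A flaw exploitable only with rare skills, seen rarely:
print(likelihood(ease_of_exploit=1, frequency=2))  # low
# A flaw anybody on the street can exploit, seen often:
print(likelihood(ease_of_exploit=5, frequency=4))  # high
```

In practice an organization would calibrate these scales and thresholds against its own threat data rather than use fixed numbers like these.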

It is then time to address the specific risks, he said.

“In the enterprise risk management world, the business can accept the risks, avoid the risks or [work to] mitigate the risks — this is where InfoSec comes in — or transfer the risks,” he said.

Using tools such as the NIST cybersecurity framework can help InfoSec reduce the risks, he said. It’s important that organizations tie their disaster recovery, backup strategy, business continuity and crisis management into whatever framework they choose, he added. Organizations should also ensure they have baseline controls in place to help minimize the risk of a data breach, he added.

But as threats evolve and vulnerabilities change, he suggested that the risk map be re-evaluated annually. Business requirements are constantly evolving and organizations are always entering new markets, so companies need to stay constantly aware of the threat landscape, he added.

“Incidents will always occur; risk is not going away,” he said.


Read more »



Mar
20
risk map (risk heat map)
Posted by Thang Le Toan on 20 March 2018 12:42 AM

A risk map, also known as a risk heat map, is a data visualization tool for communicating specific risks an organization faces. A risk map helps companies identify and prioritize the risks associated with their business.

The goal of a risk map is to improve an organization's understanding of its risk profile and appetite, clarify thinking on the nature and impact of risks, and improve the organization's risk assessment model. In the enterprise, a risk map is often presented as a two-dimensional matrix. For example, the likelihood a risk will occur may be plotted on the x-axis, while the impact of the same risk is plotted on the y-axis.
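The two-dimensional matrix described above can be sketched in a few lines of code: likelihood on the x-axis, impact on the y-axis. The 1-5 scales and the example risks below are assumptions made for illustration, not taken from the article.

```python
def build_risk_matrix(risks, size=5):
    """Place risks in a size x size grid.

    Each cell grid[impact - 1][likelihood - 1] holds the names of the
    risks that share that likelihood/impact pairing (both on an assumed
    1-5 scale).
    """
    grid = [[[] for _ in range(size)] for _ in range(size)]
    for name, likelihood_score, impact in risks:
        grid[impact - 1][likelihood_score - 1].append(name)
    return grid

# Hypothetical (name, likelihood, impact) entries:
risks = [
    ("Data breach", 3, 5),
    ("Vendor outage", 4, 3),
    ("Regulatory fine", 2, 4),
]
matrix = build_risk_matrix(risks)

# Print rows from highest impact down, as a heat map would display them.
for impact in range(5, 0, -1):
    row = matrix[impact - 1]
    print(impact, [", ".join(cell) or "-" for cell in row])
```

A real heat map would add color per cell (e.g. red for high likelihood and high impact), but the underlying data structure is just this grid.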

A risk map is considered a critical component of enterprise risk management because it helps identify risks that need more attention. Identified risks that fall in the high-frequency and high-severity section can then be made a priority by organizations. If the organization is dispersed geographically and certain risks are associated with certain geographical areas, risks might be illustrated with a heat map, using color to illustrate the levels of risk to which individual branch offices are exposed.

[Image: risk matrix example. A risk matrix that includes natural disasters and human risk factors.]

How to create a risk map

Identification of inherent risks is the first step in creating a risk map. Risks can be broadly categorized into strategic risk, compliance risk, operational risk, financial risk and reputational risk, but organizations should aim to chart their own lists by taking into consideration specific factors that might affect them financially. Once the risks have been identified, it is necessary to understand what kind of internal or external events are driving the risks.

The next step in risk mapping is evaluating the risks: estimating the frequency, the potential impact and possible control processes to offset the risks. The risks should then be prioritized. The most impactful risks can be managed by applying control processes to help lessen their potential occurrence.
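The evaluation and prioritization steps above can be sketched with a common heuristic: score each risk as likelihood times impact, then sort so the most impactful risks surface first for control processes. The scoring rule and the example entries are assumptions for illustration; the article does not prescribe a specific formula.

```python
# Hypothetical risks with (likelihood, impact) scores on assumed 1-5 scales.
risks = {
    "Data breach": (3, 5),
    "Vendor outage": (4, 3),
    "Regulatory fine": (2, 4),
    "Office flood": (1, 4),
}

# Rank by likelihood * impact, highest score first.
prioritized = sorted(
    risks.items(),
    key=lambda item: item[1][0] * item[1][1],
    reverse=True,
)

for name, (likelihood_score, impact) in prioritized:
    print(f"{name}: score {likelihood_score * impact}")
```

The top of this ranking identifies where control processes should be applied first; re-running it as scores change is one way to keep the map current.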

As threats evolve and vulnerabilities change, a risk map must be re-evaluated periodically. Organizations also must review their risk maps regularly to ensure key risks are being managed effectively.

Why it's important to create a risk map

A risk map offers a visualized, comprehensive view of the likelihood and impact of an organization's risks. This helps the organization improve risk management and risk governance by prioritizing risk management efforts. This prioritization enables the organization to focus time and money on the most potentially damaging risks identified in the heat map chart.

A risk map also facilitates interdepartmental dialogues about an organization's inherent risks and promotes communication about risks throughout the organization. It helps organizations visualize risks in relation to each other, and it guides the development of a control assessment of how to deal with the risks and the consequence of those risks.

The map can help the company visualize how risks in one part of the organization can affect operations of another business unit within the organization.

A risk map also adds precision to an organization's risk assessment strategy and identifies gaps in its risk management processes.

How has creating a risk heat map helped your organization's risk management efforts?

 


Read more »







Help Desk Software by Kayako