BackupAssist software offers cloud-to-local backup
Posted by Thang Le Toan on 21 September 2018 02:48 AM

BackupAssist 365 can be set to automatically download files and email mailboxes from the cloud to an on-premises device, creating local backups as a cloud-to-cloud alternative.

SMB backup software specialist BackupAssist this week added protection for email mailboxes and files stored in the cloud.

The newest BackupAssist software, called BackupAssist 365, lets customers copy email mailboxes and files from the cloud to an on-premises server. The on-premises, off-cloud backup can protect businesses against accidental or malicious data deletion and ransomware.

 

Unlike cloud-to-cloud backup products that allow organizations to move data from SaaS applications to another public cloud, the new BackupAssist software only backs up to local storage.

Troy Vertigan, vice president of channel sales and marketing at BackupAssist, based in Australia, said the cloud-to-local backup costs up to 75% less than a cloud-to-cloud subscription.

Despite what the BackupAssist software's 365 name suggests, the product works with more than just Microsoft Office 365. BackupAssist 365 lets users back up mailboxes from Rackspace, Microsoft Exchange, Gmail, Outlook and Internet Message Access Protocol servers, as well as files from Google Drive, Dropbox, OneDrive, Secure File Transfer Protocol and WebDAV. BackupAssist CEO Linus Chang said a future update will enable support for the entire G Suite.

George Crump, president and founder of analyst firm Storage Switzerland, said BackupAssist 365 is a good option for protecting data born in the cloud, but he's not sure the vendor's customers are convinced they need that.

[Screenshot: BackupAssist 365's Exchange interface. BackupAssist 365 backs up Microsoft Exchange email mailboxes. Source: BackupAssist]

"The big challenge is convincing the entire market that you actually do need to back up Office 365. There's still this misbelief that the cloud is this magical place where data never gets deleted," Crump said.

Although cloud users don't have to worry about hardware failure or disaster recovery, Crump said that merely shifts the risk elsewhere.


"Once you're cloud-based, your concern really isn't disaster recovery anymore. What you're really protecting against is data corruption and account hijack," Crump said.

BackupAssist's Chang agreed his SMB target market needs convincing of the value of cloud backup. "The majority of SMB customers and the vast majority of consumers are not doing any sort of backup whatsoever," Chang said.

Chang said BackupAssist's customers also include managed service providers who are in charge of their clients' data, and BackupAssist 365 can help them stay compliant.

The new BackupAssist software is generally available. Its annual subscription fee is $1 per user, per month, for the first 24 users, and it drops to 95 cents for 25 to 49 users and 90 cents if there are 50 or more users. A user is defined as a single account identity, allowing for one user to back up multiple clouds.
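
To make that tiering concrete, here is a minimal sketch of the price calculation in Python. It assumes the per-user rate is selected by total head count and applied to every user, which is how the description above reads; the vendor's actual billing logic may differ.

    def backupassist365_monthly_cost(users: int) -> float:
        """Estimate the monthly bill under the published per-user rates.

        Assumption: the rate is chosen by total user count and applied
        to all users, rather than tiered marginally.
        """
        if users >= 50:
            rate = 0.90
        elif users >= 25:
            rate = 0.95
        else:
            rate = 1.00
        return users * rate

    # Example: 30 users fall in the 25-49 bucket, so 30 * $0.95 = $28.50 per month.
    print(backupassist365_monthly_cost(30))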





zero-day (computer)
Posted by Thang Le Toan on 03 August 2018 01:03 AM

Zero-day is a flaw in software, hardware or firmware that is unknown to the party or parties responsible for patching or otherwise fixing the flaw. The term zero day may refer to the vulnerability itself, or an attack that has zero days between the time the vulnerability is discovered and the first attack. Once a zero-day vulnerability has been made public, it is known as an n-day or one-day vulnerability.

Ordinarily, when someone detects that a software program contains a potential security issue, that person or company will notify the software company (and sometimes the world at large) so that action can be taken. Given time, the software company can fix the code and distribute a patch or software update. Even if potential attackers hear about the vulnerability, it may take them some time to exploit it; meanwhile, the fix will hopefully become available first. Sometimes, however, a hacker may be the first to discover the vulnerability. Since the vulnerability isn't known in advance, there is no way to guard against the exploit before it happens. Companies exposed to such exploits can, however, institute procedures for early detection.

Security researchers cooperate with vendors and usually agree to withhold all details of zero-day vulnerabilities for a reasonable period before publishing those details. Google Project Zero, for example, follows industry guidelines that give vendors up to 90 days to patch a vulnerability before the finder of the vulnerability publicly discloses the flaw. For vulnerabilities deemed "critical," Project Zero allows only seven days for the vendor to patch before publishing the vulnerability; if the vulnerability is being actively exploited, Project Zero may reduce the response time to less than seven days.

Zero-day exploit detection

Zero-day exploits tend to be very difficult to detect. Antimalware software and some intrusion detection systems (IDSes) and intrusion prevention systems (IPSes) are often ineffective because no attack signature yet exists. This is why the best way to detect a zero-day attack is user behavior analytics. Most of the entities authorized to access networks exhibit certain usage and behavior patterns that are considered to be normal. Activities falling outside of the normal scope of operations could be an indicator of a zero-day attack.

For example, a web application server normally responds to requests in specific ways. If outbound packets are detected exiting the port assigned to that web application, and those packets do not match anything that would ordinarily be generated by the application, it is a good indication that an attack is going on.
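
A minimal sketch of that idea in Python: keep a baseline of the payload shapes the web application is expected to emit and flag outbound data that matches none of them. The patterns below are hypothetical placeholders; real user behavior analytics tools build this baseline statistically rather than from a hand-written list.

    import re

    # Hypothetical baseline of what the application's outbound traffic should look like.
    EXPECTED_OUTBOUND = [
        re.compile(rb"^HTTP/1\.[01] \d{3} "),   # normal HTTP responses
        re.compile(rb"^\x16\x03"),              # TLS handshake records
    ]

    def is_suspicious(payload: bytes) -> bool:
        """Flag outbound traffic that matches none of the expected patterns."""
        return not any(pattern.match(payload) for pattern in EXPECTED_OUTBOUND)

    # A raw ZIP archive leaving the web server's port matches neither pattern.
    print(is_suspicious(b"PK\x03\x04 ... archive bytes ..."))  # True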

Zero-day exploit period

Some zero-day attacks have been attributed to advanced persistent threat (APT) actors, hacking or cybercrime groups affiliated with or a part of national governments. Attackers, especially APTs or organized cybercrime groups, are believed to reserve their zero-day exploits for high-value targets.

N-day vulnerabilities continue to live on and are subject to exploits long after the vulnerabilities have been patched or otherwise fixed by vendors. For example, the credit bureau Equifax was breached in 2017 by attackers using an exploit against the Apache Struts web framework. The attackers exploited a vulnerability in Apache Struts that was reported, and patched, earlier in the year; Equifax failed to patch the vulnerability and was breached by attackers exploiting the unpatched vulnerability.

Likewise, researchers continue to find zero-day vulnerabilities in the Server Message Block protocol, implemented in the Windows OS for many years. Once the zero-day vulnerability is made public, users should patch their systems, but attackers continue to exploit the vulnerabilities for as long as unpatched systems remain exposed on the internet.

Defending against zero-day attacks

Zero-day exploits are difficult to defend against because they are so difficult to detect. Vulnerability scanning software relies on malware signature checkers to compare suspicious code with signatures of known malware; when the malware uses a zero-day exploit that has not been previously encountered, such vulnerability scanners will fail to block the malware.

Since a zero-day vulnerability can't be known in advance, there is no way to guard against a specific exploit before it happens. However, there are some things that companies can do to reduce their level of risk exposure.

  • Use virtual local area networks to segregate some areas of the network or use dedicated physical or virtual network segments to isolate sensitive traffic flowing between servers.
  • Implement IPsec, the IP security protocol, to apply encryption and authentication to network traffic.
  • Deploy an IDS or IPS. Although signature-based IDS and IPS security products may not be able to identify the attack, they may be able to alert defenders to suspicious activity that occurs as a side effect to the attack.
  • Use network access control to prevent rogue machines from gaining access to crucial parts of the enterprise environment.
  • Lock down wireless access points and use a security scheme such as Wi-Fi Protected Access 2 for maximum protection against wireless-based attacks.
  • Keep all systems patched and up to date. Although patches will not stop a zero-day attack, keeping network resources fully patched may make it more difficult for an attack to succeed. When a zero-day patch does become available, apply it as soon as possible.
  • Perform regular vulnerability scanning against enterprise networks and lock down any vulnerabilities that are discovered (a minimal scanning sketch follows this list).
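
As a minimal illustration of the scanning item above, the sketch below checks a few hosts for ports that are open but not on an approved list, using only the Python standard library. The host addresses and allowed ports are hypothetical; a production program would rely on a dedicated vulnerability scanner rather than a raw TCP connect test.

    import socket

    ALLOWED_PORTS = {"10.0.0.5": {443}, "10.0.0.6": {22, 443}}  # hypothetical policy
    PORTS_TO_CHECK = [22, 80, 443, 3389, 8080]

    def open_ports(host, ports, timeout=0.5):
        """Return the subset of ports that accept a TCP connection."""
        found = set()
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:
                    found.add(port)
        return found

    for host, allowed in ALLOWED_PORTS.items():
        unexpected = open_ports(host, PORTS_TO_CHECK) - allowed
        if unexpected:
            print(f"{host}: unexpected open ports {sorted(unexpected)}")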

While maintaining a high standard for information security may not prevent all zero-day exploits, it can help defeat attacks that use zero-day exploits after the vulnerabilities have been patched.

Examples of zero-day attacks

Multiple zero-day attacks commonly occur each year. In 2016, for example, there was a zero-day attack (CVE-2016-4117) that exploited a previously undiscovered flaw in Adobe Flash Player. Also in 2016, more than 100 organizations succumbed to a zero-day bug (CVE-2016-0167) that was exploited for an elevation-of-privilege attack targeting Microsoft Windows.

 

In 2017, a zero-day vulnerability (CVE-2017-0199) was discovered in which a Microsoft Office document in Rich Text Format was shown to be able to trigger the execution of a Visual Basic script containing PowerShell commands upon being opened. Another 2017 exploit (CVE-2017-0261) used Encapsulated PostScript as a platform for initiating malware infections.

The Stuxnet worm was a devastating zero-day exploit that targeted supervisory control and data acquisition (SCADA) systems by first attacking computers running the Windows operating system. Stuxnet exploited four different Windows zero-day vulnerabilities and spread through infected USB drives, making it possible to infect both Windows and SCADA systems remotely without attacking them through a network. The Stuxnet worm has been widely reported to be the result of a joint effort by U.S. and Israeli intelligence agencies to disrupt Iran's nuclear program.



 

FBI admits to using zero-day exploits, not disclosing them

The FBI has admitted to using zero-day exploits rather than disclosing them, and experts say this should not be a surprise considering the history of federal agency actions.

In a surprise bout of openness, Amy Hess, executive assistant director for science and technology with the FBI, admitted that the FBI uses zero-day exploits, but said the agency does struggle with the decision.

 

In an interview with The Washington Post, Hess called it a "constant challenge" to decide whether it is better to use a zero-day exploit "to be able to identify a person who is threatening public safety" or to disclose the vulnerability in order to allow developers to secure products being used by the public. Hess also noted the FBI prefers not to rely on zero-day exploits because the fact that they can be patched at any moment makes them unreliable.

Jeff Schilling, CSO for Armor, said the surprise might come from the fact that many people don't know that the FBI has a foreign intelligence collection mission.

"Any agency that has a foreign intelligence collection mission in cyberspace has to make decisions every day on the value gained in leveraging a zero day to collect intelligence data, especially with the impact of not letting people who are at risk know of the potential vulnerability which could be compromised," Schilling said, adding that the need for the government to find a balance between security and intelligence is not a new phenomenon. "This country experienced the same intelligence gained versus operational impact during World War II (WWII) when the intelligence community did not disclose that we had broken both the Japanese and German codes. Lots of sailors, soldiers and airmen lost their lives to keep those secrets. I think the FBI and the rest of the intelligence community have the same dilemmas as the intelligence community in WWII, however, at this point, data, not lives are at risk."

Robert Hansen, vice president for WhiteHat Security Labs, said it boils down to whether the public trusts the government to not abuse its power in this area, and whether the government should assume that only it knows about these exploits.

"In general, I think that although the net truth is that most people in government have good intentions, they can't all be relied upon to hold such belief systems," Hansen said. "And, given that in most cases exploits are found much later, it stands to reason that it's more dangerous to keep vulnerabilities in place. That's not to diminish their value, however, it's very dangerous to presume that an agency is the only one [that] can and will find and leverage that vulnerability."

Adam Kujawa, head of malware intelligence at Malwarebytes Labs, said the draw of zero-day exploits may be too strong for government agencies to resist.

"The 'benefit' of this method [is] simply having access to a weapon that theoretically can't be protected against," Kujawa said. "This is like being able to shoot someone with a nuke when they are only wearing a bullet proof vest -- completely unstoppable, theoretically. Law enforcement, when they have a target in mind, be it a cybercriminal, terrorist, et cetera, are able to breach the security of the suspect and gather intelligence or collect information on them to identify any criminal activity that might happen or will happen."

Daren Glenister, field CTO at Intralinks Inc., noted that while leaving vulnerabilities unpatched leads to risk, there is also some benefit to not publishing vulnerabilities too soon.

"Patching a threat may take a vendor days or weeks. Every hour lost in providing a patch introduces additional risk to data and access to systems," Glenister said. "[However], by not publishing zero-day threats, it minimizes the widespread underground threat from hackers that occurs every time a new threat is disclosed."

The NSA recently detailed its vulnerability disclosure policy, but while doing so never mentioned whether or not the agency used zero-day exploits. Multiple experts said this admission by the FBI makes it safe to assume the NSA is also leveraging zero days in its efforts.

Adam Meyer, chief security strategist at SurfWatch Labs Inc., said it is not only reasonable to expect the NSA is actively exploiting zero days, but many others are as well.

"I believe it is safe to assume that any U.S. agency with a Defense or Homeland Security mission area are using exploits to achieve a presence against their targets," Meyer said. "Unfortunately, I also think it is safe to assume that every developed country in the world is doing the exact same thing. The reality is a zero day can be used against us just as much as for us."

Schilling said using zero days may not be the only option, but noted that human intelligence gathering carries much greater risks.

"At the end of the day, if we are leveraging zero days to stay ahead of our national threats, I am ok with us accepting the risk of data loss and compromises," Schilling said. "History has shown that we have accepted higher costs to protect our intelligence collection, and I think we are still OK today in the risk we are accepting as it is to save lives."

Kujawa said that while there are viable alternatives to using zero days to gather intelligence, it is hard to ignore the ease and relative safety of this method.

"There are plenty of viable methods of extracting information from a suspect; however the zero-day method is incredibly effective, very quiet and very fast. Law enforcement could attack systems using known exploits, social engineering tactics or gaining physical access to the system and installing malware manually, however none of these methods are guaranteed and they all can be protected against if the suspect is practicing common security procedures. The zero-day method will fall into the same bucket as the other attacks soon enough, however, so we will have to wait and see what the future holds for law enforcement in trying to gather evidence and intelligence on criminal suspects."





What is a spear-phishing email?
Posted by Thang Le Toan on 02 August 2018 01:38 AM

Spear phishing is an email-spoofing attack that targets a specific organization or individual, seeking unauthorized access to sensitive information. Spear-phishing attempts are not typically initiated by random hackers, but are more likely to be conducted by perpetrators out for financial gain, trade secrets or military information.

As with emails used in regular phishing expeditions, spear phishing messages appear to come from a trusted source. Phishing messages usually appear to come from a large and well-known company or website with a broad membership base, such as Google or PayPal. In the case of spear phishing, however, the apparent source of the email is likely to be an individual within the recipient's own company -- generally, someone in a position of authority -- or from someone the target knows personally.

Visiting United States Military Academy professor and National Security Agency official Aaron Ferguson called it the "colonel effect."  To illustrate his point, Ferguson sent out a message to 500 cadets, asking them to click a link to verify grades. Ferguson's message appeared to come from a Col. Robert Melville of West Point. Over 80% of recipients clicked the link in the message. In response, they received a notification that they'd been duped and a warning that their behavior could have resulted in downloads of spyware, Trojan horses and/or other malware.

Many enterprise employees have learned to be suspicious of unexpected requests for confidential information and will not divulge personal data in response to emails or click on links in messages unless they are positive about the source. The success of spear phishing depends on three things: the apparent source must be a known and trusted individual; there must be information within the message that supports its validity; and the request the individual makes must seem to have a logical basis.

[Image: example of a spear-phishing email]

Spear phishing vs. phishing vs. whaling

This familiarity is what sets spear phishing apart from regular phishing attacks. Regular phishing emails appear to come from a large, well-known company or website and are sent to a broad pool of recipients. They include a malicious link or attachment that installs malware on the target's device, or they direct the target to a malicious website that is set up to trick them into giving sensitive information like passwords, account information or credit card information.

Spear phishing has the same goal as normal phishing, but the attacker first gathers information about the intended target. This information is used to personalize the spear-phishing attack. Instead of sending the phishing emails to a large group of people, the attacker targets a select group or an individual. By limiting the targets, it's easier to include personal information -- like the target's first name or job title -- and make the malicious emails seem more trustworthy.

The same personalized technique is used in whaling attacks, as well. A whaling attack is a spear-phishing attack directed specifically at high-profile targets like C-level executives, politicians and celebrities. Whaling attacks are also customized to the target and use the same social-engineering, email-spoofing and content-spoofing methods to access sensitive data.

Examples of successful attacks

In one version of a successful spear-phishing attack, the perpetrator finds a webpage for their target organization that supplies contact information for the company. Using available details to make the message seem authentic, the perpetrator drafts an email to an employee on the contact page that appears to come from an individual who might reasonably request confidential information, such as a network administrator. The email asks the employee to log into a bogus page that requests the employee's username and password, or click on a link that will download spyware or other malicious programming. If a single employee falls for the spear phisher's ploy, the attacker can masquerade as that individual and use social-engineering techniques to gain further access to sensitive data.

In 2015, independent security researcher and journalist Brian Krebs reported that Ubiquiti Networks Inc. lost $46.7 million to hackers who started the attack with a spear-phishing campaign. The hackers were able to impersonate communications from executive management at the networking firm and performed unauthorized international wire transfers.

Spear phishing defense

Spear-phishing attacks -- and whaling attacks -- are often harder to detect than regular phishing attacks because they are so focused.

In an enterprise, security-awareness training for employees and executives alike will help reduce the likelihood of a user falling for spear-phishing emails. This training typically educates enterprise users on how to spot phishing emails based on suspicious email domains or links enclosed in the message, as well as the wording of the messages and the information that may be requested in the email.

 

How to prevent a spear phishing attack from infiltrating an enterprise

While spear phishing emails are becoming harder to detect, there are still ways to prevent them. Threats expert Nick Lewis gives advice.

Spear phishing and social engineering are becoming more popular as attackers target humans as a particularly dependable point of ingress (HBGary, RSA, etc.). Considering that a well-crafted spear phishing email is almost indistinguishable from a legitimate email, what is the best way to prevent users from clicking on spear phishing links?

Phishing, social engineering and spear phishing have been growing in popularity over the last 10 or more years. The introduction of spear phishing and other newer forms of phishing are an evolution of social engineering or fraud. Attackers have found ways to exploit weaknesses in technologies like VoIP, IM and SMS messages, among others,  to commit fraud, and will continue to adapt as new technologies develop. Humans will always be an integral part of information security for an organization, but can always be targeted, regardless of the technologies in use. Humans are sometimes the weakest link.

To help minimize the chance of a spear phishing attack successfully infiltrating the enterprise, you can follow the advice from US-CERT on phishing or the guidance from the Anti-Phishing Working Group. Both have technical steps you can put in place, but both also include a security awareness and education component. Potentially the most effective method to combat phishing and its variants is to make sure users know to question suspicious communications and to verify the communication (email, IM, SMS, etc.) out-of-band with the requesting party. For example, if an employee gets an email from a colleague that doesn’t sound like it came from the sender or seems in some way suspicious, he or she should contact the sender using a different means of communication -- such as the phone -- to confirm the email. If the email can’t be verified, then it should be reported to your information security group, the Anti-Phishing Working Group or the FTC at spam@uce.gov.

Enterprises with high security needs could choose not to connect their systems to the Internet, not allow Internet email inbound except for approved domains, or only allow inbound email from approved email addresses. This will not stop all phishing attacks and will significantly decrease usability, but may be necessary for high-security environments.
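
As a rough sketch of that approved-domain approach, the snippet below accepts an inbound message only when the visible From: domain is on an allowlist, using Python's standard email library. The domain list is hypothetical, and because this inspects only the From: header it would not by itself stop a spoofed sender; real mail gateways layer sender-authentication checks on top of a rule like this.

    from email import message_from_string
    from email.utils import parseaddr

    APPROVED_DOMAINS = {"example.com", "partner.example.org"}  # hypothetical allowlist

    def accept_inbound(raw_message: str) -> bool:
        """Accept a message only if the From: domain is on the allowlist."""
        msg = message_from_string(raw_message)
        _, address = parseaddr(msg.get("From", ""))
        domain = address.rpartition("@")[2].lower()
        return domain in APPROVED_DOMAINS

    raw = "From: Alice <alice@example.com>\nSubject: Quarterly report\n\nHi..."
    print(accept_inbound(raw))  # True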





threat hunter (cybersecurity threat analyst)
Posted by Thang Le Toan on 04 July 2018 04:34 AM

A threat hunter, also called a cybersecurity threat analyst, is a security professional or managed service provider (MSP) that proactively uses manual or machine-assisted techniques to detect security incidents that may elude the grasp of automated systems. Threat hunters aim to uncover incidents that an enterprise would otherwise not find out about, providing chief information security officers (CISOs) and chief information officers (CIOs) with an additional line of defense against advanced persistent threats (APTs).

In order to detect a security incident an automated system might miss, a threat hunter uses critical-thinking skills and creativity to study patterns of normal behavior and identify anomalies in network activity. A threat hunter must have considerable business knowledge and an understanding of normal enterprise operations to avoid false positives, as well as good communication skills to share the results of the hunt. It is especially important for the threat hunter to keep current on the latest security research.
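
A toy version of that baseline-and-anomaly idea, using only the Python standard library: learn an account's normal daily activity from history and flag a day that deviates by several standard deviations. The login counts and threshold are invented for the example; real threat hunting layers human judgment and far richer telemetry on top of simple statistics like this.

    from statistics import mean, stdev

    def anomalous(history, today, threshold=3.0):
        """Flag today's count if it sits more than `threshold` standard
        deviations away from the account's historical mean."""
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return today != mu
        return abs(today - mu) / sigma > threshold

    logins_per_day = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4] * 3  # hypothetical 30-day history
    print(anomalous(logins_per_day, today=41))  # True: worth a closer look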

The job of the threat hunter is to both supplement and reinforce automated systems. As the review process uncovers patterns for initiating attacks, the security organization can use that information to improve its automated threat detection software. 

A 2017 SANS Institute report found more organizations are pursuing threat hunting initiatives, but notes the bulk of the growth is confined to vertical markets such as financial services, high tech, military and government, and telecommunications. As of 2017, the field of threat hunting was still new for the majority of IT security organizations. The SANS Institute report noted 45% of the respondents to its threat hunting survey do their hunting on an ad hoc basis.

Threat hunters typically work within a security operations center (SOC) and take the lead role in an enterprise's threat detection and incident response activities. Threat hunting may be assigned as an additional duty to one or more security engineers within a SOC, or a SOC may dedicate security engineers to full-time threat hunting duties.

Additional options for creating a threat hunting team include rotating security engineers into the threat hunting role on a temporary basis and then having them return to their usual jobs within the SOC. Internally, threat hunters are often managed by an organization's CISO, who works with the CIO to coordinate enterprise security.





Virtual private cloud vs. private cloud differences explained
Posted by Thang Le Toan on 05 June 2018 12:55 AM

Virtual private clouds and private clouds differ in terms of architecture, the provider and tenants, and resource delivery. Decide between the two models based on these distinctions.

Organizations trying to decide between virtual private cloud vs. private cloud must first define what they want to accomplish. A private cloud gives individual business units more control over the IT resources allocated to them, whereas a virtual private cloud offers organizations a different level of isolation.





CDN (content delivery network)
Posted by Thang Le Toan on 23 May 2018 02:25 AM

A CDN (content delivery network), also called a content distribution network, is a group of geographically distributed and interconnected servers that provide cached internet content from a network location closest to a user to accelerate its delivery. The primary goal of a CDN is to improve web performance by reducing the time needed to transmit content and rich media to users' internet-connected devices.

Content delivery network architecture is also designed to reduce network latency, which is often caused by hauling traffic over long distances and across multiple networks. Eliminating latency has become increasingly important, as more dynamic content, video and software as a service are delivered to a growing number of mobile devices.

CDN providers house cached content in either their own network points of presence (POP) or in third-party data centers. When a user requests content from a website, if that content is cached on a content delivery network, the CDN redirects the request to the server nearest to that user and delivers the cached content from its location at the network edge. This process is generally invisible to the user.

A wide variety of organizations and enterprises use CDNs to cache their website content to meet their businesses' performance and security needs. The need for CDN services is growing, as websites offer more streaming video, e-commerce applications and cloud-based applications where high performance is key. Few CDNs have POPs in every country, which means many organizations use multiple CDN providers to make sure they can meet the needs of their business or consumer customers wherever they are located.

In addition to content caching and web delivery, CDN providers are capitalizing on their presence at the network edge by offering services that complement their core functionalities.  These include security services that encompass distributed denial-of-service (DDoS) protection, web application firewalls (WAFs) and bot mitigation; web and application performance and acceleration services; streaming video and broadcast media optimization; and even digital rights management for video. Some CDN providers also make their APIs available to developers who want to customize the CDN platform to meet their business needs, particularly as webpages become more dynamic and complex.

How does a CDN work?

The process of accessing content cached on a CDN network edge location is almost always transparent to the user. CDN management software dynamically calculates which server is located nearest to the requesting user and delivers content based on those calculations. The CDN server at the network edge communicates with the content's origin server to make sure any content that has not been cached previously is also delivered to the user. This not only shortens the distance that content travels, but also reduces the number of hops a data packet must make. The result is less packet loss, optimized bandwidth and faster performance, which minimizes timeouts, latency and jitter, and it improves the overall user experience. In the event of an internet attack or outage, content hosted on a CDN server will remain available to at least some users.
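
A highly simplified sketch of that flow: choose the edge location with the lowest measured latency to the requesting user, serve the object from its cache when present, and otherwise fetch it once from the origin and cache it at the edge. Real CDNs do the "nearest server" step with DNS or anycast routing and use far more sophisticated cache policies; the latencies and cache contents below are hypothetical.

    # Measured round-trip latency (ms) from the user to each edge PoP (hypothetical).
    POP_LATENCY = {"frankfurt": 12, "london": 19, "virginia": 95}
    EDGE_CACHE = {"frankfurt": {"/logo.png": b"<png bytes>"}, "london": {}, "virginia": {}}

    def fetch_from_origin(path):
        return b"<content fetched from the origin server for %s>" % path.encode()

    def serve(path):
        pop = min(POP_LATENCY, key=POP_LATENCY.get)   # nearest edge location
        cache = EDGE_CACHE[pop]
        if path not in cache:                         # cache miss: one trip to the origin
            cache[path] = fetch_from_origin(path)
        return cache[path]                            # later requests are served at the edge

    print(serve("/logo.png"))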

Organizations buy services from CDN providers to deliver their content to their users from the nearest location. CDN providers either host content themselves or pay network operators and internet service providers (ISPs) to host CDN servers. Beyond placing servers at the network edge, CDN providers use load balancing and solid-state hard drives to help data reach users faster. They also work to reduce file sizes using compression and special algorithms, and they are deploying machine learning and AI to enable quicker load and transmission times.

History of CDNs

The first CDN was launched in 1998 by Akamai Technologies soon after the public internet was created. Akamai's original techniques serve as the foundation of today's content distribution networks. Because content creators realized they needed to find a way to reduce the time it took to deliver information to users, CDNs were seen as a way to improve network performance and to use bandwidth efficiently. That basic premise remains important, as the amount of online content continues to grow.

So-called first-generation CDNs specialized in e-commerce transactions, software downloads, and audio and video streaming. As cloud and mobile computing gained traction, second-generation CDN services evolved to enable the efficient delivery of more complex multimedia and web content to a wider community of users via a more diverse mix of devices. As internet use grew, the number of CDN providers multiplied, as have the services CDN companies offer.

New CDN business models also include a variety of pricing methods that range from charges per usage and volume of content delivered to a flat rate or free for basic services, with add-on fees for additional performance and optimization services. A wide variety of organizations use CDN services to accelerate static and dynamic content, online gaming and mobile content delivery, streaming video and a number of other uses.

What are the main benefits of using a CDN?

The primary benefits of traditional CDN services include the following:

  • Improved webpage load times to prevent users from abandoning a slow-loading site or e-commerce application where purchases remain in the shopping cart;
  • Improved security from a growing number of services that include DDoS mitigation, WAFs and bot mitigation;
  • Increased content availability because CDNs can handle more traffic and avoid network failures better than the origin server that may be located several networks away from the end user; and
  • A diverse mix of performance and web content optimization services that complement cached site content.

How do you manage CDN security?

A representative list of CDN providers in this growing market includes the following:

Why you need to know about CDN technology

A wide variety of organizations use CDNs to meet their businesses' performance and security needs. The need for CDN services is growing, as websites offer more streaming video, e-commerce applications and cloud-based applications, where high performance is essential.

CDN technology is also an ideal method to distribute web content that experiences surges in traffic, because distributed CDN servers can handle sudden bursts of client requests at one time over the internet. For example, spikes in internet traffic due to a popular event, like online streaming video of a presidential inauguration or a live sports event, can be spread out across the CDN, making content delivery faster and less likely to fail due to server overload.

Because it duplicates content across servers, CDN technology inherently serves as extra storage space and remote data backup for disaster recovery plans.

 

AWS GPU instance type slashes cost of streaming apps

The cost of graphics acceleration can often make the technology prohibitive, but a new AWS GPU instance type for AppStream 2.0 makes that process more affordable.


Amazon AppStream 2.0, which enables enterprises to stream desktop apps from AWS to an HTML5-compatible web browser, delivers graphics-intensive applications for workloads such as creative design, gaming and engineering that rely on DirectX, OpenGL or OpenCL for hardware acceleration. The managed AppStream service eliminates the need for IT teams to recode applications to be browser-compatible.

The newest AWS GPU instance type for AppStream, Graphics Design, cuts the cost of streaming graphics applications by up to 50%, according to the company. AWS customers can launch Graphics Design GPU instances or create a new instance fleet with the Amazon AppStream 2.0 console or AWS software development kit. AWS' Graphics Design GPU instances come in four sizes that range from two to 16 virtual CPUs and 7.5 to 61 gibibytes (GiB) of system memory, and run on AMD FirePro S7150x2 Server GPUs with AMD Multiuser GPU technology.

Developers can now also select between two types of Amazon AppStream instance fleets in a streaming environment. Always-On fleets provide instant access to apps but charge fees for every instance in the fleet. On-Demand fleets charge fees for instances when end users are connected, plus an hourly fee, but there is a delay when an end user accesses the first application.
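
For teams scripting this through the SDK rather than the console, a fleet on one of the new Graphics Design sizes can be created roughly as follows with boto3. The fleet name, image name and subnet ID are placeholders, and the exact instance-type string should be verified against the current AppStream 2.0 documentation.

    import boto3

    appstream = boto3.client("appstream", region_name="us-east-1")

    appstream.create_fleet(
        Name="design-apps-fleet",                      # hypothetical fleet name
        ImageName="my-graphics-apps-image",            # hypothetical AppStream image
        InstanceType="stream.graphics-design.xlarge",  # one of the Graphics Design sizes
        FleetType="ON_DEMAND",                         # billed while users are connected
        ComputeCapacity={"DesiredInstances": 2},
        VpcConfig={"SubnetIds": ["subnet-0123456789abcdef0"]},
    )
    appstream.start_fleet(Name="design-apps-fleet")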

New features and support

In addition to the new AWS GPU instance type, the cloud vendor rolled out several other features this month, including:

  • ELB adds network balancer. AWS Network Load Balancer helps maintain low latency during spikes on a single static IP address per Availability Zone. Network Load Balancer — the second offshoot of Elastic Load Balancing features, following Application Load Balancer — routes connections to Virtual Private Cloud-based Elastic Compute Cloud (EC2) instances and containers (see the provisioning sketch after this list).
  • New edge locations on each coast. Additional Amazon CloudFront edge locations in Boston and Seattle improve end user speed and performance when they interact with content via CloudFront. AWS now has 95 edge locations across 50 cities in 23 countries.
  • X1 instance family welcomes new member. The AWS x1e.32xlarge instance joins the X1 family of memory-optimized instances, with the most memory of any EC2 instance — 3,904 GiB of DDR4 instance memory — to help businesses reduce latency for large databases, such as SAP HANA. The instance is also AWS’ most expensive at about $16-$32 per hour, depending on the environment and payment model.
  • AWS Config opens up support. The AWS Config service, which enables IT teams to manage service and resource configurations, now supports both DynamoDB tables and Auto Scaling groups. Administrators can integrate those resources to evaluate the health and scalability of their cloud deployments.
  • Start and stop on the Spot. IT teams can now stop Amazon EC2 Spot Instances when an interruption occurs and then start them back up as needed. Previously, Spot Instances were terminated when prices rose above the user-defined level. AWS saves the EBS root device, attached volumes and the data within those volumes; those resources restore when capacity returns, and instances maintain their ID numbers.
  • EC2 expands networking performance. The largest instances of the M4, X1, P2, R4, I3, F1 and G3 families now use Elastic Network Adapter (ENA) to reach a maximum bandwidth of 25 Gb per second. The ENA interface enables both existing and new instances to reach this capacity, which boosts workloads reliant on high-performance networking.
  • New Direct Connect locations. Three new global AWS Direct Connect locations allow businesses to establish dedicated connections to the AWS cloud from an on-premises environment. New locations include: Boston, at Markley, One Summer Data Center for US-East-1; Houston, at CyrusOne West I-III data center for US-East-2; and Canberra, Australia, at NEXTDC C1 Canberra data center for AP-Southeast-2.
  • Role and policy changes. Several changes to AWS Identity and Access Management (IAM) aim to better protect an enterprise’s resources in the cloud. A policy summaries feature lets admins identify errors and evaluate permissions in the IAM console to ensure each action properly matches to the resources and conditions it affects. Other updates include a wizard for admins to create the IAM roles, and the ability to delete service-linked roles through the IAM console, API or CLI — IAM ensures that no resources are attached to a role before deletion.
  • Six new data streams. Amazon Kinesis Analytics, which enables businesses to process and query streaming data in an SQL format, has six new types of stream processes to simplify data processing: STEP(), LAG(), TO_TIMESTAMP(), UNIX_TIMESTAMP(), REGEX_REPLACE() and SUBSTRING(). AWS also increased the service’s capacity to process higher data volume streams.
  • Get DevOps notifications. Additional notifications from AWS CodePipeline for stage or action status changes enable a DevOps team to track, manage and act on changes during continuous integration and continuous delivery. CodePipeline integrates with Amazon CloudWatch to enable Amazon Simple Notification Service messages, which can trigger an AWS Lambda function in response.
  • AWS boosts HIPAA eligibility. Amazon’s HIPAA Compliance Program now includes Amazon Connect, AWS Batch and two Amazon Relational Database Service (RDS) engines, RDS for SQL Server and RDS for MariaDB — all six RDS engines are HIPAA eligible. AWS customers that sign a Business Associate Agreement can use those services to build HIPAA-compliant applications.
  • RDS for Oracle adds features. The Amazon RDS for Oracle engine now supports Oracle Multimedia, Oracle Spatial and Oracle Locator features, with which businesses can store, manage and retrieve multimedia and multi-dimensional data as they migrate databases from Oracle to AWS. The RDS Oracle engine also added support for multiple Oracle Application Express versions, which enables developers to build applications within a web browser.
  • Assess RHEL security. Amazon Inspector expanded support for Red Hat Enterprise Linux (RHEL) 7.4 assessments, to run Vulnerabilities & Exposures, Amazon Security Best Practices and Runtime Behavior Analysis scans in that RHEL environment on EC2 instances.
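
As one concrete example from the list above, the new Network Load Balancer can be provisioned through the SDK roughly as shown below. The name and subnet ID are placeholders, and a complete setup would also create a target group and listener and register the EC2 instances or containers behind it.

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    # A Network Load Balancer provides one static IP address per Availability Zone.
    response = elbv2.create_load_balancer(
        Name="demo-network-lb",                        # hypothetical name
        Type="network",
        Scheme="internet-facing",
        Subnets=["subnet-0123456789abcdef0"],          # hypothetical subnet
    )
    print(response["LoadBalancers"][0]["DNSName"])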

 

BPM in cloud evolves to suit line of business, IoT

While on-premises BPM tools have caused a tug of war between lines of business and IT, the cloud helps appease both sides. Here's what to expect from this cloud BPM trend and more.

Business process management tools rise in importance as companies try to make better use -- and reuse -- of IT assets. And, when coupled with cloud, this type of software can benefit from a pay-as-you-go model for more efficient cost management, as well as increased scalability.

 

As a result, cloud-based BPM has become a key SaaS tool in the enterprise. Looking forward, the growth of BPM in cloud will drive three major trends that enterprise users should track.

Reduced bias

BPM is designed to encourage collaboration between line departments and IT, but the former group often complains that BPM tools hosted in the data center favor the IT point of view in both emphasis and design. To avoid this and promote equality between these two groups, many believe that BPM tools have to move to neutral territory: the cloud.

Today, BPM supports roughly a dozen different roles and is increasingly integrated with enterprise architecture practices and models. This expands the scope of BPM software, as well as the number of non-IT professionals who use it. Collaboration and project management, for example, account for most of the new features in cloud BPM software.

Collaboration features in cloud-based BPM include project tools and integration with social networks. While business people widely use platforms like LinkedIn for social networking, IT professionals use other wiki-based tools. Expect to see a closer merger between the two.

This push for a greater line department focus in BPM could also divide the BPM suites themselves. While nearly all the cloud BPM products are fairly broad in their application, those from vendors with a CIO-level sales emphasis, such as IBM's Business Process Manager on Cloud or Appian, focus more on IT. NetSuite, on the other hand, is an example of cloud BPM software with a broader organizational target.

Software practices influence BPM

Cloud, in general, affects application design and development, which puts pressure on BPM to accommodate changes in software practices. Cloud platforms, for example, have encouraged a more component-driven vision for applications, which maps more effectively to business processes. This will be another factor that expands line department participation in BPM software.

BPM in cloud encourages line organizations to take more control over applications. The adoption of third-party tools, rather than custom development, helps them target specific business problems. This, however, is a double-edged sword: It can improve automated support for business processes but also duplicate capabilities and hinder workflow integration among organizations. IT and line departments will have to define a new level of interaction.

IoT support

The third trend to watch around BPM in cloud involves internet of things (IoT) and machine-to-machine communications. These technologies presume that sensors will activate processes, either directly or through sensor-linked analytics. This poses a challenge for BPM, because it takes human judgment out of the loop and requires instead that business policies anticipate human review of events and responses. That shifts the emphasis of BPM toward automated policies, which, in the past, has led to the absorption of BPM into things like Business Process Modeling Language, and puts the focus back on IT.


In theory, business policy automation has always been within the scope of BPM. But, in practice, BPM suites have offered only basic support for policy automation or even for the specific identification of business policies. It's clear that this will change and that policy controls to guide IoT deployments will be built into cloud-based BPM.





Protection from ransomware requires layered backup, DR
Posted by Thang Le Toan on 17 May 2018 10:57 PM

A strategy for protection and successful recovery from ransomware includes everything from monitoring tools to offline storage. Organizations should use multiple methods.

 

CHICAGO -- The VeeamON session on protection from ransomware Wednesday started with a question for attendees: How many had experienced a ransomware attack at their organization?

 

Dozens of hands went up.

Ransomware attacks continue to make news. In just the last couple of months, high-profile victims included the city of Atlanta and a school district in Massachusetts. Many attacks, though, go unreported or unmentioned to the general public.

A layered defense is important to be able to protect and recover from ransomware, Rick Vanover, Veeam's director of product strategy, told the packed room of close to 200 people.

Backup, DR, education all play a role

Using offline storage to create an air gap is arguably the most technically efficient method of protection against ransomware. Tape is a good fit for air gapping, because you can take it off site, where it is not connected to the network or any other devices.

"The one reason I love tape is its resiliency in this situation," Vanover said.

Other offline or semioffline storage choices include replicated virtual machines, primary storage snapshots, Veeam Cloud Connect backups that aren't connected directly to the backup infrastructure and rotating hard drives.

Educating users is another major component of a comprehensive strategy for protection from ransomware.

"No matter how often you do it, you can't do it enough," said Joe Marton, senior systems engineer at Veeam.

Advice for users includes being overly careful about clicking links and attachments and telling IT immediately if there appears to be an issue.

IT should have visibility into suspicious behavior using monitoring capabilities. For example, Veeam ONE includes a predefined alarm that triggers if it detects possible ransomware activity.

Organizations as a whole should continue to follow the standard "3-2-1" backup plan of having three different copies of data on two different media types, one of which is off site or offline.
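
A toy illustration of checking a backup inventory against that 3-2-1 rule; the inventory structure is invented for the example, and a real check would pull this information from the backup software's own reporting.

    def satisfies_3_2_1(copies):
        """copies: list of dicts such as {"media": "disk", "offsite": False}."""
        enough_copies = len(copies) >= 3
        two_media_types = len({c["media"] for c in copies}) >= 2
        one_offsite = any(c["offsite"] for c in copies)
        return enough_copies and two_media_types and one_offsite

    backups = [
        {"media": "disk", "offsite": False},   # primary backup repository
        {"media": "tape", "offsite": True},    # air-gapped copy taken off site
        {"media": "cloud", "offsite": True},   # cloud copy kept outside the backup network
    ]
    print(satisfies_3_2_1(backups))  # True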

From a disaster recovery angle, DR isn't just for natural disasters.

"Ransomware can be a disaster," Marton said.

That means an organization's DR process applies to ransomware attacks.

The organization should also document its recovery plan, specifically one for ransomware incidents.

Matt Fonner, a severity-one engineer on the Veeam support team, said he deals with two or three restores from ransomware attacks every week.

Ransomware, protection continue to evolve


Vanover said later that he spent about 25 minutes following the presentation talking with people about attacks and protection from ransomware. One person told him that her SMB had been hit and decided to pay the ransom, rather than deal with an inferior restore program -- that wasn't Veeam.

Vanover said organizations should classify data to figure out which level of resiliency is needed. Not everything needs to be in that most expensive tier.

Vanover said the ransomware landscape has changed from a year ago, when he also gave a presentation on ransomware protection at VeeamON.

"The ransomware story does change every time you write it," he said.

One new twist in the story is that ransomware is attacking backups themselves. In a common scenario, ransomware will infiltrate a backup and stay dormant until the data is recovered back to the network following an attack on primary storage.

That's where offline storage comes in, Vanover said.

Data protection vendors are also starting to add specific features to protect backups from ransomware. For example, Asigra Cloud Backup has embedded malware engines in the backup and recovery stream, and CloudBerry Backup detects possible cases of ransomware in backups.
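
Vendor implementations differ and are not documented here, but one common heuristic for spotting ransomware inside a backup is to look for files whose contents have become almost perfectly random, which is what encryption produces. A minimal entropy check of that kind might look like the sketch below; the 7.5-bit threshold is a rough rule of thumb, not any vendor's actual algorithm.

    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        """Bits of entropy per byte (0.0 = constant data, 8.0 = uniformly random)."""
        if not data:
            return 0.0
        total = len(data)
        return -sum((n / total) * math.log2(n / total) for n in Counter(data).values())

    def looks_encrypted(path: str, threshold: float = 7.5) -> bool:
        """Flag a backed-up file whose content entropy suggests encryption."""
        with open(path, "rb") as f:
            sample = f.read(1 << 20)  # inspect the first 1 MiB
        return shannon_entropy(sample) > threshold

    # Run looks_encrypted() over the files in a new restore point and alert if
    # many ordinary documents suddenly read as near-random data.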

Vanover said if he drew up another presentation in a month or two, it would probably be different.

"We have to always evolve to the threatscape," he said.





Sage adds Intacct financial management software to its ERP
Posted by Thang Le Toan on 20 March 2018 12:52 AM

Sage says the move will boost its cloud financial management software and U.S. presence. Analysts think it's a good technology move but are unsure about the market impact.

Sage Software intends to expand both its cloud offerings and its customer base in North America.

Sage, an ERP vendor based in Newcastle upon Tyne, U.K., is acquiring Intacct, a San Jose-based vendor of financial management software, for $850 million, according to the company.

Sage's core products include the Sage X3 ERP system, the Sage One accounting and invoicing application and Sage Live real-time accounting software. The company's products are aimed primarily at SMBs, and Sage claims that it has just over 6 million users worldwide, with the majority of these in Europe.

Intacct provides SaaS financial management software to SMBs, with most of its customer base in North America, according to the company.

The move to acquire Intacct demonstrates Sage's determination to "win the cloud" and expand its U.S. customer base, according to a Sage press release announcing the deal.

"Today we take another major step forward in delivering our strategy and we are thrilled to welcome Intacct into the Sage family," Stephen Kelly, Sage CEO, said in the press release. "The acquisition of Intacct supports our ambitions for accelerating growth by winning new customers at scale and builds on our other cloud-first acquisitions, strengthening the Sage Business Cloud. Intacct opens up huge opportunities in the North American market, representing over half of our total addressable market."

Combining forces makes sense for Intacct because the company shares the same goals as Sage, according to Intacct CEO Robert Reid.

"We are excited to become part of Sage because we are relentlessly focused on the same goal -- to deliver the most innovative cloud solutions for our customers," Reid said in the press release. "Intacct is growing rapidly in our market and we are proud to be a recognized customer satisfaction leader across midsize, large and global enterprise businesses. By combining our strengths with those of Sage, we can jointly accelerate success for our customers."

Intacct brings real cloud DNA to financial management software

Intacct's specialty in cloud financial management software should complement Sage's relatively weak financial functionality, according to Cindy Jutras, president of the ERP consulting firm Mint Jutras.

"[Intacct] certainly brings real cloud DNA, and a financial management solution that would be a lot harder to grow out of than the solutions they had under the Sage One brand," Jutras said. "It also has stronger accounting than would be embedded within Sage X3. I would expect X3 to still be the go-to solution for midsize manufacturers since that was never Intacct's target, but Intacct may very well become the go-to ERP for service companies, like professional services."

Jutras also mentioned that Intacct was one of the first applications to address the new ASC 606 revenue recognition rules, something that Sage has not done yet. Sage's cloud strategy has been murky up to this point, but Jutras was unsure that this move will clarify that.

"It doesn't seem any of its existing products -- except their new Sage Live developed on the Salesforce platform -- are multi-tenant SaaS and up until recently they seemed to be going the hybrid route by leaving ERP on premises and surrounding it with cloud services," she said.

The deal should strengthen Sage's position in the SMB market, according to Chris Devault, manager of software selection at Panorama Consulting Solutions.

"This is a very good move for Sage, as it will bring a different platform and much needed technology to help Sage round out their small to mid-market offerings," Devault said.

Getting into the U.S. market

Overall it appears to be a positive move for Sage, both from a technology and market perspective, according to Holger Mueller, vice president and principal analyst at Constellation Research Inc.

"It's a good move by Sage to finally tackle finance in the cloud and get more exposure to the largest software market in the world, the U.S.," Mueller said. "But we see more than finance moving to the cloud, as customers are starting to look for or demand a complete suite to be available on the same platform. Sage will have to move fast to integrate Intacct and get to a compelling cloud suite roadmap."

Time will also tell if this move will position Sage better in the SMB ERP landscape.

"It's early to say, but it puts them in the SMB category with Oracle NetSuite, FinancialForce, Epicor and Acumatica at the lower end," Mueller said.






The risk analytics software your company really needs
Posted by Thang Le Toan on 20 March 2018 12:49 AM

Risk analytics tools are more and more critical for CFOs seeking to improve operational efficiency. Just one problem: It can be hard to figure out just what those tools are.

Just as the use of big data is now a documented phenomenon across corporate America, risk analytics -- that is, using analytics to collect, analyze and measure real-time data to predict risk and make better business decisions -- is also becoming more popular.

That's according to Sanjaya Krishna, U.S. digital risk consulting leader at KPMG in Washington, D.C.

By using risk analytics software, CFOs can improve operational efficiency and keep their companies' risk exposure at acceptable levels. But where exactly does a CFO go to "get" risk analytics tools?

The search for risk analytics software

"Risk analytics is a fairly broad term, so there are a number of things that come to mind when we talk about risk analytics," Krishna said. "There are a number of specialized risk analytics products. There are also broader analytic packages that can … 'check the risk analytics box' to a certain extent, though the package isn't built to be a risk analytics solution."

There are products, such as KPMG Risk Front, that focus on providing customized risk analytics based on public internet commentary, Krishna said. And KPMG's Continuous Monitoring product provides for customized risk analytics based on internal transactional data.


There are also a number of established enterprise governance, risk and compliance packages that give companies a way of housing and analyzing all sorts of identified risks at the enterprise level or within certain business areas, he said.

Finally, there are highly specialized, industry-specific risk analytic tools, especially in the financial services industry, according to Krishna.

Risk analytics tools, regardless of the industry, have been around for a while, said Danny Baker, vice president of market strategy, financial and risk management solutions at Fiserv Inc., a provider of financial services technology based in Brookfield, Wis.

"They have historically been purposed for less strategic items -- they were seen as just a checkbox to please the regulators," he said.

Now, though, risk analytics software has evolved from tactical point solutions into tools that help organizations optimize their strategic futures.

"Especially for banks and credit unions, risk analytics tools are focused more on strategy and the need to integrate with other departments, like finance," Baker said. "The integration across departments is key."

But it's not just the tools that are important.

Sometimes a company may even use a database as a risk analytics tool, said Ken Krupa, enterprise CTO at MarkLogic Corp., an Enterprise NoSQL database provider in San Carlos, Calif.

Taking the broad approach to the data quality issue

"There are, indeed, specialized products, as well as packages that play a role in risk analytics," Krupa said. "These third-party suites of tools do a lot of the math on where there are risks, but if the math is based on bad or incomplete data, risk cannot be adequately addressed."

What's more, a company often doesn't have a clear picture of the quality of the data it's working with, because making that data available from upstream systems depends on complex extract, transform and load (ETL) processes supported by a large team of developers with varying skill sets, he said.

There is therefore an inherent risk in not having transparent access to a 360-degree view of the data, a problem caused mainly by data silos. Leveraging a database that can successfully integrate those silos can go a long way toward minimizing data quality risks, according to Krupa.

"You may not initially think of a database as a risk analytics tool, but the right kind of database serves a critical role in organizing all of the inputs that risk analytics tools use," he said. "The right type of database -- one that minimizes ETL dependency and provides a clear view of all kinds of data, like that offered by MarkLogic -- can make risk analytics better, faster and with less cost."

Anand Venugopal, head of StreamAnalytix product management and go-to-market at Impetus Technologies Inc. in Los Gatos, Calif., concurred with Krupa that bringing all a company's data into one place is critical to enabling better risk-based business decisions.

Since many organizations are in the process of modernizing their infrastructures -- particularly around analytics platforms -- they are moving away from point solutions if they can, he said.

The new paradigm is bringing all the relevant information together -- if not in one place, then at least with the mechanisms to bring it together on demand -- and then doing the analytics in one place, Venugopal said.

"So, what is beyond proven is that analytics and decision-making [are] more accurate not with more advanced algorithms, but with more data, i.e., diverse data, and more data sources, i.e., 25 different data sources as opposed to five different data sources," he said.

It all points to the fact that even with moderate algorithms, more data gives organizations better results than trying to use "rocket science algorithms" with limited data, Venugopal said.

"What that means to enterprise technology is that they are building risk platforms on top of the modern data warehouses, which combines a variety of internal and external data sets, and trying to combine real-time data feeds -- real-time triggers, real-time market factors, currency risk, etc. -- which was not part of the previous generation's capabilities," he said.

Single-point products can only address limited portions of this because that's how they're designed; enterprise risk can only be covered with a broader approach, according to Venugopal.

"I think the trend [in enterprises] is more toward building sophisticated risk strategies and applications, and they're building out those and they're using core big data technology components like the Hadoop stack, like the Spark stack and tools like Impetus' Extreme Analytics," he said.

Custom risk analytics software and other considerations

Organizations looking to implement technology to mitigate risk have to consider a few additional things, including the usability and feature set, according to Rajiv Shah, senior solutions architect for GigaSpaces Technologies Inc. in New York City.


"For instance, high-volume traders need a solution that won't interfere with the data sync that is critical to being up to the microsecond," he said.

A product that offers multilevel dashboarding is also key, according to Shah.

For example, the data a CFO needs to see is far different from what, say, a risk or compliance officer needs, he said.

"Enterprises should consider a solution that takes these differences into account, making sure that a dashboard can become detailed and granular, while also offering a 50,000-foot view," Shah said. "And a strong risk mitigation strategy and tool set should be able to identify and simulate a wide range of scenarios."

According to Fiserv's Baker, it's important that a risk mitigation technology doesn't hinder a company's regular operations.

"For larger organizations, it often becomes critical to build your own solution to meet the needs," he said.

Mike Juchno, partner at Ernst & Young Advisory Services, agreed that there is a custom tool component to risk analytics.

"Many of our clients already have these tools -- they're some sort of predictive analytics tool like SPSS, like SAS, like R, and some visualization on top of them, like Tableau or Power BI," he said. "So, we are able to build something custom to deal with a risk that may be unique to them or their industry or their particular situation. So, we typically find that it's a custom approach."

When it comes to looking for an off-the-shelf product, CFOs often hear about risk analytics tools through peer organizations that come together to share information about the tools they use.

"Of course, you're going to also look toward other companies or competitors that are doing risk management and performance management well and see what tools they have in place," Baker said. "The most high-performing clients I see embed their tools into not only solving current risk, but also expecting and forecasting future risk."

Although an organization can go to Fiserv and ask for a menu of risk analytics tools, it's more successful if both the company and Fiserv drill down into what the organization is trying to accomplish and customize the tools from there, according to Baker.

Most organizations want to make better strategic decisions and improve their forward-looking strategic discipline and processes, because the challenges of growth are greater now, he said.

The focus has shifted to agility and efficiency when implementing risk analytics tools, Baker said.

"The high-performing Fiserv clients I work with have integrated risk analytics tools into finance operations," he said. "These advanced solutions offer an integrative solution that also forecasts and plans for the strategic future."

Organizations are increasingly thoughtful about their risk processes, he said. And in recent years, questions to vendors have evolved from "what are your risk tools?" to "how do I get better information to make decisions for the future?"


Read more »



Mar
20

Regulatory compliance, loan covenants and currency risk are common targets, as organizations sift through ERP and other data looking for patterns that might give early warning.

As CFO of TIBCO Software Inc., Tom Berquist spends a lot of time working on risks, such as the failure to live up to loan covenants. Berquist uses risk analytics software to stay on top of things.

"As a private equity-backed company -- we're owned by Vista Equity Partners -- we carry a large amount of debt," he said. "We have covenants associated with that and they're tied to a number of our financial metrics." Consequently, a major part of Berquist's risk-management process is to stay in front of what's going on with the business. If there's going to be softness in TIBCO's top-line revenue, he has to make sure to manage the company's cost structure so it doesn't violate any of the covenants. Berquist said he has a lot of risk analytics tied to that business problem.

The intent of risk analytics is to give CFOs and others in the C-suite a complete, up-to-date risk profile "as of now," said Thomas Frénéhard, director of solution management, governance, risk and compliance at software vendor SAP.

"There's no need to wait for people to compile information at the end of the quarter and send you [information] that's outdated," Frénéhard said. "What CFOs want now is their financial exposure today."

Looking for patterns in corporate data

Risk analytics involves the use of data analysis to obtain insights into various risks in financial, operational and business processes, as well as to monitor risks in ways that can't be achieved through more traditional approaches to risk management, financial controls and compliance management, said John Verver, a strategic advisor to ACL Services, a maker of governance, risk and compliance software based in Vancouver, B.C.

Some of the most common uses of risk analytics are in core financial processes and core ERP application areas, including the purchase-to-pay and order-to-cash cycles, revenue and payroll -- "analyzing and testing the detailed transactions, for example, to look for indications of fraud [and] indications of noncompliance with regulatory requirements and controls," Verver said.
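
The detailed transaction testing Verver describes can be approximated in a few lines of code. The sketch below scans a hypothetical purchase-to-pay extract for two common red flags: duplicate invoice payments and payments above an approval threshold. The file, fields and threshold are assumptions for illustration, not a description of ACL's product.

```python
# Illustrative purchase-to-pay tests; column names and threshold are assumptions.
import pandas as pd

payments = pd.read_csv("ap_payments.csv")  # vendor_id, invoice_no, amount, approver

# Red flag 1: the same vendor/invoice/amount paid more than once.
duplicates = payments[payments.duplicated(
    subset=["vendor_id", "invoice_no", "amount"], keep=False)]

# Red flag 2: payments above the approval threshold set in local policy.
APPROVAL_LIMIT = 50_000
over_limit = payments[payments["amount"] > APPROVAL_LIMIT]

print(f"possible duplicate payments: {len(duplicates)}")
print(f"payments above {APPROVAL_LIMIT:,}: {len(over_limit)}")
```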


Using advanced risk management -- i.e., risk analytics software -- will allow CFOs to access data from complex systems, including ERP environments, and easily identify key areas of risk, said Dan Zitting, chief product officer of ACL Services.

"The technology can be set up to pull data from the HR, sales and billing departments, for example, and cross-reference the information within the program's interface," Zitting said in an email. "Once the data is in one place, CFOs should be able to easily visualize the data in a risk dashboard that summarizes activity and flags changes in risk."

Berquist also uses risk analytics to manage foreign currency risk for TIBCO, which is an international company, as well as risks connected to managing cash.

"Every month I close the books, I get all my actuals and I export them all into my data warehouse and I load up my dashboards. I happen to use TIBCO Spotfire [business intelligence software], but you can load them up in any risk analytics tool," he said. "Then I review where we stand on everything that has happened so far. Are expenses in line? Where does our revenue stand? What happened with currency? What happened with cash? How does the balance sheet look? That's the first part of the problem."

The second part is forecasting what will happen with TIBCO's expenses, which helps Berquist ensure that the company is going to generate sufficient cash to avoid violating covenants and mitigate the effects of offshore currency fluctuations.
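
As a rough illustration of the covenant monitoring Berquist describes, the sketch below projects EBITDA under a few revenue scenarios and checks a hypothetical leverage covenant (net debt to EBITDA must stay below 4.0x). The figures, the covenant and the scenarios are invented; TIBCO's actual metrics and Spotfire dashboards are not represented here.

```python
# Hypothetical covenant check across revenue scenarios; all numbers are invented.
NET_DEBT = 900.0          # $M
MAX_LEVERAGE = 4.0        # covenant: net debt / EBITDA must stay below this
BASE_REVENUE = 1_000.0    # $M, current-year plan
EXPENSES = 750.0          # $M, planned cost structure

scenarios = {"upside": 1.05, "base": 1.00, "soft top line": 0.92}

for name, factor in scenarios.items():
    revenue = BASE_REVENUE * factor
    ebitda = revenue - EXPENSES          # simplified: no D&A or other adjustments
    leverage = NET_DEBT / ebitda
    status = "within covenant" if leverage < MAX_LEVERAGE else "BREACH - adjust cost structure"
    print(f"{name:>14}: EBITDA {ebitda:7.1f}  leverage {leverage:4.2f}x  {status}")
```

In the soft top-line scenario the modeled leverage breaches the hypothetical covenant, which is exactly the kind of early warning that prompts a CFO to rein in the cost structure before the quarter closes.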

Berquist said there are general-purpose risk management technologies, some of which are tied to such things as identifying corporate fraud, but there is also company- or industry-specific risk analytics software.

"My big concern is financial risk, so most of my [use of risk analytics] is around those types of measures," he said.

Risk analytics software helps CFOs make better decisions about the future; without an approach that lets them run different scenarios and determine potential outcomes, they end up making gut-instinct, seat-of-the-pants decisions, according to Berquist.

Sharing a similar view is Tom Kimner, head of global product marketing and operations for risk management at SAS Institute Inc., a provider of analytics software, based in Cary, N.C.

"What makes risk analytics a little bit different, in some cases, is that risk generally deals with the future and uncertainty," Kimner said.

Cristina Silingardi, a former CFO and treasurer at HamaTech USA Inc., a manufacturer of equipment for the semiconductor industry, concurred with Berquist that risk assessment can no longer be done the way it used to be, based on individuals' knowledge of their businesses, their instincts and a few key data points.

"There is so much data right now, and the biggest change I see is that now this data encompasses structured internal company data as well as unstructured external data," said Silingardi, now managing director of vcfo Holdings, a consulting firm based in Austin, Texas, that specializes in finance, recruiting and human resources.

CFOs started getting more involved with risk analytics when they needed better revenue metrics to understand predictability and trends, she said. Risk analytics software went beyond traditional risk-management tools by adding real-time reporting that puts key metrics right in front of CFOs and updates them all day long. Such data can help CFOs keep an eye on regulatory and contractual noncompliance from vendors, according to Silingardi.

"It helps them with pattern recognition, but only if [they] can translate that to really good visual dashboards that are looking at this data. [CFOs] used to focus only on a few things. Now, [they're] using all this data to get a much better picture," she said.

Forward-thinking mindset is key

Historically, risk analysis and assessment has tended to be a reactive and subjective process, according to Daniel Smith, director of data science and innovation at Syntelli Solutions Inc., a data analytics company based in Charlotte, N.C. After something bad happens, the tendency is for people to say, "'Let's investigate it,' or, 'Let's all huddle up and think about what could happen and create a bunch of speculative scenarios,'" he said.

That's exactly the way many of SAP's customers still look at risk: through the rear-view mirror, said Bruce McCuaig, director of governance, risk and compliance solution marketing at SAP.

"Once or twice a year they report to the board and they look backwards, but what I think we're seeing now is the ability to look forward and report frequently online and in real time," McCuaig said.

In modern analytics and modern business, companies want to focus more on proactive, predictive and objective risk, Smith said. While focusing on risk in this manner gives CFOs visibility into the future, many don't have the pipeline of data and a single source of consolidated data to enable them to do that.

"They need a system, a way to collect that data and be able to analyze it," he said. "From a strategic point of view, it's more of a data initiative."

The goal is to give people the skills and applications to view highly interactive and multidimensional data as opposed to a traditional, two-dimensional tabular view in a spreadsheet, Smith said.

When it comes to risk analytics, CFOs should be thinking about techniques, not specific tools. Risk analysis is more about understanding ways to mine data better than about which platform can do it, according to Smith.

"Risk analytics is part of something larger. At SAP, we don't have a category of solutions called 'risk analytics,'" McCuaig said. "There are a variety of analytics tools that will serve the purpose."

How has your company used risk analytics?


Read more »



