Posted by Thang Le Toan on 28 January 2019 01:34 AM
The term "backup," which has become synonymous with data protection, may be accomplished via several methods. Here's how to choose the best way to safeguard your data.
Protecting data against loss, corruption, disasters (man-made or natural) and other problems is one of the top priorities for IT organizations. In concept, the ideas are simple, although implementing an efficient and effective set of backup operations can be difficult.
The term backup has become synonymous with data protection over the past several decades and may be accomplished via several methods. Backup software applications have been developed to reduce the complexity of performing backup and recovery operations. Backing up data is only one part of a disaster protection plan, and may not provide the level of data and disaster recovery capabilities desired without careful design and testing.
The purpose of most backups is to create a copy of data so that a particular file or application may be restored after data is lost, corrupted, deleted or a disaster strikes. Thus, backup is not the goal, but rather it is one means to accomplish the goal of protecting data. Testing backups is just as important as backing up and restoring data. Again, the point of backing up data is to enable restoration of data at a later point in time. Without periodic testing, it is impossible to guarantee that the goal of protecting data is being met.
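Because an untested backup offers no guarantee, the periodic restore test described above can be automated. The sketch below is a minimal illustration in Python (the directory layout and function names are assumptions, not part of any particular backup product): it compares checksums of restored files against the originals and reports anything missing or changed.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    """Compare every file in the source tree against its restored copy.

    Returns a list of relative paths that are missing or differ;
    an empty list means the restore test passed."""
    problems = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        restored = restored_dir / rel
        if not restored.is_file():
            problems.append(f"missing: {rel}")
        elif sha256_of(src) != sha256_of(restored):
            problems.append(f"mismatch: {rel}")
    return problems
```

In practice, a job like this would run after each scheduled restore drill, with any non-empty result raising an alert.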
Backing up data is sometimes confused with archiving data, although these operations are different. A backup is a secondary copy of data used for data protection. In contrast, an archive is the primary data, which is moved to a less-expensive type of media (such as tape) for long-term, low-cost storage.
The most basic and complete type of backup operation is a full backup. As the name implies, this type of backup makes a copy of all data to another set of media, which can be tape, disk or a DVD or CD. The primary advantage to performing a full backup during every operation is that a complete copy of all data is available with a single set of media. This results in a minimal time to restore data, a metric known as a recovery time objective (RTO). However, the disadvantages are that it takes longer to perform a full backup than other types (sometimes by a factor of 10 or more), and it requires more storage space.
Thus, full backups are typically run only periodically. Data centers that have a small amount of data (or critical applications) may choose to run a full backup daily, or even more often in some cases. Typically, backup operations employ a full backup in combination with either incremental or differential backups.
An incremental backup operation will result in copying only the data that has changed since the last backup operation of any type. The modified time stamp on files is typically used and compared to the time stamp of the last backup. Backup applications track and record the date and time that backup operations occur in order to track files modified since these operations.
Because an incremental backup will only copy data changed since the last backup of any type, it may be run as often as desired, with only the most recent changes stored. The benefit of an incremental backup is that it copies a smaller amount of data than a full backup. Thus, these operations complete faster and require less media to store the backup.
A differential backup operation is similar to an incremental the first time it is performed, in that it will copy all data changed from the previous backup. However, each time it is run afterwards, it will continue to copy all data changed since the previous full backup. Thus, it will store more data than an incremental on subsequent operations, although typically far less than a full backup. Moreover, differential backups require more space and time to complete than incremental backups, although less than full backups.
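The modified-time comparison that drives these three backup types can be sketched in a few lines. This is a hedged illustration, not production backup code: it assumes simple epoch timestamps for the last full backup and the last backup of any type, and it ignores deletions, permissions and open files.

```python
import os

def files_to_back_up(root: str, last_full: float, last_any: float, mode: str) -> list[str]:
    """Select files for a backup set by comparing modification times.

    mode: 'full' copies everything; 'incremental' copies files changed
    since the last backup of any type; 'differential' copies files
    changed since the last full backup."""
    if mode == "full":
        cutoff = 0.0            # the epoch: every file qualifies
    elif mode == "incremental":
        cutoff = last_any       # only changes since the most recent backup
    elif mode == "differential":
        cutoff = last_full      # all changes since the last full
    else:
        raise ValueError(mode)

    selected = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > cutoff:
                selected.append(path)
    return sorted(selected)
```

Note how the only difference between the incremental and differential cases is which timestamp serves as the cutoff, which is exactly why a differential set grows through the week while an incremental set stays small.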
Table 1: A comparison of different backup operations

Backup operation    Incremental backs up       Differential backs up
Backup 1 (full)     All data                   All data
Backup 2            Changes from backup 1      Changes from backup 1
Backup 3            Changes from backup 2      Changes from backup 1
Backup 4            Changes from backup 3      Changes from backup 1
As shown in "Table 1: A comparison of different backup operations," each type of backup works differently. A full backup must be performed at least once. Afterward, it is possible to run another full, an incremental or a differential backup. The first partial backup performed, either a differential or an incremental, will back up the same data. By the third backup operation, the data backed up with an incremental is limited to the changes since the last incremental. In comparison, the third differential backup will back up all changes since the first full backup, which was backup 1.
From these three primary backup types, it is possible to develop an approach to protecting data. Typically, one of the following approaches is used:
Full weekly + Differential daily
Full weekly + Incremental daily
Many considerations will affect the choice of the optimal backup strategy. Typically, each alternative and strategy choice involves making tradeoffs between performance, data protection levels, total amount of data retained and cost. In "Table 2: A backup strategy's impact on space" below, the media capacity requirements and media required for recovery are shown for three typical backup strategies. These calculations presume 20 TB of total data, with 5% of the data changing daily, and no increase in total storage during the period. The calculations are based on 22 working days in a month and a one month retention period for data.
Table 2: A backup strategy's impact on space (common backup scenarios)

Scenario: Full daily (weekdays)
  Media space required for one month (20 TB @ 5% daily rate of change): 22 daily fulls (22 * 20 TB) = 440.00 TB
  Media required for recovery: most recent backup only

Scenario: Full (weekly) + Differential (weekdays)
  Media space required for one month: fulls, plus most recent differential since full (5 * 20 TB) + (22 * 5% * 20 TB) = 124.23 TB
  Media required for recovery: most recent full + most recent differential

Scenario: Full (weekly) + Incremental (weekdays)
  Media space required for one month: fulls, plus all incrementals since weekly full (5 * 20 TB) + (22 * 5% * 20 TB) = 122.00 TB
  Media required for recovery: most recent full + all incrementals since full
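The daily-full and weekly-full-plus-incremental figures in Table 2 follow directly from the stated assumptions (20 TB of data, 5% daily change, 22 working days, five retained weekly fulls); the differential figure comes out slightly larger because each day's differential re-copies everything changed since the weekly full. A quick arithmetic check, as a sketch:

```python
TOTAL_TB = 20.0          # total data under protection
CHANGE_RATE = 0.05       # 5% of the data changes each day
WORK_DAYS = 22           # working days retained in one month
FULLS_PER_MONTH = 5      # one weekly full kept for the month

# Full backup every weekday: one complete copy per working day.
daily_full_space = WORK_DAYS * TOTAL_TB

# Weekly full + daily incremental: each incremental holds one day's changes.
incremental_space = (FULLS_PER_MONTH * TOTAL_TB
                     + WORK_DAYS * CHANGE_RATE * TOTAL_TB)

print(f"Daily fulls:       {daily_full_space:.2f} TB")   # 440.00 TB
print(f"Weekly full + inc: {incremental_space:.2f} TB")  # 122.00 TB
```

The weekly-full-plus-differential total sits between these, since how far above 122 TB it lands depends on how day-to-day changes overlap within each week.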
As shown above, performing a full backup daily requires the most space and will also take the most time. However, more total copies of the data are available, and fewer pieces of media are required to perform a restore operation. As a result, implementing this backup policy has a higher tolerance to disasters and provides the shortest time to restore, since any piece of data required will be located on at most one backup set.
As an alternative, performing a full backup weekly, coupled with running incremental backups daily, will deliver the shortest backup time during weekdays and use the least amount of storage space. However, fewer copies of data are available, and restore time is the longest, since up to six sets of media may be needed to recover the information. If data backed up on Wednesday is needed, the Sunday full backup, plus the Monday, Tuesday and Wednesday incremental media sets, are required. This can dramatically increase recovery times, and it requires that each media set work properly; a failure in one backup set can impact the entire restoration.
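The restore chains just described can be made concrete with a small sketch. The media-set labels and the assumption that the weekly full runs on day 0 (e.g., Sunday) are illustrative only:

```python
def media_sets_for_restore(strategy: str, weekday: int) -> list[str]:
    """List the media sets needed to restore data from a given weekday.

    weekday: 0 = the day the weekly full ran, 1-5 = the following days.
    A toy model of the restore chains, not a real catalog lookup."""
    if strategy == "full_daily":
        # One complete set is always enough.
        return [f"full(day {weekday})"]
    if strategy == "full_plus_differential":
        # At most two sets: the full plus that day's differential.
        chain = ["full(day 0)"]
        if weekday > 0:
            chain.append(f"differential(day {weekday})")
        return chain
    if strategy == "full_plus_incremental":
        # The full plus every incremental up to and including that day.
        return ["full(day 0)"] + [f"incremental(day {d})" for d in range(1, weekday + 1)]
    raise ValueError(strategy)
```

For a Wednesday restore (day 3), the incremental strategy needs four media sets, the differential strategy two, and the daily-full strategy one, matching the tradeoffs discussed here.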
Running a weekly full backup plus daily differential backups delivers results in between the other alternatives: more backup media sets are required to restore than with a daily full policy, although fewer than with a daily incremental policy. Likewise, the restore time is less than with daily incrementals and more than with daily fulls. To restore data from a particular day, at most two media sets are required, diminishing the time needed to recover and the potential for problems with an unreadable backup set.
For organizations with small data sets, running a daily full backup provides a high level of protection without much additional storage cost. Larger organizations, or those with more data, find that running a weekly full backup, coupled with either daily incrementals or differentials, provides a better option. Using differentials provides a higher level of data protection with less restore time for most scenarios, at only a small increase in storage capacity. For this reason, a strategy of weekly full backups with daily differential backups is a good option for many organizations.
Most of the advanced backup options, such as synthetic full, mirror, reverse incremental and continuous data protection (CDP), require disk storage as the backup target. A synthetic full reconstructs the full backup image on disk using the last full plus all required incrementals, or the differential. This synthetic full may then be stored to tape for offsite storage, with the advantage of reduced restoration time. Mirroring is the copying of disk storage to another set of disk storage, with reverse incrementals used to add incremental-style backup support. Finally, CDP allows a far greater number of restoration points than traditional backup options.
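The synthetic full idea can be illustrated with a toy model in which each backup is a mapping of file paths to contents, and each incremental records the changes and deletions since the previous backup. This is a sketch of the reconstruction concept, not any vendor's on-disk format:

```python
def synthesize_full(base_full: dict, incrementals: list[dict]) -> dict:
    """Build a synthetic full by replaying incrementals over the last full.

    Each backup maps file path -> file contents; a None value in an
    incremental records that the file was deleted that day."""
    image = dict(base_full)               # start from the last real full
    for inc in incrementals:              # replay oldest first
        for path, data in inc.items():
            if data is None:
                image.pop(path, None)     # file was deleted
            else:
                image[path] = data        # file was added or changed
    return image
```

The resulting image restores in one step, which is the advantage noted above: the merge work happens at backup time on disk rather than at restore time.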
When deciding on a backup strategy, the question is not what type of backup to use, but when to use each, and how these options should be combined with testing to meet overall business cost, performance and availability goals.
Posted by Thang Le Toan on 21 September 2018 02:48 AM
BackupAssist 365 can be set to automatically download files and email mailboxes from the cloud to an on-premises device, creating local backups as a cloud-to-cloud alternative.
SMB backup software specialist BackupAssist this week added protection for email mailboxes and files stored in the cloud.
The newest BackupAssist software, called BackupAssist 365, lets customers copy email mailboxes and files from the cloud to an on-premises server. The on-premises, off-cloud backup can protect businesses against accidental or malicious data deletion and ransomware.
Unlike cloud-to-cloud backup products that allow organizations to move data from SaaS applications to another public cloud, the new BackupAssist software only backs up to local storage.
Troy Vertigan, vice president of channel sales and marketing at BackupAssist, based in Australia, said the cloud-to-local backup costs up to 75% less than a cloud-to-cloud subscription.
Despite what the BackupAssist software's 365 name suggests, the product works with more than just Microsoft Office 365. BackupAssist 365 lets users back up mailboxes from Rackspace, Microsoft Exchange, Gmail, Outlook and Internet Message Access Protocol servers, as well as files from Google Drive, Dropbox, OneDrive, Secure File Transfer Protocol and WebDAV. BackupAssist CEO Linus Chang said a future update will enable support for the entire G Suite.
George Crump, president and founder of analyst firm Storage Switzerland, said BackupAssist 365 is a good option for protecting data born in the cloud, but he's not sure the vendor's customers are convinced they need that.
"The big challenge is convincing the entire market that you actually do need to back up Office 365. There's still this misbelief that the cloud is this magical place where data never gets deleted," Crump said.
Although cloud users don't have to worry about hardware failure or disaster recovery, Crump said that merely shifts the risk elsewhere.
"Once you're cloud-based, your concern really isn't disaster recovery anymore. What you're really protecting against is data corruption and account hijack," Crump said.
BackupAssist's Chang agreed his SMB target market needs convincing of the value of cloud backup. "The majority of SMB customers and the vast majority of consumers are not doing any sort of backup whatsoever," Chang said.
Chang said BackupAssist's customers also include managed service providers who are in charge of their clients' data, and BackupAssist 365 can help them stay compliant.
The new BackupAssist software is generally available. Its annual subscription fee is $1 per user, per month, for the first 24 users, and it drops to 95 cents for 25 to 49 users and 90 cents if there are 50 or more users. A user is defined as a single account identity, allowing for one user to back up multiple clouds.
Posted by Thang Le Toan on 03 August 2018 01:03 AM
Zero-day is a flaw in software, hardware or firmware that is unknown to the party or parties responsible for patching or otherwise fixing the flaw. The term zero day may refer to the vulnerability itself, or an attack that has zero days between the time the vulnerability is discovered and the first attack. Once a zero-day vulnerability has been made public, it is known as an n-day or one-day vulnerability.
Ordinarily, when someone detects that a software program contains a potential security issue, that person or company will notify the software company (and sometimes the world at large) so that action can be taken. Given time, the software company can fix the code and distribute a patch or software update. Even if potential attackers hear about the vulnerability, it may take them some time to exploit it; meanwhile, the fix will hopefully become available first. Sometimes, however, a hacker may be the first to discover the vulnerability. Since the vulnerability isn't known in advance, there is no way to guard against the exploit before it happens. Companies exposed to such exploits can, however, institute procedures for early detection.
Security researchers cooperate with vendors and usually agree to withhold all details of zero-day vulnerabilities for a reasonable period before publishing those details. Google Project Zero, for example, follows industry guidelines that give vendors up to 90 days to patch a vulnerability before the finder of the vulnerability publicly discloses the flaw. For vulnerabilities deemed "critical," Project Zero allows only seven days for the vendor to patch before publishing the vulnerability; if the vulnerability is being actively exploited, Project Zero may reduce the response time to less than seven days.
Zero-day exploit detection
Zero-day exploits tend to be very difficult to detect. Antimalware software and some intrusion detection systems (IDSes) and intrusion prevention systems (IPSes) are often ineffective because no attack signature yet exists. This is why the best way to detect a zero-day attack is user behavior analytics. Most of the entities authorized to access networks exhibit certain usage and behavior patterns that are considered to be normal. Activities falling outside of the normal scope of operations could be an indicator of a zero-day attack.
For example, a web application server normally responds to requests in specific ways. If outbound packets are detected exiting the port assigned to that web application, and those packets do not match anything that would ordinarily be generated by the application, it is a good indication that an attack is going on.
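A crude version of this behavioral baselining can be sketched as follows: learn which traffic patterns are common during a known-clean training window, then flag anything outside that set. The event labels (such as "out:443") are illustrative assumptions; real user behavior analytics products model far richer features than destination ports.

```python
from collections import Counter

def build_baseline(events: list[str], min_share: float = 0.01) -> set[str]:
    """Learn which event types count as 'normal': anything accounting for
    at least min_share of traffic seen during a clean training window."""
    counts = Counter(events)
    total = sum(counts.values())
    return {event for event, n in counts.items() if n / total >= min_share}

def flag_anomalies(baseline: set[str], events: list[str]) -> list[str]:
    """Report events never (or almost never) seen during training --
    e.g., outbound packets from a web server to an unexpected port."""
    return [event for event in events if event not in baseline]
```

In this toy model, a web server that normally speaks only on ports 80 and 443 would have an outbound connection to an unusual port flagged immediately, which is the kind of signal described above.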
Zero-day exploit period
Some zero-day attacks have been attributed to advanced persistent threat (APT) actors, hacking or cybercrime groups affiliated with or a part of national governments. Attackers, especially APTs or organized cybercrime groups, are believed to reserve their zero-day exploits for high-value targets.
N-day vulnerabilities continue to live on and are subject to exploits long after the vulnerabilities have been patched or otherwise fixed by vendors. For example, the credit bureau Equifax was breached in 2017 by attackers using an exploit against the Apache Struts web framework. The attackers exploited a vulnerability in Apache Struts that was reported, and patched, earlier in the year; Equifax failed to patch the vulnerability and was breached by attackers exploiting the unpatched vulnerability.
Likewise, researchers continue to find zero-day vulnerabilities in the Server Message Block protocol, implemented in the Windows OS for many years. Once the zero-day vulnerability is made public, users should patch their systems, but attackers continue to exploit the vulnerabilities for as long as unpatched systems remain exposed on the internet.
Defending against zero-day attacks
Zero-day exploits are difficult to defend against because they are so difficult to detect. Vulnerability scanning software relies on malware signature checkers to compare suspicious code with signatures of known malware; when the malware uses a zero-day exploit that has not been previously encountered, such vulnerability scanners will fail to block the malware.
Since a zero-day vulnerability can't be known in advance, there is no way to guard against a specific exploit before it happens. However, there are some things that companies can do to reduce their level of risk exposure.
Use virtual local area networks to segregate some areas of the network or use dedicated physical or virtual network segments to isolate sensitive traffic flowing between servers.
Implement IPsec, the IP security protocol, to apply encryption and authentication to network traffic.
Deploy an IDS or IPS. Although signature-based IDS and IPS security products may not be able to identify the attack, they may be able to alert defenders to suspicious activity that occurs as a side effect to the attack.
Use network access control to prevent rogue machines from gaining access to crucial parts of the enterprise environment.
Lock down wireless access points and use a security scheme such as Wi-Fi Protected Access 2 for maximum protection against wireless-based attacks.
Keep all systems patched and up to date. Although patches will not stop a zero-day attack, keeping network resources fully patched may make it more difficult for an attack to succeed. When a zero-day patch does become available, apply it as soon as possible.
Perform regular vulnerability scanning against enterprise networks and lock down any vulnerabilities that are discovered.
While maintaining a high standard for information security may not prevent all zero-day exploits, it can help defeat attacks that use zero-day exploits after the vulnerabilities have been patched.
Examples of zero-day attacks
Multiple zero-day attacks commonly occur each year. In 2016, for example, there was a zero-day attack (CVE-2016-4117) that exploited a previously undiscovered flaw in Adobe Flash Player. Also in 2016, more than 100 organizations succumbed to a zero-day bug (CVE-2016-0167) that was exploited in elevation-of-privilege attacks targeting Microsoft Windows.
In 2017, a zero-day vulnerability (CVE-2017-0199) was discovered in which a Microsoft Office document in rich text format was shown to be able to trigger the execution of a visual basic script containing PowerShell commands upon being opened. Another 2017 exploit (CVE-2017-0261) used encapsulated PostScript as a platform for initiating malware infections.
The Stuxnet worm was a devastating zero-day exploit that targeted supervisory control and data acquisition (SCADA) systems by first attacking computers running the Windows operating system. Stuxnet exploited four different Windows zero-day vulnerabilities and spread through infected USB drives, making it possible to infect Windows and SCADA systems that were not reachable over a network. The Stuxnet worm has been widely reported to be the result of a joint effort by U.S. and Israeli intelligence agencies to disrupt Iran's nuclear program.
Learn more about zero-day attacks from the CompTIA security course.
FBI admits to using zero-day exploits, not disclosing them
The FBI has admitted to using zero-day exploits rather than disclosing them, and experts say this should not be a surprise considering the history of federal agency actions.
In a surprise bout of openness, Amy Hess, executive assistant director for science and technology with the FBI, admitted that the FBI uses zero-day exploits, but said the agency does struggle with the decision.
In an interview with The Washington Post, Hess called it a "constant challenge" to decide whether it is better to use a zero-day exploit "to be able to identify a person who is threatening public safety" or to disclose the vulnerability in order to allow developers to secure products being used by the public. Hess also noted the FBI prefers not to rely on zero-day exploits because the fact that they can be patched at any moment makes them unreliable.
Jeff Schilling, CSO for Armor, said the surprise might come from the fact that many people don't know that the FBI has a foreign intelligence collection mission.
"Any agency that has a foreign intelligence collection mission in cyberspace has to make decisions every day on the value gained in leveraging a zero day to collect intelligence data, especially with the impact of not letting people who are at risk know of the potential vulnerability which could be compromised," Schilling said, adding that the need for the government to find a balance between security and intelligence is not a new phenomenon. "This country experienced the same intelligence gained versus operational impact during World War II (WWII) when the intelligence community did not disclose that we had broken both the Japanese and German codes. Lots of sailors, soldiers and airmen lost their lives to keep those secrets. I think the FBI and the rest of the intelligence community have the same dilemmas as the intelligence community in WWII, however, at this point, data, not lives are at risk."
Robert Hansen, vice president for WhiteHat Security Labs, said it boils down to whether the public trusts the government to not abuse its power in this area, and whether the government should assume that only it knows about these exploits.
"In general, I think that although the net truth is that most people in government have good intentions, they can't all be relied upon to hold such belief systems," Hansen said. "And, given that in most cases exploits are found much later, it stands to reason that it's more dangerous to keep vulnerabilities in place. That's not to diminish their value, however, it's very dangerous to presume that an agency is the only one [that] can and will find and leverage that vulnerability."
Adam Kujawa, head of malware intelligence at Malwarebytes Labs, said the draw of zero-day exploits may be too strong for government agencies to resist.
"The 'benefit' of this method [is] simply having access to a weapon that theoretically can't be protected against," Kujawa said. "This is like being able to shoot someone with a nuke when they are only wearing a bullet proof vest -- completely unstoppable, theoretically. Law enforcement, when they have a target in mind, be it a cybercriminal, terrorist, et cetera, are able to breach the security of the suspect and gather intelligence or collect information on them to identify any criminal activity that might happen or will happen."
Daren Glenister, field CTO at Intralinks Inc., noted that while leaving vulnerabilities unpatched leads to risk, there is also some benefit to not publishing vulnerabilities too soon.
"Patching a threat may take a vendor days or weeks. Every hour lost in providing a patch introduces additional risk to data and access to systems," Glenister said. "[However], by not publishing zero-day threats, it minimizes the widespread underground threat from hackers that occurs every time a new threat is disclosed."
The NSA recently detailed its vulnerability disclosure policy, but while doing so never mentioned whether or not the agency used zero-day exploits. Multiple experts said this admission by the FBI makes it safe to assume the NSA is also leveraging zero days in its efforts.
Adam Meyer, chief security strategist at SurfWatch Labs Inc., said it is not only reasonable to expect the NSA is actively exploiting zero days, but many others are as well.
"I believe it is safe to assume that any U.S. agency with a Defense or Homeland Security mission area are using exploits to achieve a presence against their targets," Meyer said. "Unfortunately, I also think it is safe to assume that every developed country in the world is doing the exact same thing. The reality is a zero day can be used against us just as much as for us."
Schilling said using zero days may not be the only option, but noted that human intelligence gathering carries much greater risks.
"At the end of the day, if we are leveraging zero days to stay ahead of our national threats, I am ok with us accepting the risk of data loss and compromises," Schilling said. "History has shown that we have accepted higher costs to protect our intelligence collection, and I think we are still OK today in the risk we are accepting as it is to save lives."
Kujawa said that while there are viable alternatives to using zero days to gather intelligence, it is hard to ignore the ease and relative safety of this method.
"There are plenty of viable methods of extracting information from a suspect; however the zero-day method is incredibly effective, very quiet and very fast. Law enforcement could attack systems using known exploits, social engineering tactics or gaining physical access to the system and installing malware manually, however none of these methods are guaranteed and they all can be protected against if the suspect is practicing common security procedures. The zero-day method will fall into the same bucket as the other attacks soon enough, however, so we will have to wait and see what the future holds for law enforcement in trying to gather evidence and intelligence on criminal suspects."
Posted by Thang Le Toan on 02 August 2018 01:38 AM
Spear phishing is an email-spoofing attack that targets a specific organization or individual, seeking unauthorized access to sensitive information. Spear-phishing attempts are not typically initiated by random hackers, but are more likely to be conducted by perpetrators out for financial gain, trade secrets or military information.
As with emails used in regular phishing expeditions, spear phishing messages appear to come from a trusted source. Phishing messages usually appear to come from a large and well-known company or website with a broad membership base, such as Google or PayPal. In the case of spear phishing, however, the apparent source of the email is likely to be an individual within the recipient's own company -- generally, someone in a position of authority -- or from someone the target knows personally.
Visiting United States Military Academy professor and National Security Agency official Aaron Ferguson called it the "colonel effect." To illustrate his point, Ferguson sent out a message to 500 cadets, asking them to click a link to verify grades. Ferguson's message appeared to come from a Col. Robert Melville of West Point. Over 80% of recipients clicked the link in the message. In response, they received a notification that they'd been duped and a warning that their behavior could have resulted in downloads of spyware, Trojan horses and/or other malware.
Many enterprise employees have learned to be suspicious of unexpected requests for confidential information and will not divulge personal data in response to emails or click on links in messages unless they are positive about the source. The success of spear phishing depends on three things: the apparent source must appear to be a known and trusted individual; the message must contain information that supports its validity; and the request must seem to have a logical basis.
Spear phishing vs. phishing vs. whaling
This familiarity is what sets spear phishing apart from regular phishing attacks. Phishing emails typically appear to be sent by a known contact or organization. They include a malicious link or attachment that installs malware on the target's device, or direct the target to a malicious website set up to trick them into divulging sensitive information such as passwords, account information or credit card details.
Spear phishing has the same goal as normal phishing, but the attacker first gathers information about the intended target. This information is used to personalize the spear-phishing attack. Instead of sending the phishing emails to a large group of people, the attacker targets a select group or an individual. By limiting the targets, it's easier to include personal information -- like the target's first name or job title -- and make the malicious emails seem more trustworthy.
The same personalized technique is used in whaling attacks, as well. A whaling attack is a spear-phishing attack directed specifically at high-profile targets like C-level executives, politicians and celebrities. Whaling attacks are also customized to the target and use the same social-engineering, email-spoofing and content-spoofing methods to access sensitive data.
Examples of successful attacks
In one version of a successful spear-phishing attack, the perpetrator finds a webpage for their target organization that supplies contact information for the company. Using available details to make the message seem authentic, the perpetrator drafts an email to an employee on the contact page that appears to come from an individual who might reasonably request confidential information, such as a network administrator. The email asks the employee to log into a bogus page that requests the employee's username and password, or click on a link that will download spyware or other malicious programming. If a single employee falls for the spear phisher's ploy, the attacker can masquerade as that individual and use social-engineering techniques to gain further access to sensitive data.
In 2015, independent security researcher and journalist Brian Krebs reported that Ubiquiti Networks Inc. lost $46.7 million to hackers who started the attack with a spear-phishing campaign. The hackers were able to impersonate communications from executive management at the networking firm and performed unauthorized international wire transfers.
Spear phishing defense
Spear-phishing attacks -- and whaling attacks -- are often harder to detect than regular phishing attacks because they are so focused.
In an enterprise, security-awareness training for employees and executives alike will help reduce the likelihood of a user falling for spear-phishing emails. This training typically educates enterprise users on how to spot phishing emails based on suspicious email domains or links enclosed in the message, as well as the wording of the messages and the information that may be requested in the email.
How to prevent a spear phishing attack from infiltrating an enterprise
While spear phishing emails are becoming harder to detect, there are still ways to prevent them. Threats expert Nick Lewis gives advice.
Spear phishing and social engineering are becoming more popular as attackers target humans as a particularly dependable point of ingress (HBGary, RSA, etc.). Considering that a well-crafted spear phishing email is almost indistinguishable from a legitimate email, what is the best way to prevent users from clicking on spear phishing links?
Phishing, social engineering and spear phishing have been growing in popularity over the last 10 or more years. The introduction of spear phishing and other newer forms of phishing are an evolution of social engineering or fraud. Attackers have found ways to exploit weaknesses in technologies like VoIP, IM and SMS messages, among others, to commit fraud, and will continue to adapt as new technologies develop. Humans will always be an integral part of information security for an organization, but can always be targeted, regardless of the technologies in use. Humans are sometimes the weakest link.
To help minimize the chance of a spear phishing attack successfully infiltrating the enterprise, you can follow the advice from US-CERT on phishing or the guidance from the Anti-Phishing Working Group. Both have technical steps you can put in place, but both also include a security awareness and education component. Potentially the most effective method to combat phishing and its variants is to make sure users know to question suspicious communications and to verify the communication (email, IM, SMS, etc.) out-of-band with the requesting party. For example, if an employee gets an email from a colleague that doesn’t sound like it came from the sender or seems in some way suspicious, he or she should contact the sender using a different means of communication -- such as the phone -- to confirm the email. If the email can’t be verified, then it should be reported to your information security group, the Anti-Phishing Working Group or the FTC at firstname.lastname@example.org.
Enterprises with high security needs could choose not to connect their systems to the Internet, not allow Internet email inbound except for approved domains, or only allow inbound email from approved email addresses. This will not stop all phishing attacks and will significantly decrease usability, but may be necessary for high-security environments.
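A minimal sketch of the inbound-domain restriction described above, assuming a hypothetical mail-filtering hook that receives the envelope sender address; the approved domains are placeholders, not a recommendation:

```python
# Sketch: inbound email allowlist by sender domain. The ALLOWED_DOMAINS
# entries are hypothetical examples.
ALLOWED_DOMAINS = {"partner.example", "supplier.example"}

def is_allowed_sender(address: str) -> bool:
    """Accept mail only from explicitly approved domains."""
    # Reject anything without exactly one "@" to avoid malformed or spoofed forms.
    if address.count("@") != 1:
        return False
    domain = address.rsplit("@", 1)[1].lower()
    return domain in ALLOWED_DOMAINS
```

A real deployment would enforce this at the mail gateway (and pair it with SPF/DKIM checks), but the allowlist logic is the same.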
A threat hunter, also called a cybersecurity threat analyst, is a security professional or managed service provider (MSP) that proactively uses manual or machine-assisted techniques to detect security incidents that may elude the grasp of automated systems. Threat hunters aim to uncover incidents that an enterprise would otherwise not find out about, providing chief information security officers (CISOs) and chief information officers (CIOs) with an additional line of defense against advanced persistent threats (APTs).
In order to detect a security incident an automated system might miss, a threat hunter uses critical-thinking skills and creativity to look at patterns of normal behavior and be able to identify network behavior anomalies. A threat hunter must have considerable business knowledge and an understanding of normal enterprise operations in order to avoid false positives and have good communication skills to share the results of the hunt. It is especially important for the threat hunter to keep current on the latest security research.
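The "patterns of normal behavior" idea above can be sketched as a simple statistical baseline check; the threshold and sample numbers are illustrative assumptions, not guidance from any particular threat hunting methodology:

```python
# Sketch: flag observations that deviate sharply from a baseline of
# normal behavior -- the kind of signal a threat hunter might pivot on.
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Return observations more than `threshold` standard deviations
    from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if sigma and abs(x - mu) / sigma > threshold]
```

In practice a hunter would combine many such signals with business context to rule out false positives, exactly as the paragraph above describes.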
The job of the threat hunter is to both supplement and reinforce automated systems. As the review process uncovers patterns for initiating attacks, the security organization can use that information to improve its automated threat detection software.
A 2017 SANS Institute report found more organizations are pursuing threat hunting initiatives, but noted the bulk of the growth is confined to vertical markets such as financial services, high tech, military and government, and telecommunications. As of 2017, the field of threat hunting was still new for the majority of IT security organizations. The SANS Institute report noted 45% of the respondents to its threat hunting survey do their hunting on an ad hoc basis.
Threat hunters typically work within a security operations center (SOC) and take the lead role in an enterprise's threat detection and incident response activities. Threat hunting may be assigned as an additional duty to one or more security engineers within a SOC, or a SOC may dedicate security engineers to full-time threat hunting duties.
Additional options for creating a threat hunting team include rotating security engineers into the threat hunting role on a temporary basis and then having them return to their usual jobs within the SOC. Internally, threat hunters are often managed by an organization's CISO, who works with the CIO to coordinate enterprise security.
Virtual private clouds and private clouds differ in terms of architecture, the provider and tenants, and resource delivery. Decide between the two models based on these distinctions.
Organizations trying to decide between virtual private cloud vs. private cloud must first define what they want to accomplish. A private cloud gives individual business units more control over the IT resources allocated to them, whereas a virtual private cloud offers organizations a different level of isolation.
Virtual private clouds are typically layers of isolation within public clouds, but they might lack the self-service portal that enables IT to provide individual business units with DIY IT environments. Private clouds are generally on-premises environments with self-service portals that designated employees can use to deploy resources without intervention from IT.
But interest in the private cloud is about much more than just technology; private clouds represent a fundamental shift in the way organizations deliver IT resources.
In the past, corporate IT acted as a gatekeeper for all things tech. If a business unit within an organization needed to deploy a new application or a new service, it went through IT.
This way of doing things was problematic for both the business units and for IT. Whenever a department had to seek IT approval for a tech project, it ran the risk of IT denying the project or modifying its scope beyond recognition. Even if it was approved, the business unit might have to wait weeks or even months for IT to implement it.
The old way of doing things was also problematic for the IT department because it often put IT in the awkward position of having to say no to someone else's ideas. On the other hand, if IT did approve the project, it meant an increased workload for the IT staff that had to deploy, maintain and support the new application.
Moving away from traditional virtual infrastructures
Private cloud environments represent a shift away from the rigid administrative model that organizations have used for so long. Rather than the IT department acting as the sole governing body for all the organization's tech resources, it instead takes on the role of a service provider.
In a private cloud, the IT infrastructure is carved up into a series of private areas, and each area is assigned to a specific business unit. One or more designated employees within the department take on the role of tenant administrators for the available resources. These administrators are free to use the resources as they see fit without first seeking IT approval.
This doesn't mean that tenant administrators have total autonomy, nor does it mean that they require specialized IT skills. Every organization sets up its private cloud differently, but IT usually provides tenant administrators with a self-service portal that is designed to simplify tasks, such as deploying and managing VMs. Furthermore, IT usually creates VM templates that tenant administrators can use any time they create a new VM.
In other words, tenant administrators can create VMs on an as-needed basis, but must do so within the limits IT has put in place. These limits ensure that tenant administrators don't deplete the underlying infrastructure of hardware resources. Additionally, the use of templates guarantees that admins create VMs in accordance with the organization's security policies.
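The quota check described above can be sketched as follows; the `Tenant` shape and the vCPU limit are hypothetical, not taken from any specific private cloud product:

```python
# Sketch: the limit check a self-service portal might run before a
# tenant administrator creates a VM from a template.
from dataclasses import dataclass

@dataclass
class Tenant:
    vcpu_quota: int  # total vCPUs IT has allocated to this business unit
    vcpu_used: int   # vCPUs consumed by the tenant's existing VMs

def can_create_vm(tenant: Tenant, template_vcpus: int) -> bool:
    """Allow creation only if it stays within the tenant's vCPU quota."""
    return tenant.vcpu_used + template_vcpus <= tenant.vcpu_quota
```

A real portal would enforce similar limits on memory, storage and network resources, all behind the same self-service interface.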
Virtual private cloud vs. private cloud
When it comes to virtual private cloud vs. private cloud, the terms are sometimes used interchangeably. In most cases, however, a virtual private cloud is different from a private cloud.
In a private cloud model, the IT department acts as a service provider and the individual business units act as tenants. In a virtual private cloud model, a public cloud provider acts as the service provider and the cloud's subscribers are the tenants. Just as the tenant administrators in a private cloud are free to create resources within the limits that have been set up for them, a public cloud's subscribers are also free to create resources within the public cloud.
When public cloud subscribers create resources, such as VM instances, databases or gateways, those instances are created within a virtual private cloud. Think of the virtual private cloud as an isolation boundary that keeps subscribers from being able to access -- or interfere with -- each other's resources.
Each public cloud provider has its own way of doing things, but some providers allow tenants to define additional virtual private clouds. For example, Amazon allows AWS subscribers to create as many virtual private clouds as they need.
Each virtual private cloud acts as an isolated environment. Organizations sometimes use virtual private clouds to isolate web servers from other cloud-hosted resources, or to create an isolation boundary around the virtual servers that make up a multi-tier application.
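One way to picture the isolation boundary is as non-overlapping address ranges: a resource belongs to exactly one virtual private cloud, and traffic does not cross ranges unless explicitly allowed. The CIDR blocks below are illustrative assumptions:

```python
# Sketch: mapping addresses to the virtual private cloud whose CIDR
# range contains them. Ranges here are examples only.
import ipaddress

vpcs = {
    "web-tier": ipaddress.ip_network("10.0.0.0/16"),
    "app-tier": ipaddress.ip_network("10.1.0.0/16"),
}

def vpc_for(address: str):
    """Return the name of the VPC whose range contains the address, if any."""
    ip = ipaddress.ip_address(address)
    for name, net in vpcs.items():
        if ip in net:
            return name
    return None
```

Real VPC isolation involves routing tables, security groups and gateways on top of addressing, but disjoint address spaces are the starting point.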
The new norm: Organizations don't have to choose
In spite of virtual private cloud vs. private cloud distinctions, the lines between them are blurring more than ever. Rather than choosing between a private cloud and a public cloud, most organizations opt for a hybrid cloud.
Admins can construct hybrid clouds in many different ways, but one option is to create a self-service environment similar to that of a typical private cloud, but to configure it so some resources reside on premises, while others reside in the public cloud.
Startups will almost always benefit from operating entirely in the public cloud because doing so enables them to avoid a large upfront investment in IT infrastructure. For organizations that already have an on-premises IT infrastructure in place, however, a hybrid cloud usually offers the best of both worlds.
A CDN (content delivery network), also called a content distribution network, is a group of geographically distributed and interconnected servers that provide cached internet content from a network location closest to a user to accelerate its delivery. The primary goal of a CDN is to improve web performance by reducing the time needed to transmit content and rich media to users' internet-connected devices.
Content delivery network architecture is also designed to reduce network latency, which is often caused by hauling traffic over long distances and across multiple networks. Eliminating latency has become increasingly important, as more dynamic content, video and software as a service are delivered to a growing number of mobile devices.
CDN providers house cached content in either their own network points of presence (POPs) or in third-party data centers. When a user requests content from a website, if that content is cached on a content delivery network, the CDN redirects the request to the server nearest to that user and delivers the cached content from its location at the network edge. This process is generally invisible to the user.
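The "nearest server" decision can be sketched as picking the POP with the minimum great-circle distance to the user; the POP coordinates are illustrative, and real CDNs also weigh measured latency, server load and cost:

```python
# Sketch: choose the POP geographically closest to a user.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_pop(user, pops):
    """Pick the POP closest to the user's coordinates."""
    return min(pops, key=lambda name: haversine_km(user, pops[name]))
```

In production this calculation typically happens at DNS resolution time or via anycast routing rather than per request.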
A wide variety of organizations and enterprises use CDNs to cache their website content to meet their businesses' performance and security needs. The need for CDN services is growing, as websites offer more streaming video, e-commerce applications and cloud-based applications where high performance is key. Few CDNs have POPs in every country, which means many organizations use multiple CDN providers to make sure they can meet the needs of their business or consumer customers wherever they are located.
In addition to content caching and web delivery, CDN providers are capitalizing on their presence at the network edge by offering services that complement their core functionalities. These include security services that encompass distributed denial-of-service (DDoS) protection, web application firewalls (WAFs) and bot mitigation; web and application performance and acceleration services; streaming video and broadcast media optimization; and even digital rights management for video. Some CDN providers also make their APIs available to developers who want to customize the CDN platform to meet their business needs, particularly as webpages become more dynamic and complex.
How does a CDN work?
The process of accessing content cached on a CDN network edge location is almost always transparent to the user. CDN management software dynamically calculates which server is located nearest to the requesting user and delivers content based on those calculations. The CDN server at the network edge communicates with the content's origin server to make sure any content that has not been cached previously is also delivered to the user. This not only reduces the distance that content travels, but also reduces the number of hops a data packet must make. The result is less packet loss, optimized bandwidth and faster performance, which minimizes timeouts, latency and jitter, and improves the overall user experience. In the event of an internet attack or outage, content hosted on a CDN server will remain available to at least some users.
Organizations buy services from CDN providers to deliver their content to their users from the nearest location. CDN providers either host content themselves or pay network operators and internet service providers (ISPs) to host CDN servers. Beyond placing servers at the network edge, CDN providers use load balancing and solid-state hard drives to help data reach users faster. They also work to reduce file sizes using compression and special algorithms, and they are deploying machine learning and AI to enable quicker load and transmission times.
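The file-size reduction mentioned above can be illustrated with standard compression; gzip here stands in for whatever codec or algorithm a given provider actually uses:

```python
# Sketch: compressing a payload before transmission to cut transfer size.
import gzip

def compress(payload: bytes) -> bytes:
    """Return the gzip-compressed payload."""
    return gzip.compress(payload)

def decompress(blob: bytes) -> bytes:
    """Restore the original payload."""
    return gzip.decompress(blob)
```

Text-heavy web content (HTML, CSS, JavaScript) typically compresses very well, which is why CDNs apply it by default for those content types.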
History of CDNs
The first CDN was launched in 1998 by Akamai Technologies, in the early years of the commercial web. Akamai's original techniques serve as the foundation of today's content distribution networks. Content creators realized they needed a way to reduce the time it took to deliver information to users, and CDNs were seen as a way to improve network performance and to use bandwidth efficiently. That basic premise remains important, as the amount of online content continues to grow.
So-called first-generation CDNs specialized in e-commerce transactions, software downloads, and audio and video streaming. As cloud and mobile computing gained traction, second-generation CDN services evolved to enable the efficient delivery of more complex multimedia and web content to a wider community of users via a more diverse mix of devices. As internet use grew, the number of CDN providers multiplied, as have the services CDN companies offer.
New CDN business models also include a variety of pricing methods that range from charges per usage and volume of content delivered to a flat rate or free for basic services, with add-on fees for additional performance and optimization services. A wide variety of organizations use CDN services to accelerate static and dynamic content, online gaming and mobile content delivery, streaming video and a number of other uses.
What are the main benefits of using a CDN?
The primary benefits of traditional CDN services include the following:
Improved webpage load times to prevent users from abandoning a slow-loading site or e-commerce application where purchases remain in the shopping cart;
Improved security from a growing number of services that include DDoS mitigation, WAFs and bot mitigation;
Increased content availability because CDNs can handle more traffic and avoid network failures better than the origin server that may be located several networks away from the end user; and
A diverse mix of performance and web content optimization services that complement cached site content.
CDN technology is also an ideal method to distribute web content that experiences surges in traffic, because distributed CDN servers can handle sudden bursts of client requests at one time over the internet. For example, spikes in internet traffic due to a popular event, like online streaming video of a presidential inauguration or a live sports event, can be spread out across the CDN, making content delivery faster and less likely to fail due to server overload.
AWS GPU instance type slashes cost of streaming apps
The cost of graphics acceleration can often make the technology prohibitive, but a new AWS GPU instance type for AppStream 2.0 makes that process more affordable.
Amazon AppStream 2.0, which enables enterprises to stream desktop apps from AWS to an HTML5-compatible web browser, delivers graphics-intensive applications for workloads such as creative design, gaming and engineering that rely on DirectX, OpenGL or OpenCL for hardware acceleration. The managed AppStream service eliminates the need for IT teams to recode applications to be browser-compatible.
The newest AWS GPU instance type for AppStream, Graphics Design, cuts the cost of streaming graphics applications up to 50%, according to the company. AWS customers can launch Graphics Design GPU instances or create a new instance fleet with the Amazon AppStream 2.0 console or AWS software development kit. AWS’ Graphics Design GPU instances come in four sizes that range from 2-16 virtual CPUs and 7.5-61 gibibytes (GiB) of system memory, and run on AMD FirePro S7150x2 Server GPUs with AMD Multiuser GPU technology.
Developers can now also select between two types of Amazon AppStream instance fleets in a streaming environment. Always-On fleets provide instant access to apps, but charge fees for every instance in the fleet. On-Demand fleets charge fees for instances only when end users are connected, plus an hourly fee for stopped instances, but there is a delay when an end user accesses the first application.
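The cost tradeoff between the two fleet types can be sketched as a break-even calculation; the rates and the stopped-instance fee below are hypothetical placeholders, not AWS pricing:

```python
# Sketch: comparing Always-On vs. On-Demand AppStream fleet costs.
# All rates are invented for illustration.
def always_on_cost(instances: int, hours: float, rate: float) -> float:
    """Always-On bills every instance for every hour in the period."""
    return instances * hours * rate

def on_demand_cost(connected_hours: float, rate: float,
                   stopped_fee: float, total_hours: float) -> float:
    """On-Demand bills connected time at the full rate, plus a smaller
    hourly fee for the remaining (stopped) hours."""
    return connected_hours * rate + (total_hours - connected_hours) * stopped_fee
```

The structure shows why On-Demand wins when usage is sparse: the savings scale with the fraction of hours users are not connected.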
New features and support
In addition to the new AWS GPU instance type, the cloud vendor rolled out several other features this month, including:
ELB adds network balancer. AWS Network Load Balancer helps maintain low latency during spikes on a single static IP address per Availability Zone. Network Load Balancer — the second offshoot of Elastic Load Balancing features, following Application Load Balancer — routes connections to Virtual Private Cloud-based Elastic Compute Cloud (EC2) instances and containers.
New edge locations on each coast. Additional Amazon CloudFront edge locations in Boston and Seattle improve end user speed and performance when they interact with content via CloudFront. AWS now has 95 edge locations across 50 cities in 23 countries.
X1 instance family welcomes new member. The AWS x1e.32xlarge instance joins the X1 family of memory-optimized instances, with the most memory of any EC2 instance — 3,904 GiB of DDR4 instance memory — to help businesses reduce latency for large databases, such as SAP HANA. The instance is also AWS’ most expensive at about $16-$32 per hour, depending on the environment and payment model.
AWS Config opens up support. The AWS Config service, which enables IT teams to manage service and resource configurations, now supports both DynamoDB tables and Auto Scaling groups. Administrators can integrate those resources to evaluate the health and scalability of their cloud deployments.
Start and stop on the Spot. IT teams can now stop Amazon EC2 Spot Instances when an interruption occurs and then start them back up as needed. Previously, Spot Instances were terminated when prices rose above the user-defined level. AWS saves the EBS root device, attached volumes and the data within those volumes; those resources restore when capacity returns, and instances maintain their ID numbers.
EC2 expands networking performance. The largest instances of the M4, X1, P2, R4, I3, F1 and G3 families now use Elastic Network Adapter (ENA) to reach a maximum bandwidth of 25 Gb per second. The ENA interface enables both existing and new instances to reach this capacity, which boosts workloads reliant on high-performance networking.
New Direct Connect locations. Three new global AWS Direct Connect locations allow businesses to establish dedicated connections to the AWS cloud from an on-premises environment. New locations include: Boston, at Markley, One Summer Data Center for US-East-1; Houston, at CyrusOne West I-III data center for US-East-2; and Canberra, Australia, at NEXTDC C1 Canberra data center for AP-Southeast-2.
Role and policy changes. Several changes to AWS Identity and Access Management (IAM) aim to better protect an enterprise’s resources in the cloud. A policy summaries feature lets admins identify errors and evaluate permissions in the IAM console to ensure each action properly matches to the resources and conditions it affects. Other updates include a wizard for admins to create the IAM roles, and the ability to delete service-linked roles through the IAM console, API or CLI — IAM ensures that no resources are attached to a role before deletion.
Six new data streams. Amazon Kinesis Analytics, which enables businesses to process and query streaming data in an SQL format, has six new types of stream processes to simplify data processing: STEP(), LAG(), TO_TIMESTAMP(), UNIX_TIMESTAMP(), REGEX_REPLACE() and SUBSTRING(). AWS also increased the service’s capacity to process higher data volume streams.
Get DevOps notifications. Additional notifications from AWS CodePipeline for stage or action status changes enable a DevOps team to track, manage and act on changes during continuous integration and continuous delivery. CodePipeline integrates with Amazon CloudWatch to enable Amazon Simple Notification Service messages, which can trigger an AWS Lambda function in response.
AWS boosts HIPAA eligibility. Amazon’s HIPAA Compliance Program now includes Amazon Connect, AWS Batch and two Amazon Relational Database Service (RDS) engines, RDS for SQL Server and RDS for MariaDB — all six RDS engines are HIPAA eligible. AWS customers that sign a Business Associate Agreement can use those services to build HIPAA-compliant applications.
RDS for Oracle adds features. The Amazon RDS for Oracle engine now supports Oracle Multimedia, Oracle Spatial and Oracle Locator features, with which businesses can store, manage and retrieve multimedia and multi-dimensional data as they migrate databases from Oracle to AWS. The RDS Oracle engine also added support for multiple Oracle Application Express versions, which enables developers to build applications within a web browser.
Assess RHEL security. Amazon Inspector expanded support for Red Hat Enterprise Linux (RHEL) 7.4 assessments, to run Vulnerabilities & Exposures, Amazon Security Best Practices and Runtime Behavior Analysis scans in that RHEL environment on EC2 instances.
BPM in cloud evolves to suit line of business, IoT
While on-premises BPM tools have caused a tug of war between lines of business and IT, the cloud helps appease both sides. Here's what to expect from this cloud BPM trend and more.
Business process management tools rise in importance as companies try to make better use -- and reuse -- of IT assets. And, when coupled with cloud, this type of software can benefit from a pay-as-you-go model for more efficient cost management, as well as increased scalability.
As a result, cloud-based BPM has become a key SaaS tool in the enterprise. Looking forward, the growth of BPM in cloud will drive three major trends that enterprise users should track.
BPM is designed to encourage collaboration between line departments and IT, but the former group often complains that BPM tools hosted in the data center favor the IT point of view in both emphasis and design. To avoid this and promote equality between these two groups, many believe that BPM tools have to move to neutral territory: the cloud.
Today, BPM supports roughly a dozen different roles and is increasingly integrated with enterprise architecture practices and models. This expands the scope of BPM software, as well as the number of non-IT professionals who use it. Collaboration and project management, for example, account for most of the new features in cloud BPM software.
Collaboration features in cloud-based BPM include project tools and integration with social networks. While business people widely use platforms like LinkedIn for social networking, IT professionals use other wiki-based tools. Expect to see a closer merger between the two.
This push for a greater line department focus in BPM could also divide the BPM suites themselves. While nearly all the cloud BPM products are fairly broad in their application, those from vendors with a CIO-level sales emphasis, such as IBM's Business Process Manager on Cloud or Appian, focus more on IT. NetSuite, on the other hand, is an example of cloud BPM software with a broader organizational target.
Software practices influence BPM
Cloud, in general, affects application design and development, which puts pressure on BPM to accommodate changes in software practices. Cloud platforms, for example, have encouraged a more component-driven vision for applications, which maps more effectively to business processes. This will be another factor that expands line department participation in BPM software.
BPM in cloud encourages line organizations to take more control over applications. The adoption of third-party tools, rather than custom development, helps them target specific business problems. This, however, is a double-edged sword: It can improve automated support for business processes but also duplicate capabilities and hinder workflow integration among organizations. IT and line departments will have to define a new level of interaction.
The third trend to watch around BPM in cloud involves internet of things (IoT) and machine-to-machine communications. These technologies presume that sensors will activate processes, either directly or through sensor-linked analytics. This poses a challenge for BPM, because it takes human judgment out of the loop and requires instead that business policies anticipate events and responses that would previously have involved human review. That shifts the emphasis of BPM toward automated policies, which, in the past, has led to the absorption of BPM into things like Business Process Modeling Language, and puts the focus back on IT.
In theory, business policy automation has always been within the scope of BPM. But, in practice, BPM suites have offered only basic support for policy automation or even for the specific identification of business policies. It's clear that this will change and that policy controls to guide IoT deployments will be built into cloud-based BPM.
A strategy for protection and successful recovery from ransomware includes everything from monitoring tools to offline storage. Organizations should use multiple methods.
CHICAGO -- The VeeamON session on protection from ransomware Wednesday started with a question for attendees: How many had experienced a ransomware attack at their organization?
Dozens of hands went up.
Ransomware attacks continue to make news. In just the last couple of months, high-profile victims included the city of Atlanta and a school district in Massachusetts. Many attacks, though, go unreported or unmentioned to the general public.
A layered defense is important to be able to protect and recover from ransomware, Rick Vanover, Veeam's director of product strategy, told the packed room of close to 200 people.
Backup, DR, education all play a role
Using offline storage to create an air gap is arguably the most technically efficient method of protection against ransomware. Tape is a good fit for air gapping, because you can take it off site, where it is not connected to the network or any other devices.
"The one reason I love tape is its resiliency in this situation," Vanover said.
Other offline or semioffline storage choices include replicated virtual machines, primary storage snapshots, Veeam Cloud Connect backups that aren't connected directly to the backup infrastructure and rotating hard drives.
Educating users is another major component of a comprehensive strategy for protection from ransomware.
"No matter how often you do it, you can't do it enough," said Joe Marton, senior systems engineer at Veeam.
Advice for users includes being overly careful about clicking links and attachments and telling IT immediately if there appears to be an issue.
IT should have visibility into suspicious behavior using monitoring capabilities. For example, Veeam ONE includes a predefined alarm that triggers if it detects possible ransomware activity.
Organizations as a whole should continue to follow the standard "3-2-1" backup plan of having three different copies of data on two different media types, one of which is off site or offline.
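The 3-2-1 rule above lends itself to an automated compliance check; the copy records below are a hypothetical shape chosen for illustration:

```python
# Sketch: verify the 3-2-1 backup rule over an inventory of copies --
# at least 3 copies, on at least 2 media types, at least 1 off site.
def satisfies_3_2_1(copies):
    """copies: list of dicts with 'media' (str) and 'offsite' (bool) keys."""
    media_types = {c["media"] for c in copies}
    return (
        len(copies) >= 3
        and len(media_types) >= 2
        and any(c["offsite"] for c in copies)
    )
```

An offline copy (such as tape in a vault) satisfies both the off-site leg of the rule and the air-gap advice given earlier in the session.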
From a disaster recovery angle, DR isn't just for natural disasters.
"Ransomware can be a disaster," Marton said.
That means an organization's DR process applies to ransomware attacks.
The organization should also document its recovery plan, specifically one for ransomware incidents.
Matt Fonner, a severity one engineer on the Veeam support team, said every week he deals with two or three restores from a ransomware attack.
Ransomware, protection continue to evolve
Vanover said later that he spent about 25 minutes following the presentation talking with people about attacks and protection from ransomware. One person told him that her SMB had been hit and decided to pay the ransom, rather than deal with an inferior restore program -- that wasn't Veeam.
Vanover said organizations should classify data to figure out which level of resiliency is needed. Not everything needs to be in that most expensive tier.
Vanover said the ransomware landscape has changed from a year ago, when he also gave a presentation on ransomware protection at VeeamON.
"The ransomware story does change every time you write it," he said.
One new twist is that ransomware is attacking backups themselves. In a common scenario, ransomware will infiltrate a backup and stay dormant until the data is recovered back to the network following an attack on primary storage.
That's where offline storage comes in, Vanover said.
Data protection vendors are also starting to add specific features to protect backups from ransomware. For example, Asigra Cloud Backup has embedded malware engines in the backup and recovery stream, and CloudBerry Backup detects possible cases of ransomware in backups.
Vanover said if he drew up another presentation in a month or two, it would probably be different.
"We have to always evolve to the threatscape," he said.
Sage says the move will boost its cloud financial management software and U.S. presence. Analysts think it's a good technology move but are unsure about the market impact.
Sage Software intends to expand both its cloud offerings and its customer base in North America.
Sage, an ERP vendor based in Newcastle upon Tyne, U.K., is acquiring Intacct, a San Jose-based vendor of financial management software, for $850 million, according to the company.
Sage's core products include the Sage X3 ERP system, the Sage One accounting and invoicing application and Sage Live real-time accounting software. The company's products are aimed primarily at SMBs, and Sage claims that it has just over 6 million users worldwide, with the majority of these in Europe.
Intacct provides SaaS financial management software to SMBs, with most of its customer base in North America, according to the company.
The move to acquire Intacct demonstrates Sage's determination to "win the cloud" and expand its U.S. customer base, according to a Sage press release announcing the deal.
"Today we take another major step forward in delivering our strategy and we are thrilled to welcome Intacct into the Sage family," Stephen Kelly, Sage CEO, said in the press release. "The acquisition of Intacct supports our ambitions for accelerating growth by winning new customers at scale and builds on our other cloud-first acquisitions, strengthening the Sage Business Cloud. Intacct opens up huge opportunities in the North American market, representing over half of our total addressable market."
Combining forces makes sense for Intacct because the company shares the same goals as Sage, according to Intacct CEO Robert Reid.
"We are excited to become part of Sage because we are relentlessly focused on the same goal -- to deliver the most innovative cloud solutions for our customers," Reid said in the press release. "Intacct is growing rapidly in our market and we are proud to be a recognized customer satisfaction leader across midsize, large and global enterprise businesses. By combining our strengths with those of Sage, we can jointly accelerate success for our customers."
Intacct brings real cloud DNA to financial management software
Intacct's specialty in cloud financial management software should complement Sage's relatively weak financial functionality, according to Cindy Jutras, president of the ERP consulting firm Mint Jutras.
"[Intacct] certainly brings real cloud DNA, and a financial management solution that would be a lot harder to grow out of than the solutions they had under the Sage One brand," Jutras said. "It also has stronger accounting than would be embedded within Sage X3. I would expect X3 to still be the go-to solution for midsize manufacturers since that was never Intacct's target, but Intacct may very well become the go-to ERP for service companies, like professional services."
Jutras also noted that Intacct was one of the first applications to address the new ASC 606 revenue recognition rules, something Sage has not yet done. Sage's cloud strategy has been murky up to this point, and Jutras was unsure whether this move would clarify it.
"It doesn't seem any of its existing products -- except their new Sage Live developed on the Salesforce platform -- are multi-tenant SaaS, and up until recently they seemed to be going the hybrid route by leaving ERP on premises and surrounding it with cloud services," she said.
The deal should strengthen Sage's position in the SMB market, according to Chris Devault, manager of software selection at Panorama Consulting Solutions.
"This is a very good move for Sage, as it will bring a different platform and much needed technology to help Sage round out their small to mid-market offerings," Devault said.
Getting into the U.S. market
Overall it appears to be a positive move for Sage, both from a technology and market perspective, according to Holger Mueller, vice president and principal analyst at Constellation Research Inc.
"It's a good move by Sage to finally tackle finance in the cloud and get more exposure to the largest software market in the world, the U.S.," Mueller said. "But we see more than finance moving to the cloud, as customers are starting to look for or demand a complete suite to be available on the same platform. Sage will have to move fast to integrate Intacct and get to a compelling cloud suite roadmap."
Time will also tell if this move will position Sage better in the SMB ERP landscape.
Risk analytics tools are more and more critical for CFOs seeking to improve operational efficiency. Just one problem: It can be hard to figure out just what those tools are.
As the use of big data becomes a documented phenomenon across corporate America, risk analytics -- that is, using analytics to collect, analyze and measure real-time data to predict risk and make better business decisions -- is also becoming more popular.
That's according to Sanjaya Krishna, U.S. digital risk consulting leader at KPMG in Washington, D.C.
By using risk analytics software, CFOs can improve operational efficiency and keep their companies' risk exposure at acceptable levels. But where exactly does a CFO go to "get" risk analytics tools?
The search for risk analytics software
"Risk analytics is a fairly broad term, so there are a number of things that come to mind when we talk about risk analytics," Krishna said. "There are a number of specialized risk analytics products. There are also broader analytic packages that can … 'check the risk analytics box' to a certain extent, though the package isn't built to be a risk analytics solution."
There are products, such as KPMG Risk Front, that focus on providing customized risk analytics based on public internet commentary, Krishna said. And KPMG's Continuous Monitoring product provides for customized risk analytics based on internal transactional data.
There are also a number of established enterprise governance, risk and compliance packages that give companies a way of housing and analyzing all sorts of identified risks at the enterprise level or within certain business areas, he said.
Finally, there are highly specialized, industry-specific risk analytic tools, especially in the financial services industry, according to Krishna.
Risk analytics tools, regardless of the industry, have been around for a while, said Danny Baker, vice president of market strategy, financial and risk management solutions at Fiserv Inc., a provider of financial services technology based in Brookfield, Wis.
"They have historically been purposed for less strategic items -- they were seen as just a checkbox to please the regulators," he said.
Now, though, risk analytics software has evolved from tactical point solutions into tools that help organizations optimize their strategic futures.
"Especially for banks and credit unions, risk analytics tools are focused more on strategy and the need to integrate with other departments, like finance," Baker said. "The integration across departments is key."
But it's not just the tools that are important.
Sometimes a company may even use a database as a risk analytics tool, said Ken Krupa, enterprise CTO at MarkLogic Corp., an Enterprise NoSQL database provider in San Carlos, Calif.
Taking the broad approach to the data quality issue
"There are, indeed, specialized products, as well as packages that play a role in risk analytics," Krupa said. "These third-party suites of tools do a lot of the math on where there are risks, but if the math is based on bad or incomplete data, risk cannot be adequately addressed."
What's more, oftentimes, a company doesn't have a clear picture of the quality of the data that it's working with because making that data available from upstream systems depends on complex extract, transform and load (ETL) processes supported by a large team of developers of varying skill sets, he said.
Therefore, there's actually an inherent risk in not having transparent access to a 360-degree view of the data -- mainly caused by data in silos. However, leveraging a database that can successfully integrate the many silos of data can go a long way toward minimizing data quality risks, according to Krupa.
"You may not initially think of a database as a risk analytics tool, but the right kind of database serves a critical role in organizing all of the inputs that risk analytics tools use," he said. "The right type of database -- one that minimizes ETL dependency and provides a clear view of all kinds of data, like that offered by MarkLogic -- can make risk analytics better, faster and with less cost."
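Krupa's point about silos and data quality can be sketched in a few lines: before any risk math runs, consolidating per-system records into one view makes it obvious which entities are missing inputs. The sketch below is illustrative only (not MarkLogic-specific); the silo names, field names and account IDs are hypothetical.

```python
def merge_silos(*silos):
    """Combine per-system records, keyed by entity ID, into one unified view."""
    unified = {}
    for silo in silos:
        for entity_id, fields in silo.items():
            # Later silos fill in fields the earlier ones lacked.
            unified.setdefault(entity_id, {}).update(fields)
    return unified

def incomplete_entities(unified, required_fields):
    """Entities missing any field the downstream risk analytics depend on."""
    return {entity_id for entity_id, fields in unified.items()
            if not required_fields.issubset(fields)}

# Two hypothetical silos with partial, overlapping data -- the usual ETL headache.
trading = {"acct-1": {"exposure": 1_200_000}, "acct-2": {"exposure": 50_000}}
crm     = {"acct-1": {"region": "EMEA"}}

view = merge_silos(trading, crm)
gaps = incomplete_entities(view, {"exposure", "region"})
print(gaps)  # acct-2 has no region on file, so any risk score for it rests on incomplete data
```

The point is not the merge itself but the visibility: once the silos sit in one view, "how much of our risk math runs on incomplete data?" becomes a one-line query instead of an ETL archaeology project.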
Anand Venugopal, head of StreamAnalytix product management and go-to-market at Impetus Technologies Inc. in Los Gatos, Calif., concurred with Krupa that bringing all a company's data into one place is critical to enabling better risk-based business decisions.
Since many organizations are in the process of modernizing their infrastructures -- particularly around analytics platforms -- they are moving away from point solutions if they can, he said.
The new paradigm is bringing together all the relevant information -- if not in one place, then at least having the mechanisms to bring it together on demand -- and then doing the analytics in one place, Venugopal said.
"So, what is beyond proven is that analytics and decision-making [are] more accurate not with more advanced algorithms, but with more data, i.e., diverse data, and more data sources, i.e., 25 different data sources as opposed to five different data sources," he said.
It all points to the fact that even with moderate algorithms, more data gives organizations better results than trying to use "rocket science algorithms" with limited data, Venugopal said.
"What that means to enterprise technology is that they are building risk platforms on top of the modern data warehouses, which combines a variety of internal and external data sets, and trying to combine real-time data feeds -- real-time triggers, real-time market factors, currency risk, etc. -- which was not part of the previous generation's capabilities," he said.
Single-point products can only address limited portions of this because that's how they're designed; enterprise risk can only be covered with a broader approach, according to Venugopal.
"I think the trend [in enterprises] is more toward building sophisticated risk strategies and applications, and they're building out those and they're using core big data technology components like the Hadoop stack, like the Spark stack and tools like Impetus' Extreme Analytics," he said.
Custom risk analytics software and other considerations
Organizations looking to implement technology to mitigate risk have to consider a few additional things, including the usability and feature set, according to Rajiv Shah, senior solutions architect for GigaSpaces Technologies Inc. in New York City.
"For instance, high-volume traders need a solution that won't interfere with the data sync that is critical to being up to the microsecond," he said.
A product that offers multilevel dashboarding is also key, according to Shah.
For example, the data a CFO needs to know is far different than, say, what a risk or compliance officer needs to know, he said.
"Enterprises should consider a solution that takes these differences into account, making sure that a dashboard can become detailed and granular, while also offering a 50,000-foot view," Shah said. "And a strong risk mitigation strategy and tool set should be able to identify and simulate a wide range of scenarios."
According to Fiserv's Baker, it's important that a risk mitigation technology doesn't hinder a company's regular operations.
"For larger organizations, it often becomes critical to build your own solution to meet the needs," he said.
Mike Juchno, partner at Ernst & Young Advisory Services, agreed that there is a custom tool component to risk analytics.
"Many of our clients already have these tools -- they're some sort of predictive analytics tool like SPSS, like SAS, like R, and some visualization on top of them, like Tableau or Power BI," he said. "So, we are able to build something custom to deal with a risk that may be unique to them or their industry or their particular situation. So, we typically find that it's a custom approach."
When it comes to looking for an off-the-shelf product, CFOs often hear about risk analytics tools from their peer organizations -- groups that come together to share information about tools.
"Of course, you're going to also look toward other companies or competitors that are doing risk management and performance management well and see what tools they have in place," Baker said. "The most high-performing clients I see embed their tools into not only solving current risk, but also expecting and forecasting future risk."
Although an organization can go to Fiserv and ask for a menu of risk analytics tools, it's more successful if both the company and Fiserv drill down into what the organization is trying to accomplish and customize the tools from there, according to Baker.
Most organizations want to make better strategic decisions, as the challenges of growth are greater now, and improve their forward-looking, strategic discipline and processing, he said.
The focus has shifted to agility and efficiency when implementing risk analytics tools, Baker said.
"The high-performing Fiserv clients I work with have integrated risk analytics tools into finance operations," he said. "These advanced solutions offer an integrative solution that also forecasts and plans for the strategic future."
Organizations are increasingly being thoughtful with their risk processes, he said. And in recent years questions to vendors have evolved from "what are your risk tools?" to "how do I get better information to make decisions for the future?"