News
New Quantum backup appliance brings Veeam to tape
Posted by Thang Le Toan on 17 August 2018 12:16 AM

Integrated disk backup appliances are now common, but Quantum and Veeam have taken the converged architecture to tape backups to eliminate the need for a dedicated external server.

Quantum and Veeam today launched what they call a converged tape appliance, which integrates Veeam data protection software with Quantum backup.

Until now, Veeam Backup & Replication users could only create tape backups by connecting a dedicated external server to host the Veeam software. The new Quantum backup appliance removes that layer of complexity by installing a blade server into its Scalar i3 tape library. The server comes preconfigured to be Veeam-ready, with Windows Server 2016 -- Veeam's server OS of choice -- preinstalled.

By installing a server in a tape device, Quantum has created the industry's first tape appliance with built-in compute power.

"In some ways, this is a new category of product," said Eric Bassier, senior director of product management and marketing at Quantum, based in Colorado Springs, Colo. "In Veeam environments, every other tape vendor's tape requires a dedicated external server that runs the Veeam tape server software ... We took that server, we built it into the tape library so that we eliminate that physical server, and we make it that much simpler and faster to create tape for offline protection."

Customers can buy a device with one IBM SAS LTO-8 tape drive for $17,000 or a two-drive version for $23,000.

Tape storage has been around for a long time and "still remains one of the lowest-cost, long-term ways to store your data," said Ken Ringdahl, vice president of global alliance architecture at Veeam, based in Baar, Switzerland. But Bassier stressed that the new Quantum backup appliance's true role in a modern backup and recovery system is to protect against ransomware.

[Image: Quantum's Scalar i3 converged tape appliance for Veeam. Source: Quantum]

"It's offline. Data stored on tape is not connected to the network in any way. Because it's offline, it is the best and most effective protection against ransomware," Bassier said.

The ransomware threat has brought on a renewed interest in tape for backup.

Edwin Yuen, senior analyst at Enterprise Strategy Group in Milford, Mass., said ransomware has gotten more sophisticated over the past 12 to 18 months, and tape provides an offline backup method.

"Ransomware is not an acute event," Yuen said. "You're getting infected, and it's sitting there, waiting. Oftentimes, it's mapping out or finding other backups or other restores."

Storing data offline in a tape cartridge like the new Quantum backup option provides an air gap between live production systems and backed up data that is not possible to achieve with disk. That air gap can prevent ransomware from infecting live data.


"If you really think about tape, it's one of those technologies that got dismissed, but never actually went away. It was consistently used; it just wasn't in vogue, so to speak. But there's certainly been a renewed interest in new uses for tape," Yuen said. "This integration by Quantum and Veeam really makes it a lot easier to bring tape into this configuration, so you can take advantage of that air gap."

According to Yuen, thanks to market maturity and the age of magnetic tape technology, there are now only a few major companies that manufacture tape libraries. This is why Yuen said he finds the partnership between Quantum and Veeam especially noteworthy, as it demonstrates a relatively young company showing interest in tape.

"The fact that these two companies came together shows interest across the board," Yuen said. "It's not a 20-year standard industry company, but one that's been an up-and-comer now getting into the tape market through this appliance."





EnGenius ezMaster Network Management Software: Become a Master (ezMaster 0.13.13)
Posted by Thang Le Toan on 17 August 2018 12:12 AM

Link: https://www.engeniustech.com/ezmaster-network-management-software.html#download

Save Time, Build Revenue and Improve Efficiencies

Complete Scalability

Start small and grow; control an office network or multiple networks in separate buildings. Expand and easily see your devices across town, states or the country.

Unlimited Flexibility

EnGenius hardware and ezMaster network management software combine to add flexibility and management simplicity when you need it.

Unmatched Affordability

Affordable, predictable costs and a lower TCO per deployment. No per-AP licensing or annual subscription fees.

 

Rich Reporting & Analytics

Pinpoint & address potential problems before they affect users with invaluable reporting, analytics & real-time monitoring tools.

 

New ezMaster Users

Manage, monitor and troubleshoot Neutron and EnTurbo hardware. Download the ezMaster package below that matches your environment:

  1. If you have no virtual machine platform currently running on your PC or server
  2. If you are currently running VMware Player 7 or Oracle VirtualBox
  3. If you are currently running VMware vSphere ESXi
  4. If you are currently running Windows Server 2012 R2 Hyper-V
  5. If you are currently running Windows 10 64-bit Hyper-V

Download

Existing ezMaster Users

Make sure you have the newest features & update to the latest version of ezMaster

Update





non-volatile storage (NVS)
Posted by Thang Le Toan on 16 August 2018 05:26 AM

Non-volatile storage (NVS) is a broad collection of technologies and devices that do not require a continuous power supply to retain data or program code persistently on a short- or long-term basis.

 

Three common examples of NVS devices that persistently store data are tape, a hard disk drive (HDD) and a solid-state drive (SSD). The term non-volatile storage also applies to the semiconductor chips that store the data or controller program code within devices such as SSDs, HDDs, tape drives and memory modules.

Many types of non-volatile memory chips are in use today. For instance, NAND flash memory chips commonly store data in SSDs in enterprise and personal computer systems, USB sticks, and memory cards in consumer devices such as mobile telephones and digital cameras. NOR flash memory chips commonly store controller code in storage drives and personal electronic devices.

Non-volatile storage technologies and devices vary widely in the manner and speed in which they transfer data to and retrieve data or program code from a chip or device. Other differentiating factors that have a significant impact on the type of non-volatile storage a system manufacturer or user chooses include cost, capacity, endurance and latency.

For example, an SSD equipped with NAND flash memory chips can program, or write, and read data faster and at lower latency through electrical mechanisms than a mechanically addressed HDD or tape drive that uses a head to write and read data to magnetic storage media. However, the per-bit price to store data in a flash-based SSD is generally higher than the per-bit cost of an HDD or tape drive, and flash SSDs can sustain a limited number of write cycles before they wear out.

Volatile vs. non-volatile storage devices

The key difference between volatile and non-volatile storage devices is whether or not they are able to retain data in the absence of a power supply. Volatile storage devices lose data when power is interrupted or turned off. By contrast, non-volatile devices are able to keep data regardless of the status of the power source.

Common types of volatile storage include static random access memory (SRAM) and dynamic random access memory (DRAM). Manufacturers may add battery power to a volatile memory device to enable it to persistently store data or controller code.

Enterprise and consumer computing systems often use a mix of volatile and non-volatile memory technologies, and each memory type has advantages and disadvantages. For instance, SRAM is faster than DRAM and well suited to high-speed caching. DRAM is less expensive to produce and requires less power than SRAM, and manufacturers often use it to store program code that a computer needs to operate.

[Chart: Comparison of non-volatile memory types]

By contrast, non-volatile NAND flash is slower than SRAM and DRAM, but it is cheaper to produce. Manufacturers commonly use NAND flash memory to store data persistently in business systems and consumer devices. Storage devices such as flash-based SSDs access data at a block level, whereas SRAM and DRAM support random data access at a byte level.
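To make the block-level versus byte-level distinction concrete, here is a minimal Python sketch. The file path, block size and buffer are hypothetical stand-ins for a block device and a region of RAM; it is an illustration, not how any particular drive or operating system implements access.

    BLOCK_SIZE = 4096  # a typical transfer size for a block storage device

    def read_block(path, block_number):
        """Block-level access: an SSD or HDD returns data a whole block at a time."""
        with open(path, "rb") as f:
            f.seek(block_number * BLOCK_SIZE)
            return f.read(BLOCK_SIZE)

    # Byte-level access: SRAM and DRAM can be addressed one byte at a time.
    buffer = bytearray(BLOCK_SIZE)   # stands in for a region of memory
    buffer[123] = 0xFF               # a single byte is changed in place

    # With block storage, changing one byte still means handling a whole block:
    # read the block, modify it in memory, then write the full 4 KiB back out.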

Like NAND, NOR flash is less expensive to produce than volatile SRAM and DRAM. NOR flash costs more than NAND flash, but it can read data faster than NAND, making it a common choice to boot consumer and embedded devices and to store controller code in SSDs, HDDs and tape drives. NOR flash is generally not used for long-term data storage due to its poor endurance.

Trends and future directions

Manufacturers are working on additional types of non-volatile storage to try to lower the per-bit cost to store data and program code, improve performance, increase endurance levels and reduce power consumption.

For instance, manufacturers developed 3D NAND flash technology in response to physical scaling limitations of two-dimensional, or planar, NAND flash. They are able to reach higher densities at a lower cost per bit by vertically stacking memory cells with 3D NAND technology than they can by using a single layer of memory cells with planar NAND.

[Chart: Non-volatile memory use cases]

Emerging 3D XPoint technology, co-developed by Intel Corp. and Micron Technology Inc., offers higher throughput, lower latency, greater density and improved endurance over more commonly used NAND flash technology. Intel ships 3D XPoint technology under the brand name Optane in SSDs and in persistent memory modules intended for data center use. Persistent memory modules are also known as storage class memory.

[Image: A 3D XPoint technology die. Source: Micron Technology Inc.]

Using non-volatile memory express (NVMe) technology over a computer's PCI Express (PCIe) bus in conjunction with flash storage and newer options such as 3D XPoint can further accelerate performance and reduce latency and power consumption. NVMe offers a more streamlined command set to process input/output (I/O) requests with PCIe-based SSDs than the Small Computer System Interface (SCSI) command set does with Serial Attached SCSI (SAS) storage drives and the Advanced Technology Attachment (ATA) command set does with Serial ATA (SATA) drives.

[Image: Everspin's EMD3D064M 64 Mb DDR3 ST-MRAM in a ball grid array package. Source: Everspin Technologies Inc.]

Emerging non-volatile storage technologies currently in development or in limited use include ferroelectric RAM (FRAM or FeRAM), magnetoresistive RAM (MRAM), phase-change memory (PCM), resistive RAM (RRAM or ReRAM) and spin-transfer torque magnetoresistive RAM (STT-MRAM or STT-RAM).





SSD write cycle
Posted by Thang Le Toan on 16 August 2018 05:24 AM

An SSD write cycle is the process of programming data to a NAND flash memory chip in a solid-state storage device.

A block of data stored on a flash memory chip must be electrically erased before new data can be written, or programmed, to the solid-state drive (SSD). The SSD write cycle is also known as the program/erase (P/E) cycle.

When an SSD is new, all of the blocks are erased, and new, incoming data is written directly to the flash media. Once the SSD has filled all of the free blocks on the flash storage media, it must erase previously programmed blocks to make room for new data. Valid data in blocks that also contain invalid or unnecessary data is copied to other blocks, freeing the old blocks to be erased. The SSD controller periodically erases the invalidated blocks and returns them to the free block pool.

The background process an SSD uses to clean out the unnecessary blocks and make room for new data is called garbage collection. The garbage collection process is generally invisible to the user, and the programming process is often identified simply as a write cycle, rather than a write/erase or P/E cycle.
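The interaction between overwrites, stale pages and garbage collection can be sketched in a few lines of Python. This is a deliberately tiny toy model (eight blocks of four pages and a fixed rewrite pattern), not a description of any real controller's firmware:

    PAGES_PER_BLOCK = 4
    NUM_BLOCKS = 8

    blocks = [["free"] * PAGES_PER_BLOCK for _ in range(NUM_BLOCKS)]  # page states
    mapping = {}                     # logical page -> (block, page) where it now lives
    erase_counts = [0] * NUM_BLOCKS  # each erase is one P/E cycle of wear

    def find_free_page():
        for b, pages in enumerate(blocks):
            for p, state in enumerate(pages):
                if state == "free":
                    return b, p
        return None

    def garbage_collect():
        """Erase blocks whose pages are all invalid and return them to the free pool."""
        for b, pages in enumerate(blocks):
            if all(s == "invalid" for s in pages):
                blocks[b] = ["free"] * PAGES_PER_BLOCK
                erase_counts[b] += 1

    def write_logical_page(lpn):
        """(Re)write a logical page; the old physical copy becomes invalid."""
        if lpn in mapping:                    # an overwrite leaves a stale copy behind
            old_b, old_p = mapping[lpn]
            blocks[old_b][old_p] = "invalid"
        loc = find_free_page()
        if loc is None:
            garbage_collect()                 # out of free pages: reclaim stale blocks
            loc = find_free_page()
        b, p = loc
        blocks[b][p] = "valid"
        mapping[lpn] = (b, p)

    for i in range(200):                      # keep rewriting a small working set
        write_logical_page(i % 8)
    print("erases per block (wear):", erase_counts)

Running the sketch shows every block accumulating erases even though the host only ever rewrites eight logical pages, which is why endurance is counted in P/E cycles rather than host writes.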

Why write cycles are important

A NAND flash SSD is able to endure only a limited number of write cycles. The program/erase process causes a deterioration of the oxide layer that traps electrons in a NAND flash memory cell, and the SSD will eventually become unreliable, wear out and lose its ability to store data.

The number of write cycles, or endurance, varies based on the type of NAND flash memory cell. An SSD that stores a single data bit per cell, known as single-level cell (SLC) NAND flash, can typically support up to 100,000 write cycles. An SSD that stores two bits of data per cell, commonly referred to as multi-level cell (MLC) flash, generally sustains up to 10,000 write cycles with planar NAND and up to 35,000 write cycles with 3D NAND. The endurance of SSDs that store three bits of data per cell, called triple-level cell (TLC) flash, can be as low as 300 write cycles with planar NAND and as high as 3,000 write cycles with 3D NAND. The latest quadruple-level cell (QLC) NAND will likely support a maximum of 1,000 write cycles.

[Chart: Comparison of NAND flash memory types]

As the number of bits per NAND flash memory cell increases, the cost per gigabyte (GB) of the SSD declines. However, the endurance and the reliability of the SSD are also lower.

[Image: How NAND flash writes work. Source: Micron]

Common write cycle problems

Challenges that SSD manufacturers have had to address to use NAND flash memory to store data reliably over an extended period of time include cell-to-cell interference as the dies get smaller, bit failures and errors, slow data erases and write amplification.

Manufacturers have enhanced the endurance and reliability of all types of SSDs through controller software-based mechanisms such as wear-leveling algorithms, external data buffering, improved error correction code (ECC) and error management, data compression, overprovisioning, better internal NAND management and block wear-out feedback. As a result, flash-based SSDs have not worn out as quickly as users once feared they would.

Vendors commonly offer SSD warranties that specify a maximum number of device drive writes per day (DWPD) or terabytes written (TBW). DWPD is the number of times the entire capacity of the SSD can be overwritten on a daily basis during the warranty period. TBW is the total amount of data that an SSD can write before it is likely to fail. Vendors of flash-based systems and SSDs often offer guarantees of five years or more on their enterprise drives.
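The two warranty metrics convert into one another with simple arithmetic. A quick Python sketch, using made-up capacity, DWPD and warranty figures purely for illustration rather than any vendor's actual specification:

    def dwpd_to_tbw(capacity_tb, dwpd, warranty_years):
        """TBW implied by overwriting the full drive dwpd times per day for the warranty."""
        return capacity_tb * dwpd * 365 * warranty_years

    def tbw_to_dwpd(capacity_tb, tbw, warranty_years):
        """Daily full-drive overwrites implied by a TBW rating over the warranty period."""
        return tbw / (capacity_tb * 365 * warranty_years)

    # Hypothetical 3.84 TB enterprise SSD rated at 1 DWPD over a 5-year warranty
    print(dwpd_to_tbw(3.84, 1, 5))       # 7008.0 TBW
    print(tbw_to_dwpd(3.84, 7008, 5))    # 1.0 DWPD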

Manufacturers sometimes specify the type of application workload for which an SSD is designed, such as write-intensive, read-intensive or mixed-use. Some vendors allow the customer to select the optimal level of endurance and capacity for a particular SSD. For instance, an enterprise user with a high-transaction database might opt for a greater DWPD number at the expense of capacity. Or a user operating a database that does infrequent writes might choose a lower DWPD and a higher capacity.





all-flash array (AFA)
Posted by Thang Le Toan on 16 August 2018 05:09 AM

An all-flash array (AFA), also known as a solid-state storage disk system, is an external storage array that uses only flash media for persistent storage. Flash memory is used in place of the spinning hard disk drives (HDDs) that have long been associated with networked storage systems.

Vendors that sell all-flash arrays usually allow customers to mix flash and disk drives in the same chassis, a configuration known as a hybrid array. However, those products often represent the vendor's attempt to retrofit an existing disk array by replacing some of the media with flash.

All-flash array design: Retrofit or purpose-built

Other vendors sell purpose-built systems designed natively from the ground up to only support flash. These models also embed a broad range of software-defined storage features to manage data on the array.

A defining characteristic of an AFA is the inclusion of native software services that enable users to perform data management and data protection directly on the array hardware. This is different from server-side flash installed on a standard x86 server. Inserting flash storage into a server is much cheaper than buying an all-flash array, but it also requires the purchase and installation of third-party management software to supply the needed data services.

Leading all-flash vendors have written algorithms for array-based services for data management, including clones, compression and deduplication -- either an inline or post-process operation -- snapshots, replication, and thin provisioning.
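As a rough illustration of what inline deduplication means in practice, the Python sketch below fingerprints each incoming block before it is stored and keeps only one physical copy of identical content. It is a simplified model with an arbitrary block size and sample data, not any vendor's algorithm:

    import hashlib

    BLOCK_SIZE = 4096
    store = {}      # fingerprint -> block data (the physical copies actually kept)
    volume = []     # the logical volume: an ordered list of fingerprints

    def write_block(data: bytes):
        """Inline dedupe: hash the block as it arrives and store new content only once."""
        fingerprint = hashlib.sha256(data).hexdigest()
        if fingerprint not in store:
            store[fingerprint] = data
        volume.append(fingerprint)       # a duplicate only adds a reference

    # Write the same 4 KiB block three times and a different block once.
    write_block(b"A" * BLOCK_SIZE)
    write_block(b"A" * BLOCK_SIZE)
    write_block(b"A" * BLOCK_SIZE)
    write_block(b"B" * BLOCK_SIZE)

    print("logical blocks:", len(volume))   # 4
    print("physical blocks:", len(store))   # 2, i.e. a 2:1 data reduction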

As with its disk-based counterpart, an all-flash array provides shared storage in a storage area network (SAN) or network-attached storage (NAS) environment.

How an all-flash array differs from disk

Flash memory, which has no moving parts, is a type of nonvolatile memory that can be erased and reprogrammed in units of memory called blocks. It is a variation of electrically erasable programmable read-only memory (EEPROM); flash got its name because entire blocks of memory can be erased in a single action, or flash. A flash array can transfer data to and from solid-state drives (SSDs) much faster than electromechanical disk drives can.

The advantage of an all-flash array, relative to disk-based storage, is full bandwidth performance and lower latency when an application makes a query to read the data. The flash memory in an AFA typically comes in the form of SSDs, which are similar in design to an integrated circuit.

[Image: A Pure Storage FlashBlade enterprise storage array. Source: Pure Storage]

Flash is more expensive than spinning disk, but the development of multi-level cell (MLC) flash, triple-level cell (TLC) NAND flash and 3D NAND flash has lowered the cost. These technologies enable greater flash density without the cost involved in shrinking NAND cells.

MLC flash is slower and less durable than single-level cell (SLC) flash, but companies have developed software that improves its wear level to make MLC acceptable for enterprise applications. SLC flash remains the choice for applications with the highest I/O requirements, however. TLC flash reduces the price more than MLC, although it also comes with performance and durability tradeoffs that can be mitigated with software. Vendor products that support TLC SSDs include the Dell EMC SC Series and Kaminario K2 arrays.

Considerations for buying an all-flash array

Deciding to buy an AFA involves more than simple comparisons of vendor products. An all-flash array that delivers massive performance increases to a specific set of applications may not provide equivalent benefits to other workloads. For example, running virtualized applications in flash with inline data deduplication and compression tends to be more cost-effective than flash that supports streaming media in which unique files are uncompressible.

An all-SSD system produces smaller variations in maximum, minimum and average latencies than an HDD array does. This makes flash a good fit for most read-intensive applications.

The tradeoff comes in write amplification, which relates to how an SSD will rewrite data to erase an entire block. Write-intensive workloads require a special algorithm to collect all the writes on the same block of the SSD, thus ensuring the software always writes multiple changes to the same block.
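Write amplification is commonly expressed as a factor: the amount of data the flash actually programs divided by the amount of data the host asked to write. A minimal sketch with invented numbers, purely for illustration:

    def write_amplification_factor(host_bytes, flash_bytes):
        """WAF = flash writes (including GC copies) / host writes; 1.0 is the ideal."""
        return flash_bytes / host_bytes

    # Hypothetical example: the host wrote 100 GB, but garbage collection and
    # partial-block rewrites caused 130 GB of programming on the flash media.
    waf = write_amplification_factor(100e9, 130e9)
    print(f"write amplification factor: {waf:.2f}")   # 1.30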

Garbage collection can present a similar issue with SSDs. A flash cell can only withstand a limited number of writes, so wear leveling can be used to increase flash endurance. Most vendors design their all-flash systems to minimize the impact of garbage collection and wear leveling, although users with write-intensive workloads may wish to independently test a vendor's array to determine the best configuration.

Despite paying a higher upfront price for the system, users who buy an AFA may see the cost of storage decline over time. This is tied to an all-flash array's ability to drive higher CPU utilization, which means an organization may need to buy fewer application servers.

The physical size of an AFA is smaller than that of a disk array, which lowers the rack count. Having fewer racks in a system also reduces the heat generated and the cooling power consumed in the data center.

All-flash array vendors, products and markets

Flash was first introduced as a handful of SSDs in otherwise all-HDD systems with the purpose to create a small flash tier to accelerate a few critical applications. Thus was born the hybrid flash array.

The next phase of evolution arrived with the advent of software that enabled an SSD to serve as a front-end cache for disk storage, extending the benefit of faster performance across all the applications running on the array.

The now-defunct vendor Fusion-io was an early pioneer of fast flash. Launched in 2005, Fusion-io sold Peripheral Component Interconnect Express (PCIe) cards packed with flash chips. Inserting the PCIe flash cards in server slots enabled a data center to boost the performance of traditional server hardware. Fusion-io was acquired by SanDisk in 2014, which itself was subsequently acquired by Western Digital Corp.

Also breaking ground early was Violin, whose systems -- designed with custom-built silicon -- gained customers quickly, fueling its rise in public markets in 2013. By 2017, Violin was surpassed by all-flash competitors whose arrays integrated sophisticated software data services. After filing for bankruptcy, the vendor was relaunched by private investors as Violin Systems in 2018, with a focus on selling all-flash storage to managed service providers.

[Chart: Independent analyst Logan G. Harbaugh compares various all-flash arrays (August 2017).]

All-flash array vendors, such as Pure Storage and XtremIO -- part of Dell EMC -- were among the earliest to incorporate inline compression and data deduplication, which most other vendors now include as a standard feature. Adding deduplication helped give AFAs the opportunity for price parity with storage based on cheaper rotating media.

A sampling of leading all-flash array products includes the following:

  • Dell EMC VMAX
  • Dell EMC Unity
  • Dell EMC XtremIO
  • Dell EMC Isilon NAS
  • Fujitsu Eternus AF
  • Hewlett Packard Enterprise (HPE) 3PAR StoreServ
  • HPE Nimble Storage AF series
  • Hitachi Vantara Virtual Storage Platform
  • Huawei OceanStor
  • IBM FlashSystem V9000
  • IBM Storwize 5000 and Storwize V7000F
  • Kaminario K2
  • NetApp All-Flash Fabric-Attached Array (NetApp AFF)
  • NetApp SolidFire family -- including NetApp HCI
  • Pure Storage FlashArray
  • Pure FlashBlade NAS/object storage array
  • Tegile Systems T4600 -- bought in 2017 by Western Digital
  • Tintri EC Series

Impact on hybrid array use cases

Falling flash prices, data growth and integrated data services have increased the appeal of all-flash arrays for many enterprises. This has led to industry speculation that all-flash storage can supplant hybrid arrays, although there remain good reasons to consider using a hybrid storage infrastructure.

HDDs offer predictable performance at a fairly low cost per gigabyte, although they use more power and are slower than flash, resulting in a high cost per IOPS. All-flash arrays, by contrast, offer a lower cost per IOPS, coupled with the advantages of speed and lower power consumption, but they carry a higher upfront acquisition price and per-gigabyte cost.
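The same trade-off can be shown with a back-of-the-envelope calculation. The prices, capacities and IOPS figures below are invented round numbers used only to illustrate the cost-per-gigabyte versus cost-per-IOPS comparison:

    def cost_metrics(name, price_usd, capacity_gb, iops):
        print(f"{name}: ${price_usd / capacity_gb:.3f} per GB, ${price_usd / iops:.4f} per IOPS")

    # Hypothetical drives: a nearline HDD versus an enterprise flash SSD.
    cost_metrics("HDD", price_usd=300, capacity_gb=2400, iops=200)
    cost_metrics("SSD", price_usd=1200, capacity_gb=3840, iops=100000)
    # The HDD wins on cost per gigabyte; the SSD wins, by orders of magnitude, on cost per IOPS.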

[Chart: All-flash array vs. hybrid array]

A hybrid flash array enables enterprises to strike a balance between relatively low cost and balanced performance. Since a hybrid array supports high-capacity disk drives, it offers greater total storage than an AFA.

All-flash NVMe and NVMe over Fabrics

All-flash arrays based on nonvolatile memory express (NVMe) flash technologies represent the next phase of maturation. The NVMe host controller interface speeds data transfer by enabling an application to communicate directly with back-end storage.

NVMe is meant to be a faster alternative to the Small Computer System Interface (SCSI) standard that transfers data between a host and a target device. Development of the NVMe standard is under the auspices of NVM Express Inc., a nonprofit organization comprising more than 100 member technology companies.

The NVMe standard is widely considered to be the eventual successor to the SAS and SATA protocols. NVMe form factors include add-in cards, U.2 2.5-inch and M.2 SSD devices.

Some of the NVMe-based products available include:

  • DataDirect Networks Flashscale
  • Datrium DVX hybrid system
  • HPE Persistent Memory
  • Kaminario K2.N
  • Micron Accelerated Solutions NVMe reference architecture
  • Micron SolidScale NVMe over Fabrics appliances
  • Pure Storage FlashArray//X
  • Tegile IntelliFlash

A handful of NVMe-flash startups are bringing products to market, as well, including:

  • Apeiron Data Systems combines NVMe drives with data services housed in field-programmable gate arrays instead of servers attached to storage arrays.
  • E8 Storage E8-D24 NVMe flash arrays replicate snapshots to attached compute servers to reduce management overhead on the array.
  • Excelero software-defined storage runs on any x86 server.
  • Mangstor MX6300 NVMe over Fabrics (NVMe-oF) storage is branded PCIe NVMe add-in cards on Dell PowerEdge servers.
  • Pavilion Data Systems-branded Pavilion Memory Array.
  • Vexata VX-100 is based on the software-defined Vexata Active Data Fabric.

Industry experts expect 2018 to usher in more end-to-end, rack-scale flash storage systems based on NVMe-oF. These systems integrate custom NVMe flash modules as a fabric in place of a bunch of NVMe SSDs.

The NVMe-oF transport mechanism enables a long-distance connection between host devices and NVMe storage devices. IBM, Kaminario and Pure Storage have publicly disclosed products to support NVMe-oF, although most storage vendors have pledged support.

All-flash storage arrays in hyper-converged infrastructure

Hyper-converged infrastructure (HCI) systems combine computing, networking, storage and virtualization resources as an integrated appliance. Most hyper-convergence products are designed to use disk as front-end storage, relying on a moderate flash cache layer to accelerate applications or to use as cold storage. For reasons related to performance, most HCI arrays were not traditionally built primarily for flash storage, although that started to change in 2017.

Now the leading HCI vendors sell all-flash versions. Among these vendors are Cisco, Dell EMC, HPE, Nutanix, Pivot3 and Scale Computing. NetApp launched an HCI product in October 2017 built around its SolidFire all-flash storage platform.





zero-day (computer)
Posted by Thang Le Toan on 03 August 2018 01:03 AM

Zero-day is a flaw in software, hardware or firmware that is unknown to the party or parties responsible for patching or otherwise fixing the flaw. The term zero day may refer to the vulnerability itself, or an attack that has zero days between the time the vulnerability is discovered and the first attack. Once a zero-day vulnerability has been made public, it is known as an n-day or one-day vulnerability.

Ordinarily, when someone detects that a software program contains a potential security issue, that person or company will notify the software company (and sometimes the world at large) so that action can be taken. Given time, the software company can fix the code and distribute a patch or software update. Even if potential attackers hear about the vulnerability, it may take them some time to exploit it; meanwhile, the fix will hopefully become available first. Sometimes, however, a hacker may be the first to discover the vulnerability. Since the vulnerability isn't known in advance, there is no way to guard against the exploit before it happens. Companies exposed to such exploits can, however, institute procedures for early detection.

Security researchers cooperate with vendors and usually agree to withhold all details of zero-day vulnerabilities for a reasonable period before publishing those details. Google Project Zero, for example, follows industry guidelines that give vendors up to 90 days to patch a vulnerability before the finder of the vulnerability publicly discloses the flaw. For vulnerabilities deemed "critical," Project Zero allows only seven days for the vendor to patch before publishing the vulnerability; if the vulnerability is being actively exploited, Project Zero may reduce the response time to less than seven days.

Zero-day exploit detection

Zero-day exploits tend to be very difficult to detect. Antimalware software and some intrusion detection systems (IDSes) and intrusion prevention systems (IPSes) are often ineffective because no attack signature yet exists. As a result, user behavior analytics is often the best way to detect a zero-day attack. Most of the entities authorized to access networks exhibit certain usage and behavior patterns that are considered normal, and activity that falls outside the normal scope of operations can be an indicator of a zero-day attack.

For example, a web application server normally responds to requests in specific ways. If outbound packets are detected exiting the port assigned to that web application, and those packets do not match anything that would ordinarily be generated by the application, it is a good indication that an attack is going on.
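A toy version of that behavioral check can be written in a few lines of Python: learn which destination ports and transfer sizes a server normally uses, then flag outbound flows that fall outside the baseline. The flows, ports and threshold below are invented for illustration; real behavior-analytics products model far more dimensions than this:

    from collections import Counter

    # Outbound flows observed during a known-good baseline period: (destination_port, bytes_out)
    baseline_flows = [(443, 12000), (443, 15000), (80, 9000), (53, 300), (443, 11000)]

    baseline_ports = Counter(port for port, _ in baseline_flows)
    typical_bytes = max(size for _, size in baseline_flows)

    def is_suspicious(port, bytes_out):
        """Flag traffic to a never-before-seen port or an unusually large outbound transfer."""
        return port not in baseline_ports or bytes_out > 10 * typical_bytes

    print(is_suspicious(443, 14000))    # False: looks like normal web traffic
    print(is_suspicious(6667, 2000))    # True: a destination port this host never uses
    print(is_suspicious(443, 500000))   # True: abnormally large outbound transfer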

Zero-day exploit period

Some zero-day attacks have been attributed to advanced persistent threat (APT) actors, hacking or cybercrime groups affiliated with or a part of national governments. Attackers, especially APTs or organized cybercrime groups, are believed to reserve their zero-day exploits for high-value targets.

N-day vulnerabilities continue to live on and are subject to exploits long after the vulnerabilities have been patched or otherwise fixed by vendors. For example, the credit bureau Equifax was breached in 2017 by attackers using an exploit against the Apache Struts web framework. The attackers exploited a vulnerability in Apache Struts that was reported, and patched, earlier in the year; Equifax failed to patch the vulnerability and was breached by attackers exploiting the unpatched vulnerability.

Likewise, researchers continue to find zero-day vulnerabilities in the Server Message Block protocol, implemented in the Windows OS for many years. Once the zero-day vulnerability is made public, users should patch their systems, but attackers continue to exploit the vulnerabilities for as long as unpatched systems remain exposed on the internet.

Defending against zero-day attacks

Zero-day exploits are difficult to defend against because they are so difficult to detect. Vulnerability scanning software relies on malware signature checkers to compare suspicious code with signatures of known malware; when the malware uses a zero-day exploit that has not been previously encountered, such vulnerability scanners will fail to block the malware.

Since a zero-day vulnerability can't be known in advance, there is no way to guard against a specific exploit before it happens. However, there are some things that companies can do to reduce their level of risk exposure.

  • Use virtual local area networks to segregate some areas of the network or use dedicated physical or virtual network segments to isolate sensitive traffic flowing between servers.
  • Implement IPsec, the IP security protocol, to apply encryption and authentication to network traffic.
  • Deploy an IDS or IPS. Although signature-based IDS and IPS security products may not be able to identify the attack, they may be able to alert defenders to suspicious activity that occurs as a side effect to the attack.
  • Use network access control to prevent rogue machines from gaining access to crucial parts of the enterprise environment.
  • Lock down wireless access points and use a security scheme such as Wi-Fi Protected Access 2 for maximum protection against wireless-based attacks.
  • Keep all systems patched and up to date. Although patches will not stop a zero-day attack, keeping network resources fully patched may make it more difficult for an attack to succeed. When a zero-day patch does become available, apply it as soon as possible.
  • Perform regular vulnerability scanning against enterprise networks and lock down any vulnerabilities that are discovered.

While maintaining a high standard for information security may not prevent all zero-day exploits, it can help defeat attacks that use zero-day exploits after the vulnerabilities have been patched.

Examples of zero-day attacks

Multiple zero-day attacks commonly occur each year. In 2016, for example, a zero-day attack (CVE-2016-4117) exploited a previously undiscovered flaw in Adobe Flash Player. Also in 2016, more than 100 organizations succumbed to a zero-day bug (CVE-2016-0167) that was exploited in an elevation-of-privilege attack targeting Microsoft Windows.

 

In 2017, a zero-day vulnerability (CVE-2017-0199) was discovered in which a Microsoft Office document in Rich Text Format was shown to trigger the execution of a Visual Basic script containing PowerShell commands when opened. Another 2017 exploit (CVE-2017-0261) used Encapsulated PostScript as a platform for initiating malware infections.

The Stuxnet worm was a devastating zero-day exploit that targeted supervisory control and data acquisition (SCADA) systems by first attacking computers running the Windows operating system. Stuxnet exploited four different Windows zero-day vulnerabilities and spread through infected USB drives, making it possible to infect both Windows and SCADA systems without attacking them through a network. The Stuxnet worm has been widely reported to be the result of a joint effort by U.S. and Israeli intelligence agencies to disrupt Iran's nuclear program.



 

FBI admits to using zero-day exploits, not disclosing them

The FBI has admitted to using zero-day exploits rather than disclosing them, and experts say this should not be a surprise considering the history of federal agency actions.

In a surprise bout of openness, Amy Hess, executive assistant director for science and technology with the FBI, admitted that the FBI uses zero-day exploits, but said the agency does struggle with the decision.

 

In an interview with The Washington Post, Hess called it a "constant challenge" to decide whether it is better to use a zero-day exploit "to be able to identify a person who is threatening public safety" or to disclose the vulnerability in order to allow developers to secure products being used by the public. Hess also noted the FBI prefers not to rely on zero-day exploits because the fact that they can be patched at any moment makes them unreliable.

Jeff Schilling, CSO for Armor, said the surprise might come from the fact that many people don't know that the FBI has a foreign intelligence collection mission.

"Any agency that has a foreign intelligence collection mission in cyberspace has to make decisions every day on the value gained in leveraging a zero day to collect intelligence data, especially with the impact of not letting people who are at risk know of the potential vulnerability which could be compromised," Schilling said, adding that the need for the government to find a balance between security and intelligence is not a new phenomenon. "This country experienced the same intelligence gained versus operational impact during World War II (WWII) when the intelligence community did not disclose that we had broken both the Japanese and German codes. Lots of sailors, soldiers and airmen lost their lives to keep those secrets. I think the FBI and the rest of the intelligence community have the same dilemmas as the intelligence community in WWII, however, at this point, data, not lives are at risk."

Robert Hansen, vice president for WhiteHat Security Labs, said it boils down to whether the public trusts the government to not abuse its power in this area, and whether the government should assume that only it knows about these exploits.

"In general, I think that although the net truth is that most people in government have good intentions, they can't all be relied upon to hold such belief systems," Hansen said. "And, given that in most cases exploits are found much later, it stands to reason that it's more dangerous to keep vulnerabilities in place. That's not to diminish their value, however, it's very dangerous to presume that an agency is the only one [that] can and will find and leverage that vulnerability."

Adam Kujawa, head of malware intelligence at Malwarebytes Labs, said the draw of zero-day exploits may be too strong for government agencies to resist.

"The 'benefit' of this method [is] simply having access to a weapon that theoretically can't be protected against," Kujawa said. "This is like being able to shoot someone with a nuke when they are only wearing a bullet proof vest -- completely unstoppable, theoretically. Law enforcement, when they have a target in mind, be it a cybercriminal, terrorist, et cetera, are able to breach the security of the suspect and gather intelligence or collect information on them to identify any criminal activity that might happen or will happen."

Daren Glenister, field CTO at Intralinks Inc., noted that while leaving vulnerabilities unpatched leads to risk, there is also some benefit to not publishing vulnerabilities too soon.

"Patching a threat may take a vendor days or weeks. Every hour lost in providing a patch introduces additional risk to data and access to systems," Glenister said. "[However], by not publishing zero-day threats, it minimizes the widespread underground threat from hackers that occurs every time a new threat is disclosed."

The NSA recently detailed its vulnerability disclosure policy, but while doing so never mentioned whether or not the agency used zero-day exploits. Multiple experts said this admission by the FBI makes it safe to assume the NSA is also leveraging zero days in its efforts.

Adam Meyer, chief security strategist at SurfWatch Labs Inc., said it is not only reasonable to expect the NSA is actively exploiting zero days, but many others are as well.

"I believe it is safe to assume that any U.S. agency with a Defense or Homeland Security mission area are using exploits to achieve a presence against their targets," Meyer said. "Unfortunately, I also think it is safe to assume that every developed country in the world is doing the exact same thing. The reality is a zero day can be used against us just as much as for us."

Schilling said using zero days may not be the only option, but noted that human intelligence gathering carries much greater risks.

"At the end of the day, if we are leveraging zero days to stay ahead of our national threats, I am ok with us accepting the risk of data loss and compromises," Schilling said. "History has shown that we have accepted higher costs to protect our intelligence collection, and I think we are still OK today in the risk we are accepting as it is to save lives."

Kujawa said that while there are viable alternatives to using zero days to gather intelligence, it is hard to ignore the ease and relative safety of this method.

"There are plenty of viable methods of extracting information from a suspect; however the zero-day method is incredibly effective, very quiet and very fast. Law enforcement could attack systems using known exploits, social engineering tactics or gaining physical access to the system and installing malware manually, however none of these methods are guaranteed and they all can be protected against if the suspect is practicing common security procedures. The zero-day method will fall into the same bucket as the other attacks soon enough, however, so we will have to wait and see what the future holds for law enforcement in trying to gather evidence and intelligence on criminal suspects."





What is a spear-phishing email?
Posted by Thang Le Toan on 02 August 2018 01:38 AM

Spear phishing is an email-spoofing attack that targets a specific organization or individual, seeking unauthorized access to sensitive information. Spear-phishing attempts are not typically initiated by random hackers, but are more likely to be conducted by perpetrators out for financial gain, trade secrets or military information.

As with emails used in regular phishing expeditions, spear phishing messages appear to come from a trusted source. Phishing messages usually appear to come from a large and well-known company or website with a broad membership base, such as Google or PayPal. In the case of spear phishing, however, the apparent source of the email is likely to be an individual within the recipient's own company -- generally, someone in a position of authority -- or from someone the target knows personally.

Visiting United States Military Academy professor and National Security Agency official Aaron Ferguson called it the "colonel effect."  To illustrate his point, Ferguson sent out a message to 500 cadets, asking them to click a link to verify grades. Ferguson's message appeared to come from a Col. Robert Melville of West Point. Over 80% of recipients clicked the link in the message. In response, they received a notification that they'd been duped and a warning that their behavior could have resulted in downloads of spyware, Trojan horses and/or other malware.

Many enterprise employees have learned to be suspicious of unexpected requests for confidential information and will not divulge personal data in response to emails or click on links in messages unless they are positive about the source. The success of spear phishing depends on three things: the apparent source must be a known and trusted individual; the message must contain information that supports its validity; and the request must seem to have a logical basis.

[Image: Example of a spear-phishing email]

Spear phishing vs. phishing vs. whaling

This familiarity is what sets spear phishing apart from regular phishing attacks. Phishing emails appear to come from a trusted contact or organization and include a malicious link or attachment that installs malware on the target's device, or they direct the target to a malicious website set up to trick the victim into divulging sensitive information such as passwords, account details or credit card numbers.

Spear phishing has the same goal as normal phishing, but the attacker first gathers information about the intended target. This information is used to personalize the spear-phishing attack. Instead of sending the phishing emails to a large group of people, the attacker targets a select group or an individual. By limiting the targets, it's easier to include personal information -- like the target's first name or job title -- and make the malicious emails seem more trustworthy.

The same personalized technique is used in whaling attacks, as well. A whaling attack is a spear-phishing attack directed specifically at high-profile targets like C-level executives, politicians and celebrities. Whaling attacks are also customized to the target and use the same social-engineering, email-spoofing and content-spoofing methods to access sensitive data.

Examples of successful attacks

In one version of a successful spear-phishing attack, the perpetrator finds a webpage for their target organization that supplies contact information for the company. Using available details to make the message seem authentic, the perpetrator drafts an email to an employee on the contact page that appears to come from an individual who might reasonably request confidential information, such as a network administrator. The email asks the employee to log into a bogus page that requests the employee's username and password, or click on a link that will download spyware or other malicious programming. If a single employee falls for the spear phisher's ploy, the attacker can masquerade as that individual and use social-engineering techniques to gain further access to sensitive data.

In 2015, independent security researcher and journalist Brian Krebs reported that Ubiquiti Networks Inc. lost $46.7 million to hackers who started the attack with a spear-phishing campaign. The hackers were able to impersonate communications from executive management at the networking firm and performed unauthorized international wire transfers.

Spear phishing defense

Spear-phishing attacks -- and whaling attacks -- are often harder to detect than regular phishing attacks because they are so focused.

In an enterprise, security-awareness training for employees and executives alike will help reduce the likelihood of a user falling for spear-phishing emails. This training typically educates enterprise users on how to spot phishing emails based on suspicious email domains or links enclosed in the message, as well as the wording of the messages and the information that may be requested in the email.
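One of the simplest checks that such training covers, whether the sender's domain is one the organization actually uses, is easy to express in code. A minimal Python sketch follows; the trusted-domain set and sample addresses are hypothetical, and a real mail filter would also inspect Reply-To headers, links and attachments:

    TRUSTED_DOMAINS = {"example.com", "mail.example.com"}   # domains the company really uses

    def sender_domain(address: str) -> str:
        return address.rsplit("@", 1)[-1].lower()

    def looks_suspicious(from_address: str) -> bool:
        """Flag mail whose sender domain is not one the organization recognizes."""
        return sender_domain(from_address) not in TRUSTED_DOMAINS

    print(looks_suspicious("it-helpdesk@example.com"))    # False
    print(looks_suspicious("it-helpdesk@examp1e.com"))    # True: a lookalike domain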

 

How to prevent a spear phishing attack from infiltrating an enterprise

While spear phishing emails are becoming harder to detect, there are still ways to prevent them. Threats expert Nick Lewis gives advice.

Spear phishing and social engineering are becoming more popular as attackers target humans as a particularly dependable point of ingress (HBGary, RSA, etc.). Considering that a well-crafted spear phishing email is almost indistinguishable from a legitimate email, what is the best way to prevent users from clicking on spear phishing links?

Phishing, social engineering and spear phishing have been growing in popularity over the last 10 or more years. The introduction of spear phishing and other newer forms of phishing is an evolution of social engineering or fraud. Attackers have found ways to exploit weaknesses in technologies like VoIP, IM and SMS messages, among others, to commit fraud, and they will continue to adapt as new technologies develop. Humans will always be an integral part of information security for an organization, but they can always be targeted, regardless of the technologies in use. Humans are sometimes the weakest link.

To help minimize the chance of a spear phishing attack successfully infiltrating the enterprise, you can follow the advice from US-CERT on phishing or the guidance from the Anti-Phishing Working Group. Both have technical steps you can put in place, but both also include a security awareness and education component. Potentially the most effective method to combat phishing and its variants is to make sure users know to question suspicious communications and to verify the communication (email, IM, SMS, etc.) out-of-band with the requesting party. For example, if an employee gets an email from a colleague that doesn’t sound like it came from the sender or seems in some way suspicious, he or she should contact the sender using a different means of communication -- such as the phone -- to confirm the email. If the email can’t be verified, then it should be reported to your information security group, the Anti-Phishing Working Group or the FTC at spam@uce.gov.

Enterprises with high security needs could choose not to connect their systems to the Internet, not allow Internet email inbound except for approved domains, or only allow inbound email from approved email addresses. This will not stop all phishing attacks and will significantly decrease usability, but may be necessary for high-security environments.





sensor analytics
Posted by Thang Le Toan on 20 February 2018 01:30 PM

Sensor analytics is the statistical analysis of data that is created by wired or wireless sensors.

A primary goal of sensor analytics is to detect anomalies. The insight that is gained by examining deviations from an established point of reference can have many uses, including predicting and proactively preventing equipment failure in a manufacturing plant, alerting a nurse in an electronic intensive care unit (eICU) when a patient’s blood pressure drops, or allowing a data center administrator to make data-driven decisions about heating, ventilating and air conditioning (HVAC).

Because sensors are often always on, it can be challenging to collect, store and interpret the tremendous amount of data they create. A sensor analytics system can help by integrating event-monitoring, storage and analytics software in a cohesive package that will provide a holistic view of sensor data. Such a system has three parts: the sensors that monitor events in real-time, a scalable data store and an analytics engine. Instead of analyzing all data as it is being created, many engines perform time-series or event-driven analytics, using algorithms to sample data and sophisticated data modeling techniques to predict outcomes. These approaches may change, however, as advancements in big data analytics, object storage and event stream processing technologies will make real-time analysis easier and less expensive to carry out.
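A minimal example of the kind of time-series anomaly detection described above, in Python: keep a rolling baseline of recent readings and flag any new reading that deviates sharply from it. The window size, threshold and sample values are invented for illustration and are far simpler than what a production analytics engine would use:

    from collections import deque
    from statistics import mean, stdev

    WINDOW = 20        # number of recent readings that form the baseline
    THRESHOLD = 3.0    # flag readings more than 3 standard deviations from the mean
    history = deque(maxlen=WINDOW)

    def process_reading(value):
        """Return True if the reading deviates sharply from the recent baseline."""
        anomalous = False
        if len(history) == WINDOW:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > THRESHOLD * sigma:
                anomalous = True
        history.append(value)
        return anomalous

    # Steady blood-pressure-style readings, then a sudden drop.
    readings = [120, 118, 121, 119, 120] * 5 + [80]
    for r in readings:
        if process_reading(r):
            print("anomaly detected:", r)    # fires on the drop to 80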

Most sensor analytics systems analyze data at the source as well as in the cloud. Intermediate data analysis may also be carried out at a sensor hub that accepts inputs from multiple sensors, including accelerometers, gyroscopes, magnetometers and pressure sensors. The purpose of intermediate data analysis is to filter data locally and reduce the amount of data that needs to be transported to the cloud. This is often done for efficiency reasons, but it may also be carried out for security and compliance reasons.

The power of sensor analytics comes from not only quantifying data at a particular point in time, but by putting the data in context over time and examining how it correlates with other, related data. It is expected that as the Internet of Things (IoT) becomes a mainstream concern for many industries and wireless sensor networks become ubiquitous, the need for data scientists and other professionals who can work with the data that sensors create will grow -- as will the demand for data artists and software that helps analysts present data in a way that’s useful and easily understood.

 






Recover your deleted files quickly and easily
Posted by Thang Le Toan on 20 February 2018 10:09 AM

Recuva®

Recover your deleted files quickly and easily.

Accidentally deleted an important file? Lost files after a computer crash? No problem - Recuva recovers files from your Windows computer, recycle bin, digital camera card, or MP3 player!

Download the Free Version or get Recuva Pro!
  • Superior file recovery

    Recuva can recover pictures, music, documents, videos, emails or any other file type you’ve lost. And it can recover from any rewriteable media you have: memory cards, external hard drives, USB sticks and more!

  • Recovery from damaged disks

    Unlike most file recovery tools, Recuva can recover files from damaged or newly formatted drives. Greater flexibility means greater chance of recovery.

  • Deep scan for buried files

    For those hard to find files, Recuva has an advanced deep scan mode that scours your drives to find any traces of files you have deleted.

  • Securely delete files

    Sometimes you want a file gone for good. Recuva’s secure overwrite feature uses industry- and military-standard deletion techniques to make sure your files stay erased.





Intel Xeon D-2100 with VMware ESXi and Ubuntu 16.04 on Supermicro X11SDV
Posted by Thang Le Toan on 18 February 2018 05:14 PM
[Image: VMware ESXi 6.5u1 and Intel Xeon D-2100 with Supermicro X11SDV]

If you have been following our Intel Xeon D-2100 Series Launch Coverage Central, you will have seen that we already published the expected Xeon D-2100 series OS compatibility list. VMware is usually one of the laggards in terms of hardware compatibility for bleeding-edge platforms, whereas Microsoft Windows and Linux tend to lead in compatibility with new server parts. As a result, we wanted to take the opportunity to test the Intel Xeon D-2100 platform, based on a Supermicro X11SDV-4C-TLN2F, with VMware ESXi 6.5u1.

Intel Xeon D-2100 with VMware ESXi 6.5u1

We made a short, sub-two-minute video of our results. In the video, you will see that this indeed worked without an issue. We made the recording using an older ISO, from August 2017, to show that support does not depend on a more recent update. VMware ESXi 6.5u1 was the first release to fully support Intel Xeon Scalable and AMD EPYC, which is why it is becoming popular in data centers and labs.

At the start of the video, you can see the machine running Ubuntu 16.04.3 LTS prior to the installation. While recording, we realized that we had inadvertently covered Ubuntu compatibility as well. Ubuntu installed without a single additional driver needing to be added, and we have both the GUI and server versions running on Supermicro X11SDV-generation platforms.

We wondered why we did not need to install an additional set of drivers, and the reason quickly became clear: Intel did something very smart with this generation. At the heart of the platform's I/O is Lewisburg. If that sounds familiar, it is because Lewisburg is the PCH generation introduced with the Intel Xeon Scalable series of mainstream server processors.

[Image: Intel Xeon D-2100 series with Lewisburg PCH]

For those deploying these embedded platforms, it means that we expect broad out-of-box OS support for the Intel Xeon D-2100 series platforms out of the gate. That is not something we always see with new embedded platforms, such as the Intel Atom C3000 series.

Now the question remains whether the Intel Xeon D-2100 series is the right platform if you are working with VMware ESXi virtualization solutions. We think there is a place for it in highly space-constrained situations or those that require low power, low memory capacity and high compute performance. You can check out our Platform Power Consumption and First Benchmarks of the Intel Xeon D-2100 Series, and we will have lots more on STH in the near future.





