News
Aug 16
cache memory
Posted by Thang Le Toan on 16 August 2018 05:11 AM

Cache memory, also called CPU memory, is high-speed static random access memory (SRAM) that a computer microprocessor can access more quickly than it can access regular random access memory (RAM). This memory is typically integrated directly into the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU. The purpose of cache memory is to store the program instructions and data that are used repeatedly in the operation of programs, or information that the CPU is likely to need next. The processor can access this information quickly from the cache rather than having to fetch it from the computer's main memory. Fast access to these instructions increases the overall speed of a program.

As the microprocessor processes data, it looks first in the cache. If it finds the instructions or data it needs there from a previous read, it does not have to perform a more time-consuming read from the larger main memory or from other data storage devices. In this way, cache memory speeds up the computer's operations and processing.

Once they have been open and running for a while, most programs use few of a computer's resources, because their frequently re-referenced instructions tend to be cached. This is why system performance measurements for computers with slower processors but larger caches can be better than those for computers with faster processors but less cache space.


This CompTIA A+ video tutorial explains cache memory.

Multi-tier or multilevel caching has become popular in server and desktop architectures, with the different levels providing greater efficiency through managed tiering. Simply put, the less frequently certain data or instructions are accessed, the lower the cache level to which they are written.
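
To make the tiering concrete, here is a toy model of a multilevel lookup, with each level as a progressively larger Python dictionary. The level names, sizes and eviction rule are simplifications chosen for illustration; real hardware operates on cache lines rather than individual addresses and uses smarter replacement policies.

    # Toy model of multilevel cache lookup; sizes are illustrative only.
    MAIN_MEMORY = {addr: "data@%d" % addr for addr in range(1024)}

    class CacheLevel:
        def __init__(self, name, capacity):
            self.name = name
            self.capacity = capacity
            self.store = {}  # address -> data

        def fill(self, addr, data):
            if len(self.store) >= self.capacity:
                # Evict an arbitrary entry; real caches use LRU-like policies.
                self.store.pop(next(iter(self.store)))
            self.store[addr] = data

    # Smaller and faster near the CPU, larger and slower further away.
    levels = [CacheLevel("L1", 4), CacheLevel("L2", 16), CacheLevel("L3", 64)]

    def read(addr):
        for level in levels:
            if addr in level.store:
                print("hit in", level.name)
                return level.store[addr]
        data = MAIN_MEMORY[addr]      # miss everywhere: go to RAM
        for level in levels:          # fill each level on the way back
            level.fill(addr, data)
        print("miss: fetched from main memory")
        return data

    read(42)   # miss: fetched from main memory
    read(42)   # hit in L1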

Implementation and history

Mainframes used an early version of cache memory, but the technology as it is known today began to be developed with the advent of microcomputers. With early PCs, processor performance increased much faster than memory performance, and memory became a bottleneck, slowing systems.

In the 1980s, the idea took hold that a small amount of more expensive, faster SRAM could be used to improve the performance of the less expensive, slower main memory. Initially, the memory cache was separate from the system processor and not always included in the chipset. Early PCs typically had from 16 KB to 128 KB of cache memory.

With 486 processors, Intel added 8 KB of memory to the CPU as Level 1 (L1) memory. As much as 256 KB of external Level 2 (L2) cache memory was used in these systems. Pentium processors saw the external cache memory double again to 512 KB on the high end. They also split the internal cache memory into two caches: one for instructions and the other for data.

Processors based on Intel's P6 microarchitecture, introduced in 1995, were the first to incorporate L2 cache memory into the CPU and enable all of a system's cache memory to run at the same clock speed as the processor. Prior to the P6, L2 memory external to the CPU was accessed at a much slower clock speed than the rate at which the processor ran, and slowed system performance considerably.

Early memory cache controllers used a write-through cache architecture, where data written into cache was also immediately updated in RAM. This approach minimized data loss, but also slowed operations. With later 486-based PCs, the write-back cache architecture was developed, where RAM isn't updated immediately. Instead, data is stored in cache, and RAM is updated only at specific intervals or under certain circumstances, such as when the cached data is missing or stale.
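
The difference between the two policies can be sketched in a few lines of Python. This is a schematic illustration only, with invented class names; a real controller tracks dirtiness per cache line, not per address.

    # Toy write policies; RAM is a dict that both caches sit in front of.
    RAM = {}

    class WriteThroughCache:
        def __init__(self):
            self.store = {}

        def write(self, addr, value):
            self.store[addr] = value
            RAM[addr] = value        # RAM updated immediately: safe but slower

    class WriteBackCache:
        def __init__(self):
            self.store = {}
            self.dirty = set()       # addresses where RAM is stale

        def write(self, addr, value):
            self.store[addr] = value
            self.dirty.add(addr)     # defer the RAM update

        def flush(self):             # runs at intervals or on eviction
            for addr in self.dirty:
                RAM[addr] = self.store[addr]
            self.dirty.clear()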

Cache memory mapping

Caching configurations continue to evolve, but cache memory traditionally works under three different configurations:

  • Direct mapped cache has each block mapped to exactly one cache memory location. Conceptually, a direct mapped cache is like rows in a table with three columns: the data block or cache line that contains the actual data fetched and stored, a tag with all or part of the address of the data that was fetched, and a flag bit that indicates whether the row entry contains valid data. (A concrete example of the address breakdown follows this list.)
  • Fully associative cache mapping is similar to direct mapping in structure but allows a block to be mapped to any cache location rather than to a prespecified cache memory location as is the case with direct mapping.
  • Set associative cache mapping can be viewed as a compromise between direct mapping and fully associative mapping in which each block is mapped to a subset of cache locations. It is sometimes called N-way set associative mapping, which provides for a location in main memory to be cached to any of "N" locations in the L1 cache.
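
To make the direct-mapped case concrete, the sketch below shows how a memory address is conventionally split into tag, index and block-offset fields. The cache geometry (64 lines of 64 bytes each) is chosen purely for illustration.

    # Address breakdown for a direct-mapped cache with 64 lines of 64 bytes.
    BLOCK_BYTES = 64
    NUM_LINES = 64
    OFFSET_BITS = BLOCK_BYTES.bit_length() - 1   # 6 bits for the byte offset
    INDEX_BITS = NUM_LINES.bit_length() - 1      # 6 bits select the line

    def split_address(addr):
        offset = addr & (BLOCK_BYTES - 1)
        index = (addr >> OFFSET_BITS) & (NUM_LINES - 1)
        tag = addr >> (OFFSET_BITS + INDEX_BITS)
        return tag, index, offset

    # The line at `index` holds this address's data only if the line's
    # stored tag equals `tag` and its valid flag bit is set.
    print(split_address(0x1234ABCD))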

Format of the cache hierarchy

Cache memory is fast and expensive. Traditionally, it is categorized in "levels" that describe its closeness and accessibility to the microprocessor.

cache memory diagram
 

L1 cache, or primary cache, is extremely fast but relatively small, and is usually embedded in the processor chip as CPU cache.

L2 cache, or secondary cache, is often more capacious than L1. L2 cache may be embedded on the CPU, or it can be on a separate chip or coprocessor and have a high-speed alternative system bus connecting the cache and CPU. That way it doesn't get slowed by traffic on the main system bus.

Level 3 (L3) cache is specialized memory developed to improve the performance of L1 and L2. L1 or L2 can be significantly faster than L3, though L3 is usually double the speed of RAM. With multicore processors, each core can have dedicated L1 and L2 cache, but they can share an L3 cache. If an L3 cache references an instruction, it is usually elevated to a higher level of cache.

In the past, L1, L2 and L3 caches have been created using combined processor and motherboard components. Recently, the trend has been toward consolidating all three levels of memory caching on the CPU itself. That's why the primary means for increasing cache size has begun to shift from the acquisition of a specific motherboard with different chipsets and bus architectures to buying a CPU with the right amount of integrated L1, L2 and L3 cache.

Contrary to popular belief, implementing flash or more dynamic RAM (DRAM) on a system won't increase cache memory. This can be confusing since the terms memory caching (hard disk buffering) and cache memory are often used interchangeably. Memory caching, using DRAM or flash to buffer disk reads, is meant to improve storage I/O by caching data that is frequently referenced in a buffer ahead of slower magnetic disk or tape. Cache memory, on the other hand, provides read buffering for the CPU.

Specialization and functionality

In addition to instruction and data caches, other caches are designed to provide specialized system functions. According to some definitions, the L3 cache's shared design makes it a specialized cache. Other definitions keep instruction caching and data caching separate, and refer to each as a specialized cache.

Translation lookaside buffers (TLBs) are also specialized memory caches; their function is to record virtual-address-to-physical-address translations.

Still other caches are not, technically speaking, memory caches at all. Disk caches, for instance, can use RAM or flash memory to provide data caching similar to what memory caches do with CPU instructions. If data is frequently accessed from disk, it is cached into DRAM or flash-based silicon storage technology for faster access time and response.
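
At heart, such a disk cache is a lookup table with an eviction policy. Below is a minimal least-recently-used (LRU) read cache sitting in front of a simulated slow block read; the names and the capacity are invented for illustration.

    from collections import OrderedDict

    class DiskReadCache:
        """LRU cache of disk blocks held in RAM (illustrative sketch)."""
        def __init__(self, read_from_disk, capacity=128):
            self.read_from_disk = read_from_disk   # the slow path
            self.capacity = capacity
            self.blocks = OrderedDict()            # block number -> data

        def read(self, block_no):
            if block_no in self.blocks:
                self.blocks.move_to_end(block_no)  # mark as recently used
                return self.blocks[block_no]       # fast: served from RAM
            data = self.read_from_disk(block_no)   # slow: hit the disk
            self.blocks[block_no] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)    # evict least recently used
            return data

    cache = DiskReadCache(lambda n: "contents of block %d" % n)
    cache.read(7)   # slow path, then cached
    cache.read(7)   # served from the cache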

Video: SSD caching vs. primary storage

Dennis Martin, founder and president of Demartek LLC, explains the pros and cons of using solid-state drives as cache and as primary storage.

Specialized caches are also available for applications such as web browsers, databases, network address binding and client-side Network File System protocol support. These types of caches might be distributed across multiple networked hosts to provide greater scalability or performance to an application that uses them.

Locality

The ability of cache memory to improve a computer's performance relies on the concept of locality of reference. Locality describes various situations that make a system more predictable, such as where the same storage location is repeatedly accessed, creating a pattern of memory access that the cache memory relies upon.

There are several types of locality. Two key ones for cache are temporal and spatial. Temporal locality is when the same resources are accessed repeatedly in a short amount of time. Spatial locality refers to accessing various data or resources that are in close proximity to each other.
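
The two patterns are easy to contrast in code. The sketch below sums a matrix row by row (touching neighboring elements, good spatial locality) and then column by column (striding across memory, poor locality). In Python the effect is muted because lists store references rather than the values themselves, but the access pattern shown is exactly the one that cache memory rewards in lower-level languages.

    import time

    N = 2000
    matrix = [[1] * N for _ in range(N)]

    # Row-major traversal: consecutive accesses are neighbors in memory.
    start = time.perf_counter()
    total = sum(value for row in matrix for value in row)
    print("row-major:   ", time.perf_counter() - start)

    # Column-major traversal: each access jumps a whole row ahead.
    start = time.perf_counter()
    total = sum(matrix[i][j] for j in range(N) for i in range(N))
    print("column-major:", time.perf_counter() - start)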

Cache vs. main memory

DRAM serves as a computer's main memory, holding the data and instructions retrieved from storage that the processor is actively working on. Both DRAM and cache memory are volatile memories that lose their contents when the power is turned off. DRAM is installed on the motherboard, and the CPU accesses it through a bus connection.

An example of dynamic RAM.
 

DRAM is markedly slower than L1, L2 or L3 cache memory, but much less expensive. It provides faster data access than flash storage, hard disk drives (HDDs) and tape storage. It came into use over the last few decades as a place to store frequently accessed disk data and thereby improve I/O performance.

DRAM must be refreshed every few milliseconds. Cache memory, which also is a type of random access memory, does not need to be refreshed. It is built directly into the CPU to give the processor the fastest possible access to memory locations, and provides nanosecond speed access time to frequently referenced instructions and data. SRAM is faster than DRAM, but because it's a more complex chip, it's also more expensive to make.

Comparison of memory types
 

Cache vs. virtual memory

A computer has a limited amount of RAM and even less cache memory. When a large program or multiple programs are running, it's possible for memory to be fully used. To compensate for a shortage of physical memory, the computer's operating system (OS) can create virtual memory.

To do this, the OS temporarily transfers inactive data from RAM to disk storage. This approach increases virtual address space by using active memory in RAM and inactive memory in HDDs to form contiguous addresses that hold both an application and its data. Virtual memory lets a computer run larger programs or multiple programs simultaneously, and each program operates as though it has unlimited memory.

Where virtual memory fits in the memory hierarchy.
 

To make this work, the OS divides virtual memory into pages, which are kept on disk in a pagefile or swap file. When a page is needed, the OS copies it from disk into main memory and translates its virtual addresses into real addresses.
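
The translation step itself can be sketched with a toy page table. The page size and table contents below are illustrative; a real OS also caches recent translations in the TLB and, on a miss against the table, triggers a page fault that loads the page from disk.

    PAGE_SIZE = 4096  # bytes; 4 KB pages are typical

    # Toy page table: virtual page number -> physical frame number.
    # A missing entry means the page currently lives on disk.
    page_table = {0: 7, 1: 3, 2: 9}

    def translate(virtual_addr):
        vpn = virtual_addr // PAGE_SIZE      # which virtual page
        offset = virtual_addr % PAGE_SIZE    # unchanged by translation
        if vpn not in page_table:
            raise RuntimeError("page fault: OS must load the page from disk")
        return page_table[vpn] * PAGE_SIZE + offset

    print(hex(translate(0x1A2B)))   # page 1 maps to frame 3 -> 0x3A2B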





Apr 26
Cloud app vs. web app: Understanding the differences
Posted by Thang Le Toan on 26 April 2018 11:28 AM

Are the terms cloud app and web app interchangeable? Not really, although they are very similar. Tajudeen Abubakr explains the difference.

 

The line between a cloud app and a web app remains as blurry as ever. This stems from the natural similarities between them. I'm of the opinion, however, that there are noteworthy differences, especially when looking to leverage cloud apps for a richer user customization experience and for seamless integration with the resilient, scalable back-end infrastructure that characterizes public cloud services.

 

Webolution

 

Just how different, similar or even blurry are these concepts? How is this of any concern to cloud consumers? And what should application service providers do to revolutionize their web apps for the cloud?

 

Cloud app

 

For me, a cloud app is the evolved web app. Like a web app, it is used to access online services over the Internet, but it is not always dependent on a web browser to work. A customizable, multi-tenant cloud app may be available from a service provider solely through the web browser, but quite often the web interface is just an alternative access method to a custom-built client for the online service.

 

Cloud apps are usually characterized by advanced features such as:

 

  • Data is stored in a cloud or cloud-like infrastructure
  • Data can be cached locally for full offline mode
  • Support for different user requirements, e.g., a data backup cloud app with features such as data compression, security and backup scheduling
  • Can be used from a web browser and/or from custom-built apps installed on Internet-connected devices such as desktops and mobile phones
  • Can be used to access a wider range of services, such as on-demand computing cycles, storage and application development platforms

 

 

Examples of cloud apps

 

Some common examples include Mozy, Evernote, SugarSync, Salesforce, Dropbox, NetSuite and Zoho.com. Other qualifying examples, such as web email (Google, Yahoo, Microsoft Hotmail, etc.), may not be so obvious, but they depend on cloud technology and are available offline if consumers choose to configure them that way.

 

There are numerous websites where you can find useful information on cloud apps. I found www.getapp.com to be particularly informative. It includes cloud app reviews and ratings to evaluate the apps.

 

Web apps

 

Web apps, on the other hand, are almost exclusively designed to be used from a web browser. A combination of server-side script (ASP, PHP, etc.) and client-side script (HTML, JavaScript, Adobe Flash) is commonly used to develop the web application. The web browser (a thin client) relies on the web server components installed on back-end infrastructure for the heavy lifting of providing the application's core web services.

 

The obvious benefit that this computing model provides over the traditional desktop app is that it is accessible from anywhere via the web browser. Cloud apps can also be accessed this way.

 

 

Examples of web apps

 

For many, including myself, web services such as WebEx, electronic banking, online shopping applications and eBay fall into this category, inasmuch as they are exclusively web-based with limited options for consumer customization.

 

In another example, I would include Facebook and similar types of web applications. I'm sure some will disagree with this, but I don't think Facebook exactly offers customized services. It's simply used as it is provided.

Conclusion

Application service providers have been quick to exploit the advantages brought about by pioneering web app building frameworks for greater customer reach. However, these technologies are not necessarily optimized for building new apps for the cloud era.

 

Cloud apps are web apps in the sense that they can be used through web browsers, but not all web apps are cloud apps. Software vendors often bundle web apps and sell them as "cloud" apps simply because it's the latest buzzword, but web apps do not offer the richness in functionality and customization you'll get from cloud apps. So, buyer beware!

 

Some software vendors also mistakenly think that just because their application runs on the web, it automatically qualifies as a cloud app. This is not always the case. For your web app to evolve into a cloud app, it should exhibit certain properties, such as:

 

  • True multi-tenancy to support consumers' varied requirements and needs
  • Support for virtualization technology, which plays a starring role in cloud-era apps; web applications should either be built to support it or be re-engineered to do so

 

The good news is that vendors looking to move into the cloud app space now have rich development platforms and frameworks to choose from, whether they are migrating an existing web app or starting from scratch. These new-age cloud app development platforms are affordable and agile, reducing time to market and software development complexity.

 

VMware Cloud Foundry, Google App Engine, Microsoft Azure, Appcara, Salesforce (Heroku and Force.com), AppFog, Engine Yard, Standing Cloud and Mendix are examples of such development platforms offering cloud-based technology for building modern applications.

 





Oct 6
What are Telehealth and a Telehealth Platform?
Posted by Thang Le Toan on 06 October 2017 11:59 PM

Telehealth is the transmission of health-related services or information over the telecommunications infrastructure. The term covers both telemedicine, which includes remote patient monitoring, and non-clinical elements of the healthcare system, such as education.

 

Telehealth examinations can be performed by physicians, nurses or other healthcare professionals over a videoconference connection to answer a patient's specific question about their condition. A telehealth visit can also be a remote substitute for a regular physician exam or as a follow-up visit to a previous care episode.

Convenience, for both sides of the care equation, is one of the major benefits of telehealth. Patients can communicate with physicians from their homes, or the patient can travel to a nearby public telehealth kiosk where a physician can conduct a thorough inspection of the patient's well-being.

In the United States, differences in state telemedicine licensure laws complicate the practice of telehealth. Some states require physicians to have full medical licenses to be able to practice telemedicine, while other states mandate physicians have special telemedicine licenses. Medicare and Medicaid reimbursement for telehealth services, such as remote checkups, has slowly been catching up to the level of in-person healthcare and the majority of states provide some amount of financial reimbursement to providers who perform telehealth visits.

The American Medical Association is one of the major healthcare groups that called for standards to be applied to telehealth to give patients more access to remote care services. The American Telemedicine Association, established in 1993, promotes the delivery of care through remote means and hosts a yearly conference on the latest news and developments in telehealth. The U.S. Department of Veterans Affairs (VA) also supports the development of telehealth. A bill introduced in Congress in 2015 would allow qualified VA health professionals to treat U.S. veterans without requiring the patient and physician to be in the same state.

 

What is the difference between telehealth and telemedicine, or are the terms always used interchangeably?





Jul 29

Books are an invaluable source of human knowledge. Learn to use them as the key that opens the door to success.

When Satya Nadella took over as CEO of Microsoft in early 2014, he declared that he would lift the whole company to a new level in an era of breakthrough growth, and what he has accomplished shows that Nadella is indeed on the right track along the path he laid out.

The key idea Nadella has pushed is the "growth mindset," which stresses the importance of learning from one another and from one's own earlier mistakes, and thereby continually strengthening, developing and improving oneself.

In an interview with Bloomberg's Dina Bass, Nadella shared that "Mindset: The New Psychology of Success," published in 2007 by Stanford psychology professor Carol Dweck, helped him greatly in shaping his instincts and character as well as his "growth mindset" principles, which he ultimately made the core standard he now applies at Microsoft.

Satya Nadella talking with employees

Here is what Nadella has shared:

"There is a fairly simple observation that Professor Carol Dweck makes: if you take two people, one a constant learner and the other a know-it-all, the learner will always come out ahead overall, even if he or she starts out showing less obvious ability."

Dweck's book digs into the deeper aspects of this idea. Some people carry a fixed, "pre-programmed" mindset, believing their talent is innate, so that trying harder is considered wasted effort. Others, meanwhile, have a growth mindset and believe that anything can be worked out through dedication and hard work.

"The smartest person is not always the best," Dweck writes in her book.

Indeed, this philosophy has had a very positive effect on Microsoft's growth and development, going on to influence even how the company approaches its PC business after its failure in the smartphone market. It has also led Nadella to weigh more options for the future, including angles involving the troublesome rival Linux and the challenges that come with it.

Professor Carol Dweck (Stanford)

As Nadella told Bloomberg: "I need to clear my head and ask myself, 'Have there been times when I was too rigid, or times when I failed to apply the standards I had set?' If I can fully master that, I believe the company will achieve even more brilliant results, just as everyone expects."

Microsoft co-founder Bill Gates is also an admiring reader of Carol Dweck's philosophy in the "Mindset" book mentioned above.

Nadella's comments and endorsements helped push "Mindset" to the top of Amazon's best-seller lists even though the book had been out for six years.

As for the book's content, Dweck explains why talent is not the only factor that determines our success; what matters is how we form our thinking and outlook about it. She stresses that overemphasizing the role of initial ability does not encourage a growth mindset and can even backfire, holding back achievement. With the right motivation and method, we can always achieve positive results, much as when we raise children by appealing to their self-esteem. The core message Dweck wants to leave readers with is that the way we think, grounded in willpower and patience and practiced by many famous people around the world, can change the outcome of an entire process.

Abridged translation of an article from TechInsider.





Jan 7
Replace SyncToy with FreeFileSync for your SMB backup needs
Posted by Thang Le Toan on 07 January 2017 12:03 AM

FreeFileSync is an open source, cross-platform backup tool. Learn how to install it and then use it to set up a backup job.

 

When a small business cannot afford industry standard backup tools like Acronis, or they are working off a desktop machine and need a more flexible backup than what is built into their platform, what options are there? One option that many SMBs use is SyncToy, but that software hasn't had a new release since 2009. Another option is the open source backup tool FreeFileSync.

FreeFileSync offers these features:

  • Detect moved and renamed files and folders
  • Run comparison before sync
  • Copy locked files (using Volume Shadow Copy Service)
  • Detect conflicts and propagate deletions
  • Binary file comparison
  • Symbolic Links support
  • Sync as a batch job (automated)
  • Process multiple folder pairs
  • Copy NTFS extended attributes and security permissions
  • Support long path names > 260 characters
  • Fail-safe file copy
  • Comprehensive error reporting
  • Cross-platform (Windows/Linux)
  • Expand environment variables (such as %USERPROFILE%)
  • Access drive letters by volume name
  • 64-bit support
  • Version control
  • Optimal sync sequence to prevent disc space bottlenecks
  • Full Unicode support
  • Include/exclude filters
  • Local or portable installation
  • Recurring backups via macros %time%, %date%
  • Case sensitive synchronization
  • Built-in locking serializes multiple jobs on same network share

FreeFileSync is not exactly point and click, and it does require you to have at least a basic understanding of these backup plans:

  • Automatic: Identify and propagate changes on both sides using a database. Deletions, renaming, and conflicts are detected automatically.
  • Mirror: Right folder is modified to exactly match the left folder upon completed sync (a rough sketch of this behavior appears in code after the lists below).
  • Update: Copy new or updated files from left folder to right folder.

It is also possible to create a custom backup type. For a custom backup, you can configure these possible options:

  • Copy new items right to left
  • Delete left item
  • Copy new items left to right
  • Delete right item
  • Overwrite right item
  • Overwrite left item
  • Do nothing
  • Leave unresolved conflicts
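
As a rough illustration of what the Mirror plan does (this is not FreeFileSync's actual implementation, and the paths are placeholders), the Python sketch below makes the right folder an exact copy of the left one, deleting anything that exists only on the right:

    import shutil
    from pathlib import Path

    def mirror(left: Path, right: Path):
        """Toy mirror sync: make `right` an exact copy of `left`.
        Simplified: assumes an item's type (file vs. folder) never
        changes between runs."""
        right.mkdir(parents=True, exist_ok=True)
        left_names = {p.name for p in left.iterdir()}

        # Remove anything on the right that no longer exists on the left.
        for item in right.iterdir():
            if item.name not in left_names:
                if item.is_dir():
                    shutil.rmtree(item)
                else:
                    item.unlink()

        # Copy new or updated items from left to right.
        for src in left.iterdir():
            dst = right / src.name
            if src.is_dir():
                mirror(src, dst)
            elif not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
                shutil.copy2(src, dst)

    # mirror(Path("/home/jlwallen/Pictures"), Path("/media/backup/Pictures"))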

Installing FreeFileSync

Here are the installation steps for Windows:

  1. Download the installer file.
  2. Double-click the downloaded file.
  3. Walk through the installation wizard.

Here are the installation steps for Ubuntu Linux:

  1. Open a terminal window.
  2. Issue the command sudo add-apt-repository ppa:freefilesync/ffs to add the repository.
  3. Update apt with the command sudo apt-get update.
  4. Install FreeFileSync with the command sudo apt-get install freefilesync.

Using FreeFileSync

We'll create an automatic backup so most of the dirty work is handled by the application. During the creation process, you'll see how simple it is to create the other types of backups.

 

First, you must decide on the source and targets for the backup. I will be working on an Ubuntu 12.10 machine (the process for setting up backups is the same on Windows and Linux; the only adjustments Windows users need to make are to the directory paths). I want to back up (sync) my /home/jlwallen/Pictures directory to a Pictures directory on an external drive.

Open FreeFileSync and, when the main window appears (Figure A), click the Browse button in the left pane. Locate your source directory (in my case /home/jlwallen/Pictures). You could also enter the path to that directory in the text area above the left pane.

Figure A

Depending upon your skill level, you might be intimidated by the interface, but the tool is much easier to use than it looks.

Then do the same thing in the right pane to locate the target directory. Once you have located the target, you'll want to run a comparison of the two locations. Click the Compare button, and the results of the comparison will appear very quickly. You get plenty of information (Figure B) about what is going to happen when you click the Synchronize button.

Figure B

In the bottom right corner, you get a snapshot of exactly what is going to happen when the backup occurs.

If you need version control on a backup, this is also possible. If you click the gear icon next to the Synchronize button, the Synchronization Settings window will appear (Figure C). Click the Versioning button, and then you can configure the versioning limit for the backup.

Figure C

How errors are handled is also configured in this window.

Let's say you want to save and schedule this particular backup. FreeFileSync does not have a built-in scheduler, so you need to save the backup as a file and then use your operating system's built-in scheduler to run the saved backup file. Here's how:

  1. Once you set up your backup exactly how you want, go to Advanced | Create Batch Job.
  2. In the Batch Job window (Figure D), make sure everything is set exactly how you need.
  3. If the backup job is to run without user intervention, make sure to disable the Show Progress Dialog checkbox in the Batch Settings tab.
  4. In the Batch Settings tab, set Error Handling to Ignore.
  5. Save the backup script with a unique name (by default the name will be SyncJob.ffs_batch) by clicking the Save As button.

Figure D

From this window, you can set up backup filtering.

Now that the backup script is written, the final step is to use your built-in scheduling tool to run the script when necessary. If you're unsure of how to use cron on Linux, you could install a handy tool called GNOME Schedule to get a nice GUI for scheduling cron jobs.
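
If you prefer to edit the crontab directly, an entry along these lines runs the saved batch job nightly. Both paths are placeholders that you would adjust to wherever the FreeFileSync binary and your .ffs_batch file actually live on your system:

    # crontab entry (edit with `crontab -e`): run the saved FreeFileSync
    # batch job at 01:30 every night; adjust both paths to your install.
    30 1 * * * /usr/bin/freefilesync /home/jlwallen/SyncJob.ffs_batch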

You now have a reliable backup scheduled.

Conclusion

Although you won't get bare-metal backups and restores, if you're looking for a flexible and easy-to-use data backup solution, give FreeFileSync a try and see if it meets your needs.





Jun 7
How do I use a RAM disk to help speed up disk-intensive applications?
Posted by Tuan Hoang Anh on 07 June 2015 08:12 AM
Jack Wallen explains how to create a RAM disk on your Windows PC and how it can be used to increase the performance of disk-intensive applications. 

There are certain applications that do an unusual amount of reading and writing data. Under normal operating circumstances, these applications work fine. But what happens when those disk-intensive applications start competing with other applications? When this happens, a serious slowdown can occur. You can prevent those slowdowns with the help of RAM disks.

A RAM disk is basically a special partition of your PC's memory that has been formatted and configured (via a special application) to be used as a high-speed target for data reading and writing. These RAM drives are significantly faster than traditional storage, so those applications will see a noticeable boost. Let's take a look at the process of creating a RAM drive in Windows for this purpose.


Step 1: Download and install the necessary application

One of the best applications that I have found for this task is Dataram's RAMDisk. You can download a free version that will give you up to a 4GB RAM drive. If you need more than 4GB, you can purchase the registration license for only $9.95. I would recommend trying the free version first to make sure the tool will suit your needs.

Once you have downloaded the file, go ahead and install it. The installation is as simple as any other Windows install. After you have the application installed, you are ready to start creating your RAM disk.

Step 2: Configure the RAM disk

To start the configuration tool, click: Start | Dataram RAMDisk | RAMDisk Configuration Utility. When you start this tool, a small window will open (Figure A) where you take care of all the RAM disk configurations.

Figure A

The maximum size of your RAM disk will depend on how much spare RAM your computer has (you will want to have plenty of extra RAM) and whether or not you have purchased a license.

Enter the size you want, check the type of partition you want to use, and then click Start. You will be prompted to install the device software in order for this to work. The installation of the drivers is part of the RAMDisk start-up.

Note: There are a few reasons why a RAM disk will fail to start. First and foremost is that you need to have administrative privileges for this to work. If you have admin privileges and the RAM disk still fails, lower the size of the RAM disk and try again.

When the RAM disk has been initialized, it will show up in Windows Explorer as a regular disk (in my case it is showing up as Local Disk I).
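
If you want to confirm the speed difference for yourself, a quick and unscientific test is to time a large write to the RAM disk versus a regular disk. The drive letters below are placeholders (in my case the RAM disk is I), and the absolute numbers will vary by system.

    import os
    import time

    def time_write(path, megabytes=100):
        """Write `megabytes` of data to `path` and report throughput."""
        chunk = os.urandom(1024 * 1024)         # 1 MB of random bytes
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(megabytes):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())                # force data out of OS buffers
        elapsed = time.perf_counter() - start
        os.remove(path)
        print("%s: %.0f MB/s" % (path, megabytes / elapsed))

    time_write(r"I:\test.bin")        # the RAM disk
    time_write(r"C:\Temp\test.bin")   # a regular disk (folder must exist)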


Now, it is very important to understand that, by nature, RAM disks use volatile storage. In other words, when you stop that RAM disk (by either manually stopping it in the RAMDisk utility or by rebooting the computer) all the contents of that RAM disk will be lost.

Fortunately, Dataram has thought of this and gives you another option in the configuration. If you look at the Load and Save tab, you will see that you can set the RAM disk up so that it will load at start-up. You will also want to consider the box marked as Save Disk Image on Shutdown. If you know you do not want to lose the data on the RAM disk, you MUST check at least this latter option. You can also set it up to autosave an image of the RAM disk if the data you are writing to the RAM disk is crucial and you want to ensure it is saved.

Step 3: Use the RAM disk

One of the easiest ways to use the RAM disk is for temporary Internet files. You can move the temporary folder for Internet Explorer over to your RAM disk, which will do two things: First, it will speed up Web browsing, and second (if you set the RAM disk to not save the image) it will lose all browsing history every time the machine is rebooted. So you get a speed increase and an increase in security.

 

To do this, open Internet Explorer and then click Tools | Internet Options | General. In the Browsing History section, click Settings. In this new window (Figure B), you will need to make sure the size of the disk space to use is less than the size of the RAM disk you intend on using.

Figure B

By default your temporary IE storage folder will be on C. You need to redirect this to the RAM disk.

After configuring the size, click on the Move Folder button and then relocate the folder to your RAM disk. Click OK when you are done with this task.

Another great use for RAM disks is for application building. If you are a programmer and want to try to cut down build times, try moving your build folders to a RAM disk and build from within. You will find your build times can be cut by approximately 25 percent. Although this may not sound like a terribly huge time advantage, if you constantly have to rebuild (during testing phases or the like), that 25 percent is going to mean a lot at the end of the day.

Final thoughts

RAM disks are very handy tools for those trying to squeeze out as much performance and/or security as they can from their PCs. Give RAM disks a try and see if you can manage to increase your PC's or application's performance. If you have found an interesting (or helpful) use for RAM disks, share your experience with your fellow TechRepublic users.





