Article Summary


Summary: Erasure codes provide high reliability at a much lower storage cost than data replication, which is one reason distributed storage systems are increasingly adopting them. The paper presents a new erasure-coded storage system, HACFS, designed to overcome a key disadvantage of these codes: reconstructing an unavailable data block requires reading from multiple disks. HACFS adaptively combines two codes, a fast code that optimizes recovery performance and a compact code that reduces storage overhead.
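To make the disadvantage concrete, the following is a minimal illustrative sketch (not the paper's code) of a single-parity erasure code: repairing one lost block forces a read of every surviving block in the stripe, which is exactly the repair cost HACFS aims to reduce.

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def encode(data_blocks):
    """Append one parity block: parity = XOR of all data blocks."""
    return data_blocks + [xor_blocks(data_blocks)]

def reconstruct(stripe, lost_index):
    """Rebuild the block at lost_index from ALL other blocks."""
    survivors = [b for i, b in enumerate(stripe) if i != lost_index]
    return xor_blocks(survivors)  # must read every survivor

data = [b"aaaa", b"bbbb", b"cccc"]
stripe = encode(data)
assert reconstruct(stripe, 1) == b"bbbb"  # 3 reads to repair 1 block
```

With replication, the same repair would be a single read of a surviving copy; the trade-off is that replication stores the data several times over.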

Strengths: HACFS incorporates an adaptive coding technique that stores data at far lower cost than data replication while still recovering it quickly. It improves degraded read latency and reconstruction time, and keeps storage overhead bounded and lower than previous storage solutions. With product codes, degraded read latency is reduced by 25 to 46% and reconstruction time by 14 to 43%. With LRC codes, degraded read latency is reduced by 21 to 43%, reconstruction time is improved by up to 32%, and storage overhead is reduced by 4 to 10%.

Weaknesses: When storage overhead is aggressively limited by converting blocks to the compact code, HACFS's conversion cost increases. Also, because HACFS uses LRC codes, its performance on some workloads is worse than RS(6,3): recovering an LRC global parity is more expensive than the corresponding repair under RS(6,3).
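The asymmetry behind this weakness can be sketched as follows (a hypothetical layout, not the paper's exact parameters): in an LRC, data blocks are split into small local groups, each with its own XOR local parity, so a single lost data block is repaired from just its group, while global-parity repairs must touch many more blocks.

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Six 4-byte data blocks split into two local groups of three.
data = [(b"d%d" % i) * 2 for i in range(6)]
groups = [data[0:3], data[3:6]]
local_parities = [xor_blocks(g) for g in groups]

# Lose block 4 (second group): local repair reads only the two group
# siblings plus the group's local parity -- 3 reads instead of 6.
lost = 4
group, parity = groups[1], local_parities[1]
survivors = [b for i, b in enumerate(group) if i != lost - 3]
repaired = xor_blocks(survivors + [parity])
assert repaired == data[lost]
```

A lost global parity, by contrast, cannot use any small local group and must be recomputed from many data blocks, which is why global-parity recovery cost dominates the comparison against RS(6,3).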

Questions: How can these erasure codes be incorporated into cloud storage?

How would large-scale cloud object stores respond to erasure codes?