Monday, November 24, 2008

palimpsest soft capacity storage file system

Paper: Palimpsest: Soft-Capacity Storage for Planetary-Scale Services.

Summary:
This paper presents Palimpsest, an ephemeral storage system that provides secure, shared soft-capacity storage for wide-area services, with a congestion-based pricing scheme.
The model targets the storage requirements of shared planetary-scale computational infrastructures such as PlanetLab and Denali.
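
To make the "soft capacity" idea concrete, here is a minimal sketch (my own illustration, not the paper's implementation): writes go into a fixed-size FIFO store, old blocks are silently evicted rather than deleted, and the effective persistence time falls as the aggregate write rate rises. All names and numbers below are assumptions.

```python
# Minimal sketch of the soft-capacity idea. Illustrative only; names and
# numbers are assumptions, not Palimpsest's actual design or parameters.
from collections import deque

class FifoBlockStore:
    """Fixed-capacity store: each write beyond capacity evicts the oldest
    block, so data 'decays' instead of being explicitly deleted."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = deque()              # (block_id, payload), oldest first

    def write(self, block_id, payload):
        if len(self.blocks) >= self.capacity:
            self.blocks.popleft()          # oldest data silently disappears
        self.blocks.append((block_id, payload))

    def read(self, block_id):
        for bid, payload in self.blocks:
            if bid == block_id:
                return payload
        return None                        # already decayed

# The effective persistence time ("time constant") shrinks as aggregate
# write load from all tenants grows:
capacity_blocks = 1_000_000
aggregate_write_rate = 500                 # blocks/second, assumed value
time_constant = capacity_blocks / aggregate_write_rate
print(f"data survives roughly {time_constant:.0f} s before decaying")
```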

Liked:
• The model is well suited to data with a short lifetime, such as checkpoint states and the intermediate output of a computation.
• The model exposes a number of trade-offs, such as storage capacity versus persistence, and resilience to the loss of fragments versus storage cost and network bandwidth.
• Erasure-coding the stored data provides high durability and reliability, provided the time constant does not change (see the toy sketch after this list).
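
As a rough illustration of the fragment trade-off mentioned above, here is a toy single-parity scheme in Python: k data fragments plus one XOR parity fragment, so any one lost fragment can be rebuilt. This is a deliberately simplified stand-in for the stronger erasure codes the paper relies on; the fragment count and sample data are made up.

```python
# Toy single-parity erasure code: k data fragments + 1 XOR parity fragment,
# so any ONE lost fragment can be rebuilt from the survivors. A simplified
# stand-in for real erasure codes; sample data and k are arbitrary.
def split_with_parity(data: bytes, k: int):
    frag_len = -(-len(data) // k)          # ceiling division
    frags = [data[i * frag_len:(i + 1) * frag_len].ljust(frag_len, b"\0")
             for i in range(k)]
    parity = frags[0]
    for f in frags[1:]:
        parity = bytes(x ^ y for x, y in zip(parity, f))
    return frags + [parity]                # k + 1 fragments in total

def rebuild_fragment(fragments, missing_index):
    """Recover one missing fragment by XOR-ing all surviving fragments."""
    survivors = [f for i, f in enumerate(fragments)
                 if i != missing_index and f is not None]
    out = survivors[0]
    for f in survivors[1:]:
        out = bytes(x ^ y for x, y in zip(out, f))
    return out

frags = split_with_parity(b"checkpoint state from stage 3", k=4)
frags[2] = None                            # pretend one fragment decayed
recovered = rebuild_fragment(frags, missing_index=2)
print(recovered)
```

More fragments and stronger codes tolerate more losses, but each extra fragment costs additional storage and network bandwidth, which is exactly the trade-off noted above.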

Disliked:
• Dynamic time constant values force blocks to be refreshed at timings dictated by the state of Palimpsest rather than by the application. This can destabilize the user application/service, since refreshes disrupt its normal flow of execution (a sketch of such a refresh loop follows this list).
• Erasure coding and secure hashing can make stores expensive, which is undesirable; for checkpointing or intermediate-result storage, the extra work can also introduce delays.
• On one hand the system makes data durable by fragmenting it, but on the other hand it discards data early whenever the time constant drops because some other application is doing heavy writes. This is at odds with the model's stated functionality.
• There is no mechanism to modify data; every update requires rewriting it in full. This is undesirable when the data produced in consecutive stages is closely related and only the difference needs to be stored.
• The paper is quite superficial; not enough detail is provided on how things would work in an actual implementation.
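
To make the refresh concern concrete, here is a hypothetical client-side keep-alive loop; store_fragments and estimate_time_constant are assumed helpers, not Palimpsest's real API. The point is that the refresh period is dictated by the store's current time constant (i.e., by other tenants' write load), not by the application's own schedule.

```python
# Hypothetical client-side keep-alive loop. store_fragments and
# estimate_time_constant are assumed helpers, not Palimpsest's real API.
import time

def estimate_time_constant():
    """Assumed probe of the store's current decay time constant (seconds).
    In reality this would shrink whenever other tenants write heavily."""
    return 10.0                            # placeholder value

def store_fragments(fragments):
    """Assumed call that (re)writes the application's fragments."""
    pass

def keep_alive(fragments, safety_factor=0.5, rounds=3):
    for _ in range(rounds):
        store_fragments(fragments)         # interrupts the service's own work
        tau = estimate_time_constant()     # set by global write load
        time.sleep(tau * safety_factor)    # must refresh before data decays
```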

Details will be provided later; right now I don't have any time.
