Object storage is plentiful, durable and cheap, which makes it the ideal mechanism for long-term or archive storage. In fact, it’s so cost-efficient that many traditional storage appliances, primarily targeted at on-premises environments, also use object-based storage behind the scenes to keep costs down.
With all this data sitting on object storage, on a cloud platform where it can be reached from anywhere, it’s natural to wonder whether applications could use it directly. However appealing the idea, object storage was never designed to act as a file system and therefore doesn’t natively offer any such capability.
This is where LucidLink Filespaces comes in.
LucidLink Filespaces is a cloud-based file system built exclusively for object-based storage. Using it, a company can create a common namespace for the data its workers need, making it available to everyone from any location with an Internet connection.
The main feature of this product is that data does not need to be fully synchronised to the local file system before it can be used. Only the small portion of the data actually requested is fetched on demand, and further segments are downloaded as subsequent requests arrive. Partial download of files, combined with today’s Internet speeds, is the combination that makes it all possible.
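To make the idea concrete, here is a minimal sketch (my own illustration, not LucidLink’s actual implementation) of how a client can satisfy reads by fetching only the chunks a request touches and caching them locally:

```python
CHUNK_SIZE = 4  # tiny for illustration; real clients use much larger blocks


class OnDemandFile:
    """Emulates streaming a remote object: only requested chunks are fetched."""

    def __init__(self, remote_bytes):
        self.remote = remote_bytes  # stands in for the object store
        self.cache = {}             # chunk index -> bytes already downloaded

    def _fetch_chunk(self, idx):
        # In a real client this would be a ranged GET against object storage.
        start = idx * CHUNK_SIZE
        return self.remote[start:start + CHUNK_SIZE]

    def read(self, offset, length):
        data = bytearray()
        first = offset // CHUNK_SIZE
        last = (offset + length - 1) // CHUNK_SIZE
        for idx in range(first, last + 1):
            if idx not in self.cache:          # fetch on demand, once
                self.cache[idx] = self._fetch_chunk(idx)
            data += self.cache[idx]
        trim = offset % CHUNK_SIZE
        return bytes(data[trim:trim + length])


f = OnDemandFile(b"hello world, streamed in pieces")
print(f.read(6, 5))      # b'world' - only the two chunks covering bytes 6-10
print(len(f.cache))      # far fewer chunks cached than the whole file
```

The application sees an ordinary read; the client quietly downloads just the byte ranges that back it.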
That feature is especially useful for a global workforce that collaborates across widely-distributed geographic locations and yet needs access to up-to-date versions of the data.
LucidLink is a completely software-based solution, i.e. no hardware or networking changes are required on the consumer side, which makes it extremely quick to deploy and consume. Clients are available for all the major OSes and, once installed, connect to the target namespaces. The data appears to applications as if it were on the local file system.
On the source side, all S3-compliant cloud (or on-premises) platforms are supported. Because the data is streamed on demand, some interesting consumption use cases become possible, e.g. editing large video files or manipulating large datasets without consuming large amounts of local disk space.
For more interesting use cases, have a look at this video, delivered by Peter Thompson (CEO and Co-Founder of LucidLink):
I recommend watching the session because it may not be immediately obvious how many use cases such a capability can cover once the need to have all data present locally is taken out of the equation.
In fact, one use case popped into my head during that discussion that I can see being extremely useful for companies that don’t want to keep adding storage to their NAS appliances endlessly. Traditionally, those devices have been an organisation’s central storage, but in addition to serving file data, they have also stored backups and data that hasn’t been touched for a long time. That backup and cold data typically makes up most of the data on the appliance and is the main reason companies have to keep expanding it.
I can see LucidLink being the solution that trickle-uploads that data into object-based storage when live bandwidth consumption is low. The data would remain available on demand but wouldn’t need to live permanently on the NAS, freeing up the space consumed by infrequently accessed data. That should not only halt extra storage expense but may even make more space available, extending the service life of the appliance.
I mentioned it during the session and discussed it further with the LucidLink team afterwards, especially the trickle-upload mechanism. I wish we’d had more time to talk, but another session followed shortly after.
This capability isn’t currently available in LucidLink, but the team is definitely interested, and I look forward to its release, as companies would see enormous cost savings from such a feature.
The feature is not available today but I would love @Lucid_Link to enable trickle cold data migration into the cloud, while making more space available on the local device.
— Ather Beg (@AtherBeg) September 26, 2019
The Technical Side
The techies amongst us will appreciate a little more technical detail, so here’s the presentation by George Dochev (CTO and Co-Founder of LucidLink), in which he explains how it all works:
An important step in achieving this magic is separating the data into its metadata and the data itself. As you can imagine, doing so allows the client to present what looks like a complete copy of the data, while clever wizardry fetches the relevant bits on demand. While it’s true that you are relying on the link, its speed and the application type for it all to work, most applications don’t have strict performance requirements. Given today’s connectivity speeds, prefetch algorithms and tolerance for delays introduced by other factors, most applications are able to operate in such an environment.
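As a rough illustration (the record shapes below are hypothetical, not LucidLink’s actual formats), separating metadata from data means the client can answer namespace operations such as listings and sizes from metadata alone, without downloading any payload:

```python
# Hypothetical metadata records: enough to present the whole namespace
# while the chunk payloads stay in object storage until actually read.
metadata = {
    "/projects/film.mov":  {"size": 48_000_000_000, "chunks": ["c1", "c2"]},
    "/projects/notes.txt": {"size": 512, "chunks": ["c9"]},
}


def listdir(prefix):
    # A directory listing is a pure metadata operation: no data download.
    return sorted(path for path in metadata if path.startswith(prefix))


def stat(path):
    # File size likewise comes from metadata alone.
    return metadata[path]["size"]


print(listdir("/projects/"))        # both files appear, zero bytes fetched
print(stat("/projects/notes.txt"))  # 512
```

This is why the client can emulate a full copy of the data set: browsing, searching and stat-ing files never touch the bulk data at all.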
LucidLink offers both eventually-consistent and strongly-consistent models (at the LucidLink level, i.e. not to be confused with object-storage-level consistency) but, as you would expect, the latter comes at some cost to performance.
I wanted to delve deeper into the data consistency side of things: most object storage platforms still operate on an eventually-consistent model, so LucidLink needs its own strong consistency and conflict resolution built on top, which is critical for it to be used reliably in certain use cases. I wanted to know exactly how that works, but time constraints didn’t allow for it, so maybe some other time.
George also talked about using parallel TCP streams with flow control to optimise the communication, which means the more bandwidth one has to the Internet, the more responsive the storage experience is. Other technical details were discussed too, so I’d recommend watching the video to learn more.
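A minimal sketch of that idea, assuming a hypothetical fetch_chunk that would perform one ranged GET per connection (a real client adds flow control, retries and out-of-order reassembly):

```python
from concurrent.futures import ThreadPoolExecutor


def fetch_chunk(idx):
    # Placeholder for one ranged GET carried over its own TCP stream.
    return f"chunk-{idx}".encode()


def parallel_fetch(chunk_ids, streams=4):
    # One worker per stream: chunks download concurrently, and map()
    # returns results in submission order, so reassembly is trivial.
    with ThreadPoolExecutor(max_workers=streams) as pool:
        return b"".join(pool.map(fetch_chunk, chunk_ids))


print(parallel_fetch(range(3)))  # b'chunk-0chunk-1chunk-2'
```

With several streams in flight, a single slow connection no longer caps throughput, which is why more bandwidth translates fairly directly into a snappier experience.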
It’s not unusual for progress in one area of technology to enable progress in another, and this is no different. Early file-system protocols don’t translate well to the cloud, and the reverse has been true for object-based storage too. However, the separation of metadata from the data itself, together with reliable high-speed connectivity, has enabled LucidLink to bridge that gap and build a solution that provides global file-system access with the durability and cost-effectiveness of object storage.
Being completely software-based also helps the cause greatly, as it makes deployment, configuration and integration quick and easy. However, I’m unsure how many customers will actually take advantage of the global namespace feature, as doing so is likely to require reconfiguration or modification of their applications.
The solution also needs some maturing, and the roadmap shows that LucidLink is working on it: enterprise authentication mechanisms, deeper protocol-level controls, better scaling (beyond millions of files) and so on.
All that said, LucidLink Filespaces is indeed a clever piece of software that ticks a lot of boxes. It won’t suit every application, but it does look like the right answer for many others.
Disclaimer: As is customary for Tech Field Day delegates (and just in case), I would like to say that while Gestalt IT paid for my travel, accommodation etc. to attend Cloud Field Day 6, I am not being paid or asked to write anything either good or bad about any of the companies that presented at the event.