Over the last several years I have had numerous engagements with a certain line of products that led me into the depths of file-system-centric hierarchical storage management (HSM) solutions; namely Dell-EMC’s DiskXtender (formerly Legato and OTG). As you may know, DiskXtender (DX) allows an engineer, architect, or techie of that ilk to configure UNIX, Linux, or Windows server systems with automated, policy-based file migrations across tiered storage platforms.

In Windows, DX utilizes a file system driver that intercepts user input/output requests. Requests are handled by DX and, if needed, the data is migrated back to an accessible, higher-tier storage location for use by the user and associated applications. It is this I/O intercept that allows DX to detect the state of the data, match it against the criteria of its rule-based policies, and act on it. It is in this process that DX takes control of the data and adds DX extended attributes to the files.

The HSM concept, in general, is intended to let an organization retain online access to greater amounts of data without a corresponding increase in storage costs. Simply put, data that matches an applicable policy is automatically copied to less expensive storage media, where it can be archived and retrieved later as needed. This action is intended to be transparent by design; however, that is certainly not always the case.

The idea of using DX to migrate data off of high-cost storage onto lower-cost storage may have sounded appealing to any bookkeeper reviewing a proposed IT budget. However, as many have come to know, there are some technical pitfalls surrounding this solution. Adding insult to injury, on December 31st, 2015, EMC announced that the sun was setting on the inglorious DiskXtender software. The end of service life (EOSL) date is December 31st, 2017, after which EMC will no longer provide any support or development for DX whatsoever.

For those who have it entwined in their environment today, there may be a lot of heavy lifting in the months ahead. If data preservation is of any importance to you or your organization, it is imperative to tackle this task with urgency. In the meantime, try not to exceed the licensed capacity of the file system manager; if you do, focus your efforts on retiring the platform, not on seeking additional licensing. EMC won’t issue you one anyway.

The DX file system manager (FSM) is set up by first configuring extended drives from any server disk that is local to the FSM installation, including block storage presented from a disk array. Once an extended drive has been configured, directories on it can be selected to be managed as media folders. Under every media folder is where the mayhem occurs. Media groups containing tier two media, which must be configured beforehand, are assigned to a single, selected media folder. Rule-based policies (Purge, Move, Delete, and Index rules) are then configured to manage the data within the media folder and instruct the FSM on how to handle data matching the configured criteria on the next background scan. Additionally, configuration of advanced options (not covered within this article), task schedules, and background scan settings is required.

To outline some basics, three heavily used functions in DX are moves, purges, and fetches. A move (copy) occurs when a file matches the move criteria and is copied out to its specified media group. A purge occurs either by policy or by watermark threshold; it deletes the data from the extended drive, leaving only a stub file, while retaining the data on the lower tier storage specified in the media group(s). A fetch simply brings the purged data back, meaning there are once again two copies of the data: one on the first tier and another on the lower tiers. EMC DiskXtender supports many types of lower tier media: EMC Atmos, EMC Centera, NAS, tape and/or CD/DVD managed by the DX MediaStore server components, and more. Each media type presents its own challenges, technical limitations, cost savings, and so on.
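
To make the three operations concrete, here is a minimal Python sketch that models them against two plain directories standing in for tier one and tier two. This is a toy illustration of the mechanics only, not how DX actually implements any of this; the directory names and the zero-length stub convention are my own assumptions.

    import shutil
    from pathlib import Path

    TIER1 = Path("tier1")   # stands in for the extended drive (assumption)
    TIER2 = Path("tier2")   # stands in for the media group target (assumption)

    def move(name: str) -> None:
        """Copy the file to tier two; the tier-one copy stays in place."""
        TIER2.mkdir(exist_ok=True)
        shutil.copy2(TIER1 / name, TIER2 / name)

    def purge(name: str) -> None:
        """Replace the tier-one copy with a zero-length stub; tier two keeps the data."""
        (TIER1 / name).write_bytes(b"")   # toy stub; real DX stubs carry more metadata

    def fetch(name: str) -> None:
        """Bring the data back to tier one, leaving the tier-two copy intact."""
        shutil.copy2(TIER2 / name, TIER1 / name)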

As it stands, DX can no longer be purchased. I am assuming that if you are reading this you are either using it currently, have used it in the past, or are simply reading out of curiosity. For your sake I hope the latter. Retiring a DX FSM is quite often not easy, especially if you have been purging data off of your extended drives, leaving only stub files in their place.

First and foremost: stop writing to DX-managed drives; you’re only making your job harder.

If the system has no purged data, you are in better shape than if it does. You should have enough disk space, at present, to hold all of your data, since nothing has been removed from tier one. Do not assume there is no purged data just because a purge watermark was never breached or no purge rules were configured within the FSM; purging can also happen at the command of a system administrator, at any time. To confirm or rule out the presence of purged data, I highly recommend a scan of all media folders across the DX environment with the DXDMCHK command line utility. This will output the number of files that are currently purged, among other statistics.
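
DXDMCHK is the authoritative count, but you can get a quick, independent sanity check from the file system itself. The sketch below assumes purged DX stubs carry the Windows offline and/or reparse-point file attributes, which is typical for HSM stubs but worth verifying against the DXDMCHK output; the media folder paths are placeholders.

    import os
    import stat

    # Attributes commonly carried by HSM stub files on NTFS (assumption for DX; verify).
    STUB_FLAGS = stat.FILE_ATTRIBUTE_OFFLINE | stat.FILE_ATTRIBUTE_REPARSE_POINT

    def count_suspected_stubs(media_folder: str) -> int:
        """Walk a media folder and count files that look purged. Windows only."""
        count = 0
        for root, _dirs, files in os.walk(media_folder):
            for name in files:
                info = os.stat(os.path.join(root, name), follow_symlinks=False)
                if info.st_file_attributes & STUB_FLAGS:
                    count += 1
        return count

    if __name__ == "__main__":
        for folder in (r"E:\MediaFolder1", r"F:\MediaFolder2"):  # placeholder paths
            print(folder, count_suspected_stubs(folder))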

Unfortunately, even after confirming that you have zero purged files, you cannot simply copy the data to another location, as the DX extended attributes will remain attached to the files. These attributes must be stripped before DX can be removed or the data copied to an alternate location. If they are not, not only will this extra metadata stay attached to your data for eternity, but you could also suffer data loss when the DX FSM is deconstructed and decommissioned.

When you decommission DX, the configuration that was put in place must be reverse engineered before you can release the system from its entanglement. This requires you to delete all rules and policies, de-allocate media groups from the media folder, and remove the extended drive. Performing these necessary actions will delete your data from its tier one location if the extended attributes have not first been cleaned from the data. Cleaning can be achieved in several different ways; in my experience, the easiest method is to use the DXDMCHK utility, since all of the data resides on local disk in this scenario.

Before a copy and/or removal is done, you will need to open a ticket with the EMC DX support team to request a DXDMCHK advanced key. This key allows the DXDMCHK tool to run in privileged mode and gives access to advanced features, namely the ability to clean data. Keep in mind the keys they issue are only good for seven days; it is likely that you will need to request multiple keys over the course of the migration, so keep the ticket open.

When the data is cleaned, it is stripped of these extended attributes and DX relinquishes control of it. Make sure that before the data is cleaned, you remove all rules from the media folder and disable background scanning across all of the extended drives. If this is not done, the FSM could reassume control of the data before you are finished, depending on how your rules and background scanning are configured. Once completed, DX should no longer manage any data and the FSM can be deconstructed and removed.
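
Before deleting rules, media groups, and the extended drive, a last pass over the cleaned data is cheap insurance. Under the same assumption as above (offline and reparse-point attributes marking managed or purged files), this sketch reports anything that still looks like it belongs to DX; an empty report is what you want to see before teardown.

    import os
    import stat

    MANAGED_FLAGS = stat.FILE_ATTRIBUTE_OFFLINE | stat.FILE_ATTRIBUTE_REPARSE_POINT

    def still_managed(extended_drive: str):
        """Yield any file that still carries attributes typical of DX-managed data."""
        for root, _dirs, files in os.walk(extended_drive):
            for name in files:
                path = os.path.join(root, name)
                if os.stat(path, follow_symlinks=False).st_file_attributes & MANAGED_FLAGS:
                    yield path

    if __name__ == "__main__":
        leftovers = list(still_managed("E:\\"))  # placeholder extended drive
        print(f"{len(leftovers)} file(s) still look managed")
        for p in leftovers[:20]:
            print(p)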

On the other hand, if the system has purged data, you have a few choices to make based on the overall resources and configuration of your systems and storage infrastructure. First, determine whether you can fetch all of your purged data back given your available free space. If there is enough room to store everything, fetch all the data and clean it following the outline above. If there is not enough storage presented to your server, you will need to determine whether you are able to expand it. If not, a different approach will be required.
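
Whether fetching everything back is feasible comes down to arithmetic: the logical size of every purged file versus the free space on the extended drive. A rough sketch follows, again assuming stubs are identifiable by the offline attribute and that a stub still reports the file’s original logical size; both assumptions are worth checking in your environment, and the drive letter is a placeholder.

    import os
    import shutil
    import stat

    def fetch_feasibility(extended_drive: str) -> None:
        """Compare the logical size of suspected purged files to free space. Windows only."""
        needed = 0
        for root, _dirs, files in os.walk(extended_drive):
            for name in files:
                info = os.stat(os.path.join(root, name), follow_symlinks=False)
                if info.st_file_attributes & stat.FILE_ATTRIBUTE_OFFLINE:
                    needed += info.st_size          # logical size the fetch would consume
        free = shutil.disk_usage(extended_drive).free
        print(f"needed: {needed / 2**30:.1f} GiB, free: {free / 2**30:.1f} GiB")
        print("fetch-all looks feasible" if needed < free else "not enough free space")

    fetch_feasibility("E:\\")  # placeholder extended drive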

What solution are you providing to replace your second tier storage? Is it possible to add additional storage to the current server? If not, what is your new storage target? Whatever your solution is, you will now need to manipulate the data within the FSM itself. The DX Migration Utility is out of the question, as the target (if you are migrating to another server) will not, and should not, contain another instance of DX.

A multi-target media group could be configured with two media types: your original lower tier media and the new target media. DX will keep all media within the media group in sync, a process referred to as a sync-fetch. The issue with a sync-fetch is that DX must fetch purged data back in order to write it out to the newly added target media. But as indicated previously, you do not have enough storage space to hold all of this data, so DX must also purge data at the same time to make room for the fetched files to return.

Not only is this resource intensive in every manner (CPU, memory, network, etc.), but it also runs the risk of filling up the extended drive if the rates of fetches and purges do not remain balanced (and they won’t). This process, however, runs largely without human intervention, other than monitoring and correcting, though it can be quite slow.
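
Since the biggest operational risk during a sync-fetch is the extended drive filling faster than purges can relieve it, a trivial watchdog that polls free space and flags a low-water condition is worth running alongside it. A sketch with a placeholder drive letter, threshold, and polling interval:

    import shutil
    import time

    DRIVE = "E:\\"                 # placeholder extended drive
    MIN_FREE_BYTES = 50 * 2**30    # alert below 50 GiB free (pick your own threshold)
    POLL_SECONDS = 300

    while True:
        free = shutil.disk_usage(DRIVE).free
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        print(f"{stamp}  free: {free / 2**30:.1f} GiB")
        if free < MIN_FREE_BYTES:
            print(f"{stamp}  WARNING: extended drive below free-space threshold")
        time.sleep(POLL_SECONDS)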

The second option, and arguably the better of the two I am reviewing, is to compact the media, especially since time is not on our side in this matter. Again, a few things must be taken into consideration before handling this task. Compaction simply removes the data from the media and places it back onto tier one storage in a clean state; it also removes the media from the media group it was assigned to. Just as before, rules must be adjusted so that the data does not end up back on the media it came from. Additional media representing the new target (server, NAS, SAN) must be created and assigned to the media folder, and the move rule must then be changed to point to the new media as the destination of the DX data.

Compactions will then occur as you schedule them, bringing the data back and moving it out to your new target. This process is simple but error prone, and it can be a massive time drain depending on the amount of data you have and the type of media it resides on; for example, compacting 30 tapes is much better than compacting 400 DVDs. When complete, redirect users and applications to the new location, which was configured as the additional media target in the FSM, to access the data.
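
Before redirecting users and retiring the old path, it is worth spot-checking that what landed on the new target matches what came off the extended drive. A basic SHA-256 comparison between the two trees, with placeholder paths for the old media folder and the new share:

    import hashlib
    import os

    def sha256(path: str) -> str:
        """Hash a file in chunks so large files don't exhaust memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    def compare_trees(old_root: str, new_root: str) -> None:
        """Report files that differ or are missing between the old and new locations."""
        for root, _dirs, files in os.walk(old_root):
            for name in files:
                old_path = os.path.join(root, name)
                new_path = os.path.join(new_root, os.path.relpath(old_path, old_root))
                if not os.path.exists(new_path):
                    print("MISSING:", new_path)
                elif sha256(old_path) != sha256(new_path):
                    print("MISMATCH:", old_path)

    compare_trees(r"E:\MediaFolder1", r"\\newnas\share\MediaFolder1")  # placeholders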

Media compaction back to tier one works because it inherently cleans the data. Both the sync-fetch and compaction out to a new tier two migration target work because users will be redirected to the data on that target, bypassing DX altogether. DX extended attributes are only written to the source data on the extended drives, not to the data on the lower tiers.

By no means is this intended to be a comprehensive retirement guide, product review, or technical recommendation; rather, it is a goodbye intended to get current DX users past stage one of the grieving process: denial. Storage is getting cheaper; that does not make HSM solutions bad, but they are not right for everyone. Whatever you decide to do moving forward, you need to take a deep dive into retiring all DiskXtender products as soon as possible. Otherwise you will end up troubleshooting compaction errors with forums as your only form of support, diving into the bowels of a file-system-intrusive application that is capable of deleting massive amounts of data with something as innocuous as a mouse click or the removal of media from a media group.

Godspeed
