Giving file servers redundancy

So, as I've mentioned before, I work for a 24-hour company, and as a result complete redundancy of our systems is an absolute must.

I recently decided to set up DFS (Distributed File System) on our dedicated file server so that we have redundancy for all of our data.

Our current setup consists of a Hyper-V host running about seven servers, one of which is a dedicated file server, plus a VPN running through our router to an AWS VPC.

I fired up a new EC2 instance on AWS, joined the server to the domain (thanks to the VPN), and configured it as a secondary DC (more redundancy there!). I then added a new drive to the server and installed DFS Management on both our primary FS and the new secondary DC (which will become the FS “hub”).
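If you'd rather script the role install than click through Server Manager, something along these lines does the job on each server. The feature names are the standard Windows ones; this is just a sketch of the equivalent of what I did in the GUI:

```powershell
# Install DFS Namespaces, DFS Replication and the management tools
# (run on both the primary file server and the new secondary DC)
Install-WindowsFeature -Name FS-DFS-Namespace, FS-DFS-Replication -IncludeManagementTools
```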

The first step in this process was creating a namespace for our shared folders. As it's just basic redundancy we're going for, I went with a single, simple namespace covering all of our shares.
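For anyone who prefers PowerShell to the DFS Management console, creating a domain-based namespace looks roughly like this. The domain, server and share names below are placeholders, not our real ones:

```powershell
# Create a domain-based namespace (Windows Server 2008 mode)
# "\\corp.example.com\Shares" and "FS01" are placeholder names
New-DfsnRoot -Path "\\corp.example.com\Shares" `
             -TargetPath "\\FS01\Shares" `
             -Type DomainV2
```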

I then went through all of our shares and added each one to the namespace as a DFS folder.
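The console does this with a few clicks per share, but if you have a lot of shares the scripted equivalent is quicker. A rough example, using a made-up “Finance” share:

```powershell
# Publish an existing share as a folder in the namespace
# (placeholder paths - repeat, or loop, for each share)
New-DfsnFolder -Path "\\corp.example.com\Shares\Finance" `
               -TargetPath "\\FS01\Finance"
```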

All I needed to do after that was add the second DC as a namespace server, and DFS created all the necessary folders there.
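Scripted, that step is roughly the first line below. The rest is the DFS Replication side, which is what actually keeps the files in sync between the on-premises FS and the AWS copy; all of the names and paths here are placeholders:

```powershell
# Add the second DC as a namespace server (placeholder names throughout)
New-DfsnRootTarget -Path "\\corp.example.com\Shares" -TargetPath "\\DC02\Shares"

# Point the namespace folder at the copy held on the second server too
New-DfsnFolderTarget -Path "\\corp.example.com\Shares\Finance" -TargetPath "\\DC02\Finance"

# Replicate the data between the two folder targets with DFS-R
New-DfsReplicationGroup -GroupName "Shares-Finance"
New-DfsReplicatedFolder -GroupName "Shares-Finance" -FolderName "Finance"
Add-DfsrMember     -GroupName "Shares-Finance" -ComputerName "FS01", "DC02"
Add-DfsrConnection -GroupName "Shares-Finance" -SourceComputerName "FS01" -DestinationComputerName "DC02"

# FS01 holds the authoritative copy for the initial sync
Set-DfsrMembership -GroupName "Shares-Finance" -FolderName "Finance" -ComputerName "FS01" `
                   -ContentPath "D:\Shares\Finance" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "Shares-Finance" -FolderName "Finance" -ComputerName "DC02" `
                   -ContentPath "D:\Shares\Finance" -Force
```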

Aside from setting extras like referral priorities, that's all there really is to DFS! This is just a basic rundown, mind. I'll do a start-to-finish config soon.
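In the meantime, if you do want to control those priorities (i.e. which target clients get referred to first), it can be set per folder target. For example, to prefer the on-premises server and keep the AWS copy purely as a fallback (placeholder names again):

```powershell
# Always refer clients to FS01 first; DC02 is only used if FS01 is unavailable
Set-DfsnFolderTarget -Path "\\corp.example.com\Shares\Finance" `
                     -TargetPath "\\FS01\Finance" `
                     -ReferralPriorityClass GlobalHigh
```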