My dream home lab/SAN

Hey there,

I thought I'd come and post my idea for a home SAN that I'll probably start building in January.

The purpose will be to give me something to screw around with for testing LizardFS and CephFS without risking stuff I really care about.

First off, the location. I live in an apartment complex in Seattle, right next to the boiler room. For various reasons it wouldn't be a problem for me to put basically whatever I wanted in there, and it has a nice big vent to the outside. The plan is to add fans in strategic locations to provide adequate ventilation.

Second, the rack. Ideally I'll go with a 42U rack, even though I'll probably only fill 6U for a long time. This is the one I'm planning on using: http://www.amazon.com/Tripp-Lite-SR42UBSD-Enclosure-Capacity/dp/B005FLU5Z4/

I'll probably move to something much more heavy-duty once the farm grows.

And the final piece of core hardware, the switch: http://www.ebay.com/itm/221421900710

I'm hoping to eventually use 10GbE networking equipment throughout the whole rack. I should be able to pick up a second-hand 10GbE Arista switch for < $2500 when the time comes.

Now, on to the actual servers...

For the metamaster, I'll be using an HP ProLiant DL360 G5. These can be found second-hand for under $200 and can house 64GB of RAM, which is plenty for the dataset I'll be using. (For comparison, at work we have 150TB of data in our cluster from multiple years' worth of student, staff and faculty data - and yet our metadata set only recently outgrew 64GB of RAM.)
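
As a rough sanity check on that 64GB number, here's a back-of-the-envelope estimate. The ~300 bytes of master RAM per filesystem object is an assumption (the usual MooseFS-family rule of thumb), not something I've measured:

    # Rough estimate of how many filesystem objects fit in the metamaster's RAM.
    # ASSUMPTION: ~300 bytes of master RAM per object (file or directory),
    # per the usual MooseFS/LizardFS rule of thumb; real usage varies by workload.
    bytes_per_object = 300
    ram_bytes = 64 * 1024**3              # 64 GB in the DL360 G5

    print(f"{ram_bytes // bytes_per_object:,} objects")   # ~229 million objects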

I will have a second HP ProLiant DL360 G5 as the "shadow master" (essentially a hot spare in case the master dies).
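
If I go the LizardFS route first, the shadow master is just a second master process pointed at the primary. A minimal sketch of the config I expect (the hostname is a placeholder and the option names should be double-checked against the LizardFS docs for whatever version I end up on):

    # /etc/lizardfs/mfsmaster.cfg on the shadow box (sketch, not verified)
    PERSONALITY = shadow
    MASTER_HOST = metamaster.lan    # placeholder hostname of the primary master
    DATA_PATH = /var/lib/lizardfs   # where the replicated metadata lands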

Finally, for the chunkservers (which store data) I would use something like this: http://www.supermicro.com/products/system/1U/6017/SYS-6017R-73THDP_.cfm - which gives you 12 hard drives in 1U and 10GbE built right in.
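
Each chunkserver then just needs to know where the master is and which drives to feed it. Roughly (again a sketch - the hostname and mount points are placeholders):

    # /etc/lizardfs/mfschunkserver.cfg (sketch)
    MASTER_HOST = metamaster.lan

    # /etc/lizardfs/mfshdd.cfg - one mount point per data drive
    /mnt/chunk01
    /mnt/chunk02
    # ...and so on through /mnt/chunk12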

Final bill of materials:

  • 1x 42U rack ($1000)
  • 1x Arista 10G 24-port switch ($2500)
  • 2x HP ProLiant DL360 G5 servers ($400)
  • 3x Supermicro SuperServer 6017R-73THDP ($4200)

Assuming I can't get anything cheaper, that means it'll come to about $8000 for everything, sans hard drives, and give me a total capacity of 180TB. There are cheaper and better ways to do all this, but my goal is to have 3 chunkservers, a master and a shadow master.
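
For anyone checking my math (the drive size is an assumption - 180TB works out to 5TB drives across the 36 bays):

    # Bill-of-materials and capacity sanity check
    rack     = 1000
    switch   = 2500
    masters  = 400     # 2x DL360 G5
    chunkers = 4200    # 3x Supermicro 6017R-73THDP
    print(rack + switch + masters + chunkers)   # 8100 -> "about $8000", before drives

    # Raw capacity: 3 chunkservers x 12 bays, assuming 5TB drives
    print(3 * 12 * 5)                           # 180 TB raw, before replication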

Each of the chunkservers can hold 2 SSDs, so the plan is to put two 512GB Samsung 840 Pros in each.

Put it all together with cache pools in Ceph and you have a system that will (in short bursts) write as fast as an SSD, read as fast as an SSD for frequently requested data, and still have an absolutely monstrous data capacity.
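
On the Ceph side that would be cache tiering: give the SSDs their own pool and overlay it on the HDD-backed pool. Something along these lines - pool names, PG counts and the SSD CRUSH rule are placeholders, so treat it as a sketch rather than a recipe:

    # Sketch: SSD cache tier in front of an HDD-backed pool
    ceph osd pool create hdd-pool 512
    ceph osd pool create ssd-cache 128              # mapped to SSD OSDs via a CRUSH rule
    ceph osd tier add hdd-pool ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay hdd-pool ssd-cache
    ceph osd pool set ssd-cache hit_set_type bloom
    ceph osd pool set ssd-cache target_max_bytes 1000000000000   # cap well below raw SSD capacity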

Thoughts?

~ pseudo

EDIT: Plans for what I'll put on it and further details in the comments.
