Storj.io and Elasticsearch

I’m starting my Storj node operator (SNO) adventures during their alpha, and I am enjoying gathering statistics and whatnot.

This post is devoted to the adventure. Right now it is just a few scripts and bits of config I am using.

Here is my Logstash filter for the Storj Docker logs:

# Tag everything that comes from the storagenode container
if [container][name] == "storagenode" {
  mutate {
    add_tag => ["StorageNode"]
  }
}

if "StorageNode" in [tags] {
  # Replace CRLF sequences with a literal LINE_BREAK token so multi-line entries stay on one line
  mutate {
    gsub => ["message", "\r\n", "LINE_BREAK"]
  }
  # Parse the piecestore log lines using the custom patterns below
  grok {
    patterns_dir => "/etc/logstash/patterns"
    match => [ "message", "%{STORJPARSE}" ]
  }
  mutate {
    replace => [ "message", "%{message}" ]
  }
}
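If you are copying this, Logstash can sanity-check the pipeline before you reload it. A quick sketch (the install and config paths are assumptions; adjust for your setup):

/usr/share/logstash/bin/logstash --config.test_and_exit --path.settings /etc/logstash -f /etc/logstash/conf.d/
systemctl restart logstash   # pick up the new filter once the test passes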

And here is what I am currently doing to grok that stuff:

FUNCTION (download(ed| failed| started)?|upload(ed| failed| started)?)
DELETE (deleted)
STORAGEPARSE %{TIMESTAMP_ISO8601:time}%{SPACE}%{GREEDYDATA}%{SPACE}piecestore%{SPACE}%{FUNCTION:function}%{SPACE}{"Piece ID"\: "%{DATA:pieceid}", "SatelliteID": "%{DATA:satelliteid}", "Action": "%{DATA:action}"}%{GREEDYDATA:message}
CCANCEL %{TIMESTAMP_ISO8601:time}%{SPACE}%{GREEDYDATA}%{SPACE}piecestore%{GREEDYDATA}: infodb:%{SPACE}%{DATA:action},%{DATA:function}%{GREEDYDATA:message}
CCANCEL1 %{TIMESTAMP_ISO8601:time}%{SPACE}%{GREEDYDATA}%{SPACE}piecestore%{SPACE}%{FUNCTION:function}%{SPACE}%{SPACE}{"Piece ID"\: "%{DATA:pieceid}", "SatelliteID": "%{DATA:satelliteid}", "Action": "%{DATA:action}", "error": "%{DATA:action_error}"%{GREEDYDATA:message}
RPCCANCEL %{TIMESTAMP_ISO8601:time}%{SPACE}%{GREEDYDATA}%{SPACE}piecestore protocol: rpc error: code = Canceled desc = %{SPACE}%{DATA:action},%{DATA:function}%{GREEDYDATA:message}
STORJDELETE %{TIMESTAMP_ISO8601:time}%{SPACE}%{GREEDYDATA}%{SPACE}piecestore%{SPACE}%{DELETE:function}%{SPACE}{"Piece ID"\: "%{DATA:pieceid}"
STORJPARSE (?:%{STORJDELETE}|%{RPCCANCEL}|%{CCANCEL1}|%{CCANCEL}|%{STORAGEPARSE})
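Those definitions need to live wherever patterns_dir points, one pattern name and its definition per line. A sketch of getting them in place (the filename is arbitrary; grok loads every file in that directory):

mkdir -p /etc/logstash/patterns
vi /etc/logstash/patterns/storj      # paste the pattern definitions above into this file
systemctl restart logstash           # restart so the grok filter picks them up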

The Gluster Cluster

It is time to replace Goomba…
He’s been a good little beast, but I am physically out of space to add more hard disks, and frankly, this case simply will not be good for long-term storage…


I’ve decided to move to a GlusterFS cluster built on system-on-chip boards (ODROID HC2).

This is how I do it…
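The rough shape of it, as a sketch only (hostnames, brick paths, and the three-way replica count are assumptions, not my final layout):

# One replicated GlusterFS volume spread across three HC2 nodes (hypothetical names)
gluster peer probe hc2-node2
gluster peer probe hc2-node3
gluster volume create tank replica 3 hc2-node1:/data/brick1/tank hc2-node2:/data/brick1/tank hc2-node3:/data/brick1/tank
gluster volume start tank

# Mount it from any client on the network
mount -t glusterfs hc2-node1:/tank /mnt/tank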

What’s under the hood – Part 1

Every once in a while, I like to post a little about what I have running in my “server room” (basement shelf).

I have a rather large shelf in the back corner of my basement that looks a lot like this.

[Image: IMG_20151115_183852.jpg]

Describing everything goes like this…

Top Right:

[Image: IMG_20151115_183802.jpg]

Three APC UPSes: two Back-UPS XS-1300-LCDs and one Back-UPS XS-1500.

Obviously those guys handle backup power and are labeled appropriately. On average, I get about 45 minutes of standby power from them, which is plenty of time to either shut down or get the generator running.

I also recently acquired an Avocent DVR4020. I know it’s overkill, but I LOVE having remote access to the console of my machines.

Next, Bottom Left:

[Image: IMG_20151115_183808.jpg]

This is kinda my pride and joy… Goomba—

He runs a little hot for you to see why he’s named Goomba, but he’s the storage pod.

He’s got 6x3TB drives, 3x2TB drives, and 4x5TB drives.

They are broken down as follows:

[zpool status output]
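To give a sense of how a mixed pile like that usually gets grouped, a purely hypothetical sketch: one raidz1 vdev of same-size disks per pool (pool names, device names, and layout here are all assumptions, not my actual config):

# Hypothetical grouping only
zpool create threes raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf   # 6x 3TB
zpool create twos raidz1 /dev/sdg /dev/sdh /dev/sdi                                # 3x 2TB
zpool create fives raidz1 /dev/sdj /dev/sdk /dev/sdl /dev/sdm                      # 4x 5TB
zpool status    # lists each pool, its vdevs, and drive health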

 

[Image: IMG_20151115_183820.jpg]

The VMware boxes, Spiny and Bowser… Bowser is a remnant of an old AMD VMware cluster that I ran. He’s alright, but I am migrating to Intel NUCs like Spiny.

Spiny is an Intel NUC5i5MYHE, which, for those who aren’t capable of googling, is a 5th-generation Intel i5 NUC. The MYHE version of the NUC is the beefier-processor version. He acts like he has a quad core, even though he’s only a dual (thanks to hyper-threading). I have had phenomenal performance from this guy, and I hope to replace Bowser with another one soon.

 

Soon I will have to break down each box individually, mostly because all of them required a little TLC in order to make them purr like a kitten. Additionally, I will soon break down all of the VMs I have running, which will hopefully provide some clarity for those looking to mimic my setup.

 

Cable Management – AU Style

The good old days at AU came up in a conversation at work today… Mark and Kurt, I am still confident that this was a ridiculous method of cable management, but I have to give it to you, it was MUCH more functional than all the cables sitting on the floor of the cabinets.

[Image: 12096516_927445794622_7300568640743779737_n]

MOAR STORAGE

I’d like to thank friends of Goomba for making him fatter!
This brings the total drive count to 13, and once fully reconfigured, 27TB of redundant storage.

[Image: IMAG0188.jpg]

DRM Cable Channels

I go through this fight every year… I pay for cable, and greedy TV channel owners DRM-lock their channels so that people don’t steal the content…

The problem is, as a paying customer, I can’t view their channel because I don’t use a traditional cable box.

Now I have to find an “alternate” source.

Service Restoration

Alright Friends of Yoshi —
The data drives on Goomba are back online, and I am bringing all services back online slowly over the next 30 minutes.

Things will be slow for the next day or so, as the data drives do their thing.

<nerd>
[******@goomba ~]$ zpool status ThreeStore
pool: ThreeStore
state: ONLINE
status: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the entire pool from backup.
scan: scrub in progress since Sun Sep 6 14:06:10 2015
258G scanned out of 16.9T at 227M/s, 21h24m to go
0 repaired, 1.49% done
</nerd>
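If you want to follow along at home, that scrub is the standard ZFS one; starting it and checking on it looks like this (pool name taken from the output above):

zpool scrub ThreeStore    # start a scrub of the pool
zpool status ThreeStore   # shows progress, throughput, and estimated time remaining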

Expansion Advice

Calling all nerds… I need help… I have a big problem: I need to find a cost-efficient way to "fix" my poorly configured NAS server.

Here’s the dilemma: I have a server with 9 hard drives (6x3TB and 3x2TB), configured to about 66% storage efficiency because I acquired all of those drives over time… long story short, I need to "temporarily" store 11TB of stuff somewhere.

My machine can only hold 3 more drives without significant modification, and my budget to fix it is about $300, which limits my purchases to about 3x3TB drives…

So my questions are as follows: does anyone have any large hard drives laying around that I can borrow?

Does anyone know of any good hard drive deals?