cleverhacks · 7 years
Link
Create a static HTML page from a Trello status board
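The linked post's exact approach isn't preserved here, but the idea can be sketched: Trello exposes a board's JSON export (for a public board, append .json to the board URL), and a short script can render its lists and cards as static HTML. A minimal sketch, assuming the export's shape (board name, lists, and cards keyed by idList) — the sample board below is made up for illustration:

```python
from html import escape

def render_board(board):
    """Render a Trello board export (dict) as a static HTML page.

    Assumes the shape of Trello's JSON export: a board name, a list of
    lists, and a list of cards that reference their list via idList.
    """
    parts = ["<html><head><title>%s</title></head><body>" % escape(board["name"]),
             "<h1>%s</h1>" % escape(board["name"])]
    for lst in board["lists"]:
        parts.append("<h2>%s</h2><ul>" % escape(lst["name"]))
        for card in board["cards"]:
            # skip archived cards; match cards to their parent list
            if card["idList"] == lst["id"] and not card.get("closed"):
                parts.append("<li>%s</li>" % escape(card["name"]))
        parts.append("</ul>")
    parts.append("</body></html>")
    return "\n".join(parts)

# tiny example board in the export format
board = {
    "name": "Status",
    "lists": [{"id": "l1", "name": "Doing"}, {"id": "l2", "name": "Done"}],
    "cards": [{"idList": "l1", "name": "write report", "closed": False},
              {"idList": "l2", "name": "ship v1", "closed": False}],
}
html = render_board(board)
```

Write the result to a file and serve it from anywhere; no Trello credentials needed for public boards.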
cleverhacks · 8 years
Text
Results of some googling over concerns about AWS Elastic Load Balancers' ability to handle 30K+ concurrent requests/sec. Research indicates this is not a problem (depending on how quickly that traffic grows):
http://www.rightscale.com/blog/cloud-management-best-practices/benchmarking-load-balancers-cloud # the canonical benchmark of LBs on AWS
https://harish11g.blogspot.com/2012/07/aws-elastic-load-balancing-elb-amazon.html # see point 4, but this was also 4 years ago
https://www.jayway.com/2015/04/13/600k-concurrent-websocket-connections-on-aws-using-node-js/ # EC2 without ELB, using node.js and kernel tuning
http://blog.flux7.com/blogs/aws/must-know-facts-about-aws-elb
http://shlomoswidler.com/2009/07/elastic-in-elastic-load-balancing-elb.html # testing elasticity in ELB (mostly overlaps with AWS whitepaper below)
https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html # service limits for AWS in general; useful reference
https://aws.amazon.com/articles/1636185810492479 # AWS whitepaper on best practices for evaluating ELB
https://aws.amazon.com/documentation/elastic-load-balancing/ # formal ELB documentation
https://lab.getbase.com/how-we-discovered-limitations-on-the-aws-tcp-stack/
https://coderwall.com/p/__z9ia/scale-php-on-ec2-to-30-000-concurrent-users-server
https://blog.unitedheroes.net/archives/p/4633/how-many-sockets-does-aws-support-anyway/ # AWS EC2 websockets support per instance
tl;dr - ELB is effectively unlimited, but you need to pre-warm the ELB if you are expecting an abrupt spike in traffic (see https://aws.amazon.com/articles/1636185810492479#pre-warming)
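Before leaning on published numbers, it's worth measuring your own workload's request rate. A minimal sketch of a concurrency benchmark in Python — pointed at a throwaway local server here as a stand-in, since a real run would target your ELB's DNS name (and, per the pre-warming note above, should ramp traffic up gradually rather than all at once):

```python
import concurrent.futures
import http.server
import threading
import time
import urllib.request

# stand-in target: a local HTTP server (replace url with your ELB's DNS name)
class Quiet(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # keep the benchmark output clean

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Quiet)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

def hit(target):
    with urllib.request.urlopen(target) as r:
        return r.status

n = 100
start = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    statuses = list(pool.map(hit, [url] * n))
elapsed = time.time() - start
rate = n / elapsed          # sustained requests/sec at this concurrency
ok = statuses.count(200)    # how many requests actually succeeded
server.shutdown()
```

This is a crude ceiling check, not a load test — for real numbers use a proper tool (and for spiky traffic, coordinate pre-warming with AWS support as the whitepaper describes).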
cleverhacks · 8 years
Link
recovered a volume that had gone missing after a power failure, thanks to this thread. <3
cleverhacks · 8 years
Link
in which the case is made for containers having the bare minimum as a default, then adding what's needed as it's needed, rather than starting with a full OS distribution.
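That approach can be sketched in a Dockerfile (the base image tag, package, and file names here are illustrative, not taken from the linked post):

```dockerfile
# start from a minimal base image and add only what the app needs,
# rather than FROM a full OS distribution image
FROM alpine:3.4
RUN apk add --no-cache python3
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

Each added layer is then an explicit, auditable decision, and the attack surface and image size stay small by default.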
cleverhacks · 8 years
Text
12TB of small, silent FreeNAS
ZOTAC ZBox CI321
Kingston 120GB SSD
Crucial 2x8GB PC3-12800 RAM kit
Mediasonic ProBox 4-bay USB3/eSATA enclosure
4x3TB WD Caviar Green HDD
FreeNAS
total: sub-$800
verdict: FreeNAS is awesome. I am very impressed by how easy it is to install, set up, configure, and administer, while it remains very extensible and fully manageable from both the CLI and a nice web UI. Highly recommended. (also: that ZBox is amazing.)
cleverhacks · 8 years
Link
no matter how long you've been using the UNIX CLI toolset, there are still wonderful tools you've never heard of. This is one I wish I'd known about 20 years ago.
cleverhacks · 8 years
Text
resize a bunch of Hadoop YARN disks on CentOS VMs
We do some testing with Hadoop (front-ended by Ambari) on CentOS 6 in vSphere 5.5. We have allocated separate SCSI disk VMDKs on each Hadoop node for e.g. /hadoop/yarn, /hadoop/data, etc. We realized after allocating the disks that our use case required 2.5x more space in the YARN drives. Here's how I fixed it:
edit configs for each of the VMs in vCenter and increase the size of the SCSI disk you allocated for YARN. (You can also do this with the API, I think, but the command sequence there is left as an exercise for the reader.)
on each Linux VM: unmount, rescan the SCSI device in question (our nodes have 3 SCSI drives each; substitute 2:0:2:0 for whatever device ID corresponds on yours), extend the partition, fsck, resize and remount. (example loops through 10 nodes)
# for i in {01..10} ; do
    ssh -t hadoop-${i} "hostname ; \
      umount -f /hadoop/yarn ; \
      echo 1 >/sys/class/scsi_disk/2\:0\:2\:0/device/rescan && \
      (echo d; echo n; echo p; echo 1; echo; echo; echo w) | fdisk /dev/sdc && \
      e2fsck -f /dev/sdc1 && \
      resize2fs /dev/sdc1 && \
      mount /hadoop/yarn || echo '*** ERROR ***'" ; \
  done
cleverhacks · 8 years
Link
Ubuntu as a Time Machine target and general AFP server (particularly useful in my case for making a large video share available to an iTunes instance, for streaming to Apple TV).
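This kind of setup is typically done with netatalk; as a reference sketch, an afp.conf in netatalk 3.x syntax might look like the following (share names and paths are placeholders, and the linked guide may instead target the older netatalk 2.x AppleVolumes.default format):

```
[Global]
  mimic model = TimeCapsule6,106

[Time Machine]
  path = /srv/timemachine
  time machine = yes

[Media]
  path = /srv/media
```

The "time machine = yes" option advertises the share to macOS as a Time Machine target; the plain share works for general AFP access such as the video library.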
cleverhacks · 9 years
Link
really, really good advice on why distributed systems are hard, and why their design, operation and debugging is very different from that of even very large monolithic systems.