Swap and Hadoop

TL;DR: Turn off swap completely. A properly designed and tuned Hadoop system will not need it.

So there you are, minding your Hadoop cluster, when alerts start to come in: The hosts are swapping! Oh, the horror! The end is nigh! Why do we even have this horrible swap space?

But all is not lost. Read on for a short history lesson and a mind-boggling revelation.
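If you do follow the TL;DR and turn swap off, here is a minimal sketch of what that looks like on a typical RHEL/CentOS worker node (the device names and /etc/fstab entries are assumptions; adjust for your environment):

# Disable all active swap devices right now
sudo swapoff -a

# Comment out swap entries in /etc/fstab so swap stays off after a reboot
sudo sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab

# If you must keep swap configured, at least tell the kernel to avoid it
# (a very low vm.swappiness is the commonly recommended Hadoop setting)
echo 'vm.swappiness = 1' | sudo tee /etc/sysctl.d/99-swappiness.conf
sudo sysctl -p /etc/sysctl.d/99-swappiness.conf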

Things to Come From the Cloudera/Hortonworks Merger

Now that the two Hadoop distribution giants have merged, it is time to call out what will happen to their overlapping software offerings. The following are my predictions:

Ambari is out – replaced by Cloudera Manager.
This is a no-brainer for anyone who has used the two tools. People can rant and rave about open source and freedom all they want, but Cloudera Manager is light-years ahead of Ambari in terms of functionality and features. I mean, Ambari can only deploy a single cluster; CM can deploy multiple clusters. And the two features I personally use the most in my job as a consultant are nowhere to be found in Ambari: the Host/Role layout and the non-default Configuration view.

Tez is out – replaced by Spark.
Cloudera has already declared that Spark has replaced MapReduce. There is little reason for Tez to remain as a Hive execution engine when Spark does the same things and can also be used for general computation outside of Hive.

Hive LLAP is out – replaced by Impala.
Similar to Tez, there is no reason to keep interactive query performance tools for Hive around when Impala was designed to do just that. Remember: Hive is for batch and Impala is for exploration.

What do you think? Leave your thoughts in the comments.

Hadoop Cluster Sizes

A few years ago, I presented Hadoop Operations: Starting Out Small / So Your Cluster Isn’t Yahoo-sized (yet) at a conference. It included a definition of Hadoop cluster sizes. I am posting those words here to ease future references to that definition.

Question: What is a tiny/small/medium/large [Hadoop] cluster?

Answer:

  • Tiny: 1-9 nodes
  • Small: 10-99 nodes
  • Medium: 100-999 nodes
  • Large: 1000+ nodes
  • Yahoo-sized: 4000 nodes
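As a throwaway illustration, a shell snippet that buckets a node count into the categories above (the thresholds are just the numbers from the list):

#!/bin/bash
# Print the cluster-size bucket for a given node count
nodes=$1
if   [ "$nodes" -ge 1000 ]; then echo "Large"
elif [ "$nodes" -ge 100  ]; then echo "Medium"
elif [ "$nodes" -ge 10   ]; then echo "Small"
else                             echo "Tiny"
fi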

Failed Disk Replacement with Navigator Encrypt

Hardware fails. Especially hard disks. Your Hadoop cluster will be operating with less capacity until that failed disk is replaced. Using full disk encryption adds to the replacement trouble. Here is how to do it without bringing down the entire machine (assuming, of course, that your disk is hot-swappable).

Assumptions:

  • Cloudera Hadoop and/or Cloudera Kafka environment.
  • Cloudera Manager is in use.
  • Cloudera Navigator Encrypt is in use.
  • Physical hardware that will allow for a data disk to be hot swapped without powering down the entire machine. Otherwise you can pretty much skip steps 2 and 4.
  • We are replacing a data disk and not an OS disk.
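As a rough sketch of just the hot-swap portion (not the full procedure in the post), assume the failed disk is /dev/sdX mounted at /data/3 and that the replacement is re-encrypted with navencrypt-prepare; the device name, mount point, SCSI host number, and exact Navigator Encrypt invocation are all assumptions, so check the Cloudera documentation for your version:

# Run as root. Stop or decommission the roles using this disk (DataNode,
# Kafka broker, etc.) in Cloudera Manager before touching the device.

# Unmount the failed disk and tell the kernel to drop the device
umount /data/3
echo 1 > /sys/block/sdX/device/delete

# Physically swap the disk, then rescan the SCSI bus to detect the new one
echo "- - -" > /sys/class/scsi_host/host0/scan

# Partition and format the new disk as usual, then have Navigator Encrypt
# prepare (encrypt) it before returning it to Hadoop/Kafka
navencrypt-prepare /dev/sdX1 /data/3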


Impala Load Balancing with Amazon Elastic Load Balancer

In a previous post, we explained how to configure a proxy server to provide load balancing for the Impala daemon. The proxy software used was HAProxy, a free, open-source load balancer. This post will demonstrate how to use Amazon’s Elastic Load Balancer (ELB) to perform Impala load balancing when running in Amazon’s Elastic Compute Cloud (EC2).
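For a rough idea of the classic-ELB version (the load balancer name, subnet, and instance IDs are placeholders; 21050 is the Impala HiveServer2-compatible JDBC/ODBC port):

# Create a classic ELB with a TCP listener on the Impala JDBC/ODBC port
aws elb create-load-balancer \
  --load-balancer-name impala-lb \
  --listeners "Protocol=TCP,LoadBalancerPort=21050,InstanceProtocol=TCP,InstancePort=21050" \
  --subnets subnet-11111111

# Register the instances running the Impala daemon behind it
aws elb register-instances-with-load-balancer \
  --load-balancer-name impala-lb \
  --instances i-0123456789abcdef0 i-0fedcba9876543210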


Hue Load Balancer TLS Errors

If you are configuring the Hue load balancer with Apache httpd 2.4 and TLS certificates, there is a chance that you may end up with errors. The httpd proxy checks the certificates of the target systems, and if they do not pass some basic consistency checks, the proxied connection fails. This can happen if you are using self-signed certificates or a private certificate authority: the subject of the target certificate may be incorrect (i.e., the CommonName or CN in the cert may be wrong), or the subjectAlternativeName (SAN) may not match the subject.

Error messages in the Hue httpd logs in /var/log/hue-httpd/error_log may include:

  • AH01084: pass request body failed to
  • AH00898: Error during SSL Handshake with remote server returned by

Disabling target system certificate checks is a temporary solution. Add the following lines to the Hue load balancer httpd.conf.

SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off

If you are using Cloudera Manager to configure Hue high availability, add the above lines to the Hue Load Balancer Advanced Configuration Snippet (Safety Valve) for httpd.conf.

[Screenshot: the Hue Load Balancer Advanced Configuration Snippet (Safety Valve) for httpd.conf dialog box in Cloudera Manager]

Ideally, you would also fix the TLS certificates so that they pass the httpd certificate checks, but this workaround will buy you the time to get your certificate requests regenerated and signed.
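To see what the proxy is objecting to, you can inspect the certificate that a Hue backend presents (the host and port are placeholders; use your Hue server and its TLS port):

# Show the certificate subject of a Hue backend
echo | openssl s_client -connect hue-host.example.com:8888 2>/dev/null \
  | openssl x509 -noout -subject

# Show its subjectAlternativeName entries
echo | openssl s_client -connect hue-host.example.com:8888 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'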

High availability and load balancing of Hue have been available since Hue version 3.9. The above error has been seen in CDH 5.10.1 on RHEL 7.3 with httpd 2.4.

Update:

June 27, 2017
It looks like Cloudera is seeing this issue in CDH 5.11.0.

Encrypting Amazon EC2 boot volumes via Packer

In order to layer on some easy data-at-rest security, I want to encrypt the boot volumes of my Amazon EC2 instances.  I also want to use the centos.org CentOS images but those are not encrypted.  How can I end up with an encrypted copy of those AMIs in the fewest steps?

In the past, I have used shell scripts and the AWS CLI to perform the boot volume encryption dance. The steps are basically:

  1. Deploy an instance running the source AMI.
  2. Create an image from that instance.
  3. Copy the image and encrypt the copy.
  4. Delete the unencrypted image.
  5. Terminate the instance.
  6. Add tags to the new AMI.
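The encryption in step 3 boils down to a single AWS CLI call (the AMI ID, region, and name below are placeholders):

# Copy the unencrypted image and encrypt the copy with the default EBS key
aws ec2 copy-image \
  --source-image-id ami-0123456789abcdef0 \
  --source-region us-east-1 \
  --region us-east-1 \
  --name "centos7-encrypted" \
  --encrypted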

The script needs a lot of VPC/subnet/security group preparation (which I guess could have been added to the script), and if there were errors during execution, cleanup was very manual (more possible script work). The script is very flexible and meets my needs, but it is a codebase that requires expertise to maintain. And I have better things to do with my time.

A simpler solution is Packer.
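Here is a minimal sketch of what that looks like with Packer’s amazon-ebs builder, where encrypt_boot does the work of steps 2 through 4; the source AMI ID, region, instance type, and names are placeholders:

# Write a minimal Packer template and build an encrypted copy of the source AMI
cat > encrypted-centos.json <<'EOF'
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t2.small",
    "ssh_username": "centos",
    "ami_name": "centos7-encrypted-{{timestamp}}",
    "encrypt_boot": true
  }]
}
EOF

packer validate encrypted-centos.json
packer build encrypted-centos.json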


How To Rebuild Cloudera’s Spark

As a follow-up to the post How to upgrade Spark on CDH5.5, I will show you how to get a build environment up and running with a CentOS 7 virtual machine running via Vagrant and VirtualBox. This will allow for the quick build or rebuild of Cloudera’s version of Apache Spark from https://github.com/cloudera/spark.

Why?

You may want to rebuild Cloudera’s Spark if you want to add functionality that is not compiled in by default. The Thriftserver and SparkR are two things that Cloudera does not ship (or support), so if you are looking for either of them, these instructions will help.

Using a disposable virtual machine will allow for a repeatable build and will keep your workstation computing environment clean of all the bits that may get installed.
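A rough sketch of the workflow (the box name, packages, and Maven profiles are examples; the post covers the real steps and versions):

# Bring up a disposable CentOS 7 VM with Vagrant and VirtualBox
vagrant init centos/7
vagrant up
vagrant ssh

# Inside the VM: install build prerequisites, clone Cloudera's Spark, and
# build with the extras that Cloudera leaves out (profiles are an example)
sudo yum install -y java-1.8.0-openjdk-devel git
git clone https://github.com/cloudera/spark.git
cd spark
./build/mvn -DskipTests -Phive -Phive-thriftserver package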
