Slides and Video from My Talk at PDC 2019

I got a chance to speak to Hadoop folks at this year's Pune Data Conference, held in Pune, India.

My talk was titled "Admins: Smoke Test Your Hadoop Cluster!" Here is the abstract:

Software smoke testing is a preliminary level of testing. It makes certain that all of the primary components of a system are functioning correctly. For example, when installing a new secured Hadoop cluster, running a series of quick tests to make sure that things like HDFS and MapReduce are operational can save a lot of headache before enabling Kerberos. Smoke tests can also save you time and embarrassment by making sure that things work before you turn the cluster over to your customer.


In this talk, Michael Arnold will explain the utility of testing Hadoop components after cluster builds and software upgrades. Michael will present code examples that you can use to confirm functionality of Spark, Kudu, HBase, Kafka, MapReduce, etc. on your cluster.

This is the link to the slide presentation and video.
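
To give a flavor of the kind of checks the talk walks through, here is a minimal smoke-test sketch for HDFS and MapReduce. The HDFS paths are arbitrary, and the examples jar path is the usual CDH parcel location but should be treated as an assumption; adjust both for your cluster.

    # HDFS smoke test: write a small file, read it back, then clean up.
    hdfs dfs -mkdir -p /tmp/smoketest
    hdfs dfs -put /etc/hosts /tmp/smoketest/hosts
    hdfs dfs -cat /tmp/smoketest/hosts
    hdfs dfs -rm -r -skipTrash /tmp/smoketest

    # MapReduce smoke test: run the bundled pi estimator on YARN.
    # The jar path below is the typical CDH parcel location; yours may differ.
    yarn jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 5 100

If both complete without errors, the basic HDFS and YARN plumbing is working before you enable Kerberos or hand the cluster over to your customer.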

Things to Come From the Cloudera/Hortonworks Merger

Now that the two Hadoop distribution giants have merged, it is time to call out what will happen to their overlapping software offerings. The following are my predictions:

Ambari is out – replaced by Cloudera Manager.
This is a no-brainer for anyone who has used the two tools. People can rant and rave about open source and freedom all they want, but Cloudera Manager is light-years ahead of Ambari in terms of functionality and features. I mean, Ambari can only deploy a single cluster; CM can deploy multiple clusters. And the two features I personally use the most in my job as a consultant are nowhere to be found in Ambari: Host/Role layout and a non-default Configuration view.

Tez is out – replaced by Spark.
Cloudera has already declared that Spark has replaced MapReduce. There is little reason for Tez to remain as a Hive execution engine when Spark does the same things and can also be used for general computation outside of Hive.

Hive LLAP is out – replaced by Impala.
Similar to Tez, there is no reason to keep interactive query performance tools for Hive around when Impala was designed to do just that. Remember: Hive is for batch and Impala is for exploration.

What do you think? Leave your thoughts in the comments.

Hadoop Cluster Sizes

A few years ago, I presented Hadoop Operations: Starting Out Small / So Your Cluster Isn’t Yahoo-sized (yet) at a conference. It included a definition of Hadoop cluster sizes. I am posting those words here to ease future references to that definition.

Question: What is a tiny/small/medium/large [Hadoop] cluster?

Answer:

  • Tiny: 1-9 nodes
  • Small: 10-99 nodes
  • Medium: 100-999 nodes
  • Large: 1000+ nodes
  • Yahoo-sized: 4000 nodes

Self-Signed CA … Whaaat?

<Begin documentation rant…>

Can we all please just stop this “Self-signed CA” nonsense?

Every single root certificate authority on the planet (and all known dimensions) is, by definition… *self-signed*.
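
You can see this for yourself by inspecting any root CA certificate: its issuer and subject are the same entity. A quick check (the file name below is just a placeholder for whatever root certificate you have on hand):

    # For a root CA certificate, the issuer and subject lines will match.
    openssl x509 -in some-root-ca.pem -noout -issuer -subject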

What you might want to say instead is “Public CA” vs “Private CA”.

<End documentation rant.>

Thanks Apple. (Not Really)

Thanks Apple, for making recent products that don’t do what I expect them to do.

For making me buy a bunch of dongles to get all my existing peripherals to work with your laptop.

For one of said dongles (the Apple USB-C Digital AV Multiport Adapter) being unable to pass through enough power to charge my laptop or to be used for data.

Use the USB-C port of this adapter for charging your Mac, not for data transfer or video.

And,

This port delivers a maximum of 60W power, suitable for MacBook models and 13-inch MacBook Pro models. For the best charging performance on 15-inch MacBook Pro models, connect the power supply directly to your Mac, not through the adapter.

For forcing me to buy a dongle to use my headphones and charge my Apple phone at the same time.

For not providing the same port to plug said headphones into both the phone and the laptop.  I mean, make up your minds.  Is it Lightning or not?  (And don’t tell me Bluetooth is the future.  I guarantee you it is not for me.)  Thank you to Belkin for providing a solution.

For making the touchpad on your laptop so big that I lose my finger resting points and invariably palm click or double touch to the point of frustration.

For thinking it’s a good idea to reuse a connector plug format to push different protocols.  Is it Thunderbolt? Or is it Mini DisplayPort?  Is it Thunderbolt 3?  Or is it USB-C?  Is that cable certified for the faster speeds?  Does it have the fancy logo?

I held on to my iPhone 5S for as long as I could, but in the end it just became too slow for my needs.  I held on to my 2015 MacBook Pro for as long as it let me, but it died a sad death last week due to battery expansion and the loss of its boot disk.

I want to remain an Apple hardware fan (partially because PC/Linux leaves so much to be desired) but it is getting harder every year to remain happy.

Failed Disk Replacement with Navigator Encrypt

Hardware fails.  Especially hard disks.  Your Hadoop cluster will be operating with less capacity until that failed disk is replaced.  Using full-disk encryption adds to the replacement trouble.  Here is how to do it without bringing down the entire machine (assuming, of course, that your disk is hot-swappable).

Assumptions:

  • Cloudera Hadoop and/or Cloudera Kafka environment.
  • Cloudera Manager is in use.
  • Cloudera Navigator Encrypt is in use.
  • Physical hardware that will allow a data disk to be hot-swapped without powering down the entire machine. Otherwise you can pretty much skip steps 2 and 4.
  • We are replacing a data disk and not an OS disk.

Steps:

The following are steps to replace a failed disk that is encrypted by Cloudera Navigator Encrypt.   If any of the settings are missing in your Cloudera Manager (CM), you might consider upgrading CM to a newer version.

  1. Determine the failed disk.  The example used here is a disk that is mounted at /data/0.
  2. Configure data directories to remove the disk you are swapping out:
    1. HDFS
      1. Go to the HDFS service.
      2. Click the Instances tab.
      3. Click the affected DataNode.
      4. Click the Configuration tab.
      5. Select Category > Main.
      6. Change the value of the DataNode Data Directory property to remove the directories that are mount points for the disk you are removing.

        Warning: Change the value of this property only for the specific DataNode instance where you are planning to hot swap the disk. Do not edit the role group value for this property. Doing so will cause data loss.

      7. Click Save Changes to commit the changes.
      8. Refresh the affected DataNode. Select Actions > Refresh DataNode configuration.
    2. YARN
      1. Go to the YARN service.
      2. Click the Instances tab.
      3. Click the affected NodeManager.
      4. Click the Configuration tab.
      5. Select Category > Main.
      6. Change the value of the NodeManager Local Directories property to remove the directories that are mount points for the disk you are removing.

        Warning: Change the value of this property only for the specific NodeManager instance where you are planning to hot swap the disk. Do not edit the role group value for this property. Doing so will cause data loss.

      7. Change the value of the NodeManager Container Log Directories property to remove the directories that are mount points for the disk you are removing.

        Warning: Change the value of this property only for the specific NodeManager instance where you are planning to hot swap the disk. Do not edit the role group value for this property. Doing so will cause data loss.

      8. Click Save Changes to commit the changes.
      9. Refresh the affected NodeManager. Select Actions > Refresh NodeManager.
    3. Impala
      1. Go to the Impala service.
      2. Click the Instances tab.
      3. Click the affected Impala Daemon.
      4. Click the Configuration tab.
      5. Select Category > Main.
      6. Change the value of the Impala Daemon Scratch Directories property to remove the directories that are mount points for the disk you are removing.

        Warning: Change the value of this property only for the specific Impala Daemon instance where you are planning to hot swap the disk. Do not edit the role group value for this property. Doing so will cause data loss.

      7. Click Save Changes to commit the changes.
      8. Refresh the affected Impala Daemon. Select Actions > Refresh the Impala Daemon.
    4. Kafka
      1. Go to the Kafka service.
      2. Click the Instances tab.
      3. Click the affected Kafka Broker.
      4. Click the Configuration tab.
      5. Select Category > Main.
      6. Change the value of the Log Directories property to remove the directories that are mount points for the disk you are removing.

        Warning: Change the value of this property only for the specific Kafka Broker instance where you are planning to hot swap the disk. Do not edit the role group value for this property. Doing so will cause data loss.

      7. Click Save Changes to commit the changes.
      8. Refresh the affected Kafka Broker. Select Actions > Refresh Kafka Broker.
  3. Remove the old disk and add the replacement disk. (A consolidated shell sketch of this step appears after the full list of steps.)
    1. List out the disks in the system, taking note of the name of the failed disk. (lsblk; lsscsi)
    2. Determine the failed disk.  The example used here is /data/0, which is mounted at /navencrypt/0.  (readlink -f /data/0)
    3. Determine the Navigator Encrypt DISKID of the failed source device. (grep /navencrypt/0 /etc/navencrypt/ztab)
    4. Clean up Navigator Encrypt entries. (navencrypt-prepare --undo ${DISKID} || navencrypt-prepare --undo-force ${DISKID})
      1. You may also need to run: (cryptsetup luksClose /dev/mapper/0; dd if=/dev/zero of=${DISK}1 ibs=1M count=1)
    5. Remove failed disk.
    6. Add replacement disk.
    7. Perform any HBA configuration (e.g. Dell PERC/HP SmartArray RAID0 machinations).
    8. Determine the name of the new disk.  The example used here is /dev/sdo. (lsblk; lsscsi)
    9. Partition the replacement disk. (parted -s ${DISK} mklabel gpt mkpart primary xfs 1 100%)
    10. Have Navigator Encrypt configure the disk for encryption and write out a new filesystem. (navencrypt-prepare -t xfs -o noatime --use-uuid ${DISK}1 /navencrypt/0)
    11. Fix the symlink target directory installed by navencrypt-move. (mkdir -p $(readlink -f /data/0))
  4. Configure data directories to restore the disk you have swapped in:
    1. HDFS
      1. Change the value of the DataNode Data Directory property to add back the directory that is the mount point for the disk you added.
      2. Click Save Changes to commit the changes.
      3. Refresh the affected DataNode. Select Actions > Refresh DataNode configuration.
      4. Run the HDFS fsck utility to validate the health of HDFS.
    2. YARN
      1. Change the value of the NodeManager Local Directories and NodeManager Container Log Directories properties to add back the directory that is the mount point for the disk you added.
      2. Click Save Changes to commit the changes.
      3. Refresh the affected NodeManager. Select Actions > Refresh NodeManager.
    3. Impala
      1. Change the value of the Impala Daemon Scratch Directories property to add back the directory that is the mount point for the disk you added.
      2. Click Save Changes to commit the changes.
      3. Refresh the affected Impala Daemon. Select Actions > Refresh the Impala Daemon.
    4. Kafka
      1. Change the value of the Log Directories property to add back the directory that is the mount point for the disk you added.
      2. Click Save Changes to commit the changes.
      3. Refresh the affected Kafka Broker. Select Actions > Refresh Kafka Broker.
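
For reference, here is step 3 as a consolidated shell sketch, using the example names from above (/data/0 mounted at /navencrypt/0, replacement device /dev/sdo). The DISKID placeholder and the device name are specific to your system; substitute your own values.

    # Identify the failed device and its Navigator Encrypt DISKID.
    lsblk; lsscsi
    readlink -f /data/0                      # symlink target under /navencrypt/0
    grep /navencrypt/0 /etc/navencrypt/ztab  # note the DISKID for this mount point
    DISKID='<DISKID from ztab>'              # placeholder; use the value found above

    # Remove the failed device from Navigator Encrypt.
    navencrypt-prepare --undo "${DISKID}" || navencrypt-prepare --undo-force "${DISKID}"

    # Physically swap the disk, perform any HBA configuration, then find the new device name.
    lsblk; lsscsi
    DISK=/dev/sdo                            # example device name; adjust to your system

    # Partition the replacement disk, then have Navigator Encrypt prepare and mount it.
    parted -s "${DISK}" mklabel gpt mkpart primary xfs 1 100%
    navencrypt-prepare -t xfs -o noatime --use-uuid "${DISK}1" /navencrypt/0

    # Recreate the directory that the /data/0 symlink points to.
    mkdir -p "$(readlink -f /data/0)"

    # After restoring the data directories in Cloudera Manager (step 4), validate HDFS health.
    hdfs fsck /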

Reference Links:

https://www.cloudera.com/documentation/enterprise/latest/topics/admin_dn_swap.html

https://www.cloudera.com/documentation/enterprise/latest/topics/navigator_encrypt_prepare.html#concept_device_uuids

Keybase and GNUPG and Yubikey (oh my!)

I’ve been meaning to generate PGP keys for my work identity. There is this newfangled social key site named Keybase that is integrated into some tools I use (Terraform), and I figured I should make it all work with my new Yubikey 4 hardware keystore. So I scoured the Intarwebs for details and could not find the needed incantation.