Self-Signed CA … Whaaat?

<Begin documentation rant…>

Can we all please just stop this “Self-signed CA” nonsense?

Every single root certificate authority on the planet (and all known dimensions) is, by definition… *self-signed*.

What you might want to say instead is “Public CA” vs “Private CA”.
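The point is easy to demonstrate with OpenSSL: every root CA certificate, public or private, has identical issuer and subject fields. A quick sketch (the CN here is made up):

```shell
# Create a private root CA key and certificate in one step.
# The -x509 flag makes the certificate self-signed, as every root CA is.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 365 \
  -subj "/CN=Example Private Root CA"

# For a root CA, the issuer and subject are always identical.
openssl x509 -in ca.crt -noout -issuer -subject
```

Whether that root belongs in the "Public CA" or "Private CA" bucket depends on who distributes and trusts it, not on how it was signed.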

<End documentation rant.>


Thanks Apple. (Not Really)

Thanks Apple, for making recent products that don’t do what I expect them to do.

For the need to buy a bunch of dongles to get all my existing peripherals to work with your laptop.

For one of said dongles (the Apple USB-C Digital AV Multiport Adapter) being unable to pass through enough power to charge my laptop, or to be used for data at all.

Use the USB-C port of this adapter for charging your Mac, not for data transfer or video.

And,

This port delivers a maximum of 60W power, suitable for MacBook models and 13-inch MacBook Pro models. For the best charging performance on 15-inch MacBook Pro models, connect the power supply directly to your Mac, not through the adapter.

For forcing me to buy a dongle to use my headphones and charge my Apple phone at the same time.

For not providing the same port to plug said headphones into both the phone and the laptop.  I mean, make up your minds.  Is it Lightning or not?  (And don’t tell me Bluetooth is the future.  I guarantee you it is not for me.)  Thank you to Belkin for providing a solution.

For making the touchpad on your laptop so big that I lose my finger resting points and invariably palm click or double touch to the point of frustration.

For thinking it’s a good idea to reuse a connector plug format to push different protocols.  Is it Thunderbolt? Or is it Mini DisplayPort?  Is it Thunderbolt 3?  Or is it USB-C?  Is that cable certified for the faster speeds?  Does it have the fancy logo?

I held on to my iPhone 5S for as long as I could, but in the end it just became too slow for my needs.  I held on to my 2015 MacBook Pro for as long as it let me, but it died a sad death last week due to battery expansion and loss of its boot disk.

I want to remain an Apple hardware fan (partially because PC/Linux leaves so much to be desired) but it is getting harder every year to remain happy.

Failed Disk Replacement with Navigator Encrypt

Hardware fails.  Especially hard disks.  Your Hadoop cluster will be operating with less capacity until that failed disk is replaced.  Using full disk encryption adds to the replacement trouble.  Here is how to do it without bringing down the entire machine (assuming of course that your disk is hot swappable).

Assumptions:

  • Cloudera Hadoop and/or Cloudera Kafka environment.
  • Cloudera Manager is in use.
  • Cloudera Navigator Encrypt is in use.
  • Physical hardware that will allow for a data disk to be hot swapped without powering down the entire machine. Otherwise you can pretty much skip steps 2 and 4.
  • We are replacing a data disk and not an OS disk.

Steps:

The following are steps to replace a failed disk that is encrypted by Cloudera Navigator Encrypt.   If any of the settings are missing in your Cloudera Manager (CM), you might consider upgrading CM to a newer version.

  1. Determine the failed disk.  The example used here is a disk that is mounted at /data/0.
  2. Configure data directories to remove the disk you are swapping out:
    1. HDFS
      1. Go to the HDFS service.
      2. Click the Instances tab.
      3. Click the affected DataNode.
      4. Click the Configuration tab.
      5. Select Category > Main.
      6. Change the value of the DataNode Data Directory property to remove the directories that are mount points for the disk you are removing.

        Warning: Change the value of this property only for the specific DataNode instance where you are planning to hot swap the disk. Do not edit the role group value for this property. Doing so will cause data loss.

      7. Click Save Changes to commit the changes.
      8. Refresh the affected DataNode. Select Actions > Refresh DataNode configuration.
    2. YARN
      1. Go to the YARN service.
      2. Click the Instances tab.
      3. Click the affected NodeManager.
      4. Click the Configuration tab.
      5. Select Category > Main.
      6. Change the value of the NodeManager Local Directories property to remove the directories that are mount points for the disk you are removing.

        Warning: Change the value of this property only for the specific NodeManager instance where you are planning to hot swap the disk. Do not edit the role group value for this property. Doing so will cause data loss.

      7. Change the value of the NodeManager Container Log Directories property to remove the directories that are mount points for the disk you are removing.

        Warning: Change the value of this property only for the specific NodeManager instance where you are planning to hot swap the disk. Do not edit the role group value for this property. Doing so will cause data loss.

      8. Click Save Changes to commit the changes.
      9. Refresh the affected NodeManager. Select Actions > Refresh NodeManager.
    3. Impala
      1. Go to the Impala service.
      2. Click the Instances tab.
      3. Click the affected Impala Daemon.
      4. Click the Configuration tab.
      5. Select Category > Main.
      6. Change the value of the Impala Daemon Scratch Directories property to remove the directories that are mount points for the disk you are removing.

        Warning: Change the value of this property only for the specific Impala Daemon instance where you are planning to hot swap the disk. Do not edit the role group value for this property. Doing so will cause data loss.

      7. Click Save Changes to commit the changes.
      8. Refresh the affected Impala Daemon. Select Actions > Refresh the Impala Daemon.
    4. Kafka
      1. Go to the Kafka service.
      2. Click the Instances tab.
      3. Click the affected Kafka Broker.
      4. Click the Configuration tab.
      5. Select Category > Main.
      6. Change the value of the Log Directories property to remove the directories that are mount points for the disk you are removing.

        Warning: Change the value of this property only for the specific Kafka Broker instance where you are planning to hot swap the disk. Do not edit the role group value for this property. Doing so will cause data loss.

      7. Click Save Changes to commit the changes.
      8. Refresh the affected Kafka Broker. Select Actions > Refresh Kafka Broker.
  3. Remove the old disk and add the replacement disk.
    1. List out the disks in the system, taking note of the name of the failed disk. (lsblk; lsscsi)
    2. Determine the failed disk.  Example used here is /data/0 which is mounted at /navencrypt/0.  (readlink -f /data/0)
    3. Determine the Navigator Encrypt DISKID of the failed source device. (grep /navencrypt/0 /etc/navencrypt/ztab)
    4. Clean up Navigator Encrypt entries. (navencrypt-prepare --undo ${DISKID} || navencrypt-prepare --undo-force ${DISKID})
      1. If the device-mapper entry is stuck, you may also need: (cryptsetup luksClose /dev/mapper/0; dd if=/dev/zero of=${DISK}1 ibs=1M count=1)
    5. Remove failed disk.
    6. Add replacement disk.
    7. Perform any HBA configuration (e.g. Dell PERC/HP SmartArray RAID0 machinations).
    8. Determine the name of the new disk.  Example used here is /dev/sdo. (lsblk; lsscsi)
    9. Partition the replacement disk. (parted -s ${DISK} mklabel gpt mkpart primary xfs 1 100%)
    10. Have Navigator Encrypt configure the disk for encryption and write out a new filesystem. (navencrypt-prepare -t xfs -o noatime --use-uuid ${DISK}1 /navencrypt/0)
    11. Fix the symlink target directory installed by navencrypt-move. (mkdir -p $(readlink -f /data/0))
  4. Configure data directories to restore the disk you have swapped in:
    1. HDFS
      1. Change the value of the DataNode Data Directory property to add back the directory that is the mount point for the disk you added.
      2. Click Save Changes to commit the changes.
      3. Refresh the affected DataNode. Select Actions > Refresh DataNode configuration.
      4. Run the HDFS fsck utility to validate the health of HDFS.
    2. YARN
      1. Change the value of the NodeManager Local Directories and NodeManager Container Log Directories properties to add back the directory that is the mount point for the disk you added.
      2. Click Save Changes to commit the changes.
      3. Refresh the affected NodeManager. Select Actions > Refresh NodeManager.
    3. Impala
      1. Change the value of the Impala Daemon Scratch Directories property to add back the directory that is the mount point for the disk you added.
      2. Click Save Changes to commit the changes.
      3. Refresh the affected Impala Daemon. Select Actions > Refresh the Impala Daemon.
    4. Kafka
      1. Change the value of the Log Directories property to add back the directory that is the mount point for the disk you added.
      2. Click Save Changes to commit the changes.
      3. Refresh the affected Kafka Broker. Select Actions > Refresh Kafka Broker.
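For reference, the command-line portion of step 3 can be sketched as a single shell session. The device names, the /data/0 mount point, and the DISKID are the examples from above; adjust for your hardware, and double-check the ztab entry before undoing anything.

```shell
# 1. Identify the failed disk and its encrypted mapping.
lsblk; lsscsi
readlink -f /data/0                      # shows the /navencrypt/0 target
grep /navencrypt/0 /etc/navencrypt/ztab  # note the source device (DISKID)

# 2. Clean up the Navigator Encrypt entries for the failed device.
DISKID=...   # source device column from the ztab output above
navencrypt-prepare --undo ${DISKID} || navencrypt-prepare --undo-force ${DISKID}
# If the device-mapper entry is stuck, additionally:
cryptsetup luksClose /dev/mapper/0
dd if=/dev/zero of=${DISK}1 ibs=1M count=1   # DISK = failed device node

# 3. Physically swap the disk, do any HBA setup, then find the new name.
lsblk; lsscsi
DISK=/dev/sdo   # example replacement device

# 4. Partition, encrypt, and mount the replacement.
parted -s ${DISK} mklabel gpt mkpart primary xfs 1 100%
navencrypt-prepare -t xfs -o noatime --use-uuid ${DISK}1 /navencrypt/0

# 5. Recreate the symlink target directory installed by navencrypt-move.
mkdir -p $(readlink -f /data/0)
```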

Reference Links:

https://www.cloudera.com/documentation/enterprise/latest/topics/admin_dn_swap.html

https://www.cloudera.com/documentation/enterprise/latest/topics/navigator_encrypt_prepare.html#concept_device_uuids

Keybase and GNUPG and Yubikey (oh my!)

I’ve been meaning to generate PGP keys for my work identity and there is this newfangled social key site named Keybase that is integrated in some tools (Terraform) that I use and I figured I should make it all work with my new Yubikey 4 hardware keystore. So I scoured the Intarwebs for details and could not find the needed incantation.

puppet network module 3.10.0

Today, I have released a large update to my Red Hat network Puppet module to the Puppet Forge.  Numerous pull requests were merged including:

  • Added support for promiscuous interfaces. (Elyse Salberg)
  • Added a parameter to disable restart of network service on change. (Evgeni Golov)
  • Added support for netmask and broadcast parameters in alias range. (Nick Irvine)
  • Added support for ARPCHECK=no for alias ranges. (Nick Irvine)
  • Dropped requirement of ipaddress/netmask on static interfaces (Brian Murphey). Helpful for IPv6-only interfaces.
  • Added support for ARPCHECK=no for static interfaces. (Sander Cornelissen)
  • Made RES_OPTIONS for single-request-reopen optional (default true) (Elyse Salberg)
  • Changed macaddress for bond slaves to be optional (if not provided, try to get the value from facts). (Elyse Salberg)
  • Added explicit userctl, bootproto, onboot for bond slaves. (Elyse Salberg)
  • Added explicit userctl for static bonds. (Elyse Salberg)
  • Finally fixed the PEERDNS logic by making PEERDNS separate from DNS1, DNS2, and DOMAIN.
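As a usage sketch (the interface name and addresses are made up, and parameter names should be checked against the module README), a static interface might be declared like:

```puppet
# Hypothetical static interface using the razorsedge/network module.
network::if::static { 'eth0':
  ensure    => 'up',
  ipaddress => '192.0.2.10',
  netmask   => '255.255.255.0',
}
```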

https://forge.puppetlabs.com/razorsedge/network
https://github.com/razorsedge/puppet-network

Let me know if you have any feedback!

strict_variables and the RazorsEdge Puppet Modules

Over the past month I have been adding much needed support for running Puppet with strict_variables = true to all of the RazorsEdge Puppet modules. Thanks to coreone, I finally had a solution that did not require tearing out the legacy global variable support. As much as I think that continued inclusion of global variable support has become painful, I am still committed to keeping it around.
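For anyone wanting to run their own manifests with the same checks, strict variable handling is a stock Puppet setting, turned on in puppet.conf (or via --strict_variables on the command line):

```ini
# puppet.conf (path varies by Puppet version, e.g. /etc/puppetlabs/puppet/)
[main]
strict_variables = true
```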

I also managed to get the Rspec testing Ruby gem dependencies configured such that things can still be tested on Ruby 1.8.7, 1.9.3, and 2.x as well as Puppet 2.7, 3.x, and 4.x. Travis-CI is also testing Ruby 2.4 and Puppet 5.x for all of the modules. As of now, only two modules are not passing the Puppet 5 Rspec tests and I hope to get those sorted soon.

https://forge.puppetlabs.com/razorsedge/certmaster
https://forge.puppetlabs.com/razorsedge/cloudera
https://forge.puppetlabs.com/razorsedge/func
https://forge.puppetlabs.com/razorsedge/hp_mcp
https://forge.puppetlabs.com/razorsedge/hp_spp
https://forge.puppetlabs.com/razorsedge/lsb
https://forge.puppetlabs.com/razorsedge/network
https://forge.puppetlabs.com/razorsedge/openlldp
https://forge.puppetlabs.com/razorsedge/openvmtools
https://forge.puppetlabs.com/razorsedge/razorsedge
https://forge.puppetlabs.com/razorsedge/snmp
https://forge.puppetlabs.com/razorsedge/tor
https://forge.puppetlabs.com/razorsedge/vmwaretools

Let me know if you have any feedback!

Hue Load Balancer TLS Errors

This is a reblog from the Clairvoyant blog.

If you are configuring the Hue load balancer with Apache httpd 2.4 and TLS certificates, there is a chance that you may end up with errors. The httpd proxy will check the certificates of the target systems and if they do not pass some basic consistency checks, the proxied connection fails.
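The checks in question are controlled by the SSLProxy* family of mod_ssl directives. As an illustration of the kind of configuration involved (not necessarily what the full post recommends, and the CA file path is a placeholder):

```apache
# Verify backend (Hue) certificates when proxying over TLS.
SSLProxyEngine on
SSLProxyVerify require
SSLProxyCACertificateFile /path/to/ca.pem

# These consistency checks are what typically fail with mismatched
# or expired backend certificates; relaxing them (setting them to
# "off") is a workaround, not a recommendation.
SSLProxyCheckPeerCN on
SSLProxyCheckPeerName on
SSLProxyCheckPeerExpire on
```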

Read more of my post on the Clairvoyant blog.