
WordPress Update Day

Droplet Server Image

The first step is to back up the system so that I can restore it if something goes sideways. Running poweroff in the console brings the system down; then, using the Digital Ocean console, I power off the droplet, create a named drive snapshot, and power it back up. That gives us a failsafe.

System Updates

apt update
apt upgrade -y
apt autoremove -y

After a reboot, everything is up and running.

WordPress Site Backups

I found several helpful references on doing EasyEngine WordPress backups and settled on this (modified) script to produce tarball backups of the sites and their databases before doing an update.

#!/bin/bash
# Back up each EasyEngine site: dump the database, then tar up htdocs plus the dump.

ee="/opt/easyengine/sites"
backup="/home/backup"
sites=$(ls $ee)
year=$(date +%Y)
month=$(date +%m)
day=$(date +%d)

mkdir -p $backup/$year/$month/$day

for i in $sites
do
    echo "Starting backup for $i"
    # Export the site database into the app directory (one level above htdocs)
    ee shell $i --command="wp db export ../$i.sql"
    cd $ee/$i/app && tar --create --gzip --file $i.tar.gz htdocs $i.sql
    rm $ee/$i/app/$i.sql
    mv $ee/$i/app/$i.tar.gz $backup/$year/$month/$day
done

These backup files can be inspected with tar --list --gzip --file <file> and extracted with tar --extract --gzip --file <file>.

Update EasyEngine

I’m currently on the latest release, so no update required here.

WordPress Application Updates

With backups in place, it should be safe to run the automatic updates on the WP sites. I tried running the updates from the WordPress.com/Jetpack interface and verified them in the wp-admin pages. The Jetpack integration seemed to work, but required some page reloads before all the updates would apply. My plugins are set to auto-update, so no manual intervention was required.
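
If I ever want to script these updates instead of clicking through Jetpack, something along these lines should be roughly equivalent using wp-cli inside the site shell (a sketch only; the site name is just an example):

ee shell joshrivers.me --command="wp core update"
ee shell joshrivers.me --command="wp plugin update --all"
ee shell joshrivers.me --command="wp theme update --all"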

Checkout

A quick view of the sites didn’t show any problems. Disk usage on the droplet is at 50%, with the bulk of it in /var/lib/docker/overlay2. I’ll need to keep an eye on that, since high overlay usage is a sign that changed files are being written inside the containers rather than in volumes. It looks like cache plugins are writing their files in there, which isn’t a big deal since those shouldn’t grow unbounded, but if temp or log files are accumulating, it could become a problem.
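
For the record, the checks behind those numbers are nothing exotic, roughly:

df -h /
du -sh /var/lib/docker/overlay2
docker system df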

Cleanup

Reminder to clean up old backups in later update runs.


Upgrade Unbootable

After doing an Ubuntu update, I found my droplet unbootable. Digital Ocean provides a useful utility ISO for restarting a droplet in recovery mode, but on its own that wasn’t enough to see or fix the problem. Luckily I found a blog post with the correct set of bind mounts and chroot steps to rerun apt and get everything running again.

To get things running, restart the droplet with the Recovery ISO as the startup drive, mount the main disk, choose 6 for an interactive shell, then in the shell:

mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
mount --bind /run /mnt/run
chroot /mnt
apt update
apt upgrade

Restart the droplet and boot from the main hard disk (you have to power cycle; a soft reboot doesn’t do the trick). After rebooting, I reran the updates just to make sure:

apt update
apt upgrade
apt autoremove

I found that not all my Docker containers would come back up, and traced the failure to insufficient peak memory. This led me to move my swap to a larger swapfile to allow for more headroom.

# Create and enable a larger 3 GB swapfile
fallocate -l 3G /swapfile3g
ls -lh /swapfile
ls -lh /swapfile3g
chmod 600 /swapfile3g
mkswap /swapfile3g
swapon /swapfile3g
swapon -s
# Point /etc/fstab at the new swapfile, then retire the old one
vi /etc/fstab
swapoff /swapfile
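
For reference, the fstab line for the new swapfile should end up looking something like this (with the old /swapfile entry removed):

/swapfile3g none swap sw 0 0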

WordPress MySQL Stability

For a few days after moving my blogs to self-hosted WordPress, I was seeing fairly frequent restarts of the MariaDB container. I didn’t see any real indication of the cause, just a few SQL formatting warnings.
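
To spot the restarts I was mostly just watching docker ps and the container logs, roughly like this (the DB container on my install is EasyEngine’s global one; I believe it shows up as services_global-db_1, but check docker ps for the actual name):

docker ps --format '{{.Names}}\t{{.Status}}'
docker inspect --format '{{.RestartCount}}' services_global-db_1
docker logs --tail 100 services_global-db_1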

I added swap to the host VM, thinking that memory pressure may have been causing the issue. Since doing that, the problem seems to have gone away. EasyEngine’s documentation recommends adding swap and a few other tuning tweaks, which I can revisit if the problem recurs.


Transition to Self-Hosted WordPress

I have a number of domains that I hosted blogs on in the past and that have been lying fallow. I was pretty happy hosting them on wordpress.com, but the number of sites and the monthly cost per site were just too high to justify for simple “for fun” web publishing. I’ve made several attempts at various static-site rendering solutions and Netlify JAMstack implementations, and while I think they are truly awesome, they also require a lot of development and ongoing maintenance (tooling versions are constantly evolving, and there isn’t much guarantee that content will continue to work over the long haul). I’ve also lost content in previous engine transfers (mostly from exporting from hosted blog platforms that ran out of VC money), so I suppose I’d rather stay with WordPress(.org) if I can. Eventually someone may create a static-rendering plugin for the WordPress engine, and hosting this will all be easier with free or cheap static-page CDNs and the engine hosted privately, but so far those solutions do not seem fully baked and well supported.

Essential System Setup

I started with a 1 GB Digital Ocean droplet with Ubuntu 20.04 installed and weekly backups configured. I set up a new SSH key and added an ssh config file entry for connecting to the host:

Host wordpress
    HostName 167.172.199.82
    User root

The host needed initial package updates:

apt update
apt upgrade
apt install net-tools

I changed the SSH port as well, to reduce the amount of useless security probing and port scanning. This isn’t a huge security measure, but I understand that there are a lot of unsophisticated nuisance attacks.

vi /etc/ssh/sshd_config
# Uncomment and modify "Port" directive
service sshd restart

This required an update to my ssh config file as well.
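
The entry ends up looking something like this (2222 here is just a stand-in for whatever port I actually picked):

Host wordpress
    HostName 167.172.199.82
    Port 2222
    User root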

EasyEngine Install

The basic install for EasyEngine 4 is a simple download-and-execute bash script. Before installing, there were 23 GB free on my system drive. EasyEngine installs Docker and pulls several images.

wget -qO ee rt.cx/ee4 && sudo bash ee

After this install I had 19 GB free on my droplet drive.

Configuring a first site and importing from Tumblr

EasyEngine makes setting up a new site a single-line command. I want to set this up with LetsEncrypt SSL, so the first step is to get DNS for joshrivers.me and www.joshrivers.me pointed at the IP address of the droplet. I’m disabling Cloudflare proxying here because it currently appears to conflict with getting LetsEncrypt set up.

With this configured, setting up the basic site is a single command:

ee site create joshrivers.me --wp --ssl=le

After a few minutes, and after entering my email address for certificate renewal notifications, I had a functional site. I verified that redirect-to-HTTPS was configured with curl -v http://www.joshrivers.me; the web server supplies 301 redirects to the https version of the site.

The first step after creating the new site is to log in with the generated admin credentials and change the display name of the account (no point in having a randomized username and printing it on every web page).

Outbound email did not work out of the box with the built-in Postfix. Trying a test send from the command line gave an error: server message: 451 4.3.0 <my address>: Temporary lookup failure. Installing CLI tools into the Postfix container showed that name resolution was working from there. It appears there is a configuration error in the dockerized Postfix where email fails when the domain matches the hosted site; the errors go away when attempting to deliver to a different domain. Frustratingly, I still don’t see any email delivery at that point. Those emails may be getting blocked by the SPF configuration for my domains. There was a useful fix described on the community forum, but I still have no mail delivery.
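
One way to poke at this from the command line (not necessarily the exact test that produced the error above; the address is a placeholder) is to push a message through wp_mail via wp-cli:

ee shell joshrivers.me --command="wp eval 'var_dump(wp_mail(\"someone@example.com\", \"Test subject\", \"Test body\"));'"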

Ultimately, I don’t think I need transactional emails from wordpress, so I’m going to leave it alone for now. I can dig at postfix and SPF later if it seems important.

Further settings to update:

  • Site Timezone
  • Disable Comments
  • Day and Name Permalinks

Next I imported my old Tumblr blog, which required removing the custom domain mapping in Tumblr and using an OAuth login. After importing, I verified that comments were disabled on all the old posts and deleted the “Hello World” post. I used a simpler theme for joshrivers.me than the default WP theme. It could use some work, but it is fine as a start, and at least as decent as my Tumblr theme was. Next I set up Jetpack.

Imports from WordPress.com

For each of my additional sites, I ran through this outline:

  • DNS Reconfiguration
  • ee site create subvert.org --wp --ssl=le
  • Initial Admin Settings: https://subvert.org/admin
    • Trash sample post and pages
    • Edit default user display names
    • Change site name and tag line
    • Correct site time zone
    • Disable comments
    • Day and Name permalinks
    • Delete default plugins
  • Plugins
  • Import
    • Importing with media import seems to work seamlessly. I got an error due to an attachment from my long-ago import when I first moved onto wordpress.com, but I don’t think anything was lost there.
    • Deactivate Importer Plugin
  • Change default category

I added this CSS to stop showing the (disabled!) comments link:

.post-comment-link {
  display: none !important;
}

Using the DB shell instructions from the EasyEngine Handbook, I was able to change the admin email address:

SELECT * FROM wp_users;
UPDATE wp_users SET user_email = '<my_address>' WHERE user_email = 'admin@subvert.org';
SELECT * FROM wp_options WHERE option_name = 'admin_email';
UPDATE wp_options SET option_value = 'joshrivers@me.com' WHERE option_name = 'admin_email';
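
For my own later reference, one way to get into that DB shell (per my reading of the handbook and wp-cli docs; the exact invocation may vary by EasyEngine version) is:

ee shell subvert.org --command="wp db cli"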

I next tested this with the outbound email instructions. Still no mail delivery. This is bugging the heck out of me, but I also can’t think of a single reason I need to send emails from my blog server, so I should really just leave it alone, right?


2 Unit Tests, 0 Integration Tests

SinkPaper


2 Unit Tests, 0 Integration Tests


2 Unit Tests, 0 Integration Tests



Strong Encryption

In my experience, very few people understand the techniques, limitations, and implications of encryption (and the attacks upon it). I work with, and have worked with, a large number of talented software engineers and system administrators, and for the most part their understanding of this topic is only a surface understanding. The public image of encryption and hacking on CSI and Mission Impossible only makes it worse. I don’t expect people to understand this technology and its complexities. It’s too hard. Just relax and understand that you have no clue, and most likely no normal manager, politician, or non-specialized technologist does either. It is VERY complex.

What is not complex is the understanding that encryption is important to you. Personally, economically, socially, and politically, knowledge is power and control. We already live in a world where powerful people and organizations are allowed to keep more secrets than individuals. A world without legal strong encryption could easily become one where the powerful have unlimited secrets, and you are allowed zero.

It makes me very glad to see Apple and Tim Cook fighting the police and the President for your right to have some privacy in your life.

I strongly agree with John Gruber’s statement in support of Tim Cook:

TIM COOK LASHES OUT AT WHITE HOUSE OFFICIALS FOR BEING WISHY-WASHY ON ENCRYPTION

Jenna McLaughlin, reporting for The Intercept:

Apple CEO Tim Cook lashed out at the high-level delegation of Obama administration officials who came calling on tech leaders in San Jose last week, criticizing the White House for a lack of leadership and asking the administration to issue a strong public statement defending the use of unbreakable encryption.

The White House should come out and say “no backdoors,” Cook said. That would mean overruling repeated requests from FBI Director James Comey and other administration officials that tech companies build some sort of special access for law enforcement into otherwise unbreakable encryption. Technologists agree that any such measure could be exploited by others.

Nick Heer, at Pixel Envy:

Apple — and Tim Cook, specifically — is the only major tech company currently defending encryption against intrusive surveillance to this degree. Every other company is either open to compromise publicly, has privately compromised, or has failed to take a firm stand.

This came up during last night’s Republican primary debate — not about tech companies refusing to allow backdoors in encryption systems, but about Apple specifically. Tim Cook is right, and encryption and privacy experts are all on his side, but where are the other leaders of major U.S. companies? Where is Larry Page? Satya Nadella? Mark Zuckerberg? Jack Dorsey? I hear crickets chirping.

Real leaders have courage, and on this very essential issue — in the face of fierce political pushback from law enforcement officials — only Tim Cook is showing any.

Thank you.


Volume all the (docker) things

I’m starting to get the hang of the Docker thing. I’ve been doing it _just barely_ long enough to see some change, and I’ve read material over a longer period. I think it’s important (from an architecture planning perspective) to constantly keep in mind that Docker is a set of very rapidly changing abstractions over a set of solid long-term base functions.  Often the abstractions really suck for a while, since they haven’t truly figured out what things are going to look like in the end.

So we have lots of instances of:

In the past we used this crazy workaround for a feature being missing <——> now we have a half-baked abstraction that may or may not be better than the crazy workaround <——> when we get to use docker 1.9 (or 2.5 or whatever) it will make sense in this way-better way.

Logging is one of those that is probably pretty solidly fixed in docker 1.9.

In networking, the new shape in docker 1.9 looks awesome, so by docker 1.11, it should be solid.

Volumes are just starting to get a real picture laid out for us.

So for volumes, we have four ways that docker manages volumes:

  1. Host mount (e.g. -v /home/josh/folder:/dock/app/folder): this completely makes sense, and is solid. The only problem with it is that it doesn’t allow any sort of clustering tools to manage/move the containers around, since the volume is outside the management scope of docker. It’s a host volume, so it’s tied to the host. In the tight scope of our needs for doing Postgres, I think this is the option we should stick with. It just makes things explicit, and we don’t need to move our containers.
  2. Anonymous volumes (e.g. -v /dock/app/folder, or VOLUME /dock/app/folder in a Dockerfile): This creates a randomly named folder somewhere in /var that is mounted into the container. If you don’t docker rm -v the container when you are done with it, this folder will get orphaned, and you’ll leak disk space. While you can use docker inspect to find the folder, there is no real tooling for working with the volume or managing its lifecycle. By themselves, these volumes aren’t really useful. They have some performance and reliability implications, but really they are more of a hole in the abstraction than an operational tool in their own right.
  3. Volumes-from: Using #2, we can create a container that has anonymous volumes, but give them a name/handle because they have a container they are connected to. That container doesn’t even need to be running; it can have been run and stopped, or just created with the docker create command. Running the second container with --volumes-from has the storage from the first container used as the persistent location for files from the running container. This idea is more compatible with docker cluster managers and portability, but it’s also awkward. You can’t create a data container with Docker Compose. You end up with IMPORTANT stopped containers lying around on your host (a lot of the scripts out there for cleaning up orphans just delete all of your stopped containers…). For a long while, this has looked like the ‘docker way’ of managing persistent storage, but now it appears that things are changing…
  4. Named volumes (e.g. -v myvolume:/dock/app/folder): In Docker 1.9, a cluster of new ‘docker volume’ commands showed up, along with the ability to give an ‘anonymous volume’ a name, and volume drivers are in there too. I wouldn’t dream of relying on this functionality yet, as things seem to be changing rapidly and there’s a pile of bugs and feature requests on GitHub on the subject, but this is a clear abstraction and vision for the future of volume management. In the future, there will be some sort of separate storage server with its own management tools. You’ll be able to cluster, cache, and back up volumes through that tool (probably there will be a number of competing solutions), and you will just run your container with its volumes specified by name; it will connect to the volume driver and be automagically managed for you. If you need to move containers around, the volume storage server will make sure your data moves too. We should absolutely use this. Next year or sometime.

TLDR: let’s just use host volumes.
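
A rough sketch of the four approaches above, using a Postgres container since that’s our use case (the names and paths here are made up, and the named-volume syntax is the Docker 1.9 form):

# 1. Host mount: data lives at a fixed path on the host
docker run -d -v /home/josh/pgdata:/var/lib/postgresql/data postgres:9.4

# 2. Anonymous volume: docker picks a random directory under /var/lib/docker
docker run -d -v /var/lib/postgresql/data postgres:9.4

# 3. Data container plus --volumes-from
docker create -v /var/lib/postgresql/data --name pgdata postgres:9.4 /bin/true
docker run -d --volumes-from pgdata postgres:9.4

# 4. Named volume (Docker 1.9+)
docker volume create --name pgvol
docker run -d -v pgvol:/var/lib/postgresql/data postgres:9.4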


Anatomy of a Modern Production Stack · 80%

In coming up with a solution, it’s good to have a checklist to work from. You don’t have to check all the boxes, but it’s useful to know which ones you’re taking a pass on. Anatomy of a Modern Production Stack · 80%