Thoughts On Auto-Updating Older Versions of WordPress

WordPress-blue broken plate

Below is a reposting of a Twitter thread.

If the WordPress Powers That Be are going to move forward with auto-updating all version 3.7+ instances up to modern times, 2 things need to happen:

  1. A “One Click Rollback” button needs to be available to roll them back to where they were prior to the auto update; and
  2. The Classic Editor plugin needs to be automatically installed and activated.

I just got off the phone (while here at PressNomics 6) with a client whose site I built ~5 years ago while with another agency. It’s the PR site for a large real estate sales firm in the Washington, D.C. area. Their IT department just updated their WordPress core and plugins for the first time in over a year (read: since Gutenberg was merged into core). The update destroyed their ability to add custom taxonomies, and they could no longer create posts.

I suggested installing and activating the Classic Editor plugin, which fixed the problem. This is a site that is still in active use, but only haphazardly maintained in terms of core and plugin updates. The IT department who runs it isn’t on top of the goings-on within the WordPress community, and doesn’t know Gutenberg from the actual printing press from which it derives its name. (They also apparently don’t know what a “staging site” is, but I digress…)

If this one site, built five years ago before Gutenberg was a twinkle in Matt Mullenweg’s eye, yet still around, actively visited, and actively contributed to, could break like this, then so could hundreds or thousands or tens of thousands of sites in similar circumstances.

This is the danger of the WordPress Powers That Be’s proposal: that vast numbers of the WordPress sites out there, part of the one-third of the Internet that WordPress powers, will break, and their maintainers won’t know how to fix them.

Is there opportunity out there for independent developers like me? Sure. (Do I really want to see code I wrote over 5 years ago in an attempt to debug and remediate it? No, not really.) But this is more than an issue of business opportunity. It’s an issue of “doing no harm,” and recognizing that only a tiny, tiny fraction of WordPress site owners and/or maintainers are actually a part of the WP community, and are aware of the issues—and solutions—that lie beyond the admin dashboard.

A major change like the proposed auto-update of version 3.7+ WordPress sites can cause a lot of damage if not done with safeguards (a rollback option; the Classic Editor plugin) and extreme caution. #ethics

How the State Department Website Was Built

I recently spoke with my good friend Joe Casabona on his “How I Built It” podcast about working on the U.S. Department of State’s transition to WordPress. In the episode, we cover: working on a federal government project, even if you don’t necessarily agree with a particular administration’s policies; Gutenberg (and why this project isn’t using it); and how sometimes very different aspects of your life can come together in unexpected ways.

Joe is a terrific interviewer, and I had a great time on his podcast. You can listen to the full episode here.

Using Automated Website Testing to Win at Parenting

Sasha climbing a rock wall

Today my daughter starts a one-week summer camp at our local Earth Treks rock climbing center. She’s super-excited, and has been looking forward to this since I signed her up last December.

Of course, there is always a catch. The summer camp scene here in the D.C. area is ultra-competitive, and this one was no exception. You have to sign up early to get into camps with only a few slots, sometimes as early as October or November. By January and February, most of the summer camp programs in the area are open for enrollment, and mostly filled up. When they open, you gotta get in quick.

But I’m terrible: I forget things, I get busy. Just like every other working parent. And I certainly don’t trust myself to remember to check one particular summer camp’s website every day in order to get in right when registration opens. Fortunately, a friend and former co-worker of mine is into climbing and had worked at that Earth Treks. She helped me narrow down roughly when registration would open, but she didn’t know the exact date. With only twelve slots per camp, once registration opened, those slots would fill quickly.

When Web Tools and Parenting Collide

Then it dawned on me. At my previous agency, we had used Ghost Inspector to perform daily tests on some of our websites in order to make sure certain features were always in working order. As a joke, I had even used it to snapshot the AWS home page once a day to see if I could spot new offerings from the cloud mega-service. What if I could use that same tool to gain an edge on the summer camp sign-up scene?

So I used Ghost Inspector to find the path to where the summer camp sign-up page would be. Because there was a bit of JavaScript tab action going on, there wasn’t a single definitive URL that one could just “hit” to see if registration was open. Instead, I created a recording of how to get to the page where the sign-ups would be, and set Ghost Inspector to run that script every day.

Then I waited.

Every day for two weeks I got an email that my test “passed”. That is to say, the result was the same that day as the previous recording. Nothing had changed.

But of course, that’s not what I was looking for. In this case, a “passed” test was actually a failed result. But I waited, and after a couple of weeks, it finally came: I got an email notification that my Ghost Inspector test had “failed.” But in this case, it failed because the sign-up form had appeared. Registration was open!

I jumped onto the site and signed Sasha up for camp, the first registrant for that week’s program.
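For the curious: Ghost Inspector’s recordings live in its own interface, but the underlying idea is easy to sketch with a headless-browser library like Puppeteer. Everything specific below (the URL, the selectors) is hypothetical; the interesting part is the inverted assertion, where the script “fails” only on the day the form finally shows up:

var puppeteer = require( 'puppeteer' );

( async function() {
  var browser = await puppeteer.launch();
  var page = await browser.newPage();

  // Walk the same path a human would: load the camps page, then click
  // through the tab that reveals the registration area.
  await page.goto( 'https://example.com/climbing/camps' ); // hypothetical URL
  await page.click( '#summer-camps-tab' ); // hypothetical selector

  // Look for the sign-up form; null means it hasn't appeared yet.
  var form = await page.$( 'form.camp-registration' ); // hypothetical selector
  await browser.close();

  if ( form ) {
    // The "failure" we're actually hoping for.
    throw new Error( 'Sign-up form found: registration is open!' );
  }
  console.log( 'No sign-up form yet. Nothing has changed.' );
} )();

Run something like that once a day from cron or a scheduled CI job, and you get the gist of what Ghost Inspector was doing for me, minus the screenshots and email notifications.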

Is It Worth It?

Of course, at $99/month, Ghost Inspector is a pricey tool for just trying to get an edge on signing your kid up for camp. Fortunately, I’ve had my account so long that I’m grandfathered into the free “Personal” plan that gives you 100 tests per month. I’m not sure that plan still exists, but there is a 14-day free trial.

There are also alternatives to Ghost Inspector, but I can’t vouch for most of them. The requirement for this particular application was the ability to record a script of which elements to click on to get me to the screen I wanted. As I said earlier, there was no canonical URL I could just go to that had the form I needed. If there is, your options open up, because any visual regression testing tool that keeps a history would work.

Wraith is a tool I’ve used in the past for visual regression testing, but it’s pretty archaic and seemingly an abandoned project at this point. UIlicious is another scriptable tool, and they appear to have a free tier (but I’ve never tried it). There’s also the old and extremely geeky Selenium. I’ve never been able to master it, but if you’re a QA engineer already well-versed in its use, it may be all you need to accomplish the task.

And if you have Ghost Inspector already, and you have the tests to spare, setting up another is no sweat. And getting your kid into the camp she really wants is totally worth it.


What work tools have you used to achieve a great parenting hack? Let me know in the comments!

How I Integrate Front-End Libraries Into My Development Process

Integrate front-end libraries into your workflow.

As I was listening to the latest episode of Syntax, Wes Bos and Scott Tolinski’s great new podcast on front-end web development, the hosts discussed how they integrate front-end libraries, and more specifically their CSS, into their own code while keeping everything up to date. Both admitted that they didn’t have a great answer (it usually involved copying and pasting), so I thought I would share a system that works for me.

Most front-end libraries these days are kept in some sort of package management system. Usually this is npm, but it could also be Bower or something else. Assuming npm, I would load it like I would any other Node module, and save it to my package.json file. Take one of my go-to libraries, Breakpoint Sass. I’d install it like so:

npm install --save breakpoint-sass

Now, to include the Sass into my project, I would reference it as I would any other Sass partial, by writing the following line somewhere near the top of my main Sass file (following the path relative to where my main Sass file is located):

@import "../../node_modules/breakpoint-sass/stylesheets/breakpoint";

Obviously the exact path is going to depend on how your project files are organized, but you get the drift.
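One refinement, if gulp-sass (or another wrapper around the Sass compiler) is doing your compiling: pass node_modules in the compiler’s includePaths option, and the long relative path goes away. A minimal sketch, with hypothetical src and dist paths:

var gulp = require( 'gulp' ),
    sass = require( 'gulp-sass' );

gulp.task( 'styles', function() {
  return gulp.src( './src/scss/*.scss' )
    // Let Sass resolve bare imports like "breakpoint-sass/stylesheets/breakpoint"
    // by also searching node_modules.
    .pipe( sass.sync( { includePaths: [ 'node_modules' ] } ) )
    .pipe( gulp.dest( './dist/css/' ) );
} );

With that in place, the import line shortens to @import "breakpoint-sass/stylesheets/breakpoint"; and keeps working even if you reorganize your source tree.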

This works just as well for components like form element prettifiers (think something like Nice Select), or anything else that enhances the visual design of your project. Often in those cases, it makes more sense to put the @import statement near where you will be adding your own CSS to customize the look to your design.

I hope to expand upon this post a little more at some point to give some more workable examples, but for now, ping me in the comments or on Twitter if you have questions on how I do this.

From HTTP to HTTPS: Why Marketers Should Embrace Encryption

This post was originally co-written with Jim Lansbury and Kurt Roberts for the RP3 Building Opportunity blog on January 26, 2016.


Lately the news has been full of articles about encryption: Big tech companies say it’s essential, the FBI says it’s terrible. Here’s how all that news affects marketers.

Americans now spend over eight hours a day consuming media, and two of those hours are spent on the web, where everything is accessed by URL. In the past 20 years, we’ve gone from almost no awareness of what a URL is to nearly complete awareness. Still, there are quite a few parts to a URL, and they’re all decided by how you structure your website.

The first part of your URL specifies the protocol; on the web, that means either HyperText Transfer Protocol (HTTP) or HyperText Transfer Protocol Secure (HTTPS). It seems obvious from the name alone that your bank should be using HTTPS, but why would you want to serve your marketing sites over an encrypted connection? Here are three reasons:

HTTPS improves your search ranking.

Google has been using encryption as a signal for search ranking since April 2014. At the time they announced the move, they made it a very small part of your search ranking – only 1% of the final ranking.

Two years later, however, encryption is a hot topic, and Google has a very strong stance on it. The Google Webmasters Blog announced in mid-December that the crawler would start defaulting to the HTTPS version of a link over the HTTP version.

While no one knows exactly what the implications are for each search ranking, it’s certainly in Google’s own interest to favor secure sites while lobbying lawmakers to protect private access to strong encryption.

Going HTTPS is cheap—and it could pay for itself.

In terms of actual dollar costs, webhosts for years have charged a tidy little premium to give you that coveted SSL certificate. But these days, your options for obtaining one have never been more numerous, or more cost-effective. Heck, you can even get a certificate completely free thanks to Let’s Encrypt.

What about paying for itself in terms of better metrics?  HTTPS alone won’t lead to higher conversion rates or sales, but it is a prerequisite for HTTP/2 – and HTTP/2 is here, bringing with it speed boosts of about 50%. And those speed boosts matter in two important ways.

First, speed matters to your Google search ranking. Google has been considering page speed a factor in search rankings since 2010. And second, it matters to your customers. It’s well established that visitors leave slow-loading pages, with one widely cited estimate from a few years ago holding that a one-second delay in page load time would cost Amazon $1.6 billion in annual sales.

The bottom line: a small dollar investment buys a faster website that will convert more of your site’s visitors into paying customers.

HTTPS aligns you with high-tech companies.

Google isn’t the only brand advocating an HTTPS-only Internet.  Facebook and Twitter have both been HTTPS by default for years, and increasingly other tech companies are joining the call.

Governments across the western world are clamoring for major tech companies to open “backdoors” into their encrypted systems in the name of thwarting terrorism, but fortunately these companies have refused to bow to the pressure. Meanwhile, research continues to mount that encrypted communications are not offering terrorists any advantages.

The reason the tech companies support encryption is they have audiences that really value their right to privacy and know how technology is capable of undermining that right. Those users are their early adopters, beta testers and often loyal supporters. Their support is critical for new product launches, upgrades and changes.

So how do you get started?  Implementing HTTPS (and HTTP/2) properly takes expertise from your IT department or web partner, but it isn’t a difficult change to make in most cases. And as you can see, it can make a big difference to the success of your marketing.

Introducing: Taupecat Studios

I’m thrilled to announce that I’m diving into the independent developer market full-bore with the launch of Taupecat Studios.

I’m calling it “the brand-new digital agency with the long history of success.” After more than twenty years in the industry, and working for such luminary brands as Marriott International, Discovery Communications, and Long & Foster Real Estate, I knew I would be able to add value to a brand new roster of clients under my own banner.

Look for more details on this venture to come in the future, including some awesome client announcements.

Taupecat Studios is starting off small, but I have big plans. Let’s get to work!

My WordCamp US Wishlist

Inspired by Liam Dempsey’s post, I thought I, too, should write about what I’m looking forward to most about WordCamp US, now that it’s less than one week away.

Friends, Friends, Friends

WordCamp US is the largest annual gathering of WordPress professionals and practitioners in the world. I’m so fortunate that I’ve become involved in such an inclusive, welcoming, and helpful community. This is likely the only time this year I’ll get to see some of my best WordPress friends like Tracy Levesque, Mika Epstein, Brad Williams, and so many, many more. Expect a big freaking hug from me, folks!

Opportunities

This year especially, as I’m trying to find my next career move, WordCamp US couldn’t come at a better time. There’s no denying that WordCamp US is an enormous networking opportunity, with the best of the best in the WordPress community available to chat, share a drink, share a meal, or just get to know better. Again, the helpfulness and inclusiveness I mentioned above can be almost overwhelming.

Deep Knowledge

Oh yeah, there are the sessions, too! While I was extremely fortunate to speak at last year’s inaugural WordCamp US, this year I get to sit back, relax, and absorb the knowledge that others are sharing. Lately, I’ve been trying to “go outside my comfort zone” when attending WordCamp talks, and I expect the trend to continue here. Instead of focusing on just developer-centric talks (okay, yes, I still intend to go to Nacin’s; that’s required viewing despite the lack of description), I want to attend more talks about content, design, and other topics of which I know very little.

What are you looking forward to about this year’s WordCamp US? Let me know in the comments.

And if you don’t have your tickets yet, hurry! Time is running out.

Musings from Someone Discovering PostCSS

I originally wrote this piece about PostCSS as an internal post for my team at RP3 Agency, but I believe it might have relevance for front-end developers everywhere.

The new hotness in the CSS world is something called “PostCSS”, which I haven’t completely figured out yet but am getting there. Basically, things that happen after your Sass (or whatever) is done and has created a true CSS file go into this ecosystem. Think autoprefixer (for automatically entering browser vendor prefixes) and minification. (There’s even a school of thought that says this kind of thing can completely replace Sass, but I am so not there yet…)
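To make that concrete, here’s a minimal sketch of using PostCSS’s API directly, outside of any build tool (assuming postcss and autoprefixer are installed from npm):

var postcss = require( 'postcss' );
var autoprefixer = require( 'autoprefixer' );

// Feed plain CSS in, get transformed CSS back. Each plugin is a step
// in the chain; here autoprefixer adds vendor prefixes.
postcss( [ autoprefixer() ] )
  .process( 'a { display: flex; }', { from: undefined } )
  .then( function( result ) {
    console.log( result.css );
  } );

In practice, though, you’d wire this into your build, which is exactly what gulp-postcss does in the task below.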

In trying to bring a two-year-old project up to modern standards (the original project used things like Grunt, which we don’t use anymore, having since switched to Gulp), I’ve been trying to learn how to do things the “right” way.

So here’s the Gulp “styles” task I’ve come up with:

var gulp = require( 'gulp' ),
    sass = require( 'gulp-sass' ),
    rename = require( 'gulp-rename' ),
    plumber = require( 'gulp-plumber' ),
    gutil = require( 'gulp-util' ),
    sourcemaps = require( 'gulp-sourcemaps' ),
    postcss = require( 'gulp-postcss' ),
    autoprefixer = require( 'autoprefixer' ),
    csswring = require( 'csswring' ),
    del = require( 'del' ),
    concat = require( 'gulp-concat' ),
    uglify = require( 'gulp-uglify' ),
    connect = require( 'gulp-connect' );

gulp.task( 'styles', function() {

  // Return the stream so gulp knows when the task has finished.
  return gulp.src( __dirname + '/src/scss/*.scss' )
    .pipe( sourcemaps.init() )
    // Keep watch tasks alive on Sass errors: beep and log instead of crashing.
    .pipe( plumber( function( err ) {
      gutil.beep();
      var errorText = err.message + '\n\n' + err.source;
      gutil.log( gutil.colors.red( errorText ) );
    } ) )
    .pipe( sass.sync() )
    .pipe( rename( function( path ) { path.extname = '.css'; } ) )
    // PostCSS works on the compiled CSS: autoprefixer adds vendor prefixes,
    // csswring minifies.
    .pipe( postcss( [ autoprefixer( {
      browsers: [ 'last 2 versions', 'safari 5', 'ie 8', 'ie 9', 'opera 12.1', 'ios 6', 'android 4' ]
    } ), csswring() ] ) )
    .pipe( rename( function( path ) { path.extname = '.min.css'; } ) )
    // Write the sourcemap next to the minified file.
    .pipe( sourcemaps.write( '.' ) )
    .pipe( gulp.dest( __dirname + '/dist/css/' ) )
    // Trigger a livereload via gulp-connect.
    .pipe( connect.reload() );

} );

“autoprefixer” and “csswring” are plugins for PostCSS. I’m processing my Sass into CSS using “gulp-sass”, and then using PostCSS to do the autoprefixing and minification (in a sourcemap-friendly way, and that’s important, as I’ll get to in a sec).

But I’ve come across a downside to this new flow. In the past I’ve written out two versions of the finished CSS file: an “expanded” one that’s more or less human readable for our development environments, and a minified version for production. In WordPress, for example, it’s easy to tell the theme which one to use and when based on whether we have debugging turned on or not.
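For comparison, here’s a minimal sketch of that older two-output flow, reusing the same plugins as the “styles” task above (paths hypothetical). I’ve deliberately left sourcemaps out of it, for reasons I’ll get to in a moment:

gulp.task( 'styles-both', function() {
  // Uses the same require()s as the "styles" task above.
  return gulp.src( './src/scss/*.scss' )
    .pipe( sass.sync() )
    // Write the expanded, human-readable CSS for development...
    .pipe( gulp.dest( './dist/css/' ) )
    // ...then keep piping: autoprefix, minify, and add a .min suffix
    // for production.
    .pipe( postcss( [ autoprefixer(), csswring() ] ) )
    .pipe( rename( { suffix: '.min' } ) )
    .pipe( gulp.dest( './dist/css/' ) );
} );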

But “csswring” was choking when trying to minify a CSS file that had a sourcemap, regardless of whether the sourcemap was inline or external. So if you’ll notice in my “styles” task above, the “expanded” CSS file never gets written out; the pipe goes directly from Sass file to minified CSS. But the sourcemap is written for the minified CSS, so if you’re working in Chrome, you can see where your property is being written in the Sass, like in this screenshot:

Screenshot of my project, demonstrating how sourcemaps are working on minified CSS.

It’s not a result that I’m 100% comfortable with, but I’m learning to stop worrying and love the minification. However, I’m wondering how this will fly in production. In one sense, there’s a certain amount of front-end civic responsibility in letting other developers see the source Sass you actually wrote, rather than just the processed and minified CSS that a computer crunched out. On the other hand, 99.9999% of your audience wouldn’t give one shit about that, so is it worth having the browser pull down a sourcemap file that’s actually twice as large as the CSS file itself? Something else to figure out…