Dumping the CSS Framework

Last updated 18 October 2015

At my old job, the first step of most projects was to add bootstrap css and start styling some html based on a photoshop design. The photoshop images would even have a grid across them divided into 12 columns, just like bootstrap. While this was a great way to make mobile-friendly websites quickly (which our clients wanted), it had a few downsides.

  1. We were loading a lot of css and javascript we didn’t need.
  2. A lot of our designs were based on bootstrap conventions, regardless of what a particular site needed.
  3. I never learned how to do a lot of things with css on my own, since bootstrap normally handled it. This meant I had a hard time overriding certain behaviors.

With all of that in mind, I found the best thing to do is to get rid of the framework altogether. You can still look at how bootstrap does things if you get stuck, since it is all open source. Once I dumped the framework I found myself much more confident making changes, with significantly smaller css sheets, no need for jQuery (which bootstrap’s js depends on), and more fun!

I think it is good to get some practice using a css framework so you can see how people who are much better at css do things, but once you have a general idea of what you are doing, going without the framework is better and more satisfying.


Learning Testing the Hard Way

Last updated 15 October 2015

Sometimes it’s easy to feel like you don’t have time to do test-driven development. You know what code to write, so it seems like an unnecessary distraction to have to stop and write a test before writing the code you’ve planned.

I certainly used to feel this way, but then I started working as the primary developer on a big-ish project. At first, everything was great. I was writing new features quickly and everyone was really happy with it. I’d just write the code then quickly click around to make sure it worked properly before releasing it. Then, inevitably, something in the requirements changed. That was no problem — I knew how the code worked so I jumped in and made the change.

After making a change, though, I had to quickly click around to make sure the new feature worked and that everything connected to the new feature worked the way it did before. And after adding enough new features, the clicking around wasn’t so quick. I realized that this repetitive clicking was a huge waste of time.

So I spent a few days backfilling some capybara-based feature specs to simulate the kinds of clicking I would do. Writing tests after the fact was relatively painful, because I had to read through the code and try to write a test for every situation the code was meant to handle. Now that I had the tests, things were much faster since I could just run them instead of clicking around, but I’d occasionally break something the tests didn’t cover and have to backfill even more tests.

I realized at this point that I could have avoided all of this pain and saved time by writing the tests first. I eventually needed the tests anyway, and I wasted time clicking around before I wrote them. I also would’ve had better tests, covering more of the code’s corner cases, if I had written them first.

That’s how I learned the hard way to do test-driven development. I thought it would be helpful to share this cautionary tale so other people don’t have to waste their time and learn things the hard way like I did.


Private Attr Readers in Ruby - The Best Thing Ever

Last updated 13 September 2015

Private attr_readers are great and you should use them. When I start writing a class my skeleton usually looks like this:

class AwesomeClass
  def initialize(sweet:, keyword:, arguments:)
    @sweet = sweet
    @keyword = keyword
    @arguments = arguments
  end

  private

  attr_reader :sweet, :keyword, :arguments
end

If I need to make something public I always can, but by immediately starting with private attr_readers I start by hiding my instance variables behind a method. They are hidden not only from other classes (because the readers are private), but also from the rest of the class, since the other methods aren’t directly accessing the instance variables. This way, if I need to add some more logic to an instance-variable-related method, the code change is very easy.

class AwesomeClass
  def initialize(sweet:, keyword:, arguments:)
    @sweet = sweet
    @keyword = keyword
    @arguments = arguments
  end

  private

  attr_reader :sweet, :arguments

  def keyword
    KeywordManager.new(@keyword)
  end
end

Give this approach a try and I think you’ll find yourself doing it automatically from now on.


Fake Collaborators for Tests

Last updated 08 September 2015

When I was first learning about testing I thought changing your code to make it easier to test was stupid. Since you are only supposed to test the public interface of a class, it seemed like cheating to fake out anything internal to a class. Over time, though, I realized that testing in this way led to a pile of unnecessary setup (usually creating a bunch of objects in the database) that I didn’t need. Take this typical (if extremely simple) use case class:

class PublishArticle
  def initialize(article:)
    @article = article
  end

  def publish
    article.update(state: "published", published_at: Time.now) if should_publish?
  end

  private

  attr_reader :article

  def should_publish?
    ArticlePublishingPolicy.new(article: article).valid?
  end
end

It’s pretty straightforward: it just publishes an article if the article meets the article publishing policy. How should we test this? Previously, I would look up what the ArticlePublishingPolicy class did and see something like this:

class ArticlePublishingPolicy
  def initialize(article:)
    @article = article
  end

  def valid?
    author_in_good_standing? && article_valid?
  end

  private

  attr_reader :article

  def author_in_good_standing?
    article.author.in_good_standing?
  end

  def article_valid?
    article.valid? && article.approved_by_editor?
  end
end

To use this class I need to have an article that is valid, plus the article needs an author who is in good standing (and what does that mean?). I could create an author who is in good standing (which would mean looking at the author’s class to see how to make that the case), plus I’d need to make sure I put all the correct information into the article to make it valid for publishing, like marking it approved by an editor. This is a lot of work that is totally unrelated to the PublishArticle class.

What if I changed the PublishArticle class to this?

class PublishArticle
  def initialize(article:, policy: nil)
    @article = article
    @policy = policy
  end

  def publish
    article.update(state: "published", published_at: Time.now) if should_publish?
  end

  private

  attr_reader :article

  def should_publish?
    policy.valid?
  end

  def policy
    @policy ||= ArticlePublishingPolicy.new(article: article)
  end
end

That lets me write tests like this (in rspec, and assuming Article responds to published?):

describe "PublishArticle#publish" do
  let(:article) { Article.new }

  def publish_article
    PublishArticle.new(article: article, policy: policy).publish
    article # return the article itself so the examples can assert on it
  end

  context "policy is valid" do
    let(:policy) { double("ValidPolicy", valid?: true) }

    it "publishes the article" do
      publish_article.should be_published
    end

    it "sets published_at" do
      publish_article.published_at.should_not be_nil
    end
  end

  context "policy is invalid" do
    let(:policy) { double("InvalidPolicy", valid?: false) }

    it "does not publish the article" do
      publish_article.should_not be_published
    end

    it "doesn't set published_at" do
      publish_article.published_at.should be_nil
    end
  end
end

All of a sudden these tests don’t test anything that the PublishArticle class doesn’t care about. It doesn’t need to know what makes an article valid, so we don’t test that here. Testing in this way makes tests faster, clearer, and less likely to change. Now, if something about the ArticlePublishingPolicy has to change, there is no reason to change the code in PublishArticle or your tests for it. Plus, since we’ve provided sensible defaults, you don’t need to change how you call PublishArticle in your production code to use this technique.

I hope this encourages people new to testing to change their code just for test purposes. Inject those dependencies!


Managing Configuration on Multiple Machines

Last updated 29 August 2015

Well, my experiment with emacs ended when I realized that getting the shell to work properly wasn’t going to happen unless I was willing to put some serious work in. It just didn’t seem worth it. So, I’m back to vim.

As I said before, I use yadr for my vim and zsh configuration. Lately, though, I’ve been trying to add some tmux excitement to my normal workflows and so I needed to add some additional tmux-based keybindings. Yadr has a way to do this without needing to fork it - by using a ~/.zsh.after directory with additional configuration. The problem with this, though, is that I needed to remember to update my config on both my personal and work computers whenever I made a change.

Clearly this was not acceptable. I needed to DRY up my life!

So, I created a repo on github with my configuration files. The issue with this, of course, is that you need a way to clone the repo and then put the files in the correct places (I also have a ~/.gitconfig.user file in there, for instance, so I can’t just clone everything into ~/.zsh.after) without having to remember where each file goes. So I made a basic bash script, which I run whenever I pull, that symlinks the files into the correct places. Then, to keep my configuration files in sync, all I need to do is commit any changes, push them up to github, and pull them down onto my other computer.
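My script is bash, but the idea is simple enough to sketch in Ruby. The method name, repo layout, and file names below are all hypothetical, just to illustrate the shape of it:

```ruby
require "fileutils"

# Sketch of the symlinking idea: map each file in the dotfiles repo
# to where it belongs under the home directory, replacing whatever
# is already there. Paths and names are illustrative.
def link_dotfiles(repo, home, links)
  links.each do |source, target|
    src  = File.join(repo, source)
    dest = File.join(home, target)
    FileUtils.rm_f(dest)      # remove any stale file or link first
    FileUtils.ln_s(src, dest) # symlink the repo file into place
  end
end

# Usage might look like:
# link_dotfiles(File.expand_path("~/dotfiles"), Dir.home,
#               "zsh.after"      => ".zsh.after",
#               "gitconfig.user" => ".gitconfig.user")
```

Running something like this after every pull means the repo stays the single source of truth and the home directory just points into it.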

You can see the repo here if you want to get an idea of what is involved. I hope you find this technique helpful!


Trying Out Emacs

Last updated 11 July 2015

When I started working at Reverb I had a medium amount of vim experience, mostly from programming on a chromebook. At Reverb, though, the expectation is that all programmers use vim (specifically mvim) with our CTO’s excellent vim configuration (which you can see here: https://github.com/skwp/dotfiles) to foster easy pairing.

I instantly felt pretty much as productive as I had been in Caret or Sublime Text, which honestly wasn’t that productive. After a little while, though, I was navigating both around the current file and through the very large Reverb codebase with ease, never touching the mouse, and doing all the things that make people love vim so much.

Naturally, this made me curious about emacs.

I had tried to go through the basic emacs tutorial around the same time I was learning vim basics, and quit almost immediately once I encountered the incredibly perverse navigation commands. Not only did you have to hit control constantly, the keys to move the cursor were all over the place!

But people still really love emacs, so I was curious why. I decided to give it a try, using evil mode to get vim bindings within emacs. You might wonder what the point of using emacs at all is if I’m just going to make it as much like vim as possible. Well, it turns out it’s all the other stuff that emacs gives you.

Take a look at this screenshot:

Editing and using the internet in emacs

If you can’t tell, I’m editing a markdown file for the blog on the left while looking at the post in the browser on the right. In fact, the ability to browse the internet in a text-only mode while keeping all my vim keybindings is the main reason I’m enjoying emacs so much.

I’m still pretty new at using emacs and am learning a lot, but so far I’m really glad I decided to give it a try. If I don’t give up soon, I’ll probably have more updates on my progress and configuration.

Just imagine a world where you can use vim all the time: whether it’s for editing text, browsing the internet, or reading pdf files. The way to do it is to use emacs.


New Heroku Pricing

Last updated 27 June 2015

Heroku’s new pricing is much better for anyone who has been paying for Heroku’s services, but it introduces a huge change for people wanting to host their apps for free: required downtime. All free apps now have to sleep for 6 hours in every 24-hour period.

What they are actually doing is making their free tier better for people who use it in the way that they want — to get an app up and running in a production environment for testing. They now give you a free web and worker dyno so you can set up sidekiq at no cost while you are developing the app. Previously, if you wanted to use a second dyno at all you almost certainly had to pay some money.

Once users like this are ready to go into prime time they can either go through the hassle of switching to a different hosting service like Amazon, or they can just pay Heroku some money every month for more dynos that are always live. Clearly, this is what Heroku wants.

Unfortunately, I wasn’t using the free tier of Heroku to test out an app in development. I just wanted a single web dyno that, if pinged once an hour, was up all the time to host my personal website. I had no plans of ever giving Heroku any money, so it really makes no sense for them to give me free hosting. My best option with Heroku would be to pay $7 a month for a site that is always up, and that is more than I’m willing to spend at the moment.

So, Heroku did very well here. They are now giving free hosting to the people using the service in the way that they want, and no longer giving free hosting to people who, like me, have no intention of ever paying.

As a result, I’ve switched this site’s technology (again). Now I’m using jekyll and hosting for free on github. Which is great!


Name Thy Args - Named Arguments in Ruby

Last updated 21 June 2015

Just like last time, when I wrote about the underused Object#tap method, I want to talk about a relatively new and underutilized feature in Ruby. Today it is named arguments (what the Ruby documentation calls keyword arguments).

Starting in Ruby 2.0, you could give your arguments names like so:

def method(argument: "default")
  argument.length
end

This was nice, but since you were forced to provide a default value it wasn’t always super useful. Starting in Ruby 2.1, though, the requirement for a default was dropped so now you can write:

def method(argument:)
  argument.length
end

Why is this so great? It provides documentation at the call site. Have you ever run across a method call like do_thing(user, false) in an existing codebase? Don’t you immediately have to look up the do_thing method to figure out what that false means? What if, instead, you saw do_thing(user: user, async: false)? Now you don’t need to look up the code for that method, since it is clear what the false means.

Another benefit is that the order doesn’t matter. do_thing(async: false, user: user) works just as well. Just last week I spent almost an hour figuring out why a test wasn’t passing before I realized I had some positional arguments in the wrong order. Named arguments avoid that.

The names of the arguments can also serve as reminders of what type each argument should be. If you are supposed to pass a user id to the method, do_thing(user, false) is a much easier mistake to make than do_thing(user_id: user, async: false), where it is immediately obvious that it should be do_thing(user_id: user.id, async: false).

Finally, Ruby is much more helpful about what is wrong when you forget a named argument than when you forget a positional one. If you try do_thing(user) you get an error like ArgumentError: wrong number of arguments (1 for 2), and then you have to look at the method definition to see what you forgot. With named arguments, if you try do_thing(user: user), you get an error like ArgumentError: missing keyword: async, which is much more informative.
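You can see this for yourself with a sketch (do_thing is the same made-up method as before, and the exact wording of the message varies a little between Ruby versions):

```ruby
# A hypothetical do_thing with required keyword arguments.
def do_thing(user:, async:)
  "doing thing for #{user} (async: #{async})"
end

begin
  do_thing(user: "amy") # forgot async:
rescue ArgumentError => e
  puts e.message # e.g. "missing keyword: async" (":async" on newer Rubies)
end
```

The error names the exact keyword you forgot, so there is no trip back to the method definition.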

The only downsides to named arguments are that they make your method calls a little more verbose, and that they make any gems or libraries you create less backwards compatible. I think the verbosity is worth it, though, and once Rails 5 comes out and people have to switch to Ruby 2.2 to use it, backwards compatibility won’t be such a big issue. In a private codebase, of course, you don’t have to worry about compatibility with old Rubies, so there is no downside there.

I encourage you to use named arguments as much as possible. I think you’ll see the benefits right away!


Tappety Tap Tap Tap Tap - Object#tap in Ruby

Last updated 18 June 2015

There are a number of features that were added to Ruby in versions 1.9 or later that don’t seem to be in wide use for some reason. One that I have a fondness for is the tap method.

The tap method was added to the Object class in Ruby 1.9. All it does is yield self to a block and then return self.
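In fact, tap is simple enough that you could reimplement it yourself in a few lines (I’m calling mine my_tap so it doesn’t clobber the real method):

```ruby
class Object
  # A simplified reimplementation of Object#tap: yield self to the
  # block, then return self regardless of what the block returned.
  def my_tap
    yield self
    self
  end
end

[1, 2, 3].my_tap { |list| puts list.sum } # prints 6, returns [1, 2, 3]
```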

Here’s a simple example:

Object.new.tap do |object|
  puts object
end
# returns the object

Doesn’t that seem useful? Not really? Okay. Well, according to the docs the method is meant to “tap into” a method chain, in the same way you’d tap into a rich vein of silver. Let’s see if we can find the silver here.

Imagine you need to create an object, call one of its methods, and then return it.

Hat.new(type: :bowler).tap do |hat|
  hat.tip
end

We’re getting closer.

What if you need to construct a string in a complicated way?

"".tap do |string|
  string << "Hello " if hello
  string << "Sir " if should_sir?
  string << "I'm writing to you"
  string << " about #{about}" if about
  string << "."
end

That’s much better than the alternative of string = "" followed by string << "X" if something, which has to end with a bare string to make sure you return it. tap takes care of that for you.
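For comparison, here is a sketch of that non-tap version. build_greeting and its keyword arguments are stand-ins for the hello/should_sir?/about conditions in the tap example above:

```ruby
# The non-tap alternative: build up a local variable and remember
# to hand it back at the end of the method.
def build_greeting(hello:, sir:, about:)
  string = ""
  string << "Hello " if hello
  string << "Sir " if sir
  string << "I'm writing to you"
  string << " about #{about}" if about
  string << "."
  string # the bare string at the end is the return value
end
```

Forgetting that final string happens to be harmless here, since << returns the string anyway, but if the last append were conditional (string << "." if polite) the method would silently return nil whenever the condition was false.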

The biggest benefit I’ve found of using tap is that it makes it easy to create a variable with limited scope. This can be very useful in Rails views, especially if you use decorators to add view-related methods to an object.

Take this haml:

- decorated_post = PostDecorator.new(@post)
%h1= decorated_post.decorated_title
%p= decorated_post.decorated_body

That’s all fine, but if you are in a situation where you want to use two different decorators in the same view, you might have something like this:

- readably_decorated_post = ReadablePostDecorator.new(@post)
%h1= readably_decorated_post.decorated_title
%p= readably_decorated_post.decorated_body

- sortably_decorated_post = SortablePostDecorator.new(@post)
= link_to sortably_decorated_post.next_post

It can be hard to keep track of where a variable might be used and how. Also, those variable names are long in order to differentiate them from each other. Try this instead:

- ReadablePostDecorator.new(@post).tap do |decorated_post|
  %h1= decorated_post.decorated_title
  %p= decorated_post.decorated_body

- SortablePostDecorator.new(@post).tap do |decorated_post|
  = link_to decorated_post.next_post

I find that more readable, and you get a clear idea of where each variable’s scope starts and ends.

I hope this encourages you to bring out Object#tap for a little spin the next chance you get.


Why You Run Your Tests First

Last updated 14 June 2015

When first learning test-driven development, a lot of things about it seem stupid and arbitrary. After working with it a while, though, you start to see that at least some of the rules make a lot of sense. The phrase most associated with TDD is “red, green, refactor”. When I was first learning about TDD, the green and refactor steps made a lot of sense, but why do I need to do the red step? I know the test will fail, because I haven’t written the code yet!

Now I understand, though, so I wanted to share what I learned with future doubters.

You only think you know what the bug is

When fixing bugs in a large project at work, I usually have an exception that occurred in production, with details in honeybadger. I read the trace from the exception and think through where the problem might be, and then I tend to have a very good idea of how to solve the issue.

The only way I know I’m fixing the issue, though, is to write a test that replicates the problem. Frequently I’ll write the test and, by running it, discover that it doesn’t actually cause the same exception I expected it to, or that it doesn’t cause any exception at all! If I wrote the test, changed the code, and then ran the test I’d think that the bug was fixed. By running the test first I verify that I’m fixing the thing I think I am.

The same idea applies to writing new features. If you run your tests first, you can verify that you are getting the error you expect to get. When you write a test like this:

specify { Thing.new.do_thing.should == "doing a thing" }

And your current code is:

class Thing
  def do_thing
  end
end

You run the test to see expected "doing a thing", got nil. This means that the rest of your code is working as you expect, and your planned code will make the test pass. Without running the test, you might write your code, then discover that you mistyped the method name, or that your method is returning something unexpected. Until you get the failure you want, you don’t really understand your code.
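Once you’ve seen the failure you expected, making it green is the easy part:

```ruby
class Thing
  def do_thing
    "doing a thing" # now returns the value the spec expects
  end
end
```

With both halves of the cycle observed, you know the test was actually exercising this code and that this change is what made it pass.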

I hope that helps explain why the seemingly stupid step of running a test you know will fail is actually helpful.