Month: November 2014

I just finished installing an SSL certificate on JakeSavin.com. The main reason was to prevent impersonation and man-in-the-middle attacks while I'm editing or administering my site. I was already using SSL to connect to my WordPress admin interface, but with a self-signed certificate, which produces warnings in the browser (in addition to not being as secure as it should be). Now that I have a CA-backed certificate, the warnings go away.
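If you want to confirm from the command line that the new certificate and its chain are being served correctly, something like the following should work. This is just a sketch using openssl's s_client; substitute your own hostname:

    # Fetch the server's certificate and print its issuer, subject and validity
    # dates. The -servername flag matters if the host uses SNI to serve more
    # than one certificate from the same IP address.
    openssl s_client -connect jakesavin.com:443 -servername jakesavin.com </dev/null 2>/dev/null \
      | openssl x509 -noout -issuer -subject -dates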

There are some additional benefits to this:

  1. API clients like dedicated blog editing apps that validate SSL certs (as they all should when connecting securely) should now work, though I have yet to test this.
  2. Anyone who visits my site can request the secure URL, and get an encrypted connection to protect their privacy. They can also be reasonably sure that they're actually visiting my real site and not an imposter—not that I'm actually worried about imposters.
  3. Google (at least) has started ranking sites that fully support SSL higher in its search results. Not that I'm really big on SEO for my site, but it's a “nice-to-have” feature.

See also: Embracing HTTPS (Konigsburg, Pant and Kvochko)

If you see any problems, please let me know via a comment, tweet or some-such.


Security

A couple of weeks ago, I started running River4 on my Synology DS412+ NAS device. At the moment I’m generating just one river, which you can see on river.jakesavin.com.

Since I needed River4 to run all the time, and I didn’t want to have to kick it off by hand every time the NAS boots up, I decided to write an init.d script to make River4 start automatically.

If you have a Synology NAS or other Linux-based machine that uses init.d to start daemon processes, you can use or adapt my script to run River4 on your machine.

How to

  1. Install node.js and River4 on your machine.
    • I installed River4 under /opt/share/river4/ since that’s where optional software usually goes on Synology systems, but yours can be wherever you want.
  2. Follow Dave’s instructions here in order to set up River4 with your data, and test that it’s working.
  3. Download the init.d shell script.
  4. Unzip, and copy the script to /opt/etc/init.d/S58river4 on your NAS/server.
  5. Make the script executable on your NAS/server with: chmod 755 S58river4
  6. Edit the variables near the top of the script to correspond to the correct paths on your local system.
    • If you’re using a Synology NAS, and the SynoCommunity release of node.js, then the only paths you should need to change are RIVER4_EXEC and RIVER4_FSPATH, which are the path to river4.js and your web-accessible data folder (river4data in Dave’s instructions).
  7. Run River4 using /opt/etc/init.d/S58river4 start

At this point, River4 should be running.
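To double-check that the process actually came up, you can look for it in the process list. A quick sketch (ps flags vary between BusyBox on a Synology and a full Linux userland, so adjust as needed):

    # Look for the node process running river4.js; the [r] trick keeps grep
    # from matching its own command line.
    ps w | grep '[r]iver4.js'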

If your firewall is enabled and you want access to the dashboard, you’ll need to add a firewall rule to allow incoming TCP traffic on port 1337. I recommend you only do this for your local network, and not for the Internet at large, since River4 doesn’t currently require authentication.
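On a Synology NAS you’d normally add the rule through the DSM firewall UI, but on a generic Linux server the equivalent iptables rules might look like this (assuming your local subnet is 192.168.1.0/24; substitute your own):

    # Allow dashboard connections from the local subnet only...
    iptables -A INPUT -p tcp --dport 1337 -s 192.168.1.0/24 -j ACCEPT
    # ...and drop anything aimed at River4's port from anywhere else.
    iptables -A INPUT -p tcp --dport 1337 -j DROP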

Once your firewall has been configured, you should be able to access the dashboard via:

http://myserver:1337/dashboard

Notes

The script assumes you’re going to be generating your river of news using the local filesystem, per Dave’s instructions for using River4 with file system storage. I haven’t used it with S3 yet, but you should be able to simply comment out the line in my script that says export fspath, and get the S3 behavior.
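The storage mode boils down to a single environment variable, so it’s a one-line change. The path shown here is just an example; use your own web-accessible data folder:

    # Local file-system storage. Comment this line out and River4 will use
    # its S3 behavior instead (see Dave's documentation for the S3 setup).
    export fspath="/volume1/web/river4data/"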

There is no watcher, so if River4 crashes or is killed, or if node itself exits, then you’ll need to restart River4 manually. (It should restart automatically if you reboot your NAS.)

Questions, Problems, Caveats

I did this on my own, and it is likely to break in the future if River4 changes substantially. I can’t make any guarantees about support or updates.

If you have problems, for now please post a comment on this post, and I’ll do what I can to help.

Please don’t bug Dave. 😉

Source code

Here’s the source code of the init.d script. (The downloadable version also contains instructions at the top of the file.)
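In outline it’s a standard start/stop wrapper. The sketch below assumes the SynoCommunity node binary at /opt/bin/node and the paths used earlier in this post; adjust the variables at the top for your own system:

    #!/bin/sh
    # S58river4: start River4 at boot on a Synology NAS (or any init.d-based Linux).
    # Adjust these paths for your own install.
    NODE_EXEC="/opt/bin/node"                    # node.js binary (SynoCommunity default)
    RIVER4_EXEC="/opt/share/river4/river4.js"    # path to river4.js
    RIVER4_FSPATH="/volume1/web/river4data/"     # web-accessible data folder
    RIVER4_LOG="/opt/var/log/river4.log"         # where to send River4's output
    PIDFILE="/var/run/river4.pid"

    start() {
        echo "Starting River4"
        # Comment out the next line to use S3 storage instead of the local file system.
        export fspath="$RIVER4_FSPATH"
        "$NODE_EXEC" "$RIVER4_EXEC" >> "$RIVER4_LOG" 2>&1 &
        echo $! > "$PIDFILE"
    }

    stop() {
        echo "Stopping River4"
        if [ -f "$PIDFILE" ]; then
            kill "$(cat "$PIDFILE")"
            rm -f "$PIDFILE"
        fi
    }

    case "$1" in
        start)   start ;;
        stop)    stop ;;
        restart) stop; sleep 1; start ;;
        *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
    esac

With the script in place, /opt/etc/init.d/S58river4 restart will bounce the process by hand, which is also your recourse if node exits, since there’s no watcher (as noted above).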

Uncategorized

Version control and I go back a long way.

Back in the late 1990s, I was working in the QA team at Sonic Solutions, and was asked to look into our build scripts and source code control system, to investigate what it would take to get us to a cross-platform development environment—one that didn’t suck.

At the time, we were running our own build scripts implemented in the MPW Shell (which was weird, but didn’t suck), and our version control system was Projector (which did suck). I ended up evaluating and benchmarking several systems including CVS, SourceSafe (which Microsoft had just acquired), and Perforce.

In the end we landed on Perforce because it was far and away the fastest and most flexible system, and we knew from talking to folks at big companies (Adobe in particular) that it could scale.

Recently I’ve been reading about some of the advantages and disadvantages of Git versus Mercurial, and I realized I haven’t seen any discussion about a feature we had in the Perforce world called change lists.

Atomic commits, and why they’re good

In Perforce, as in Git and Mercurial, changes are always committed atomically, meaning that for a given commit to the repository, all the changes are made at once or not at all. If anything goes wrong during the commit process, nothing happens.

For example, if there are any conflicting changes between your local working copy and the destination repository, the system will force you to resolve the conflict first, before any of the changes are allowed to go forward.

Atomic commits give you two things:

First, you’re forced to only commit changes that are compatible with the current state of the destination repo.

Second, and more important, it’s impossible (or very difficult) to accidentally put the repo into an inconsistent state by committing a partial set of changes, whether you’re stopped in the middle by a conflicting change, a network failure, a power outage, etc.
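In Git terms the same guarantee plays out like this (a sketch, assuming an origin remote and a master branch):

    # If origin has commits you don't, the push is refused outright and the
    # remote repository is left untouched; nothing partial ever lands.
    git push origin master

    # You have to reconcile with the remote's history first, resolving any
    # conflicts locally, and only then will the push go through.
    git pull --rebase origin master
    git push origin master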

Multiple change lists?

In Git and Mercurial, as far as I can tell, there is only one set of working changes that you can potentially commit in a given working copy. (In Git this is called the index, or sometimes the staging area.)

In Perforce, however, you can have multiple sets of changes in your local working copy, and commit them one at a time. There’s a default change list that’s analogous to Git’s index, but you can create as many additional change lists as you want to locally, each with its own set of files that will be committed atomically when you submit.

You can move files back and forth between change lists before you commit them. You can even author change notes as you go by updating the description of an in-progress change list, without having to commit the change set to the repository.

Having multiple change lists makes it possible, for example, to quickly fix a bug by updating one or two files locally and committing just those files, without having to do anything to isolate the quick fix from other sets of changes you may be working on at the time.

Each change list in Perforce is like its own separate staging area.
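From memory, the command-line version of that workflow looks roughly like this (the change list number and file names are examples):

    # Create a new, empty change list; p4 opens your editor for the description.
    p4 change

    # Open files for edit directly into that change list (say it's number 1234)...
    p4 edit -c 1234 quickfix.c

    # ...or move an already-open file over from the default change list.
    p4 reopen -c 1234 quickfix.h

    # Submit just that change list; everything else stays pending locally.
    p4 submit -c 1234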

So what’s the corresponding DVCS workflow?

While it’s possible with some hand-waving to make isolated changes using Git or Mercurial, it seems like it would be easy to commit files unintentionally, unless you create a separate local branch for each set of changes.
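For concreteness, the closest Git equivalent I know of to the quick-fix scenario is a stash plus a short-lived topic branch (the branch and file names here are just examples):

    # Shelve the in-progress work so the working tree is clean.
    git stash

    # Make the fix on a throwaway local branch and commit only those files.
    git checkout -b quickfix
    git add bugfile.c
    git commit -m "Fix the bug"

    # Fold the fix back into the main line, clean up, and resume the shelved work.
    git checkout master
    git merge quickfix
    git branch -d quickfix
    git stash pop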

I understand that, philosophically, one of the advantages people see in distributed version control systems is that they encourage frequent local commits by decoupling version control from a central authority.

But creating lots of local branches seems like a pretty heavy operation to me, in the case where you just need to make a small, quick change, or where you have multiple change sets you’re working on concurrently, but don’t want to have to keep multiple separate local branches in sync with a remote repo.

In the former case, cloning the repo to a branch just to make a small change isn’t particularly agile, especially if the repo is large.

In the latter case, if you’re working on multiple change lists at the same time, keeping more than one local branch in sync with the remote repo creates more and possibly redundant work. And more work means you’re more likely to make mistakes, or to get lazy and take riskier shortcuts.

But maybe I’m missing something.

What do you do?

In this situation, what’s the recommended workflow for Git and Mercurial? Any experts care to comment?

Development