Category: Development

Todd Motto is maintaining a categorized list of public JSON APIs here. As of right now he’s listed 322 endpoints in 26 categories. There are 174 that don’t need authentication, 63 that require OAuth, 80 that take an API key, and 245 that support HTTPS.

It’s a great list, though certainly there must be more.

If you’d like to contribute, Todd has kindly provided some instructions.

Development Web APIs

My friend Brent Simmons has recently written a series of blog posts—seven parts so far—on How Not to Crash, for Cocoa and iOS developers. Brent is an experienced and thoughtful programmer, and these are well worth a read. Most are probably useful even to programmers working in other languages.

Check them out!

How Not to Crash #1: KVO and Manual Bindings
How Not to Crash #2: Mutation Exceptions
How Not to Crash #3: NSNotification
How Not to Crash #4: Threading
How Not to Crash #5: Threading, part 2
How Not to Crash #6: Properties and Accessors
How Not to Crash #7: Dealing with Nothing

Update: Brent added two more How Not to Crash posts since I originally wrote this:

How Not to Crash #8: Infrastructure
How Not to Crash #9: Mindset

… and wrapped them all up in this post on inessential.com.

CocoaDev Development Uncategorized

Today I needed to start figuring out how to install an open source analytics package on my dev machine. It’s implemented in Java and requires Tomcat. I groaned. “Great. Another complicated dependency to install,” I thought.

Turns out that installing Tomcat on a Mac is actually pretty easy. I ended up following Wolf Paulus’ tutorial here.

Nice write-up, Wolf. Thanks!

Development

Version control and I go back a long way.

Back in the late 1990s, I was working on the QA team at Sonic Solutions, and was asked to look into our build scripts and source code control system to investigate what it would take to get us to a cross-platform development environment—one that didn’t suck.

At the time, we were running our own build scripts implemented in the MPW Shell (which was weird, but didn’t suck), and our version control system was Projector (which did suck). I ended up evaluating and benchmarking several systems including CVS, SourceSafe (which Microsoft had just acquired), and Perforce.

In the end we landed on Perforce because it was far and away the fastest and most flexible system, and we knew from talking to folks at big companies (Adobe in particular) that it could scale.

Recently I’ve been reading about some of the advantages and disadvantages of Git versus Mercurial, and I realized I haven’t seen any discussion about a feature we had in the Perforce world called change lists.

Atomic commits, and why they’re good

In Perforce, as in Git and Mercurial, changes are always committed atomically, meaning that for a given commit to the repository, all the changes are made at once or not at all. If anything goes wrong during the commit process, nothing happens.

For example, if there are any conflicting changes between your local working copy and the destination repository, the system will force you to resolve the conflict first, before any of the changes are allowed to go forward.

Atomic commits give you two things:

First, you’re forced to only commit changes that are compatible with the current state of the destination repo.

Second, and more important, it’s impossible (or very difficult) to accidentally put the repo into an inconsistent state by committing a partial set of changes, whether you’re stopped in the middle by a conflicting change, or by a network or power outage, etc.

Multiple change lists?

In Git and Mercurial, as far as I can tell there is only one set of working changes that you can potentially commit on a given working copy. (In Git this is called the index, or sometimes the staging area.)

In Perforce, however, you can have multiple sets of changes in your local working copy, and commit them one at a time. There’s a default change list that’s analogous to Git’s index, but you can create as many additional change lists locally as you want, each with its own set of files that will be committed atomically when you submit.

You can move files back and forth between change lists before you commit them. You can even author change notes as you go by updating the description of an in-progress change list, without having to commit the change set to the repository.

Having multiple change lists makes it possible, for example, to quickly fix a bug by updating one or two files locally and committing just those files, without having to do anything to isolate the quick fix from other sets of changes you may be working on at the time.

Each change list in Perforce is like its own separate staging area.

So what’s the corresponding DVCS workflow?

While it’s possible with some hand-waving to make isolated changes using Git or Mercurial, it seems like it would be easy to commit files unintentionally, unless you create a separate local branch for each set of changes.

I understand that one of the philosophical advantages people see in distributed version control systems is that they encourage frequent local commits by decoupling version control from a central authority.

But creating lots of local branches seems like a pretty heavy operation to me, in the case where you just need to make a small, quick change, or where you have multiple change sets you’re working on concurrently, but don’t want to have to keep multiple separate local branches in sync with a remote repo.

In the former case, cloning the repo to a branch just to make a small change isn’t particularly agile, especially if the repo is large.

In the latter case, if you’re working on multiple change lists at the same time, keeping more than one local branch in sync with the remote repo creates more and possibly redundant work. And more work means you’re more likely to make mistakes, or to get lazy and take riskier shortcuts.

But maybe I’m missing something.

What do you do?

In this situation, what’s the recommended workflow for Git and Mercurial? Any experts care to comment?

Development

Dave and I released some updates to Manila.root, the version of Manila that runs as a Tool inside the OPML Editor.

Instructions and notes are on the Frontier News site:

If you’ve been following me for the last couple of months, you may have noticed that I’ve been spending some time looking at Manila again.

Recently, I completed a set of updates to bring Manila up to speed when running in the OPML Editor, and with Dave Winer’s help, that work is now released as a set of updates to the Manila.root Tool.

If you’re one of the people who still runs websites with Manila, I’d love to hear from you. Leave a comment here and say “Hi!”, or if you run into any problems with Manila as a Tool in the OPML Editor, please ask on the frontier-user mail list / Google group. 🙂

Development Manila

I’m looking at various Mac options for JavaScript / Node.js IDEs, and decided to try out the Eclipse-based Aptana Studio 3 (now part of Appcelerator). But I ran into a problem when trying to run it—I kept getting an error saying:

The JVM shared library “/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk” does not contain the JNI_CreateJavaVM symbol

After much searching and reading of Stack Overflow posts, I decided, after reading this, to completely uninstall the JDK and browser plugin from my machine, and start fresh with a clean install of Java for Yosemite.

Now I don’t know if it’s just me, but oddly the page on the Apple Support site comes back blank. (I’ll assume there’s no conspiracy for the moment, and that this is just a bug.)

So I plugged the URL to the page into Google and loaded up the cached version. There I found the direct download link to the installer here: http://support.apple.com/downloads/DL1572/en_US/JavaForOSX2014-001.dmg

After running that installer, Aptana Studio (and also Eclipse) now launch just fine. Phew.

I’m posting this here to help others who are running into the JNI_CreateJavaVM error, and so I can find it again the next time I need to set up a new machine with Eclipse or Aptana.

ps. See also comment thread on Facebook.

Development

In the last post on this topic, I discussed some of the differences between Manila and WordPress, and how understanding those differences teased out some of the requirements for this project.

In this post I’m going to talk about the design and implementation of a ManilaToWXR Tool, some more requirements that were revealed through the process of building it, and a few of the tricky edge cases I had to deal with.

A little history first…

Among the more interesting things I did while I was a developer at UserLand was building a framework we called the Tools Framework, which brought together many different points of extensibility, and made it easy for developers to customize the environment.

In Frontier, Radio UserLand, and the OPML Editor, a Tool is a collection of code and data in a database, which extends or overrides some platform- or application-level functionality. It’s sort of analogous to a Plugin in the WordPress universe, but Tools can also do things like run code periodically (or continuously) in the background, or implement entirely new web applications, or even customize Frontier’s native UI.

For example, you could implement a Tool that hooks into the windowTypes framework and File menu callbacks to implement a new document type corresponding to a WordPress post. Commands in the File menu call the WordPress API, and present a native interface for editing your blog—probably in an outline. Radio UserLand did exactly this for Manila sites, and it was fantastic. (More on that later.)

Another example of a Tool is one that implements some new XML-RPC endpoints (RPC handlers in Frontier) to provide a programmatic API for accessing some content in a database on your server.

For my purposes, I’m not doing anything nearly so complicated. The main thing I wanted comes from the Tools > New Tool… menu command. This creates a new database and pre-populates it with a bunch of placeholders for things like its menu, a table for data and preferences, and of course a table where my code will live.

It gives me an easy, standard way to create a database with the right structure, and the hooks into the menu bar that I wanted to make my exporter easy to use.

Code Components

Now some of this may sound pedantic to the developer-types who are reading this, but please bear with me on behalf of our non-nerd cohorts.

Any time you need to write a lot of code, it makes sense to break the work down into small, bite-sized problems. By solving each of those problems one at a time, sometimes in layers, you eventually work your way towards a complete solution.

Each little piece should be simple enough that you can compartmentalize it and separate it from the other pieces. This is called factoring, and it’s good for lots of reasons, including readability, maintainability, debug-ability, and reuse. And if you miss something, make a mistake in your design, or discover that some part of your system doesn’t perform well, it’s far easier to rewrite just one or a couple of parts than it is to de-spaghettify a big, monolithic mess.

Components and sub-components should have simple and consistent interfaces so that other code that talks to them can in turn be made simple and consistent. Components should also have minimal or no side-effects, meaning that they don’t change data that some other code depends on. And components should usually perform one or a very small number of tasks in a predictable way, to keep them small, and make them easy to test and debug. If you find yourself writing hundreds of lines of code in one place, you probably need to break the problem down into smaller components.

So with these concepts in mind, I set about coming up with a component-level design for my Tool. I initially came up with four types of components that I would need, and each type of component may have a specific version depending on the type of object it knows about.

Iterators

First, I’m going to need an easy way to iterate across posts, stories, pictures, and other objects. As my code iterates over the objects in my site, the tool will create a fragment of XML that will go into a WXR file on disk.

By separating the iteration from everything else, I can easily change the order in which objects are exported, apply filters for specific object types, or only export objects in a given date or ID range. (It turned out that ranges and filters were useful for debugging later on.)

Manila stores most content in its #discussionGroup in a sub-table named messages. User information is in #membershipGroup, and there’s some other data scattered around too. But the most important content—posts, pages, pictures, and comments—is all in the #discussionGroup.

Initially I’d planned to make multiple passes over the data, with one pass for each type of data I wanted to export. So first export all the posts, next the pages, next pictures, etc. As it turned out however, in both Manila and WordPress, a post, a page, and a picture have more in common than not in terms of how they’re stored and the data that comes along with them. Therefore it actually made more sense to do just one pass, and export all the data at one time.

There was one exception, however: in WordPress, unlike Manila, comments are stored in a separate table from other first-class site content, and they appear in a WXR file as children of an <item> rather than as their own <item> under the <channel> element.
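
The relevant part of the file has roughly this shape (a simplified, hand-written fragment; the element names come from the WXR format, and the values are invented):

    <channel>
      <item>
        <title>A post</title>
        <wp:comment>
          <wp:comment_id>101</wp:comment_id>
          <wp:comment_content>First comment</wp:comment_content>
        </wp:comment>
        <wp:comment>
          <wp:comment_id>102</wp:comment_id>
          <wp:comment_parent>101</wp:comment_parent>
          <wp:comment_content>A threaded reply</wp:comment_content>
        </wp:comment>
      </item>
    </channel>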

In the end I decided to write two iterators. Each of them would take the address of the site (so they can find other required metadata about a person for instance), and the address of a function to call for each object as it goes along:

wxr.visit.messages – iterates over all of the messages in my site’s #discussionGroup, skipping over deleted items and comments, since they won’t be exported as an <item> in my WXR file.

wxr.visit.comments – recurses over responses to a message to generate threaded comment information.

It turned out later on that I needed two more iterators—one for categories, and one for “Gems” (non-picture files), but the two above were a great starting point that would give my code easy access to the bulk of the content.
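
As a rough illustration of the shape of these iterators, here’s what the message visitor might look like in Python (the real code is UserTalk, and field names like flDeleted and inResponseTo are assumptions about the Manila schema rather than the actual names):

    # Sketch of the wxr.visit.messages idea: walk every message in the site's
    # discussion group and call a visitor for each one that should become an
    # <item> in the WXR file.
    def visit_messages(site, visit, id_range=None):
        messages = site["discussionGroup"]["messages"]
        for msg_id in sorted(messages):
            message = messages[msg_id]
            if id_range and not (id_range[0] <= msg_id <= id_range[1]):
                continue              # optional range filter, handy for debugging
            if message.get("flDeleted"):
                continue              # skip deleted items
            if message.get("inResponseTo"):
                continue              # comments are exported under their parent item
            visit(site, message)

The comments iterator works the same way, except that it recurses over a message’s responses so the visitor sees each reply along with its parent.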

Data Extractors

Next I needed some data extractors. These are type-specific components that will pull some data for a post, picture, comment, etc. out of the database, and normalize it to a native data structure that can then easily be output to XML for my WXR file.

The most important data extractor is wxr.post.data, which takes the address of a message containing a blog post that’s in my site’s #discussionGroup—and returns a table (struct) that has all of the data elements that will go into an <item> in the exported WXR file.

Because the WordPress importer expects the comments as <wp:comment> sub-elements of <item>, the post data extractor will also call into another data extractor that generates normalized data representing a comment.

For other types of objects I’ll need code that extracts data for that type as well. So I’ll need code to extract data for a picture, code to extract data for a page (story), and code to extract data for a gem (file).

Here’s part of the code that grabs the data for a comment:
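
(The original is UserTalk running inside the OPML Editor; below is a rough Python sketch of the same logic. Helper names like get_comment_parent, get_author_name, to_gmt_string, and process_macros are stand-ins for wxr.comment.parent, wxr.string.processMacros, and the Tool’s other helpers, and the message field names are assumed.)

    def comment_data(site, message):
        # Return a dict of wp:-prefixed fields describing one comment.
        body = process_macros(site, message["body"])   # expand macros and #glossary links
        return {
            "wp:comment_id": message["id"],
            "wp:comment_author": get_author_name(site, message["member"]),
            "wp:comment_date_gmt": to_gmt_string(message["postTime"]),
            "wp:comment_content": body,                # keep the content even if unapproved
            "wp:comment_approved": "1" if message.get("flApproved") else "0",
            "wp:comment_parent": get_comment_parent(site, message),   # preserves threading
        }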

There are a few interesting things to point out here:

  1. I chose to capture comment content even if it’s not approved. Better to keep the content than lose it, just in case I decide to approve it later.
  2. The call to wxr.comment.parent gets the ID of the comment’s parent. This preserves the threaded nature of the conversation, even if I decide not to have threaded comments in my WordPress site later on. It turns out that supporting both threaded and unthreaded comments was the source of some pain that I’ll explain in a future post.
  3. The call to wxr.string.processMacros is especially important. This call emulates what Manila, mainResponder, and the Frontier website framework do when a page is rendered to HTML. Without this capability, Frontier macro source code would leak through into my WordPress site, and possibly many internal links from #glossary items would be broken. Getting this working was another source of pain that took a while to work through—again, more in a future post.
  4. All sub-items in the table that gets returned have names that start with “wp:”, which I’ll explain below…

Encoders

Once I had some structured data, I was going to need to use it to encode some XML. It turns out that this component could be done in a very generic way that would work with any of my data extractors.

Frontier actually does have somewhat comprehensive XML capabilities. But the way they’re implemented requires very verbose code that I really didn’t want to write. I had done quite enough of that in a past life. 😉

So I decided to write a much simpler one-way XML-izer that I could easily integrate with my data extractors.

The solution I came up with was to recurse over the data structure that an extractor passed to it, and generate an XML tree whose element names match the sub-items’ names, and whose element contents are the values of those sub-items.

There were three features I needed to add in order to make this work well:

Namespaces: Many elements in a WXR file are in a non-default namespace—either wp: for the WordPress-specific data, or dc: for the Dublin Core extension. This feature was easy to deal with by just naming sub-items with the namespace prefix, i.e. an element named parent in the wp: namespace would simply be called wp:parent when returned by the data extractor.

Multiple elements: Often I needed to create multiple elements at a given level in the XML file that all have the same name. <wp:comment> is a good example. The solution I came up with here is similar to the one Frontier implements in its native XML verbs.

A compiled XML table in Frontier has sub-items representing elements, which have a number, a tab character, and the element’s name. The Frontier GUI hides the number and the tab character when you view the table, so you can see multiple same-named elements in the table editor. When you click an item’s name, the number and tab character are revealed, and you can edit them if you want. That said, you’re supposed to use the XML verbs, xml.addTable or xml.addValue to add elements.

Most of this is not particularly well documented, and personally I don’t think it was the most elegant solution, but it was effective at working around Frontier’s limitation that items in tables had to have unique names, whereas in XML they don’t.

I wanted something simpler, so I decided instead to simply strip anything after a comma character from the sub-item’s name. This way whenever my data extractor is adding an item, it can just use table.uniqueName with a prefix ending in a comma character, and then add the item at that address. Two lines of code, or one if we get just a little bit fancy:
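
In Python terms the same trick looks something like this (a sketch, not the actual UserTalk):

    def add_unique(table, prefix, value):
        # Mimic Frontier's table.uniqueName: find an unused key that starts
        # with prefix (which ends in a comma) and store value under it.
        n = 1
        while prefix + str(n) in table:
            n += 1
        table[prefix + str(n)] = value    # stored as e.g. "wp:comment,3"

    item = {}
    add_unique(item, "wp:comment,", {"wp:comment_id": 101})
    add_unique(item, "wp:comment,", {"wp:comment_id": 102})
    # item now has keys "wp:comment,1" and "wp:comment,2"; the encoder strips
    # everything from the comma onward, so both become <wp:comment> elements.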

XML attributes: The last problem to solve was generating attributes on XML elements, for example <guid isPermaLink="false">...</guid>. It turns out that if there were an xml.addAttributeValue in Frontier, it could have handled this pretty easily, but that was never implemented. Instead I’d have to add an /atts sub-table, and add the attribute manually—which takes multiple lines of code just to set a single attribute. Of course I could implement xml.addAttributeValue, but I don’t have a way to distribute it, so nobody else could use it! 🙁

In addition, I really didn’t want big, deeply-nested data structures flying around my call-stack, since I’m going to be creating thousands of tables at run-time, and I was concerned about memory and performance.

In the end I decided to do a hack: by using the | character to delimit attribute/value pairs in the name of table sub-elements, I could include the attributes and their values in the element name itself. So the <guid isPermaLink="false"> element would come from a sub-item named guid|isPermaLink=false.

Normally I would avoid doing something like this since hacks have a tendency to be fragile, but in this case I know in advance what all of the output needs to look like, so I don’t need a robust widely-applicable solution, and the time I save with the hacky version is worth it.
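
Putting the three features together, here’s a rough Python sketch of the kind of one-way encoder described above (the real encoder is UserTalk, and the escaping, indentation, and exact key conventions here are simplified assumptions):

    from xml.sax.saxutils import escape, quoteattr

    def encode_element(name, value, indent=0):
        # The key encodes everything: an optional namespace prefix ("wp:parent"),
        # a comma plus whatever keeps sibling keys unique ("wp:comment,2"), and
        # optional |attr=value pairs ("guid|isPermaLink=false").
        pad = "  " * indent
        name, _, att_text = name.partition("|")      # pull off any attribute pairs
        name = name.split(",", 1)[0]                 # strip the uniqueness suffix
        atts = ""
        if att_text:
            for pair in att_text.split("|"):
                att_name, _, att_value = pair.partition("=")
                atts += " %s=%s" % (att_name, quoteattr(att_value))
        if isinstance(value, dict):                  # nested table -> nested elements
            inner = "".join(encode_element(k, v, indent + 1) for k, v in value.items())
            return "%s<%s%s>\n%s%s</%s>\n" % (pad, name, atts, inner, pad, name)
        return "%s<%s%s>%s</%s>\n" % (pad, name, atts, escape(str(value)), name)

    # Example: two same-named <wp:comment> children, plus a guid with an attribute.
    item = {
        "title": "Hello",
        "guid|isPermaLink=false": "http://example.com/?p=1",
        "wp:comment,1": {"wp:comment_id": 101, "wp:comment_content": "First!"},
        "wp:comment,2": {"wp:comment_id": 102, "wp:comment_parent": 101},
    }
    print(encode_element("item", item))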

Utility Functions

Then there’s a bunch of miscellany:

  • A way to easily wrap the body of a post with <![CDATA[…]]> tokens, and properly handle the edge case where the body actually contains those tokens (see the sketch after this list).
  • A non-buggy way to encode entities in text destined for XML. (xml.entityEncode has had some bugs forever, which weren’t fixed because of Rule 1.)
  • Code to deal with encoding various date formats, and converting to GMT.
  • Code to convert non-printable characters into the appropriate HTML entities (which in turn get encoded in XML).
  • Other utility functions dealing with URLs, calculating permalinks, getting people’s names from their usernames, etc.
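
For the first of those, here’s a minimal Python sketch of the CDATA wrapper; the edge case is handled with the standard trick of splitting a literal "]]>" across two CDATA sections:

    def cdata(text):
        # Wrap text in a CDATA section, splitting any literal "]]>" inside it
        # so the section can't be terminated early.
        return "<![CDATA[" + text.replace("]]>", "]]]]><![CDATA[>") + "]]>"

    # cdata("x ]]> y") -> "<![CDATA[x ]]]]><![CDATA[> y]]>", which parses back to "x ]]> y"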

The Elephants in the Room

At this point there were a few more things I knew I would need to address. I’ll talk about these along with handling media objects in my next post. In the meantime, here’s a teaser:

  1. Lots of stuff in Manila just doesn’t work at all unless you actually install the site, with Manila’s source code available.
  2. The macro and glossary processors aren’t easy to get working unless the code is running in the context of a real web request.
  3. What should I do about all the incoming links to my site? Are they all going to simply break?

I’ll talk about how I dealt with these and other issues in the next post.

More soon…

Development Manila WordPress