How-To

Add Feed Discovery Links Easily

I'm working on discussion forums for NearbyGamers and I'm building the first feeds into the site. I worked up a clean way to add them from my controllers, similar to my tidy stylesheets code. Here's how to do it.

In the <head> of your app/views/layouts/application.rhtml, call auto_discovery_link_tag once per feed to print the tags:

  <%- @feeds.each do |feed| -%>
    <%= auto_discovery_link_tag(:atom, *feed) %>
  <%- end -%>

In app/controllers/application.rb:

  def initialize
    super
    @feeds = []
  end

  def add_feed title, options={}
    @feeds << [ { 
      :controller => self.controller_name, 
      :action => self.action_name, 
      :format => 'atom' 
    }.update(options), { :title => title } ] 
  end

And you're all set up. Wherever you want an action to present feeds, call add_feed. After the title, it takes options for URL construction.
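The `{ defaults }.update(options)` idiom in add_feed is what lets a caller override any of the defaults. In plain Ruby the merge behaves like this (the controller and action names here are illustrative, not from the app):

```ruby
defaults = { :controller => 'discussions', :action => 'index', :format => 'atom' }
options  = { :action => 'archive' }

# Hash#update (an alias for merge!) overwrites the defaults with the
# caller's options: :action becomes 'archive', while :controller and
# :format keep their default values.
merged = defaults.update(options)
```

Because update mutates and returns the receiver, the freshly built defaults hash itself becomes the finished options hash, with no extra copy.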

GET and POST variable hashes in Ruby on Rails

In Rails, you access GET, POST, and routing variables through the params hash. This works almost all the time, except when you duplicate a variable name: routing overwrites GET overwrites POST.
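That precedence falls out of how Rails assembles params: each source hash is update()d onto the previous one, so later sources win. A plain-Ruby sketch of the merge order (the hash contents are made up for illustration):

```ruby
post  = { 'id' => 'from-post' }
get   = { 'id' => 'from-get' }
route = { 'id' => 'from-route' }

# POST first, then GET, then routing:
# routing overwrites GET, which overwrites POST.
params = post.merge(get).merge(route)
```

Any key that appears in more than one source survives only with its routing (or, failing that, GET) value, which is exactly why the raw hashes below are sometimes worth digging for.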

For an app I'm working on I actually had to care where a variable comes from, so I dug for a while to find out how to access the raw hashes. It ain't pretty, but here it is in case anyone else ends up needing it:

get = CGIMethods.parse_query_parameters(@request.query_string)
post = CGIMethods.parse_query_parameters(@request.raw_post) 
# you'd think you could use @request.query_parameters and
# @request.request_parameters, but they're update()d by route vars
route = @request.path_parameters

(Also, don't ask about this in #rubyonrails -- you'll just get lectured on how you don't really want to access the hashes, how you should rename all your variables and URLs, and how it simply isn't possible. This will be very frustrating and totally unproductive.)

Rails Makes Valid XHTML Easier

I'm working on a Rails site in my Copious Free Time and I wanted to share a little way that Ruby made my life easier. I'm making my pages valid XHTML 1.0 Transitional because it makes bugs easier to find, and it just feels good to know I'm meeting the spec.

The W3C Validator complained that I didn't have the rows and cols attributes on my <textarea> tags. My code for them looked like:

<%= text_area_tag :message, params[:message] %>

And I don't want to add the :size option because I use CSS to style all of them; an unused size in the markup would just be confusing. So I extended the text_area_tag method in my app/helpers/application_helper.rb to fill in a default:

module ApplicationHelper
  def text_area_tag(name, content = nil, options = {})
    super(name, content, { :size => "40x10" }.update(options))
  end
end

"Muntz" your code

A long time ago, when I was a much younger engineer, I read an article in "Electronic Design" magazine by Bob Pease of National Semiconductor titled "What's All This Muntzing Stuff, Anyhow?". In it, he describes how Earl "Mad-man" Muntz would clip parts out of his engineers' circuits to see if the parts were absolutely necessary. As a consequence, his television sets cost dramatically less than his competitors'. This article is definitely worth a read (as are Bob's other articles).

Regulazy—Regular Expressions for the Rest of Us

I have been working on admitting my weaknesses lately, and one of them is that I really, really suck at writing regular expressions. I don't think I have ever ginned up anything more complicated than a "make sure this is three digits" expression from scratch. And I even forgot how to write that expression as I was writing this post.
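For the record, the "make sure this is three digits" check is a one-liner. In Ruby it looks like this, with anchors so that longer strings don't sneak through:

```ruby
# \A and \z anchor to the very start and end of the string,
# and \d{3} demands exactly three digit characters between them.
THREE_DIGITS = /\A\d{3}\z/

'123'  =~ THREE_DIGITS   # matches at position 0
'1234' =~ THREE_DIGITS   # nil -- too long
'12a'  =~ THREE_DIGITS   # nil -- not all digits
```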

Fortunately, there are smart people in this world who can help me out in my struggles. One of them is Roy Osherove, a longtime contributor to the .NET community. Many moons ago he wrote The Regulator, a handy regular expression generation IDE.

Building Clean URLs Into a Site

I wrote about building a site with clean URLs, but that's useless to you. No, you've got a creaking hulking monster of a site that coughs up URLs like "render.php?action=list_mailbox&id=42189", was built "to meet an accelerated schedule", and eats summer interns whole.

This article tells you how to put clean and human-usable URLs on top of the site without even editing your underlying scripts. All the examples mention PHP, but it doesn't matter what you coded the site in; you just have to be running Apache and have a little familiarity with regular expressions.

So we have two goals. First, requests for the new URL are internally rewritten to call the existing scripts without users ever knowing they exist. Second, requests for the old URLs get a 301 redirect to the new URLs so that search engines and good bookmarks immediately switch to the new URLs.

Let's work through an example .htaccess file. We take apart the new URLs and map them internally to the old URLs:
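The article's example file doesn't survive in this excerpt, but a minimal sketch covering both goals might look like the following, reusing the mailbox URL from above (the clean scheme /mailbox/42189 is my invention for illustration):

```apache
RewriteEngine On
RewriteBase /

# Goal 1: internally rewrite the clean URL to the legacy script.
# Visitors request /mailbox/42189; render.php never appears in their browser.
RewriteRule ^mailbox/([0-9]+)$ render.php?action=list_mailbox&id=$1 [L]

# Goal 2: 301-redirect requests for the old URL to the new one.
# Match against THE_REQUEST (the raw request line from the client) rather
# than the query string, so the internal rewrite above can't trigger a loop.
RewriteCond %{THE_REQUEST} \?action=list_mailbox&id=([0-9]+)
RewriteRule ^render\.php$ /mailbox/%1? [R=301,L]
```

The trailing `?` on the redirect target strips the old query string, so search engines and bookmarks land on the bare clean URL.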

The Underground PHP and Oracle manual

Chris Jones just announced the publication of the PHP and Oracle Manual (PDF) and, from a high-speed eyeballing, it's good: it basically tells you everything you need to know to do useful stuff with PHP + Oracle, with little assumed knowledge.

In fact it seems to be geared to the typical LAMP developer: there's a section on "Installing Oracle XE on Debian, Ubuntu and Kubuntu", and sections on "Limiting Rows and Creating Paged Datasets" and "Auto-Increment Columns" show an awareness that readers will probably have MySQL experience.

Perhaps one addendum (I didn't find a reference to it) would be pointing readers at SQL Developer, a fairly new, free desktop development tool from Oracle. The Underground manual focuses on a web-based interface to Oracle, something logically equivalent to phpMyAdmin; that's a good starting point, but if you have to do real work, SQL Developer is probably a better choice.

Building a Site With Clean URLs

As an aside in my post about Cambrian House I posted some code for making pretty URLs. A few people (no, not CH) have asked for a little more info, so I've written up an explanation of that code.

PHP makes it very easy to create bad URLs like /member.php?id=8. Those are bad because web spiders don't like to crawl URLs with GET variables, some browsers don't cache any GET URLs, they expose that you use PHP (when the visitor should never even know), and they're just downright ugly and hard to remember. I'm going to present a way to build a PHP/Apache site with clean URLs.

Let's look, line-by-line, at the contents of .htaccess. While writing this article I found a more elegant equivalent in the Wordpress code, so I'll present that here:

# Turn on the rewrite engine (the mod_rewrite module itself
# must already be loaded in Apache's configuration).
RewriteEngine On
# Rewrite URLs for the location starting at /
# Note this is URL location, not a path to your web root.
RewriteBase /
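The rest of the WordPress-style file, the lines that do the actual work, continues roughly like this; the excerpt cuts off before it, so treat this as a sketch of the standard pattern rather than the author's exact code:

```apache
# If the request doesn't name a real file or directory on disk...
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# ...hand the whole thing to index.php, which reads the clean URL
# (available in $_SERVER['REQUEST_URI']) and dispatches to the right page.
RewriteRule . /index.php [L]
```

The two RewriteCond lines are what let stylesheets, images, and other static files keep being served directly while every "virtual" URL funnels through one front controller.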

Strings are a Domain-Specific Language

Question: Isn't a domain-specific language just the same thing as a library? (Source: Pretty much everyone the first time they hear of DSLs.)

Answer: No, a DSL is much more than a library, and I have an example that won't make you say, "Well, sure, if you're doing something that esoteric..."
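The example itself is cut off in this excerpt, but the title points at one everyone already uses: a format string is a tiny language of its own, with a grammar for width, padding, and precision that would be clumsy to express as method calls. (This Ruby illustration is mine, not the author's.)

```ruby
# "%05.2f" is a five-character program in the printf DSL:
#   0  -> pad with zeros
#   5  -> minimum total width of five characters
#   .2 -> two digits after the decimal point
#   f  -> render as a fixed-point float
formatted = sprintf("%05.2f", 3.14159)   # => "03.14"
```

No one calls that a library: the knowledge lives in the string's grammar, not in a set of named functions, which is exactly the distinction a DSL draws.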

.NET and Excel Importing

A quick tech tip this week: it's not groundbreaking, and it might not even be new to many of you, but it really helped me out. I was looking to buy a component to add a simple Excel import facility to a project, and I had one of those "d'oh" moments.

Amongst all the components for sale, there were search results about using Jet, ISAMs, and OLEDB. I've used Office and Jet enough in the past to know that it makes reading an Excel file a straightforward process, but somehow I hadn't realized that I could do it just as easily using OLEDB from a .NET application.
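For reference, the classic Jet connection string for opening an .xls workbook through OLEDB looks roughly like this; the path is a placeholder, and details such as the "Excel 8.0" version tag vary with the workbook's file format:

```
Provider=Microsoft.Jet.OLEDB.4.0;
Data Source=C:\data\workbook.xls;
Extended Properties="Excel 8.0;HDR=Yes"
```

With HDR=Yes the first row is treated as column names, and each worksheet becomes queryable like a table, e.g. SELECT * FROM [Sheet1$].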